Scott N. Romaniuk · Péter Marton Editors-in-Chief
The Palgrave Encyclopedia of Global Security Studies
The Palgrave Encyclopedia of Global Security Studies
With 49 Figures and 27 Tables
Editors-in-Chief Scott N. Romaniuk International Centre for Policing and Security University of South Wales Pontypridd, UK
Péter Marton Institute for International, Political and Regional Studies Corvinus University of Budapest Budapest, Hungary
ISBN 978-3-319-74318-9
ISBN 978-3-319-74319-6 (eBook)
https://doi.org/10.1007/978-3-319-74319-6

© Springer Nature Switzerland AG 2023

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Palgrave Macmillan imprint is published by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland.
Preface
There are two fundamentally important and difficult challenges for scholars studying security. One is that security – in the sense of being able to live without fear – is incredibly subjective, necessitating the scholar’s study of an infinite variety of perspectives to piece together some answers as to how conflicts between rival needs and competing actors may be resolved. The second major issue is that security – defined as whether one lives or dies – is terribly unambiguous. Understanding all that makes a difference in this respect requires that scholars study the results of numerous scientific fields, including the physical science of nuclear fission, the biological science of infectious diseases, and the social science of how social networks influence everything from pandemics to social movements and terrorism. Diverse viewpoints are essential, but none of these “perspectives,” or stated positions, on security are credible if they are founded on a lack of knowledge of what can be considered to be reality. Consequently, a science of security is also required, with a concentration on comprehending “security complexes.” The latter should not be understood in terms of the Copenhagen School’s original concept of “Regional Security Complexes.” Instead, we should consider security complexes as emergent complex systems made up of actors, processes, variables, and their relationships in the context of particular problems, as described in this encyclopedia’s article on Environmental Security Complexes. Security complexes are not always regional in terms of their spatial organization, and due to the high specificity of the implications of events and issues for security, they should not only be studied in an aggregated manner, as is typically done, but also in a disaggregated manner, focused on the analysis of issues requiring specialized knowledge.
However, no analysis of even the existential implications of physical or biological relationships is complete without examining how various actors relate to and have an impact on the social relationships that serve as a medium for their effects. For the same reason, “security problems” and “solutions” to these problems will almost always exist in a specific form only in the imagination of the audience, with some degree of “intersubjective” consensus surrounding them, limited in time and space to where and when such a convergence may exist. Hence the need to study securitization processes: security is what the different strands of the collective “we” make of it. At the same time, since “the need for global security necessitates a re-examination of practices that are generally assumed and accepted to be good and advantageous for everyone” – as noted in the article on Non-traditional Security –
scholars of security may even have to take up the task of critically analyzing the outcomes of securitization. This implies the need to reorganize everything in a way that “breaks free from the theorising and practising of state-centric security.” With its seven parts and approximately 300 articles, the Palgrave Encyclopedia of Global Security Studies (PEGSS) reflects the search for a balance in terms of the necessary range of perspectives and skills, as well as a significant focus on nontraditional security issues. The entries were written by authors from all around the world and are organized into the following major topic categories: Conflict and Security, Human Rights and Societal Security, Poverty and Economic Security, Environmental Security, Food Security, Energy Security, and Health Security. Since the Copenhagen School’s conceptualization of the “sectors” of security analysis – political, military, societal, environmental, and economic – was similarly contestable, the challenge of coming up with legitimate and mutually exclusive analytical categories for this kind of broad agenda has not, of course, gotten any easier. Given that societal and economic security are not only broad topic areas but also have the potential to be infinite, an effort was made here for a more concentrated discussion, turning to human rights as a crucial framework for the former and to poverty for the latter. The “environment,” by contrast, is not merely huge or boundless but all-encompassing; as a result, discussion of its ramifications is implicitly present in all areas of the encyclopedia, from energy to food and health security. Much has happened since work began on PEGSS.
In late 2019, a novel coronavirus began spreading in Hubei province, China, in a short time becoming a pandemic, revealing how there has been a “rise over the last decades in the systemic level of the threat of infectious diseases” as well as that “world population, population density in urban areas, and the interconnectedness of major population centres around the globe are all increasing at the same time, making a pathogen’s leap from a rural to an urban setting, as well as from one continent to another, through the hubs of civic aviation, more likely” – as pointed out in the article on Health Security, written in 2017. The COVID-19 pandemic is still ongoing, and its post-acute sequelae will continue to have a negative impact on our health for a very long time. In the meantime, the spread of other infectious diseases, like poliomyelitis or monkeypox, is also becoming more and more of a worldwide health threat. Furthermore, biotechnology is making rapid advancements, and its successes call into question the commonly accepted distinctions between the natural and the artificial, the human and the post-human. Yet, the focus on specific disease agents or the rise in ontologically disruptive technologies should not cause one to lose sight of the multifaceted systemic crisis of health systems around the world. The COVID-19 pandemic and other events have also highlighted global supply chain weaknesses, and crises in semiconductor production and container shipping have contributed to unrealized production in many industries, including the auto industry, even before the effects of Russia’s invasion of Ukraine could be felt globally. The causes and repercussions of economic
events that cause insecurity (or vulnerability) frequently fall outside the scope of standard economic studies and include political, military, sociological, public health-related, and environmental concerns. This lesson was reinforced when millions of people in the Middle East and Sub-Saharan Africa faced famine as a result of Russia’s deliberate devastation of rural and urban, agricultural and industrial infrastructure during its destructive attack on Ukraine. Concerns about the world’s economic underpinnings have been heightened by the threat of energy supply crises and an escalation toward sabotage of critical infrastructure outside of the immediate confines of war, ranging from physical attacks on pipelines to cyberattacks on the electric grid or the banking industry. It is difficult to find any solace in this, but it is also crucial to emphasize that the “foundations” of the contemporary world economy would have needed to be called into question in any case. “A . . . major complication structuring the pipeline security complex, as well as the energy security complex in general, is the diversity of political systems present in its framework. The cooperation of democratic and semi-autocratic or autocratic countries is problematic at best and may become impossible from time to time,” as the article on Natural Gas as a Means of Influence points out. We also need to move away from an anthropocentric appraisal of current and prospective conditions, expanding the scope of the aforesaid critical inquiry.
As the article on the Anthropocene observes, “a view that takes the Anthropocene seriously is not an anthropocentric one in practice.” Quoting further from the same source: “if we grow conscious of the Anthropocene epoch as a period of crisis, we need to apply a framework that synthesizes the existing analysis of the economic and social history of phenomena central to the epoch.” It is easy to understand the wisdom of the preceding words at the time of writing, in the fall of 2022. During the summer, parts of the Yangtze River dried up in the People’s Republic of China. The drought in northern Mexico and the western “Cadillac Desert” portions of the United States heightened the tension between diverse uses of (and users of) water. Even as Europe experienced its hottest summer in 500 years in 2022, prompting widespread concern, parts of the Middle East experienced temperatures incompatible with human life for prolonged periods of time. Climate change is rapidly threatening human existence in many regions and prompting radical adaptation, including evacuations and out-migration. Against this backdrop, massive amounts of military equipment and ammunition are being burned up in the war in Ukraine – Russia’s armed forces will have used millions of artillery rounds by the end of the summer of 2022, financed in large part by revenues generated by the world’s dependence on fossil fuels. Moreover, while this highlights the need for the world to reduce its reliance on fossil fuels and move toward an energy transition to combat climate change, short-term measures may contradict this logic and increase greenhouse gas emissions as governments seek to secure a way forward under supply constraints. We hope that the print edition of PEGSS will enable the project’s author community to productively contribute to future discussions of – and perhaps even
some solutions to – the challenges and crises of our time, and that the articles presented here will serve as a valuable resource for all readers in engaging with ideas of fundamental importance related to specific subjects while also providing novel and forward-looking perspectives.

Scott N. Romaniuk
Péter Marton
Editors-in-Chief
List of Topics
2007–2008 Financial Crisis
Activists and Activism
Actors and Stakeholders in Non-traditional Security
Air Pollution
Alcohol Abuse and Addiction
Anthropocene
Antibiotics
Anti-globalizationists
Antimicrobial Resistance
Anti-piracy Cooperation
Arab Spring
Arctic
Army Recruitment of Ethnic Minorities
Asian Development Bank (ADB)
Asian Infrastructure Investment Bank
Asian Monetary Fund (AMF)
Assimilation
Authoritarianism
Autonomous Weapon Systems (AWS)
Balance of Power
Bed Diplomacy
Biopolitics
Biosecurity and Biodefense
Bioterrorism
Bubonic Plague
Ceasefires
Center of Reform on Economics (CORE) Indonesia
Child Soldiers
Cholera
Cities for Climate Protection Program
Civil Liberties
Civilian Control of Armed Forces
Civil-Military Relations
Clean Development Mechanism (CDM)
Climate Change and Public Health
Collective Security Treaty Organization (CSTO)
Commission on Human Rights
Committee on World Food Security (CFS)
Communism
Conflict and Conflict Resolution
Convention on Biological Diversity
Conventions Against Corruption
Core-Periphery Model
Countering Violent Extremism (CVE)
Critical Infrastructure
Critical Security Studies
Cyber Diplomacy
Deforestation
Democratic Security
Democratic Transitions
Democratization
Desalination
Desertification
Disarmament
Disruptive Technologies in Food Production: The Next Green Revolution
Diversity
Doctors Without Borders – Médecins Sans Frontières
Dollar Diplomacy
Drinking Water
Drone Warfare: Distant Targets and Remote Killings
Drought
Drug Trafficking
Ebola
Ecological Degradation
Economic Commission for Asia and the Far East (ECAFE)
Economic Insecurity
Economic Productivity
Economic Security
Economic Warfare
Ecosystems
Education, Conflict, and Peace
Election Violence
Emancipation
Emergency Aid
Emerging and Re-emerging Diseases
Emerging Powers
Endangered Species
Energy Diversification in Central and Eastern Europe: The Case of Slovakia
Energy Markets: Trade
Energy Security (Dilemmas)
Energy Security in the EU
Energy Security Strategies
Energy Security: Evolution of a Concept
Energy Security: Fuel Mix
Environmental Security
Environmental Security and Conflict
Environmental Security Complexes
Epidemics
Ethics of Security
Ethnic Violence
Ethnocentrism
European Arms Embargo
Euroscepticism
Exclusive Economic Zone (EEZ)
Exploitation of Resources
Exploitation of Women’s Bodies in War
Failed States
Fair Trade
Fascism
Feminist International Relations (IR) Theory
Firearms Protocol (UN)
First Nations’ Food Security in Canada
Food Basket Countries to Assure Global Food Security
Food Insecurity
Food Price Index
Food Prices and Economic Access to Food
Food Sovereignty
Forced Labor
Forced Marriage
Foreign Direct Investment (FDI)
Fossil Fuels
Freedom of Expression in the Digital Age: Internet Censorship
Gang Violence
Gender Empowerment Measurement
Gender-Mainstreaming in Transitional Justice
Geostrategic Approaches
Global Civil Society
Global Commons
Global Governance
Global Health Security Initiative
Global Report on Trafficking in Persons
Global Security
Global Shift
Global Threats
Global Trends Impacting Food Security
Globalization and Security
Globalized Arms Industry
Glocalization
Governance
Governance “Stretching”
Green Theory (International Relations)
Greenhouse Gas Emissions
Gulf Cooperation Council (GCC)
Health Security
Health System
Health-Related Aspects of Post-conflict Reconstruction
HIV/AIDS
Homeland Security
Horizontal Inequalities
Human Rights and Privilege
Human Security
Humanitarian Assistance
Humanitarian Intervention
Hybrid Conflict and Wars
Ideology
Illicit Arms Trade
Income Inequality
Indigenous Peacebuilding
Indigenous Peoples
Infrastructure Development
Insurgents and Insurgency
Inter-American Commission on Human Rights
International Atomic Energy Agency (IAEA)
International Commission on Intervention and State Sovereignty (ICISS)
International Diplomacy
International Energy Agency (IEA)
International Energy Charter Treaty (ECT)
International Energy Forum (IEF)
International Fact-Finding Missions
International Monetary Fund (IMF)
International Political Sociology
Internationalism
Internet (Governance)
Interstate Challenges
“Larger Freedom” (UN)
Least Developed Countries in Africa
Legitimacy in Statebuilding
Leprosy
Liquefied Natural Gas (LNG)
Literacy Rates
Malaria
Malnutrition
Maritime Piracy
Mediation
Migrant Health in the Nexus of Universal Health Coverage and Global Health Security
Migration-Security Nexus
Militant Democracy
Military-Focused Security
Money Laundering
More Developed Countries in the Aftermath of the 2007–2008 Financial Crisis
Multiculturalism
National Climate Action Plans (Voluntary Reduction Plans)
National Security Intelligence
Natural Gas as a Means of Influence
Natural Gas Pipeline Security in Central Asia
Neoliberalism
“New Wars” and Nontraditional Threats
Non-alignment Policy
Nondemocratic Systems
Norms
North Atlantic Treaty Organization (NATO)
Offense-Defense Balance
Official Development Assistance (ODA)
Offshore Balancing
Ontological Security
Organic Agriculture
Organization for Economic Cooperation and Development (OECD)
Origins of Cyber-Warfare
Paleoclimatology
Paris Agreement
Peace Agreements
Peace and Reconciliation
Peace Studies
Peacebuilding
Peaceful Unification and Coexistence
Pharmaceutical Patent Protection: Key Issues and Dilemmas
Piracy
Poliomyelitis and Child Paralysis
Political Communication
Post-Cold War Environment
Post-colonialism and Security
Post-Traumatic Stress Disorder (PTSD)
Property Rights
Prostitution
Protection of Civilians (POC)
Protests and Conflict
Public Health in Breakaway Regions
Public Health in Failing States
Refugees
Regional Security Organizations
Renewable Energy Agencies
Resilience
Resource Curse
Responsibility to Protect (R2P)
Right to Economic Dignity
Right to Human Dignity
Right to Life
Right-Wing Terrorism
Rio Declaration on Environment and Development
Riots and Rioting
Role of Naval Forces in Health Security
Role of the Media
Role of the Private Sector
Rule of Law
Securitization and De-securitization
Securitization of Foreign Investments
Security and Citizenship
Security Deficit
Security Discourse
Security Landscape
Security Sector Reform
Security State
Severe Acute Respiratory Syndrome (SARS)
Small Arms and Light Weapons
Small-Scale Weapon Transactions
Smuggling
Social Justice and Change
Social Media and Peacebuilding
Societal Identity and Security
Societal Security
Soft Power
Solar Energy
Space Debris
Speech Act Theory
Stability Operations
Stable Peace
State Legitimacy
State Responsibility for Prevention of Disease of International Concern
State Sovereignty and Stability: Conflicting and Converging Principles
State-Centric Paradigm
Stockholm Declaration on the Human Environment
Strategic Culture
Structuration Theory
Supranational Actors
Surveillance States
Sustainable Development
Territorial Integrity
The Green Revolution
The President’s Emergency Plan for AIDS Relief (PEPFAR)
Threats Which Disrupt Food Security
Three Principles of the People
Totalitarianism
Transnational Corporations (TNCs)
Trans-Pacific Partnership (TPP)
Trauma in Conflict
Truth Commission
Tuberculosis
United Nations Office on Drugs and Crime (UNODC)
Unstable Peace
Urban Poor Consortium
Vaccination
Vulnerability and Vulnerable Groups of People
War on Drugs
Water-Borne Diseases
Wind Power
Wolf Warrior II (战狼2) and the Manipulation of Chinese Nationalism
Women and Terrorism
Women in Combat
Women in Global Politics
Women, Peace, and Security
World Bank
World Commission on Environment and Development (Brundtland Commission)
World Happiness Report
World-Systems Theory
About the Editors-in-Chief
Dr. Scott N. Romaniuk is a visiting fellow at the International Centre for Policing and Security, University of South Wales, UK, and a non-resident expert at the Taiwan Center for Security Studies.

Péter Marton holds a PhD in International Relations and is currently Lecturer in International Relations and International Security at the Corvinus University of Budapest, Hungary, and Associate Professor at McDaniel College – Hungarian Campus. His fields of research include Security Studies, Foreign Policy Analysis, and Global Public Health. He has published, inter alia, in Defence Studies, Communist and Post-Communist Studies, the Journal of Soviet and Post-Soviet Politics and Society, the Journal of Contemporary African Studies, and New Perspectives.
Contributors
Yrys Abdieva OSCE Academy graduate, Bishkek, Kyrgyzstan
Mirasol M. Abrenica College of Social Sciences, University of the Philippines Cebu, Cebu, Philippines
Frank Aragbonfoh Abumere St Antony’s College, University of Oxford, Oxford, UK; Department of International History, London School of Economics and Political Science (LSE), London, UK; Department of Philosophy, UiT – The Arctic University of Norway, Tromsø, Norway
Kristina M. L. Acri née Lybecker Department of Economics and Business, Colorado College, Colorado Springs, CO, USA
Mercy M. Adeyeye Federal University of Technology, Minna, Nigeria
Joshua Akintayo Brussels School of International Studies, University of Kent, Canterbury, UK; International Centre for Policing and Security, University of South Wales, Pontypridd, UK
Gordon Alley-Young Department of Communications and Performing Arts, Kingsborough Community College, City University of New York, Brooklyn, NY, USA
Christopher Ankersen Center for Global Affairs, New York University, New York, NY, USA
Rubén Arcos University Rey Juan Carlos, Madrid, Spain
Aries A. Arugay University of the Philippines-Diliman, Quezon City, Metro Manila, Philippines
Thiago Babo Center for Peace and Conflict Studies (CCP – NUPRI), USP – University of São Paulo, Sao Paulo, Brazil
Fathima Azmiya Badurdeen Technical University of Mombasa, Mombasa, Kenya
Róbert Balogh Institute for Central European Studies, National University of Public Service, Budapest, Hungary
Justin Keith A. Baquisal University of the Philippines-Diliman, Quezon City, Philippines
Bilge Bas Istanbul Bilgi University, Istanbul, Turkey
Piyali Basu Department of Political Science, Women’s Christian College, Kolkata, India
Ákos Baumgartner Corvinus University of Budapest, Budapest, Hungary
Murat Bayar Department of Political Science and Public Administration, Social Sciences University of Ankara, Ankara, Turkey
Tuğba Bayar Department of International Relations, Bilkent University, Ankara, Turkey
Briones Bedell Stanford Online High School, Stanford, CA, USA
Mst Marzina Begum Department of Public Administration, University of Rajshahi, Rajshahi, Bangladesh
Gáspár Békés Cold War History Research Center, Budapest, Hungary
Beate Beller History and International Studies, Institute for History, Leiden University, Leiden, The Netherlands
Arundhati Bhattacharyya The University of Burdwan, Burdwan, West Bengal, India; Department of Political Science, Diamond Harbour Women’s University, Sarisha, West Bengal, India; Department of Political Science, The University of Burdwan, Burdwan, West Bengal, India
Srinjoy Bose School of Social Sciences, University of New South Wales (Sydney), Sydney, NSW, Australia
Jeffrey Bradford Bradford Associates, New York, NY, USA
Camila de Macedo Braga Center for Peace and Conflict Studies (CCP – NUPRI), USP – University of São Paulo, Sao Paulo, Brazil
Michelle Grace R. Cabrales University of the Philippines Cebu, Cebu, Philippines
Andrea Carlà Institute for Minority Rights, Eurac Research, Bolzano/Bozen, Italy
Pascal Carlucci Coventry University, Coventry, UK
Stephanie Carver Monash University, Melbourne, VIC, Australia
Martin Scott Catino Liberty University, Lynchburg, VA, USA
Susmita Chatterjee Maharaja Manindra Chandra College (under University of Calcutta), Kolkata, West Bengal, India
Richard Chelin International and Public Affairs, University of KwaZulu-Natal, Durban, South Africa
Yuanyuan Chen Chongqing Energy Big Data Center, Beijing, China
Mihai Sebastian Chihaia Department of Political Science, International Relations and European Studies, Alexandru Ioan Cuza University of Iasi, Iasi, Romania
Nikos Christofis College of History and Civilization and Center for Turkish Studies, Shaanxi Normal University, Xi’an, China
Sarah J. Clifford University of Copenhagen, Copenhagen, Denmark
Beatrice Conti Stanford Online High School, Stanford, CA, USA
Eugenio Dacrema University of Trento, Trento, Italy
Amber Darwish Graduate Institute of International and Development Studies, Geneva, Switzerland
Sarita Dash Jawaharlal Nehru University, New Delhi, India
Susan Davidson Brigham Young University, Provo, UT, USA
Pablo de Orellana King’s College London, London, UK
Patrice Natalie Delevante Richmond, VA, USA
Rosita Dellios Faculty of Society and Design, Bond University, Gold Coast, QLD, Australia
Tobias Denskus Communication for Development, Malmö University, Malmö, Sweden
Fabio Andrés Díaz Pabón Department of Political and International Studies, Rhodes University, Grahamstown, South Africa; International Institute of Social Studies, Erasmus University Rotterdam, The Hague, Netherlands
John W. Dietrich History and Social Sciences, Bryant University, Smithfield, RI, USA
Aaron D. Dilday Palo Alto College, San Antonio, TX, USA
Glen M. E. Duerr Cedarville University, Cedarville, OH, USA
Rita Durão Department of Political Studies, Faculty of Social Sciences and Humanities, NOVA University of Lisbon, Lisbon, Portugal
Amitabh Vikram Dwivedi Faculty of Humanities and Social Sciences and School of Languages and Literature, Shri Mata Vaishno Devi University, Katra, India
Rolando T. Dy Center for Food and Agri Business, University of Asia and the Pacific, Pasig City, Philippines
Sarah Earnshaw University of California, Berkeley, CA, USA
Tuba Eldem Fenerbahce University, Istanbul, Turkey
Senem Ertan Department of Political Science and Public Administration, Social Sciences University of Ankara, Ankara, Turkey
Jumel Gabilan Estrañero Research and Analysis, Armed Forces of the Philippines, Department of National Defense, Manila, Philippines
Kasi Eswarappa Department of Tribal Studies, Indira Gandhi National Tribal University, Amarkantak, India
Anna Etl-Nádudvari Corvinus University of Budapest, Budapest, Hungary
Amparo Pamela H. Fabe National Police College Philippines, Quezon, Philippines
Attila Farkas Corvinus University of Budapest, Budapest, Hungary
Senia Febrica American Studies Center, Universitas Indonesia, Jakarta, Indonesia
Karla Valeria Feijoo History and International Studies, Institute for History, Leiden University, Leiden, The Netherlands
Robert A. Forster Political Settlements Research Programme, Edinburgh Law School, University of Edinburgh, Edinburgh, UK
Viktor Friedmann Budapest Metropolitan University, Budapest, Hungary
Oscar Gakuo Mwangi Department of Political and Administrative Studies, National University of Lesotho, Roma, Lesotho
Evgenii Gamerman Institute for the Comprehensive Analysis of Regional Problems of the Far Eastern Branch of the Russian Academy of Sciences, Blagoveshchensk, Russia
Molly Ghosh Department of Political Science, Barrackpore Rastraguru Surendranath College, Barrackpore, West Bengal, India
Sutapa Ghosh Department of Sociology, Barrackpore Rastraguru Surendranath College, Barrackpore, India
Enrico V. Gloria University of the Philippines-Diliman, Quezon City, Metro Manila, Philippines
Madhumita Gopal Rutgers University, New Brunswick, NJ, USA
Louis Gosart Santa Monica College, Santa Monica, CA, USA
Ulia Gosart UCLA, Los Angeles, CA, USA
Francis Grice Department of Political Science and International Studies, McDaniel College, Westminster, MD, USA
Mary Ruth Griffin Division of Natural Science, Walters State Community College, Greenville, TN, USA
Kürşad Güç Department of International Relations, Faculty of Political Sciences, Ankara University, Ankara, Turkey
Isabelle Guenther School of Social Sciences, Monash University, Melbourne, VIC, Australia
Barnana Guha Thakurta (Banerjee) Department of Political Science, School of Social Sciences, Netaji Subhas Open University, Calcutta, India
Riley Haacke Brigham Young University, Provo, UT, USA
Zachary Hamel Nichols College, Dudley, MA, USA
Ayelet Harel-Shalev Ben-Gurion University of the Negev, Beer-Sheva, Israel
Henrik S. Hartmann Institute for History, Leiden University, Leiden, Netherlands
Suzette A. Haughton The Department of Government, University of the West Indies, Mona Campus, Kingston, Jamaica
Suzanne Hindmarch University of New Brunswick, Fredericton, NB, Canada
Sara Hockett Brigham Young University, Provo, UT, USA
Tamás Hoffmann Institute of International Studies, Corvinus University of Budapest, Budapest, Hungary
Dániel T. Homlok Corvinus University of Budapest, Budapest, Hungary
Nik Hynek Department of Security Studies, Faculty of Social Sciences, Charles University, Prague, Czech Republic
Enemaku Umar Idachaba Department of Political Science, University of Ibadan, Ibadan, Oyo State, Nigeria
Rebecca Shea Irvine Institute for Research on Women and Gender, University of Michigan, Ann Arbor, MI, USA
Urban Jakša University of York, York, UK
Łukasz Kamieński Department of International and Political Studies, Jagiellonian University, Kraków, Poland
Eswarappa Kasi Department of Tribal Studies, Indira Gandhi National Tribal University, Amarkantak, Madhya Pradesh, India
Tamer Kasikci Department of International Relations, Eskisehir Osmangazi University, Eskisehir, Turkey
Serdar Kaya Simon Fraser University, Burnaby, BC, Canada
János Kemény Center for Strategic and Defense Studies, József Eötvös Research Center, National University of Public Service, Budapest, Hungary
Jonathan Kennedy Barts and the London School of Medicine and Dentistry, Queen Mary University of London, London, UK
Elisabeth King International Education & Politics, New York University, New York, NY, USA
Andrzej Klimczuk Department of Public Policy, SGH Warsaw School of Economics, Warsaw, Poland
Magdalena Klimczuk-Kochańska Faculty of Management, University of Warsaw, Warsaw, Poland
Trevor Kline Department of Political Science and International Studies, McDaniel College, Westminster, MD, USA
Daniel Koehler German Institute on Radicalization and De-Radicalization Studies (GIRDS), Stuttgart, Germany
Klevis Kolasi Ankara University, Ankara, Turkey
Samantha Kruber Monash University, Melbourne, Australia
Kutay Kutlu York University, Toronto, Canada
Tianying Lan Columbia University, New York, NY, USA
Oscar L. Larsson Department of Urban and Rural Development, The Swedish University of Agricultural Studies, Uppsala, Sweden
Sorbarikor Lebura Rivers State University, Port Harcourt, Nigeria
Eugenio Lilli University College Dublin, Dublin, Ireland
Dan Liu School of Humanities and Social Science, North University of China, Taiyuan, China
Christopher Long Department of International Relations, School of Global Studies, University of Sussex, Brighton, UK
Chris Lyttleton Anthropology Department, Macquarie University, Sydney, NSW, Australia
Mary Manjikian Regent University, Virginia Beach, VA, USA
Gazi Arafat Uz Zaman Markony Department of Public Administration and Governance Studies, Jatiya Kabi Kazi Nazrul Islam University, Mymensingh, Bangladesh
Péter Marton Institute for International, Political and Regional Studies, Corvinus University of Budapest, Budapest, Hungary
Mallory Matheson Brigham Young University, Provo, UT, USA
Gladis S. Mathew Department of Tribal Studies, Indira Gandhi National Tribal University, Amarkantak, India
Janne Mende Department of International Relations, University of Giessen, Giessen, Germany
Dorottya Mendly Department of International Relations, Corvinus University of Budapest, Budapest, Hungary
Hikmet Mengüaslan International Relations, Middle East Technical University, Ankara, Turkey
Arthur Holland Michel Center for the Study of the Drone, Bard College, Annandale-on-Hudson, NY, USA
Jason Miller Department of Political Science and International Studies, McDaniel College, Westminster, MD, USA
Nima Mirzaei Department of Industrial Engineering, Istanbul Aydin University, Istanbul, Turkey
Patit Paban Mishra Sambalpur University, Burla, Sambalpur, Orissa, India
Matúš Mišík Comenius University in Bratislava, Bratislava, Slovakia
Emma Mitrotta School of International Studies, University of Trento, Trento, Italy
Awal Hossain Mollah Department of Public Administration, University of Rajshahi, Rajshahi, Bangladesh
András Molnár Corvinus University of Budapest, Budapest, Hungary
Md Nurul Momen Department of Public Administration, University of Rajshahi, Rajshahi, Bangladesh
Jose Ma. Luis Montesclaros Centre for Non-Traditional Security Studies, S. Rajaratnam School of International Studies, Nanyang Technological University, Singapore
David Morris International Relations Multidisciplinary Doctoral School, Corvinus University of Budapest, Budapest, Hungary
Girish Sreevatsan Nandakumar Graduate Program in International Studies (GPIS), Old Dominion University, Norfolk, VA, USA
Tara Neuffer Brigham Young University, Provo, UT, USA
Aliaksandr Novikau Department of International Relations and Public Administration, International University of Sarajevo, Sarajevo, Bosnia and Herzegovina
Haniyeh Nowzari Department of Environment, Abadeh Branch, Islamic Azad University, Abadeh, Iran
Raheleh Nowzari Mechanical Engineering Department, Faculty of Engineering, Istanbul Aydin University, Istanbul, Turkey
John O’Sullivan University of North Georgia – Gainesville Campus, Gainesville, GA, USA
Claire Michaela M. Obejas College of Social Sciences, University of the Philippines Cebu, Cebu, Philippines James Okolie-Osemene Department of International Relations, Wellspring University, Benin City, Nigeria Samuel Olufeso University of Ibadan, Ibadan, Nigeria Ayokunle Olumuyiwa Omobowale Department of Sociology, University of Ibadan, Ibadan, Nigeria University of Ibadan, Ibadan, Nigeria David Andrew Omona Uganda Christian University, Mukono, Uganda Veronika Oravcová Comenius University in Bratislava, Bratislava, Slovakia Chad Patrick Osorio University of the Philippines, College of Law, Quezon City, Philippines College of Economics and Management, University of the Philippines, Los Baños, Philippines Júlia Palik Peace Research Institute Oslo (PRIO), Corvinus University of Budapest (CUB), Oslo, Norway Piergiuseppe Parisi Centre for Applied Human Rights, University of York, York, UK William R. Patterson Washington, DC, USA Salvin Paul Department of Peace and Conflict Studies, Sikkim University, Gangtok, Sikkim, India Katie Paulot University of British Columbia Okanagan, Kelowna, BC, Canada Milica Pejovic School of International Studies, University of Trento, Trento, Italy Lady Isabelle Perez University of the Philippines Cebu, Cebu City, Philippines Lora Pitman School of Cybersecurity, Old Dominion University, Norfolk, VA, USA University of Maine, Orono, ME, USA Calvin Plank Cedarville University, Cedarville, OH, USA Elif Nisa Polat MA International Affairs, Johns Hopkins University School of Advanced International Studies (SAIS) Europe, Bologna, Italy Pedro Ponte e Sousa Department of Law, Universidade Portucalense, Porto, Portugal Portuguese Institute of International Relations (IPRI), Lisbon, Portugal
Peter Popella Interfaculty Institute of Microbiology and Infection Medicine (IMIT), University of Tuebingen, Tuebingen, Germany Md. Mostafijur Rahman Department of Law, Prime University, Dhaka, Bangladesh Maheema Rai Department of Peace and Conflict Studies and Management, Sikkim University, Gangtok, India Salvador Santino Fulo Regilme Jr. History and International Studies, Institute for History, Leiden University, Leiden, The Netherlands Jan Rempala Technology Practice, FleishmanHillard, Brussels, Belgium Mowshimkka Renganathan Department of Social Anthropology, University of Jyväskylä, Jyväskylä, Finland Gwenola Ricordeau California State University, Chico, CA, USA Jacopo Roberti di Sarsina School of Law, Alma Mater Studiorum University of Bologna, Bologna, Italy Stephen L. Roberts Department of Health Policy, London School of Economics, London, UK James Rogers Centre for War Studies, University of Southern Denmark, Odense, Denmark Scott N. Romaniuk China Institute, University of Alberta, Edmonton, AB, Canada International Centre for Policing and Security, University of South Wales, Pontypridd, UK Natalie Romeri-Lewis Brigham Young University, Provo, UT, USA Ohio State University, Columbus, OH, USA Avilash Roul Indian Institute of Technology Madras (IITM), Chennai, India Indo-German Centre for Sustainability (IGCS), Indian Institute of Technology Madras (IITM), Chennai, India Animesh Roul Society for the Study of Peace and Conflict, New Delhi, India Adelina Sabani MacEwan University, Edmonton, AB, Canada Atrayee Saha Department of Sociology, Muralidhar Girls’ College, Calcutta University, Kolkata, India Isabella Samutin Stanford Online High School, Stanford, CA, USA Paromita Sarkar Department of Political Science, Sovarani Memorial College, Howrah, West Bengal, India Katarina Sárvári Corvinus University of Budapest, Budapest, Hungary
Giulia Sciorati School of International Studies, University of Trento, Trento, Italy Margaret Seymour Old Dominion University, Norfolk, VA, USA Tamanna M. Shah Department of Sociology, University of Utah, Salt Lake City, Utah, USA Maria Shilina Faculty of Law, National Research University Higher School of Economics, Moscow, Russia Navagaye Simpson Department of Political Science and International Studies, McDaniel College, Westminster, MD, USA Manasi Singh Centre for Security Studies, Central University of Gujarat, Gandhinagar, Gujarat, India Maria Kristina Decena Siuagan Research and Analysis, Armed Forces of the Philippines, Department of National Defense, Manila, Philippines Erika Cornelius Smith Nichols College, Dudley, MA, USA Anzhelika Solovyeva Institute of Political Studies, Faculty of Social Sciences, Charles University, Prague, Czech Republic Max Steuer Department of Political Science, Comenius University, Bratislava, Slovakia Krisztina Szabó Central European University, Corvinus University of Budapest, Budapest, Hungary Kinga Szálkai Eötvös Loránd University, Budapest, Hungary Stephen Taylor Queen Mary University of London, London, UK William A. Taylor Angelo State University, San Angelo, TX, USA Paul Teng Centre for Non-Traditional Security Studies, Nanyang Technological University, Singapore, Singapore András Tétényi Institute of World Economy, Corvinus University of Budapest, Budapest, Hungary Zulfiya Tursunova Department of Peace and Conflict Studies, Guilford College, Greensboro, USA K. B. Usha Jawaharlal Nehru University, New Delhi, India Özüm Sezin Uzun Istanbul, Turkey Elena Val International Organization for Migration (IOM), Migration Health Department (MHD), Regional Office (RO), Brussels, Belgium Aletheia Kerygma B. Valenciano University of the Philippines-Diliman, Quezon City, Philippines Jame Scott Van Nest Corvinus University of Budapest, Budapest, Hungary Gergely Varga Corvinus University of Budapest, Budapest, Hungary
Dániel Vékony Corvinus University of Budapest, Budapest, Hungary Federica Viello International Organization for Migration (IOM), Migration Health Department (MHD), Regional Office (RO), Brussels, Belgium Cristian Vlas Corvinus University of Budapest, Budapest, Hungary Hasan Volkan Oral Department of Civil Engineering & EPPAM, Istanbul Aydin University, İstanbul, Turkey Bamidele A. Wale-Oshinowo Business Administration, University of Lagos, Lagos, Nigeria University of Lagos, Lagos, Nigeria Wendell C. Wallace Department of Behavioural Sciences, Centre for Criminology and Criminal Justice, The University of the West Indies, St. Augustine, Trinidad and Tobago Michael Wilt Cedarville University, Cedarville, OH, USA Carmit Wolberg Ben-Gurion University of the Negev, Beer-Sheva, Israel Mustafa Yetim Department of International Relations, Eskisehir Osmangazi University, Eskisehir, Turkey Fatma Yol Department of Political Science, Bilkent University, Ankara, Turkey Claire Yorke Yale University, New Haven, CT, USA Noureddin Mahmoud Zaamout Department of Political Science, University of Alberta, Edmonton, AB, Canada Dominik Zenner International Organization for Migration (IOM), Migration Health Department (MHD), Regional Office (RO), Brussels, Belgium Queen Mary University of London, London, UK
A
2007–2008 Financial Crisis Nima Mirzaei Department of Industrial Engineering, Istanbul Aydin University, Istanbul, Turkey
Keywords
Financial Crisis · Macroeconomic activities · Recession · Global security
Introduction The 2007–2008 financial crisis, also known as the global financial and economic crisis, began in September 2007 and lasted through to October 2008. Market conditions deteriorated precipitously. It was the most significant financial and economic upheaval since the Great Depression (post-1929). Numerous countries all over the world were affected, mainly those in the Euro area and the United States of America. The financial crisis led to a sharp reduction in real activity in both the USA and European countries and was followed by a long-lasting slump. In addition to the Euro area and the USA, many countries in the Middle East and Asia were also affected by the crisis. In the early stage of the financial crisis, Asia was only moderately affected; later, however, the currencies and stock markets in the region came under strong downward pressure. In short, the world economy plunged into recession.
In the USA, the financial crisis devastated the banking industry and the financial markets. In August 2007, the housing bubble in the USA burst. During the financial crisis, the Bush administration and the Federal Reserve tried hard to avoid a complete collapse by spending billions of dollars to add liquidity to the financial market, but they did not succeed. By 2009, the situation had grown worse. In October 2009, the unemployment rate rose to 10% for the first time since 1982. The crisis was amplified by the lack of correct frameworks to explore interlinkages between developing and advanced economies. Finance as well as domestic and global trade was affected in most countries. After US banks, the next to be affected were banks in the UK and Europe. In September 2007, Northern Rock failed in the UK. According to a report published by The Guardian in September 2007, fears that the bank would shortly go bankrupt prompted customers to queue round the block to withdraw their savings. This was the first run on a British bank in 150 years. One year later, the British government bailed out several banks (such as HBOS, Royal Bank of Scotland, and Lloyds TSB) to avoid the collapse of the UK banking sector. Global security was affected in various ways by the global financial crisis. The first challenge was an imminent drop in defense spending worldwide. Many countries confronted serious cuts to compensate for the major debt they incurred in
© Springer Nature Switzerland AG 2023 S. N. Romaniuk, P. Marton (eds.), The Palgrave Encyclopedia of Global Security Studies, https://doi.org/10.1007/978-3-319-74319-6
battling bankruptcy and unemployment. This situation created a sense that countries were witnessing a fundamental breakdown of capitalism and globalization. Findings from different studies indicate that the financial shock suffered by any particular country was determined mainly by its degree of trade integration (Dwyer and Tan 2014; IMF 2012). Various criticisms were leveled against the financial system at the time. The crisis had an enormous effect on every aspect of economic life: recession, unemployment, bankruptcy, lower productivity growth, market instability, and poverty are some of the main outcomes. Numerous factors contributed to the crisis, for instance: certain financial innovations such as securitization bonds, insufficient regulatory measures, deregulation in the financial industry, the dysfunctional performance of rating agencies, various political factors, improper economic assumptions, poor monetary policies, weak banking systems, and even the behavior of the media. Although the crisis adversely affected many economies and initially caused substantial output losses, there are examples of investors, organizations, companies, and authorities who were able to convert the crisis into opportunity; they managed to attract resources and/or cash in the wake of the crisis and helped the economy recover swiftly. In this entry, the causes and effects of the 2007–2008 financial crisis and some of the relevant literature discussing these issues are summarized. In addition, some problems associated with global security are addressed. Finally, some suggestions are offered as to how a new financial crisis might be avoided.
Causes of the 2007–2008 Financial Crisis and Warning Indicators There is much debate about the causes of the 2007–2008 financial crisis. Researchers and specialists have pointed to different factors' causal roles in bringing about the global financial crisis. Uncertainties, a low degree of efficiency in the banking
system, low-skilled management, high capital costs, housing prices, unpayable debts, wrong financial policies, and many other factors have been named as causes of the crisis. Four of these are listed below (from Jawadi 2016): i. Insufficient regulatory measures that could not prevent the financial downturn. ii. Increasing financial risk due to sophisticated financial products born of financial innovation; hedge funds significantly amplified their effects. iii. The extreme dependency of the economic system on the credit market, especially in the USA in 2007, which led to the bankruptcy of several banks and a deep recession in many emerging and developed countries. iv. The moderation principle pursued by major central banks (whereby they sought to counteract cyclical turns of the economy), which resulted in a financial system disconnected from economic fundamentals and the real economy. Some studies have provided convincing empirical evidence that governments' interventions, specific actions, and policies caused, worsened, and prolonged the financial crisis. For instance, Taylor argues that the US government's monetary and housing policies were the root cause of the 2007–2008 global financial crisis, but adds that greedy investment bankers, shortsighted homeowners, incompetent rating agencies, predatory mortgage brokers, overly optimistic investors, and financial innovators of complex derivatives all played a role as well (Taylor 2010). He claims that governments and policy-makers caused the financial crisis by deviating, in setting interest rates, from historical precedents and principles that had a record of working in the past. In addition, misdiagnosing the problems in the bank credit markets, and responding wrongly by focusing on liquidity rather than risk, prolonged the financial crisis.
Finally, matters went from bad to worse when some financial institutions and their creditors, but not others, were supported in an ad hoc way, without a transparent and consistent framework (Taylor 2009).
In the USA, lack of regulation and excessive risk-taking thus contributed to the crisis (Fair 2017). A study by the IMF (see cross-reference ▶ “International Monetary Fund (IMF)”) claimed that the protracted Euro-area collapse implied weak aggregate demand, driven by restrictive fiscal policies, further hurting the world economy (IMF 2012). Others pointed toward the rigidities of European labor markets, which may have hampered the rebound of the European economy (Cette et al. 2015). An estimated model by Kollmann et al. (2016) showed a strong rise in investment risk during the crisis in 2007–2008 that put an end to the precrisis investment boom in all three major affected regions (the USA, Europe, and the rest of the world). Immediately after the financial crisis, many researchers started developing empirical models to explain and measure the variation across different countries (e.g., Shoham and Pelzman 2011). They also tried to find models to predict future crises, i.e., early warning models (Dwyer and Tan 2014). Some studies put forward simple mathematical models for ranking countries or predicting potential crises (Niroomand et al. 2018). Yet no research or proposed model has been able to show that specific indicators are empirically especially robust in predicting financial crises. Several researchers claim that it is possible to recognize a financial crisis before it happens by monitoring specific economic indicators; Frankel and Saravelos (2012) gathered a list of 17 early warning indicators that were found to be statistically significant in their review.
These indicators include, inter alia, currency reserves, real exchange rates, gross domestic product (GDP), current account, money supply, exports and imports, inflation, equity returns, real interest rates, debt composition, budget balance, terms of trade, political/legal factors, capital flows, and external debt (Hawkins and Klau 2000). The first two (currency reserves and real exchange rates) are the indicators that most frequently proved statistically significant in the case of the 2007–2008 financial crisis (Frankel and Saravelos 2012). It is, however, questionable whether cause and effect may be
mixed up in the process of employing these indicators, and how much a long list of possible factors can really guide policy-makers’ and regulators’ attention as to where and when a crisis may await.
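The logic of such indicator-based early warning screens can be illustrated with a minimal sketch. The thresholds, data, and function below are hypothetical illustrations, not Frankel and Saravelos's actual estimation; the sketch simply flags countries in which the two most frequently significant indicators, falling currency reserves and an overvalued real exchange rate, both breach a cutoff.

```python
# Hypothetical cutoffs for illustration only; real early warning models
# estimate indicator weights econometrically rather than using fixed rules.
RESERVE_DROP_THRESHOLD = -0.15   # reserves down 15% or more over the window
OVERVALUATION_THRESHOLD = 0.10   # real exchange rate 10% or more above trend

def flag_vulnerable(countries):
    """Return names of countries breaching both hypothetical thresholds."""
    flagged = []
    for name, reserve_change, overvaluation in countries:
        if (reserve_change <= RESERVE_DROP_THRESHOLD
                and overvaluation >= OVERVALUATION_THRESHOLD):
            flagged.append(name)
    return flagged

# Toy data: (country, change in currency reserves, real exchange rate deviation)
sample = [
    ("A", -0.20, 0.12),  # sharp reserve loss and overvalued currency
    ("B", -0.05, 0.15),  # reserves broadly stable
    ("C", -0.30, 0.02),  # currency near fair value
]
print(flag_vulnerable(sample))  # -> ['A']
```

Even this toy version exhibits the problem noted above: the thresholds must be chosen somehow, and a country can evade every fixed rule while still being vulnerable.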
Effects of the 2007–2008 Financial Crisis on the Economy and the Postcrisis Period The first signs of the 2007–2008 global financial crisis appeared in US consumption and investment, and it spilled over quickly into international markets. The impact on worldwide trade and international finance left no country untouched. The financial crisis had dramatic consequences for firms, companies, banks, and investors. Several working papers and reports published by two important organizations, the IMF and the World Bank (see cross-reference ▶ “World Bank”), provide an overview of the consequences. Studies show that the global financial crisis negatively affected human development overall; an analysis using the World Bank's dataset found that poverty increased in 173 countries across the globe (Kaya 2018). Commodity prices increased, and these changes had an especially severe impact on young children, pregnant women, and chronically ill people (Kaya 2018). Consequently, undernourishment rose around the world, and people with low incomes could afford to spend less on healthcare services. The 2007–2008 global financial crisis had a serious effect on economic activity in both advanced and emerging market economies. Several studies have estimated the effects of the financial crisis on different bases. Although the US and European countries were the most affected by the financial crisis, the rest of the world could not escape its effects either (Mirzaei and Vizvari 2011). The crisis affected share prices throughout the world. Many international banks that made substantial use of credit-scoring techniques reduced or stopped their credit expansion during the 2007–2008 global financial crisis, further impeding growth. The financial crisis led to a meeting of the G20 members in November 2008, the first since the
Lehman Brothers' bankruptcy two months earlier, and in April 2009 they agreed on a global stimulus package worth five trillion dollars. Global security was also affected by the 2007–2008 financial crisis. For instance, according to a report published by the European Parliamentary Research Service (Marie Lauesen 2013), during the financial crisis members of the North Atlantic Treaty Organization (NATO) significantly reduced their defense budgets. They reduced investment in certain critical capabilities in particular, and this continues to affect NATO's performance to this day. Research and development in the area of defense was affected. This put pressure on the larger NATO members (the USA on the one hand, and Great Britain, France, and Germany on the other) to compensate for this trend. The trend was particularly visible at the time in the USA. Concerning the Asian region, research on six emerging Asian economies (China, India, Indonesia, Malaysia, Taiwan, and Thailand) revealed that during the 2007–2008 financial crisis they all suffered declines in real activity in terms of their exports, real GDP, and industrial output (Coulibaly et al. 2013). This coincided with a marked slowdown in private credit growth. In this situation, companies began cutting their labor forces in order to reduce costs; as a result, the unemployment rate started to rise, a further negative consequence of the financial crisis. Although the Asian economies initially remained insulated and intact, the impact of the crisis ultimately became global. Investment, economic growth, and domestic consumption remained strong at first, but eventually exports from Asia declined (Das 2012). The reason was the unfolding financial crisis in the USA and Europe, which created a sense of uncertainty in Asia.
The majority of Asian and Pacific economies, especially Japan and the newly industrialized economies such as the Republic of Korea, China, the Hong Kong SAR, Singapore, and Taiwan, experienced sharp GDP contractions (Das 2012). Only the Chinese economy continued to perform well throughout 2007–2008 and in the wake of the crisis. Based on statistical data, in November 2008 more than 240,000 Americans lost their jobs.
It was estimated that in the USA the rise in the unemployment rate caused a fall in the wealth of those affected of 2.1% in 2009 and 3.3% in 2010 (Fair 2017). Many people who remained unemployed were eventually forced to take jobs at far lower pay because they otherwise stood to lose their homes and their credit rating. Moreover, the crisis's contribution to the fall in real GDP is estimated to have been between 4.5% and 5.5% over the course of a two-year period (Fair 2017). The UK was one of the countries hit most deeply by the crisis, undergoing a decline of 6.4% in GDP as a result of the financial crisis (from December 2007 to June 2009) (Cowling et al. 2018). Another cause of unemployment was the contraction of large and small firms during the 2007–2008 financial crisis. At the time, large firms appeared more vulnerable than small firms: a study of the 2007–2008 financial crisis showed that the sales and short-term debt of small firms suffered much less than those of large firms (Kudlyak and Sánchez 2017). Moreover, export-oriented firms were more vulnerable than domestic-oriented firms, as they faced a sharper decline in sales during the financial crisis and relied more on trade credit as an alternative source of financing (Coulibaly et al. 2013). In this situation, investors were reluctant to make investments. Unemployment is one of the important issues that affect global security: unemployment inside a country creates poverty, and poverty makes a country unstable. The lack of employment opportunities, especially in large cities around the world, is one of the main security and development challenges. During the global financial crisis, an increasing number of young people living in cities of developing countries around the world faced daunting economic and social challenges.
This situation forced many people to emigrate to other countries. Unemployment thus has negative impacts on global security, namely instability and violence. Particularly in countries that suffer from a lack of economic opportunities and unemployment,
young people are vulnerable to being drawn into armed violence, crime, terrorism, and other illegal activities. During the financial crisis, security conditions were more unstable than before. François Melese, a professor of economics at the Defense Resources Management Institute in California, indicated that the impact of the financial crisis on global security should not be underestimated. In a published report, he stated that “The U.S. Director of National Intelligence recently told Congress the economic crisis has replaced terrorism as the primary near-term security concern” (Melese 2009). Decreasing investment is another result of the financial crisis. An analysis performed after the 2008–2009 financial crisis in the Euro area indicated that shocks to interbank liquidity tightened lending conditions for the private sector and reduced economic activity, especially investment (Quint and Tristani 2018). Other research explained the differential severity of the interbank liquidity shortage across euro-area countries as an effect of the European financial crisis (Budnik and Kleibl 2018). Even though the recession had many adverse effects on the economy, it had the reverse effect on the inflation rate: the postcrisis slowdown of inflation in the European area was one of the main consequences of the private saving shock and fiscal austerity at that time (Kollmann et al. 2016). The stock market was another area severely affected by the global financial crisis. In a study of 50 equity markets in different regions, results from both conditional and unconditional correlation analyses showed that the impact of the 2007–2008 financial crisis on stock markets was significant (Kotkatvuori-Örnberg et al. 2013).
The global financial crisis also generated environmental threats that affect global security and the international system. The threat is obvious, direct, and dangerous. For instance, in poor tropical countries, deforestation is a serious global problem, and it is directly related to poverty. In some of these regions, poor people cut down trees to clear land
for agriculture or habitation; in others, poor populations exploit the short-term economic advantage of selling wood products to rich countries.
Conclusion The 2007–2008 financial crisis was the deepest financial crisis of the twenty-first century and the worst since the Great Depression of the 1930s. It started in the USA and, after a while, spread to Europe and Asia. The consequences were extensive, spread out across a large number of countries on different continents and affecting practically all relevant economic conditions. Numerous factors caused the global financial crisis. Some researchers believe that governments' interventions before, during, and after the crisis did more harm than good. To this day, many researchers, analysts, and econometricians argue about which factors should be seen as the main causes of the crisis. It is, however, clear that the 2007–2008 financial crisis was not the first economic crisis and will not be the last. From the global security perspective, the 2007–2008 financial crisis affected not only economic growth but also poverty and its distribution. It affected all sectors of society, particularly low-income people; downward wage adjustment and decreased social protection were among the results. Millions of people around the world were plunged into poverty, and there was a significant rise in the number of impoverished people worldwide. It is therefore possible to conclude that the 2007–2008 financial crisis contributed to global poverty, which in various ways affects global security negatively. Authorities, policy-makers, and politicians should rethink and reform the international financial architecture. For instance, it may be better to introduce a new benchmark, such as a global inflation target, to monitor and track interest-rate policy in the global economy (Taylor 2008). Such a benchmark would help avoid rapid interest-rate cuts in one country that adversely affect decisions in
other countries. In addition, international organizations such as the International Monetary Fund should set up a special access framework to guide their lending decisions to investors, companies, and borrowers in similar hard times to come. Further, as mentioned before, low-income people, young people, and women are especially vulnerable during times of global financial crisis and economic recession. Governments and authorities should therefore focus on these groups in seeking to alleviate the impact of economic downturns.
References
Budnik, K., & Kleibl, J. (2018). Working Paper Series, No. 2123.
Cette, G., De, B., & Fernald, F. J. (2015). The pre-global-financial-crisis slowdown in productivity. Federal Reserve Bank of San Francisco Working Paper Series.
Coulibaly, B., Sapriza, H., & Zlate, A. (2013). Financial frictions, trade credit, and the 2008–09 global financial crisis. International Review of Economics and Finance, 26, 25–38. https://doi.org/10.1016/j.iref.2012.08.006.
Cowling, M., Liu, W., & Zhang, N. (2018). Did firm age, experience, and access to finance count? SME performance after the global financial crisis. Journal of Evolutionary Economics, 28(1), 77–100. https://doi.org/10.1007/s00191-017-0502-z.
Das, D. K. (2012). How did the Asian economy cope with the global financial crisis and recession? A revaluation and review. Asia Pacific Business Review, 18(1), 7–25. https://doi.org/10.1080/13602381.2011.601584.
Dwyer, S., & Tan, C. M. (2014). Hits and runs: Determinants of the cross-country variation in the severity of impact from the 2008–09 financial crisis. Journal of Macroeconomics, 42, 69–90. https://doi.org/10.1016/j.jmacro.2014.07.002.
Fair, R. C. (2017). Household wealth and macroeconomic activity: 2008–2013. Journal of Money, Credit and Banking, 49(2–3), 495–523. https://doi.org/10.1111/jmcb.12387.
Frankel, J., & Saravelos, G. (2012). Can leading indicators assess country vulnerability? Evidence from the 2008–09 global financial crisis. Journal of International Economics, 87(2), 216–231. https://doi.org/10.1016/j.jinteco.2011.12.009.
Hawkins, J., & Klau, M. (2000). Measuring potential vulnerabilities in emerging market economies. BIS Working Papers, No. 91.
IMF (2012). World economic outlook. World Economic and Financial Surveys (Vol. 2012). International Monetary Fund, Washington, DC. ISBN 978-1-61635-389-6. Retrieved from http://www.imf.org/external/pubs/ft/weo/2012/02/pdf/text.pdf.
Jawadi, F. (2016). What have we learned from the 2007–08 financial crisis? Papers presented at the second international workshop on financial markets and nonlinear dynamics (Paris, June 4–5, 2015). Open Economies Review, 27(5), 819–823. https://doi.org/10.1007/s11079-016-9416-x.
Kaya, H. D. (2018). The global crisis and poverty. Studies in Business and Economics, 13(3), 63–73. https://doi.org/10.2478/sbe-2018-0035.
Kollmann, R., Pataracchia, B., Raciborski, R., Ratto, M., Roeger, W., & Vogel, L. (2016). The post-crisis slump in the Euro area and the US: Evidence from an estimated three-region DSGE model. European Economic Review, 88, 21–41. https://doi.org/10.1016/j.euroecorev.2016.03.003.
Kotkatvuori-Örnberg, J., Nikkinen, J., & Äijö, J. (2013). Stock market correlations during the financial crisis of 2008–2009: Evidence from 50 equity markets. International Review of Financial Analysis, 28, 70–78. https://doi.org/10.1016/j.irfa.2013.01.009.
Kudlyak, M., & Sánchez, J. M. (2017). Revisiting the behavior of small and large firms during the 2008 financial crisis. Journal of Economic Dynamics and Control, 77, 48–69. https://doi.org/10.1016/j.jedc.2017.01.017.
Marie Lauesen, L. (2013). CSR in the aftermath of the financial crisis. Social Responsibility Journal, 9(4), 641–663. https://doi.org/10.1108/SRJ-11-2012-0140.
Melese, F. (2009). The financial crisis: A similar effect to a terrorist attack? https://www.nato.int/docu/review/2009/FinancialCrisis/Financial-terrorist-attack/EN/index.htm.
Mirzaei, N., & Vizvari, B. (2011). Reconstruction of World Bank's classification of countries. African Journal of Business Management, 5(32), 12577–12585. https://doi.org/10.5897/Ajbm11.1868.
Niroomand, S., Mirzaei, N., & Hadi-Vencheh, A. (2018). A simple mathematical programming model for countries' credit ranking problem. International Journal of Finance and Economics, 24, 449–460. https://doi.org/10.1002/ijfe.1673.
Quint, D., & Tristani, O. (2018). Liquidity provision as a monetary policy tool: The ECB's non-standard measures after the financial crisis. Journal of International Money and Finance, 80, 15–34. https://doi.org/10.1016/j.jimonfin.2017.09.009.
Shoham, A., & Pelzman, J. (2011). A review of the crises. Global Economy Journal, 11(2), 5. https://doi.org/10.2202/1524-5861.1781.
Taylor, J. B., & Williams, J. C. (2008). A black swan in the money market. NBER Working Paper 13943, National Bureau of Economic Research, April 2008. http://www.nber.org/papers/w13943.
Taylor, J. B. (2009). The financial crisis and the policy responses. NBER Working Paper, 23, 1–19. https://doi.org/10.1108/13581980911004352.
Taylor, J. B. (2010). Getting back on track: Macroeconomic policy lessons from the financial crisis. Federal Reserve Bank of St. Louis Review, 92(3), 165–176. Retrieved from http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.182.4073&rep=rep1&type=pdf.
Activists and Activism
Katie Paulot, University of British Columbia Okanagan, Kelowna, BC, Canada
Rebecca Shea Irvine, Institute for Research on Women and Gender, University of Michigan, Ann Arbor, MI, USA
Keywords
Culture · Government · Institutions · Power · Protests · Security · Social movements
Activism

Activism refers to efforts on behalf of a cause that extend beyond what is expected (Martin 2007) and can vary based on the cause, culture, context, and other associated factors. Although the term lacks a rigid definition, it most frequently refers to people expressing disapproval of the actions of a state (or other entity with significant social, political, or economic power) with the goal of improving a particular aspect or aspects of society (Gonzales 2008; Martin 2007). Activism also typically occurs within a social movement. A social movement can be broadly defined as a mechanism by which a united group of people state their belief or cause, show their concern or disdain for certain situations, and fight for a solution to identified problems (Cammaerts 2015). A social movement typically has three primary facets: an oppositional force; loose or informal connections among members; and an identified collective working toward a common goal (Cammaerts 2015). Social movements provide a space for people to engage critically and allow them to articulate their concerns to the state (Bräuchler 2019). They also allow groups to tackle complex causes spanning multiple players and geographic locations, enable members to specialize in approaches that best utilize their individual skill sets and networks, and foster support systems between groups within a movement (Martin 2007). These factors help to propel activism beyond what an individual can do.
The landscape for activism has changed dramatically since the 1960s, as power has shifted from states to international organizations (such as the World Bank), the influence of nongovernmental organizations (NGOs) has increased, and corporate power under neoliberalism has been on the rise (Della-Porta and Tarrow 2005). These factors, among others, have arguably lessened the power of the state and affected the means through which activism is conducted. Owing to this changing landscape, activism has become more transnational and global in its approach. Within a transnational framework, activists experience the influence of social movements globally, and these ideas diffuse across state borders, allowing people in remote locations to protest issues of importance elsewhere (Della-Porta and Tarrow 2005). This can be described as transnational collective action, whereby activists from around the world participate in coordinated protests or actions on behalf of a particular cause (Della-Porta and Tarrow 2005).
Activists

An activist can be defined as someone who is "energetically engaged in action that is intended to right the wrongs that have led to or perpetuate injustice, inequality, and sociopolitical disparity" (Zuzelo 2020, 190). The reasons why people become activists vary immensely, but many seek to address a particular issue of importance to them, enjoy a sense of contributing to something larger than themselves, and seek to make the world a better place. Nevertheless, individual activists face many obstacles. The lifestyle of an activist typically involves long days, little to no pay, and a high emotional toll. Additional factors such as an individual's socioeconomic status or fear of being targeted by institutions or peers may affect someone's ability to become or remain active within a social movement. Citizenship status can also complicate one's capacity to become an activist, as many undocumented people are afraid to engage with important issues out of fear of deportation (Gonzales 2008). Activists also have a history of being targeted by governments or institutions.
For example, following the Salvadoran Civil War (October 15, 1979, to January 16, 1992), disabled veterans of the Frente Farabundo Martí para la Liberación Nacional (FMLN), the guerrilla wing of a coalition of opposition groups, were afraid to speak out about the need for supplies and equipment for veterans out of fear of intimidation or violence from state security forces (Charlton 1998). One major fear among activists that is worth noting is the possibility of burnout. Burnout refers to activists, or anyone passionate about a cause, losing the energy they devote to the movement (Pines 1994). This can happen when activists feel as though they have failed to achieve their goals, which often correlates with a feeling of being unable to affect society in a positive way. The sense of success or failure an individual feels typically stems from the environment of their activism, such as their organization or group of activists. Those in supportive environments tend to feel more successful in their efforts than those in stressful activist environments, who are therefore more likely to burn out (Pines 1994). A few other factors play into the likelihood of burnout. The ideology of an activist is one of them: more dedicated (or so-called radical) activists tend to face higher levels of burnout because change is harder to achieve the more it deviates from the status quo (Pines 1994). Another factor is the ability of activists to create actions that achieve their goals. For those who cannot turn ideological goals into tangible action, the potential for burnout may be higher due to the frustration of not being able to act. Burnout is a real fear for activists that is often hard to avoid and complicates their ability to continue their work.
Methods of Activism

Activists often make deliberate decisions about whether to take violent or nonviolent action. According to Gene Sharp (Albert Einstein Institution 2021), a well-known scholar in the field of nonviolent action, there are three broad categories of nonviolent action: protest and
persuasion, consisting of actions such as rallies, speeches, and protest signs; noncooperation, or not complying with the institution that one is protesting (often expressed through boycotts or strikes); and direct action to impede the institution (such as a sit-in) (Martin 2007). Violent actions might also be taken to further a cause, and these are typically referred to as armed struggles (Martin 2007). People who perform violent actions frequently aim to create economic damage in order to harm an institution, the state, or a targeted group. One such example was the strategic decision by the Irish Republican Army (IRA) to attack economic targets in order to deter outside investment and force the British government to invest in repairing the damage (Smith 1995). Grassroots activism refers to actions coordinated and conducted by groups seen to have little power or influence individually within their state (Martin 2007). Grassroots activism can span from small groups of people to millions uniting around a common goal. What defines the grassroots aspect is the power dynamic of the cause at hand. For example, a group of citizens who have organized via a social media campaign to protest the actions of an institution would be considered grassroots activists. Digital activism, or cyberactivism, occurs when anyone uses the internet and digital capabilities to advocate for reform (Car and Musladin 2018). Such action can include the use of web pages, social media, or cellular devices to further a cause. This form of activism has gained popularity owing to the ease with which it enables connections between large numbers of people across vast distances (Cammaerts 2015). Cyberactivism creates opportunities for engagement by many people who may not have been involved in more traditional forms of activism, yet it is important to recognize that accessibility remains a barrier for many.
In today’s world, there is an unequal distribution of technological accessibility, knowledge, and infrastructure (Cammaerts 2015), and the ability to access the internet reliably is a privilege not available to many. Cyberactivism is also changing how information is received and who is disseminating it, since
the internet provides platforms for individuals to challenge the state and circulate information that can alter the way identities are understood (Dartnell 2003). Some examples of social media as an amplifying tool include the Green Revolution in Iran, the Arab Spring, and Occupy Gezi (also known as the Gezi Park protests) (Mercea and Bastos 2016). By challenging the state, cyberactivism is changing how global security will look in the future. For example, cyberactivism has allowed individuals and groups to challenge national borders as well as the power of the state (Dartnell 2003). Another hurdle to cyberactivism has to do with the control of online platforms, both by the platform companies themselves and by the state. When utilizing social media as a platform of protest, one is reaffirming the power structure of the corporations that control the social media site (Cammaerts 2015). While this may be a nonissue for many movements, corporate power can be the very institution against which one is protesting, which complicates the use of such platforms. In addition to corporate control, states have begun to control internet usage (Cammaerts 2015). In 2005, the Kingdom of Nepal shut down all public telecommunications in an apparent attempt to curtail Maoist insurgents; the general public remained disconnected for 88 days (Ang et al. 2012).
Effectiveness of Activism in a Security Framework

The success of a movement is important to activism work, and strategizing about which goals are attainable or manageable for an organization or group is crucial. While recognizing regional, national, and grassroots organizations, global security activism is typically most effective when conducted through transnational nongovernmental organizations (Krause 2014). Within the framework of global security, several factors indicate whether activists can be influential on a specific issue: (1) for activists to have the ability to make a change, the policy or issue must be something not well established among the community of experts and lawmakers within the institution of power; (2) the activists must have the ability to situate the issue outside of policy concerns in order to achieve public attention; (3) activists must be able to take on their cause with the necessary resources; and (4) activists must be able to align with the mainstream status quo so that the government or smaller actors are not discouraged from working with them (Krause 2014).

Cross-References

▶ Actors and Stakeholders in Non-traditional Security
▶ Arab Spring
▶ Authoritarianism
▶ Deforestation
▶ Election Violence
▶ Emancipation
▶ Freedom of Expression in the Digital Age: Internet Censorship
▶ Global Civil Society
▶ Governance
▶ Ideology
▶ Internet (Governance)
▶ Peace Agreements
▶ Peace and Reconciliation
▶ Protests and Conflict
▶ Riots and Rioting
▶ Security State
▶ World Bank

References
Albert Einstein Institution. (2021). 198 methods of nonviolent action. https://www.aeinstein.org/nonviolentaction/198-methods-of-nonviolent-action/
Ang, P. H., Tekwani, S., & Wang, G. (2012). Shutting down the mobile phone and the downfall of Nepalese society, economy and politics. Pacific Affairs, 85(3), 547–561.
Bräuchler, B. (2019). From transitional to performative justice: Peace activism in the aftermath of communal violence. Global Change, Peace & Security, 31(2), 201–220.
Cammaerts, B. (2015). Social media and activism. In The international encyclopedia of digital communication and society. Retrieved from https://onlinelibrary.wiley.com/doi/10.1002/9781118767771.wbiedcs083
Car, V., & Musladin, M. (2018). Digital activism and human security: Two cases of Croatian leaks. Teorija in Praksa, 55, 2.
Charlton, J. (1998). Nothing about us without us: Disability oppression and empowerment. Berkeley/Los Angeles: University of California Press.
Dartnell, M. (2003). Weapons of mass instruction: Web activism and the transformation of global security. Millennium: Journal of International Studies, 32(3), 477–499. Retrieved from https://journals.sagepub.com/doi/abs/10.1177/03058298030320030701
Della-Porta, D., & Tarrow, S. (2005). Transnational protest and global activism. Lanham: Rowman & Littlefield.
Gonzales, R. G. (2008). Left out but not shut down: Political activism and the undocumented student movement. Northwestern Journal of Law and Social Policy, 3(2), 219–239. Retrieved from http://scholarlycommons.law.northwestern.edu/njlsp/vol3/iss2/4
Krause, K. (2014). Transnational civil society activism and international security politics: From landmines to global zero. Global Policy, 5(2), 229–234.
Martin, B. (2007). Activism, social and political. In Encyclopedia of activism and social justice. Retrieved from http://sk.sagepub.com.proxy1.cl.msu.edu/reference/activism/n12.xml
Mercea, D., & Bastos, M. T. (2016). Being a serial transnational activist. Journal of Computer-Mediated Communication, 21(2), 140–155. Retrieved from https://academic-oup-com.proxy1.cl.msu.edu/jcmc/article/21/2/140/4065368
Pines, A. M. (1994). Burnout in political activism: An existential perspective. Journal of Health and Human Resources Administration, 16(4), 281–394. Retrieved from https://www.jstor.org/stable/25780582?seq=1#metadata_info_tab_contents
Smith, M. L. R. (1995). Fighting for Ireland?: The military strategy of the Irish republican movement. London: Routledge.
Zuzelo, P. R. (2020). Ally, advocate, activist, and adversary: Rocking the status quo. Holistic Nursing Practice, 34(3), 190–192.
Retrieved from https://oce-ovidcom.proxy1.cl.msu.edu/article/00004650-20200500000009/HTML
Further Reading

Bennett, W. (2003). Communicating global activism. Information, Communication & Society, 6(2), 143–168.
Bennett, L. (2014). 'If we stick together we can do anything': Lady Gaga fandom, philanthropy and activism through social media. Celebrity Studies, 5(1–2), 138–152.
McCaughey, M., & Ayers, M. D. (Eds.). (2003). Cyberactivism: Online activism in theory and practice. London: Routledge.
Reitan, R. (2006). Global activism. London: Routledge.
Velasquez, A., & LaRose, R. (2014). Youth collective activism through social media: The role of collective efficacy. New Media & Society, 17(6), 899–918.
Actors and Stakeholders in Non-traditional Security
Aletheia Kerygma B. Valenciano and Aries A. Arugay, University of the Philippines-Diliman, Quezon City, Philippines
Keywords
Human security · Individual security · Nonstate actors · Non-traditional security · States
Definition

Actors and stakeholders in non-traditional security refer to entities that play important roles in the provision and oversight of non-traditional security, as well as the constituents or recipients of this type of security.
Introduction

The non-traditional concept of security acknowledges that challenges to the survival and well-being of peoples and states do not always come from conventional or military sources. Security is no longer associated only with state sovereignty or territorial integrity but must involve the security of people (i.e., survival, well-being, and dignity) at the individual and societal levels. Non-traditional security is the conceptual label given to challenges that were overlooked in the past due to overemphasis on, and narrow views of, traditional or hard security. Within the non-traditional security agenda, the key players are states, international governmental organizations (IGOs), international nongovernmental organizations (INGOs), and domestic nongovernmental organizations (NGOs). Non-traditional security issues that threaten individual security are often rooted in social, economic, and even cultural conditions. In addition, non-traditional security challenges are transnational in scope and have become more pronounced in recent years due to permeable
borders and an increasingly integrated world. The scale of these threats is also large in comparison to the relative capacity of traditional security actors, thus requiring the presence of other actors and stakeholders. Their position within this new security agenda provides the means to go beyond unilateral remedies and allows for a more comprehensive response to non-traditional threats to security.
Non-traditional security issues take multiple forms and arise from different sources; thus their management entails the involvement of various actors and stakeholders.

Defining Actors and Stakeholders

The State
The state remains the most dominant actor in resolving security challenges, as its reach encompasses almost every human activity. As a political association, the state has sovereign jurisdiction over a well-defined territory and exercises authority through permanent and recognizable institutions (Heywood 2013). While the state remains an important, if not the dominant, actor in the pursuit of security, threats to security occur outside and beyond the purview of the state. This makes the non-traditional security agenda open to the involvement of non-state actors.

Intergovernmental Organizations (IGOs)
Intergovernmental organizations, or international organizations, are entities created through a multilateral treaty and composed primarily of sovereign states that choose to work in good faith on issues of common interest. The presence of a treaty, which acts as a charter, subjects all parties to international law and allows them to enter into enforceable agreements. Since IGOs usually cover multiple issues and involve governments, their offices are found in different regions. Among the best-known IGOs are the United Nations (UN), the World Trade Organization (WTO), the European Union (EU), the Organization of Petroleum Exporting Countries (OPEC), and the North Atlantic Treaty Organization (NATO). These IGOs have become essential actors in the international community and are vital in the areas of international security, humanitarian aid, environmental and maritime protection, agricultural and fisheries sustainability, energy, and finance and trade. The legal character of IGOs, owing to the treaties that create them, enables them to make rules and exercise power within member states, thus increasing their global impact.

International Nongovernmental Organizations (INGOs)
INGOs are groups that are independent of government involvement but international in scope. Although they operate internationally, they are not established through intergovernmental agreements, unlike IGOs. INGOs are loosely categorized as either advocacy INGOs, which aim to influence governments on key policy issues, or operational INGOs, which provide specific services to target communities or sectors. Some INGOs are transnational federations of national groups. The objectives of INGOs include human rights protection, environmental preservation, education, and the advancement of women. Well-known INGOs include Amnesty International, Care International, the International Federation of Red Cross and Red Crescent Societies, Oxfam International, Save the Children, and the World Wildlife Fund.

Domestic Nongovernmental Organizations (NGOs)
NGOs are nonprofit groups that are independent of the government. NGOs are structurally organized at the community, national, and international levels to serve social or political objectives such as those advocated by INGOs. NGOs are loosely grouped under a variety of terms such as "voluntary organizations," "nonprofit organizations," "grassroots organizations," or "intermediary organizations," with the common goal of carrying out various social activities. One of the main and defining features of NGOs is their creation "from below," although in nondemocratic settings the establishment of NGOs follows a state-led model, with the government playing an important role in providing support and funding (Tang 2004, pp. 226–227).
The Role of Actors and Stakeholders in Non-traditional Security

Non-traditional security provides a new paradigm in which traditionalist approaches or solutions initiated by the state are no longer adequate; resolving non-traditional security issues must therefore involve a transnational and multi-stakeholder approach.

The State
For non-traditional security issues such as human security, "the primary referent object is the individual," but this does not negate the importance of the state, as it remains the principal guarantor of the security of its citizens (Hadiwinata 2006, p. 201). There are three explanations to support this. First, state security and individual security are interrelated, and without a secure state, the security of individual citizens is threatened. Second, resolving non-traditional security issues requires governance and policies that can only come from the state, particularly in sectors overseen by state institutions such as education, health, social services, and labor and employment. Third, non-traditional security issues usually arise from the lack or absence of vital public goods (Hadiwinata 2006, p. 201). This public-good character of non-traditional security necessitates state action to ensure that provision and distribution will be as wide as possible. Non-traditional security issues tend to overlap and pose transboundary challenges to the state. Despite this, there has been a rising trend among countries to resort to a more state-led and traditional multilateral approach in response to isolationist and protectionist sentiments around the world (Trajano 2019, p. 3). Still, for some proponents of the non-traditional security agenda, the state should be the initiator, since its policy actions and decisions can reach every layer and segment of society and have implications for the actions of non-state actors.
Intergovernmental Organizations (IGOs)
The borderless nature of non-traditional security issues makes it more difficult for individual states to deal with them single-handedly. In the last few decades, states have come together with the common understanding that managing non-traditional security issues will not benefit solely from unilateral approaches and that a more comprehensive response is therefore needed. Global governance has been cited as a more efficient way of resolving non-traditional security issues. Through IGOs, rules, norms, and practices can be institutionalized, while agreements are given a legal character and are thus enforceable. Practitioners, however, have pointed out that non-traditional security issues can stagnate at the global level due to coordination problems. Their discussion and management at the regional level has therefore been proposed as a more viable mechanism. For example, the 1997 Asian financial crisis resulted in socio-economic problems and, at the individual level, job insecurity, and this helped promote discussions about human security concerns within the regional security agenda in East Asia. In response to the implications of the crisis for non-traditional security, member states of the Association of Southeast Asian Nations (ASEAN) embarked on several regional institutional innovations, including the ASEAN Plus Three (APT) arrangement, which formalized economic ties between ASEAN and China, Japan, and South Korea (Caballero-Anthony 2012).

Nongovernmental Organizations
The tendency of state apparatuses to prioritize traditional security issues gave INGOs and NGOs the opportunity to contribute to the securitization of non-traditional security issues. In fact, the non-traditional security agenda was mainly an initiative of non-state actors such as NGOs.
There are also instances where state neglect, or failure to carry out the state's mandate to provide security to the people, has allowed non-state actors to step in and fill the gap in alleviating non-traditional security threats. In the non-traditional security discourse, INGOs and NGOs are among the most proactive
actors. Often, domestic NGOs form a transnational network, thus deepening engagement with vulnerable groups and increasing support outside of the formal apparatus of the state. Non-traditional security threats such as poverty, displacement, and natural disasters have been met by INGOs and NGOs with a more comprehensive humanitarian response that includes training programs for local actors, public-private partnerships, and empowering displaced peoples and migrant communities (Trajano 2019). The effectiveness of INGOs and NGOs is due to their willingness to go beyond conventional practices of governance and the emphasis on grassroots, people-centered, and people-initiated approaches to non-traditional security.
Case Study: Typhoon Haiyan in the Philippines

Human security is a prime example of a non-traditional security issue that has become increasingly important in the security agenda since its conceptualization in 1994. Initially, it had two components, "freedom from want" and "freedom from fear," but it has since been supplemented with "freedom to live in dignity" (UNDP Report 1994). The emergence of human security has engendered considerable debate, leading to a review of the existing security discourse as well as discussions regarding narrow versus broad definitions and theoretical and practical applications (Nishikawa 2009). According to the report, there are seven aspects of human security: economic, food, health, environmental, personal, community, and political security. The humanitarian disaster brought on by Typhoon Haiyan is an example of a non-traditional security issue that was managed by different actors and stakeholders with varying degrees of success. Typhoon Haiyan was one of the most powerful tropical cyclones to strike land on record. The Category 5 super typhoon made landfall in the Philippines on November 8, 2013, and for over 16 hours it swept through and devastated six provinces. As one of the deadliest Philippine typhoons ever recorded, the disaster quickly created a humanitarian crisis. According to the National Disaster Risk Reduction and Management Council (NDRRMC 2014), Haiyan killed 6,300 people, injured 28,689, displaced 4.1 million, and caused Php 90 billion (USD 1.788 billion) in property damage. In the immediate aftermath of the typhoon, electricity, telecommunication, and water supplies were damaged and became unusable. Disrupted government services initially posed a number of significant problems for the humanitarian relief operations, but despite logistical and physical challenges, relief efforts reportedly reached the devastated areas faster than expected. This is credited to public pressure on the national and local governments and the quick response of the international community in providing humanitarian aid.

State and Interstate Cooperation
Operational dynamics between the national and local governments are present during crisis situations such as the natural disaster brought by Typhoon Haiyan, but in this case the emphasis was given to the local government unit (LGU). Before and after landfall, the LGU was tasked with being the first responder and therefore bore the main responsibility for evacuation, rescue, relief, and post-disaster measures for slowly rebuilding the areas devastated by the typhoon. The presence of the military was also crucial for reassuring people that the government was on top of the situation, from enforcing evacuation to relief distribution and, more importantly, ensuring their physical safety (Frago-Marasigan 2019, p. 134). Interstate cooperation was facilitated through official development assistance. Immediately after the landfall, the US Agency for International Development (USAID) authorized funds to be released for the implementation of an emergency response program, and the United States Pacific Command (USPACOM) also deployed rescue teams and dispatched helicopters for rescue via airlifts and other relief efforts (CRS Report 2014). The Japan International Cooperation Agency (JICA) has been at the forefront of rehabilitation efforts for disaster-stricken areas. JICA initiated evidence-based research to assess the extent of the damage as well as the services and infrastructure needed to rebuild the towns affected by Typhoon Haiyan. Years after the landfall, JICA continues to help in restoration and reconstruction efforts, particularly in recovering public services and spaces, strengthening public facilities, and rebuilding social and economic infrastructure (JICA 2014).

Intergovernmental Organizations (IGOs)
Despite the presence of the National Disaster Risk Reduction and Management Plan (NDRRMP), the extent of the damage caused by Typhoon Haiyan rendered government efforts inadequate. The threats to human security became alarmingly high, and this led to calls for a "humanitarian system-wide emergency response," which commits humanitarian actors such as the UN and its agencies, INGOs, and NGOs to improve the coordination and efficiency of disaster response (Frago-Marasigan 2019, p. 134). To supplement the UN country team that was already in place, the UN established teams tasked with disaster assessment, coordination of humanitarian aid, and donor enlistment. Areas that benefited from the UN's efforts include recovery, food, shelter, livelihood, education, communication, water, sanitation, and hygiene. Recovery efforts and humanitarian aid after the typhoon were complex due to the high-level coordination needed among actors and international bodies. Other than the UN and its attached agencies, INGOs and NGOs also responded to the humanitarian efforts (CRS Report 2014).

INGOs and Domestic NGOs
The involvement and contribution of INGOs and domestic NGOs during and after Haiyan cannot be overstated. Natural disasters tend to strip people of their basic human dignity, and oftentimes basic needs and hygiene become part of the collateral damage as governments scramble to account for economic damages and seek to reestablish stability and order.
INGOs and NGOs that had been operating in affected areas even prior to the disaster have a better grasp of the people’s needs, and given their on-the-ground
engagement, expertise, and research, they are able to respond to the survivors' needs more adequately. These organizations were credited with making survivors feel human again (Frago-Marasigan 2019). INGOs such as Oxfam, Habitat for Humanity, and Care International partnered with domestic NGOs to enlist donor support and international funding and to aid in relief operations. Their engagement and contributions were also sustained post-Haiyan through evidence-based research leading to policy recommendations. Observers noted that INGOs and NGOs were more efficient and faster during the disaster relief operations. In contrast, government agencies faced heavy criticism for partisanship, which led to wasted aid and lost opportunities. In addition, government response was rife with coordination problems between the national and local governments, insufficient and unequal disaster relief assistance, failure to deliver emergency shelter assistance, and inadequate military presence to ensure security in some areas (Frago-Marasigan 2019).
Issues and Challenges

A range of issues continues to challenge actors and stakeholders in non-traditional security. For states, the main issue remains how to approach non-traditional security issues without falling back on traditional and state-led mechanisms. States also encounter problems in identifying non-traditional security issues and in deciding which issues will be given priority in terms of budget and logistics. For developing countries, success in combating human insecurity is hampered by inadequate infrastructure for basic social needs such as public health and emergency relief facilities, law and order problems, and continued political and economic discontent that leads to low citizen trust in government. For IGOs, despite the presence of formal agreements, states still tend to take a traditional approach to security, posing challenges to the prevention, detection, and containment of non-traditional security threats such as infectious diseases, transnational crime, and human
trafficking. Issues related to "interference in internal affairs," such as the extradition of perpetrators of transnational crimes, arise from the emphasis on traditional conceptualizations of borders. For INGOs and domestic NGOs, the main issue remains restrictive state policies and non-traditional security practices that inhibit deeper engagement by non-state actors, the delivery of NGO services, and donor enlistment. Further, smaller and newly established domestic NGOs have limited organizational capacity and therefore risk becoming dependent on private donors. In some instances, domestic NGOs that get their funding from INGOs may cease operations due to reductions in international aid. INGOs also have their own agendas and established practices, but will often have to operate in different political, economic, and social environments. The absence of a culture of volunteerism, the lack of a strong private sector that could be a source of funding, and the lack of media involvement and coverage of NGO initiatives are other issues that hinder the creation of an enabling environment for non-state actors (Tang 2004). Within the non-traditional security discourse, there is a consensus that state security and individual security are interrelated; thus the agenda has constantly called for a collaborative approach between state and non-state actors in overcoming non-traditional threats to security. At the same time, this has resulted in a lack of clarity on the specific roles that non-state actors need to perform to complement state actions. Often, the lines are blurred and non-state actors take on responsibilities that are traditionally performed by the state. Governments also outsource research and the implementation of programs to domestic NGOs, given the latter's expertise and on-the-ground experience with vulnerable groups.
The relative absence of corruption and partisanship also makes non-state actors more efficient in reaching their targets, and while this helps in combating non-traditional threats to security, the state may come to be perceived as redundant, which can erode trust in the government. The political setting also affects the dynamics between state and non-state actors. In countries such as China, where civil society is state-led, domestic NGOs are overseen by government departments. Although some have alternative
sources of funding, the majority of them still rely on government support. Often, the expectation is that their advocacies be aligned with national goals in order to secure funding. Despite these challenges in the relationship between the state and non-state actors, proponents of non-traditional security continue to emphasize the need to pursue a transnational and collaborative approach. Information and resource sharing, coordination, training, and governance are incremental steps that can be taken to minimize conflict between actors that might otherwise derail the targets of both state and non-state actors. In the long run, the non-traditional security agenda will benefit from the institutionalization of practices by transnational networks and the establishment of accountability measures to increase trust between actors.
Conclusion
The increasing importance given to non-traditional security points to its role in understanding global insecurity in a globalized and tightly interconnected world. Security threats have become multifaceted and are initiated by multiple sources. They are also transnational in nature as regards both their origins and their impact on vulnerable populations. Non-traditional threats to security such as hunger, disease, job insecurity, crime, and environmental degradation are all interlinked, are experienced by people everywhere, and are therefore best addressed through a cooperative approach. The transboundary nature, as well as the scope and scale, of non-traditional security issues necessitates the engagement of and collaboration between states, IGOs, INGOs, and domestic NGOs. Despite the traditionalist approach to security that sees states pushing back to ensure that state-led approaches remain dominant, non-state actors and stakeholders continue to be key actors in securitizing non-traditional security issues and finding ways to combat them beyond traditional mechanisms. In order to sustain past successes in non-traditional security issue areas such as natural disasters, government decisions and actions should support the non-traditionalist approach instead of
weakening it and undermining the efforts of both the state and non-state actors and stakeholders. INGOs and NGOs also need to develop new capabilities and overcome organizational barriers to go beyond their initial contribution of securitizing non-traditional security issues.
Cross-References
▶ Actors and Stakeholders in Non-traditional Security
▶ Global Threats
▶ Human Security
▶ Peacebuilding
▶ Regional Security Organizations
References
Caballero-Anthony, M. (2012). Non-traditional Security Challenges, Regional Governance, and the ASEAN Political Security Community (APSC). In R. Emmers (Ed.), ASEAN and the Institutionalization of East Asia (pp. 27–42). Routledge.
Congressional Research Service. (2014). Typhoon Haiyan (Yolanda): US and international response to Philippine disaster. https://www.everycrsreport.com/reports/R43309.html#_Toc379977666
Frago-Marasigan, P. (2019). The Haiyan Crisis: Empowering the Local, Engaging the Global. In C. G. Hernandez, E. M. Kim, Y. Mine, & X. Ren (Eds.), Human Security and Cross-Border Cooperation in East Asia (pp. 133–153). Palgrave Macmillan.
Hadiwinata, B. (2006). Poverty and the Role of NGOs in Protecting Human Security in Indonesia. In M. C. Anthony, R. Emmers, & A. Acharya (Eds.), Non-traditional Security in Asia: Dilemmas in Securitization (pp. 198–224). Ashgate Publishing.
Heywood, A. (2013). Politics (4th ed.). Macmillan International Higher Education.
Japan International Cooperation Agency. (2014, November 19). Toward a new normal: one year after Typhoon Yolanda. Japan International Cooperation Agency. https://www.jica.go.jp/english/news/field/2014/141119_01.html
National Disaster Risk Reduction Management Council. (2014). NDRRMC update: Situational report no. 108 effects of Typhoon Yolanda/Haiyan. http://www.ndrrmc.gov.ph/attachments/article/1329/SitRep_No_108_Effects_of_Typhoon_Yolanda_HAIYAN_as_of_03APR2014_0600H.pdf
Nishikawa, Y. (2009). Human security in Southeast Asia: Viable solution or empty slogan? Security Dialogue, 40(2). http://sdi.sagepub.com/content/40/2/213
Tang, J. T. (2004). A regional approach to human security in East Asia: Global debate, regional insecurity and the role of civil society. Research Collection School of Social Sciences. Paper 2374. https://ink.library.smu.edu.sg/soss_research/2374
Trajano, J. C. (2019). Advancing non-traditional security governance through multi-stakeholder collaboration. RSIS-NTU. https://think-asia.org/bitstream/handle/11540/10495/PR190613_Advancing-Non-TraditionalSecurity-Governance.pdf?sequence=1
United Nations Development Programme. (1994). Human Development Report. http://hdr.undp.org/en/content/human-development-report-1994
Further Reading
Caballero-Anthony, M. (Ed.). (2016). An Introduction to Non-traditional Security Studies: A Transnational Approach. Sage.
Cook, A., & Nair, T. (Eds.). (2021). Non-traditional Security in the Asia-Pacific: A Decade of Perspectives. World Scientific.
Masys, A. J. (2016). Exploring the Security Landscape: Non-traditional Security Challenges. Springer.
Peou, S. (Ed.). (2015). Human Security in East Asia: Challenges for Collaborative Action. Routledge.
Air Pollution Haniyeh Nowzari Department of Environment, Abadeh Branch, Islamic Azad University, Abadeh, Iran
Keywords
Particulates · Industrialization · Air contamination · Stationary/mobile sources
Introduction
The planet is made up of three major natural compartments: air, water, and soil. Pollution of these compartments negatively affects human beings, as well as other living organisms and ecosystems. Air pollution has therefore become an ever-increasing concern over recent decades. The metabolic activity and healthy development of most mammals rely on the availability of clean air. Oxygen – one of the major components of air – is necessary for the breathing process. The
presence of pollutants in the atmosphere, such as carbon monoxide, may inhibit the role of oxygen in metabolic processes, while other pollutants, either organic or inorganic, may exhibit toxic and carcinogenic properties in humans. Plants, microorganisms, and buildings are all susceptible to the presence and undesirable effects of volatile pollutants (Kennes and Veiga 2013).
Definition
Air pollution is the presence of one or more contaminants, or combinations of contaminants, in the air at concentrations or for durations that could harm plants, animals, or humans and their belongings, or disrupt their comfort. It is often not visible to the naked eye, as the pollutant particles are smaller than the human eye can detect. Pollution can become visible in some situations, for example, in the form of sooty smoke from the open burning of crop residues or other waste, as well as from burning wood, coal, petrol, and diesel fuels for cooking and heating, transport, or power production. The fact that you cannot see air pollution does not mean that it does not exist (WHO 2014). Factors affecting air pollution include:
• Population growth: The increase in global population from six billion people in 2005 toward an expected nine billion in 2050 is inevitably associated with continuing trends of industrialization, urbanization, motorization, and, of course, air pollution (Janssen et al. 2007).
• Industrialization: Iron ore mining and metallurgy, for example, result in high levels of industrial dust and oxides of sulfur, carbon, and nitrogen. The annual average emission of industrial dust was 1.25 million tons for the iron ore mining region of Dnepropetrovsk province over the last decade. Iron ore mining in quarries involves blasting, which has a considerable environmental effect (Kharytonov et al. 2005).
• Urbanization: Europe is an urbanized continent. More than two-thirds of the population
is living in cities. The concentration of human activities in a relatively small area imposes large pressures on the urban environment and can lead to serious environmental problems. As a result, cities experience increasing signs of environmental stress, notably in the form of poor air quality (Mensink et al. 2005).
• Motorization: Without doubt, population growth will lead to strong growth in the need for motorization, especially in countries such as India and China that are undergoing major industrialization. According to the International Energy Agency, global energy consumption will increase by some 60% by 2030, and by 2050 it is expected to be three times higher than today (Janssen et al. 2007).
• Economic development: The increase in ambient air pollution is driven by the rapid expansion of megacities, the globalization of industrial production, the proliferation of pesticides and toxic chemicals, and the growing use of motor vehicles. Ambient air pollution deaths have been increasing worldwide since 1990, and the increases are most substantial in the most rapidly industrializing countries (Landrigan 2017).
• Climate and geographical position: It is well known that climate change might affect the environment, including air pollution levels. Changes in climate variability, extreme weather, and climate events in the twentieth century, especially in its last two to three decades, have been discussed in many recent scientific publications. The results indicate that although the annual means of ozone concentrations are rather insensitive to the predicted climate changes, some related quantities, which might cause different damaging effects, increase considerably (Dimov et al. 2005).
What Is an Air Pollutant?
An air pollutant is a substance in the air that could have adverse or unwanted effects on human beings, fauna, flora, buildings, or other objects. The matter
in question could be solid, liquid, gaseous, or a combination of these.
Air Pollutant Classifications
1. In terms of emitting sources: natural and anthropogenic air pollutants.
2. In terms of formation processes: primary and secondary air pollutants.
3. In terms of physical state: aerosols and gases.
4. In terms of chemical composition: organics and minerals.
Air Pollutant Types
Several substances are, by virtue of their massive rates of emission and harmful effects, considered the most significant pollutants. The main types of air pollutants thus include the following (Cooper and Alley 2011; Kennes and Veiga 2013):
1. Particulate matter: Particulate matter can be defined as a solid or liquid mass in suspension in the atmosphere. It is classified as PM10 for particles with aerodynamic diameters up to 10 μm (micrometers) and PM2.5 for smaller particles with diameters up to 2.5 μm.
2. CO/CO2.
3. SOx: Sulfur oxides (SOx) include both sulfur dioxide (SO2) and sulfur trioxide (SO3).
4. NOx: Among the different oxides of nitrogen (nitric oxide (NO), nitrogen dioxide (NO2), nitrate (NO3), nitrous oxide (N2O), dinitrogen trioxide (N2O3), dinitrogen tetroxide (N2O4), and dinitrogen pentoxide (N2O5)), the symbol NOx refers to the sum of NO and NO2, which are considered the major relevant contaminants of this group in the atmosphere.
5. VOCs: Volatile organic compounds are carbon-containing molecules that also contain other elements, such as H or O.
6. Lead.
7. Dust/soot.
8. Radioactive materials.
9. O3: Ozone formed in the lower atmosphere, called the troposphere (i.e., between ground level and about 10–12 km), is known as a secondary contaminant.
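As a minimal illustration of the PM size classes listed above (the function name and sample values are this sketch's own, not from the cited sources), note that the cut-points nest: any particle counted as PM2.5 is also counted as PM10.

```python
def pm_classes(diameter_um):
    """Return the PM size classes a particle of the given
    aerodynamic diameter (in micrometers) falls into."""
    classes = []
    if diameter_um <= 2.5:
        classes.append("PM2.5")
    if diameter_um <= 10:
        classes.append("PM10")
    return classes

print(pm_classes(1.0))   # ['PM2.5', 'PM10']
print(pm_classes(7.5))   # ['PM10']
```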
Sources of Air Pollution
Alterations in the composition of clean atmospheric air may originate from either natural or anthropogenic sources. Natural sources have been present on earth for ages, while pollution from anthropogenic sources appeared more recently and has increased exponentially with the industrial revolution. Anthropogenic sources are related to human and industrial activities (Kennes and Veiga 2001). They can be classified into:
• Stationary sources, which may be heavy stationary sources (such as a coal-fired power plant or an incinerator) or small stationary sources gathered in an area (e.g., in a residential area).
• Mobile sources, which may be line or distinct-route mobile sources (such as automobiles moving on a highway, a moving train, etc.) or non-distinct-route mobile sources (such as automobiles moving in an urban area, boats moving at sea, etc.).
Stationary sources include domestic sources, mainly heating devices, and industrial sources releasing contaminants largely through combustion processes and in the form of all kinds of waste gases. Waste gases released from wastewater treatment plants, composting, or other waste treatment processes are considered stationary sources as well. Air pollution may also result from remediation technologies transferring volatile or semivolatile contaminants from soil or aquifers into the atmosphere. Mobile sources refer to all types of vehicles, airplanes, boats, and any other kind of transportation-related sources (Kennes and Veiga 2001).
Air Pollution Effects
Air pollution causes harmful effects to humans and other living organisms in various ways. Three problems related to air pollution in which the anthropogenic element is significant are the greenhouse effect and global warming, the destruction of the stratospheric ozone layer, and acid rain (de Nevers 2000). Climate change is one of the grand challenges of our time, with the potential to impact global security. It is codependent with energy,
sustainability, and public health challenges. To date, most research on countering the impacts of climate change has focused on mitigating climate change by reducing greenhouse gas (GHG) emissions or on adapting human and natural systems to make them more resilient to the effects of a changing climate. More recently, a committee was convened by the US National Academy of Sciences (NAS) to consider a third option, climate intervention, also known as geoengineering (National Research Council 2015a, b). The main finding of the report is that climate intervention is not a substitute for mitigation or adaptation. Efforts to address climate change should continue to focus most heavily on mitigating GHG emissions in combination with adapting to the impacts of climate change, because these approaches do not present poorly defined and poorly quantified risks and are at a more advanced state of technological readiness. Climate intervention strategies are at a very early stage of development, and many questions remain to be answered about their effectiveness (McNutt 2016).
Air Pollution and Health
Air pollution kills an estimated seven million people worldwide every year: 4.2 million deaths as a result of exposure to ambient air pollution and 3.8 million as a result of household exposure to smoke from dirty cooking stoves and fuels. WHO data show that nine out of ten people breathe air containing high levels of pollutants: as much as 91% of the world's population lives in places where air pollution exceeds WHO guideline limits. As a result, WHO is working with countries to monitor air pollution and improve air quality and health, and environment ministers have pledged climate actions to reduce the 12.6 million environment-related deaths worldwide (WHO 2017). In the absence of aggressive control, ambient air pollution is projected to cause between six and nine million deaths per year by 2060. Air pollution was responsible in 2015 for 19% of all cardiovascular deaths worldwide, 24% of ischemic heart disease deaths, 21% of stroke deaths, and 23% of lung cancer deaths (Global,
regional, and national life expectancy 2016). Additionally, ambient air pollution appears to be an important, although not yet quantified, risk factor for neurodevelopmental disorders in children (Grandjean and Landrigan 2014) and neurodegenerative diseases in adults (Kioumourtzoglou et al. 2015). Ambient air pollution is responsible for great economic losses. These losses include medical expenditures – an estimated US$21 billion globally in 2015 (OECD 2016) – lost economic productivity resulting from pollution-related disease and premature death, and the cost of environmental degradation. These costs are largely invisible because they are spread across large populations over many years and destroy natural resources that too often are taken for granted. But they are so large that they can distort health spending and sabotage the growth prospects of a country (Landrigan 2017). In particular, exposure to ambient air pollution is extremely harmful to young children, as it can increase their risk of mortality through lower respiratory tract infections (LRIs). According to WHO estimates and the Global Burden of Diseases, Injuries, and Risk Factors Study (GBD), about one million children younger than 5 years died from LRIs in 2015, which can be partly attributed to air pollution exposure in combination with poor nutrition and healthcare. Of the total ambient air-pollution-induced under-5 mortality, 237,000 (96%) deaths were due to LRIs. Similarly, of the total years of life lost (YLLs) due to air pollution in children younger than 5 years, 96% were due to LRIs. Breaking down the results by geographic location, 99% (235,000 deaths) of this under-5 mortality occurred in Africa (128,000 [54%]) and Asia (107,000 [45%]). Of the African portion of this burden, 40% of deaths (n = 51,000) were in Nigeria, whereas in Asia, 48% of deaths (n = 51,000) were in India. These two countries both have low incomes and large populations.
Since air-pollution-related under-5 mortality is dominated by LRIs and occurs almost exclusively in Africa and Asia, a targeted effort to improve nutrition and healthcare for children in low-income countries in Africa and Asia could be a very effective way to reduce under-5 mortality (Lee and Kim 2018). Air pollution may thus harm health in general and inflate health expenditures in particular.
Thus, the health effects of air pollution are diverse and include both direct and indirect ones. Overall, the most sensitive groups include children, older adults, and people with chronic heart or lung disease. Exposure to sulfur oxides, nitrogen oxides, and carbon monoxide can cause reduced work capacity, aggravation of existing cardiovascular diseases, effects on pulmonary function, respiratory illnesses, lung irritation, and alterations in the lung's defense systems (Bernard et al. 2001). An increase in the utilization (demand) of healthcare services due to poor health outcomes is therefore projected, and health management policies should include considerations for the use of cleaner fuels in the OECD (Organization for Economic Cooperation and Development) countries. Overall, it is more crucial than ever to carry out an appropriate policy analysis at the macroeconomic level, which will allow policymakers to better allocate scarce resources. In an environment of financial constraints, every effort is bound to fall short; nonetheless, the right health policies can have positive effects, depending on the fiscal policy frameworks in which they are implemented (Blázquez-Fernández et al. 2017). The health effects of air pollution cause a significant increase in healthcare utilization. For every incremental increase in ozone and particulate matter levels, there was a 0.3–3.7% increase in hospital admissions and outpatient visits due to air-pollution-related illnesses. The effects were more prominent in cases of short-term, high-level exposure to particulate matter (Brunekreef and Holgate 2002; Jaafar et al. 2017). The financial implications of haze on health were measured using "cost of illness" (COI) and "willingness to pay" (WTP) approaches, calculated from either a provider's perspective, a patient's perspective (e.g., as mean lifetime cost for COI), or a combination of both.
The financial implication of haze on health measured using WTP was generally higher because it included the costs of prevention, averting, and mitigating, as well as utility loss due to illness. The monetary burden due to economic loss and increases in healthcare expenditure is certainly very significant. Appropriate resources need to
be allocated to reduce air pollution levels and to meet the healthcare demand associated with it (Jaafar et al. 2017). Road vehicle emissions are one of the most important sources of human exposure to air pollution. Depending on the pollutant, mode of travel, travel distance, etc., exposure while commuting during rush hours along densely trafficked corridors may constitute a substantial fraction of one's total daily exposure. High exposures occur both inside vehicles, due to the proximity of air intakes to exhaust emissions from neighboring vehicles, and while walking or biking alongside the roads. A study of the Swedish capital, Stockholm, shows that there is a very large potential for reducing emissions and exposure if all car drivers living within a distance corresponding to at most a 30-min bicycle ride switched to commuting to work by bicycle. Mean population exposure would be reduced by about 7% for both NOx and black carbon (BC) in the most densely populated area of the inner city. Applying a relative risk for NOx of an 8% decrease in all-cause mortality associated with a 10 μg/m3 decrease in NOx, this corresponds to >449 (95% CI: 340–558) years of life saved annually for Stockholm county, with its 2.1 million inhabitants. This is more than double the effect of the reduced mortality estimated for the introduction of a congestion charge in Stockholm in 2006. Using NO2 or BC as an indicator of health impact, the study shows 395 (95% CI: 172–617) and 185 (95% CI: 158–209) years of life saved for the population, respectively. The calculated exposure to BC and its corresponding impact on mortality are likely underestimated. With this in mind, the estimates using NOx, NO2, and BC show quite similar health effects considering the 95% confidence intervals (Johansson et al. 2017). An unanswered question is whether walking or cycling in polluted cities might negate the health benefits of exercise by increasing exposure to airborne pollutants.
A meticulous report provides a clear answer to this question. This systematic review compares exposure to carbon monoxide, black carbon, nitrogen dioxide, and fine and coarse particles between commuters using active and motorized transport. It also
examines differences in life expectancy. On the basis of 42 studies selected from among 4037 potentially eligible reports, the authors found that car commuters had higher exposure to all pollutants than did active commuters in 30 (71%) of 42 comparisons (median ratio 1.22 [IQR 0.90–1.76]). However, active commuters had higher inhalation doses of pollutants than did commuters using motorized transport because of their increased proximity to traffic, higher air exchange, and longer trip times. Most importantly, commuters using motorized transport were found to lose up to 1 year of life expectancy compared with cyclists. This conclusion provides strong and welcome evidence for the benefits of active transportation: the gains from aerobic exercise outweigh the risks (Cepeda et al. 2016).
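The exposure–response arithmetic behind years-of-life-saved estimates of the kind reported for Stockholm can be sketched as follows. Every numerical input below is an illustrative assumption of this sketch, not data from the studies cited; only the 8% relative-risk reduction per 10 μg/m3 step mirrors the figure quoted in the text.

```python
def mortality_reduction_fraction(delta_c_ug_m3, rr_reduction_per_10=0.08):
    """Fraction of all-cause deaths avoided for a mean concentration
    decrease of delta_c_ug_m3 (in ug/m3), assuming a log-linear
    exposure-response relationship with an rr_reduction_per_10
    decrease in mortality per 10 ug/m3 decrease in exposure."""
    return 1.0 - (1.0 - rr_reduction_per_10) ** (delta_c_ug_m3 / 10.0)

# Hypothetical inputs: a 1.5 ug/m3 population-mean NOx decrease,
# 15,000 baseline deaths per year in the exposed population, and
# 12 life-years lost per premature death.
frac = mortality_reduction_fraction(1.5)
deaths_avoided = 15_000 * frac
years_of_life_saved = deaths_avoided * 12
```

With these assumed inputs the calculation yields on the order of a couple of thousand life-years saved per year; the study's own figures differ because they rest on measured exposure distributions and local life tables.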
Site Security and Control of Air Pollution
In cases where an emergency situation does not pose a threat to the public and off-site emergency response teams are not dispatched to the site, a responsible on-site party must coordinate the appropriate emergency response and communicate with the public as necessary. However, if an emergency arises that presents an immediate threat to the public or otherwise requires additional support, the emergency response system for the site or facility should be activated in the manner prescribed by the off-site emergency response organization. This response should include air monitoring to determine the extent of off-site risk and to establish site zones. Emergency response teams at hazardous waste sites are led by an incident commander, as may be emergency responses at other chemical or radioactive sites. All air-monitoring results should be made available to the incident commander (Boss and Day 2001). One of the main goals of dispersion modelling is to provide a tool for supporting policy- and decision-making. In practical applications, modelling results should be compared with officially established criteria to draw certain
conclusions about the safety of humans and the environment. Short-term criteria are widely used in security applications, especially those dealing with accidents. Dispersion models usually predict mean values (mathematical expectations) of short-term characteristics of the impact of air pollutants, e.g., mean concentrations corresponding to a certain averaging time. Both Russian and US dispersion models can be used to generate global majorant concentration fields that correspond to the worst-case combination of governing meteorological parameters observed during a year. Such characteristics are appropriate if the source operates more or less continuously in the course of the year. In the case of accidental releases, however, which frequently are the most critical for environmental security, the discharge of noxious pollutants occurs at the actual combinations of governing meteorological parameters, which could differ significantly from the worst-case scenario. Thus, the global majorant could noticeably overestimate the actual impact of accidental releases or any other emissions that occur comparatively rarely (Genikhovich 2005).
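As a minimal sketch of the kind of mean-concentration calculation such dispersion models perform, the classic Gaussian plume formula for a continuous point source can be coded as follows. The function and all numerical inputs are illustrative assumptions of this sketch, not taken from the Russian or US regulatory models discussed above.

```python
import math

def plume_concentration(Q, u, y, z, H, sigma_y, sigma_z):
    """Mean concentration (g/m3) from a continuous point source.
    Q: emission rate (g/s); u: wind speed (m/s); y: crosswind
    offset (m); z: receptor height (m); H: effective release
    height (m); sigma_y, sigma_z: dispersion parameters (m)
    evaluated at the downwind distance of interest."""
    lateral = math.exp(-y**2 / (2 * sigma_y**2))
    # The second exponential term reflects the plume off the ground.
    vertical = (math.exp(-(z - H)**2 / (2 * sigma_z**2))
                + math.exp(-(z + H)**2 / (2 * sigma_z**2)))
    return Q / (2 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

# Hypothetical accidental release: 100 g/s, 3 m/s wind, ground-level
# receptor on the plume centreline, 50 m effective stack height, and
# dispersion parameters assumed for roughly 1 km downwind.
c = plume_concentration(Q=100, u=3, y=0, z=0, H=50, sigma_y=80, sigma_z=40)
```

A worst-case ("majorant") field of the kind described above would take, at each receptor, the maximum of such concentrations over the year's observed combinations of wind speed, direction, and stability.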
Global Security and Air Pollution
The sustainable development paradigm demands that humans leave to future generations at least the same or better possibilities for development as exist today. This requires a balanced development of the economy, society, and the environment, and the absence of wars and terrorism. Wars and terror attacks can cause grave damage to development, including loss of human lives, destruction of property, decreases in economic activity, destruction of the natural environment, and pollution. It is therefore very important for sustainability to establish a secure global environment. This requires both protection against violence and the elimination of its root causes, as well as the active work of individuals, nation states, and nongovernmental and international organizations toward peace. Security and its relation to sustainable development and energy in the contemporary world are characterized by the dominance of one superpower and the rise of nonstate
structures of comparable importance: transnational companies, regional and international organizations, international conferences such as the United Nations Symposium on Global Security for the twenty-first century (Helmer 1989), various nongovernmental organizations, and a web of educational and research institutions and organizations (Blinc et al. 2007). The relevance of the latter is manifest especially in terms of agenda-setting. For example, the "International conference on the changing atmosphere: Implications for global security," which opened in Toronto in 1988 featuring over 300 scientists and policy-makers, was Canada's response to a serious interest in the problems of the atmosphere, even before the United Nations Framework Convention on Climate Change (UNFCCC, adopted in 1992). This interest was reflected in the broad theme of the conference, which encompassed not only climate change but also the risk to the ozone layer, long-range transport of atmospheric pollutants, and acid deposition. The conference produced a 39-point document consisting of the opening ministerial statements and the recommendations of 13 working groups. The working groups were convened to address the principal issues, recommend avenues for the research community to explore, and suggest appropriate governmental responses to the problems. One of the more startling recommendations called for reducing CO2 emissions to approximately 20% below 1988 levels by the year 2005, regardless of whether this reduction would be effective or sufficient. In fact, model analyses of atmospheric response to future greenhouse-gas burdens indicate that a 20% reduction in CO2, albeit a step in the right direction, would still be woefully inadequate to stabilize the atmosphere; even more than this would be required.
The choice thus represented a pragmatic estimation of what could be achieved rather than a reliable scientific evaluation of what would be sufficient to prevent unacceptable climate change (Usher 1989); yet, notably, the UNFCCC at first only aimed at a stabilization of emissions at 1990 levels, and it took until the Kyoto Protocol, adopted in 1997, for a considerable number of countries to commit to actual reductions. Currently, fuels are mainly derived from fossil resources such as oil, coal, and natural gas.
Petroleum production will not meet energy demand forever. The global rate of oil production is expected to peak within the next 15 years (Boisen and Lage 2009). Other fuel sources will need to be explored in order to keep up with the demand for energy. Fossil fuel, particularly crude oil, is confined to a few areas of the world, and continuity of supply is governed by political, economic, and ecological factors. Reserves are diminishing, and they will become increasingly expensive. In the past decades, the world has witnessed dramatic fluctuations in oil prices, which peaked in 2008 (IEA 2011b). Despite the decline observed since then, the systematic rise in oil demand by the emerging economies in the Far East, coupled with political instability in oil-rich countries, will likely lead to new increases in oil prices in the future (Ahmad et al. 2011). Together with price fluctuations, the anticipated depletion of oil reserves has added momentum to the quest to develop new power generation technologies. On the other hand, the accumulation of GHGs, mainly CO2, from fossil fuels in the atmosphere is believed to cause a temperature rise on Earth and a subsequent rise in sea level. In order to lessen the effect of global warming, it is necessary to reduce the consumption of fossil fuels and to increase the supply of environmentally friendly energy, such as renewable sources and fuel cells (Arent et al. 2011; Suzuki 1982). The International Energy Agency (IEA) has predicted that the use of renewable energies (e.g., wind, solar, hydro- and geothermal) as well as novel energy feedstocks (e.g., biomass) will triple by 2035 (IEA 2010). The rate of emission of GHGs also needs to be dramatically reduced in order to honor the commitments made under the Copenhagen Accord (Kennes and Veiga 2013). Hydrogen, for example, is an attractive alternative to carbon-based fuels.
Part of its attraction is that it can be produced from diverse resources, both renewable (hydro, wind, solar, biomass, geothermal) and nonrenewable (coal, natural gas, nuclear) (Suzuki 1982). Hydrogen can be utilized in high-efficiency power generation systems, including fuel cells, for both vehicular transportation and distributed electricity
generation (Lee and Hung 2012; Barreto et al. 2003). Hydrogen fuel cells, by enabling the so-called hydrogen economy, hold great promise for addressing, in a unique way, concerns over both security of supply and climate change (Neef 2009). Whereas the nineteenth century was the century of the steam engine and the twentieth century was the century of the internal combustion engine, the twenty-first century may thus well be the century of the fuel cell. Hence, the transition to a hydrogen economy will progress through the development and commercialization of advanced technologies to produce, store, and use hydrogen (Kennes and Veiga 2013). With this in view, many governments have developed policies and roadmaps for the replacement of fossil fuels by renewable fuels, and these policies have been adopted by companies and organizations. Biofuels, fuels derived from biomass, will play an important role in this transition. Currently, bioethanol and biodiesel are the most important transportation biofuels. According to the technical definition, biodiesel is a fuel comprising mono-alkyl esters of long-chain fatty acids derived from vegetable oils or animal fats, designated B100, and meeting the requirements of the ASTM D6751 standard specification (National Biodiesel Board 2012). Biodiesel has many advantages, such as environmental friendliness and better efficiency than fossil fuels (Demirbas 2007). Moreover, the exhaust gas from this type of fuel contains little SOx and only relatively small amounts of CO, unburnt hydrocarbons, and particulate matter, which can make it a "green" fuel substitute (Oner and Altun 2009). The EU has a legal framework concerning transport fuels. The promotion of the use of energy from renewable sources is described in the Renewable Energy Directive RED 2009/28/EC, which sets a goal of a 20% share of renewable energy overall and a 10% share of renewable energy in the transportation sector by 2020 (European Commission 2012).
Furthermore, the Fuel Quality Directive FQD 2009/30/EC demanded a minimum 6% reduction of GHGs per energy unit of transport fuel by 2020. Both directives included sustainability
criteria for biofuels and demanded at least 35% savings in GHG emissions as compared to fossil fuels from 2011 and 2013, respectively. This requirement was increased to at least 50% from 2017 and 60% from 2018 for biofuels produced by new facilities. Such EU directives are binding for all member states and need to be implemented through the introduction of respective national laws (IEA 2011a). The individual member states are free to stimulate the production of biofuels from waste biomass (such as cellulosic ethanol) in contrast to biofuels from food raw materials (Kennes and Veiga 2013). In the USA, a similar policy was defined in the Renewable Fuel Standard (RFS) as part of the 2007 Energy Independence and Security Act. The RFS describes how much corn-based ethanol, cellulosic ethanol, biodiesel, and advanced biofuels should be produced in the USA by 2022; in that year, 60 billion liters of cellulosic ethanol are required. To fulfill these ambitions, almost 400 cellulosic ethanol factories should be operational by then (Dijkgraaf 2012). For the planning of Research and Development (R&D) activities, the time horizon extends even further: the Biomass R&D Technical Advisory Committee, a panel established by the US Congress to guide the future direction of federally funded biomass R&D, envisions a 30% replacement of 2005 US petroleum consumption with biofuels by 2030 (Perlack et al. 2005).
Conclusion
Ambient air pollution can be controlled, and the diseases it causes can be prevented. Ambient air pollution is not an unavoidable consequence of modern economic growth (Arrow et al. 1995). Wise leadership can decouple development from pollution and help emerging economies leapfrog over the disasters of the past. The technical, economic, and political feasibility of pollution control is shown by the successes of countries and cities around the world in curbing ambient air pollution. Proven effective strategies include the establishment and enforcement of air
standards; the reduction of emissions from coal-fired power plants and other stationary sources by requiring a transition to clean fuels and ultimately to renewable energy sources; banning the use of polluting fuels in urban centers; improvement of access to public transportation; mandating fuel efficiency standards for cars, trucks, and buses; and restriction of access to private vehicles. Urban planning initiatives that reduce sprawl and encourage walking and cycling, such as new zoning laws, the construction of bicycle paths, the creation of pedestrian malls, and the institution of bicycle rental programs, represent an additional aesthetically attractive and low-cost strategy for ambient air pollution control (Frumkin et al. 2004). An added benefit of these approaches is that they increase aerobic exercise and thus reduce the risk of obesity, diabetes, and cardiovascular disease. Global elimination of ambient air pollution will require courageous leadership, substantial new resources from the international community, and sweeping societal changes (Whitmee et al. 2015). Cities and countries will need to switch to nonpolluting energy sources, encourage active commuting, enhance their transportation networks, redesign industrial processes to eliminate waste, and move away from the resource-intensive, so-called take-make-use-dispose model of economic growth toward a clean, sustainable, circular economic model. These changes will not be easy. They will need to overcome strong opposition from powerful vested interests. But, fortunately, the technical, institutional, and policy tools needed to control air pollution are already at hand. They have been developed and have proven effective in countries at all levels of income. They are available off the shelf and can be deployed today to gain short-term and long-term victories (Landrigan 2017).
Cross-References
▶ Greenhouse Gas Emissions
▶ National Climate Action Plans (Voluntary Reduction Plans)
References
Ahmad, A. L., Mat-Yasin, N. H., Derek, C. J. C., & Lim, J. K. (2011). Microalgae as a sustainable energy source for biodiesel production: A review. Renewable and Sustainable Energy Reviews, 15, 584–593. Arent, D. J., Wise, A., & Gelman, R. (2011). The status and prospects of renewable energy for combating global warming. Energy Economics, 33, 584–593. Arrow, K., Bolin, B., Costanza, R., Dasgupta, P., Folke, C., Holling, C. S., Jansson, B. O., Levin, S., Mäler, K. G., Perrings, C., & Pimentel, D. (1995). Economic growth, carrying capacity, and the environment. Science, 268, 520–521. Barreto, L., Makihira, A., & Riahi, K. (2003). The hydrogen economy in the 21st century: A sustainable development scenario. International Journal of Hydrogen Energy, 28, 267–284. Bernard, S. M., Samet, J. M., Grambsch, A., Ebi, K. L., & Romieu, I. (2001). The potential impacts of climate variability and change on air pollution-related health effects in the United States. Environmental Health Perspectives, 109(2), 199–209. Blázquez-Fernández, C., Cantarero-Prieto, D., & Pascual-Sáez, M. (2017). On the nexus of air pollution and health expenditures: New empirical evidence. Gaceta Sanitaria. https://doi.org/10.1016/j.gaceta.2018.01.006. Blinc, R., Zidanšek, A., & Šlaus, I. (2007). Sustainable development and global security. Energy, 32(6), 883–889. Boisen, P., & Lage, M. (2009). NG/bio methane used as vehicle fuel. Fact Sheet. NGVA Europe. Boss, M. J., & Day, D. W. (2001). Air sampling and industrial hygiene engineering. Florida, USA: CRC Press LLC. ISBN: 1-56670-417-0. Brunekreef, B., & Holgate, S. T. (2002). Air pollution and health. Lancet, 360(9341), 1233–1242. Cepeda, M., Schoufour, J., Freak-Poli, R., Koolhaas, C. M., Dhana, K., Bramer, W. M., & Franco, O. H. (2016). Levels of ambient air pollution according to mode of transport: A systematic review. The Lancet Public Health. https://doi.org/10.1016/S2468-2667(16)30021-4. Cooper, C. D., & Alley, F. C. (2011).
Air pollution control: A design approach (4th ed.). Illinois, USA: Waveland Press. ISBN: 978-1-57766-678-3. de Nevers, N. (2000). Air pollution control engineering (2nd ed.). Massachusetts, USA: McGraw-Hill. ISBN: 0-07-039367-2. Demirbas, A. (2007). Importance of biodiesel as transportation fuel. Energy Policy, 35, 4661–4670. Dijkgraaf, A. (2012). De hele maisplant de tank in. C2W Life Sciences, 108(2), 21. Dimov, I., Geernaert, G., & Zlatev, Z. (2005). Fighting the great challenges in large-scale environmental modelling. In Proceedings of the NATO advanced research workshop on advances in air pollution modelling for environmental security. Springer in cooperation with
NATO Public Diplomacy Division, May 8–12th, 2004, Borovetz, Bulgaria. European Commission. (2012). Renewable Energy: Targets by 2020. See http://ec.europa.eu/energy/renewables/targets_en.htm. Frumkin, H., Frank, L., & Jackson, R. (2004). Urban sprawl and public health: Designing, planning and building for healthy communities. Washington, DC: Island Press. Genikhovich, E. (2005). Dispersion modelling for environmental security: Principles and their application in the Russian regulatory guideline on accidental releases. In Proceedings of the NATO advanced research workshop on advances in air pollution modelling for environmental security. Springer in cooperation with NATO Public Diplomacy Division, May 8–12th, 2004, Borovetz. Global, regional, and national life expectancy, all-cause mortality, and cause-specific mortality for 249 causes of death, 1980–2015: A systematic analysis for the Global Burden of Disease Study 2015. (2016). Lancet, 388, 1459–1544. Grandjean, P., & Landrigan, P. J. (2014). Neurobehavioural effects of developmental toxicity. Lancet Neurology, 13, 330–338. Helmer, O. (1989). Symposium on global security for the twenty-first century (Technological forecasting and social change) (Vol. 35, No. 1, pp. 93–94). New York: United Nations (1987). IEA. (2010). World energy outlook 2010. Paris: International Energy Agency. IEA. (2011a). Commercializing liquid biofuels from biomass. IEA Bioenergy Task 39: Newsletter Issue 28. IEA. (2011b). Key world energy statistics. Paris: International Energy Agency. Jaafar, H., Azzeri, A., Isahak, M., & Dahlui, M. (2017). Systematic review on economic impact of air pollution on health. Value in Health, 20(9), A642. https://doi.org/10.1016/j.jval.2017.08.1473. Janssen, A. J. H., Van Leerdam, R., Van Den Bosch, P., Van Zessen, E., Van Heeringen, G., & Buisman, C. (2007). Development of a family of large-scale biotechnological processes to desulphurise industrial gases.
In Proceedings of the 2nd international congress on biotechniques for air pollution control, October 3–5th, Spain. Johansson, C., Lövenheim, B., Schantz, P., Wahlgren, L., Almström, P., Markstedt, A., Strömgren, M., Forsberg, B., & Sommar, J. N. (2017). Impacts on air pollution and health by changing commuting from car to bicycle. The Science of the Total Environment, 584–585, 55–63. Kennes, C., & Veiga, M. C. (2001). Bioreactors for waste gas treatment. Dordrecht: Springer Science and Business Media. Originally published by Kluwer Academic Publishers. ISBN: 978-90-481-5772-3. Kennes, C., & Veiga, M. C. (2013). Air pollution prevention and control: Bioreactors and bioenergy (1st ed.). West Sussex, UK: Wiley. ISBN: 9781119943310.
Kharytonov, M., Zberovsky, A., Drizhenko, A., & Babiy, A. (2005). Air pollution assessment inside and around iron ore quarries. In Proceedings of the NATO advanced research workshop on advances in air pollution modelling for environmental security. Springer in cooperation with NATO Public Diplomacy Division, May 8–12th, 2004, Borovetz. Kioumourtzoglou, M. A., Schwartz, J. D., Weisskopf, M. G., Melly, S. J., Wang, Y., Dominici, F., & Zanobetti, A. (2015). Long-term PM2.5 exposure and neurological hospital admissions in the northeastern United States. Environmental Health Perspectives, 124(1), 23–29. Landrigan, P. J. (2017). Air pollution and health. The Lancet Public Health, 2(1), 23–34. Lee, D. H., & Hung, C. P. (2012). Toward a clean energy economy: With discussion on role of hydrogen sectors. International Journal of Hydrogen Energy, 37, 15753–15765. Lee, J. Y., & Kim, H. (2018). Ambient air pollution-induced health risk for children worldwide. Lancet, 2, e292. McNutt, M. (2016). Climate intervention: Possible impacts on global security and resilience. Engineering, 2, 50–51. Mensink, C., Lefebre, F., & De Ridder, K. (2005). Developments and applications in urban air pollution modelling. In Proceedings of the NATO advanced research workshop on advances in air pollution modelling for environmental security. Springer in cooperation with NATO Public Diplomacy Division. May 8–12th, Borovetz. National Biodiesel Board. (2012). Biodiesel basics. Online at http://www.biodiesel.org/what-is-biodiesel/biodiesel-basics. National Research Council. (2015a). Climate intervention: Carbon dioxide removal and reliable sequestration. Washington, DC: The National Academies Press. National Research Council. (2015b). Climate intervention: Reflecting sunlight to cool earth. Washington, DC: The National Academies Press. Neef, H. J. (2009). International overview of hydrogen and fuel cell research. Energy, 34, 327–333. OECD. (2016). The economic consequences of outdoor air pollution.
Paris: Organization for Economic Co-operation and Development Publishing. Oner, C., & Altun, S. (2009). Biodiesel production from inedible animal tallow and an experimental investigation of its use as alternative fuel in a direct injection diesel engine. Applied Energy, 86, 2114–2120. Perlack, R. D., Wright, L. L., Turhollow, A. F., Graham, R. L., Stokes, B. J., & Erbach, D. C. (2005). Biomass as feedstock for a bioenergy and bioproducts industry: The technical feasibility of a billion-ton annual supply. DOE Report DOE/GO-102005-2135. Suzuki, Y. (1982). On hydrogen as fuel gas. International Journal of Hydrogen Energy, 7, 227–230.
Usher, P. (1989). World conference on the changing atmosphere: Implications for global security. Environment: Science and Policy for Sustainable Development, 31(1), 25–27. Whitmee, S., Haines, A., Beyrer, C., Boltz, F., Capon, A. G., de Souza Dias, B. F., Ezeh, A., Frumkin, H., Gong, P., Head, P., Horton, R., Mace, G. M., Marten, R., Myers, S. S., Nishtar, S., Osofsky, S. A., Pattanayak, S. K., Pongsiri, M. J., Romanelli, C., Soucat, A., Vega, J., & Yach, D. (2015). Safeguarding human health in the Anthropocene epoch: Report of the Rockefeller Foundation–Lancet Commission on planetary health. Lancet, 386, 1973–2028. WHO. (2014). What is air pollution? Regional Office for South-East Asia. 25 March 2014. WHO. (2017). Air pollution – the silent killer. World Health Organization. 6 March 2017.
Alcohol Abuse and Addiction
David Andrew Omona
Uganda Christian University, Mukono, Uganda
Keywords
Alcohol · Alcohol abuse · Addiction · Alcoholism · Drunkenness
Introduction
Alcohol is a substance that people have consumed from time immemorial. Numerous examples from ancient literature and myth allude to alcohol consumption as a part of cultural celebrations. In some societies, rituals and ceremonies were not complete without alcohol. However, "enduring alcohol consumption and the passing down of this habit through generations does not adequately explain why alcohol is consumed" (Freeman and Parry 2006). What certainly has changed over the years are the patterns of alcohol use. Available evidence suggests that the quantity of alcohol consumed is far greater today than in earlier times (Freeman and Parry 2006). According to a 2004 World Health Organization (WHO) estimate, two billion people around the world consume alcohol (World Health Organization 2014).
People consume alcohol in three main kinds of beverages. The first category is beers, made from grains through fermentation and brewing; beers normally have between 3% and 8% alcohol content. The second category is wines, made through the fermentation of fruits such as grapes; wines contain between 8% and 12% alcohol naturally and up to 21% when fortified by adding alcohol. The third category is that of distilled beverages (spirits), such as whiskey, gin, and vodka, which on average contain between 40% and 50% alcohol. Those who drink may become abusers of, or addicted to, any of the aforementioned beverages (Nordegren 2002, p. 31). Although some people use the terms "alcohol abuse" and "addiction" interchangeably, they are not the same. Alcohol abuse is the habitual excessive and destructive pattern of alcohol use, leading to significant social, occupational, or medical impairment (Nordegren 2002, p. 35). Signs of alcohol abuse may include (inter alia):
• Excessive drinking, despite resulting social, legal, or interpersonal problems
• Harmful use of alcohol that results in mental or physical damage
• Alcohol consumption to cope with psychological or interpersonal problems
• Choosing to continue drinking despite alcohol-related illnesses or other physical problems
• Anger when confronted about alcohol use
• Feelings of guilt about alcohol use
• Drinking in the morning to treat hangovers
• Withdrawal symptoms when alcohol consumption ceases (The Recovery Village 2019)
On the other hand, alcohol addiction, or alcoholism, is a disease that results from dependency on alcohol. It is an extreme form of alcohol use, associated with compulsive or uncontrolled use of alcohol. Being an alcohol addict therefore means that one has a chronic brain disease that has the potential for both recurrence (relapse) and recovery.
In the United States and many other countries, it is very easy to become an alcohol addict, because, unlike cocaine and heroin, which are restricted substances, alcohol is widely available
and accepted in many families. Alcohol use may be at the center of social events and closely linked to celebrations and enjoyment (Gateway Foundation 2019). Some of the key symptoms of alcohol addiction, according to the Gateway Foundation (2019), include:
• Increased quantity or frequency of alcohol use
• High tolerance for alcohol or lack of "hangover" symptoms
• Drinking at inappropriate times, such as first thing in the morning or at church or in the workplace
• Wanting to be where alcohol is present and avoiding situations where there is none
• Changes in friendships, especially choosing friends who also drink heavily and avoiding those who do not
• Avoiding contact with loved ones
• Hiding alcohol or hiding while drinking
• Dependence on alcohol to function in everyday life
• Increased fatigue, depression, or other emotional issues
• Legal or professional problems such as an arrest or the loss of a job
The Extent of the Problem Through the Example of Alcohol Abuse and Addiction in the United States
Stacy Mosel (2019), quoting from the 2017 United States National Survey on Drug Use and Health (NSDUH), notes that 51% of the population aged 12 and older reported binge drinking in the past month. Binge drinking here meant consuming five or more drinks for male participants, or four or more drinks for female participants, on at least one day in the past month (Mosel 2019). In 2015 alone, as many as 66.7 million people in the United States had participated in binge drinking. Beyond this general picture, a 2017 Recovery Brands survey of Americans also revealed alcohol to be the most abused drug among people in recovery (Brande n.d.). Accordingly, nearly 70% of the people in recovery who sought help did so because of
a drinking problem. Whereas 71% of Americans are reported to consume alcohol (NIAAA 2016), more than half of the alcohol in any given year is consumed by the top 10% of drinkers (Brande n.d.). A recent report from The Recovery Village (2019) corroborates the above statistics. Quoting the National Institute on Alcohol Abuse and Alcoholism's (NIAAA) latest statistics on alcohol addiction in America, The Recovery Village's report shows that:
• 86.4% of adults aged 18 and over report that they drank alcohol at some point in their lifetime.
• 70.1% of adults report drinking within the last year, and 56.0% report drinking within the last month.
• 26.9% of adults report binge drinking within the last month.
• 15.1 million adults aged 18 or older have an alcohol use disorder.
• Only 6.7% of people with an alcohol use disorder received treatment.
• More than 10% of American children live in a household where at least one parent has a drinking problem.
• Alcohol abuse is a leading risk factor for cancers of the mouth, esophagus, pharynx, larynx, liver, and breast.
• It is estimated that every 5 hours a college student dies from alcohol-related unintentional injuries (The Recovery Village 2019).
Indeed, the above statistics are indicative of the growing challenge of alcohol abuse and addiction in the United States. The World Health Organization (2014, pp. 7–11) attributes drinking patterns in the United States to factors such as age, gender, familial risk factors, socioeconomic status, economic development, culture and context, and alcohol control and regulation. Looking at age, in 2014, for example, more than 16 million adults, equivalent to 7% of the American adult population, had an alcohol use disorder. In addition, over five million were engaged in risky drinking such as binge drinking, a precursor to alcohol abuse. Like adults, children between the ages of 12 and
20 years are also victims of alcohol abuse in the United States. According to NIAAA (2006), children in that age group have reported drinking at least a few sips of alcohol. The U.S. Department of Health and Human Services (HHS) (2017, p. 2) reports that about 10% of 12-year-olds say they have used alcohol at least once; by age 13, the rate of alcohol use doubles. In fact, children who start drinking before age 21 typically do so when they are about 13–14 years old. Such children start to drink with the help of adults in one form or another, given that they cannot legally buy alcohol on their own. The worry this brings is that, compared to those who start to drink at around 21 years of age, children who begin drinking before the age of 15 are four times more likely to develop dependence on alcohol and other drugs (NIAAA 2006). As such, Brande (n.d.) opines, "the younger a person begins to drink, the more likely they will . . . engage in harmful behavior." Although the rates of binge and heavy drinking among the underage declined drastically between 2002 and 2014, more than five million youths report binge drinking, and 1.3 million report heavy drinking (Brande n.d.). In regard to gender, studies in America show that more men (10.6 million) than women (5.7 million) suffer from alcohol abuse and addiction (Brande n.d.). In spite of this, women bear the brunt of alcohol-related challenges, such as abusive relationships, indecent sexual advances, and depression, compared to men (NIAAA 2015b). In relation to culture and context, high-risk drinking rates are reportedly higher among ethnic minorities, particularly Native Americans and Hispanics. Chartier and Caetano (2010) estimated that 27% of Native American women and almost 20% of Black women were daily heavy drinkers.
Whether the socioeconomic status of those concerned, or the level of economic development in their communities, helps to explain this is a subject of debate as well as of further research. Looking at alcohol abuse and addiction from a family dynamics perspective, in 2017, an estimated 76 million children of alcoholics lived in the United States. In 2012, more than 10% of
children lived with a parent with an alcohol problem (Brande n.d.). This implies that such children are predisposed to becoming alcohol abusers and addicts, given their proximity to alcoholic beverages. Regarding the level of economic development, socioeconomic status, and alcohol control policies, working adults, for example, are found to be more predisposed to alcohol abuse and addiction than those who are not working. Indeed, some 88,000 working adults succumb to alcohol-related death each year, accounting for 1 in 10 deaths among working adults (The Surgeon General's Report 2016, p. 1). Rising personal income, coupled with the non-stringent application of alcohol control policies to adults, contributes to this outcome.
Stages of Alcohol Abuse and Addiction
Milhorn (1994) and Kristeen Cherney (2016) list six and five stages of alcohol abuse and addiction, respectively. Milhorn (1994) refers to these as the pre-alcoholism stage (binge drinking), the early alcoholism stage, the acute stage, the early chronic stage, the late chronic stage, and the death stage; Cherney (2016) refers to them as occasional abuse and binge drinking, early abuse, problem drinking, alcohol dependence, and addiction and alcoholism. The pre-abuse and addiction stage (Milhorn 1994), or occasional abuse and binge drinking (Cherney 2016), covers a person's drinking history from the time of first drinking until serious drinking starts. Throughout this stage, a gradual increase in alcohol intake, in both frequency and quantity, becomes manifest (Milhorn 1994, p. 33). In most cases, this type of drinking occurs among people aged between 18 and 34, a phenomenon that is twice as common among men as among women in the United States. Among 12- to 17-year-olds, 5.3% reported binge drinking in the past month, with 0.7% reporting heavy alcohol use in the past month. While not everyone who binge drinks has an alcohol use disorder (AUD), binge drinking can be a very significant risk factor for the development of an AUD. Furthermore, the US NSDUH
Alcohol Abuse and Addiction
reports that more than 14 million people aged 12 and older had an AUD in 2017, with AUD occurring in 7% of males and 3.8% of females aged 12 and older (Mosel 2019). The early abuse and addiction stage (Milhorn 1994), also called the early alcoholic or increased drinking stage (Cherney 2016), is the stage where signs and symptoms of addiction, such as frequent blackouts, sneaking drinks, drinking before a social gathering, gulping drinks, avoiding talk about drinking, and getting drunk before the evening, become manifest (Milhorn 1994, pp. 34–35). The acute stage (Milhorn 1994), or problem drinking (Cherney 2016), is the stage of the worsening of the previous symptoms, leading to the loss of control over drinking. At this stage, the abuser is on a slippery slope to addiction: he/she eventually takes to solitary drinking, fantasizes about drinking, loses self-esteem, engages in extravagant behavior, becomes aggressive, experiences surges of remorse, goes through periods of salience, and walks out on friends and employers (Milhorn 1994, pp. 35–38). The early chronic stage (Milhorn 1994), or alcohol dependence (Cherney 2016), is a stage where "physical and mental distortion arising out of the long abuse of body and mind" takes place (Milhorn 1994, p. 38). At this stage, the addict rejects family, experiences great self-pity, and attempts to blame the community for his/her state. The addict's poor health also becomes evident and, if married, a decrease in sexual energy sets in. The addict starts to drink in the early morning because of fear, frustration, and a sense of remorse (Milhorn 1994, pp. 38–40). The late chronic stage (Milhorn 1994), or addiction and alcoholism (Cherney 2016), is the stage where the addict "experiences total social isolation from . . . people . . . as well as gross physical deterioration with marked susceptibility to disease and ever increasing mental confusion" (Milhorn 1994, p. 40).
The addict’s ability to hold a job is impaired, and he/she is no longer able to hide drinking habits. The addict drinks more and more alcohol to reach the required state of intoxication and develops irrational fears. As the stage
progresses, the addict's body starts to shake; he/she drinks in order to relieve the symptoms of drinking, may turn to religion out of desperation, and eventually develops an attitude of indifferent resignation, becoming bitter and resistant to any effort to change his/her way of life (Milhorn 1994, pp. 40–42). The death stage comes as a result of constant drunkenness: death results from suicide, accident, or alcohol-related health problems (Milhorn 1994, p. 42). Whereas an abuser who stops alcohol abuse might not move through all of the above stages (up to the late chronic stage), an addict moves through all of them. An abuser of alcohol may, nonetheless, also die as a result of alcohol abuse, especially in road accidents or by suicide.
Causes of Alcohol Abuse and Addiction
There are many causes of alcohol abuse and addiction. Josh McDowell and Bob Hostetler (1996) attribute alcohol abuse and addiction to physiological factors, family and cultural background, and outside influence. They argue that numerous studies support the view that alcohol abuse and addiction stem from physiological sources. Accordingly, they assert that some people may possess an inborn predisposition toward alcohol use. Whereas such a predisposition is not easily seen in people who have never experimented with alcohol, those who do experiment will experience a different reaction to alcohol than many of their friends (McDowell and Hostetler 1996, p. 391). Quoting psychologist Gary Collins, McDowell and Hostetler (1996) present three models in their attempt to explain alcohol abuse and addiction from the vantage point of background: the parental model, and models focusing on the role of parental attitude and of cultural expectations, respectively.
(a) The parental model focuses on the impact that the parents' behavior has on children. For example, if children grow up seeing their parents drinking, they will either copy this or
avoid doing so, given the experience they have had with parents who abused or were addicted to alcohol.
(b) Parental attitude: parental permissiveness and parental rejection can both stimulate alcohol use and abuse. For instance, when parents do not care whether or not their children drink, and express no concern about the dangers of alcohol, misuse of alcohol by children often follows.
(c) Cultural expectations: if a culture is tolerant of drinking, then many alcohol abusers and addicts are likely to emerge in that cultural group. On the other hand, a culture that restricts alcohol use is likely to have fewer people abusing and becoming addicted to alcohol (McDowell and Hostetler 1996, pp. 391–392).
Regarding outside influence, peer pressure (McDowell and Hostetler 1996) and pressure from advertising (Snyder 2006) are the major sources. In McDowell and Hostetler's opinion, many people, especially youth, may be, or feel, pressured to drink alcohol because they see it as the social norm, or the norm of a particular age or social/cultural grouping. The pressure to conform, especially amongst youth, is a well-documented psychological phenomenon. Given that some people may feel (or fear that they may be) excluded from or ostracized by the group if they do not partake in alcohol consumption, they tend to join those who abuse alcohol so they can fit into the group. In a like manner, some people resort to alcohol abuse, and thereby become addicts, due to the pressure of advertising. Whereas the alcohol industry claims that alcohol advertising is meant to raise brand awareness and is not aimed at promoting additional consumption (especially drinking amongst youth), there is clear evidence that advertising does increase alcohol consumption (Snyder 2006): in an attempt to imitate those shown in advertisements, viewers themselves engage in alcohol consumption.
The fact that some people regard alcohol as a social lubricant is also significant. They may resort to drinking alcohol because they think alcohol drinking disinhibits defenses and facilitates “good
company.” According to them, through the process of drinking alcohol, social sharing, bonding with other people, and a connectedness among consumers are realized, an experience that drinkers of nonalcoholic beverages cannot gain. Unfortunately, as they take alcohol to relax, converse more easily, and mix socially with others, they end up becoming frequent abusers and subsequently, possibly, addicts.

The use of alcohol during ritual performance by Native Americans also contributes to alcohol abuse and addiction. Since alcohol has a “mystique” not shared by nonalcoholic beverages, its use in traditional rituals appears to add to the aura of special occasions. To that extent, some rituals are not considered complete without sharing in alcoholic beverages. Unfortunately, frequent participation in such rituals and sharing in alcoholic beverages increases people’s propensity to abuse alcohol and eventually become addicted to it.

Some people in the United States abuse alcohol and thereby become addicts because they think drinking alcohol is a part of life and even an expected behavior. To the young, it marks a transitional stage in life, the stage of coming of age and identifying with the “status quo.” Seeing alcohol as a reducer of stress, dulling the pain of poverty or hardship in life, can also lead to alcohol abuse and addiction. Whereas research suggests that drinking alcohol can reduce stress in certain people and under certain circumstances, it may not be true that alcohol can dull the pain of poverty or hardship (Sayette 1999). Yet some people attribute their drinking habits to this. As time goes by, such people often end up becoming alcohol abusers, if not addicts.

Consumption of alcohol to exude “macho” behavior among men can also lead to alcohol abuse and addiction. For whatever reason, there are men who try to consume large amounts of alcohol as a sign of their strength and manliness.
Whereas the display of behaviors such as drinking more than, or more quickly than, anyone else is often regarded as an admirable masculine quality, it can end up being a path to abusing or getting addicted to alcohol. In some situations, the same can also hold in the case of women, given changing gender roles and some
women seeking similarly to “prove” themselves by developing binge-drinking patterns. Alcohol abuse and addiction may also result from drinking alcohol to enjoy and maintain a state of intoxication. Many people in the United States drink simply to enjoy the feeling of intoxication and inebriation, leading to states of drunkenness not necessarily intended when they started to drink. Since some people lack information about the impact and effects of alcohol, and drink without knowing the dangers, they may slip into becoming frequent abusers and eventually addicts.
Effects of Alcohol Abuse and Addiction

Alcohol abuse and addiction constitute a physical, psychological, socioeconomic, and health threat. In the Handbook on Counseling Youth: A Comprehensive Guide for Equipping Youth Workers, Pastors, Teachers, Parents, McDowell and Hostetler (1996, pp. 392–394) enumerate several effects of alcohol abuse and addiction. Among other pertinent points, they raise the issues of anguish, confusion and disorientation, low self-esteem, personal distortion, loss of control, arrested maturity, guilt, alienation, remorse, and despair as some of the effects of alcohol abuse and addiction on the individuals directly concerned. These they explain as follows:

• Anguish: abusers and addicts of alcohol frequently experience a combination of physical and mental pain. They wonder if they are going crazy, fearing that they have lost control or will lose control soon. They become frustrated about their life.
• Confusion and disorientation: there is a high level of confusion in a person who abuses or is addicted to alcohol. It becomes difficult for such persons to focus their minds; thus they routinely forget the names of people, places, details, and appointments. In some instances, such people experience blackouts, which is an indication that their condition is getting worse.
• Low self-esteem: alcohol abusers and addicts usually experience a fatal blow to their self-esteem because of the mess their lives are in. They feel worthless when they compare themselves with their peers, especially those who are in a better state. To attract recognition, they do odd things to draw people’s attention to their existence.
• Personal distortion: abusers and addicts become careless to the extent that even people who knew them fail to recognize their past selves in them, as they no longer hold their former values and interests. They do not care about the way they look, how they dress, and how they conduct themselves.
• Loss of control: alcohol abusers and addicts tend to lose control of their drinking habits. Rather than them controlling the level of their drinking, drinking controls them. They fail to control their emotions, at times crying for nothing or laughing even when there is nothing to laugh at.
• Depression: alcohol abusers and addicts sooner or later get depressed, having developed a sense of worthlessness. In their depression, they can easily do harm to themselves, their families, or other persons around them. In an attempt to regain a grip on life, to get out of powerlessness, self-pity, and paralysis of the mind, they may take to more drinking, in a vicious circle, feeding back into their depression.
• Arrested maturity: the emotional and, in some respects, even the physical development of those who start abusing alcohol early or become addicts while young might be stunted. They may not develop the judgment and coping skills required to reach, and succeed in, the adult stage, and this can easily make them distressed, angered, moody, and easily offended, and they may seek attention similarly to children in the process of growing up.
• Guilt and shame: most alcohol abusers and addicts have a sense of guilt that lingers in their minds in a persistent manner. Such guilt comes about because they know that in their drunken state they at times do things that they would not want to identify with when they are sober.
Such guilt and shame make them feel unloved.
• Remorse: some alcohol abusers and addicts develop a sense of regret for the lies, insults, and fights they got involved in while drunk, and for how these may have hurt their mates and caused embarrassment to the people around them.
• Alienation and isolation: this becomes complete when the abuser or addict seeks to be alone, in their own world. By this stage, those concerned are usually dangerous to themselves and the community, because no one understands what goes on in their minds.
• Despair: worries usually take a toll on alcohol abusers and addicts, given the imagined hopelessness in which they see themselves. They tend to look at life as though it has no meaning and feel that they have come to the end of the road. Committing suicide is a possible outcome at this stage.

Alcohol abuse and addiction can result in health complications. The most common health complications resulting from alcohol abuse and addiction are:

• Heart disease, high blood pressure, irregular heartbeat, and stroke
• Liver disease: liver inflammation, including alcoholic hepatitis, fibrosis, and cirrhosis
• Acute kidney failure and chronic kidney disease
• Pancreas inflammation and the swelling of blood vessels that prevents proper digestion
• A suppressed or reduced immune system, leading to increased susceptibility to infection, including diseases such as tuberculosis and pneumonia
• Ulcers
• Diabetes complications
• Sexual dysfunction
• Bone loss
• Vision impairment
• Increased risk of cancer of the breast, mouth, esophagus, throat, larynx, stomach, pancreas, colon, and rectum
• Short- and long-term effects on the brain, disrupting the brain’s communication pathways that influence mood, behavior, and other cognitive functions
• Birth defects in the children of abusers and addicts (Mosel 2019)

Alcohol abuse and addiction lead to death. In the assessment of the Centers for Disease Control and Prevention (CDC), drunk driving takes 28 lives every day in the United States alone. Drinking is also associated with an increased incidence of suicide and homicide. About 1,825 college students aged 18–24 die from unintentional injuries related to alcohol per annum (NIAAA 2015a). The statistics given by Brande (n.d.) on deaths resulting from alcohol abuse and addiction are similarly alarming. According to her:

• About one-third of deaths resulting from alcohol problems take the form of suicides and such accidents as head injuries, drowning incidents, and motor vehicle crashes.
• About 20% of suicide cases in the United States involve people with alcohol problems.
• In 2014, as many as 30% of the country’s fatal traffic incidents were related to alcohol-impaired driving.
• Among youth, underage drinking is responsible for more than 4,300 deaths each year and 189,000 emergency room visits for alcohol-related injuries and other conditions.
• Excessive drinking was responsible for 1 in 10 deaths among adults between 20 and 64 years of age (Brande n.d.).

Alcohol abuse and addiction also affect the general behavior of abusers and addicts: slurred speech, motor impairment, confusion, and memory problems are just a few of the common short-term consequences of alcohol consumption. These can make drinkers more prone to accidents, injuries, and violent behavior. Alcohol is a factor in more than half of fatal burn injuries, drownings, and homicides. It is also a significant factor in moderate to severe injuries, suicides, and sexual assaults. Heavy drinking may also result in risky sexual behaviors such as unprotected sex, which can lead to unintended pregnancy and infection with sexually transmitted diseases. These effects of alcohol addiction can have lifelong consequences.
Alcohol abuse and addiction are also a significant contributing factor to high levels of violence in general and domestic violence in particular (Wood 2000, p. 28). As alluded to already in the case of birth defects in the children of abusers and addicts, alcohol consumption thus has considerable effects not just on the consumers themselves. In this sense, “second-hand drinking,” or the second-hand implications of drinking, is further cause for concern, contributing to the multifaceted threat of the alcohol epidemic (Cassella 2019).
How Alcohol Abuse and Addiction Is Being Addressed: The Example of the United States

The US government, private health associations, and individuals have several measures in place to address the challenges of alcohol abuse and addiction. When alcohol abuse and addiction manifest, a public health systems approach (see the entry on ▶ “Health System”) to handling substance misuse and its consequences aims to:

• Define the problem through the systematic collection of data on the scope, characteristics, and consequences of substance misuse.
• Identify the risk and protective factors that increase or decrease the risk for substance misuse and its consequences and the factors that could be modified through interventions.
• Work across the public and private sector to develop and test interventions that address social, environmental, or economic determinants of substance misuse and related health consequences.
• Support broad implementation of effective prevention and treatment interventions and recovery supports in a wide range of settings.
• Monitor the impact of these interventions on substance misuse and related problems as well as on risk and protective factors (HHS & Office of the Surgeon General 2016, p. 4).
The above is but a procedural description of how the challenge is handled, from identification to intervention and monitoring in a holistic sense, including with a view to prevention. In practice, it is often the people concerned who seek help from treatment centers. The National Survey on Drug Use and Health (NSDUH) notes that more than 2.4 million people aged 12 or older received substance use treatment for alcohol use in 2017 (American Addiction Centers Editorial Staff 2019). Alcohol treatment centers are set up to help individuals who are addicted to or abuse alcohol in a number of ways. According to the National Institute on Drug Abuse (2019), while some treatment centers require an individual to stay at the center for a specific amount of time, others offer outpatient treatment. In addition, there are centers that offer both long- and short-term treatment options. Darla Burke (2017) identifies several medications that may help with alcohol abuse and, by extension, addiction:

• Naltrexone (ReVia) is used only after someone has detoxed from alcohol. This type of drug works by blocking certain receptors in the brain that are associated with the alcoholic “high.” In combination with counseling, it may help decrease a person’s craving for alcohol.
• Acamprosate is a medication that can help re-establish the brain’s chemical state as it was before alcohol dependence. This drug should also be combined with therapy.
• Disulfiram (Antabuse) is a drug that causes physical discomfort (such as nausea, vomiting, and headaches) any time the person consumes alcohol (Burke 2017).

Although people react to treatment differently, by and large, administering such medication has assisted people in addressing alcohol abuse and addiction.
Detoxification, referred to above, is a set of interventions used to keep a person safe as they readjust to a lack of alcohol in the body (Substance Abuse and Mental Health Services Administration 2015b). Medical detoxification is
essential to treat someone who has been dependent on alcohol and is now trying to overcome that dependence. It can help address the delirium that often results from withdrawal. Thereafter, therapy can follow. The National Institute on Drug Abuse (2019) asserts that during therapy sessions patients can explore the reasons behind their excessive alcohol consumption as well as what they can do to overcome their abusive behavior. Some people seek counseling and rehabilitation through Alcoholics Anonymous (AA). This, when blended with aftercare, will lead alcohol abusers and addicts to recovery with time.
Conclusion

Given the first-hand as well as the second-hand implications of alcohol abuse and alcohol addiction, there is a need for a concerted effort from all stakeholders to respond to the dangers involved and to rescue those concerned from a future of drastic consequences. As Dennis Thombs (2006, p. 7) argues, because alcohol abusers and addicts “are seen as suffering from an illness, the logical conclusion is that they deserve compassionate care, help, and treatment.” Importantly, this approach is also vital from the perspective of those suffering from the implications of excessive alcohol consumption by others.
Cross-References

▶ Health System
References

American Addiction Centers Editorial Staff. (2019). Alcohol abuse, K. Sclar (Editor) and R. Kelley (Reviewer). Available from https://drugabuse.com/alcohol/. Accessed 26 Nov 2019.
Brande, L. (n.d.). Alcoholism & drug addiction stats in the United States, M. Watkins (Editor). Available from https://www.projectknow.com/drug-addiction/alcohol-drugs-stats/#statistics-on-alcoholism-in-the-u-s-. Accessed 26 Nov 2019.
Burke, D. (2017). Healthline, T. J. Legg (Medical reviewer). Available from https://www.healthline.com/health/alcoholism/basics. Accessed 10 Oct 2019.
Cassella, C. (2019). ‘Second-hand drinking’ is the public health problem you’ve probably never heard of. Science Alert, 28 July 2019. https://www.sciencealert.com/second-hand-drinking-is-a-massive-problem-that-impacts-millions-of-americans. Accessed 1 Jan 2019.
Chartier, K., & Caetano, R. (2010). Ethnicity and health disparities in alcohol research. Available from https://www.ncbi.nlm.nih.gov/pubmed/21209793. Accessed 1 Dec 2019.
Cherney, K. (2016). Stages of alcoholism, T. J. Legg (Medical reviewer). Available from https://www.healthline.com/health/stages-alcoholism#outlook. Accessed 10 Oct 2019.
Freeman, M., & Perry, C. (2006). Alcohol use literature review. Available from http://saapa.net/research-and-resources/research/alcohol-use-literature-review.pdf. Accessed 26 Nov 2019.
Gateway Foundation. (2019). Effects of alcohol addiction and abuse. Available from https://www.gatewayfoundation.org/substance-abuse-treatment-programs/effects-of-alcohol-addiction/amp/. Accessed 2 Oct 2019.
McDowell, J., & Hostetler, B. (1996). The handbook on counseling youth: A comprehensive guide for equipping youth workers, pastors, teachers, parents. Nashville/Dallas/Mexico City: Thomas Nelson INP Publishers.
Milhorn, T. H. (1994). Drug and alcohol abuse: Authoritative guide for parents, teachers, and counselors. New York: Plenum Press.
Mosel, S. (2019). Alcoholism & alcoholics, N. Monico (Editor) and S. Thomas, M.D. (Medical reviewer). Available from https://www.alcohol.org/alcoholism/. Accessed 10 Oct 2019.
National Institute on Alcohol Abuse and Alcoholism. (2006). Underage drinking: Why do adolescents drink, what are the risks, and how can underage drinking be prevented? Available from https://pubs.niaaa.nih.gov/publications/AA67/AA67.htm. Accessed 1 Dec 2019.
National Institute on Alcohol Abuse and Alcoholism. (2015a). College drinking. Available from https://www.una.edu/manesafety/Alcohol%20Brochures/Collegefactsheet.pdf. Accessed 26 Nov 2019.
National Institute on Alcohol Abuse and Alcoholism. (2015b). Alcohol: A women’s health issue. Rockville, MD: National Institute on Alcohol Abuse and Alcoholism.
National Institute on Alcohol Abuse and Alcoholism. (2016). Alcohol facts and statistics.
National Institute on Drug Abuse. (2019). Drug facts: Treatment approaches for drug addiction. Available from https://d14rmgtrwzf5a.cloudfront.net/sites/default/files/drugfacts-treatmentapproaches.pdf. Accessed 26 Nov 2019.
Nordegren, T. (2002). The A-Z encyclopedia of alcohol and drug abuse. Parkland: Brown Walker Press.
Sayette, M. (1999). Does drinking reduce stress? Alcohol Research and Health, 23(4), 250–255.
Snyder, L. (2006). Effects of alcohol advertising exposure on drinking among youth. Archives of Pediatrics & Adolescent Medicine, 160(1), 18–24.
Substance Abuse and Mental Health Services Administration. (2015a). TIP 45: Detoxification and substance abuse treatment.
Substance Abuse and Mental Health Services Administration. (2015b). TIP 45: Detoxification and substance abuse treatment. Available from https://store.samhsa.gov/system/files/sma15-4131.pdf. Accessed 26 Nov 2019.
The Recovery Village. (2019). Alcohol abuse, C. Renzoni (Editor) & B. C. Williams (Reviewer). Available from https://www.therecoveryvillage.com/alcohol-abuse/#gref. Accessed 28 Nov 2019.
Thombs, D. L. (2006). Introduction to addictive behaviors (3rd ed.). New York: The Guilford Press.
U.S. Department of Health and Human Services (HHS), Office of the Surgeon General. (2016). Facing addiction in America: The Surgeon General’s report on alcohol, drugs, and health. Washington, DC: HHS.
U.S. Department of Health & Human Services. (2017). Facing addiction in America: The Surgeon General’s report on alcohol, drugs, and health. Available from https://addiction.surgeongeneral.gov/surgeon-generals-report.pdf. Accessed 27 Nov 2019.
Wood, D. (2000). A review of research on alcohol and drug use, criminal behavior, and the criminal justice system response in American Indian and Alaska Native communities. Vancouver: Washington State University.
World Health Organization. (2014). Global status report on alcohol and health 2014. Luxembourg: WHO.
Further Reading

McDowell, J., & Hostetler, B. (1996). The handbook on counseling youth: A comprehensive guide for equipping youth workers, pastors, teachers, parents. Nashville/Dallas/Mexico City: Thomas Nelson INP Publishers.
Milhorn, T. H. (1994). Drug and alcohol abuse: Authoritative guide for parents, teachers, and counselors. New York: Plenum Press.
Nordegren, T. (2002). The A-Z encyclopedia of alcohol and drug abuse. Parkland: Brown Walker Press.
Thombs, D. L. (2006). Introduction to addictive behaviors (3rd ed.). New York: The Guilford Press.
U.S. Department of Health and Human Services (HHS), Office of the Surgeon General. (2016). Facing addiction in America: The Surgeon General’s report on alcohol, drugs, and health. Washington, DC: HHS.
Wood, D. (2000). A review of research on alcohol and drug use, criminal behavior, and the criminal justice system response in American Indian and Alaska Native communities. Vancouver: Washington State University.
World Health Organization. (2014). Global status report on alcohol and health 2014. Luxembourg: WHO.
Anthropocene

Róbert Balogh
Institute for Central European Studies, National University of Public Service, Budapest, Hungary
Keywords
Scales · Materials · Chronology · Great Acceleration · Afforestation
Introduction

Literally, Anthropocene may be translated as “the age of humankind.” The notion first appeared among scientists in the Soviet Union; the term is thus a case of entangled history, one that had not yet been thoroughly explored at the time of writing (Brookes-Fratto 2020). The still-ongoing academic debate about the existence, content, and consequences of the Anthropocene began with Paul Crutzen’s brief article published in Nature in 2002 (Crutzen 2002). The word reflects the realization that human activity has irreversibly altered the way the Earth functions as a biophysical system and that this change has already left stratigraphically meaningful, thus evolutionary and chemical, traces. The Anthropocene is thus the new geological epoch in which humankind as a species currently lives, and it is the outcome of human activities. As a stratigraphic layer, the Anthropocene contains traces showing that the level of carbon dioxide in the atmosphere has nearly doubled, that there is increasing biological homogeneity across the continents, that there is a wave of extinction of species, that domestic animals dominate among terrestrial vertebrate species, that there is an abnormally high presence of radioactive isotopes, including the radioactive isotope of carbon, that the amount of nitrogen is rapidly increasing (as a result of agricultural technologies), and that various kinds of plastic accumulate beyond measure. The various fields of science have illuminated a number of interactions among these factors. However, the ways in which these interact are
probably even more complex than one can currently describe. Based on existing models of these interactions, the biophysical changes are so fundamental, and the damage done to the ecological systems on Earth so extensive, that they will trigger a series of events that humans will term catastrophic and that might lead to deaths on a mass scale already during the lifetime of those currently living. The loss or decline of the material base of human societies in many regions and a global decrease of available freshwater and food may also be predicted. Among these phenomena, extinction and the homogenization of species reach back as far as the time of colonization. The burning of coal reserves accelerated in the past 200 years, while some developments, such as pollution or the ecological damage that plastic causes, are the product of the last 70 years. These are temporal frames that historiography can tackle. Therefore, some historians have started to explore historical sources to tell what the Anthropocene is, how it operates, and what causes triggered it (Robin and Steffen 2007). The internationally renowned philosopher of history Zoltán Boldizsár Simon is, nevertheless, of the opinion that placing the Anthropocene in a historical context has an adverse effect in that it makes it less likely that collapse will eventually be avoided (Simon 2017, 2018). He argues that a historical point of view has a soothing effect and thus takes away the feeling of urgency. Simon would like historians to stay away from research into the Anthropocene. Historian Dipesh Chakrabarty was one of the first humanities scholars to stress that the Anthropocene is a crisis that must induce all fields, including the humanities and the social sciences, to prioritize the crisis over all other things.
However, Chakrabarty’s views are in line with Simon’s, since the Indian-born scholar argues that for the latter fields this would mean giving up their focus on social and global inequalities if the related findings go against the social- and geoengineering methods that are needed to reduce greenhouse emissions (Chakrabarty 2009). Chakrabarty also advocates a new global history of humankind that would
replace social criticism with locating humans, both as a species and as conscious beings, in the Anthropocene epoch (Chakrabarty 2018). Despite these concerns from within the field, historical research has already contributed to clarifying what the Anthropocene means. First, historians have shown that consciousness of a global ecological crisis is not the product of the years around 2000. On the contrary, such thinking went hand in hand with actual biophysical changes over the past 250 years (Horn and Bergthaller 2020). Moreover, historians of science have also pointed out that the idea that humankind is a geological force appeared already at the time when the Holocene was designated as a geological epoch (Lewis and Maslin 2018). Instead of centuries of ignorance, we thus have a history of politics that sidelined those who have been trying to address the impending global crisis throughout all this time. Historians have also demonstrated that in the era of global connectedness, unequal relations that seem local may have a much wider impact. Thus, the production and reproduction of dominance and hegemony have been and are central to the formation of the Anthropocene. This brand of research includes the study of the burden of economic activities impacting another area, the so-called ghost acres (an example being the impact that the agricultural, mining, and shipping activities of the British Empire had on carbon dioxide emissions), the enforcement of certain land rights, and land-use regimes. To a large extent, the history of the Anthropocene is the study of differences and clashes among forms of knowledge production and their consequences.
For example, research into the so-called El Niño weather anomaly of the eighteenth and nineteenth centuries makes it clear that the large-scale loss of human life in South Asia was not the outcome of some kind of inherent socioeconomic backwardness of the inhabitants of the region but resulted from newly enforced colonial socioeconomic relations that increased vulnerability (Davis 2000). Even if El Niño was not a climate change-related event, the social impact of increasingly frequent extreme weather conditions may follow the same
pattern. Besides suppressed voices of alarm, the enforced nature of legal land titles and ownership patterns, and the determinants of resilience, also suggest that if we wish to historicize the Anthropocene, the deconstruction of past and existing hierarchies is a good starting point. The next section discusses the various options for dating the Anthropocene. The third section turns to the possibilities of finding a proper name for the epoch. The fourth section dwells on the importance of scaling. The last part highlights some of the difficulties of analyzing phenomena in the Anthropocene through the case of afforestation.
Dating the Anthropocene

The debate about finding the starting date for the new epoch has been going on ever since the publication of Crutzen’s paper mentioned above. There are a number of indicators supporting the argument that, 5,000 years ago, the biophysical and biochemical impact of agriculture was already large enough to prevent the onset of a cooling period. Accordingly, one might argue that the Anthropocene actually began ca. 3000 BCE (Ruddiman 2010). However, Crutzen’s paper proposed a much narrower time frame, arguing that it was not mere coincidence that the increase in the presence of carbon dioxide in the atmosphere and the invention of a new type of steam engine occurred in the same years, in the 1780s. Among pioneering historians of the Anthropocene, Paul Dukes (2011), Christophe Bonneuil and Jean-Baptiste Fressoz (2017), Andreas Malm and Alf Hornborg (2014), and Jason W. Moore (2014) applied this time frame. Publications that trace regional meanings and manifestations of the Anthropocene usually take modernity as their time frame even if they refrain from choosing specific years as a starting point (Austin 2017; Hedin and Gremaud 2018; Körber et al. 2017; Liu and Beattie 2016). However, one of the landmark studies of the field, Simon Lewis and Mark Maslin’s How We Created the Anthropocene?, took a different stance (Lewis and Maslin 2018). The authors put
forward several arguments for linking inequality and global ecological crisis. Their choice of starting date is one of these arguments. Lewis and Maslin propose that the first significant human-made change in the level of atmospheric carbon dioxide may be dated to 1610. This change was not a rise but a drop: the annihilation of the indigenous population of Central and South America triggered reforestation on a mass scale, and, by 1610, this led to a downward spike that may be traced in the ice sheets of Antarctica. Dating the start of the Anthropocene to the seventeenth century would also be reasonable from the point of view of biological homogenization across continents, which sped up due to increased maritime traffic serving colonialist purposes. Yet, as of mid-2020, it seemed likely that in geology the official starting point of the Anthropocene will be linked to 1945. This is the proposal that the designated bodies of geologists will vote on in 2021. Those who argue for this date emphasize the importance of the proliferation of nuclear weapons with the potential to destroy all forms of life, and the fact that total human activity started to cross thresholds of sustainability after the end of World War II. Among historians, John R. McNeill and Peter Engelke opted for this date and proposed that the Anthropocene should be used interchangeably with the Great Acceleration, the period of accelerating biophysical and biochemical change after 1945 (McNeill and Engelke 2014). Vinita Damodaran, a key figure in the institutionalization of global environmental history, agrees with this view (Damodaran 2017). As a twist, William Ruddiman, the same scientist who put forward the argument for 3000 BCE, claims that stratigraphy is losing importance to absolute dating technology in younger layers and that naming the epoch is therefore not crucial at all (Ruddiman 2018).
The Link Between the Name of the Epoch and Economic History

It is not only the chronology of the Anthropocene that is debated. The name itself has triggered
much discussion, in fact an even livelier one than the issue of the starting date. Those who argue that Anthropocene should not be the name for the epoch claim that such a name reenacts the division between nature and culture, while it is clear that this sharp distinction is among the root causes of ecological destruction and pollution (Dibley 2012). Daniel Chernilo, among others, stresses that tying the name of the epoch to the classical tradition and projecting it onto the whole Earth neglects the historically specific nature of Western attitudes to nature, which cannot be taken as universal. Chernilo also criticizes the anthropocentric nature of the term: for explaining the current ecological crisis, it is essential that we do not place humans at the center of the universe (Chernilo 2016). Researchers such as Malm, Hornborg, Moore, and Bellamy Foster posit that calling the epoch the Anthropocene veils the key role that capitalism played in triggering destruction and increasing hazards. The call for rebranding the epoch as the Capitalocene is the most widely known challenge in the battlefield of naming. In the last section of this entry, it will become clear that in analyzing the phenomena that had a role in the formation of the Anthropocene, it is unavoidable to place emphasis on the role of nonhuman actors. This is also true of the link between a critical reappraisal of economic history and the Anthropocene. In fact, one of the prime arguments of Bonneuil and Fressoz’s groundbreaking work on the epoch is that becoming conscious of the existence of the epoch necessitates a new model for economic history. For example, due to its role in global warming, one of the key issues of the history of the Anthropocene is the history of energy.
To place the use of alternative sources of energy in context, it is important to know that at the end of the nineteenth century there were six million windmills in the USA and that solar energy as a source of power for running mechanical structures was first experimented with in the 1870s. The first fact indicates that independent, carbon-zero energy production technologies serving a modern economy existed in the past and that studies are needed to find out why they disappeared. The second highlights that aggressive expansion and the
search for alternatives might go together. The same work also points out that, regarding the decades after 1945, the new model needs to include the link between the emergence of consumer society and Cold War nationalism, coupled with an urge to reach the stage of an all-encompassing society before the Soviet Union did (Bonneuil and Fressoz 2017). Commodification is another key aspect of the economic history of the Anthropocene epoch. In an eloquent study about how a specific species of mushroom thrives as a result of the way humans follow their traditional routines, and how this species enters a value chain of production and further processing to become a niche global commodity, Anna Lowenhaupt Tsing shows that an analysis of commodification needs to include the history of linkages between places and between species, and how these histories interact with profit-oriented institutions (Tsing 2015). Zsuzsa Gille's work on the perception and social role of materials points to situating waste and recycling within the economic history of the drive for cheap raw materials and commodification (Gille 2007).
Scaling as Synthesis
When thinking within the framework of the Anthropocene epoch, one necessarily asks whether the phenomenon observed may be scaled up or scaled down, that is, whether it also occurred in certain narrower localities or whether local processes may go against the global pattern. Depending on the scale we look at, one may identify different key features of a given phenomenon. From a local point of view, the physical and chemical qualities of material mined near a studied location might matter less than disputes over land or wages. Conversely, for a global look at changes in the source of energy, local accidents might look irrelevant. When a researcher wishes to understand not only when certain aspects of the Anthropocene – such as the rise in the atmospheric presence of carbon dioxide – occurred but also what processes triggered them, a synthesis between local, cultural, environmental, social, economic, business, and global histories
becomes necessary. Moreover, as the works of Kate Brown exemplify, local observations and case studies may serve as junctures for describing key features and drivers of the Anthropocene. In her studies Dispatches from Dystopia and Manual for Survival, Brown approaches the global logic of nuclear pollution, the exploitation of natural resources and humans, and deprivation from the perspective of the relationship between landscape and individual lives at sites located in the semiperiphery or the periphery (Brown 2015, Brown 2019). Such synthesis-oriented analysis is a considerable challenge for capacity and intellect, but the purpose is not bravado. The point of the analysis is to present stories: as best-selling author Yuval Noah Harari (Harari 2016) writes, human minds depend on stories for their functioning and actions. Since we live in times of ecological crisis, we need stories to be able to act with the purpose of overwriting the models of our near extinction.
Afforestation Campaigns as Stories from the Anthropocene
Afforestation is an excellent case for pointing out features of the Anthropocene. The number of trees and the area covered by forests are relevant to the carbon cycle and to local freshwater conditions, and forests serve as habitats and sites of human activity. Anthropogenic changes to these may be studied at various scales. Moreover, during the 2010s, a number of plans appeared in which trees figured as viable and feasible tools through which humans might actively decrease the level of carbon dioxide, and thus consciously act to exit the Anthropocene crisis. Historicizing afforestation is a useful tool to assess this claim and to reveal alternatives. Afforestation, an often centralized effort that demands the involvement of large numbers of people and resources, such as seedlings, has a history of more than a century. Forest laws containing related clauses appeared in a wave throughout the world in the second half of the nineteenth century. This synchrony was the outcome of interaction and entanglement: colonial states had a global presence and sent out European foresters
to manage and control forests that were much larger than the areas they had been overseeing in their home states. In turn, administrative centers and these foresters gained experience that resulted in textbooks, specialized journals, and legislation. This knowledge was expected to maintain forest cover in terms of statistics and, at the same time, ensure profit from felling. Thus, obligations and technologies to replant and replace forests were integral to these legal measures. Throughout the nineteenth century, there was a widespread fear of desertification and of the power of floods that would put land, perceived as capital, at risk. This prompted regulations foreseeing the creation of protective forest belts, a form of afforestation. Research related to agrarian history and the ethnobiology of various regions, such as South Asia and the Carpathian Basin, calls attention to the contemporaneity of new forest laws and land titles and the beginning of the dominance of profit-oriented agricultural technology (Varga et al. 2020). Land surveys and so-called settlement operations linked the two sets of developments. In their initial form, afforestation, the classification of land and soil, exclusions, and the profit-oriented management of resources formed a single package. Authorities protecting this mode of operation saw backward or primitive practices as the chief hazard to their success. It was the commonly managed or scattered plots and the pastoralists that embodied this hazard. Thus, the first lesson we can draw for the Anthropocene from historicizing afforestation is the need to reassess the regenerative potential of practices that fell outside the structure based on seeing landscape as a form of current or future profit and savings. Importantly, this is a global story.
Looking at subsequent developments in afforestation makes it visible that much experience has already been accumulated about the potential and limitations of afforestation used as bioengineering. The history of afforestation in post-World War I Hungary is a relevant case for three reasons. First, in European comparison, post-1920 Hungary was a country poor in forest cover. Second, it was positioned in the global semi-periphery all the way through the period of the Great Acceleration and was part of the so-called
Socialist Bloc dominated by the Soviet Union during the Cold War. Thus, Hungary exemplifies a situation that resembles the position of many other countries globally. Third, the Hungarian case is optimal for studying the relationship between decades-long continuities of practice and sudden political changes in afforestation. After World War I, in Hungary, the notion of reconstruction was intertwined with the demand that the state organize and finance large-scale efforts of anthropogenic landscape change, such as the construction of canals for water management and the afforestation of the Great Plain. There was also a link between these two types of efforts, since influential engineers and administrators believed that the reason for the scarcity of water and periodic drought in the Great Plain was the lack of forests and that this posed a limiting burden on the economy of the whole country. In the late 1940s, an even more ambitious afforestation campaign began under the new Communist regime. Although contemporary propaganda stressed the gap between the regime of Mátyás Rákosi and the preceding one, the afforestation campaign relied on experiments that had begun in the second half of the 1920s and on the expertise of the foresters who managed those experiments. Politically, the afforestation campaign that began in the late 1940s in Hungary was part of the response to the "Great Plan to Transform Nature" that Stalin launched in the Soviet Union in order to prevent the political damage that the famine of 1947–1948 might have done to his rule. In Hungary and the Soviet Union, the afforestation campaign of this period focused on creating narrow forest belts with multiple crown levels, placed perpendicular to the prevailing wind direction in order to stop or prevent desertification, and on planting trees around plots. These methods proved effective and durable and remain part of agroforestry practice to this day.
In the late 1950s, fast-growing paper consumption and the resulting increase in the value of imports directed the attention of Hungarian foresters taking part in planning activities to hybrid poplar varieties that may be used as "paper-poplar." As a result, there began a large-scale project aiming at increasing the availability of such varieties. The most salient issues were whether poplar stands planted as monocultures were sufficiently resistant, and thus sustainable, and how conditions in micro-regions interacted with the needs of poplar trees (Balogh 2018). The twist in the story is that a recent study of groundwater in one of the regions of Hungary most threatened by desertification, where both the afforestation campaign and the poplar project had an impact, revealed that poplar and pine stands had an adverse impact on the groundwater level and, thus, on the ecological sustainability of such areas (Tölgyesi et al. 2020). This is thus an example of how the longevity of paradigmatic views on the role of forest stands, the politicization of efforts to realize landscape change, and the narrowing space for professional deliberation prevented due caution and preliminary studies when afforestation was launched. During the 1960s, forestry experiments with Scotch pine were also motivated by economic and industrial concerns in a number of countries of the Socialist Bloc, including Hungary. In Germany, Scotch pine has been a characteristic species grown in plantation-like settings since the mid-nineteenth century. Research carried out in cooperation between researchers from the German Democratic Republic (East Germany) and Hungary showed that seedlings grown in one location could not be transplanted to another location with different climatic conditions and that there was thus no scope for building up shared seed production between the two countries. Yet the Scotch pine project extended to large areas within national boundaries.
Even though decades-long research preceded the large-scale propagation of a species about which global forestry had been producing data for a century, by the early twenty-first century dry spells during springtime had reduced the ability of Scotch pine stands to resist the insects preying on them. Engineering cannot secure the outcome of efforts directed at landscape change, because nonhuman actors or political changes are extremely likely to intervene. Thus, bioengineering is not a tool that is likely to lead us out of the Anthropocene crisis.
Conclusion
If we grow conscious of the Anthropocene epoch as a period of crisis, we need to apply a framework that synthesizes the existing analyses of the economic and social history of phenomena central to the epoch. This also means that materials treated as natural or artificial raw materials, or as waste, and the forms of knowledge behind anthropogenic change stand at the focus of studies of the Anthropocene. Thus, a view that takes the Anthropocene seriously is not an anthropocentric one in practice. Such research is critical of the growth of consumption and production. Processes leading to the crisis of the Anthropocene, and alternatives to them, may be studied by looking at localities and regions with sensitivity to the possibility of scaling observed phenomena up or down. We could see that the choice of the starting date of the epoch significantly influences the road criticism may take. However, the choice of the name is not so consequential: one can carry out critical economic-historical analysis without using a politically charged term, such as Capitalocene, that many people regard with prejudice. Based on experience with bioengineering solutions, such measures rarely change outcomes in the desired and planned ways. At the same time, historical analysis also shows that a shift away from the currently dominant notions of land titles and rights, toward the idea of the commons, may lead to regenerative processes.
Cross-References
▶ Endangered Species
▶ Environmental Security
▶ Exploitation of Resources
▶ Greenhouse Gas Emissions
▶ Paleoclimatology
References
Austin, G. (Ed.). (2017). Economic development and environmental history in the Anthropocene: Perspectives on Asia and Africa. London: Bloomsbury Academic.
Balogh, R. (2018). Was there a socialist type of Anthropocene during the Cold War? Science, economy, and the history of the poplar species in Hungary, 1945–1975. Hungarian Historical Review, 7(3), 594–622.
Bonneuil, C., & Fressoz, J.-B. (2017). The shock of the Anthropocene. London/New York: Verso.
Brookes, A., & Fratto, E. (2020). Towards Russian literature of the Anthropocene: Introduction. Russian Literature, 114–115, 1–22.
Brown, K. (2015). Dispatches from dystopia: Histories of places not yet forgotten. Chicago: The University of Chicago Press.
Brown, K. (2019). Manual for survival: A Chernobyl guide to the future. London: Penguin.
Chakrabarty, D. (2009). The climate of history: Four theses. Critical Inquiry, 35(2), 197–222.
Chakrabarty, D. (2018). Anthropocene time. History and Theory, 57(1), 5–32.
Chernilo, D. (2016). The question of the human in the Anthropocene debate. European Journal of Social Theory, 20(1), 27–43.
Crutzen, P. J. (2002). Geology of mankind. Nature, 415(6867), 23.
Damodaran, V. (2017). The locality in the Anthropocene: Perspectives on the environmental history of Eastern India. In E. Alexander, J. Cullis, & V. Damodaran (Eds.), Climate change and the humanities: Historical, philosophical and interdisciplinary approaches to the contemporary environmental crisis (pp. 93–116). London: Palgrave Macmillan.
Davis, M. (2000). Late Victorian holocausts: El Niño famines and the making of the third world. London: Verso.
Dibley, B. (2012). "Nature is us": The Anthropocene and species-being. Transformations, 21. http://www.transformationsjournal.org/issues/21/article_07.shtml.
Dukes, P. (2011). Minutes to midnight: History and the Anthropocene era from 1763. London: Anthem Press.
Gille, Z. (2007). From the cult of waste to the trash heap of history: The politics of waste in socialist and postsocialist Hungary. Bloomington: Indiana University Press.
Harari, Y. N. (2016). Homo Deus: A brief history of tomorrow. London: Harvill Secker.
Hedin, G., & Gremaud, A.-S. N. (2018). Artistic visions of the Anthropocene North: Climate change and nature in art. London: Routledge.
Horn, E., & Bergthaller, H. (2020). The Anthropocene: Key issues for the humanities. London: Routledge.
Körber, L.-A., et al. (2017). Arctic environmental modernities: From the age of polar exploration to the era of the Anthropocene. London: Palgrave Macmillan.
Lewis, S., & Maslin, M. A. (2018). The human planet: How we created the Anthropocene. London: Pelican.
Liu, T.-J., & Beattie, J. (Eds.). (2016). Environment, modernization and development in East Asia: Perspectives from environmental history. London: Palgrave Macmillan.
Malm, A., & Hornborg, A. (2014). The geology of mankind? A critique of the Anthropocene narrative. The Anthropocene Review, 1, 62–69.
McNeill, J. R., & Engelke, P. (2014). The great acceleration: An environmental history of the Anthropocene since 1945. Cambridge, MA: The Belknap Press of Harvard University Press.
Moore, J. W. (2014). The end of cheap nature, or: How I learned to stop worrying about "the" environment and love the crisis of capitalism. In C. Suter & C. Chase-Dunn (Eds.), Structures of the world political economy and the future of global conflict and cooperation (pp. 285–314). Berlin: LIT.
Robin, L., & Steffen, W. (2007). History for the Anthropocene. History Compass, 5, 1694–1719.
Ruddiman, W. (2010). Plows, plagues, and petroleum: How humans took control of climate. Princeton: Princeton University Press.
Ruddiman, W. F. (2018). Three flaws in defining a formal 'Anthropocene'. Progress in Physical Geography: Earth and Environment, 42(4), 451–461.
Simon, Z. B. (2017). Why the Anthropocene has no history: Facing the unprecedented. The Anthropocene Review, 4(3), 239–245.
Simon, Z. B. (2018). The limits of Anthropocene narratives. European Journal of Social Theory, 20(1), 9–38.
Tölgyesi, Cs., Török, P., Hábenczyus, A. A., Bátori, Z., Valkó, O., Deák, B., Tóthmérész, B., Erdős, L., & Kelemen, A. (2020). Underground deserts below fertility islands? Woody species desiccate lower soil layers in sandy drylands. Ecography, 43(6), 848–859.
Varga, A., Demeter, L., Ulicsni, V., Öllerer, K., Biró, M., Babai, D., & Molnár, Zs. (2020). Prohibited, but still present: Local and traditional knowledge about the practice and impact of forest grazing by domestic livestock in Hungary. Journal of Ethnobiology and Ethnomedicine, 16, 51.
Antibiotics
Peter Popella
Interfaculty Institute of Microbiology and Infection Medicine (IMIT), University of Tuebingen, Tuebingen, Germany
Keywords
Antibiotics · Anti-infectives · Antimicrobials · Drugs · Infection · Bacteria
Introduction
Antibiotics are low-molecular-weight substances that exhibit a harmful effect on bacterial cells. Like other anti-infective agents, e.g., antivirals (targeting viruses) or anthelmintics (targeting parasitic worms), antibiotics are used as drugs to treat infections with bacterial pathogens in humans. However, pets and livestock are also treated with antibiotics to cure and prevent bacterial infections. Antibiotics should be differentiated from antiseptics, which are used prophylactically to remove bacteria from living tissue such as the human skin, and disinfectants, which are applied to reduce the bacterial load on nonliving objects (McDonnell and Russell 1999).
Further Reading
Chua, L., & Fair, H. (2019). Anthropocene. In F. Stein, S. Lazar, M. Candea, H. Diemberger, J. Robbins, A. Sanchez, & R. Stasch (Eds.), The Cambridge encyclopedia of anthropology.
Haraway, D. (2015). Anthropocene, Capitalocene, Plantationocene, Chthulucene: Making kin. Environmental Humanities, 6, 159–165.
Simon, Z. B. (2020). The epochal event: Transformations in the entangled human, technological, and natural worlds. London: Palgrave Macmillan.
Tsing, A. L. (2015). The mushroom at the end of the world: On the possibility of life in capitalist ruins. Princeton: Princeton University Press.
Ulmer, J. B. (2017). Posthumanism as research methodology: Inquiry in the Anthropocene. International Journal of Qualitative Studies in Education, 30(9), 832–848.
Ulmer, J. B. (2019). The Anthropocene is a question, not a strategic plan. Philosophy and Theory in Higher Education, 1(1), 65–84.
A Brief History of Antibiotics
Herbal medicines and the application of molds for the treatment of sickness have been traced back to ancient cultures, e.g., the Nubians in 300 BC (Nelson et al. 2010). However, the conscious application of pure antibiotics as drugs is a rather recent development. The German physician Paul Ehrlich and the Japanese microbiologist Sahachiro Hata laid the foundation of modern targeted antibacterial therapy in the early 1900s with the discovery of arsphenamine, marketed as Salvarsan for the treatment of syphilis. However, as a product of chemical synthesis, arsphenamine is classified as a chemotherapeutic, not an antibiotic. Alexander Fleming, a
Scottish physician and microbiologist, isolated the first antibiotic, benzylpenicillin (penicillin G), in 1928 from the mold Penicillium notatum. After returning from a holiday, Fleming noticed that bacteria had been killed by the mold, which had grown as a contaminant in old petri dishes – making this a discovery by way of a lucky accident. From the 1940s onward, penicillin found widespread application, and further antibiotics, such as the tetracyclines, were discovered. The "golden age" of antibiotic discovery and development lasted until the 1970s. However, contrary to the notion of healthcare professionals at the time, bacterial infections were not defeated. The first bacterial strains showing resistance to antibiotics emerged as early as the 1950s, and by the end of the 1970s it became increasingly clear that the emergence of resistant bacteria would pose a serious threat to public health and might render especially the older drugs ineffective (Aminov 2010).
Production of Antibiotics
Historically, antibiotics were produced by growing the producing microorganisms in large vessels (so-called fermenters) filled with liquid culture medium. These fermenters are inoculated with the antibiotic producers and run under controlled conditions (e.g., in terms of temperature, stirring, pH, oxygen, and nutrients) until the nutrients are depleted and a sufficient amount of the antibiotic has been produced. Following this, the antibiotic is isolated from the culture broth and purified for application as a drug. Producing strains are optimized to yield a high amount of product, e.g., by undirected mutation via a long selection process or by targeted genetic modification. Nowadays, many natural products are subjected to chemical modification after the fermentation and isolation procedure, to boost beneficial properties, e.g., stability or solubility, or to evade resistance mechanisms. In addition to this semi-synthesis, some antibiotics can also be produced by chemical total synthesis, omitting the biological fermentation process and the variability in product quality that is inherent to it (Smith 1986). While many microorganisms can
produce substances with antibiotic properties, only a few are applicable as drugs for human use. Unfavorable properties of the compounds, such as toxicity, instability, inefficient uptake and distribution, or the low amounts produced by the production strains, restrict the medical use of many substances. Large screening endeavors have been undertaken to discover new classes of antibiotics, but with limited success. Given the low return on investment for antibiotics, few pharmaceutical companies are willing to undertake research and development programs, leaving academic institutions and small, idealistic start-ups as the only remaining players (Gulland 2018; O'Neill 2016).
Mode of Action and Targets
Depending on their cellular target and the activity they exhibit after interacting with it, antibiotics can either inhibit the growth of bacteria without affecting the viability of the cell (in which case they are called "bacteriostatic") or kill them upon contact (as "bactericidal" antibiotics). Many bacteriostatic agents inhibit the biosynthesis of new proteins in the bacterial cell, usually by binding to the ribosomes, which constitute the protein biosynthetic machinery, thereby impairing the growth of the bacteria. Bactericidal antibiotics severely interfere with the structural integrity of the cell, e.g., by targeting the cell wall or membrane or damaging the DNA, subsequently generating so-called toxic radicals. Toxic radicals are very harmful to bacteria, since they directly damage the bacterial cell but also disturb metabolic pathways (e.g., the TCA cycle). Antibiotics leading to the production of high amounts of toxic radicals, and the synergistic combination of antibiotics with compounds that trigger the production of toxic radicals, are promising prospects for combating persistent infections. Whether an antibiotic exhibits bacteriostatic or bactericidal activity might also depend on the actual concentration of the agent that reaches the bacterial cell. Antibiotics used as drugs for
humans must target cellular structures or metabolic pathways that are found only in bacterial, but not in human, cells. Hence only the pathogen is harmed by the antibiotic, not the human body. Prominent targets of antibiotics are thus the unique cell walls of bacteria, the bacterial ribosomes, which are distinct from the ones present in human cells, and the biosynthetic pathway for folic acid, which is likewise absent from the human metabolism. Further targets are the replication of the genetic material (DNA) and the template of protein biosynthesis derived from it (RNA), as well as the bacterial membrane (Kapoor et al. 2017; Kohanski et al. 2010).
Treatment of Infections with Antibiotics
The immune system of the human body, a complex, interwoven net of specific cells, organs, and molecular structures, is usually capable of keeping invading bacteria in check. Additionally, the human body is colonized by trillions of harmless bacteria and other microorganisms, so-called commensals, which constitute the human microbiota. However, if a critical number of pathogenic bacteria enter the body, e.g., by inhalation, swallowing, or through an injury of the skin, a bacterial infection is the outcome. This poses a problem especially for humans with a weakened immune system due to age, medical conditions like cancer or HIV/AIDS, or alcoholism. In these cases, treatment with bacteriostatic antibiotics is often not enough to fight the developing infection. For effective treatment of bacterial infections, the concentration of the specific antibiotic in the human body must exceed the minimal inhibitory concentration (MIC) for a certain amount of time. During treatment, the concentration of the antibiotic is lowered over time by the excretion and metabolization of the compound by the human body. To keep the drug level sufficiently high, the dosage and duration of the treatment must be carefully adjusted (Leekha et al. 2011). Antibiotics are usually administered systemically or topically. For systemic treatment, tablets or liquids containing the antibiotics are either swallowed (oral application), injected with a syringe, or administered intravenously, causing
the distribution of the antibiotic throughout the whole body of the patient. For topical administration, antibiotics are formulated as creams, lotions, or drops. Topical administration circumvents the systemic distribution of the antibiotic, allowing higher local concentrations at the site of infection. However, dosage is more difficult, and when systemic uptake is circumvented in this way, adverse local reactions are more likely to occur (Enenkel and Stille 1988).
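The dosing logic described above – the drug level must stay above the MIC while excretion and metabolization continually lower it – can be illustrated with a toy one-compartment pharmacokinetic sketch assuming first-order elimination. The half-life, dose increment, dosing interval, and MIC values below are hypothetical, chosen only for illustration, not drawn from any real drug:

```python
import math

def trough_concentrations(dose_increment, half_life_h, interval_h, n_doses):
    """Simulate repeated dosing with first-order elimination.

    Each dose instantly raises the plasma concentration by
    `dose_increment` (mg/L); between doses the concentration decays
    exponentially with the given half-life. Returns the concentration
    just before each subsequent dose (the "trough"), which must stay
    above the MIC for effective therapy.
    """
    k = math.log(2) / half_life_h      # elimination rate constant (1/h)
    decay = math.exp(-k * interval_h)  # fraction remaining after one interval
    conc = 0.0
    troughs = []
    for _ in range(n_doses):
        conc += dose_increment         # the dose is absorbed
        conc *= decay                  # the drug decays until the next dose
        troughs.append(conc)
    return troughs

# Hypothetical regimen: 6 h half-life, one dose every 8 h, MIC of 2 mg/L.
mic = 2.0
troughs = trough_concentrations(dose_increment=8.0, half_life_h=6.0,
                                interval_h=8.0, n_doses=10)
# Troughs rise toward a steady state; therapy is adequate while they exceed the MIC.
print([round(c, 2) for c in troughs])
```

With these illustrative numbers the troughs climb toward a steady state well above the MIC; doubling the dosing interval, by contrast, would leave the steady-state trough below the MIC, which is why dosage and duration must be adjusted together, as noted above.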
Antibiotic Overuse and Misuse
Antibiotic malpractice and misuse cover several distinct problems. Antibiotics are often mistakenly prescribed to treat illnesses that are caused not by bacteria but by viruses. Examples are the common cold, influenza, acute bronchitis, and sore throats. Antibiotics are not able to combat these infections; on the contrary, they might even be harmful and hinder recovery. In the case of bacterial infections, the concentration of the antibiotic must be kept sufficiently high to combat the bacteria. A common mistake on the side of the patient is to stop taking the prescribed dose as soon as they feel better, which can result in a recurring infection and the emergence of resistance in the bacteria. The excessive prescription and use of certain classes of antibiotics have also led to the emergence of resistant bacteria. Antibiotics are furthermore fed to animals in livestock husbandry, to prevent infections of the animals but also to take advantage of the growth-promoting effect of some antibiotic compounds. This leads not only to the emergence of resistant bacteria but also to the spillage of antibiotics into wastewater and the harmful contamination of the environment (Ventola 2015). So far, direct harmful effects on humans stemming from antibiotic overuse in the livestock industry have been regarded as relatively low. This picture appears to be changing, however: studies highlight the presence, in the human microbiome, of resistance genes against antibiotics predominantly used in food production, and also link obesity to the continuous uptake of low levels of residual antimicrobials through the human diet (Forslund et al. 2013; Riley et al. 2013).
Trends and Future Directions
Since many prescribed antibiotics belong to only a few substance classes and the screening of established strain libraries yields few new compounds, new sources must be tapped to discover novel compounds. While many of the current antibiotics are derived from soil dwellers, unusual habitats are nowadays screened for the presence of antibiotic producers. These habitats include caves, plants and their endophytic microorganisms, and hot springs and geysers. The human body is also a source of antimicrobial compounds, which are produced by commensal bacteria or stem from bacteriophages (i.e., viruses that infect bacteria). Many known antibiotic producers harbor additional so-called "silent gene clusters," which are not expressed under established production conditions. Many of these clusters bear the potential to encode yet-untapped antibiotics. Ideally, novel compounds should also target yet-unexploited cellular structures of the bacteria to avoid cross-resistance. Innovative ideas are not restricted to the process of discovery and production but also lead to novel therapeutic approaches. These include combination therapy using multiple antibiotics, alternating the administration of antibiotics to prevent the emergence of resistance, the targeting of persisting bacteria, the introduction of quorum-sensing inhibitors that hinder communication between bacteria, and the pharmacological targeting of host-pathogen interactions (Hauser et al. 2016).
Conclusion
Antibiotics are mankind's most important drugs for combating bacterial infections. However, existing compounds are losing their effectiveness due to over- and misuse and the ensuing emergence of antibiotic-resistant bacteria. Aggravating this, shortcomings in research and development efforts prevent novel drugs from entering the market. A decreasing availability of antibiotics for various reasons, such as production shortages, disasters, or economic warfare (as in the case of general sanctions applied against a given country), likewise threatens human health and, thus, global security (Otero et al. 2013).
Cross-References
▶ Antimicrobial Resistance
References
Aminov, R. I. (2010). A brief history of the antibiotic era: Lessons learned and challenges for the future. Frontiers in Microbiology, 1, 134. https://doi.org/10.3389/fmicb.2010.00134.
Enenkel, S., & Stille, W. (1988). Administration of antibiotics. In Antibiotics in the tropics (pp. 39–40). Berlin/Heidelberg: Springer. https://doi.org/10.1007/978-3-642-73276-8_5.
Forslund, K., Sunagawa, S., Kultima, J. R., Mende, D. R., Arumugam, M., Typas, A., & Bork, P. (2013). Country-specific antibiotic use practices impact the human gut resistome. Genome Research, 23(7), 1163–1169. https://doi.org/10.1101/gr.155465.113.
Gulland, A. (2018). Drug companies are starting to tackle antimicrobial resistance but could do more, report shows. BMJ, 360, k269. https://doi.org/10.1136/bmj.k269.
Hauser, A. R., Mecsas, J., & Moir, D. T. (2016). Beyond antibiotics: New therapeutic approaches for bacterial infections. Clinical Infectious Diseases, 63(1), 89–95. https://doi.org/10.1093/cid/ciw200.
Kapoor, G., Saigal, S., & Elongavan, A. (2017). Action and resistance mechanisms of antibiotics: A guide for clinicians. Journal of Anaesthesiology Clinical Pharmacology, 33(3), 300. https://doi.org/10.4103/joacp.JOACP_349_15.
Kohanski, M. A., Dwyer, D. J., & Collins, J. J. (2010). How antibiotics kill bacteria: From targets to networks. Nature Reviews Microbiology, 8(6), 423–435. https://doi.org/10.1038/nrmicro2333.
Leekha, S., Terrell, C. L., & Edson, R. S. (2011). General principles of antimicrobial therapy. Mayo Clinic Proceedings, 86(2), 156–167. https://doi.org/10.4065/mcp.2010.0639.
McDonnell, G., & Russell, A. D. (1999). Antiseptics and disinfectants: Activity, action, and resistance. Clinical Microbiology Reviews, 12(1), 147–179. http://www.ncbi.nlm.nih.gov/pubmed/9880479.
Nelson, M. L., Dinardo, A., Hochberg, J., & Armelagos, G. J. (2010). Brief communication: Mass spectroscopic characterization of tetracycline in the skeletal remains of an ancient population from Sudanese Nubia 350–550 CE. American Journal of Physical Anthropology, 143(1), 151–154. https://doi.org/10.1002/ajpa.21340.
O'Neill, J. (2016). Antimicrobial resistance: Tackling a crisis for the health and wealth of nations. Review on Antimicrobial Resistance. https://amr-review.org/Publications.html
Otero, L. H., Rojas-Altuve, A., Llarrull, L. I., Carrasco-Lopez, C., Kumarasiri, M., Lastochkin, E., . . . Hermoso, J. A. (2013). How allosteric control of Staphylococcus aureus penicillin binding protein 2a enables methicillin resistance and physiological function. Proceedings of the National Academy of Sciences of the United States of America, 110(42), 16808–16813. https://doi.org/10.1073/pnas.1300118110.
Riley, L. W., Raphael, E., & Faerstein, E. (2013). Obesity in the United States – dysbiosis from exposure to low-dose antibiotics? Frontiers in Public Health, 1, 69. https://doi.org/10.3389/fpubh.2013.00069.
Smith, J. E. (1986). Concepts of industrial antibiotic production. In Perspectives in biotechnology and applied microbiology (pp. 105–142). Dordrecht: Springer Netherlands. https://doi.org/10.1007/978-94-009-4321-6_9.
Ventola, C. L. (2015). The antibiotic resistance crisis: Part 1: Causes and threats. P & T: A Peer-Reviewed Journal for Formulary Management, 40(4), 277–283. Retrieved from http://www.ncbi.nlm.nih.gov/pubmed/25859123.
Further Reading
Flandroy, L., Poutahidis, T., . . . Rook, G. (2018). The impact of human activities and lifestyles on the interlinked microbiota and health of humans and of ecosystems. Science of the Total Environment, 627, 1018–1038. https://doi.org/10.1016/j.scitotenv.2018.01.288.
Gallagher, J. C., & MacDougall, C. (2017). Antibiotics simplified (4th ed.). Burlington: Jones & Bartlett Learning. ISBN-10: 1284111296.
Walsh, F. (2013). The multiple roles of antibiotics and antibiotic resistance in nature. Frontiers in Microbiology, 4, 255. https://doi.org/10.3389/fmicb.2013.00255.
Anti-globalizationists
Milica Pejovic
School of International Studies, University of Trento, Trento, Italy
Keywords
Globalization · Social movements · Neoliberalism · Protest politics
Introduction
Globalization is a controversial term that has raised both fears and hopes – a process generating "winners" and "losers," who are, however, unevenly distributed across different countries and social categories (see the entry on ▶ "Globalization and Security"). The fact that globalization has become a buzzword in public discourse and an integral part of public opinion shows the importance attributed to both its economic and political aspects (Inglehart 1999). Globalization's opponents have been a particularly vocal part of that public opinion, effectively communicating and displaying the shortcomings of globalization as well as the costs it has allegedly generated for certain social groups. The activists of anti-globalization movements have been skillful in organizing events and obstructing international entities associated with the neoliberal agenda (see ▶ "Neoliberalism"). Moreover, these events have come under intense media scrutiny due to the conflictual interactions between political elites and their opponents. Both activists and scholars debate whether this movement has a unitary character or rather represents a network of various groups – a "movement of movements" – since it tackles grievances that are widespread and diverse (Mertes 2004). Indeed, the movement is highly heterogeneous and encompasses a range of movements dealing with issues such as class and inequality, indigenous rights, and environmental protection (Arrighi et al. 1989; Brecher et al. 2000; Tarrow 2005). Its name has also been contentious. According to some activists, the term "anti-globalization" may wrongly connote a position of isolationism and protectionism, even though the movement is rather supportive of the free circulation of people and transnational in nature. They consider the term "anti-globalization" a misnomer because it fails to differentiate a leftist, internationalist stance from the nationalist and protectionist ideology of rightist political movements.
The majority of groups labeled as “anti-globalizationists” actually advocate globalization “from below,” as opposed to the dominant form of globalization “from above.” Thus, activists and scholars have proposed alternative names for the movement such as “global justice movement” (Hosseini 2010) or “alter-globalization movement” (Pleyers and Touraine 2010).
Anti-neoliberal Movements
It has been widely accepted that the Zapatista insurgency – a movement for land reform and indigenous rights – in Chiapas, Mexico, in January 1994, represented the start of the anti-globalization movement. The goal of this locally focused guerrilla action – launched in parallel with the entry into force of the North American Free Trade Agreement (NAFTA) – was to defend indigenous peoples and farmers against the domination of the Mexican state, the effects of neoliberal policy measures, and the inhumanity of global capitalism (see Hayden 2002). The Zapatistas made a compelling case for their cause by shedding light on the links between local problems, corrupt national authorities, and unjust global economic arrangements (Schulz 1998). Their appeals echoed powerfully across the globe, prompting solidarity and support from various social movements and proving effective in raising questions about neoliberal capitalism and politicizing it. In 1996, the Zapatistas hosted an Intercontinental Encounter for Humanity Against Neoliberalism in the jungles of Chiapas, where around 5,000 activists from over 40 countries rallied. The group gathered again in Geneva in 1998 and established the Peoples' Global Action, a network of independent organizations critical of capitalism, imperialism, and cultural domination. Following the Zapatista insurgency and its success in defending the rights of indigenous peoples, a number of protests challenged the summits of global economic and political institutions in the late 1990s (Houtart and Polet 1999; Pianta and Marchetti 2007). Nationally based social movements opposing the dominant model of neoliberal globalization formed transnational networks in order to coordinate their activities and foster international political action. Moreover, the anti-globalizationists denounced multinational corporations for maximizing profits while neglecting work safety conditions, remuneration standards, and environmental ethics.
According to them, large corporations, led by their financial interests, undermined local decision-making and small businesses. Furthermore, the anti-globalizationists lamented global inequality and the gap
between the rich North and the poor South, denouncing the dominant neoliberal economic setup for having allowed more affluent economies to exploit developing countries (O'Byrne and Hensby 2011). The majority of anti-globalization movements oppose the neoliberal aspects of globalization by protesting against global and regional organizations whose main mission is to foster corporate globalization by promoting and implementing neoliberal free-market policies. These organizations include the International Monetary Fund (IMF), the World Trade Organization (WTO), the Group of Eight (G8), and the World Economic Forum (WEF). The IMF, which epitomizes economic globalization, was founded after the Second World War with the mission of guaranteeing international financial stability and preventing economic collapses similar to the Great Depression (see ▶ "International Monetary Fund (IMF)"). However, since the 1980s, the IMF has embraced the main postulates of neoliberal ideology, namely, liberalization, deregulation, and privatization (Stiglitz 2002). The WTO is another key actor of economic globalization; it grew out of the General Agreement on Tariffs and Trade (GATT), established after the Second World War to decrease tariffs on goods and services. Since 1971, the WEF has held annual meetings of executives from the world's most affluent corporations, national political leaders, prominent intellectuals, and journalists, usually in Davos, Switzerland. The WEF provides space for the global elite to meet and network and represents an example of globalization from above. The G8 comprises the eight most powerful economies in the world: the United States, the United Kingdom, Canada, France, Russia, Italy, Germany, and Japan.
Although this formation lacks legal underpinnings, the leaders of the G8 meet annually to informally set the global agenda, create initiatives, and prompt governments to take or revise their positions on key issues of the summit, ranging from security to trade. Anti-globalization protests were mainly organized in parallel with the meetings and summits of the organizations and institutions mentioned above, reaching their peak intensity in the period between 1999 and 2001. The 1999 Seattle protest
was the first major protest in this intense series of global protests directed at neoliberal global institutions. The protest expressed opposition to the WTO summit and the Millennium Round of trade liberalization talks, gathering approximately 75,000 activists. During the so-called Battle of Seattle, a group of students, anarchists, and militant environmentalists formed a nonviolent human "wall," preventing trade ministers from accessing the convention center to hold their meetings. Police forces reacted by using tear gas and rubber bullets. In response, a group of anarchists resorted to vandalism, destroying the windows of major banks and corporations. Consequently, the city authorities declared a state of civil emergency and arrested several hundred protesters for acts of civil disobedience. Finally, the Seattle round of trade negotiations failed as developing nations, encouraged by the protests, rejected the proposals of the developed countries. Although the protest – which gained unprecedented attention from the public and policymakers – only partially contributed to the failure of the WTO conference, in the eyes of activists, the media, and political elites this failure was evidence of the palpable effect of the transnational anti-globalization movement on the course of global decision-making. In 2000 and 2001, a dozen massive anti-globalization protests coalesced around major gatherings of the global political and financial elites, such as the IMF meetings in Washington, the WEF gathering in Melbourne, the G20 meetings in Montreal, the Summit of the Americas in Quebec City, and the G8 summit in Genoa, where one young protester was killed. Moreover, the beginning of the twenty-first century saw several campaigns directed against the neoliberal doctrine, such as the Jubilee 2000 campaign or the anti-MAI campaign.
The aim of the Jubilee 2000 campaign was to cancel the foreign debt of the least developed countries by the year 2000, and it indeed managed to prompt the creditor governments and the IMF to lay the foundations for debt relief of highly indebted low-income countries. Another successful example was the anti-MAI campaign, which managed to block the Multilateral Agreement on Investment proposed within the Organisation for Economic Co-operation and Development.
The anti-globalization movement was particularly powerful and compact in the United States. However, the terrorist attacks on New York's World Trade Center on September 11, 2001, marked a watershed: both the US and global security contexts were radically altered, jeopardizing the movement's future. One of the immediate effects of these attacks was the organizers' decision to cancel an anti-globalization protest planned for the end of September 2001 in Washington, DC. The new security climate restrained the combative tactics of anti-globalization activists, as law enforcement showed little tolerance for acts of public disturbance (see ▶ "Democratic Security"). Although the police had occasionally attempted to stifle protests by force even prior to the 9/11 attacks (della Porta and Reiter 1998), the state of emergency propelled the use of coercive methods by the forces of order. The USA PATRIOT Act, passed shortly after the terrorist attacks, granted additional legitimacy to coercive police action against protesters, making it easier for US law enforcement agencies to criminalize civil disorder.
The Anti-war Movement and the World Social Forum
Although the new security context and rules hindered the mobilization of activists, the anti-war movement that emerged in response to the 2003 war in Iraq reinvigorated the anti-globalization movement. Anti-globalizationists tried to link war and globalization by accusing large military corporations of inciting war to reap benefits from armament contracts. As a result of the American military operations in Iraq and Afghanistan, many anti-globalization activists linked corporate globalization and US power, directing their protests against the "war on terror" waged by the Bush administration. The anti-war demonstrations reached their peak with more than one million participants mobilized in events organized across several European cities during the 2003 and 2004 global days of action against the war in Iraq. The ideas of opposition to war and support for peaceful
conflict resolution managed to mobilize masses of people of very diverse backgrounds, all willing to put pressure on national and global political leaders and stop the war. Anti-globalization groups expressed concerns about the proper functioning of democratic institutions, since the leaders of many of the countries involved (such as Spain, Italy, or the United Kingdom) neglected the predominantly anti-war sentiments of their electorates. The World Social Forum (WSF) and its regional offshoots were key drivers of the anti-war movement. The Forum represents a space for the gathering of all organizations, social movements, and individuals opposing the neoliberal form of globalization. The first World Social Forum was held in Porto Alegre, Brazil, in 2001 and was followed by a series of global, national, and regional social forums all over the world (see Sen and Waterman 2007). The World Social Forum provides a space for local and national social movements to network, plan future action, and showcase their cohesion and common international identity on the global scene. The initial idea was to establish a parallel counter-event to the WEF held in Davos. In June 2001, the International Council adopted the World Social Forum Charter of Principles, which provides a framework for international, national, and local social forums. The WSF has become a periodic meeting organized almost every year at different locations across the globe. In addition to the events organized at the global level, social forums have also been organized at the regional level. One of the main offshoots of the WSF is the European Social Forum, which was held for the first time in November 2002 in Florence under the slogan "Against the war, against racism and against neo-liberalism." Subsequent forums were held in other European cities and contributed to the continuity of the meetings.
During the November 2002 European Social Forum, it was announced that the first global day of action against the invasion of Iraq would take place in February 2003. It was the largest global day of action ever, involving tens of millions of people in over 500 cities and hundreds of coordinated anti-war events across five continents to denounce the attack on Iraq (Walgrave and Rucht 2010).
Conclusion
The anti-globalization movement has been mainly fueled by the negative consequences of structural adjustment, privatization, and deregulation, which have generated the so-called losers of globalization. According to many movement activists, global political and financial elites have led the process of economic globalization based on neoliberal postulates, exacerbating global inequality. Starting in the late 1990s, the anti-globalization movement chose IMF, WTO, WEF, and G8 meetings as locations for protests, since these political and economic formations represented the quintessence of neoliberal philosophy. According to different groups within the anti-globalization movement, the current form of globalization has deepened the pauperization of developing nations, hitting disadvantaged groups hardest. Although after September 11, 2001, the anti-globalization movement risked falling into oblivion due to the altered security context, it showed resilience and assumed the form of an anti-war movement, focusing its attention on US-led military operations in Iraq and Afghanistan. The initiatives against globalization are of a miscellaneous character, tackling a variety of issues such as the exploitation of child labor, deforestation, human rights in developing countries, and military interventions by Western countries. While denouncing neoliberal policies and wars, the anti-globalization movement promotes participatory democracy and bottom-up decision-making. According to the movement's activists, the protection of the environment, human rights, and democratic institutions has to be coupled with efforts to govern globalization in ways that produce an ethical and equitable impact on developing economies and vulnerable social groups.
Cross-References
▶ Democratic Security
▶ Globalization and Security
▶ International Monetary Fund (IMF)
▶ Neoliberalism
References
Arrighi, G., Hopkins, T. K., & Wallerstein, I. (1989). Antisystemic movements. London: Verso.
Brecher, J., Costello, T., & Smith, B. (2000). Globalization from below: The power of solidarity. Boston: South End Press.
della Porta, D., & Reiter, H. (Eds.). (1998). Policing protest: The control of mass demonstrations in western democracies. Minneapolis: University of Minnesota Press.
Hayden, T. (2002). The Zapatista reader. New York: Thunder's Mouth/Nation Books.
Hosseini, S. A. H. (2010). Alternative globalizations: An integrative approach to studying dissident knowledge in the global justice movement. New York: Routledge.
Houtart, F., & Polet, F. (2001). The other Davos: The globalization of resistance to the world economic system. London: Zed Books.
Inglehart, R. (1999). Globalization and postmodern values. The Washington Quarterly, 23, 215–228.
Mertes, T. (Ed.). (2004). A movement of movements. London: Verso.
O'Byrne, D. J., & Hensby, A. (2011). Theorizing global studies. London: Palgrave Macmillan.
Pianta, M., & Marchetti, R. (2007). The global justice movements: The transnational dimension. In D. della Porta (Ed.), The global justice movement: A cross-national and transnational perspective (pp. 29–51). Boulder: Paradigm.
Pleyers, G., & Touraine, A. (2010). Alter-globalization: Becoming actors in a global age. Cambridge: Polity.
Schulz, M. S. (1998). Collective action across borders: Opportunity structures, network capacities, and communicative praxis in the age of advanced globalization. Sociological Perspectives, 41(3), 587–616.
Sen, J., & Waterman, P. (Eds.). (2007). World social forum: Challenging empires (2nd ed.). Tonawanda: Black Rose Books.
Stiglitz, J. E. (2002). Globalization and its discontents. New York: Norton.
Tarrow, S. (2005). The new transnational activism. New York/Cambridge: Cambridge University Press.
Walgrave, S., & Rucht, D. (Eds.). (2010). Protest politics: Antiwar mobilization in advanced industrial democracies. Minneapolis: University of Minnesota Press.
Further Reading
Burris, W. C. (Ed.). (2010). Protectionism and anti-globalization. New York: Nova Science.
Drake, M. S. (2007). Power, resistance and "anti-globalisation" movements in the context of the "war on terror". Northampton: Edward Elgar Publishing.
Kiely, R. (2005). The clash of globalisations: Neo-liberalism, the third way, and anti-globalisation. Leiden/Boston: Brill.
Klein, N. (2002). Fences and windows: Dispatches from the front lines of the globalization debate. New York: Picador.
Antimicrobial Resistance
Peter Popella
Interfaculty Institute of Microbiology and Infection Medicine (IMIT), University of Tuebingen, Tübingen, Germany
Keywords
Antimicrobial resistance · Antibiotics · Anti-infectives · Resistance · Drugs · Bacteria
Introduction
Antimicrobial resistance (AMR) is the ability of pathogenic microorganisms to withstand the effect of an antimicrobial drug, allowing them to survive the exposure, continue growing, and spread further. Infections with microorganisms that are resistant to certain antimicrobial drugs are difficult to treat and can often result in the death of the patient due to therapy failure, accounting for an estimated 700,000 deaths per year worldwide (O'Neill 2016). Just as bacteria can develop and acquire resistance to antibiotics, so can other microorganisms: viruses to antivirals, helminths to anthelmintics, protozoans to antiprotozoals, and so forth.
Development and Acquisition of Antimicrobial Resistance
Some bacteria are inherently resistant to specific antibiotic classes, which often goes hand in hand with the ability of the specific bacterial strain to produce a structurally related compound with antibiotic activity itself. However, bacteria can also acquire resistance, either through randomly occurring mutations within their genome or through the acquisition of resistance-conferring genes from the environment via so-called horizontal gene transfer (HGT). Antibiotics target specific structures of the bacterial cell, e.g., components of the cell wall and its biosynthesis, the protein biosynthesis machinery (the ribosomes), or the replication of
the genetic material (DNA). Spontaneous mutations within the genes encoding these structures emerge under lethal stress; such mutations are often detrimental under normal conditions. However, if a specific mutation hampers the activity of the antibiotic, e.g., by rendering the binding site on the target less accessible, the mutation provides a fitness advantage and will be inherited by the daughter generations. The "wild-type" bacteria, which do not harbor the mutation, will succumb to the antibiotic, while the mutants can withstand the treatment. By analogy, the uptake of foreign DNA through HGT can lead to the acquisition of genes that provide resistance against certain antibiotics. These genes are often encoded on mobile genetic elements, which can be found outside of the bacterial chromosomes as circular DNA molecules (plasmids), move within the chromosome as transposons (i.e., transposable elements), or reside within the genetic material of bacteriophages, which are viruses that infect and replicate within bacteria.
Modes of Antimicrobial Resistance
Bacteria can show distinct levels of resistance: (1) susceptibility, where bacteria succumb when exposed to a given antibiotic; (2) low-level resistance, where bacteria can resist low doses of the antibiotic; (3) high-level resistance, where bacteria are able to resist high doses of the antibiotic, often due to the presence of a sophisticated defense mechanism; (4) multiple resistance, where bacteria possess resistance against multiple antibiotics and are often referred to as "superbugs"; and (5) intrinsic resistance, where the target structure of the antibiotic is either not present or not within reach of the antibiotic. The mechanisms of antibiotic resistance are manifold but can be roughly categorized into three basic forms:
1. Modification of the target structure.
The presence of point mutations within the genes encoding the target structures, or the presence of genes encoding alternative targets with lowered affinity for the antibiotic, leads to resistance.
2. Inactivation of the antibiotic. Specific enzymes can cleave and thus destroy the
antibiotic. Other enzymes can add small chemical groups to the antibiotic compound and thereby cause inactivation.
3. Export of the antibiotic out of the bacterial cell. Bacteria possess different sets of transporters, which can pump the antibiotic out of the cell.
Further nonclassical or rarely encountered mechanisms are:
1. Metabolic bypass. Some bacteria can take up specific precursor molecules, allowing them to use unusual metabolic pathways. While the metabolism of common bacteria is inhibited, these bacteria, as well as "auxotrophic mutants" of other strains (i.e., mutants with nutritional requirements that differ from those of the "wild types"), are not affected.
2. Preventing the antibiotic from reaching its target. Some bacteria can modify their cell envelope, e.g., by incorporating small chemical groups that alter its charge and thus repel antibiotics with the same net charge. More common is the formation of biofilms, large matrices consisting of sugars, DNA, proteins, and lipids, in which the bacterial cells are embedded and thus become hard to reach for antibiotics.
3. Persister formation. So-called persisters are bacterial cells in a dormant state. By lowering their metabolic activity to a minimum, persisters can survive otherwise lethal antibiotic concentrations (Munita and Arias 2016).
The Spread of Antimicrobial Resistance
The emergence and development of antibiotic resistance are natural phenomena, explained by the necessity for antibiotic-producing bacteria to defend themselves against their very own products and by genetic adaptation through random mutations during prolonged periods of stress. However, the sharp increase in bacterial infections with strains that possess antibiotic resistance mechanisms is mostly due to human impact. Overuse of certain
classes of antibiotics and their inappropriate prescription against diseases that cannot be cured by antibacterial treatment are problematic. From 2000 to 2010, the application of antibiotics in human medicine increased by 35%. The BRICS states alone are responsible for 76% of this increase (Van Boeckel et al. 2014). Diseases such as the common cold, influenza, acute bronchitis, or sore throats are often caused by viral infections, not by bacteria; nevertheless, in up to 30% of such cases, physicians in the USA wrongfully prescribe antibiotics (Hersh et al. 2016). Additionally, patient compliance, the degree to which the patient follows the prescribed treatment, also poses a problem. If the medication is not taken through to the end of therapy, typically because the patient feels better after a few days, the resulting subinhibitory concentration of the antibiotic in the patient's body may promote the development of resistance. The absence of regulatory guidelines limiting the prescription of antibiotics, or even the possibility of buying antibiotics in the supermarket in some countries, strongly contributes to the indiscriminate use of these drugs. However, antibiotic resistance emerges not only in humans but also in animals. Antibiotics are used in animal husbandry to promote the health of the livestock. Additionally, some antibiotics exert a growth-promoting effect, leading to faster growth of the animals. According to some calculations, the amount of antibiotics used in animal husbandry is four times higher than the amount used to treat humans. This favors the emergence of bacteria resistant to the given antibiotics as well as the spread of resistance to other bacterial species within the ecosystem. At the end of the cycle, antibiotics and resistant bacteria end up in sewage treatment plants, which have been identified as incubation vessels that accelerate the evolution and, ultimately, the spread of antibiotic resistance (O'Neill 2016).
ESKAPE and Other Pathogens
Especially problematic is the emergence of bacteria with multiple resistances in hospitals. Among these, six pathogens cause the majority of nosocomial infections (i.e., infections originating in hospitals): Enterococcus faecium,
Staphylococcus aureus, Klebsiella pneumoniae, Acinetobacter baumannii, Pseudomonas aeruginosa, and different species of Enterobacter – known as the ESKAPE pathogens. Only a few effective antibiotics are left to combat infections with these bacteria, e.g., vancomycin, daptomycin, or colistin. However, high concentrations of colistin are toxic to the kidneys and the nervous system, further complicating the treatment of elderly and severely ill patients (Pendleton et al. 2013). Another bacterial pathogen with increasing resistance to current state-of-the-art treatment is Mycobacterium tuberculosis, which accounted for 490,000 infections with multidrug-resistant strains in 2016. Extensively drug-resistant isolates (XDR-TB), which withstand even second-line drugs, spread rapidly and were reported by 123 nations in 2016 (WHO 2017). Not only bacteria but also viruses and protozoans show increasing resistance to drugs. Poor treatment compliance boosts the emergence of resistant HIV strains. Similarly, the major pathogens causing malaria, Plasmodium falciparum and P. vivax, are increasingly difficult to treat with the former gold-standard therapeutics (Menard and Dondorp 2017).
Conclusion
Infections with antimicrobial-resistant pathogens, even when not deadly for the patient, result in high economic costs. Treatments are prolonged, longer hospital stays are needed, doctors and healthcare personnel must be trained, and specialized infection units with isolation rooms must be maintained. As of 2014, this amounted to up to $20 billion in direct costs and a further $35 billion in indirect costs in the USA alone. By 2050, healthcare costs are estimated to rise by between 6% and 25%, resulting in GDP losses worth $1 trillion and a further increase in AMR-related deaths (O'Neill 2016). Tackling antimicrobial resistance and the spread of resistant pathogens is a global challenge that must be addressed at different levels: the awareness of the public and of policy-makers in regard to hygiene and the spread of infections has to be improved; the misuse of antibiotics in the agricultural industry has to be reduced; novel anti-infectives and alternatives, such as vaccines, have to be developed; incentives for the pharmaceutical industry to invest in antimicrobial research have to be created; and, in hospitals, antimicrobial stewardship programs must be expanded and the number and salaries of healthcare practitioners increased.
Cross-References
▶ Antibiotics

References
Hersh, A. L., Fleming-Dutra, K. E., Shapiro, D. J., Hyun, D. Y., & Hicks, L. A. (2016). Frequency of first-line antibiotic selection among US ambulatory care visits for otitis media, sinusitis, and pharyngitis. JAMA Internal Medicine, 176(12), 1870. https://doi.org/10.1001/jamainternmed.2016.6625.
Menard, D., & Dondorp, A. (2017). Antimalarial drug resistance: A threat to malaria elimination. Cold Spring Harbor Perspectives in Medicine, 7(7), a025619. https://doi.org/10.1101/cshperspect.a025619.
Munita, J. M., & Arias, C. A. (2016). Mechanisms of antibiotic resistance. Microbiology Spectrum, 4(2). https://doi.org/10.1128/microbiolspec.VMBF-0016-2015.
O'Neill, J. (2016). Antimicrobial resistance: Tackling a crisis for the health and wealth of nations. Review on Antimicrobial Resistance. Retrieved from https://amr-review.org/Publications.html
Pendleton, J. N., Gorman, S. P., & Gilmore, B. F. (2013). Clinical relevance of the ESKAPE pathogens. Expert Review of Anti-Infective Therapy, 11(3), 297–308. https://doi.org/10.1586/eri.13.12.
Van Boeckel, T. P., Gandra, S., Ashok, A., Caudron, Q., Grenfell, B. T., Levin, S. A., & Laxminarayan, R. (2014). Global antibiotic consumption 2000 to 2010: An analysis of national pharmaceutical sales data. The Lancet Infectious Diseases, 14(8), 742–750. https://doi.org/10.1016/S1473-3099(14)70780-7.
WHO. (2017). Global tuberculosis report 2017. ISBN 978-92-4-156551-1.

Further Reading
Center for Disease Dynamics, Economics & Policy. (2015). State of the world's antibiotics, 2015. Retrieved from https://www.cddep.org/publications/state_worlds_antibiotics_2015/

Anti-piracy Cooperation
Senia Febrica
American Studies Center, Universitas Indonesia, Jakarta, Indonesia

Keywords
Anti-piracy · Counter-piracy · Pirate
Definition
Article 101 of the 1982 United Nations Convention on the Law of the Sea (UNCLOS) defines piracy as:
(a) any illegal acts of violence or detention, or any act of depredation, committed for private ends by the crew or the passengers of a private ship or a private aircraft, and directed: (i) on the high seas, against another ship or aircraft, or against persons or property on board such ship or aircraft; (ii) against a ship, aircraft, persons or property in a place outside the jurisdiction of any State;
(b) any act of voluntary participation in the operation of a ship or of an aircraft with knowledge of facts making it a pirate ship or aircraft;
(c) any act of inciting or of intentionally facilitating an act described in subparagraph (a) or (b).
The term anti-piracy cooperation refers to bilateral, regional, or international cooperation to prevent or stop acts of piracy as defined in Article 101 of UNCLOS.
Introduction
The earliest recorded attempt to deal with piracy comes from the ancient Greeks. Thucydides noted that Minos of Crete was the first to build a powerful navy, control the seas, and clear them of pirates as far as he could in order to secure his commercial interests (Ormerod 1924, p. 80). Efforts to address piracy continue to this day. In 2010, it was estimated that piracy off the coast of Somalia alone was costing the shipping industry $3.2 billion per year in excess insurance costs and approximately $238 million in ransom payments, while the cost of anti-piracy naval operations was around $2 billion (ICS n.d.). Despite a general acknowledgment of the threat of piracy, there was limited concerted international cooperation to address this security concern prior to 9/11. In the years following the 9/11 attacks, states, international organizations, and business communities began to anticipate a number of worst-case piracy scenarios in busy shipping lanes, including collisions and explosions of vessels that could block key waterways, damage undersea pipelines and communication cables, cause fatal marine pollution, or bring about a massive loss of life. The 9/11 attacks thus raised the profile of the threats posed by piracy, and most maritime security cooperation initiatives to halt piracy were launched in the years after 9/11. Today there is extensive regional and multilateral anti-piracy cooperation carried out under the auspices of the United Nations (UN), the International Maritime Organization (IMO), and the European Union, to mention a few. In order to explain global anti-piracy cooperation, this entry discusses states' duty to cooperate in the repression of piracy and the existing international and regional cooperation arrangements to combat piracy.
Duty to Cooperate
The customary international law on the repression of piracy is codified in the 1982 United Nations Convention
on the Law of the Sea (UNCLOS). Article 100 of UNCLOS articulates the duty of all states to cooperate in the repression of piracy: "All States shall cooperate to the fullest possible extent in the repression of piracy on the high seas or in any other place outside the jurisdiction of any State." Article 100 is deemed unique for two reasons. First, it is the only article in UNCLOS that makes specific and explicit reference to "the duty to cooperate" (Gottlieb 2014, p. 306). No other cooperation-related section or provision of UNCLOS – such as Section 2 (Global and Regional Cooperation) of Part XII (Protection and Preservation of the Marine Environment) or Section 2 (International Cooperation) of Part XIII (Marine Scientific Research), to mention a couple – uses the term "duty" in its title (Gottlieb 2014, p. 306). The precise wording of Article 100 means that states have not only rights but also obligations to address piracy. Second, the provision uses the strongest wording describing an obligation under UNCLOS, as it requires states to cooperate "to the fullest possible extent" (Gottlieb 2014, p. 307). In other words, compliance with Article 100 requires sincere, concerted, and proactive efforts among states to cooperate at the international level to repress maritime piracy (Gottlieb 2014, p. 312). UNCLOS's guiding principle of a duty to cooperate in the repression of piracy is echoed in other international conventions. Article 13 of the 1988 Convention for the Suppression of Unlawful Acts against the Safety of Maritime Navigation (SUA Convention) requires, for instance, that states parties cooperate in the prevention of unlawful acts at sea by taking all practicable measures to prevent those offenses within or outside their territories and by "exchanging information ... and coordinating administrative and other measures ... to prevent the commission of offences" (Article 13(1) of the SUA Convention). An array of International Maritime Organization resolutions on piracy and armed robbery against ships in waters off the coast of Somalia have repeatedly urged the governments
in the region to cooperate to prevent, deter, and suppress piracy and armed robbery against ships (e.g., see IMO Resolution A.1002(25), adopted on November 29, 2007, and the Djibouti Code of Conduct Concerning the Repression of Piracy and Armed Robbery against Ships in the Western Indian Ocean and the Gulf of Aden, adopted on January 29, 2009).
International and Regional Anti-piracy Cooperation
International and regional anti-piracy cooperation has been carried out through a range of mechanisms by the United Nations Security Council, the International Maritime Organization, the Maritime Organization for West and Central Africa, the participants of the Djibouti Code of Conduct, the European Union, the North Atlantic Treaty Organization, the Association of Southeast Asian Nations, and the Regional Cooperation Agreement on Combating Piracy and Armed Robbery against Ships in Asia. To convey the scope and depth of anti-piracy cooperation worldwide, this section outlines the work carried out by these international and regional organizations.
United Nations Security Council (UNSC)
The UNSC has passed a number of resolutions to address piracy, including ten resolutions on piracy off the coast of Somalia and two resolutions on piracy in the Gulf of Guinea (UN 2012). These resolutions help form the foundation of cooperation for foreign and international actors seeking to address piracy in high-risk areas such as Somalia's waters (Bento 2011, pp. 427–428). The UNSC resolutions are neither permanent nor applicable worldwide: they are limited in geographical focus and have a short shelf life (Bento 2011, p. 428). For instance, the first resolution issued by the UNSC on piracy off the coast of Somalia, Resolution 1816 (2008), decides that:
for a period of six months ... states cooperating with the Transitional Federal Government of Somalia in the fight against piracy ... may enter the territorial waters of Somalia ... and use, within the territorial waters of Somalia ... all necessary means to repress acts of piracy and armed robbery.
Resolution 1816 was thus limited to Somalia's territorial waters for a specific period of time and, more importantly, was not to be extended to the territorial waters of other countries. Despite these limitations, the UNSC has played a crucial role in informing international anti-piracy cooperation by urging states and regional organizations to take action and protect commercial shipping and humanitarian maritime convoys transiting through areas at high risk of piracy. The EU's Operation Atalanta and NATO's Operations Allied Provider, Allied Protector, and Ocean Shield were conducted in support of the UNSC resolutions on piracy off the coast of Somalia.
International Maritime Organization (IMO)
The IMO is the United Nations specialized agency responsible for setting global standards for the safety and security of shipping (IMO 2017a). The IMO has issued incident reports on piracy and armed robbery attacks against ships worldwide since 1982 (IMO 2017b). The IMO's anti-piracy cooperation activities started in 1983, when it adopted a resolution on measures to prevent piracy and armed robbery against ships (IMO Resolution A.545(13), adopted on November 17, 1983, as cited in Nanda 2011, p. 188). IMO-sponsored meetings have contributed to the establishment of a number of subregional and regional anti-piracy arrangements, including the 2008 Sub-Regional Coast Guard Network for the West and Central African Regions under the auspices of the Maritime Organization for West and Central Africa (MOWCA) and the 2009 Djibouti Code of Conduct Concerning the Repression of Piracy and Armed Robbery against Ships in the Western Indian Ocean and the Gulf of Aden (Djibouti Code of Conduct) (Nanda 2011, p. 188).
The Sub-Regional Coast Guard Network for the West and Central African Regions is a joint initiative between the IMO and MOWCA to enhance cooperation among coastal states in these regions in dealing with piracy and other illicit activities in the area. The MoU on the implementation of the IMO/MOWCA Sub-Regional Coast Guard Network was signed by 15 states in July 2008. The MoU divides the regions into four coast guard zones, establishes a coordinating center for each zone (Zone I, Dakar, Senegal; Zone II, Abidjan, Côte d'Ivoire; Zone III, Lagos, Nigeria; and Zone IV, Pointe-Noire, Congo), and provides guidance on the operation of the Coast Guard Network in times of crisis and peace (OMAOC/MOWCA 2011). The Djibouti Code of Conduct was adopted under the auspices of the IMO and was signed by 20 countries on January 29, 2009. Participating states have agreed to cooperate "to the fullest possible extent in the repression of piracy" through information-sharing, interdicting ships or aircraft engaged in piracy, apprehending and prosecuting suspected pirates, and facilitating the treatment and repatriation of victims of piracy (Article 2 of the Djibouti Code of Conduct). The Djibouti Code of Conduct also laid the foundation for a multi-donor voluntary fund to support counter-piracy capacity-building in the Western Indian Ocean and the Gulf of Aden. As of August 2015, Denmark, France, Japan, Malta, the Marshall Islands, the Netherlands, Norway, the Republic of Korea, Saudi Arabia, Shipowners Bahrain, and the United Nations Trust Fund had contributed US$18,266,365.08 to the Djibouti Code of Conduct Trust Fund (IMO Maritime Safety Division 2015, p. 4). The IMO has also developed guidance for states and the shipping industry on measures to deter piracy attacks and investigate offenses (IMO 2015).
In order to prevent piracy attacks, in May 2011, the IMO’s Maritime Safety Committee (MSC) adopted Resolution MSC.324(89) on the Implementation of Best Management Practice Guidance. The resolution urges merchant shipping to take necessary measures to develop effective
self-protection from pirate attacks. At a minimum, these measures include providing ships' masters with updated information before navigating through a high-risk area, registering ships with the Maritime Security Centre Horn of Africa (MSCHOA), reporting to United Kingdom Maritime Trade Operations (UKMTO) Dubai, and implementing all recommended preventive and defensive measures on board (IMO 2017b). As part of its efforts to facilitate the criminal investigation of piracy, the IMO introduced the Code of Practice for the Investigation of Crimes of Piracy and Armed Robbery against Ships on December 2, 2009. The Code of Practice recommends that states take all necessary measures to establish their jurisdiction over the offense of piracy, including adjusting national legislation to enable the apprehension and prosecution of pirates; implementing "national legislative, judicial and law enforcement actions" to enable them to "receive, prosecute or extradite" pirates; and considering "appropriate penalties when drafting legislation on piracy" (IMO 2009). In May 2011, the IMO circulated the Guidelines to Assist in the Investigation of the Crimes of Piracy and Armed Robbery against Ships, which detail the actions and procedures that investigators of crimes of piracy and sea robbery should follow in collecting evidence (IMO 2011).
European Union (EU)
In December 2008, the EU launched the EU Naval Force Atalanta (EU-NAVFOR) to protect vessels of the World Food Programme (WFP) delivering food aid to the Somali people and the African Union Mission in Somalia, deter piracy and armed robbery against ships in the area, monitor fishing activities off the coast of Somalia, and strengthen maritime security and capacity in the region (EU-NAVFOR 2017b). Operation Atalanta is conducted in accordance with United Nations Security Council Resolutions 1816 (2008), 1838 (2008), 1846 (2008), 1851 (2008), 1897 (2009), 1918 (2010), 1950 (2010), and 2020 (2011) (United Nations 2012).
The operation covers a geographical area of 4,700,000 square nautical miles (around 8,700,000 km2), comprising the Southern Red Sea; the Gulf of Aden; the Indian Ocean, including the Seychelles, Mauritius, and the Comoros; and the Somali coastal territory, as well as Somali territorial and internal waters (EU-NAVFOR 2017b). Participating states in Operation Atalanta are EU member states and a number of non-EU countries, including Norway, Montenegro, Serbia, Ukraine, and New Zealand. States participate in this anti-piracy cooperation by contributing naval vessels, Maritime Patrol and Reconnaissance Aircraft, Vessel Protection Detachment teams, and military and civilian staff working at the operation headquarters in Northwood, United Kingdom. Typically, EU-NAVFOR Somalia consists of approximately 1,200 personnel, 4–6 surface combat vessels, and 2–3 Maritime Patrol and Reconnaissance Aircraft. The EU member states bear costs of €8 million per year to finance Operation Atalanta. Since the operation was first launched in 2008, EU-NAVFOR has claimed a 100 percent success rate in protecting WFP vessels delivering humanitarian aid from Mombasa to Somalia (EU-NAVFOR 2017a, b). As a result of Operation Atalanta, the EU has also transferred suspected pirates for prosecution and conviction to the Republic of Seychelles, Mauritius, and Kenya (EU-NAVFOR 2017b). As of December 2013, the EU had arrested and transferred 149 suspected pirates for prosecution (European Parliament 2013). On November 28, 2016, the Council of the European Union extended the mandate of the anti-piracy operation until December 2018 (EU-NAVFOR 2017b). In order to help the justice systems in Kenya, the Seychelles, and Mauritius cope with the extra requirements associated with the prosecution and detention of suspected pirates, a joint EU and United Nations Office on Drugs and Crime program was launched in 2009.
The support program for Kenya was designed for 24 months (€1.75 million), and a similar program was also launched for the Seychelles (€0.78 million) and Mauritius (€1.08 million) (European Parliament 2013).
North Atlantic Treaty Organization (NATO)
NATO's involvement in international anti-piracy cooperation in the Gulf of Aden, off the Horn of Africa, and in the Indian Ocean began in 2008. NATO carried out its first anti-piracy mission, Operation Allied Provider, from October to December 2008. Allied Provider was a temporary operation to provide naval escorts to World Food Programme (WFP) vessels and patrol the waters around Somalia (NATO 2016). NATO conducted the operation in answer to a request made by UN Secretary-General Ban Ki-moon on September 25, 2008, and in support of UN Security Council Resolutions 1814, 1816, and 1838 (NATO 2016). With the increase in pirate attacks, NATO launched Operation Allied Protector (March–August 2009) to curb piracy in the Gulf of Aden and off the Horn of Africa. From August 2009 until December 2016, NATO led a further anti-piracy mission, Operation Ocean Shield, which covered the area off the Horn of Africa, including the Gulf of Aden and the Western Indian Ocean up to the Strait of Hormuz (NATO 2017). As part of the operation, NATO vessels monitored shipping activities off the coast of Somalia, provided naval escorts to commercial ships, pursued and stopped suspected pirate ships, intervened in hijackings, transferred pirates or suspected pirates to designated national law enforcement agencies, and increased cooperation with other anti-piracy operations in the area, such as the EU's Operation Atalanta and the US-led Combined Task Force 151, to mention a few (NATO 2017). States contributed ships and maritime patrol aircraft to the NATO Standing Maritime Groups, which then assigned a number of ships to the operation. NATO terminated Operation Ocean Shield on December 15, 2016.
Association of Southeast Asian Nations (ASEAN)
Regional cooperation in Southeast Asia against piracy and armed robbery against ships is primarily conducted under two ASEAN forums: the
ASEAN Regional Forum (ARF) and the ASEAN Maritime Forum (AMF) (Jailani 2005, p. 56; Indonesian MFA 2009a, p. 1). For some years, discussion of piracy and sea robbery in the ARF was carried out through ad hoc activities and subsumed under a general discussion of transnational crimes (email correspondence with ASEAN Secretariat Security Cooperation Officer, ASEAN Political Security Community Department, June 30, 2010). A leap forward took place in 2003, when participating states endorsed the ARF Statement on Cooperation against Piracy and Other Threats to Maritime Security during the 10th ARF meeting in Phnom Penh. Since then, the ARF has conducted various meetings to discuss maritime security and carry out maritime exercises (Indonesian MFA 2009a, pp. 14–21; Jailani 2005, p. 69). The ARF Statement on Cooperation against Piracy and Other Threats to Maritime Security requires participating states to cooperate at the bilateral and multilateral levels to combat armed robbery against ships; consider IMB proposals on prescribed traffic lanes for large supertankers with naval escort; provide technical and capacity-building assistance to countries that need help; share information; develop regional anti-armed robbery training; encourage member states' shipping communities to report incidents to the relevant coastal states; review progress on efforts to combat sea robbery; establish a legal framework for regional cooperation to combat piracy and armed robbery against ships; and welcome the IMO discussion pertaining to the delivery of criminals who have committed crimes on a ship on the high seas or in the EEZ (ASEAN 2003). Outside of the ARF, Indonesia, the largest country in Southeast Asia, drove forward the proposal for the establishment of the ASEAN Maritime Forum (AMF) in 2005 (Indonesian MFA 2009a, b).
The AMF is designed to improve the region's confidence-building measures and capacity-building, and, in the long run, the AMF is expected to become a maritime dispute settlement forum for the region (Indonesian MFA 2007, pp. 2, 4; Indonesian MFA 2009b). It requires states to exchange information; carry out capacity-
building programs, such as educational and training programs; cooperate in maritime surveillance programs; exchange naval personnel; cooperate to halt transnational crimes, including sea robbery, smuggling, and illegal fishing; improve cooperation among law enforcement agencies; and conduct other collaborative activities not only in the area of maritime security but also in the areas of the marine environment and safety of navigation (ASEAN 2012; Indonesian MFA 2007, p. 63).
Regional Cooperation Agreement on Combating Piracy and Armed Robbery Against Ships in Asia (ReCAAP)
The ReCAAP was established through a negotiation process involving the ten ASEAN member states, three East Asian states (Japan, China, and South Korea), and three South Asian states (India, Bangladesh, and Sri Lanka) (Bateman 2009, p. 118). The agreement was finalized in Tokyo on November 11, 2004, and came into effect on September 4, 2006 (ReCAAP 2011). When the ReCAAP agreement came into force, it became open to accession by other states (Article 18(5) of the ReCAAP agreement; Guilfoyle 2009, p. 58). The agreement requires each state to communicate with the ReCAAP Information Sharing Centre (ISC), respect the confidentiality of information transmitted from the center, ensure smooth communication between its national focal point and other relevant governmental and nongovernmental organizations, oblige its shipping businesses to notify national focal points and the ISC of armed robbery incidents, disseminate alerts to ships upon receiving a warning from the ISC, cooperate in detecting the perpetrators of armed robbery against ships, and participate in rescuing victims of piracy and armed robbery against ships (Articles 9–16 of the ReCAAP agreement 2004). The ReCAAP reserves the right of states to exercise jurisdiction in their own territory (Article 2(5) of the ReCAAP agreement). The agreement also obliges states to endeavor to extradite pirates or sea robbers and to render mutual legal assistance in criminal matters to others, but only after
considering their national laws (Articles 12 and 13 of the ReCAAP agreement). The ReCAAP ISC in Singapore has provided a platform for communication and information exchange between shipping businesses and governments, thus facilitating appropriate responses to piracy and sea robbery attacks in Asia (ReCAAP 2011). In November 2007, the IMO urged East African states to develop a similar agreement to fight piracy (IMO Resolution A.1002(25), adopted on November 29, 2007, as cited in Nanda 2011, p. 190). Information provided by shipping firms to the ReCAAP ISC has played a crucial role in enabling immediate counter-piracy and counter-sea robbery responses (Graham 2014; Bateman and Chan 2014, p. 141). The success of Singaporean and Malaysian naval vessels in disrupting "the siphoning of Ai Maru, a product tanker carrying 1,500 tonnes of MGO" that had been boarded in the South China Sea in November 2014 was attributed to coordination between the shipping industry, the ReCAAP, and Singapore's Information Fusion Centre (Graham 2014). On May 8, 2016, coordination and information-sharing between the shipping industry, the ReCAAP ISC, and law enforcement agencies contributed to defeating a group of armed pirates that had hijacked a tanker called Hai Soon 12 (Panneerselvam 2016). As a result of the shared information, the Indonesian Navy intercepted the Hai Soon 12 and arrested all the pirates on board the vessel on May 9, 2016 (Panneerselvam 2016).
Conclusion
Since the era of the ancient Greeks, states have made various efforts to curb piracy. Customary international law obliges each state to cooperate to the fullest possible extent to address piracy. In the modern era, anti-piracy cooperation gained momentum following the 9/11 attacks, when the international community began to review the vulnerability of maritime transportation and the possibility that criminals could use ships as deadly weapons. As a consequence, cooperation to deal with piracy and armed robbery against ships intensified in a
multilateral format. Such multilateral anti-piracy cooperation is often carried out under the IMO, while the EU, ASEAN, ReCAAP, MOWCA, and the participants of the Djibouti Code of Conduct also play significant roles.
References
ASEAN. (2003). ARF statement on cooperation against piracy and other threats to maritime security. http://www.aseansec.org/14837.htm. Last accessed 16 May 2010.
ASEAN. (2012). Chairman's statement, 3rd ASEAN Maritime Forum. http://www.asean.org/news/asean-statement-communiques/item/chairman-s-statement-3rd-asean-maritime-forum. Last accessed 15 Oct 2012.
Bateman, S. (2009). Piracy and armed robbery against ships in Indonesian waters. In Indonesia beyond the water's edge. Singapore: ISEAS.
Bateman, S., & Chan, J. (2014). Piracy and armed robbery against ships in the South China Sea – possible causes and solutions. In S. Wu & K. Zou (Eds.), Non-traditional security issues and the South China Sea: Shaping a new framework for cooperation (pp. 133–143). Surrey: Ashgate.
Bento, L. (2011). Toward an international law of piracy sui generis: How the dual nature of maritime piracy law enables piracy to flourish. Berkeley Journal of International Law, 29(2), 399–455.
European Parliament. (2013). Factsheet: The EU fight against piracy in the Horn of Africa. http://www.europarl.europa.eu/meetdocs/2009_2014/documents/sede/dv/sede010414factsheetcounterpiracy_/sede010414factsheetcounterpiracy_en.pdf. Last accessed 16 Dec 2017.
European Union Naval Force (EU-NAVFOR). (2017a). EU NAVFOR ninth anniversary. http://eunavfor.eu/eunavfor-ninth-anniversary/. Last accessed 16 Dec 2017.
European Union Naval Force (EU-NAVFOR). (2017b). Mission. http://eunavfor.eu/mission/. Last accessed 16 Dec 2017.
Gottlieb, Y. (2014). Combating maritime piracy: Inter-disciplinary cooperation and information sharing. Case Western Reserve Journal of International Law, 46(1), 303–333.
Graham, E. (2014, November 24). Siphoning confidence: Piracy and fuel theft. Straits Times. http://www.straitstimes.com/opinion/siphoning-confidence-piracy-and-fuel-theft. Accessed 23 Aug 2016.
Guilfoyle, D. (2009). Shipping interdiction and the law of the sea. Cambridge, UK: Cambridge University Press.
ICS. (n.d.). The economic cost of piracy. Oceans Beyond Piracy program. http://www.ics-shipping.org/docs/default-source/Piracy-Docs/the-economic-cost-of-piracy.pdf?sfvrsn=0. Accessed 28 Feb 2019.
Indonesian Ministry of Foreign Affairs (MFA). (2007). Pertemuan Kelompok Ahli: Optimalisasi Kerjasama
A
Kelautan Intra ASEAN Melalui Pembentukan ASEAN Maritim Forum (Bandung, 21–22 Maret 2007). Jakarta: Badan Pengkajian dan Pengembangan Kebijakan.
Indonesian Ministry of Foreign Affairs (MFA). (2009a). ASEAN Regional Forum: The first inter-sessional meeting on maritime security, Surabaya, Indonesia, 5–6 March 2009. Jakarta: Directorate General of Asia Pacific and African Affairs.
Indonesian Ministry of Foreign Affairs (MFA). (2009b). Background singkat pembentukan ASEAN Maritime Forum. Made available to the author through email correspondence with the Head of Security Division, Directorate of ASEAN Political and Security Cooperation, Heru H. Subolo (Jakarta).
International Maritime Organization (IMO). (2009). Resolution A.1025(26): Code of practice for the investigation of crimes of piracy and armed robbery against ships. Adopted 2 December 2009. http://www.imo.org/en/OurWork/Security/PiracyArmedRobbery/Guidance/Documents/A.1025.pdf. Last accessed 17 Dec 2017.
International Maritime Organization (IMO). (2011). MSC.1/Circ.1404: Guidelines to assist in the investigation of the crimes of piracy and armed robbery against ships. http://www.imo.org/en/OurWork/Security/PiracyArmedRobbery/Guidance/Documents/MSC.1Circ.1404.pdf. Last accessed 17 Dec 2017.
International Maritime Organization (IMO). (2015). Resolution A.1097(29) adopted on 25 November 2015: Strategic plan for the Organization (for the six-year period 2016–2021). http://www.imo.org/en/About/strategy/Documents/A%2029-Res.1097%20%20Strategic%20Plan%20for%202016-2021.pdf. Last accessed 17 Dec 2017.
International Maritime Organization (IMO). (2017a). Introduction to IMO. http://www.imo.org/en/About/Pages/Default.aspx. Last accessed 16 Dec 2017.
International Maritime Organization (IMO). (2017b). Piracy and armed robbery against ships. http://www.imo.org/en/OurWork/Security/PiracyArmedRobbery/Pages/Default.aspx. Last accessed 17 Dec 2017.
Jailani, A. (2005). [Staf Direktorat Perjanjian Politik Keamanan Kewilayahan Kementerian Luar Negeri]. "Pokok-Pokok Masalah Kebijakan Luar Negeri Tentang Issue Keamanan Laut dan Kewilayahan Selat Malaka." In Pertemuan Kelompok Ahli: Kebijakan Terpadu Pengelolaan Keamanan Selat Malaka. Jakarta: Badan Pengkajian dan Pengembangan Kebijakan Departemen Luar Negeri Republik Indonesia.
Nanda, V. P. (2011). Maritime piracy: How can international law and policy address this growing global menace? Denver Journal of International Law and Policy, 39(2), 177–207.
North Atlantic Treaty Organization (NATO). (2016). Counter-piracy operations (archived). https://www.nato.int/cps/en/natohq/topics_48815.htm. Last accessed 18 Dec 2017.
North Atlantic Treaty Organization (NATO). (2017). Operation Ocean Shield. https://www.mc.nato.int/missions/operation-ocean-shield.aspx. Last accessed 18 Dec 2017.
Organisation Maritime de l'Afrique de l'Ouest et du Centre/Maritime Organization for West and Central Africa (OMAOC/MOWCA). (2011). Report: 7th session of the Bureau of Ministers, Dakar, 11–13 April 2011. http://www.omaoc.org/application/archivage/Biblio/7eme%20bureau%20anglais.pdf. Last accessed 18 Dec 2017.
Ormerod, H. A. (1924). Piracy in the ancient world. Liverpool: The University Press of Liverpool.
Panneerselvam, P. (2016). 10 years of fighting pirates in Asia: As ReCAAP marks its 10th anniversary, a look back at the achievements. https://thediplomat.com/2016/09/10-years-of-fighting-pirates-in-asia/. Last accessed 22 Apr 2016.
ReCAAP. (2011). About ReCAAP. http://www.recaap.org/AboutReCAAPISC.aspx. Last accessed 9 Jan 2011.
United Nations. (2012). United Nations documents on piracy. http://www.un.org/Depts/los/piracy/piracy_documents.htm. Last accessed 18 Dec 2017.
Further Reading
Bento, L. (2011). Toward an international law of piracy sui generis: How the dual nature of maritime piracy law enables piracy to flourish. Berkeley Journal of International Law, 29(2), 399–455.
Lehr, P. (Ed.). (2007). Violence at sea: Piracy in the age of global terrorism. New York: Routledge.
Nanda, V. P. (2011). Maritime piracy: How can international law and policy address this growing global menace? Denver Journal of International Law and Policy, 39(2), 177–207.
Arab Spring

Samantha Kruber
Monash University, Melbourne, Australia
Keywords
Democratization · Authoritarianism · Middle East
Introduction
The uprisings that swept across the Middle East and North Africa (MENA) region throughout 2010–2011 were optimistically perceived as a
signal that the repressive authoritarian rule that had characterized much of the region was coming to an end. Protesters took to the streets in unprecedented numbers, first in Tunisia, then in Egypt, and eventually throughout much of the region, all determined to have their voices heard on a range of political and economic grievances that they had long endured. Initially, the protest movements appeared to fulfill the promise of change. Long-reigning dictatorships in Tunisia and Egypt were swiftly brought to an end, and assistance from the international community suggested that Libya, too, would shortly see the demise of its authoritarian leader. However, this initial positive trajectory soon took a turn. Violent crackdowns by regime forces in Bahrain, Yemen, Libya, and Syria saw many demonstrators killed by open fire, while others were arrested, tortured, and in some cases killed or disappeared in custody throughout the uprisings. The results of these repressive measures were mixed; in some cases they successfully quashed the protest movements, whereas in others they only encouraged larger and more intense opposition. Although the Arab Spring brought shifts toward democratization in some states, this progress has been overshadowed by the variety of new challenges that have since emerged. These challenges include the failure to achieve cooperation between domestic opposition movements, the endurance of authoritarianism in cases of both regime change and regime survival, intervention by international actors, and violent civil wars that continued to escalate long after the Spring.
Sources of Discontent The Arab Spring took even the most avid observers of the MENA region by surprise. Consequently, many have sought to account for the timing of the uprisings, often pointing to the availability and rising use of social media and to the youth bulge in many of the Arab Spring states. As Haas and Lesch have observed, some of the Arab Spring states that saw particularly powerful and
widespread protests also had quite pronounced youth bulges: people under the age of 25 made up 42% of the population in Tunisia, 48% in Libya, 51% in Egypt, and 57% in Syria (2013, p. 3). Despite increasing levels of education, youth unemployment across the region was high and opportunities few, leading to dissatisfaction among these young populations. The timing and scale of the Spring have also been attributed to the increasing availability and use of new information technology and social media throughout the region, often regarded as the domain of the youth. These new technologies limited the traditional monopoly on information held by states in the region; social media offered an outlet for grievances to be heard and shared, and the website WikiLeaks exposed details of the corruption and extravagance of Arab ruling elites (Cleveland and Bunton 2013, p. 523). However, though in many cases it was the youth who first took to the streets, and social media indeed played a valuable role in organizing and publicizing the protest movements, these factors do not fully account for the scale of the Spring. The protest movements attracted a deeply diverse demographic, and despite many regimes shutting off access to social media and the Internet during the uprisings, the protest movements continued to grow. Rather, the social, political, and economic sources of discontent that drove the uprisings had been enduring features of these states for decades, exacerbated over time by increasing regime power and a range of internal and external economic challenges. Protesters took to the streets demanding political participation and social justice, as well as recognition of their human rights and human dignity.
In many cases, including Tunisia, Egypt, Syria, and Libya, protesters called for the fall of the regime; in others, such as Saudi Arabia, Morocco, and Jordan, leaders were instead faced with demands for reform, including greater representation, an end to corruption, and constitutional checks on monarchical power (Gelvin 2013, p. 239). Further, declining access to, and the rising cost of, basic staples such as water, bread and fuel; poor public services such as health care and education;
stagnant wages; and insufficient job creation to address rising unemployment and underemployment fueled tensions between the ruling elite and their constituencies. These challenges had only worsened for the average citizen following the structural adjustment policies of the late 1980s and 1990s and the 2008 global financial crisis. Occurring in conjunction with the rollback of the state, economic liberalization had been characterized by cronyism, with the benefits of reform going directly to those well connected to the regime and economic opportunities determined by connections to the elite (Cammett et al. 2015, p. 4). As a result, the declining living standards of the average citizen stood in stark contrast to the extravagant wealth of the ruling elite.
The Course of the Uprisings While the sources of discontent were wide-ranging, the undisputed trigger of the uprisings was the self-immolation of Tunisian Mohamed Bouazizi on December 17, 2010, an act fueled by frustration and desperation. This event not only sparked protest movements in Tunisia but caused a ripple effect in states throughout the region. Bouazizi, a 26-year-old street vendor from Sidi Bouzid, had, like many others, long been forced to contend with the harassment and corruption of local authorities. On one such occasion, Bouazizi had his cart and scales confiscated by local police. When he attempted to raise his complaints in the governor’s office, he was refused the opportunity to have his grievances heard. Bouazizi, at this point humiliated, out of options, and feeling powerless, doused himself in petrol and set fire to himself in front of the Sidi Bouzid municipal headquarters. Bouazizi died from his injuries on January 4, 2011, having remained in a coma from the time of his self-immolation until his death. He never saw the revolution he had begun, nor the visit his act prompted from soon-to-be-removed President Zine El Abidine Ben Ali. Following Bouazizi’s self-immolation, protesters took to the streets of Sidi Bouzid, and by December 27, protest movements had emerged
throughout the country and had arrived in the capital, Tunis. Bouazizi’s act of desperation resonated with many young people in Tunisia, especially those in rural areas where unemployment was particularly high. However, the protest movement was not made up only of underemployed youth but of a vast variety of Tunisians demanding change. Lawyers, doctors, shopkeepers, bloggers, hackers, artists, housewives, children, and professors all took to the streets to demand genuine political change (Marzouki 2013, p. 20). Central to these demands was the removal of long-serving President Ben Ali. As the protest movement grew in size and intensity, and its demands for regime change grew louder, it became clear that it was too late for regime promises to address the high cost of living and rising unemployment. The protesters’ demands were for more fundamental change: the removal of Ben Ali, the dissolution of the state security apparatus, and constitutional change (Gana 2013, p. 5). On January 14, 2011, Ben Ali fled to Saudi Arabia; his Prime Minister, Mohamed Ghannouchi, stepped in to replace him but stood down soon after in response to continued protests. Inspired by the events in Tunisia, Egyptian protesters took to the streets on January 25, 2011, demanding their own change of leadership and improved living conditions. Gathered at Cairo’s Tahrir Square, protesters bore signs and chanted “the people want the fall of the regime” – a phrase that, like the revolution itself, had emerged during the Tunisian protests and would resonate throughout the region. Though Egyptian protesters were demanding many of the same changes as their Tunisian counterparts, there were also a number of tensions specific to the Egyptian context. The 30-year reign of Egypt’s President Hosni Mubarak was coming to a close; however, the regime was evidently preparing to pass power directly to Mubarak’s son, Gamal.
This move was poorly received not only by the general population but also by the armed forces. Consequently, when the armed forces were deployed to control the protest movement, soldiers refused to fire at demonstrators so long as their activity remained peaceful, which subsequently
emboldened more protesters to join the demonstrations (Brownlee et al. 2015, p. 73). It was not until February 2, however, in response to extensive violence by Mubarak loyalists against protesters, that the army sided with the protesters. On February 11, at the urging of the Supreme Council of the Armed Forces (SCAF), Mubarak stood down, and the SCAF assumed interim control of Egypt (Barany 2011, p. 28). Inspired by the rapid apparent victories of demonstrators in Tunisia and Egypt, protests soon erupted in virtually every state in the region. These movements varied in size and demands, with protesters in states such as Morocco, Oman, and Jordan demanding political change without the overthrow of the existing regime. Libya, Yemen, Bahrain, and Syria, however, were to see major protest movements call for the removal of long-standing authoritarian leadership, and, along with Tunisia and Egypt, they have come to be considered the key Arab Spring states. In these subsequent uprisings, however, the hopeful outlook set by the Tunisian and Egyptian experiences was to be crushed by state repression, violence, and conflict. In Libya, the first mass protests occurred on February 17, 2011. Protesters gathered in a number of cities across the country, including Benghazi and the capital, Tripoli, calling for the end of Muammar Gaddafi’s 42-year reign. However, the protesters in Libya did not see the same rapid change as their counterparts in Tunisia and Egypt. Unlike in Tunisia and Egypt, where security forces opted to side with the protesters rather than the regime, Gaddafi’s security apparatus remained on side; he was, however, unable to secure the same support from the armed forces. Having previously faced challenges to his power from within the armed forces, Gaddafi had neglected the formal state military and instead invested heavily in a variety of loyal paramilitary forces.
As a result, despite Gaddafi’s efforts to both incentivize and threaten the regular military into cooperating, the state army defected almost in its entirety (Barany 2011, p. 30). On March 17, responding to fears of an impending genocide by Gaddafi and his remaining loyal security forces, the United Nations
Security Council passed Resolution 1973 by a vote of 10 in favor, none against, and 5 abstentions. The resolution called for an immediate ceasefire in Libya, imposed a no-fly zone over Libyan airspace, and authorized all necessary measures to protect civilians and civilian-populated areas under threat of attack (United Nations Security Council Resolution 1973, 2011). On March 19, France, the UK, and the USA launched the first airstrikes on Libya, initiating what was to be a 6-month-long NATO-led operation against regime targets. On October 20, after NATO airstrikes struck his convoy, Gaddafi was captured and killed by rebel forces. Libya’s National Transitional Council (NTC), which by that stage had been widely recognized as the legitimate government of Libya, declared Libya liberated and set an 8-month timeline for elections. In Bahrain, thousands of protesters congregated at the Pearl Roundabout in the state’s capital, Manama, on February 14, 2011, demanding political and economic reform. The population of Bahrain is two-thirds Shi’a, but it is ruled by the Sunni Al-Khalifa monarchy; consequently, although the protesters’ demands largely centered on democratic reform and economic grievances, the narrative surrounding the uprising in Bahrain went on to take a deeply sectarian tone. Protesters were met by violent crackdowns from security forces, and several were killed in efforts to clear them from the roundabout. The indiscriminate force used in the violent clearing of the Pearl Roundabout led ordinary citizens to join the activist core of the protest movement, a development that mirrored events in Egypt, would recur in subsequent uprisings, and gave Bahrain’s uprising the largest turnout in the region on a per capita basis (Lynch 2012, p. 136; Brownlee et al. 2015, p. 85). Despite the large-scale turnout of Bahraini citizens, the Bahraini protest movement was the first of the regional uprisings to be quashed by the regime.
In March, King Hamad bin Isa Al Khalifa invited support from Bahrain’s Gulf Cooperation Council (GCC) neighbors, which led to some 2,000 troops from Saudi Arabia and the United Arab Emirates being rapidly deployed to Manama on March 14. It is important to note, however, that while the GCC forces no doubt served to intimidate protesters, it was the Bahraini
security apparatus that cleared the demonstrators and destroyed the Pearl Roundabout, the iconic site of the protests (Brownlee et al. 2015, p. 86). A state of emergency was declared and martial law imposed, and the following month two of the major political parties representing Bahrain’s Shi’a majority were banned. The uprising in Yemen began in January, with protesters demanding the removal of long-reigning President Ali Abdullah Saleh. Saleh had ruled Yemen for 33 years, first serving as President of North Yemen and then as President of the newly unified state from 1990. Protests took place not only in the capital, Sana’a, but also in provincial cities including Ta’izz, Ibb, and Hudayda. On March 18, government-affiliated snipers fired at demonstrators, killing 50 and injuring many more. It was at this point that the challenge to Saleh’s rule intensified; support for Saleh among the general population diminished further, and dozens of MPs, diplomats, tribal leaders, and military officers withdrew their support for the president (Juneau 2014, p. 382). Initially made up of young people, NGOs, and democracy activists, Yemen’s protest movement came to include a wide range of different, and at times competing, political actors that had long opposed Saleh. These included the secessionist movement in the South, the Houthis in the North, Islamists, and powerful tribes that have continued to play a prominent role in Yemeni political life (Lynch 2012, p. 106). Saleh’s refusal to concede, the persistence of the protest movement, and the increasingly violent character of the uprisings in Yemen led to international efforts to achieve a political solution. The subsequent US-backed GCC plan to achieve a power transition by May 2011 proved ambitious, however; it was not until November of that year that Saleh signed the deal agreeing to cede power to Vice President Hadi in exchange for his personal immunity.
Hadi assumed power in February 2012 in a one-candidate election, a far cry from the democratic victory sought by the Spring. In Syria, the beginning of the uprisings has come to be marked by the imagery of a group of young teenagers spray-painting on a wall the words: “the people want the fall of the regime.” This resulted in their arrest and torture at the hands
of Syria’s mukhabarat (intelligence services). When protesters took to the streets of Daraa demanding the release of these children, security forces responded violently, firing upon and killing a number of them (McMurray and Ufheil-Somers 2013, p. 157). This triggered the first large-scale protests in Syria, which saw some 100,000 protesters in Daraa on March 25. Protests soon emerged elsewhere in Syria, including in Homs, Aleppo, Baniyas, and Hama, demanding an end to the repressive and excessive conduct of the security apparatus and the removal of President Bashar al-Assad. As in other Arab Spring states, these demands stemmed not only from state repression but also from strong economic, political, and social grievances. As the unrest spread across the country, protesters continued to be met with violence from the security forces, and casualties among protesters rapidly mounted. Pro-Assad counterprotests also emerged, demonstrating that although Assad faced significant opposition within the country, many still supported his continued rule. Perhaps hoping that they would gain international support as their Libyan counterparts had, the Syrian opposition protesters were to be disappointed. Though Assad faced sanctions and condemnation from the USA, the EU, and the UN, attempts to intervene in Syria as had been done in Libya were continuously thwarted by vetoes from Russia and China. Complicating matters, the opposition in Syria was diverse and divided. By September 2011, two main opposition groups had emerged, the Syrian National Council (SNC) and the Free Syrian Army (FSA), although both faced criticism for representing foreign interests. With the Libyan intervention under intense scrutiny and the Syrian opposition fragmented, intervention instead came in the form of state backing for both opposition forces and pro-Assad forces.
This has served to fuel the civil war that has raged on for 8 years since the original protests began.
Outcomes: Shifts to Democratization, Enduring Authoritarianism, and Civil War Much as the course of the Spring differed widely between states, so too have the outcomes of the
uprisings. Tunisia, Egypt, Libya, and Yemen each saw the removal of long-reigning leaders. In Tunisia and Egypt, these shifts were domestically driven, whereas in Libya and Yemen, international actors played a critical role in achieving leadership change. Syria and Bahrain have seen authoritarian leaders retain their grip on power; however, for Bahrain this was a decisive victory, whereas in Syria, Assad has endured a lengthy civil war. In the cases of Egypt and Tunisia, the decisive moment that tipped the scales in favor of regime change was the military’s decision to cease its support for the regime. In Tunisia, this was followed by a transition period culminating in the election of the moderate Islamist Ennahdha Party in what has been deemed a free and fair electoral process (Freedom House 2012). In Egypt, by contrast, the military used this opportunity to seize power and has since maintained a strong hold over the country, mirroring many of the repressive tactics of the overthrown Mubarak regime. By the time parliamentary and presidential elections were held, the SCAF had made a range of constitutional changes expanding its own power. In 2013, Mohammad Morsi, the Muslim Brotherhood candidate elected president in 2012, was overthrown in a military coup (Brownlee et al. 2015, p. 212). His replacement, former Defense Minister Abdel Fattah el-Sisi, has since enacted a range of constitutional amendments to cement his hold on power. For Libya and Yemen, regime change and external intervention resulted in instability and civil war. Intense competition for power among political actors in Libya has seen civil war rage on in that country since the removal of Gaddafi. Further, the UN-mandated intervention in Libya faced intense criticism, perceived by many as an act of regime change rather than civilian protection.
This has had significant consequences for the prospect of intervention elsewhere, such as in Syria, with Russia and China vehemently opposing UN-mandated intervention. In Yemen, the power transition from Saleh to Hadi was deeply unsuccessful, and in 2015, Hadi fled to Saudi Arabia following the Houthi takeover of Sanaa. Yemen has since been in a state of civil war between the Houthis and the Yemeni government
in a conflict that is often depicted in sectarian terms. Saudi Arabia formed a US-backed coalition, which has provided extensive support to the government forces, while Iran is accused of providing support to the Houthis. This war has resulted in what the UN has called the worst humanitarian crisis in the world (UN News 2019). Though Assad maintained his hold on power, the Arab Spring also plunged Syria into a violent civil war. Syria’s civil war has been characterized by the diversity of actors involved, spanning the regime and its supporters, the pro-democracy opposition, foreign jihadists, foreign Shi’a militias, and Kurdish forces, as well as the international community, with a number of states including the USA, Russia, the UK, Iran, and the GCC states intervening directly and through support for various actors on the ground. The Syrian Observatory for Human Rights has estimated the death toll since the start of the Syrian conflict at over 511,000 people, while some 6.6 million Syrians have been left internally displaced and another 5.6 million are refugees, many of whom remain in neighboring states, living outside of refugee camps and below the poverty line (Human Rights Watch 2019). Bahrain saw the first and perhaps only instance of decisive regime victory in the face of mass protests. Following the violent clearing of the Pearl Roundabout in March 2011, a state of emergency was declared, and security forces became even more repressive. Massive crackdowns targeted not only those involved in the protest movement but also perceived sympathizers: universities and professional associations were purged; activists, students, and journalists were imprisoned; thousands of people lost their jobs and university appointments; and sectarian tensions were exploited to further divide the population (Lynch 2012, p. 111). Bahrain’s fellow monarchies were not immune to the wave of uprisings that swept the region.
Saudi Arabia, Oman, Kuwait, Morocco, and Jordan all faced domestic protest movements demanding political and economic change, though these shied away from calls for regimes to fall. In the resource-rich Gulf states, the protests were met with economic incentives and moderate political reforms. Morocco too responded with
political reform, while Jordan’s protests resulted in the dismissal of unpopular Prime Minister Samir Al-Rifai (Lynch 2012, p. 121). Importantly, however, none of these reforms or concessions resulted in any fundamental change in the power of the respective leaders of these states.
Conclusion The Arab Spring uprisings of 2011 set in motion a path of change throughout the MENA region. The removal of long-serving authoritarian leaders occurred rapidly in some states, more gradually in others, and not at all in others still. Though political change has occurred in some states, the fundamental nature of that change remains to be seen. In others, violent civil wars continue, drawing international involvement and sparking humanitarian crises. Though the Arab Spring has now passed into what some have termed an Arab Winter, many of the political outcomes triggered by the events of 2011 are yet to be fully realized.
References Barany, Z. (2011). Comparing the Arab revolts: The role of the military. Journal of Democracy, 22(4), 24–35. https://doi.org/10.1353/jod.2011.0069. Brownlee, J., Masoud, T., & Reynolds, A. (2015). The Arab Spring: Pathways of repression and reform. Oxford: Oxford University Press. Cammett, M., Diwan, I., & Richards, A. (2015). A political economy of the Middle East (4th ed.). Boulder: Westview Press. Cleveland, W., & Bunton, M. (2013). A history of the modern Middle East (5th ed.). Boulder: Westview Press. Freedom House. (2012). Tunisia. Retrieved from: https://freedomhouse.org/report/freedom-world/2012/tunisia Gana, N. (2013). Tunisia. In P. Amar & V. Prashad (Eds.), Dispatches from the Arab Spring: Understanding the new Middle East (pp. 1–23). Minneapolis: University of Minnesota Press. Gelvin, J. L. (2013). Conclusion: The Arab world at the intersection of the national and transnational. In M. L. Haas & D. W. Lesch (Eds.), The Arab Spring: Change and resistance in the Middle East (pp. 238–255). Boulder: Westview Press. Haas, M. L., & Lesch, D. W. (2013). The Arab Spring: Change and resistance in the Middle East. Boulder: Westview Press.
Human Rights Watch. (2019). World report 2019: Syria. Retrieved from: https://www.hrw.org/world-report/2019/country-chapters/syria Juneau, T. (2014). Yemen and the Arab Spring. In M. Kamrava (Ed.), Beyond the Arab Spring: The evolving ruling bargain in the Middle East (pp. 373–396). Oxford: Oxford University Press. Lynch, M. (2012). The Arab uprising: The unfinished revolutions of the new Middle East. New York: PublicAffairs. Marzouki, N. (2013). Tunisia’s wall has fallen. In D. McMurray & A. Ufheil-Somers (Eds.), The Arab revolts: Dispatches on militant democracy in the Middle East (pp. 16–23). Indiana: Indiana University Press. McMurray, D., & Ufheil-Somers, A. (2013). Syria. In D. McMurray & A. Ufheil-Somers (Eds.), The Arab revolts: Dispatches on militant democracy in the Middle East (pp. 16–23). Indiana: Indiana University Press. UN News. (2019). Humanitarian crisis in Yemen remains the worst in the world. Retrieved from: https://news.un.org/en/story/2019/02/1032811 United Nations Security Council. (2011). Resolution 1973. Retrieved from: https://www.securitycouncilreport.org/atf/cf/%7B65BFCF9B-6D27-4E9C-8CD3-CF6E4FF96FF9%7D/Libya%20S%20RES%201973.pdf
Further Reading Brownlee, J., Masoud, T., & Reynolds, A. (2015). The Arab Spring: Pathways of repression and reform. Oxford: Oxford University Press. Danahar, P. (2013). The new Middle East: The world after the Arab Spring. London: Bloomsbury. Davis, J. (2013). The Arab Spring and the Arab Thaw: Unfinished revolutions and the quest for democracy. Surrey: Ashgate. Seikaly, M., & Mattar, K. (Eds.). (2015). The silent revolution: The Arab Spring and the Gulf States. Berlin: Gerlach Press.
Arctic Lora Pitman1 and Girish Sreevatsan Nandakumar2 1 School of Cybersecurity, Old Dominion University, Norfolk, VA, USA 2 Graduate Program in International Studies (GPIS), Old Dominion University, Norfolk, VA, USA
Keywords
Climate change · Cooperation · Energy · Environmental security · Indigenous Peoples
Introduction Covered in snow, ice, and permafrost, the picturesque Arctic has long been home to Indigenous cultures and a variety of ecosystems. With the first waves of exploration of the North Pole, it became clear that life for the inhabitants of the Arctic would not remain the same, as interest in acquiring power over the Arctic’s natural resources and potential shipping routes, most notably the Northwest Passage (NWP), only grew over the years. The Arctic states felt the need to negotiate a model of governance for the Arctic that would set an agreed-upon direction for the region’s development. Indigenous Peoples in the Arctic insisted that their voices be heard and their rights as native inhabitants be recognized and respected by the Arctic states. The end of the Cold War opened the door to increased cooperation, and with the Ottawa Declaration the Arctic Council was created. It incorporates the eight Arctic states, six organizations of Indigenous Peoples (permanent participants), and observers. While they all have different interests, the work of the Arctic Council has been widely judged a success. However, rising temperatures, melting ice, and severe threats to the Arctic environment have created new topics for discussion within the Council. The number of interested actors outside the Arctic Council has also increased significantly. For some actors, new opportunities are arising from under the ice cover of the Arctic – access to natural resources, shipping routes, and the profitable development of new industrial sectors. For others, these new conditions present challenges related to food, financial, and health insecurity. The changing Arctic offers venues for more joint efforts to combat shared problems, but also possibilities for clashes of interest, economic and/or military conflict, and growing uncertainty about the future of the northernmost region of the Earth.
The Arctic Council In 1991, the government of Finland, in conjunction with the seven other Arctic states, proposed the Arctic
Environmental Protection Strategy (AEPS). Its goal was to address environmental issues in the Arctic. The intergovernmental work produced as a result of this strategy was so successful, however, that Canada urged the states to create an organization with an expanded scope of issues, such as maritime and economic policy (Bloom 1999). As a result, the Ottawa Declaration, signed on September 16, 1996, created the Arctic Council. The core mission of this intergovernmental body is to provide a constructive dialogue “on issues of sustainable development and environmental protection in the Arctic” (Arctic Secretariat 2020, p. 4). The Arctic Council provides representation to the eight Arctic states, six organizations of Indigenous Peoples (permanent participants), and observers (non-Arctic states, intergovernmental organizations, and nongovernmental organizations), with its work carried out through six working groups. The eight Arctic states are Canada, the Kingdom of Denmark, Finland, Iceland, Norway, the Russian Federation, Sweden, and the United States. The qualifying characteristic of an Arctic state is possessing territory within the Arctic. The Indigenous Peoples’ organizations include the Aleut International Association, the Arctic Athabaskan Council, the Gwich’in Council International, the Inuit Circumpolar Council, the Russian Association of Indigenous Peoples of the North (RAIPON), and the Saami Council. The work of the permanent participants is facilitated by the Indigenous Peoples Secretariat, which has its own board, budget, and working objectives. Its core mission is to support the activities of the permanent participants and improve the representation and collaboration of Indigenous Peoples within the working groups. The official language of the Indigenous Peoples Secretariat, located in Tromsø, Norway, is English, though Russian is also used to ensure effective communication (Arctic Council Secretariat 2020).
The third group of significance in the Council is that of the Observers. It consists of non-Arctic states (France, Germany, the People’s Republic of China, India, Japan, Switzerland, the UK, and others),
intergovernmental and interparliamentary observers, as well as nongovernmental organizations. In 2013, the European Union applied for observer status; while the final decision was still pending as of March 2021, it has been allowed to observe the proceedings. The main principles that Observers must acknowledge relate to respecting the sovereignty of the Arctic states and the rights of the Indigenous Peoples and other inhabitants of the Arctic region, and to having an interest in, and the political and financial capacity to support, the work done by the Arctic Council. The member states chair the Arctic Council on a rotating basis, each for 2 years, beginning with Canada in 1996 and followed by the United States (1998–2000), Finland (2000–2002), Iceland (2002–2004), Russia (2004–2006), Norway (2006–2008), the Kingdom of Denmark (2008–2010), and Sweden (2010–2012). In the second rotation cycle, Iceland will conclude its chairmanship in 2021, and the Russian Federation will begin its own. The programs of the Arctic Council are funded on a voluntary basis by the member states (Bloom 1999). This arrangement has been challenged by researchers, however, who underline the importance of more stable funding for the Arctic Council (Smieszek 2019a). Other scholars focus on the transition that the Arctic Council is trying to make from a forum to an official organization, highlighting the establishment of a Secretariat that began to function in June 2013 (Dong 2017). Although it specifically excludes the area of military security from its scope of work (Declaration on the Establishment of the Arctic Council 1996), the Arctic Council has been successful in producing various agreements in the fields of environmental security and cooperation.
Some examples include the Agreement on Cooperation on Aeronautical and Maritime Search and Rescue in the Arctic (2011), the Agreement on Cooperation on Marine Oil Pollution Preparedness and Response in the Arctic (2013), and the Agreement on Enhancing International Arctic Scientific Cooperation (2017). The need for reforms in the Arctic Council has been recognized by the scholarly community. Among the efforts made by the Arctic Council’s
eight state members is the addendum to the Observer Manual, adopted in 2015, which sought to harmonize the efforts of supplementary groups in the Council, such as working groups, expert groups, and task-force groups, and to take a step toward making the Council a more inclusive space for the discussion of Arctic policies (Knecht 2016; Hossain and Mihejeva 2017). There are different voices within the Arctic Council, expressing either support for or concerns about an expanded role for observers. Regardless, in light of the growing interest among actors in joining as observers and the rapidly changing environmental conditions in the Arctic, it remains crucial to implement reforms in the Council that establish leadership roles focused on collective, rather than national, goals and priorities (Smieszek 2019b).
Geopolitics and Security of the Arctic Celebrating 25 years of cooperation in the Arctic Council, when asked about its success, the Russian Arctic Council senior official, Ambassador Anton Vasiliev, responded that “many of the explanations of why the Arctic Council is so successful comes back to geography and very harsh realities” (Vasiliev 2021). These very same conditions, however, have shaped not only room for collaboration and understanding but also conflicting interests among the different stakeholders in the Arctic. Russia, the country with the most significant presence in the Arctic, has various interests in the region related to geopolitics, the economy, and energy policy. In its national strategy for the Arctic, Moscow has set forth various goals that may create tension between Russia and other Arctic and non-Arctic states. This long-term strategy includes further exploration of the Arctic’s natural resources to satisfy the needs of Russia’s population, as well as national defense objectives supported by a military presence. Nonetheless, another goal that the State Policy of the Russian Federation in the Arctic Region for the Period Until 2020 and Beyond presupposes is entry into bilateral and multilateral agreements with other Arctic states (Pilyavsky 2011).
Arctic
Scholars are not united in how Russia's military posture in the Arctic should best be interpreted. Some see in it no offensive designs, pointing to the presence of Russia's Northern Fleet in the Arctic (Roi 2010). While agreeing that Russia wants to showcase its military strength, other scholars see in Russia's military posture a way to safeguard its economic interests by retaining control over territories rich in natural resources; it thus represents a protectionist rather than an offensive stance (Konyshev and Sergunin 2014). Another factor informing Russia's national strategy for the Arctic is the declining birth rate and the challenge of territorial development in Russia's North. Contrary to this trend, in "Alaska, Iceland, Greenland, and the Arctic regions of Canada, the population grew even faster than in the world and in these countries. The fastest growing region in the Arctic was Nunavut (Canada), the population of which has increased by almost 20% since 2000" (Romashkina et al. 2017). Security issues are particularly important, especially in light of the Ottawa Declaration, which excluded military security from the scope of Arctic issues on which the Council focuses (Declaration on the Establishment of the Arctic Council 1996). One of the largest coastal Arctic nations, Canada has been invested in Arctic governance and cooperation since the very beginning of the Arctic Council. Naturally, its voice plays a significant role in decision-making within this body and with non-Arctic states and entities on issues pertaining to the Arctic. Canada's identity as an Arctic state is an integral part of why its national strategy concerning the region emphasizes the need to protect its sovereignty and the Canadian Indigenous Peoples who live there.
Over the years, interest in exploiting the natural resources of the Canadian North has posed a complex dilemma involving Canadian Indigenous Peoples' rights and their socioeconomic welfare. Between 2006 and 2009, Canada adopted more militaristic approaches to protecting its sovereignty, "including new Arctic patrol vessels and more vigorous patrolling, reinforced the government's emphasis on 'hard security' rather than 'human security'
like its predecessors" (Lackenbauer and Lalonde 2017). This approach was severely criticized by the Inuit community, which opposed military investments, arguing that the resources could have been better used for environmental and socioeconomic initiatives directed at the well-being of the Indigenous Peoples of the Canadian North. Another contentious topic, involving the Canadian government and the Indigenous Peoples of the Canadian Arctic on the one hand and the EU on the other, relates to sealing, a traditional source of food and income for many North Canadian communities. In 2009, the European Union imposed a ban on importing and selling seal products, stressing concerns over the practices used to obtain them. Canada insisted that seal hunting and the sealskin trade constitute a sustainable, strictly regulated sector. Increased industrialization and climate change in the Arctic pose serious challenges to the survival of communities inhabiting the region. The Survey of Living Conditions in the Arctic (SLiCA) confirmed a relationship between well-being and job opportunities (both employment and traditional activities such as fishing and hunting), while the loss of food and financial security can worsen the mental health of Indigenous communities and increase suicide rates (Kruse et al. 2008; Lehti et al. 2009). The United States has been perceived as Canada's main partner on shared issues and concerns in the region. Washington's Arctic policy has remained largely unchanged over the years, confirming six core goals across the presidencies of Bush, Clinton, and Obama.
These six principles include adherence to national security objectives, protection of the environment and biological diversity in the region, sustainable economic development, enhanced cooperation with the other Arctic nations, inclusion of the voices of Indigenous communities in the Arctic, and, lastly, encouragement of scientific research on environmental issues (Arnaudo 2013). In line with some of these stated goals, in 2016, Barack Obama and Justin Trudeau issued a United States-Canada Joint Statement on Climate, Energy, and Arctic
Leadership, which was also consistent to a large extent with the European Union's policy toward the region (Lackenbauer and Lalonde 2017). However, even this relationship has had tumultuous moments regarding the Northwest Passage (NWP) and the Beaufort Sea boundary. In 1969, the S.S. Manhattan, an icebreaking tanker owned by the US Humble Oil and Refining Company, crossed the NWP with the mission of exploring whether there was a viable shipping route to transport natural resources from the Beaufort Sea. In response, Canada raised the issue of potential pollution in Arctic areas at the United Nations, and as a consequence, coastal states, including Canada, were allowed to impose measures intended to reduce marine pollution from vessels within their exclusive economic zones (Gavrilov et al. 2019). Despite some initial resistance, the United States ultimately supported the Canadian-sponsored article. Later, in 1986, Canada officially established control over the NWP and over waters it had claimed in the Arctic Archipelago. While Canada and the United States still held opposing viewpoints on the legality of such claims, the leaders of both countries negotiated a compromise suitable to both sides. According to the 1988 Arctic Cooperation Agreement, all traffic of US icebreakers must be approved by Canadian authorities. This did not change the US position that the NWP is an international strait rather than one over which Canada should have exclusive rights and control. The Nordic states (Denmark, including Greenland, Finland, Iceland, Norway, and Sweden) have a long history of being examined collectively because of their common economic, cultural, and historical features.
At the same time, they differ in their security arrangements with organizations such as the European Union (EU) and the North Atlantic Treaty Organization (NATO): Norway and Iceland are not members of the EU, and Finland and Sweden are not members of NATO. They also
have varying degrees of interest in the Arctic in terms of geography and economy: some are coastal Arctic states and others are not, and some have far greater economic interests in the Arctic than others. Norway is a coastal Arctic state with significant interests in the region, relying on it for oil and gas extraction and, as a large shipping nation, continuing to weigh the potential development of the Northern Sea Route (NSR). Denmark, which through Greenland has a coastal presence in the Arctic, also has notable interests in natural resource exploration, climate change research, trade, and shipping. One element that can cast doubt on Denmark's Arctic strategy is Greenland's ambition to gain full independence in the near future. If such a move is approved in a referendum, Greenland will become independent from Denmark. However, Greenland's heavy reliance on Denmark for half of its budget is one reason it may decide to maintain autonomy while remaining part of Denmark. Political dynamics in Greenland are also watched closely around the world, as Greenland is rich in rare earth minerals, which are currently supplied primarily by China (Dunning 2021). Finland, a non-coastal Arctic state, presented a comprehensive Arctic strategy in 2013, "stressing mineral development, shipping, shipbuilding, investment in Arctic knowledge, and sustainable development, among other goals" (Lunde 2014). The strategy was updated in 2016 to include needed attention to the harmful effects of climate change, along with "bolstering employment and welfare within the limits of sustainable development" and recognizing the importance of Indigenous communities (Finland's Office of the Prime Minister 2016).
Sweden, an Arctic state that has been very vocal about the need for environmental protections in the Arctic, has interests in, among other areas, fishing, shipping, exploration of natural resources, and the use of reactor-powered vessels such as icebreakers and container ships (Government Offices of Sweden 2020). At the same time,
while being an active policy-making entity for the Arctic, Sweden has had to consider the interests of other actors in the realm of energy security and accommodate them in its appeals for a more environment-friendly approach to development in the Arctic. Sharing interests similar to those of the other Arctic nations, Iceland has also been very active in advocating for common Arctic policymaking and governance. Its priorities for the Arctic include fishing, shipping, and energy-security objectives; evidence of the latter is the license given in 2014 to the China National Offshore Oil Corporation for oil extraction in Icelandic waters (Lunde 2014). China's interest in expanding and subsidizing gas and oil exploration in the Arctic is easily understood in the context of the country's growing need for energy sources. Scholars wonder whether the relationship between Russia and China will strengthen further in the future, as the substantial Beijing-sponsored Yamal LNG energy project in the Russian North may suggest (Weidacher Hsiung 2016). At the same time, other actors, such as the North Atlantic Treaty Organization (NATO), are also showing increased interest in the region. In 2020, NATO joined efforts with the Danish Joint Arctic Command in Greenland (NATO 2020). Whether the Arctic will become a new arena for old Cold War tensions is a question that will greatly affect the security of the region and the stakeholders involved.
Future Issues
The rapidly changing environmental conditions resulting from global warming threaten to unlock a wide range of new challenges and opportunities for international actors, and thus to endanger stability and peace in the Arctic region. The eight Arctic states may have to face disputes engendered by the clash between their "sovereign rights and jurisdiction over their land, internal waters, territorial seas, exclusive economic zones (EEZs), and continental shelves" (CFR 2018), on the one hand, and the regulations of the Convention on the Law of the Sea (UNCLOS) and the
international law that guarantees "all states to enjoy the rights of navigation, overflight, fishing, scientific investigation, and resource exploration and exploitation, including in parts of the Arctic Ocean," on the other (CFR 2018). For instance, one of the most consequential issues for the future is the melting ice, which can open new shipping routes. The Northeast Passage is 37% shorter, and thus cheaper and more convenient to use, than the Suez Canal route (CFR 2018). Scholars argue that the Northeast Passage will be of great geopolitical and strategic importance for the United States and Russia, in the context of supply, and for China, South Korea, and Japan, in the context of demand (Schach and Madlener 2018). Another consideration that may trigger contention between Arctic and non-Arctic countries is the rising prominence of the Arctic as a resource-rich area. Different actors may begin competing fiercely for access to the significant oil and gas reserves in the Arctic by filing competing legal claims over the outer limits of their continental shelves. In terms of defense, old Cold War dynamics, still felt even after the collapse of the Soviet Union, could transfer to the Arctic. In a post-Cold War world focused more on economic factors than on demonstrations of hard power, however, it is likely that tensions between Arctic and non-Arctic actors will take an economic rather than a military shape. It has nevertheless been recognized that competition over the rare earth metals, energy, and new shipping routes that the Arctic offers can create security issues. The inaugural Arctic Chiefs of Defence Staff meeting took place in 2012, but it was never resumed as the annual event it was initially planned to be (Strader 2012).
In February 2021, Anatoly Antonov, Russia's ambassador to the United States, declared Moscow's willingness to resume the annual Arctic Chiefs of Defence Staff meetings in order to continue the dialogue on security in the Arctic (Antonov 2019). This statement came amid concerns that Russia is expanding its military presence in the region.
Some scholars and Arctic nations argue that such an avenue for debating and confronting security issues is not necessary and that creating one would be counterproductive. As mentioned previously, Indigenous populations, especially in Canada, have opposed defense investments in the Arctic. Most of the stakeholders with strategic interests in the Arctic, however, are alarmed by the lack of an entity overseeing security. There is some support for the idea that NATO should have a more significant role in the Arctic security debate, but such a forum is unlikely to deliver unanimous agreement on the balance of power, as some actors (e.g., non-NATO members and NATO rivals such as Russia and China) may not be given an equal role in the negotiations, deliberations, and decisions (Postler 2019). NATO itself, understanding these tensions, lacks a detailed Arctic military strategy and has been hesitant to create one. Russia, traditionally concerned with peripheral buffer zones ever since the beginning of the Cold War (Flake 2014), may feel especially threatened by an increased NATO presence and influence in the Arctic, given that there are no buffer states there. It is not surprising, therefore, that Russia places great importance on the Arctic and its resources for economic development and has shown determination to protect them by asserting its sovereignty through military capabilities. The role of Russia in the Arctic will be a key determinant of the direction in which security issues evolve, especially in light of a potential ad hoc partnership with China. The latter, a non-Arctic country, has ambitious plans for the region supported by its economic power. Despite styling itself a "near-Arctic state," China remains outside the circle of actual Arctic states, which hold the most power in the region. Consequently, it will need Russia to fulfill its Arctic strategic goals.
The “Polar Silk Road” is the first comprehensive document detailing China’s perspective on the Arctic and its own involvement in the array of challenges and opportunities in the region (Tillman et al. 2018). Aware of the situation, in 2019, the Trump administration warned Russia and China
not to undertake any aggressive actions, warning that such actions would entail negative consequences for the parties violating the established rules (McBride 2019). It remains to be seen how the Biden administration will respond to the Arctic's changing climate and its effects, both environmental and political.
Conclusion
For many years, the Arctic was not an object of serious interest for the international community. The melting ice, revealing new opportunities for natural resource exploration, shipping routes, and regional development, gives new meaning to the political agendas not only of the Arctic states and their Indigenous populations but also of non-Arctic states with goals related to the Earth's northernmost area. Furthermore, non-state actors, such as the EU, NATO, the Red Cross and Red Crescent organizations, and research institutes, also seek a voice in Arctic governance. With the changing climate, the issues of the Arctic are threatening to become issues for the international community at large. The globalization of Arctic affairs engenders common problems but also presents avenues for common solutions. How the latter will be achieved depends on the ability of the actors involved to focus on areas of mutual interest and to peacefully resolve the conflicts that will inevitably surface with increased activity in the Arctic. Whether the Arctic Council will emerge as the central body coordinating stakeholders' efforts to pursue their individual interests within the framework of common ones, or another entity will lead initiatives and plans in the Arctic, is a question yet to be answered in the coming decades.
Cross-References
▶ Climate Change and Public Health
▶ Energy Security Strategies
▶ Environmental Security and Conflict
▶ Exploitation of Resources
References
Antonov, A. (2019). Russia stands ready to work together in the Arctic. Arctic Today. https://www.arctictoday.com/russia-stands-ready-to-work-together-in-the-arctic/?wallit_nosession=1
Arctic Council Secretariat. (2020). The Arctic Council: A quick guide (2nd ed.). Arctic Council Secretariat. https://oaarchive.arctic-council.org/bitstream/handle/11374/2424/2019-09-30-A_quick_guide_to_the_AC_online.pdf?sequence=1&isAllowed=y
Arnaudo, R. V. (2013). United States policy in the Arctic. In P. Berkman & A. Vylegzhanin (Eds.), Environmental security in the Arctic Ocean (NATO science for peace and security series C: Environmental security). Dordrecht: Springer. https://doi.org/10.1007/978-94-007-4713-5_9.
Bloom, E. (1999). Establishment of the Arctic Council. The American Journal of International Law, 93(3), 712–722. https://doi.org/10.2307/2555272.
Council on Foreign Relations (CFR). (2018). Arctic governance: Challenges and opportunities (Global governance working paper). https://www.cfr.org/report/arctic-governance
Declaration on the Establishment of the Arctic Council. (1996, September 19). Ottawa, Canada.
Dong, L. (2017). Difficulties facing the Arctic Council reform and domainal governance. China Oceans Law Review, 241–161.
Dunning, S. (2021). 56,000 Greenlanders could shape the future of rare earths. Foreign Policy. https://foreignpolicy.com/2021/03/10/greenland-election-rare-earth-elements-china-us-europe/
Finland's Office of the Prime Minister. (2016). Government policy regarding the priorities in the updated Arctic strategy. Prime Minister's Office, Finland. https://vnk.fi/documents/10616/334509/Arktisen+strategian+päivitys+ENG.pdf/7efd3ed1-af83-4736-b80b-c00e26aebc05
Flake, L. E. (2014). Russia's security intentions in a melting Arctic. Military and Strategic Affairs, 6(1), 99–116.
Gavrilov, V., Dremliuga, R., & Nurimbetov, R. (2019).
Article 234 of the 1982 United Nations convention on the law of the sea and reduction of ice cover in the Arctic Ocean. Marine Policy, 106, 1–6.
Government Offices of Sweden. (2020). Sweden's strategy for the Arctic region. Government Offices of Sweden. https://www.government.se/contentassets/85de9103bbbe4373b55eddd7f71608da/swedens-strategy-for-the-arctic-region
Hossain, K., & Mihejeva, M. (2017). Governing the Arctic: Is the Arctic Council going global? Jindal Global Law Review, 8(1), 7–22.
Knecht, S. (2016). Procedural reform at the Arctic Council: The amended 2015 observer manual. The Polar Record, 52(5), 601–605.
Konyshev, V., & Sergunin, A. (2014). Is Russia a revisionist military power in the Arctic? Defense & Security Analysis, 30(4), 323–335.
Kruse, J., Poppel, B., Abryutina, L., Duhaime, G., Martin, S., Poppel, M., . . . Hanna, V. (2008). Survey of living conditions in the Arctic (SLiCA). In V. Møller, D. Huschka, & A. C. Michalos (Eds.), Barometers of quality of life around the globe (Social indicators research series) (Vol. 33, pp. 107–134). Dordrecht: Springer. https://doi.org/10.1007/978-1-4020-8686-1_5.
Lackenbauer, P. W., & Lalonde, S. (2017). Searching for common ground in evolving Canadian and EU Arctic strategies. In The European Union and the Arctic. Leiden: Brill|Nijhoff. https://doi.org/10.1163/9789004349179_007.
Lehti, V., Niemelä, S., Hoven, C., Mandell, D., & Sourander, A. (2009). Mental health, substance use and suicidal behaviour among young indigenous people in the Arctic: A systematic review. Social Science & Medicine, 69(8), 1194–1203.
Lunde, L. (2014). The Nordic embrace: Why the Nordic countries welcome Asia to the Arctic table. Asia Policy, 18(1), 39–45.
McBride, C. (2019). Pompeo issues warning to China, Russia on Arctic. The Wall Street Journal. https://www.wsj.com/articles/pompeo-issues-warning-to-china-russia-on-arctic-11557153220.
North Atlantic Treaty Organization (2020). NATO begins cooperation with Danish joint arctic command in Greenland [News]. https://mc.nato.int/media-centre/news/2020/nato-begins-cooperation-with-danish-joint-arctic-command-in-greenland
Pilyavsky, V. P. (2011). Russian geopolitical and economic interest (Friedrich Ebert Stiftung briefing paper).
Postler, A. (2019). Bringing NATO into the fold: A dilemma for Arctic security. Georgetown Security Studies Review. https://georgetownsecuritystudiesreview.org/2019/10/28/bringing-nato-into-the-fold-a-dilemma-for-arctic-security/
Roi, M. L.
(2010). Russia: The greatest Arctic power? The Journal of Slavic Military Studies, 23(4), 551–573.
Romashkina, G. F., Didenko, N. I., & Skripnuk, D. F. (2017). Socioeconomic modernization of Russia and its Arctic regions. Studies on Russian Economic Development, 28(1), 22–30.
Schach, M., & Madlener, R. (2018). Impacts of an ice-free northeast passage on LNG markets and geopolitics. Energy Policy, 122, 438–448.
Smieszek, M. (2019a). Costs and reality of reforming the Arctic Council. The Arctic Institute. https://www.thearcticinstitute.org/costs-reality-reforming-arctic-council/.
Smieszek, M. (2019b). The Arctic Council in transition. In D. Nord (Ed.), Leadership for the North (Springer Polar Sciences) (pp. 33–51). Cham: Springer. https://doi.org/10.1007/978-3-030-03107-7_3.
Strader, O. (2012). Arctic chiefs of defence staff conference – An opportunity to formalize Arctic security. The Arctic Institute. https://www.thearcticinstitute.org/arctic-chiefs-defence-staff/
Tillman, H., Yang, J., & Nielsson, E. T. (2018). The polar silk road: China's new frontier of international cooperation. China Quarterly of International Strategic Studies, 4(3), 345–362.
Vasiliev, A. (2021). 25 years of peace and cooperation – Highlights from the Arctic Frontiers panel. The Arctic Council [News]. https://arctic-council.org/en/news/25-years-of-peace-and-cooperation-highlights-from-the-arctic-frontiers-panel/
Weidacher Hsiung, C. (2016). China and Arctic energy: Drivers and limitations. The Polar Journal, 6(2), 243–258.

Army Recruitment of Ethnic Minorities
Gordon Alley-Young
Department of Communications and Performing Arts, Kingsborough Community College, City University of New York, Brooklyn, NY, USA

Keywords
African-Americans · Army recruitment · Britain · Canada · Ethnic minorities · Middle East · United States

Introduction
Army recruitment of ethnic minorities has different implications depending on the historical period and sociocultural context, and it is a topic that has received growing attention in recent years. Western armies have historically looked to increase their recruitment of ethnic minority members as their traditional recruitment populations (i.e., white, rural, male) have diminished. Another reason Western armies have sought to increase ethnic minority recruitment is to make their organizations reflective of the ethnic make-up of the nations they represent and to gain the cultural insight and/or linguistic skills needed to manage international conflicts. Western armies' push for more ethnic diversity also coincides with gender diversity initiatives. Internationally, armies may face issues in their minority recruitment efforts that are not prevalent in the West, including ethnic divisions that are far more complicated to handle due to religious differences, historical divisions, political separations, and/or sectarianism. Armies seeking to recruit more ethnic minorities have faced challenges, and the different strategies suggested for overcoming these challenges have met with varying levels of success.

Historical Perspectives on Ethnic Minority Recruitment in the Military
During the American Revolutionary War, army recruitment of ethnic minorities happened on both sides. Gilbert (2012) notes that Virginia's British Royal Governor Lord Dunmore threatened to level colonists' mansions and free all their slaves and indentured servants if they challenged royal authority. When Governor Dunmore delivered this message in 1775, Black-Americans sympathized with the British, interpreting Dunmore's words as an endorsement of their emancipation (Gilbert 2012). The states of Virginia and South Carolina wanted to maintain slavery and indentured servitude and thus came to support the revolutionary cause against Britain. Gilbert (2012) notes how the war freed tens of thousands of African-Americans. For example, Thomas Peters, an African prince who had been sold into slavery, was recruited into the British Black Pioneers. After fighting alongside the British during the Revolutionary War, Peters, along with thousands of Black Loyalists, was transported to Nova Scotia, where they were promised farmland. Peters subsequently led other Black Loyalists in protest when the promised land was not given, and he later helped to settle the newly formed colony of Sierra Leone (Gilbert 2012). Freedom was promised to African-American slaves in exchange for enlisting with the British forces, while, in general, already freed African-Americans were more likely to fight with the American Patriots (NBC News 2015, February 15).
Both Black-Americans and Narragansett Native Americans were recruited in 1778 as soldiers for the American cause under the 1st Rhode Island Regiment (Gilbert 2012). The regiment was initially created to replace White soldiers, who were allowed to enlist for ten months and then leave service, while Black-Americans and Narragansett Native Americans served for three years on average. Around 20% of Black soldiers died, a toll owing in part to the 1st Rhode Island Regiment's heroic battle record and to the service of Black soldiers in state militias from New Hampshire to Pennsylvania (Gilbert 2012). The recruitment of former slaves and free African-Americans during the American Revolutionary War, and subsequently during the Civil War, would eventually help bring emancipation, first to the northern states and then to the entire United States. With the start of the US Civil War in 1861, the abolitionist, orator, newspaper publisher, and escaped slave Frederick Douglass argued for African-American soldiers' recruitment into the largely white Union Army. Douglass argued that fighting on the side of the Union would secure the country and allow African-American soldiers to gain their citizenship; he lobbied President Lincoln and political leaders, wrote to powerful and influential friends, and published speeches and editorials in his paper to further this goal (Frederick Douglass Heritage n.d.). President Lincoln and the generals, afraid that white soldiers would revolt, initially admitted African-Americans into the Union Army only as support staff, though by 1862 African-Americans began to organize their own infantry units (Frederick Douglass Heritage n.d.).
In 1863 the Emancipation Proclamation took effect, freeing over 3 million slaves, and the need for new troops meant that African-Americans were allowed to fight, for a country that at that point did not consider them citizens, in the 54th and 55th Colored Massachusetts Regiments organized under Governor John Andrew (Frederick Douglass Heritage n.d.). The former regiment counted two of Douglass's sons and Sojourner Truth's grandson among its ranks (Frederick Douglass Heritage n.d.). After emancipation, Frederick Douglass traveled thousands of miles to actively recruit African-Americans for the Union (Frederick Douglass Heritage n.d.). African-Americans were recruited and served under segregation in the US Army during WWI and WWII. During WWII, the African-American newspaper The Pittsburgh Courier launched the Double Victory campaign in 1942, urging Black citizens to enlist in order to fight for victory over both foreign fascism and domestic racism (Delmont 2017, August 24). During the summer of 1943, racial violence broke out in cities across the country, including on segregated military bases in the Northern and Southern United States (Delmont 2017, August 24). President Truman ended military segregation in 1948.
Army Recruitment of Ethnic Minorities in the West

The United States
The Army remains the largest branch of the US military, and in 2015, 36% of all active-duty military personnel were serving in the Army (Parker et al. 2017). The US Army has devoted increased focus and resources to recruiting and retaining ethnic minorities. In 2014 there were only six African-American four-star commanders (only one of them female), the highest possible rank, across the Army, Air Force, and Navy (Zoroya 2014, February 17). In 2014 about 20% of Army soldiers were African-Americans, compared to 27% in both 1985 and 1995. Still, 20% is a higher percentage of African-American soldiers than the 17% of African-Americans in the US population that recruiters have identified as being of the right age and education level for recruitment. At the same time, the percentage of African-Americans in the US Navy has declined slightly (21% in 2005 vs. 17% in 2014), while Air Force numbers have stayed consistent at about 17% African-Americans from 1984 to 2014 (Zoroya 2014, February 17). In 2014 the US Army devoted one-third of its recruitment campaigns to attracting ethnic minorities (e.g., targeting parents, educators, clergy, and coaches) (Zoroya 2014, February 17). In 2009, 50% of African-American soldiers worked in
support positions (e.g., cooks, maintenance technicians) while 24% served in combat arms, the latter being associated with greater advancement in the military; percentages in both categories have increased slightly since 2013 (Zoroya 2014, February 17). By 2013, 17% of African-Americans expressed interest in Army careers, up from 10% in 2009 (Zoroya 2014, February 17). Experts argue that the military has to work harder to recruit African-Americans as more nonmilitary career and educational opportunities are open to African-Americans today. Research by Asch, Heaton, Savych, and RAND (2009) finds that potential African-American and Hispanic recruits respond differently to various US Army recruiting resources (i.e., enlistment bonuses, military pay, education benefits, recruiter influence). Experts argue for breaking with traditional recruitment patterns that draw mainly on white recruits, educating currently serving Blacks/minorities about advancement and/or education opportunities, and diversifying military academies (where only 6% of cadets were African-American in 2014) (Zoroya 2014, February 17). Based on the US Army's own statistics from 2016, National Guard officers were 9% Black, 6% Hispanic, and 3% Asian, with higher percentages among enlisted soldiers at 16% Black, 11% Hispanic, and 3% Asian (US Army 2016). In the active ranks of the US Army, Blacks are 11% of officers and 24% of enlisted soldiers, Hispanics make up 7% of officers and 16% of enlisted soldiers, and Asians make up 6% of officers and 4% of enlisted soldiers (US Army 2016).

The UK
The UK's armed forces are looking to recruit more ethnic minorities into their ranks in response to dwindling numbers of recruits and calls for greater inclusion and diversity. General Sir Nick Carter, head of the British Army, stated publicly that the Army needed to do a better job of recruiting Black and minority ethnic (BME) individuals (Eastern Eye 2015, February 13).
BME individuals made up only 10% of the forces in 2015 (Eastern Eye 2015, February 13). British Defense Ministry figures show 42% (4,660) of BME troops coming from the UK, while 58% (6,300) came from
Commonwealth countries (e.g., India, Pakistan) (Eastern Eye 2015, February 13). Carter made BME recruitment a priority, participating in ten events in 2015 to increase recruitment. Multilingual ability is one area in which supporters of Carter's diversity initiatives felt the Army could benefit from more BME recruits (Eastern Eye 2015, February 13). In 2016, while ethnic minorities were approximately 14% of the UK's citizens (a share expected to rise to 28% by 2050), they made up less than 3% of the British Army's officers, despite a recruitment target of 6% (Greene 2016, December 13). A majority of British Army soldiers/officers from Commonwealth countries are ethnic minorities and tend to be more conservative than soldiers/officers from the UK (Greene 2016, December 13). Commonwealth soldiers hold different beliefs around alcohol, swearing, family values, and respect for elders that could make them feel less connection, commitment, and belonging in the British Army and to their UK-born military peers (Greene 2016, December 13). Commonwealth soldiers/officers also feel they are treated differently regarding security checks, visa requirements, and the jobs available to them (Greene 2016, December 13). Senior ethnic Commonwealth officers reported a better experience but still felt pressure to conform to other officers (Greene 2016, December 13). By April 2017, ethnic minorities still made up under 3% (2.9%) of British Army officers (one percentage point higher than the Royal Navy) (Gov.UK 2018). Below the rank of officer, the British Army had the highest representation of other ethnic groups at 11.9% (up 0.8 percentage points since 2012), while the Royal Air Force (RAF) had the lowest at 2.3% (Gov.UK 2018). Between 2012 and 2017 both the Army and the Royal Navy/Marines saw increases in ethnic minority officers, while the RAF saw a decline over the same period; the RAF has consistently had the lowest rates of ethnic minorities (here meaning nonwhite ethnic minorities) (Gov.UK 2018).
In terms of officer ranks, the highest proportion of ethnic minorities was at OF-1 (e.g., Lieutenant/Second Lieutenant), the lowest officer rank, and the numbers decline steadily moving toward OF-5 (e.g., Colonel) (Gov.UK 2018). Regarding non-officer ranks, OR-3 (e.g., Army
Lance Corporal) had the highest percentage of ethnic minority individuals, but this number drops dramatically as ranks rise toward OR-9 (e.g., Army Warrant Officer) (Gov.UK 2018). In 2018, under General Sir Nick Carter, the British Army undertook a £1.6 million (under $2.2 million) advertising campaign to address BME recruitment (Beale 2018). The campaign, dubbed the "belong campaign," is narrated by actively serving soldiers (Beale 2018). Different advertisements focus on ethnic identity issues like freedom of religion and acceptance of racial diversity, although other ads also address gender and sexual orientation (Beale 2018). Traditionally the British military recruited young white males aged 16–25, and as recruitment numbers from this demographic began to decline, new ethnic recruitment strategies were adopted (Beale 2018). General Carter noted a recent 30–35% increase in applicants, especially those from underrepresented groups (Beale 2018). The British Army's earlier "be the best" campaign was critiqued for being out of date and exclusionary (Beale 2018). Retention issues have also helped generate new recruitment efforts: from April 2016 to March 2017 an estimated 8,194 soldiers enlisted while 9,775 left, with departing soldiers citing family and outside employment opportunities as reasons (Beale 2018).

In Canada
Similar to their British and American counterparts, the Canadian Armed Forces/Forces armées canadiennes (CAF/FAC), of which the Canadian Army/Armée canadienne (CA/AC) is part, have traditionally recruited white males aged 17–24 (Chong 2010). CAF/FAC recruits tend to come from rural areas or small urban centers (even as the population becomes increasingly urban), have or have had family in the military, and/or have a high school education or less (Chong 2010).
The CAF/FAC's enrollment criteria may exclude ethnic immigrants, as Canadian citizenship has traditionally been a requirement (with permanent residents considered only in special circumstances), and security clearances, required for most positions in the military, can be affected by an immigrant's country of origin and the ties one has to
family in those countries (Chong 2010). Critics argue that the CAF/FAC's us-versus-them approach to training and fighting allowed a low tolerance for diversity to take hold (Chong 2010). In response to such criticism, recent campaigns have tried to include images of soldiers providing humanitarian aid and participating in search and rescue (Chong 2010). The CAF/FAC has an international reputation for its work as United Nations peacekeepers. Ethnic minorities aged 17–24 are one of the fastest-growing Canadian population groups and a population the CAF/FAC wants to recruit (Chong 2010). The CAF/FAC predicted that visible minority groups would exceed 21% of its ranks by 2016. In 2006 the CAF/FAC aimed to have 9% of its forces come from visible minority groups, though the actual number was only 2.8% (Chong 2010). The CAF/FAC is motivated to make its ranks mirror the representation found in both society and the civilian workforce (Chong 2010). A 2006 report by the Auditor General of Canada noted that the number of minorities in the CF had been declining since 2002 (Chong 2010). This declining recruitment of ethnic minorities was attributed to several reasons. One is that ethnic minorities tended to see higher education, not the military, as a means to overcome discrimination and achieve success (Chong 2010). A second is that ethnic minorities often reside in enclaves in large urban centers (whereas the CAF/FAC recruits more from rural areas), and some cultural communities (e.g., Canada's Chinese and South Asian communities) may discourage military service as a career due to the lack of prestige and earning potential perceived to be associated with such careers (Chong 2010). Some cultural communities may also be more collectivistic, making families resistant to the idea of children leaving cities for training at rural bases (Chong 2010).
Also, some immigrants from African, Asian, or South American nations, who came to Canada to escape repressive, oppressive, and/or totalitarian military-led dictatorships, may resist the idea of military service for their children (Chong 2010). Additionally, the scarcity of high-ranking ethnic minorities in the CAF/FAC limits the potential for
community role models who can help with outreach (Chong 2010). A 2014 survey of Filipino-Canadian, Black-Canadian, and Latin American-Canadian youth found that less than 1% of youth from these groups would be interested in pursuing military careers or promoting them as a career to a young person (Postmedia News 2014, December 30). Interestingly, respondents to the survey were far less likely than the general population to know a military member or to know about the CAF (Postmedia News 2014, December 30). On the other hand, military service was perceived as a good way to get work experience in medicine or information technology (IT), a way to help others, and an honorable/heroic role (Postmedia News 2014, December 30). Among the general public, 57% of youth said they were not at all likely to join the CAF, compared to 34% of Filipino-Canadian, 45% of Black-Canadian, and 41% of Latin American-Canadian youths (Postmedia News 2014, December 30). By 2014 the Canadian military was considering adjusting its targets for visible minority recruitment, as by this point it had reached only 4.2% of the proposed 11.7% inclusion target (Berthiaume 2014, May 19). An adjusted target of 8.2% was suggested. In 2008, the Canadian Human Rights Commission/Commission canadienne des droits de la personne, which is responsible for monitoring progress toward these goals, had told the CAF it would have to demonstrate that the goals were unrealistic before any minority inclusion targets could be changed (Berthiaume 2014, May 19). As of May 27, 2013, the total number of visible minorities in uniform in the Canadian military was 4,930 and the total number of Aboriginal Canadians was around 2,110 (Berthiaume 2014, May 19). Chief of military personnel Major General David Millar wrote to senior Canadian military commanders in July 2013 instructing them to increase their recruitment of visible ethnic minorities, Aboriginals, and women (Berthiaume 2014, May 19).
Subsequently, General Jonathan Vance, chief of the defense staff, announced a new diversity strategy in 2017 (Berthiaume 2017, June 25). As part of this strategy, senior staff reviewed uniforms, military ceremonies, food, and religious accommodations to better suit the needs of a diverse military
(Berthiaume 2017, June 25). This also involved senior military leaders addressing Canadian citizenship ceremonies about the merits of military service and meeting with Indigenous Canadian leaders to facilitate more recruitment from their communities (Berthiaume 2017, June 25).
Army Recruitment of Ethnic Minorities in the Middle East and Asia

In Israel
As Israel has mandatory military service, diversifying its army has not centered on replacing dwindling populations of white recruits, as has motivated the American, British, and Canadian armies, which do not employ mandatory service. Orthodox Jews and indigenous Arab Palestinians have typically been exempt from service (Baladna-Association for Arab Youth 2015). In 1956 the Israeli state and leaders of the Druze, an ethnic/religious minority group, agreed that male youths would enlist, though conscientious objection has risen in recent years (Baladna-Association for Arab Youth 2015). Some Palestinian Christian youth in Israel also undertake military service for the financial/workforce benefits extended to them post-service (Baladna-Association for Arab Youth 2015). Some have advocated extending a national civic service option to groups in Israel who might prefer it to military service (i.e., Orthodox Jews, Arab and Christian Palestinians, and Druze) (Baladna-Association for Arab Youth 2015). Kachtan (2012) argues, based on her study of Mizrahi and Ashkenazi Jewish soldiers, that counter to the State of Israel's goal for the military to become a melting pot of ethnic identities, the military is instead an active participant in creating extreme ethnic identities. Kachtan (2012) argues that this reflects how important ethnicity is in Israeli society. Advocates of minority recruitment hope it will foster closer connections between different minority populations and the State of Israel. Some opponents question whether requiring service of all minority ethnic groups might diminish minority groups' political and/or social identity goals and/or compromise ethno-cultural values.
In Iraq
Elsewhere in the Middle East, the Iraqi Army has faced trouble recruiting soldiers, who are in some cases lured to join non-state armed groups and tribal militias (Mansour 2015). This is part of Iraq's challenge as a country divided along ethno-religious sectarian lines. Iraq's Kurdish population tends to join its own Peshmerga forces at higher rates, the Sunni population may distrust the state and be drawn more to militant groups, while most army recruits are southern Iraqi Shiites (Mansour 2015). To overcome these challenges and increase enrollment, the Iraqi Defense Ministry promotes nationalism and anti-sectarianism in its recruitment videos and on television. For example, one promotional video titled "Iraq in Our Hearts" depicts soldiers of different backgrounds saluting the flags on their uniforms, another video titled "Listen to Iraq" shows a soldier rejecting a sectarian radio commentary from abroad, and one episode of the Defense Ministry's weekly TV program shows soldiers rescuing members of the Yezidi religious minority from Islamic State militants (Mansour 2015). The Iraqi Army's recruitment campaign projects a united Iraq in the face of sectarian, ethnic, and regional divisions in the country, thereby reaching out to minority ethnic groups by constructing a singular Iraqi national identity with shared values (Mansour 2015).

In India
India is a country of vast ethnic and religious diversity, with one of the largest armies in the world and hundreds of years of military history, and yet some ethnic minorities are overrepresented while others are underrepresented. In the Indian Army, Sikhs have historically been overrepresented at 8–13%, while their share of the population is only a fraction of that (estimates range from 1.7% to 2.5%, accounting for change over time) (Khalidi 2001).
The recruitment of the Gurkhas, famed for their fighting abilities, is also strong given Nepal’s treaty relations with India (Khalidi 2001). However, two underrepresented groups are the Muslims and the Telugus. Critics argue
that Muslims are under-recruited due to suspicions about loyalty relating to longstanding sociopolitical and religious conflict between Muslims and Hindus (Khalidi 2001). The Indian Army has also banned Friday prayers and beards, both of which are issues for observant Muslims (Khalidi 2001). For the Telugu-speaking population, critics argue that a lack of proficiency in Hindi prevents them from passing the Hindi-language military exams (Khalidi 2001). The Indian Army does encourage the religious teachers it employs to be open to different religions, and unit leaders are encouraged to take part in the festivals of all the religions represented in their units; yet critics argue that the lack of recruitment and non-advancement of some minority groups, while other groups are enthusiastically recruited and promoted, remains problematic and controversial in a multiethnic nation of over 1.3 billion people (Khalidi 2001).
Conclusion
Armies around the world, now more than ever, have been turning to historically under-recruited ethnic minority groups to revive their ranks, to keep their enlistment numbers high, and to respond to social criticism by those who charge the military with a lack of diversity. At the same time, ethnic minorities, for some of whom military service historically has meant social mobility, now have more career and education opportunities available to them than in the past, making army enlistment less attractive. In addition, decades and/or centuries of non-recruitment, exclusionary military climates and practices, and/or fears of discriminatory promotion practices pose significant social and psychological barriers for potential new recruits, and will, critics suggest, require more than just diversity recruitment campaigns in order to change. The move to recruit more ethnic minorities into the army coincides in some countries, though not all, with efforts to recruit and promote more women, LGBTQI people, and/or people from conservative religious backgrounds.
Cross-References
▶ Diversity
▶ Emancipation
▶ Indigenous Peoples
References
Asch, B. J., Heaton, P., Savych, B., & RAND National Defense Research Institute. (2009). Recruiting minorities: What explains recent trends in the army and navy? Santa Monica: RAND National Defense Research Institute.
Baladna-Association for Arab Youth. (2015). Sectarian recruitment: Israel's policies for conscripting the youth of Arab Palestinian citizens. Retrieved from Momken at http://www.momken.org/Public/image/Sectarian%20Recruitment%202015(1).pdf
Beale, J. (2018). New army recruitment adverts 'won't appeal to new soldiers.' Retrieved from BBC News at http://www.bbc.com/news/uk-42629529
Berthiaume, L. (2014, May 19). Canadian military hopes to cut hiring targets for women, minorities. Retrieved from The Ottawa Citizen at http://www.ottawacitizen.com/life/Canadian+military+hopes+hiring+targets+women+minorities/9855180/story.html
Berthiaume, L. (2017, June 25). Canadian Armed Forces aims to fix its recruitment system to foster diversity. Retrieved from The Star at https://www.thestar.com/news/canada/2017/06/25/canadian-forces-aims-to-fix-its-recruitment-system-to-foster-diversity.html
Chong, E. (2010). Putting diversity in uniform: A brief examination of visible minority recruitment in the Canadian Forces. Public Policy & Governance Review, 2(1), 37–50.
Delmont, M. (2017, August 24). Why African-American soldiers saw World War II as a two-front battle. Retrieved from Smithsonian Institution at https://www.smithsonianmag.com/history/why-african-american-soldiers-saw-world-war-ii-two-front-battle-180964616/
Eastern Eye. (2015, February 13). Army chief wants more diversity. Eastern Eye, p. 12.
Frederick Douglass Heritage. (n.d.). Recruiting African American soldiers for the Union Army. Retrieved from Frederick Douglass Heritage: The Official Website at http://www.frederick-douglass-heritage.org/african-american-civil-war/
Gilbert, A. (2012). Black patriots and loyalists: Fighting for emancipation in the War for Independence. Chicago: The University of Chicago Press.
Gov.UK. (2018). Work, pay and benefits: Armed forces workforce. Retrieved from Gov.UK: Ethnicity Facts and Figures at https://www.ethnicity-facts-figures.service.gov.uk/work-pay-and-benefits/public-sector-workforce/armed-forces-workforce/latest
Greene, B. (2016, December 13). A study of the British Army: White, male and little diversity. Retrieved from The London School of Economics and Political Science at http://www.lse.ac.uk/News/Research-Highlights/Society-media-and-science/British-Army
Kachtan, D. (2012). The construction of ethnic identity in the military – From the bottom up. Israel Studies, 17(3), 150–175.
Khalidi, O. (2001). Ethnic group recruitment in the Indian Army: The contrasting cases of Sikhs, Muslims, Gurkhas and others. Pacific Affairs, 74(4), 529–552.
Mansour, R. (2015). Your country needs you: Iraq's faltering military recruitment campaign. Retrieved from Carnegie Middle East Center at http://carnegie-mec.org/diwan/60810
NBC News. (2015, February 15). From slaves to British Loyalists: 'The Book of Negroes' revealed. Retrieved from NBC News at https://www.nbcnews.com/news/nbcblk/slaves-british-loyalists-book-negroes-revealed-n307161
Parker, K., Cilluffo, A., & Stepler, R. (2017). 6 facts about the U.S. military and its changing demographics. Retrieved from Pew Research at http://www.pewresearch.org/fact-tank/2017/04/13/6-facts-about-the-u-s-military-and-its-changing-demographics/
Postmedia News. (2014, December 30). Overall interest in military careers low for Black, Latin-American and Filipino Canadians. Retrieved from The National Post at http://nationalpost.com/news/canada/overall-interest-in-military-careers-low-for-black-latin-american-and-filipino-canadians
US Army. (2016). Army demographics: FY16 army profile. Retrieved from Go Army at https://m.goarmy.com/content/dam/goarmy/downloaded_assets/pdfs/advocates-demographics.pdf
Zoroya, G. (2014, February 17). Military backslides on ethnic diversity. Retrieved from USA Today at https://www.usatoday.com/story/news/nation/2014/02/17/black-history-month-military-diversity/5564363/
Further Reading
Crouthamel, J., Geheran, M., Grady, T., & Köhne, J. B. (Eds.). (2018). Beyond inclusion and exclusion: Jewish experiences of the First World War in Central Europe. New York: Berghahn Books.
Edgar, A., Mangat, R., & Momani, B. (Eds.). (2019). Strengthening the Canadian Armed Forces through diversity and inclusion. Toronto: UTP Insights.
Mezurek, K. D. (2016). For their own cause: The 27th United States colored troops. Kent: The Kent State University Press.
Olusoga, D. (2019). The world's war: Forgotten soldiers of empire. London: Head of Zeus.
Subramaniam, A. (2017). India's wars: A military history, 1947–1971. Annapolis: Naval Institute Press.
Asian Development Bank (ADB) Avilash Roul Indo-German Centre for Sustainability (IGCS), Indian Institute of Technology Madras (IITM), Chennai, India
Keywords
MDB · Poverty · Shareholders · Governance · Accountability · Voting
The core mission of the Asian Development Bank is poverty eradication in Asia-Pacific through infrastructure-led growth.

Despite ambiguity as to who first conceptualized the idea of a multilateral financial institution for Asia, many factors prevailing in postwar global politics lie behind the foundation of the Asian Development Bank (ADB): reconstruction led by the Bretton Woods institutions, the establishment of the regional Inter-American Development Bank (IDB) for Latin America and the Caribbean in Washington, the rise of nationalism in newly independent countries, nation-building, and a spirit of increasing regionalism. In 1963, a resolution was passed at the first Ministerial Conference on Asian Economic Cooperation, held by the then Economic Commission for Asia and the Far East (presently the United Nations Economic and Social Commission for Asia and the Pacific) (see chapter ▶ “Economic Commission for Asia and the Far East (ECAFE)”), which culminated in the establishment of the ADB in 1966 as a development finance institution headquartered in Manila, Philippines (McCawley 2016). The Bank's purpose was then spelled out as fostering economic growth and cooperation and contributing to the acceleration of economic development of the developing member countries (DMCs) in the Asia and Pacific region, collectively and individually (ADB 1965). However, in the late 1990s the Bank unanimously adopted "eradication of poverty" as its overarching goal. Its mission is to help DMCs reduce poverty and improve the living conditions and quality of life of their citizens. After more than 50 years of existence, the Bank has arguably emerged as the leading source of development finance in the region. From its first loan of $80 million, extended to Indonesia in 1967 for a food grain production project, the Bank's operations reached $32.2 billion across various sectors in the region in 2017 (ADB 2018b). Since 2005, its operations have grown sharply (see Graph 1). When the Bank was established, it had an initial authorized capital of $1 billion. By the end of 1967, members had subscribed $970 million of this amount, only 50% of which was paid in (paid-in capital is the portion members actually transfer to the Bank, as opposed to callable capital). Subsequent increases in subscribed capital, along with contributions from donor countries, allowed ADB to expand its lending. At the end of 2016, the total authorized capital was $143 billion and the subscribed capital $142.7 billion, of which only $7.2 billion was paid in. Meanwhile, 34 donors contributed $35 billion under regular Asian Development Fund (ADF) replenishments. Forty-seven regional DMCs are classified into three categories for receiving assistance from the Bank: from Ordinary Capital Resources (OCR), from the ADF, or from a blend of both. Regular market-based OCR loans are generally made to DMCs that have attained a higher level of financial development, while concessional OCR loans are made to lower-income DMCs. ADF loans are soft loans with lower interest rates than OCR loans. However, ADF lending operations were merged with OCR in a reform approved in 2014, operational as a single source since January 2017.

Governance and Decision-Making

As a multilateral development bank (MDB), like the World Bank (see chapter ▶ “World Bank”), the European Bank for Reconstruction and Development (EBRD), the Inter-American
[Asian Development Bank (ADB), Graph 1: Operational approvals of ADB by decade, in $ million. 1967–1976: 3,361; 1977–1986: 16,041; 1987–1996: 43,063; 1997–2006: 64,075; 2007–2016: 140,311. (Source: ADB dataset 1968–2016, graph prepared by author)]
Development Bank (IDB), the African Development Bank (AfDB), the Asian Infrastructure Investment Bank (AIIB), and the New Development Bank (NDB), the ADB was originally composed of 31 members (19 regional and 12 non-regional) at its establishment in 1966. Membership has since grown to 67, and the Bank is owned by its members. The 48 regional members provide 63.5% of its capital, and the 19 non-regional members provide 36.5%. During the formation stage in the early 1960s, it was suggested that membership be restricted to Asian countries, but to garner international recognition and funding, membership was extended to countries in North America and Europe. However, to protect the Bank's "Asia-ness," the ADB Charter categorically states that regional members must always hold at least 60% of the capital stock (ADB 1965); in other words, the capital stock of non-regional members must remain below 40% of total subscribed capital. The ADB Charter confers all the powers of the institution on the Board of Governors, which in turn delegates some of these powers to the Board of Directors (BODs). The Board of Governors meets formally once a year during ADB's Annual Meeting, generally in the first week of May, either in Manila or in one of the DMCs. Each member country has a representative serving on the Board of Governors, usually chosen from persons holding high-level positions in the country, typically the Finance Minister or the President of the central bank. Their duties include passing resolutions approving country memberships and issuing statements assessing areas where the ADB is excelling and areas where it needs to improve. The Governors have a limited role in the day-to-day running of the ADB. The Board of Governors delegates most of its authority to the 12-member BODs based at headquarters in Manila. Of the Directors, eight are elected by the Governors representing regional member countries and four by the Governors representing non-regional member countries, each for a 2-year term, and they can be reelected. Each Director also appoints an Alternate Director to serve in his/her absence. The USA, Japan, and China, as the largest shareholders in the Bank, are represented by their own Directors. The other nine Directors represent different groups of countries clustered together. In recognition of the size of their shareholdings, India and Australia always have their own Directors heading two such clusters. Two of the offices are principally made up of European donors and are always headed by a Western European Director on a rotating basis. In total, 6 of the 12 Director positions are filled by donor-country representatives (although the Australian ED also represents some borrowers in the Pacific as well as Cambodia). The BODs are responsible for the direction of the general operations of the Bank. The Board (a)
takes decisions concerning policies of the Bank and loans, guarantees, investments, and technical assistance by the Bank; (b) approves borrowings by the Bank; (c) clears the financial accounts of the Bank for approval by the Board of Governors; and (d) approves the budgets of the Bank (ADB 1965). The BODs work full time at the Bank and meet twice a week to make decisions. Each Director's vote, and therefore influence, depends on the size of the shareholdings held by the country or countries he/she represents. Unlike the UN system of "one country, one vote," the largest donors, the USA and Japan, each control approximately 13% of the voting power, followed by China and India. Member countries with large shareholdings thus have much greater influence at the ADB. The Board of Governors, by a vote of a majority of the total number of Governors representing not less than a majority of the total voting power of the members, elects a President for a 5-year term (ADB 1965). The President, however, ceases to hold office if the Board of Governors so decides by a vote of two-thirds of the total number of Governors representing not less than two-thirds of the total voting power of the members. As at other MDBs, some leadership positions at the ADB are traditionally, though not officially, reserved for officials from the Bank's most influential countries. For example, the ADB President has always been from Japan (the ADB Charter states only that the President must be from a regional member country). The President, who is the legal representative of the Bank, chairs BOD meetings and is the chief of the staff (management) of the Bank. Currently, six Vice-Presidents, appointed by the BODs, head the administration of the Bank's operations. A closer look at these appointments shows that three Vice-President positions have traditionally been assigned to Europe, the USA, and Asia, respectively. Similarly, the Bank's General Counsel has always been a US citizen.
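The presidential election rule described above is a dual-majority test: a majority of Governors by head count who together hold not less than a majority of total voting power. A minimal sketch of that test follows; the function name and the vote/power figures are hypothetical illustrations, not ADB data:

```python
# Dual-majority test (illustrative): an ADB president is elected by a majority
# of the total number of Governors who together represent not less than a
# majority of total voting power.

def dual_majority(votes_for, total_governors, power_for, total_power):
    """Return True only if both the head-count and voting-power majorities are met."""
    return (votes_for > total_governors / 2) and (power_for > total_power / 2)

# 67 governors; suppose 40 vote in favor, holding 55.0 of 100.0 units of voting power
print(dual_majority(40, 67, 55.0, 100.0))  # True: both majorities satisfied
print(dual_majority(40, 67, 45.0, 100.0))  # False: head count met, power majority not
```

Because voting power tracks shareholding, a head-count majority of Governors can still fail the power test, which is one way large shareholders retain influence over outcomes.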
ADB Operations
The long-term strategic framework guides the operations and priority areas of the Bank. In its
present long-term strategic framework (Strategy 2020), adopted in 2008, the ADB promotes three complementary agendas: inclusive economic growth, environmentally sustainable growth, and regional integration (ADB 2008). ADB provides various forms of financial assistance to its DMCs, such as loans, technical assistance (TA), grants, guarantees, and equity investments. These products are financed through OCR, Special Funds, and trust funds (e.g., the Technical Assistance Special Fund (TASF), Japan Special Fund (JSF), ADB Institute (ADBI), Regional Cooperation and Integration Fund (RCIF), Climate Change Fund (CCF), Asia Pacific Disaster Response Fund (APDRF), and Financial Sector Development Partnership Special Fund (FSDPSF)). ADB's ordinary operations are financed from OCR and its special operations from Special Funds; the Charter requires that funds from each resource be kept and used separately. Trust funds are generally financed by contributions and administered by ADB as trustee. Alongside its financial clout, ADB also provides policy dialogue and advisory services to DMCs on policy changes or the introduction of new national or sectoral policies. In response to the evolving needs of borrowers, ADB has adopted many changes in its financial and lending programs; over its 50-year history, its list of financial products has been continuously changed and updated. During the 1960s, ADB focused much of its assistance on food production and rural development. In subsequent decades, the Bank increased its support for energy projects, especially those promoting the development of domestic energy sources in DMCs. In 1995, ADB became the first multilateral organization to have a Board-approved governance policy to ensure that development assistance fully benefits the poor. With the new century, the Bank focused on helping its DMCs achieve the Millennium Development Goals (MDGs).
Currently, achieving the Sustainable Development Goals (SDGs) and meeting the commitments of the Paris Agreement on Climate Change are the Bank's major priorities. The Bank's projects are mostly in the infrastructure sector (see chapter ▶ “Infrastructure Development”), including the energy, transport, and communication sectors. However, the Bank's
operations in social sectors like education, agriculture, health, and gender have traditionally received lower priority. In 2017, the Bank approved 33.8% of total loans for the energy sector while a mere 0.17% went to the health sector in DMCs (see Graph 2). Strategy 2030, approved in July 2018, has renewed the Bank's strong commitment to eradicating extreme poverty in the region and expanded its vision to achieve a prosperous, inclusive, resilient, and sustainable Asia and the Pacific. The Strategy, which is aligned with international agreements like the SDGs and the Paris Climate Agreement, aims to address remaining poverty and rising inequality by implementing various SDGs (employment, higher education, universal health care, social protection programs, gender equality, and so on) (ADB 2018a). The Bank's major lending focus will be on climate and disaster
resilience; building liveable cities; enhancing gender equality and regional cooperation and integration; mobilizing private sector resources; enhancing resource mobilization through credit enhancement operations and cofinancing with bilateral and multilateral partners; and strengthening its role as a provider and facilitator of knowledge through a stronger, better, and faster ADB. In 1966, the Bank had 40 employees in Manila, including both international and national staff, drawn from 6 member countries. In 1982, ADB opened its first field office (Resident Mission) in Bangladesh to bring operations closer to the people in need. Now, 36 Resident Missions are in operation, serving 38 countries. At present, the Bank has some 3,134 staff drawn from 60 countries. Over the same period, ADB’s internal budget grew from less than $3 million to around $636 million.
Asian Development Bank (ADB), Graph 2 Sector-wise portfolio approval in 2017, across the sectors Agriculture, Natural Resources, and Rural Development; Education; Energy; Finance; Health; Industry and Trade; Information and Communication Technology; Public Sector Management; Transport; Water and Other Urban Infrastructure and Services; and Multisector. (Source: ADB annual report 2017, prepared by author)
Impacts of ADB Operations Despite the Bank’s transformative adoption of the “eradication of poverty” as its overarching goal and of “improving living conditions” and “quality of life” as its mission, the ADB’s operations, especially projects funded and administered by the Bank, have drawn severe criticism for their negative social and environmental impacts. Between 1994 and 2005, 1.77 million people were displaced by ADB-funded projects, 76% of them by transport projects alone (ADB 2006). From the Cambodia Railway Rehabilitation Project in Southeast Asia to Kyrgyzstan’s Bishkek Road Rehabilitation Project in Central Asia and the Khulna-Jessore Drainage Rehabilitation Project in South Asia, the Bank’s projects have negatively affected vulnerable people and communities and their livelihoods. One indicator is that 163 complaints have been filed by affected people under the ADB’s own “accountability mechanism” since it began functioning – a forum where people adversely affected by ADB-assisted projects can voice and seek solutions to their problems. The Bank has repeatedly been criticized for a bureaucratic system in which information on projects and policies vital to affected communities is not proactively shared in a manner and language that communities can understand. The Bank has from time to time formulated several policy documents to guide its operations, such as the Safeguard Policy Statement, the Public Disclosure Policy, and the Accountability Mechanism. Despite these policies, the Bank struggles to minimize the harms its projects create.
Conclusion The Bank has continually renewed its image through innovation to remain relevant in Asia as one of the major sources of development finance and as a knowledge provider. Strategy 2030 has given the Bank an advantage in addressing the development requirements of member
countries. However, with the emergence of the Asian Infrastructure Investment Bank (AIIB) and the New Development Bank (NDB), the Bank has, at best, been trying cautiously to streamline its operations in a spirit of complementarity. As an intergovernmental institution, it has yet to demonstrate institutional leadership in accountability and in the democratization of development discourse.
Cross-References ▶ Economic Commission for Asia and the Far East (ECAFE) ▶ Infrastructure Development ▶ World Bank
References
ADB. (1965). Agreement establishing the Asian Development Bank. https://www.adb.org/sites/default/files/institutional-document/32120/charter.pdf
ADB. (2006). Special evaluation study on involuntary resettlement safeguards. https://www.adb.org/sites/default/files/evaluation-document/35442/files/sst-reg2006-14.pdf
ADB. (2008). Strategy 2020: The long-term strategic framework of the Asian Development Bank 2008–2020. https://www.adb.org/sites/default/files/institutional-document/32121/strategy2020-print.pdf
ADB. (2018a). Strategy 2030: Achieving a prosperous, inclusive, resilient, and sustainable Asia and the Pacific. https://www.adb.org/sites/default/files/institutional-document/435391/strategy-2030-main-document.pdf
ADB. (2018b). Annual report 2017. https://www.adb.org/documents/adb-annual-report-2017
McCawley, P. (2016). Banking on the future of Asia and the Pacific: 50 years of the Asian Development Bank. Manila: ADB.
Further Reading BIC/Forum on ADB. (2006). Unpacking ADB. Washington, DC: Bank Information Center. Chalkley, A. (1977). Asian development bank: A decade of progress. Manila: ADB. Erquiaga, P. (2016). A history of financial management at the Asian Development Bank: Engineering financial innovation and impact on an emerging Asia. Manila: ADB.
For Civil Society perspectives on the ADB, visit NGO Forum on ADB, Manila. https://www.forum-adb.org NGO Forum on ADB. (2012). Is ADB accountable? Evaluating accountability mechanism. Manila: Forum. Wilson, D. (1987). A bank for half the world: The story of the Asian Development Bank, 1966–1986. Manila: ADB.
Asian Infrastructure Investment Bank David Morris International Relations Multidisciplinary Doctoral School, Corvinus University of Budapest, Budapest, Hungary
Keywords
Asian Infrastructure Investment Bank · Infrastructure finance · China · United States · Multilateralism
Definition The Asian Infrastructure Investment Bank is a multilateral development bank established in 2016 with a mission to improve social and economic outcomes in Asia, by investing in sustainable infrastructure and other productive sectors.
Introduction The Asian Infrastructure Investment Bank (AIIB) is a Multilateral Development Bank (MDB), established by China and 56 other member states in 2016, with initial pledged capital of US$100
The research related to this article was co-sponsored by the European Union, Hungary and the European Social Fund in the framework of the EFOP-3.6.3-VEKOP-16-201700007 project (titled “Tehetségből fiatal kutató – A kutatói életpályát támogató tevékenységek a felsőoktatásban”). Interview: Zheng Quan, Director General, Policy and Strategy, and Thia Jangping, Principal Economist, Asian Infrastructure Investment Bank, Beijing, July 2, 2019.
billion, to address the infrastructure deficit in Asia, the world’s fastest-growing region. By bolstering finance for development, the bank promises improved economic security for its member states, most of which have unmet demand for energy and other infrastructure. The new institution represents, however, a challenge to the international system. Along with a series of other new institutions and platforms for engaging with the developing world, the AIIB marks China’s return to great power status, contributing to global economic governance alongside the United States of America (USA). In 1944, at Bretton Woods, the USA established the International Monetary Fund (IMF) and the World Bank Group as part of a network of multilateral institutions centered around the United Nations (UN) system, described here collectively as the liberal international order. From the end of the Cold War until the second decade of the twenty-first century, the US leadership of that international order went largely unchallenged. The rapid growth of Asia has created significant unmet demand for infrastructure finance, however, and the Bretton Woods MDBs and the Asian Development Bank (ADB), subsequently established by Japan, have been unable to meet this demand. Whether the establishment of the AIIB is complementary to the liberal international order and the economic security it seeks to provide, or constitutes a threat, is a matter of debate in international relations. At around the same time as the establishment of the AIIB, the USA was abandoning four decades of constructive engagement with China and embracing a new doctrine of strategic competition. The AIIB has therefore come to represent in the eyes of Realists (those focusing on the balance of power) a threat to the US-led order. Meanwhile, the confidence of liberal internationalists that China will rise within the liberal international order has yet to be tested.
To be sure, the AIIB does pose a number of normative challenges to how the USA had designed and reformed the liberal international order. As will be discussed below, the AIIB diverges in some important respects from the US-led Bretton Woods institutions. In other important respects, however, the
AIIB is learning from and adopting best practices by co-funding projects with other MDBs, governments, and private sector partners, spreading risks and leveraging its funds to maximize infrastructure investment. It therefore appears likely to deliver on its promise to improve the economic security of its member states, even as China’s role in the liberal international order continues to be a work in progress.
The Geopolitical Background The evolution of the multilateral system to accommodate new economic powers and the priorities of the developing world has been a long time coming. The IMF and the World Bank were established towards the end of World War Two as part of an allied plan for a liberal postwar order with multilateral mechanisms for international economic cooperation. In the immediate postwar decades, financing for infrastructure development was a priority, first in the reconstruction of Europe and later in developing countries (Humphrey 2015). Soon after the postwar establishment of the Bretton Woods institutions, however, the world order was frozen by the Cold War into two rival camps, a US-led grouping and a Soviet-led grouping (with scarce space for nonaligned states). Despite the process of decolonization, which brought a large number of new nations into the UN system, and the rapid development and growing importance of key Asian and other economies, the multilateral institutions did not adapt to the changing balance and the nonborrower countries continued to determine the allocation of financing for development. Just as the UN Security Council remained controlled by the same Permanent Five since its establishment, the World Bank and the IMF remained controlled by the USA (and its European allies). In the later Cold War era, the Group of 77 (G77) developing nations agitated for a New International Economic Order and the response from the USA, convinced of its economic model for development, was to advocate for a more muscular neo-liberal, market-led economic agenda as integral to the liberal
international order (Gilman 2015). From the 1990s, the IMF and the World Bank sharply reduced their funding for infrastructure investment to focus instead on poverty alleviation programs and neo-liberal policy prescriptions, applying stringent conditions on finance, including market-oriented economic policy reform and structural economic change, as well as political reforms including the observance of human rights (Larionova and Shelepov 2016; Stephen and Skidmore 2019). Financial liberalization, including capital account liberalization, promoted by the Bretton Woods institutions based on the new neo-liberal orthodoxy, generated a significant backlash in the developing world following the Asian Financial Crisis of 1997 and demonstrably failed to deliver the economic success that economists had predicted (Liao 2015; Chan and Lee 2017). Further, the conditions imposed by the IMF and the World Bank are widely perceived to have failed in their objectives to drive economic and political reforms (Limpach and Michaelowa 2010). While the ADB remained focused on infrastructure financing, the MDBs as a whole were failing to address the demand, in particular in rapidly growing Asia, for critical infrastructure (ADB 2017) and demonstrated an aversion to investing in high-risk environments (Humphrey 2015). The rise of China to a leading role in global economic governance became evident by the second decade of the twenty-first century, following decades of its “reform and opening up” begun in late 1978. Following elements of the earlier East Asian development models but at a larger scale and with a greater role for its Party State, China embraced private sector development, foreign direct investment, integration into global supply chains, and investment in large-scale infrastructure. In important respects, this marketization process involved being co-opted into – and cooperating with – the liberal international order.
This was confirmed with China’s role in Asia Pacific Economic Cooperation (APEC) and in particular by its entry to the World Trade Organisation (WTO) in 2001 and its subsequent membership of the G20 and the Regional Comprehensive Economic Partnership with
fourteen major Asia Pacific economies. In each of these platforms, China’s political uniqueness is accepted, but it is required to adhere to regional and global trade rules. Over recent decades, China has gradually liberalized the internationally exposed sectors of its economy, while retaining state control of strategic sectors. By 2010, it had overtaken Japan to become the second largest economy in the world; by 2013, China had become the world’s leading trading nation (Woetzel et al. 2019); and by 2019, two-thirds of the world’s countries traded more goods with China than with the USA (Leng and Rajah 2019). Its rapid growth has transformed China from a uniformly poor society four decades ago to a leading global economy with a middle class numbering in the hundreds of millions, although its per capita GDP remains well behind that of the advanced economies. The dominant international narrative of China’s “peaceful rise” rested upon common assumptions (notably in the USA) that China would follow a predictable course of integration and interdependence with the forces of globalization. Indeed, there were widely distributed relative gains from China’s increasingly important role in global value chains, with international firms benefiting from low cost manufacturing and consumers around the world benefiting from low cost products. The dominant liberal internationalist discourse was based on a normative confidence that China would not only engage economically with the liberal international order but that it would politically liberalize, following the experience of other East Asian economies such as Korea and Taiwan (Overholt 1993). However, China has not politically liberalized, doubling down under the leadership of Xi Jinping on reforming, yet strengthening, its Marxist-Leninist Party State with Chinese Characteristics, promising a combination of continued economic growth and social order.
Indeed, while a significant number of emerging economies across Asia have achieved impressive economic development in recent decades, based on pragmatic policy making, the region remains wracked by serious security dilemmas, including longstanding strategic rivalries, and a resurgent China is feared by many of its neighbors (Muttalib 2010). The
greatest fear of the implications of China’s rise, however, has emerged in the world’s leading economy. Despite mutual dependence through investment and trade, US engagement with China was unbalanced due to credit-fuelled consumption in the USA and a severe savings deficit, which produced soaring Chinese foreign exchange reserves. This provided China with significant capacity to redeploy US currency to financing for development across the world, even while China itself theoretically remained a developing nation. Further, Xi’s newly assertive leadership began to signal China’s wish for a greater role in the world order as institution-builder, rule-maker, and norm-setter. Liberal internationalists expected either that China would seek leadership but not contest the key principles underpinning the liberal international order from which it has demonstrably benefited (Ikenberry 2011) or that China could at least be encouraged to be a responsible stakeholder through a combination of US coercive power and reassurance of respect (Christensen 2015). The rapidly diminishing US confidence in these narratives in recent years will be discussed below, but first it is important to place the growing Chinese assertiveness into context, as it included far more than just the AIIB. By the second decade of the twenty-first century, not only had China emerged as a major economic power with considerable financial resources, but its rise coincided with a collapse in US and Western economic and political confidence and declining support for free trade and investment (arguably one of the building blocks of the liberal international order), following the financial crises of 2007–2008, capping two decades of sharpening inequality stemming from a declining share of national productivity distributed to labor (Manyika et al. 2019).
Following his elevation to the leadership of the Chinese Party State in 2012, Xi set about announcing a series of initiatives to build upon China’s bilateral financing support to developing countries, to construct new regional and global platforms for economic cooperation. The most ambitious of these was the Belt and Road
Initiative (BRI), an umbrella term for a suite of Chinese infrastructure-funding initiatives, utilizing Chinese capital and expertise for investment in infrastructure connectivity, policy coordination, trade, financial integration, and people-to-people links (State Council of the People’s Republic of China 2015; National Development and Reform Commission 2015). It includes projects that will bring new funding, new technology, management capabilities, employment, and, most importantly, new infrastructure such as transport, power, communications, and water to support trade, development, and economic integration across Central Asia, South Asia, South-East Asia, and beyond to Central, Eastern, and Southern Europe and South America as well as to the South Pacific and the Arctic. More than one hundred and twenty-five nations have signed BRI cooperation agreements with China (Raiser and Ruta 2019). Given its massive scale, the BRI is likely to drive substantial trade and investment development, with the World Bank estimating that trade in the BRI economic corridors is currently 30% below potential and that foreign direct investment may be as much as 70% below potential, with BRI investments likely to raise global income with significant net benefits for BRI countries, particularly in East Asia (Maliszewska and van der Mensbrugghe 2019). The AIIB was announced as part of a suite of financing vehicles to support the BRI, including another MDB, the New Development Bank (founded in partnership with the BRICS countries, i.e., Brazil, Russia, India, and South Africa), a Silk Road Fund, and various regional infrastructure financing platforms. The Chinese initiatives were described in terms of building “new Silk Roads” reminiscent of the trading routes that linked China to the civilizations across Eurasia in the centuries before the rise of Western Europe and its colonization of the world.
To Realists and a renewed chorus of geopolitical commentators in the West (as well as to some enthusiastic commentators in China), it began to look like China wanted to challenge the world order. Whether the new Chinese initiatives were a threat to the liberal international order or would complement and strengthen
processes of international economic cooperation became a matter of vigorous debate (and, in relation to the AIIB, this will be further discussed below). The shift in the Western narrative is well described by Mearsheimer’s influential “offensive realism,” in which great powers seek to maximize power and a rising power will seek to become a regional hegemon, while its competitor will seek to prevent the rising power from reaching regional hegemony (Mearsheimer 2014). The assumption rapidly took hold in the West that China was indeed seeking hegemony, seemingly reinforced by militarization of the disputed islands in the strategically important South China Sea and the rapid modernization of China’s maritime military and cyber-security capabilities. Following the election of US President Donald Trump, the USA switched its strategy from constructive engagement with China to a doctrine of “strategic competition” (US Government 2017; Department of Defense 2018) and launched a trade war and a campaign against the BRI, labeling it a “debt trap.” The USA, Mearsheimer had predicted, would respond to China’s rise just as it had responded to the Soviet Union as a peer competitor, by combining with allies to prevent China from gaining power. Indeed, Mearsheimer expected that countries in Asia would need to “choose sides” and that most would rationally seek to preserve their relationship with their powerful protector in the face of an increasingly powerful China. On the basis of Mearsheimer’s analysis, the USA expected that its allies would support it in boycotting the AIIB. To the USA, the AIIB constituted one element of China’s threat to the liberal international order, and the Obama administration (which “pivoted” to Asia to reassure its allies of its continued role in the Pacific) actively lobbied allies to oppose the new institution. Only Japan followed the USA.
In joining the AIIB, other US allies demonstrated that they rejected the Realist analysis and were prepared to welcome the AIIB as complementary to the liberal international order. This suggests the binary US-led narrative that frames Chinese institution-building as a risk to the liberal international order does not
adequately explain the context in which the AIIB was founded or the way it is operating. Indeed, the AIIB can alternatively be seen as one of a number of innovations in the international system of the early twenty-first century to accommodate a wider range of powers and address regional and global problems, including the founding of the G20 and East Asia Summit and the Paris Agreement on Climate Change. The failed US campaign against the AIIB arguably weakened its standing in the region and strengthened the role and standing of China (Aberg 2016). The AIIB will evolve to provide China with a greater role in regional and global rule-making, particularly when linked with the much larger BRI. How China will play such a role over time remains an open question but some of the normative challenges are apparent in the design of the AIIB.
The AIIB’s Normative Challenges As the first MDB to be established by a developing country with some important new characteristics, the AIIB does indeed represent a set of normative challenges to the liberal international order led by the USA. First, its creation by rising China reflects a demand for multipolarity in the leadership of global economic governance. Second, its structure reflects a more equitable representation of the developing world. Third, its focus is on developing world priorities. Fourth, and finally, it imposes no conditions based on internal political concerns. Together, these are priorities that China is likely to promote more broadly across the multilateral system over time. These will thus impact on the external perception of China’s leadership and its norm-setting power in global governance (Peng and Tok 2016). The establishment of a new multilateral organization led by and headquartered in China provides China with an enhanced role in global economic governance, with new rule-making and norm-setting power that is likely to generate strengthened influence in the developing world. It provides legitimacy to China’s major role in
financing for development in the face of US (and Japanese) resistance to giving China such legitimacy. A greater role for China based on its economic weight goes directly to the matters of geopolitics discussed above. For some time in the early twenty-first century, liberal internationalists talked about a “G2,” which would see greater global economic coordination between the USA and China. The recent switch in US geopolitical strategy to embrace the Realist fear of a rising China and to designate it as a strategic competitor is a critical piece in the puzzle of whether China can play a greater role in economic governance without conflict with the USA. China’s claimed role is to reform and democratize, rather than to overthrow, the international system. The structure of the AIIB itself importantly diverges from the norm established in the Bretton Woods institutions that entrenched US leadership and the structural dominance of the developed world in decision making. The creation of the new MDB reflected the resistance over many years to voting share reform in the existing institutions. Not only does the USA by convention provide the President of the World Bank (while the IMF is led by a European), but it also retains veto-wielding stakes in each. Voting rights of major developing countries, although slightly adjusted in recent years, have not grown to match their relative weight in the global economy. In the IMF, the USA has a weighted voting power of 16.52%, with the world’s third largest economy, Japan, at 6.15%. China, although a larger economy than Japan, has 6.09% since recent reforms. In 2016, the IMF included the renminbi in its basket of currencies that comprise the Special Drawing Rights, boosting the long-term internationalization of the Chinese currency. In the World Bank Group, the USA retains voting rights of 15.98%, with Japan at 6.89%, and China at only 4.45%.
The Asian Development Bank, established by Japan (the USA prevented Japan from establishing an Asian Monetary Fund), provides equal voting rights of 15.6% to Japan and the USA, but only 6.4% to China. The imbalance is clear, but significant reform has long been blocked by the US
Congress, with only a small-scale adjustment of voting shares approved by Congress in late 2015, arguably in response to the establishment of the AIIB itself (Peng and Tok 2016). The AIIB uses a similar formula to the other MDBs in determining voting rights based on a combination of basic votes, share votes, and Founding Member Votes. Where it diverges, however, is that from its establishment, developing countries have a structural majority and nine of the twelve directorships are reserved for Asian members. China holds 26.59% voting power providing it with a veto over key strategic decisions (a super-majority of 75% of the votes is required to amend the founding treaty), although its share will decrease as new members join. Indeed, China offered to reduce its voting share to below the veto threshold if the USA and Japan agreed to join as co-founding members, as part of its bid to attract broad international involvement (Hu 2015). The Chinese veto does not extend to operational matters such as project approvals. Other major shareholders include Australia, India (second largest shareholder at 7.64%), Indonesia, Korea, Russia, and nonregional members such as France, Germany, and the United Kingdom. The normative challenge set by the AIIB would, if extended to other organizations in the multilateral system, see a shift of power to the developing world commensurate with its economic weight. It may be difficult in the long term for the USA to justify its resistance to this trend. The AIIB normatively endorses China’s model of infrastructure-driven development, which itself builds upon the successful East Asian model of development. 
By successfully leveraging more finance for infrastructure and implementing projects well, the AIIB may reinforce the emerging consensus in Asia (and more broadly across the developing world) that there is an alternative development path to the “Washington Consensus.” As noted above, the disillusionment with the IMF and the World Bank’s preferred neo-liberal economic solutions had already become apparent well before the AIIB was established. Some have referred to the alternative Chinese
model as a “Beijing Consensus” (Yagci 2016), characterized by a focus on industrialization and a central role for state investment in infrastructure to build capabilities to attract investment and build export industries. Prior to the establishment of the AIIB, or even the announcement of the BRI, China had already embarked on an ambitious program of extending development finance to developing country partners to build infrastructure. Its domestic policy banks, the Export-Import Bank of China (Exim Bank) and the China Development Bank (CDB), remain the largest funders of what are now labeled “Belt and Road” projects, lending Chinese foreign currency reserves to developing countries. Indeed, the CDB is now the world’s largest source of development finance. New initiatives such as the AIIB and the NDB represent the internationalization of this effort and a further spreading of risk, as Chinese capital alone cannot fund the infrastructure needs of the developing world. The final important normative challenge of the AIIB is its commitment not to attach political conditions to its lending. This is a significant break from the practices of the Bretton Woods institutions. The IMF and World Bank have explicitly linked financing to encouraging practices considered important to good governance, including accountability, rule of law, human rights, decentralized political authority, political pluralism, and participation. China has consistently opposed the imposition of such conditions, and, in this, it is supported by others in the developing world. For China, “noninterference” in the internal affairs of other countries is a key plank of its contemporary foreign policy orthodoxy and a key driving principle in its participation in the multilateral system. China often abstains or opposes moves in the UN that it perceives as interfering in the internal affairs of member states.
China is particularly sensitive about international interference within its sovereign territory, considered to be a result of its “century of humiliation” at the hands of imperial powers. Such sensitivity to foreign interference is not unusual for any nation but China is unique as a rising power in its claim that it will not interfere in other nations’ internal
affairs. The claim is widely distrusted in the international community. Article 31 of the AIIB’s Articles of Agreement states, nonetheless, that the Bank “shall not interfere in the political affairs of any member” and further that its decisions will not be influenced by “political character” of any members. China may still be a “partial power” (Shambaugh 2013), a long way from matching the hard and soft power of the USA, but the creation of the AIIB, as part of a suite of other initiatives for financing development, indicates that China is beginning to exercise influence in shaping new norms in the international system. While some commentators perceive a geopolitical project, to build a “parallel order” and sphere of influence to ultimately challenge the prevailing international rules (Hodzi and Chen 2017), a closer examination of the operations of the AIIB allows a preliminary assessment of how China will wield its growing influence.
The AIIB’s Operations The AIIB was launched in 2014 and commenced operations in January 2016 with an initial pledged capital of US$100 billion (about two thirds the size of the ADB). The Bank reached a milestone of one hundred approved memberships by mid-2019, at which time it had provided US$8.5 billion in finance to 45 projects in 18 countries and received the highest credit ratings from the world’s three leading rating agencies, Standard & Poor’s, Moody’s, and Fitch Group (Chen and Chen 2019). The leading destination for AIIB investment has been India, with funding for thirteen projects worth a total of US$2.9 billion as of November 2019 (AIIB 2019). Despite fears when it was launched that the AIIB would prioritize Chinese firms and would operate at lower standards than other MDBs (Hameiri and Jones 2018), the bank has implemented best practice operations. The AIIB has pursued a “lean, clean, and green” philosophy, seeking to demonstrate that it can overcome the widely observed constraints on
the traditional MDBs by operating with a smaller team, without a resident board, and with less cumbersome and costly processes for borrowers, yet still meet high standards (Humphrey 2015). In time, it may provide healthy competition to the Bretton Woods institutions, although in its early years it also seeks to learn from them and, as an ongoing priority, to work collaboratively with the established MDBs. Jin Liqun, a former Vice President of the ADB and Chinese Vice Minister of Finance, was appointed to lead the establishment of the AIIB as inaugural President and Chair of the Board of Directors. The Board of Directors is nonresident, designed to separate strategic policy, budget, and management supervision, determined by member states, from day-to-day operations, determined by professional management. This arguably provides greater efficiency than the Bretton Woods institutions, with their resident boards of directors overseeing operations (Stephen and Skidmore 2019) driven by nonborrower priorities (Humphrey 2015). Expert and experienced staff have been recruited from other MDBs to help the AIIB learn, build its capabilities, and implement best practices. A high proportion of staff is internationally educated (Oswald 2018; Shelepov 2018). While the operating structure mirrors those of the Bretton Woods institutions, it is slimmer, allowing fast and efficient decision-making. Throughout its operations, its commitment to best practices, including in project appraisal, zero tolerance for corruption, open public procurement, and transparent tendering, is modeled on the established MDBs. In so doing, the AIIB has internalized a set of operating norms from other multilateral institutions.
China’s failure to adopt more transparent processes in its bilateral aid and infrastructure lending has drawn criticism, and it is as yet unclear whether the AIIB will become a source of learning for the Chinese domestic policy banks over time (Chan and Lee 2017; Stephen and Skidmore 2019). Cooperation with other multilateral and bilateral development institutions is cited as core to the AIIB’s purpose in Article 1 of the Bank’s Articles
of Agreement. In its operations to date, the AIIB has sought to maximize its impact, leverage its contributions, spread risks, and harness other benefits from co-financing projects with other MDBs. Approximately three-quarters of projects approved to date have been co-financed with other institutions (Shelepov 2018), and these have been widely observed to be constructive partnerships (Gåsemyr 2018). The AIIB has cooperation and co-financing agreements with the World Bank, ADB, NDB, European Bank for Reconstruction and Development (EBRD), European Investment Bank (EIB), Inter-American Development Bank (IDB), and the Eurasian Development Bank (EaDB).

Co-financing with MDBs has three main operational effects in addition to increasing the pool of funds available for infrastructure projects. First, co-financing shares and mitigates risks. Second, it ensures that the AIIB aligns with, and therefore reinforces, the best-practice standards and operations of other MDBs, as it is bound by co-financing agreements to follow standards in areas from procurement to environmental sustainability. Third, by leveraging resources, developing common approaches, and spreading risks across multiple institutions, the AIIB facilitates coordination, cooperation, and the strengthening of the multilateral system, ensuring that it is positioned over time as a noncontroversial contributor to the network of MDBs.

Further, the AIIB seeks to become a leader in catalyzing private capital investment in infrastructure by working to develop emerging-market infrastructure as an asset class. The objective is to develop a pipeline of private sector projects for which the bank will provide leveraging finance, in partnership with other MDBs, commercial banks, and institutional investors. Becoming a leader in this field will require the AIIB to maintain the highest reputation for its staff, operations, and outcomes.
Finally, the bank’s commitment to sustainability throughout its lending portfolio is an important operational principle that strengthens its reputation and aligns the work of the AIIB with the priorities of the multilateral system as articulated in the UN 2030 Sustainable Development
Goals. Sustainable infrastructure is one of the bank’s three thematic priorities, and it is funding a number of projects in the fields of renewable energy, energy efficiency, rehabilitation and upgrading of existing plants, and transmission and distribution networks, to support member states in achieving their commitments under the Paris Climate Agreement and their national development plans. The AIIB has adopted an Environmental and Social Framework to ensure the environmental and social sustainability of its infrastructure projects. It is also developing a Water Strategy to guide investment in a sector that addresses water security challenges and to which the AIIB has contributed US$1.4 billion (International Institute for Sustainable Development 2019). China is seeking to position itself as a champion of renewable energy, with massive investments in new technologies and infrastructure within China. Notably, and by contrast, China continues through its bilateral programs of infrastructure financing to fund the building of coal-fired power stations across the developing world, attracting much criticism.
Conclusion The AIIB represents a new public good created by China that is both complementary to the liberal international order, in augmenting the deficient financing of infrastructure by the existing institutions, and a source of norm-challenging competition to those institutions. It reinforces China’s new role as a member of the leading group of nations in the international system and its capacity to be a norm entrepreneur, strengthening the case for more equitable representation of developing nations and shifting the emphasis in international development away from the Washington Consensus toward a new, perhaps more practical, focus on infrastructure development. Its co-funding of important infrastructure projects with other MDBs bolsters economic security for its members and for the region. Indeed, in the post-Cold War world, economic security has more often been provided through international cooperation than through geopolitical confrontation (Cable 1995).
The AIIB has implemented best practice governance and operations and, on its own, the bank poses no apparent threat to the liberal international order. Of course, it must be placed within a broader context of the geopolitical contest underway between the USA and China and China’s ambitious programs – on a much larger scale than the AIIB – of funding infrastructure development through its Belt and Road Initiative and other regional and bilateral arrangements. Some nations that remain nervous of China’s broader ambitions have nevertheless welcomed the AIIB as a new source of multilateral infrastructure funding. China’s BRI and other initiatives reinforce the trend to multipolarity and a diversity of political, social, and economic systems within multilateralism. If the AIIB can be a source of learning to implement best practices across those broader programs, it will be very helpful indeed.
Cross-References ▶ Asian Development Bank (ADB) ▶ Critical Infrastructure ▶ Foreign Direct Investment (FDI) ▶ Infrastructure Development ▶ International Monetary Fund (IMF)
References Aberg, J. (2016). Chinese bridge-building: The AIIB and the struggle for regional leadership. Global Asia, 11(1), 70–75. Asian Development Bank. (2017). Meeting Asia’s infrastructure needs. https://www.adb.org/sites/default/files/publication/227496/special-report-infrastructure.pdf Asian Infrastructure Investment Bank. (2019, November 15). AIIB investment in India nears USD 3 billion. https://www.aiib.org/en/news-events/news/2019/20191115_001.html Asian Infrastructure Investment Bank. https://www.aiib.org/en/ Cable, V. (1995). What is international economic security? International Affairs (Royal Institute of International Affairs), 71(2), 305–324. Chan, L., & Lee, P. (2017). Power, ideas and institutions: China’s emergent footprints in global governance of development aid. CSGR working paper no. 281/17, Centre for the Study of Globalisation and Regionalisation, University of Warwick. www.warwick.ac.uk/csgr/papers/281-17.pdf Chen, J., & Chen, W. (2019). AIIB reaches milestone of 100 members. China Daily, July 15. http://www.chinadaily.com.cn/a/201907/15/WS5d2b7f05a3105895c2e7d54e.html Christensen, T. (2015). The China challenge. New York: W.W. Norton and Company. Department of Defense. (2018). National defense strategy. Washington, DC. https://dod.defense.gov/Portals/1/Documents/pubs/2018-National-Defense-Strategy-Summary.pdf Gåsemyr, H. (2018). China and multilateral development banks. Norwegian Institute of International Affairs. https://www.nupi.no/nupi/Publikasjoner/CRIStin-Pub/China-and-Multilateral-Development-Banks-Positions-Motivations-Ambitions Gilman, N. (2015). The new international economic order: A reintroduction. Humanity, 6(1), 1–16. Hameiri, S., & Jones, L. (2018). China challenges global governance? Chinese international development finance and the AIIB. International Affairs, 94(3), 573–593. https://academic.oup.com/ia/article/94/3/573/4992402 Hodzi, O., & Chen, Y. (2017). The great rejuvenation? China’s search for a new ‘global order’. Institute for Security & Development Policy, Asia Paper. Hu, W. (2015). De-Sinicization can counter concerns about AIIB. Global Times, December 3. http://www.globaltimes.cn/content/956445.shtml Humphrey, C. (2015). Infrastructure finance in the developing world: Challenges and opportunities for multilateral development banks in 21st century infrastructure finance. Global Green Growth Institute Working Paper, Seoul. Ikenberry, G. (2011). The future of the liberal world order. Foreign Affairs, May/June. https://www.foreignaffairs.com/articles/2011-05-01/future-liberal-world-order International Institute for Sustainable Development. (2019). Budapest Water Summit. BWS Bulletin, 82(36). Larionova, M., & Shelepov, A. (2016). Potential role of the New Development Bank and Asian Infrastructure Investment Bank in the global financial system. International Relations, 16(4), 700–716. Leng, A., & Rajah, R. (2019). Chart of the week: Global trade through a US-China lens. The Interpreter, December 18, Lowy Institute. https://www.lowyinstitute.org/the-interpreter/chart-week-global-trade-through-us-china-lens Liao, R. (2015). Out of the Bretton Woods: How the AIIB is different. Foreign Affairs, July 27. https://www.foreignaffairs.com/articles/asia/2015-07-27/out-bretton-woods Limpach, S., & Michaelowa, K. (2010). The impact of World Bank and IMF programs on democratization in developing countries. CIS Working Paper 62. https://www.ethz.ch/content/dam/ethz/special-interest/gess/cis/cis-dam/Research/Working_Papers/WP_2010/2010_WP62_Limpach_Michaelowa.pdf
Maliszewska, M., & van der Mensbrugghe, D. (2019). The belt and road initiative: Economic, poverty and environmental impacts (Policy Research working paper no. 8814). Washington, DC: World Bank Group. Manyika, J., et al. (2019, May). A new look at the declining labor share of income in the United States. McKinsey Global Institute discussion paper. https://www.mckinsey.com/featured-insights/employment-and-growth/a-new-look-at-the-declining-labor-share-of-income-in-the-united-states Mearsheimer, J. (2014). The tragedy of great power politics. New York: W.W. Norton and Company. Muttalib, H. (2010). Singapore’s embrace of globalization and its implications for the Republic’s security. Contemporary Security Policy, 23(1), 129–148. National Development and Reform Commission. (2015, March). Vision and actions on jointly building silk road economic belt and 21st century maritime silk road. Beijing. Oswald, S. (2018). The new architects: Brazil, China, and innovation in multilateral development lending. Public Administration and Development, 39, 4–5. https://doi.org/10.1002/pad.1837 Overholt, W. (1993). The rise of China: How economic reform is creating a new superpower. New York: W.W. Norton and Company. Peng, Z., & Tok, S. (2016). The AIIB and China’s normative power in international financial governance structure. Chinese Political Science Review, 1. https://doi.org/10.1007/s41111-016-0042-y Raiser, M., & Ruta, M. (2019). Managing the risks of the Belt and Road. World Bank Blogs, June 20. https://blogs.worldbank.org/eastasiapacific/managing-the-risks-of-the-belt-and-road Shambaugh, D. (2013). China goes global: The partial power. Oxford: Oxford University Press. Shelepov, A. (2018). The AIIB, multilateral and national development banks: Potential for cooperation. International Relations, 18(1), 135–147. State Council of the People’s Republic of China. (2015, March 30). Action plan on the belt and road initiative. Beijing. Stephen, M., & Skidmore, D. (2019). The AIIB in the liberal international order. The Chinese Journal of International Politics, 12(1), 61–91. https://doi.org/10.1093/cjip/poy021 United States Government. (2017). National security strategy. Washington, DC. https://www.whitehouse.gov/wp-content/uploads/2017/12/NSS-Final-12-18-2017-0905-2.pdf Woetzel, J., et al. (2019). China and the world: Inside the dynamics of a changing relationship. McKinsey Global Institute. https://www.mckinsey.com/featured-insights/china/china-and-the-world-inside-the-dynamics-of-a-changing-relationship Yagci, M. (2016). A Beijing consensus in the making: The rise of Chinese initiatives in the international political economy and implications for developing countries. Perceptions, XXI(2), 29–56.
Asian Monetary Fund (AMF) Claire Michaela M. Obejas College of Social Sciences, University of the Philippines Cebu, Cebu, Philippines
Keywords
Japan · IMF · AMF · Financial crisis · Asia · Regional · Economy
Introduction The Asian Monetary Fund (AMF) was a proposal made by Japanese financial authorities at the height of the Asian financial crisis of 1997. The proposal was made at the Group of Seven (G7)–International Monetary Fund (IMF) meetings held in Hong Kong in September of that year. The goal of the AMF was to overcome present and future economic crises in the region by securing a network created for, and funded by, Asian countries. This entry provides a brief overview of the circumstances and considerations that contributed to the emergence, evolution, and failure of the idea, based on a few authoritative sources on the subject (Narine 2001; Amyx 2002; Liu 2002; Lipscy 2003; Masahiro 2015).
The Evolution of the AMF Idea The creation of an IMF counterpart in Asia was first envisioned when the Asian Development Bank (ADB) was established in 1966. The premise was that an AMF would complement the activities of the ADB, mirroring the relationship between the IMF and the World Bank. This idea, however, was not actively pursued for decades because it failed to gain traction among world leaders. When the Thai baht collapsed in 1997, it provided an opportunity for the resurgence of the AMF idea. The amount of aid an IMF member country can secure from the Fund is determined by the country’s quota, and Thailand required more than three times its quota. The
IMF was unable to provide sufficient funds to lift Thailand from its crisis, leading Thailand to turn to Japan for assistance. Japan responded by organizing bilateral aid packages, which quelled the Thai crisis. However, the setup of individual aid packages and the accumulation of money to fend off currency attacks were time-consuming endeavors that could lead to even more economic and financial damage. Other countries in the region began to fear attacks on their own currencies. With the case of Thailand as an example, it was clear that the IMF quota ceiling on aid was too small to prevent crisis should the currencies of other nations similarly fail. This prompted Japanese officials to advance the AMF idea. The United States abstained from contributing to the aid effort for Thailand and instead lent its support to the suggestion of Japan’s then Vice Minister for International Finance, Eisuke Sakakibara, that Japan should carry a greater leadership role in Asia, one independent of the United States. Furthermore, the initial proposal of the AMF came about as Japan had grown increasingly resentful of US criticism of its financial and economic problems. Comprehensive management of national financial systems was considered important, but it was simply not part of the bilateral agenda. However, Japan’s failure to resolve its nonperforming-loan problem became a topic pursued by American officials in bilateral meetings. This created a defensive mood within the bureaucracy, contributing to a context that would support an initiative excluding the United States. Thus, the AMF that emerged as the official government proposal stemmed from Sakakibara’s conceptualization and was first advanced on August 11, 1997, at a meeting in Tokyo with the other countries that had contributed to the aid for Thailand.
During the rest of the month and the beginning of September 1997, Sakakibara informally proposed the idea of a US$100 billion fund that would provide support to Asian countries hit by crisis through trade finance and balance-of-payments assistance, while also acting as a pooled reserve for currency defense. This initiative, however, met with opposition. While Malaysia and most countries of
the Association of Southeast Asian Nations (ASEAN) were generally supportive, China and Singapore did not approach the idea with the same enthusiasm. China, the second-largest holder of foreign exchange reserves in Asia, was considered essential to any regional fund, and its lack of support for a regional institution that excluded the United States amounted to opposition. Other countries with close ties to the United States took a similar stance, and some informed US officials of Japan’s lobbying activities in the region. The initiative also met strong opposition from Japan’s counterparts in the USA and Europe when Japan raised the idea at the G7–IMF meetings in Hong Kong. An unspoken incentive to object was the fear that the respective roles of the IMF and the US government in Asia would diminish. Because Congress had constrained the USA from committing financial resources to international initiatives after the 1995 Mexican peso bailout, the IMF had become a valuable mechanism for US influence in international financial crisis management and international monetary affairs, and the prospective existence of an AMF threatened the IMF’s authority. Formally, the main reason the IMF and the United States gave for their opposition was the AMF’s duplication of IMF functions. Many IMF officials also argued that because an increase in IMF resources was pending for 1998, it was unnecessary to augment the IMF with a separate fund. Many European officials, along with the United States, also suspected that Japan would attach softer and more lenient conditionality to aid. They argued that doing so would not help countries in crisis in the long run, as it would only promote moral hazard. At this point in the progression of the AMF, Sakakibara’s conceptualization was still abstract, and so details remained vague.
While Ministry of Finance (MOF) officials sought to minimize concerns over a weaker conditionality, conditions within Japan contributed to the suspicions of other countries. Japan’s standards of financial regulation failed to conform to internationally recognized best standards in 1997. Furthermore, “fundamentals” that the IMF focused on in establishing conditions for the receipt of funds had similarly failed in Japan.
After Japanese officials relayed this resistance to their Asian counterparts, many Asian countries wavered in advancing the AMF idea, especially amid strong US objection. Many of these countries feared a weakening of relations with the United States, as US-ASEAN relations were considered just as vital as Japan-ASEAN relations. Chinese opposition was also a major factor. However, concerns about the failures of the IMF grew as the financial crisis spread beyond Thailand and into Indonesia. Fourteen countries representing the Asia-Pacific region, including the United States, Canada, Australia, and New Zealand, met in Manila to discuss a framework aimed at currency stability through strengthened Asian regional cooperation. The Manila Framework led to several programs of technical support and to the decision that deputy finance ministers and central bank deputies would meet twice a year to deliberate on monetary surveillance in the region and on international finance. Among the notable support programs to come out of the Manila Framework was the Cooperative Financing Arrangement (CFA), an understanding among members of the group that each member country would assist the others in times of crisis. An IMF agreement was necessary for assistance to be carried out, and the CFA would be actively utilized only if the IMF proved inadequate. The CFA proposition addressed both resource inadequacy and the speed of financing provision, the two problems that had generated the initial idea for a regional fund. Japanese officials would agree that the AMF’s objectives were in line with those of the CFA. Due to several instances of the IMF failing in the region – such as the case of Indonesia, where banks failed without any system of deposit insurance – and the US government’s relatively “harsh” disposition toward crisis-hit countries, the IMF’s conditions for providing assistance received great criticism.
Japanese officials became even more outspoken in this criticism of the IMF; former MOF Vice Minister for International Finance Toyo Gyohten, for instance, wrote that Japan should play a weightier role
in the affairs of Asia. Japan’s Ambassador to Korea, in another article, argued that the crisis should bring East Asian countries together to secure the creation of a new Asian fund. Japan’s diplomatic efforts also increased, with visits by the head of state and other officials to crisis-hit countries such as Indonesia and Malaysia. All of this contributed to regional interest in establishing a lender not dominated by the USA. In October 1998, Finance Minister Kiichi Miyazawa announced a new program of bilateral aid, the New Miyazawa Initiative, that would provide US$30 billion in loans and loan guarantees to help revive the countries hit by crisis. This initiative was deemed a first step toward the revival of the AMF concept. Because the efforts of Japanese officials aided other Asian countries and made up for the shortcomings of IMF conditionalities, US officials came to appreciate those efforts. However, because the initiative would mean that Japan was acting on its own, US and Japanese officials agreed to create a cooperative scheme. Through this, the Asian Growth and Recovery Initiative was established in November 1998. One of its major components was the establishment of an Asian Currency Crisis Support Facility within the ADB, funded from the New Miyazawa Initiative. It was expected that the cooperative nature of the initiative would assuage perceptions of rising tensions between Japanese and US officials. In 1999, backup facilities in the form of currency swap agreements were added to the New Miyazawa Initiative. This represented a shift from simply providing aid for current economic difficulties to establishing an institutional framework to prevent contagion, and it led to another push for the AMF idea. Sentiments of cooperation in the region encouraged a new regional grouping of ASEAN with Japan, China, and South Korea – the “ASEAN+3” – which was seen as a catalyst for strengthened bilateral ties.
This strengthening was in turn viewed as a means to overall regional stability. In January 1999, the ASEAN+3 met to discuss the formation of a permanent regional fund.
At this meeting, ASEAN member nations expressed the expectation that Japan would be more visible during G7 meetings in order to represent the region. Japan responded positively to these expectations, strengthening ties among Asian countries and allowing the AMF idea to be reconsidered. Asian countries recognized that the original AMF plan had shortcomings, but amid growing criticism of the IMF, they now turned to Japan for leadership in establishing a regional fund supplementary to the IMF. Because the IMF lacked the information needed to aid each country and instead applied the same template to all, it was believed that a regional arrangement would allow officials to look more closely into each country’s specific needs, leading to more accurate solutions to crises. When the ASEAN+3 met in Chiang Mai, Thailand, in May 2000, a framework for strengthening East Asian financial cooperation was announced. It had two components: the expansion of the existing ASEAN swap agreement (ASA) and the establishment of a network of bilateral currency swap arrangements (BSAs), which were expected to aid in defending against future speculative currency attacks. The ASA expansion extended an existing swap network among the top four ASEAN countries to all ten ASEAN member nations and increased the funds in the network from $200 million to $1 billion. This was still very small, however, and unable to mount a credible defense against any possible attack, doing little to guard against contagion. The second component of the Chiang Mai Initiative was seen as a more advanced development, as the inclusion of Japan, China, and South Korea enabled a considerable expansion of the swap facilities. By April 2001, the finance ministers of the ASEAN+3 had agreed that assistance under a bilateral swap arrangement would be conditional on the swap partner borrowing the equivalent of 90% of the swapped amount from the IMF.
Only up to 10% of the maximum amount could be drawn without any connection to IMF facilities, and for that portion the swap-providing countries had to determine that the swap-requesting country was indeed facing a short-term liquidity problem (for example, a country with access to a US$1 billion swap line could draw only US$100 million without entering an IMF program).
Conclusion In the end, by implication of the conditions referred to above, recipient countries were still required to abide by the strict economic and fiscal conditions stipulated by the IMF for its lending. Despite its many revisions and the efforts at its revival, the development of the AMF idea thus ultimately left dependence on the IMF intact.
References Amyx, J. A. (2002, September). Moving beyond bilateralism? Japan and the Asian Monetary Fund (Asia Pacific economic papers no. 331). Canberra: Australia-Japan Research Centre. Lipscy, P. Y. (2003). Japan’s Asian monetary fund proposal. Stanford Journal of East Asian Affairs, 3(1), 93–104. Liu, H. C. K. (2002). The case for an Asian Monetary Fund. Asia Times Online. Retrieved originally from http://www.atimes.com/, https://henryckliu.com/page119.html. Accessed 20 Jan 2020. Masahiro, K. (2015, May). From the Chiang Mai initiative to an Asian Monetary Fund (ADBI working paper series no. 527). https://www.adb.org/sites/default/files/publication/160056/adbi-wp527.pdf. Accessed 20 Jan 2020. Narine, S. (2001). ASEAN and the idea of an “Asian Monetary Fund”: Institutional uncertainty in the Asia Pacific. In M. Caballero-Anthony & A. D. B. Cook (Eds.), Non-traditional security issues in Southeast Asia (pp. 227–254). Singapore: Institute of Southeast Asian Studies.
Assimilation Tuğba Bayar Department of International Relations, Bilkent University, Ankara, Turkey
Keywords
Human rights · Integration · Culture · Minority
Introduction As the twenty-first century began, the number of international migrants worldwide began to
climb from 173 million in 2000 to 220 million in 2010 and 258 million in 2017. The movement is not confined to particular areas but includes all continents and states. According to studies, migration mainly occurs between countries situated in the same world region. This trend, however, does not guarantee migration among similar cultures. In response to the migration influx, states have adopted diverse policies regarding the presence of migrants. States that are especially alert to the security concerns created by the presence of migrants tend to take stricter measures vis-à-vis migrants’ culture and their integration into the host country.
Definition In the broadest sense, the term assimilation refers to the elimination of differences during a cultural encounter. During this sociocultural process, the dissimilarities between cultures disappear, resulting in the creation of a single overarching culture. Assimilation is often treated as an equivalent of the term acculturation, which is preferred by anthropologists. However, acculturation refers to the adoption of the behavioral patterns of the host country, whereas assimilation is a structural and radical transformation.
Rethinking Assimilation Policies During Mass Migration This political philosophy is the opposite of the multiculturalism approach, which advocates cultural diversity in a society. In a pluralist society, individual identities are kept and social differences remain. Assimilation is a cultural identity transformation that is most observable when a dominant culture encounters minorities. The concept was first put forward by J. Hector St. John Crèvecoeur through his melting pot theory of migrants melting into the host culture. This theory describes a reciprocal assimilation process in which the meeting cultures disappear to create a third culture. This is how white people from various European countries arrived in America and became Americans
collectively. The great majority of assimilation theories emphasize the unilaterality of the practice. The unilateral assimilation does not create a third culture. It takes place when the minorities relinquish their own features and assume the features of the dominant culture. Scholars distinguish between forced and unforced assimilation. The unforced assimilation is not a product of systematic policies, and it does not create frictions; instead it is a peaceful course since it grows out of voluntary integration efforts. As a result of both unforced and forced assimilation, the cultural distinctiveness of the minority disappears. However, forced assimilation has a shattering impact on the minority. The forced assimilation is a controversial issue. A religious, ethnic, linguistic, etc. minority is compelled to take on the religion, culture, language, etc. of the dominant society. Historically, assimilation was first implemented during colonialism, especially by France, by adopting French language and culture for uniformity. The colonial assimilation policy assumed that spreading of own language and culture would facilitate colonization in many ways, especially by integrating indigenous community into the settler majority. Regardless of their ethnicity, the indigenous people should resemble to Frenchmen with their language; culture, even daily practices; behaving; and thinking. The aim was merging the material, moral, and mental ties with remote colonies and France. By the expansion of human rights in time, it is understood that the colonial policy of assimilation is an infringement of the basic rights of the indigenous people. The United Nations Declaration on the Rights of Indigenous Peoples establishes a universal framework to ensure rights and freedoms of indigenous peoples. According to Article 8 (2.a.) 
of the Declaration, “states shall provide effective mechanisms for prevention of, and redress for any form of forced assimilation or integration.” This article is formulated to prevent the traumas of colonization from repeating in the future. Following colonialism, assimilation became a significant tool of nation building and consolidation: minority groups living within the nation-building state become the subjects of assimilation. Assimilation has also been used as a component of genocide, a concept also referred to as cultural genocide
or ethnocide, which amounts to an extreme violation of cultural rights. The aim is the destruction of a specific culture or community by restricting its members from using their own culture, language, or religion, or even by forcibly transferring the children of the community. Today, owing to the global influx of asylum seekers, immigrants are the main subjects of assimilation policy. Immigrants change the homogeneity of a society; moreover, the immigrant population is itself not monolithic, and its multiplicity of languages, religions, cultural practices, and general skills differs from the host country's features. Immigrant assimilation aims to naturalize this population in order to create homogeneity. This approach is seen as an extreme nationalistic policy, and it assumes that a shared national identity is the basis of a state's integrity. The existence of different identities within a state is perceived as a weakness that may lead to disintegration over time. This kind of political view therefore sees assimilation as the cement of national unity and everlasting sovereignty; in other words, assimilation is used to transform potential secessionists into loyal citizens. In this sense, assimilation is perceived as a security strategy to eliminate social fissures and foreign influences in the country. Contrariwise, the neoliberal economic system is upheld by the mobility of skilled and unskilled labor (besides the mobility of capital, interest, etc.). As globalization and neoliberalism strain the nation state, this tension is reflected in the rights afforded to newcomers. In various political discourses, the sustainability of migration is assessed through assimilationist tendencies. Public understandings of migrant assimilation have two turning points: the 9/11 attacks and the rise of ultraright populism in the twenty-first century. The post-9/11 security policies, particularly in the USA, were formulated as a response to Islamic fundamentalism.
The rise of an orientalist worldview led to racialized terrorist profiling and presented Islam as a breeding ground for extremism and terror. The fear of infiltration by potential terrorists divided American society into “us” and “potential terrorists.” A similar wave became observable with the emergence of the Syrian crisis, followed by a migration influx and the rise of European ultraright policies. The migrant flood
fueled a nationalistic discourse that revolves around challenges to welfare, labor markets, national identity, and national security. Populist rhetoric disfavors multiculturalism and pushes migrants toward cultural assimilation, including of their belief and value systems. The populist upsurge rests on racist and xenophobic elements; the aim is to take control of national borders and prevent the sharing of economic welfare with outsiders. Immigrants must assimilate into Western culture to obtain better jobs and residence or employment permits. Minority cultures and languages are undermined within the administrative network of the state, such as the education and judicial systems, as well as in media and publications. The assimilation process is complete when the former minority group becomes indistinguishable from the dominant group. The completeness of assimilation is observed in certain fields, such as the cessation of value conflict, the end of discrimination, the termination of prejudices, extensive intermarriage, and attachment to the clubs and institutions of the host country. Be it willful or coercive, assimilation comes under scrutiny when it violates human rights. Some scholars argue that consent to assimilation is itself a result of direct or indirect coercion: in order to survive within the dominant culture, to receive services such as education and health care, to find employment, and to be socially accepted, minorities drop their distinctiveness. The process of assimilation clashes with the idea of minority rights, which aim at the protection and promotion of minority identity. Minority rights are about preventing discriminatory approaches and creating equality among distinctive identities. Minorities are vulnerable before dominant cultures; they face direct and indirect, de jure and de facto discrimination in the societies they inhabit. Through assimilation, cultures, languages, and religions disappear.
The rights of minorities are protected by two basic principles of international law: the principle of nondiscrimination and equality before the law. Direct discrimination is observable and punishable in any state governed by the rule of law. Indirect discrimination is not easily diagnosed and therefore cannot be easily eliminated; the inequality can, however, be observed in the
outcome. Unequal treatment is permitted by law and by minority rights frameworks when it provides positive discrimination for the advancement of disadvantaged groups.
Conclusion Cultural survival is an undeniable civil right. Minorities can be protected against assimilation by advancing nondiscrimination policies through which minority groups enjoy economic, social, and cultural rights and gain equal access to social services and employment. The likelihood of assimilation is minimized when minority groups are represented in public institutions and included in policy making, public affairs, and all aspects of political, economic, and cultural life.
Cross-References ▶ Migration-Security Nexus ▶ Refugees
References 1948 Universal Declaration of Human Rights. de Crèvecoeur, J. H. S. J. (1995). More letters from the American farmer: An edition of the essays in English left unpublished by Crévecoeur. Athens: University of Georgia Press. Gordon, M. M. (1964). Assimilation in American life. Oxford: Oxford University Press. International Convention on the Elimination of All Forms of Racial Discrimination. (1963). The 1951 Convention Relating to the Status of Refugees. The 1966 International Covenant on Economic, Social, and Cultural Rights. The Convention on the Elimination of All Forms of Discrimination against Women. (1979). The International Convention on the Protection of the Rights of All Migrant Workers and Members of Their Families. (1990). The UN Declaration on the Rights of Indigenous Peoples. (2007).
Further Reading Favell, A. (2015). Immigration, integration and mobility. Colchester: ECPR Press. Heckmann, F., & Schnapper, D. (Eds.). (2016). The integration of immigrants in European societies: National differences and trends of convergence (Vol. 7). Berlin: Walter de Gruyter. Kivisto, P. (2015). Incorporating diversity: Rethinking assimilation in a multicultural age. London: Routledge. Schneider, J., & Crul, M. (Eds.). (2014). Theorising integration and assimilation. Oxon: Routledge. Vollebergh, W., Veenman, J., & Hagendoorn, L. (Eds.). (2017). Integrating immigrants in the Netherlands: Cultural versus socio-economic integration. Routledge. Xie, S., Leng, X., & Ritakallio, V. M. (2016). The urban integration of migrant workers in China: An assimilation–integration pattern. China Journal of Social Work, 9(3), 257–277.
Authoritarianism Francis Grice Department of Political Science and International Studies, McDaniel College, Westminster, MD, USA
Keywords
Authoritarianism · Vladimir Putin · Recep Tayyip Erdogan · Military dictatorship · Sudan · Nationalist China
Introduction Authoritarianism as a concept has experienced an upswing in attention over the past few years, driven at least partially by the devolution of multiple democracies from proto-democratic models of government to more authoritarian models. Vladimir Putin in Russia and Recep Tayyip Erdogan in Turkey have stood out as particularly high-profile examples of authoritarian leaders who have worked to distort the democratic dimensions of their governments to prevent the appearance of serious rivals to their rule, alter the political systems in which they operate to enable their protracted reign, and generally embed themselves as uncontestable rulers. After years of increasing democratization and liberalization of political
systems across the world, these cases have helped to reignite interest in authoritarianism, its characteristics, and its security implications. Within the United States, the election and tenure to date of Donald Trump as President have raised questions – rightly or wrongly – about whether even the state that purports to be the “leader of the free world” has begun sliding down the precariously slippery path toward authoritarianism. Paul Mason summed up the somber mood of many analysts in his recent claim in The Guardian newspaper that “Democracy is dying” (Mason 2017). Despite the recent resurgence of interest in authoritarianism, the concept is scarcely new and has formed the focus of numerous studies since the end of World War II. Juan J. Linz was one of the first prominent writers on the topic in the postwar era, with an opening treatise, published in 1964, that examined the character of authoritarianism in Spain specifically. Linz wrote a series of subsequent works that addressed authoritarianism as a form of government, including his seminal work Totalitarian and Authoritarian Regimes, which was first published in 1975 and reprinted in 2000 with an extensive introduction that covered the evolution of authoritarianism in the intervening period. Paul C. Sondrol also made substantial contributions to the field by fleshing out the differences between totalitarianism and authoritarianism, especially in his 1991 article “Totalitarian and Authoritarian Dictators: A Comparison of Fidel Castro and Alfredo Stroessner.” Milan W. Svolik added further to the field through his discussion, in The Politics of Authoritarian Rule, of the challenges that authoritarian regimes face in maintaining their control over the population. These represent only a sample of the major authors who have made notable contributions to the academic literature on the topic of authoritarianism.
Authoritarianism as a Concept Authoritarianism is both difficult and controversial to unpack as a concept. In many ways it is a term that can be defined as much by talking about
what it does not embody as by what it does include. As Linz (1964) notes, authoritarian regimes are those: Political systems with limited, not responsible, political pluralism, without elaborate and guiding ideology, but with distinctive mentalities, without extensive nor intensive political mobilization, except at some points in their development, and in which a leader or occasionally a small group exercises power within formally ill-defined limits but actually quite predictable ones. (p. 291)
At its core, then, an authoritarian state is ruled over by one or a small number of leaders, yet in a way that is neither totalitarian nor democratic. The goal of the leadership is typically to stay in power no matter the cost, with any ideological considerations playing at best a secondary role to this goal. This has led some scholars to use the term as a kind of wastebasket diagnosis for all nondemocratic systems that do not fall obviously into other categories. Some of the government types that have been placed into the category as a result include military juntas, single-ruler dictatorships, and oligarchies with a weak or nonexistent ideological drive. Yet this depiction of the regime type appears too broad to provide a satisfactory account of the term, and it is important to break the term down further into its constituent types and parts. Military-dominated regimes are one major form of authoritarianism because the military often – but not always – benefits from an elevated position of power and privilege within these regimes, especially if it played an instrumental role in bringing the government into power (Linz 1970, pp. 271–272). This reality is bolstered by the moral hazard faced by authoritarian governments: they need the military to help deter and prevent the population from rising up against them. In exchange for this cooperation, authoritarian leaders often feel compelled to surrender positions of high prestige and influence to the military, which in turn supplies the military with the ideal vantage point needed to launch a coup d'etat against the regime. Most militaries are restrained from such an action, however, by the realization that their own positions could be weakened and the privileges that they have garnered
under their current regime would be put at risk if they attempted such an endeavor (Svolik 2012a). Military-dominated authoritarian regimes are not the only kinds that exist, however, and Linz identifies a total of five main types. The first type is traditional authoritarian regimes, which hold onto power through appeals to tradition, the cultivation of patron-client relationships with influential groups among the population, and the application of repression by groups with personal loyalty to the regime. The second is bureaucratic-military authoritarian regimes, in which military officers and technocrats within the bureaucracy work to maintain power over the population using pragmatic measures, often with a focus upon the economy. The third is corporatist authoritarian regimes, in which the regime uses corporatist institutions to co-opt and defuse interest groups that might otherwise threaten it. The fourth is racial and ethnic democracies, in which specific racial and ethnic groups are empowered with full democratic rights while other groups are denied them. The fifth is post-totalitarian authoritarian regimes, in which totalitarian institutions such as a zealously ideological party, a highly mobilized population, and extreme state surveillance were once present but have faded into shadows of their former selves without being replaced by democratic government (Gasiorowski 1990, pp. 114–115). Within all of these types, a number of actors play influential roles to varying degrees, including the police, the military, the bureaucracy, and the judiciary (Neumann 1957, p. 236), as well as the bureaucratic-military complex, political police, paramilitary forces, and youth movements (Brooker 2009, p. 30). One of the defining challenges for authoritarian regimes is retaining power.
Svolik (2012b) asserts that two main challenges face the dictators of authoritarian regimes in this regard: first, the problem of a small ruling group controlling a much larger population, and second, the challenge of power-sharing among the ruling group, because even dictators need the support of elites to cement their rule and implement their agenda. Managing both challenges, especially the latter, is essential for the durability of authoritarian governments because over two-thirds of all authoritarian
dictators that were ousted between 1946 and 2008 were removed by regime insiders through coups d'etat and other palace intrigues, while another 11% were forced out of power by a popular uprising (the remainder either transitioned incrementally to democracy or were toppled by foreign invasion or assassinated). To forestall the threat of popular uprising, authoritarian regimes typically depend heavily upon coercive repression, divide-and-rule tactics, scapegoating minority groups to distract attention away from their own corrupt rule, and the provision of incentives in exchange for loyalty among key civilian groups (Bove et al. 2017, p. 411; Malantowicz 2010, p. 161). To manage the challenge of elite power-sharing, they usually engage in bargaining with political elites, often by creating a parallel power hierarchy, such as a political party, and allocating influential positions within this structure to their loyal supporters and friendly political elites (Magaloni 2008, p. 2). Going beyond simply rewarding their closest supporters, authoritarian leaders also frequently create legislatures with marginal powers as a way to tempt potentially hostile individuals and groups into accepting seats in this body in exchange for acquiescing to the overarching rule of the regime (Gandhi and Przeworski 2006).
Comparisons with Other Regimes Authoritarianism shares many similarities and overlaps with other types of government, as well as notable differences and variations. The three most notable regime types that possess interconnections with authoritarianism are totalitarianism, democracy, and absolute monarchy. Comparisons with totalitarian states are commonplace among scholars, and the two types of system contain many similarities. Both systems are ruled by a single leader or a small body of leaders, who aim to centralize all of the power in the state within their control rather than share it with the population. They often use austere and violent measures to maintain their control over the political apparatus in the country and clamp down upon basic political rights and civil liberties, such as a free press. These similarities and overlaps have led scholars
to differing opinions regarding whether states such as Cuba and North Korea are authoritarian or totalitarian regimes. There are, however, a variety of distinctions that set the two types of government firmly apart. One major difference is that the charisma of leaders in authoritarian governments tends to be lower than that of leaders in totalitarian systems. A second difference is that the rulers of an authoritarian state usually view themselves as individuals, whereas totalitarian leaders often believe that they are functionaries entrusted with carrying out an ideological mission. A third is that authoritarian rulers typically use their power for their own benefit, whereas totalitarian leaders aim for their use of power to affect the broader population. In addition, authoritarian governments frequently lack strong ideological motivators, possess low legitimacy in the eyes of the population, and employ limited and fraudulent pluralism to bolster their position as a result. In contrast, totalitarian regimes are usually dominated by powerful ideological drivers, possess much higher legitimacy among the population, and decline to employ quasi-pluralist charades to support their rule (Sondrol 1991). Another major difference is that totalitarian governments strive to obtain total control over all aspects of the social, cultural, and political lives of the population over whom they rule (Arendt 1976, p. 326). As one mechanism to achieve this end, they typically create and maintain an extensive security, surveillance, and social control infrastructure that enables them to monitor and “correct” the everyday lives and actions of their subjects.
This includes far-reaching secret policing; the embedding of political commissars and party cadres at all levels of military and civilian government, tasked with monitoring, correcting, and, if need be, punishing acts of dissidence against the state; pervasive media censorship; neighborhood watch groups in which subjects are pushed to report one another for any nonconformity with the rule of the party; and propaganda that indoctrinates the population with pro-government dogma. In contrast, authoritarian governments have less interest in, and less capability of, controlling the lives of their citizens so fully. They may possess
and employ surveillance, pro-government propaganda and censorship, and other forms of control, but these are substantially reduced both in terms of breadth of use and intensity of application. The task of delineating authoritarianism from totalitarianism is complicated by a lack of conformity in how the two terms are defined. In their article on “Pyongyang’s Survival Strategy: Tools of Authoritarian Control in North Korea,” for example, Daniel Byman and Jennifer Lind label North Korea repeatedly as authoritarian and describe a variety of characteristics to support their claim. Yet the qualities of authoritarianism that they match North Korea against, including “restrictive social policies; manipulation of ideas and information; use of force; co-optation; manipulation of foreign governments; and institutional coup-proofing,” appear more closely aligned with traditional depictions of totalitarianism. They even refer directly to North Korea as a totalitarian state (Byman and Lind 2010). Another reason why a blurred line often exists between the two regime types is that many totalitarian governments were founded on the back of a fervent revolution but later transitioned to authoritarianism after their initial leaders died and the ideological luster of their successors faded. This happened in the former Soviet Union, which shed many of its totalitarian features and became effectively an authoritarian state after the death of Stalin in 1953, before making the next transition into democracy in 1991. It also happened in China, which morphed from being a totalitarian state under Mao Zedong into an authoritarian one following his death in 1976 and succession by Deng Xiaoping. This represents an interesting occurrence because it could be argued that a totalitarian state is best equipped to become a democratic state if it first goes through a period of authoritarianism rather than attempting to leap from one extreme of the spectrum to the other. 
The challenges encountered by the United States when it tried to replace the partially totalitarian rule of the Baathist Party under Saddam Hussein with a democratic order in Iraq have been posited by some scholars as an example of the struggles that can ensue when a totalitarian regime is replaced with a democratic government without
first going through a period of authoritarianism (Jabar 2003). Using this perspective, it would also be possible to argue that the current retrenchment away from democracy in Russia shows that the state had not transitioned fully enough through an authoritarian period for democracy to root itself in the political culture of the country. It could also be used to give hope to the idea that China will one day become a democratic state, once it has finished gestating through its current authoritarian period. Conversely, it is also possible for the evolution to happen the other way, with authoritarian states transitioning into totalitarian ones. Some scholars have suggested that this sequence is currently happening in Russia, where Vladimir Putin has begun reconstructing a security state with extensive surveillance and secret police, planting and fostering ideological goals, and stimulating mass mobilization to support these goals (Kaylan 2016; Gessen 2017). Similar observations have been made about China today, with Xi Jinping systematically deconstructing the limited pluralist institutions that had been developed in post-Maoist China, dramatically increasing surveillance and curtailing freedom of speech, cementing a cult of personality around himself, and pushing ideological visions such as the China Dream and Xi Jinping Thought on Socialism with Chinese Characteristics for a New Era (Editorial Board 2018; Gracie 2017). The number of wholly nondemocratic authoritarian states has declined since the end of the Cold War, but there are nevertheless a substantial number of states that ostensibly purport to be democratic but actually possess the trappings of authoritarian government.
Elections are commonplace in nearly all countries around the world, for example, but in many cases these elections are rigged by the ruling party to exclude potential rivals, compel people to vote for the government rather than its opponents, silence any and all criticisms of the government in the press, and rewrite the results. Some examples of states that have been accused of this kind of hybrid democratic-authoritarian rule include Azerbaijan, which has been ruled by the New Azerbaijan Party since becoming a state in 1992; Belarus, which has
been under the rule of Alexander Lukashenko since 1994; and Singapore, which has been ruled by the People's Action Party since 1959 (Rumyantsev 2017; Hurynovich 2011; Toh 2015). A third government system that possesses significant overlaps with authoritarianism is absolute monarchy. In both kinds of system, there exists an absence of democratic rule, no constitutional constraints upon the ruler, and the absolute authority of the ruler over the population. There are again, however, notable differences, including that in an absolute monarchy, power is usually legitimized as being concentrated in the hands of the monarchy as a result of lineage and, in some cases, divine right. In an authoritarian system, power is often justified by calls upon other sources of authority, including nationalism and xenophobia, threats of social disorder, and sectarianism. This raises another issue: the degree to which authoritarian governments possess more or less stability than their totalitarian, democratic, and absolute monarchical counterparts. This stems in part from the challenge of claiming legitimacy. While totalitarian regimes can point to the alleged benefits of their ideological doctrine, democratic governments are bolstered by the presence of a popular mandate, and absolute monarchies can point to tradition and divine right, authoritarian governments lack an equivalent claim to legitimacy. The authoritarian government in post-Maoist China has attempted to circumvent this problem by using economic strength as one of the primary validations of its rule, including pointing to the existence of a booming economy and rapidly increasing quality of life under its rule. Supporting these claims, it can point to annual GDP growth that was in the double digits for much of the 1980s and 1990s and has only now come back down to around 6% – a figure that is still over double that of most Western democracies.
The difficulty with this strategy is that if the economy crumbles, then so too does the government's case. It is for this reason that so many scholars have predicted the demise of China's government if it should ever experience an economic bust or fail to supply its citizens with the resources they need. The fact
that the Chinese government is aware of this risk helps to explain why it has been so determined to seek alternative forms of wealth generation, including through the rapid expansion of foreign trade and the forceful contestation or seizure of territorial areas with high resource availability.
Examples of Authoritarianism One current-day example of an authoritarian regime can be found in Sudan, which has been ruled by Omar al-Bashir – the world's only serving head of state who has been indicted by the International Criminal Court – and his National Congress Party since 1989 (Massoud 2013; Sudan 2017). Despite the existence of a limited number of political parties and a degree of civic activity, the president continues to deploy violent repression against any potential opposition, enact and implement legislation to deprive people of their right to assemble and exercise free speech, and harass journalists and human rights organizations (Cairo Institute for Human Rights Studies 2012, pp. 309–322). In addition to deploying repression against potential dissidents, the regime offers incentives to key groups to buy their support (Sudan 2017). It further has a long tradition of scapegoating various minority groups within the country and committing mass violence against them, including in the Darfur conflict, when Sudanese militias were deployed in a horrifying ethnic cleansing campaign against Darfur's non-Arab population, in part as a political mechanism to focus the remainder of the population away from the harsh and authoritarian rule of Bashir and his supporters (Malantowicz 2010, p. 162). An historical example of an authoritarian regime was Taiwan under the reign of the Guomindang Party from 1949 to 1990. Led by Chiang Kai-shek, the Guomindang fled to the island province and settled there in 1949 after losing the Chinese Civil War against the communists in mainland China. Yet their presence was severely resented by large portions of the population from the outset. Proof of their low popularity can be seen in the fact that
the indigenous Taiwanese population had risen up against the rule of Chiang Kai-shek two years earlier, in 1947, an uprising that was quickly and ruthlessly suppressed by Guomindang security forces (Chiou 1992). Throughout the following 41 years, limited pluralism did exist within Taiwan, but the democratic system was effectively held under the monopoly of the Guomindang Party, and little if any real opposition was allowed to flourish. While the Guomindang loosely adhered to the three revolutionary principles of its founder, Sun Yat-sen, its leaders tended to value pragmatism over ideological doctrine. Power was centralized first around Chiang Kai-shek and then around his son, Chiang Ching-kuo, but no real efforts were made to achieve mass mobilization behind their agenda, with political control limited to suppressing potential rivals rather than seeking totalitarian levels of control over the population as a whole (Přibyla 2011, p. 76). Although the state did not formally scapegoat a minority population in the same way as the Sudanese regime, it did leverage the threat of Communist invasion as a mechanism to dampen dissent against the regime (Alagappa 2017, p. 12).
Conclusion Authoritarianism is a challenging concept to define, with interpretations ranging from complex models involving multiple variants and components to more simplistic depictions of it as a catchall term for all nondemocratic systems that do not fit easily into other categories. Retaining power represents the primary driving concern for most authoritarian governments, yet they lack the ideological vision, popular mandate, and claims to divine right and tradition that totalitarian regimes, democracies, and absolute monarchies use to promote their legitimacy to their population, which can undermine the durability of their rule. The degree to which authoritarianism is currently on the rise, rolling back decades of democratization, remains an area of both contestation and concern for many scholars and policy makers across the globe today.
Authoritarianism
Cross-References ▶ Communism ▶ Fascism ▶ Nondemocratic Systems ▶ Totalitarianism
References Adorno, T. W., Frenkel-Brunswik, E., Levinson, D., & Sanford, N. (1950). The authoritarian personality. New York: Harper & Brothers. Alagappa, M. (2017). Taiwan's presidential politics: Democratization and cross-strait relations in the twenty-first century. London: Routledge. Arendt, H. (1976). The origins of totalitarianism. San Diego: Harcourt Brace & Company. Bove, V., Platteau, J., & Sekeris, P. G. (2017). Political repression in autocratic regimes. Journal of Comparative Economics, 45(2), 410–428. Brooker, P. (2009). Non-democratic regimes. Basingstoke: Palgrave. Byman, D., & Lind, J. (2010). Pyongyang's survival strategy: Tools of authoritarian control in North Korea. International Security, 35(1), 44–74. Cairo Institute for Human Rights Studies. (2012). Delivering democracy: Repercussions of the “Arab Spring” on human rights. Human rights in the Arab region: Annual report 2012. Cairo: Cairo Institute for Human Rights Studies. Chiou, C. L. (1992). The uprising of 28 February 1947 on Taiwan: The official 1992 investigation report. China Information, 7(4), 1–19. Editorial Board. (2018). A new form of totalitarianism takes root in China. The Washington Post. Retrieved from https://www.washingtonpost.com/opinions/global-opinions/a-new-form-of-totalitarianism-takes-root-in-china/2018/02/26/afd7c5cc-1b1d-11e8-b2d9-08e748f892c0_story.html Fromm, E. (1957). The authoritarian personality. Deutsche Universitätszeitung, 12(9), 3–4. Gandhi, J., & Przeworski, A. (2006). Cooperation, cooptation, and rebellion under dictatorships. Economics and Politics, 18(1), 1–26. Gasiorowski, M. J. (1990). The political regimes project. Studies in Comparative International Development, 25(1), 109–125. Gessen, M. (2017). The future is history: How totalitarianism reclaimed Russia. New York: Riverhead Books. Gracie, C. (2017). China's Xi Jinping consolidates power with new ideology. BBC News.
Retrieved from http:// www.bbc.com/news/world-asia-china-41677062 Hurynovich, P. T. (2011). Belarus: Stability instead of democracy. Nouvelle Europe. Retrieved from http:// www.nouvelle-europe.eu/node/1138
107 Jabar, F. A. (2003). Analysis: Conditions for democracy in Iraq. BBC News. Retrieved from http://news.bbc.co.uk/ 2/hi/middle_east/2952867.stm Kaylan, M. (2016). Putin brings back the KGB as Russia moves from authoritarian to totalitarian. Forbes. Retrieved from https://www.forbes.com/sites/ melikkaylan/2016/09/20/putin-brings-back-the-kgb-asrussia-moves-from-authoritarian-to-totalitarian/#47c9 2bc2398a Linz, J. J. (1964). An authoritarian regime: Spain. In E. Allardt & Y. Littunen (Eds.), Cleavages, ideologies, and party systems: Contributions to comparative political sociology. Helsinki: Transactions of the Westermark Society. Linz, J. J. (1970). An Authoritarian Regime: Spain. In E. Allard and S. Rokkan. Mass Politics: Studies in Political Sociology. New York: Free Press. Linz, J. J. (1975/2000). Totalitarian and authoritarian regimes. Boulder: Lynne Rienner Publishers. Magaloni, B. (2008). Credible power-sharing and the longevity of authoritarian rule. Comparative Political Studies, 41(4–5), 715–741. Malantowicz, A. (2010). Do ‘New Wars’ theories contribute to our understanding of the African conflicts? Cases of Rwanda and Darfur. Africana Bulletin, 58, 159–172. Mason, P. (2017). Democracy is dying – and it’s startling how few people are worried. The Guardian. Retrieved from https://www.theguardian.com/commentisfree/ 2017/jul/31/democracy-dying-apeople-worried-putinerdogan-trump-world Massoud, Mark Fathi. (2013). Sudan’s Struggle for Peace. Foreign Policy. Retreived from https://foreignpolicy. com/2013/10/06/sudans-struggle-for-peace/ Neumann, F. (1957). The democratic and the authoritarian state; essays in political and legal theory. London: The Free Press of Glencoe. Přibyla, Petr. (2011). Legitimizing Taiwanese Regime: Kuomintang’s Quest for Legitimacy And National Identity after 1949 (Master’s Thesis). Masaryk University, Poland. Rumyantsev, S. (2017). Behind Azerbaijan’s facades. openDEMOCRACY. Retrieved from https://www. 
opendemocracy.net/od-russia/sergey-rumyantsev/ behind-azerbaijan-s-facades Sondrol, P. C. (1991). Totalitarian and authoritarian dictators: A comparison of Fidel Castro and Alfredo Stroessner. Journal of Latin American Studies, 23(3), 599–620. Sudan. (2017). Freedom in the World 2017. Retreived from https://freedomhouse.org/report/freedom-world/ 2017/sudan Svolik, M. W. (2012a). Contracting on violence: The moral hazard in authoritarian repression and military intervention in politics. Journal of Conflict Resolution, 57(5), 765–794. Svolik, M. W. (2012b). The politics of authoritarian rule. Cambridge, UK: Cambridge University Press. Toh, H. (2015). Why Singapore’s mix of authoritarianism and democracy is a warning for Hong Kong. Hong Kong Free
A
108 Press. Retrieved from https://www.hongkongfp.com/ 2015/09/16/why-singapores-mix-of-authoritarianismand-democracy-is-a-warning-for-hong-kong/
Further Reading Frantz, E. (2018). Authoritarianism: What everyone needs to know. Oxford, UK: Oxford University Press. Levitsky, S., & Ziblatt, D. (2018). How democracies die. New York: Penguin Random House. Sunstein, C. (Ed.). (2018). Can it happen here? Authoritarianism in America. New York: Harper Collins.
Autonomous Weapon Systems (AWS)
Anzhelika Solovyeva1 and Nik Hynek2
1 Institute of Political Studies, Faculty of Social Sciences, Charles University, Prague, Czech Republic
2 Department of Security Studies, Faculty of Social Sciences, Charles University, Prague, Czech Republic
Keywords
Autonomous weapon systems · Artificial intelligence · Campaign to Stop Killer Robots · International law · Revolution in warfare
Definition
Autonomous weapon systems (AWS) are reusable weapon systems and smart munitions distinguished from all existing weapons by their full autonomy. This autonomy rests on (a) their ability to operate without human control or supervision in dynamic, unstructured, and/or open environments; (b) their ability to engage in autonomous (lethal) decision-making, targeting, and use of force; and (c) their ability to engage in defensive and/or offensive combat (Sharkey 2010, p. 370, 2012, p. 787; Asaro 2012, p. 690; Kastan 2013, p. 49; Open Letter 2015; Altmann and Sauer 2017, p. 118). These capabilities build technologically on advances in Artificial Intelligence (AI), in particular Machine Learning (ML), and especially Deep Learning (DL) and Artificial Neural
Networks (ANNs) (O’Connell 2014, p. 526; Walsh 2015, p. 2; Gadiyar et al. 2019).
Weapons with various degrees of autonomy are widely present on the modern battlefield; however, fully autonomous ones “do not yet exist” (Walsh 2015, p. 2). Among the more advanced manifestations of autonomy are remotely operated systems, including drones/unmanned aerial vehicles (UAVs) (see ▶ “Drone Warfare: Distant Targets and Remote Killings”) and unmanned ground and underwater vehicles. Examples include the United States (US) MQ-1 Predator and MQ-9 Reaper UAVs, and weaponized ground robots such as the Talon SWORDS (Special Weapons Observation Reconnaissance Detection System). While they can be deployed offensively (Lele 2017, pp. 58–59) and can be lethally armed (Lucas 2014, p. 319), they are “uninhabited” rather than unmanned (Leveringhaus 2016, p. 3). This is because their primary autonomous mission is to navigate, while the selection and engagement of targets require human input (Lele 2017, p. 59). Such technologies are portrayed as operating with a human “in-the-loop” (Noone and Noone 2015, p. 28). In contrast, AWS will allow humans, besides being physically removed from the kinetic action, to become detached from decisions to fire/kill and their execution (Heyns 2013, p. 5). They will close the gap between uninhabited and unmanned warfare (Leveringhaus 2016, p. 3) and “eliminate human judgement in the initiation of lethal force” (Asaro 2012, p. 693).
Weapon systems have also been deployed that are “able to identify, track and engage incoming targets on their own” (Altmann and Sauer 2017, p. 118). However, most of them are mere “extensions of electric fences” (Johnson and Axinn 2013, pp. 137–138). They are stationary and fixed, and operate within tightly set parameters and time frames. Their key limitations are their solely defensive character and their ability to fire only at inanimate targets (Altmann and Sauer 2017, p. 118).
Humans decide when and where to deploy them, (de)activate their “autonomous mode,” and can intervene to prevent or override their operation (Horowitz 2016, pp. 89–90; Walsh 2015, p. 2). Systems of this sort can be regarded as operational with a human “on-the-loop” (Noone
and Noone 2015, p. 28). Counter-rocket, antimissile, and antiaircraft systems embody such weapon systems in practice (Lele 2017, p. 59). Examples include the Israeli Iron Dome and the US Patriot and Aegis missile defense systems, the Phalanx CIWS (Close-In Weapon System), and the Counter Rocket, Artillery, and Mortar (C-RAM) system. Another example is the SGR-A1, a stationary platform designed to guard the demilitarized zone between North and South Korea. Capable of autonomous operation, this system is distinct in that it classifies human beings detected there as its targets. However, it similarly operates in a strictly structured environment, namely the demilitarized zone, where human access is “categorically prohibited” (Tamburrini 2016, p. 126). Altmann and Sauer (2017, p. 118) summarized that, unlike all these weapon systems, AWS will “operate without human control or supervision in dynamic, unstructured, open environments, attacking various sets of targets, including inhabited vehicles, structures or even individuals.” AWS will thus place the human “out-of-the-loop,” a feature not yet manifest on the modern battlefield (Noone and Noone 2015, p. 28).
By analogy, the same categorization can be used to delineate AWS in the realm of munitions. For instance, the US Long Range Anti-Ship Missile (LRASM) operates with a human “in-the-loop.” It merely “arrives at the target area” and “uses its sensors to confirm the human-selected target” (Scharre 2018, Chap. 4). The Israeli Harop and Harpy anti-radar “fire and forget” UAVs function with a human “on-the-loop.” They are capable of “selecting the target based on the radar signal” (Horowitz 2016, pp. 90–91) and of subsequent “automated armed response” (Vallor 2018). With no human input required, they can loiter for hours before detecting, locking onto, and destroying hostile radars. Radars, however, remain their principal targets (Finn and Scheding 2010, p.
178), and their independent operation occurs within the limits of predetermined parameters and target areas (Vallor 2016, p. 212; Brenneke 2018, p. 65). Humans can also insist upon “target verification” (Finn and Scheding 2010, p. 178). Future
munitions with the “anti-vehicle,” and prospectively “anti-personnel,” capabilities of AWS, such as the US Cannon-Delivered Area Effects Munition (C-DAEM), “will be able to isolate and choose targets” (Lye 2019).
In contrast to all existing weapons, AWS will embody fully autonomous (lethal) weapons (Altmann and Sauer 2017, p. 132). This technological turn is increasingly described with reference to a new revolution in warfare (Open Letter 2015; Garcia 2018, p. 336). The principal difference is that AI learning algorithms, rather than programmable and predefined instructions, will define the rules for their intelligent operation (Layton 2018, pp. 6–7). This holds even for munitions belonging to the AWS generation. For example, the “self-targeting” C-DAEM will be “AI-powered” (Lye 2019). ML originally proved to be the “enabler” of AI (Hallaq et al. 2017). It comprises supervised and unsupervised learning algorithms that can autonomously make decisions and perform tasks while detecting patterns in data, learning from them, and adapting their behavior. The development of DL, an advanced subset of ML, was partly motivated by the failure of simple ML algorithms to generalize correctly, especially with regard to complicated, imprecise, and multidimensional data and complex tasks (Goodfellow et al. 2016, pp. 95–96, 151; Layton 2018, pp. 7–8; Gadiyar et al. 2019). Inspired by biological neural networks, ANNs underlie DL algorithms (Chen et al. 2017). Learning algorithms make AI-based systems capable of autonomous operation in a real-world environment without any form of human control (Cummings 2017, pp. 1–2). However, most current research relates to the development of weak/modular/narrow/specialized AI, implying (pretrained) intelligent expertise with respect to certain tasks in certain domains, rather than general/strong AI or super(human)-intelligence (Krishnan 2009, p. 47; Gubrud 2014, p. 33; Ayoub and Payne 2016, pp. 795–796; Boulanin 2016, pp.
9–10; Leveringhaus 2016, p. 7; Krieg and Rickli 2019, p. 103). While still unable to fully replicate complex human intelligence, AI-based systems are, at certain tasks at least, becoming more effective than humans and preprogrammed software (Layton 2018, pp. 7–9;
Springer 2018, p. 10). This distinction is important because it underscores that advances in AI not only pave the way for future fully autonomous weapons but are already giving rise to more complex autonomous functions in weapon systems. For example, the Russian heavy reconnaissance and attack drone “Okhotnik” (“Hunter”) and the US Advanced Targeting and Lethality Automated System (ATLAS) seek to reconcile advanced AI elements and autonomous capabilities with meaningful human involvement in the use of force (TASS 2018a, b; Freedberg 2019).
Normative and Legal Considerations
Normative considerations have curbed progress toward removing this human element. The Campaign to Stop Killer Robots has united more than a hundred nongovernmental organizations (NGOs) based in various countries, gained broad public support, and successfully drawn to its agenda a good number of state governments, thousands of experts, over 20 Nobel Peace laureates, as well as elements of the United Nations (UN) and the European Union (CSKR n.d.). They advocate a blanket global preventive, or rather preemptive, ban on the development, production, and use of fully autonomous weapons (CSKR 2014; Open Letter 2015). Diplomatic facilitation by the International Committee of the Red Cross (ICRC) cannot be overlooked (ICRC 2016). Media around the world (see ▶ “Role of the Media”) have favored the cause, featuring, among other things, images from The Terminator and Battlestar Galactica (Carpenter 2016, p. 53).
The primary concern is the delegation “to a machine or automated process the authority or capability to initiate the use of lethal force independently of human determinations of its moral and legal legitimacy” (Asaro 2012, p. 688). Human judgment and reasoning are traditionally considered indispensable in the context of war (Sharkey 2017, p. 180). This complex context makes it nearly impossible to anticipate all possible situations and often presupposes situational decisions (Lin et al. 2008, p. 32). This is also because the laws of war are often imperfect,
incomplete, and subject to interpretation, and human situational understanding is crucial to weighing different, potentially incompatible and vague, imperatives. Transforming legal rules into fixed computational rules is a serious challenge (Asaro 2012, p. 700). Most importantly, the dividing line between friends and foes is “heavily value-laden,” and there are multiple “shades of gray” in distinguishing combatants from civilians (Asaro 2008, pp. 60–61). Besides the lack of a clear legal definition of civilians (Kastan 2013, p. 60), there exist legal justifications to kill civilians and legal restrictions on killing combatants, which become ever more blurred in environments such as guerrilla or insurgent warfare (Asaro 2008, p. 60). Furthermore, there is a distinction between law and morality (Ibid.). The “ability to think morally” can never be programmed since it is a “distinctive human characteristic” (Johnson and Axinn 2013, p. 135). None of these categories is a fixed and readily programmable value. Even the most intelligent weapons with adequate sensing mechanisms would “still be missing battlefield awareness or common sense reasoning” (Sharkey 2012, p. 789).
What aggravates the problem is the inherent inability of such weapons to remain strictly bound by their original instructions. First, with their learning algorithms “not fixed during the production process” (Andreas 2004, p. 177), it is hard to predict what they may learn (Lin et al. 2008, p. 8). Second, such algorithms operate at “superhuman speed” (Gubrud 2014, p. 33), i.e., supersonic or hypersonic speeds beyond the speed of human decision-making, leaving little chance for human control and intervention (Sharkey 2017, p. 182). Third, any computational system, especially as it gets more complex and sophisticated, has “inherent weaknesses” such as bugs, errors, breakdowns, malfunctions, glitches, and cyber interference (see ▶ “Origins of Cyber-Warfare”) (Klincewicz 2015, pp.
168–169; Noone and Noone 2015, p. 33). These weaknesses raise the prospect of killings by accident or by means of hacking/hijacking, especially dangerous if performed by criminals and terrorists (Klincewicz 2015, p. 168). This challenges the fundamental values of human
dignity and human life (see ▶ “Right to Life”), as well as the principles of international humanitarian and human rights law (Johnson and Axinn 2013, p. 134; Garcia 2015, p. 60; Heyns 2016, pp. 10–11). The key principles of concern are those of military necessity, distinction, proportionality, and humanity (Kastan 2013, pp. 54–55). Often based upon “subjective estimates of value and context-specificity,” they may become precarious in light of AWS and their “restricted abilities to interpret context and to make value-based calculations” (Heyns 2013, pp. 11–13). The principle of individual and state accountability for violations is also important to keep the legal system on track (Ibid., p. 14). This is a challenge with fully autonomous weapons because they will have too many people, or rather no one, to be held accountable for their mistakes (Lin et al. 2008, pp. 64, 73). This implies the risk of “shielding” human perpetrators of war crimes (Asaro 2012, p. 693) and may encourage more unethical choices (Sparrow 2009, p. 183).
Fully autonomous weapons also promise to reduce political, particularly democratic, resistance to armed conflicts (Liu 2012, p. 633) and potentially even to normalize them (Heyns 2013, p. 11), because they create an imaginary of “push button” warfare (Asaro 2008, p. 62). The threshold from peace to war might thus be lowered (Sharkey 2017, p. 182), and military force might cease to be a measure of “last resort” (Heyns 2013, p. 11). This challenges the law of jus ad bellum (Asaro 2008, p. 53) and the (non-)use of force principles codified in the UN Charter (Garcia 2015, p. 60). These norms may become even less sustainable as the risk of unintentionally initiated or escalated wars runs high with fully autonomous weapons on the battlefield (Asaro 2012, p. 692). Operating within a “lawless zone” (Kastan 2013, p. 47) and a “persistent responsibility gap” (Liu 2012, p.
630), AWS will seriously challenge the foundations of international law.
Military Utility and Strategic Importance
However, there is still no legal prohibition on the development, production, and/or use of fully
autonomous weapons. This is because there is a lot to be said for their military utility and strategic importance. Some have also noted multiple biases underlying the assessment of the related risks.
First, it is necessary to “beware of idealizations of human warfare” (Birnbacher 2016, p. 121). Prone to a wide range of performance-hindering psychological factors, emotional distortions, and the exigencies of the “fog of war” (Klincewicz 2015, p. 164; Korać 2018, p. 56), humans have a “dismal record in ethical behavior in the battlefield” (Arkin 2018, p. 318). AWS promise to remedy the associated problem of human under-/over-reaction (Noone and Noone 2015, p. 32) by means of data-driven and bias-free analysis (Ayoub and Payne 2016, p. 799) and increased accuracy (Scharre 2018, Chap. 17). This may prevent many unwarranted injuries and killings (Heyns 2016, p. 7).
Second, concerns about the software unpredictabilities inherent in fully autonomous weapons seem exaggerated. The risk of malfunction is not unique to AWS but has been relevant to weapons ranging from catapults to more complex computer systems (Schmitt 2013, p. 7). Cyberattacks are also not new (Klincewicz 2015, p. 172). Nor should fully autonomous weapons be identified with “complete” autonomy, because they still represent “an exercise in software development” (McFarland 2015, pp. 1323–1328). Scharre (2018, Chap. 17) assumed that, if “programmed to never break the laws of war,” AWS would be “incapable of doing so.”
Third, the (un)lawfulness of AWS performance “must be judged on a case-by-case basis” (Schmitt 2013, p. 8). Historical experience already contains clear cases of war crimes, violations of human dignity and human life, and the unlawful use of lawful weapons. Distinguishing civilians from combatants and making proportionality decisions are similarly difficult in airstrikes and long-range attacks.
All of this shows that it is particular uses of weapons with unlawful effects, rather than the weapons themselves or their (autonomous) capabilities, that underlie violations of the laws of war (Schmitt 2013, pp. 9, 14; Birnbacher 2016, pp. 118–121). Furthermore, there is a chance that AWS will “outperform” humans with respect to conformance to
international humanitarian and human rights law (Arkin 2018, p. 323). This is because they promise to be “more accurate in their targeting and more considerate” (Birnbacher 2016, p. 119). Concern about the lack of legal accountability for AWS performance also has “an air of triviality.” Since such weapons are still to be programmed and deployed by humans, responsibility for their operation should lie “unconditionally” with them (Ibid., p. 120).
Fourth, fully autonomous weapons are increasingly associated with multiple tactical and operational advantages, in turn promising enormous strategic gains (Ayoub and Payne 2016, pp. 807–808). Among them is the potential for force multiplication (Arkin 2017, p. 36). It is twofold: fewer personnel will be able to do more (Heyns 2013, p. 10), and each system will effectively do the work of many human soldiers (Lin et al. 2008, p. 1). AWS also promise to significantly reduce or even eliminate most human physical limitations on the battlefield. They will be capable of better-informed and faster reactions (Liu 2012, p. 633). Not only will they have superior capacities for data absorption and analysis (Wagner 2014, p. 1413), but they will also be able to make decisions in nanoseconds, unlike humans, who need a minimum of hundreds of milliseconds for their decisions (Sharkey 2008, p. 16). Such capabilities may even help to remedy human flaws in strategic decision-making, thus improving its quality (Ayoub and Payne 2016, p. 807). In the case of remotely operated systems, the speed of human decision-making in combat is “further slowed down through the inevitable time-lag of global communications” (Heyns 2013, p. 10). The ability to operate without these constant control and communication links, which are also vulnerable to electronic countermeasures and environmental factors, further adds to the military utility of AWS (Sparrow 2007, p. 68).
Last but not least, these weapons will be less vulnerable on the battlefield: they have no need for breathable air, rest and sleep, or drinkable water and food, and are insusceptible to physical extremes of acceleration and cognitive load, as well as to temperature extremes, radiation, and biological and chemical weapons (Gubrud 2014, p. 38).
A Weapon for Whom?
The issue is of global significance because the capability to deploy AWS is a cogent trigger for a far-reaching strategic competition between and among states (Sparrow 2007, p. 69; Gubrud 2014, p. 39). Built upon widely disseminated dual-use AI and robotic technologies (Altmann and Sauer 2017, p. 132), AWS might also proliferate to nonstate actors (Asaro 2012, p. 692) and perhaps become “the Kalashnikovs of tomorrow” (Open Letter 2015). Alternatively, they may instead epitomize “the product of a rich and elaborate economy” (Asaro 2008, p. 63) and generate an “imbalanced system of haves and have-nots” (Garcia 2018, p. 339). The “asymmetric ‘push-button’ war” entails other concerns of global relevance (Asaro 2008, pp. 62–63). It denotes “one-sided killing” (Heyns 2013, p. 16) and portends dangerous and bloody reprisals, including terrorist attacks and intensified efforts to acquire and use weapons of mass destruction (Lin et al. 2008, p. 81; Sharkey 2008, p. 16).
Cross-References
▶ Drone Warfare: Distant Targets and Remote Killings
Acknowledgments Anzhelika Solovyeva gratefully acknowledges funding for this work from Charles University, SVV Grant “Political Order in the Times of Changes” (SVV 260 595). Nik Hynek gratefully acknowledges funding for this work from Charles University, UNCE Grant “Human-Machine Nexus and Its Implications for International Order” (UNCE/HUM/037).
References
Altmann, J., & Sauer, F. (2017). Autonomous weapon systems and strategic stability. Survival, 59(5), 117–142.
Andreas, M. (2004). The responsibility gap in ascribing responsibility for the actions of automata. Ethics and Information Technology, 6(3), 175–183.
Arkin, R. (2017). A roboticist’s perspective on lethal autonomous weapon systems. In Perspectives on lethal autonomous weapon systems (UNODA occasional papers no. 30). New York: United Nations Publication.
Arkin, R. (2018). Lethal autonomous systems and the plight of the non-combatant. In R. Kiggins (Ed.), The political economy of robots. Cham: Palgrave Macmillan.
Asaro, P. (2008). How just could a robot war be? In A. Briggle, K. Waelbers, & P. Brey (Eds.), Current issues in computing and philosophy. Amsterdam: IOS Press.
Asaro, P. (2012). On banning autonomous weapon systems: Human rights, automation, and the dehumanization of lethal decision-making. International Review of the Red Cross, 94(886), 687–709.
Ayoub, K., & Payne, K. (2016). Strategy in the age of artificial intelligence. Journal of Strategic Studies, 39(5/6), 793–819.
Birnbacher, D. (2016). Are autonomous weapons systems a threat to human dignity? In N. Bhuta, S. Beck, R. Geiss, H.-Y. Liu, & C. Kress (Eds.), Autonomous weapons systems: Law, ethics, policy. Cambridge: Cambridge University Press.
Boulanin, V. (2016). Mapping the innovation ecosystem driving the advance of autonomy in weapon systems (SIPRI working papers), December. https://www.sipri.org/sites/default/files/Mapping-innovation-ecosystem-driving-autonomy-in-weapon-systems.pdf
Brenneke, M. (2018). Lethal autonomous weapon systems and their compatibility with international humanitarian law: A primer of the debate. In T. Gill, R. Geiß, H. Krieger, & C. Paulussen (Eds.), Yearbook of international humanitarian law (Vol. 21). Berlin: Springer.
Carpenter, C. (2016). Rethinking the political/-science-/fiction nexus: Global policy making and the Campaign to Stop Killer Robots. Perspectives on Politics, 14(1), 53–69.
Chen, M., Challita, U., Saad, W., Yin, C., & Debbah, M. (2017). Artificial neural networks-based machine learning for wireless networks: A tutorial. IEEE Communications Surveys & Tutorials, 21(4), 3039–3071.
CSKR [Campaign to Stop Killer Robots]. (2014). Missile systems and human control, November 24.
https://www.stopkillerrobots.org/2014/11/missile-systems-and-human-control/
CSKR [Campaign to Stop Killer Robots]. (n.d.). Who wants to ban fully autonomous weapons. Video. https://www.stopkillerrobots.org. Accessed 1 Oct 2019.
Cummings, M. (2017). Artificial intelligence and the future of warfare. London: Chatham House.
Finn, A., & Scheding, S. (2010). Developments and challenges for autonomous unmanned vehicles: A compendium. Berlin: Springer.
Freedberg, S. (2019, March 4). ATLAS: Killer robot? No. Virtual crewman? Yes. Breaking Defense. https://breakingdefense.com/2019/03/atlas-killer-robot-no-virtual-crewman-yes/
Gadiyar, R., Zhang, T., & Sankaranarayanan, A. (2019). Artificial intelligence software and hardware platforms. In M. Gilbert (Ed.), Artificial intelligence for autonomous networks. Boca Raton: CRC Press.
Garcia, D. (2015). Killer robots: Why the US should lead the ban. Global Policy, 6(1), 57–63.
Garcia, D. (2018). Lethal artificial intelligence and change: The future of international peace and security. International Studies Review, 20(2), 334–341.
Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep learning. Cambridge: MIT Press.
Gubrud, M. (2014). Stopping killer robots. Bulletin of the Atomic Scientists, 70(1), 32–42.
Hallaq, B., Somer, T., Osula, A.-M., Ngo, K., & Mitchener-Nissen, T. (2017). Artificial intelligence within the military domain and cyber warfare. In M. Scanlon & N.-A. Le-Khac (Eds.), Proceedings of 16th European conference on cyber warfare and security. Dublin: University College Dublin.
Heyns, C. (2013). Report of the special rapporteur on extrajudicial, summary or arbitrary executions. United Nations Doc. A/HRC/23/47.
Heyns, C. (2016). Autonomous weapons systems: Living a dignified life and dying a dignified death. In N. Bhuta, S. Beck, R. Geiss, H.-Y. Liu, & C. Kress (Eds.), Autonomous weapons systems: Law, ethics, policy. Cambridge: Cambridge University Press.
Horowitz, M. (2016). Why words matter: The real world consequences of defining autonomous weapons systems. Temple International and Comparative Law Journal, 30(1), 85–98.
ICRC [International Committee of the Red Cross]. (2016). Views of the International Committee of the Red Cross (ICRC) on autonomous weapon systems, April 11. https://www.icrc.org/en/document/views-icrc-autonomous-weapon-system
Johnson, A., & Axinn, S. (2013). The morality of autonomous robots. Journal of Military Ethics, 12(2), 129–141.
Kastan, B. (2013). Autonomous weapons systems: A coming legal ‘singularity’? University of Illinois Journal of Law, Technology and Policy, 2013(1), 45–82.
Klincewicz, M. (2015). Autonomous weapons systems, the frame problem and computer security. Journal of Military Ethics, 14(2), 162–176.
Korać, S. (2018). Depersonalisation of killing: Towards a 21st century use of force ‘beyond good and evil?’. Philosophy and Society, 29(1), 49–64.
Krieg, A., & Rickli, J.-M. (2019).
Surrogate warfare: The transformation of war in the twenty-first century. Washington, DC: Georgetown University Press.
Krishnan, A. (2009). Killer robots: Legality and ethicality of autonomous weapons. Farnham: Ashgate Publishing.
Layton, P. (2018). Algorithmic warfare: Applying artificial intelligence to warfighting. Canberra: Air Power Development Centre.
Lele, A. (2017). A military perspective on lethal autonomous weapon systems. In Perspectives on lethal autonomous weapon systems (UNODA occasional papers no. 30). New York: United Nations Publication.
Leveringhaus, A. (2016). Ethics and autonomous weapons. London: Palgrave Macmillan.
Lin, P., Bekey, G., & Abney, K. (2008). Autonomous military robotics: Risk, ethics, and design. San Luis Obispo: US Department of Navy, California Polytechnic State University.
Liu, H.-Y. (2012). Categorization and legality of autonomous and remote weapons systems. International Review of the Red Cross, 94(886), 627–652.
Lucas, G. (2014). Automated warfare. Stanford Law and Policy Review, 25, 317–339.
Lye, H. (2019, August 16). US army developing self-targeting AI artillery. Army Technology. https://www.army-technology.com/news/us-army-developing-self-targeting-ai-artillery/
McFarland, T. (2015). Factors shaping the legal implications of increasingly autonomous military systems. International Review of the Red Cross, 97(900), 1313–1339.
Noone, G., & Noone, D. (2015). Debate over autonomous weapons systems. Case Western Reserve Journal of International Law, 47(1), 25–35.
O’Connell, M. (2014). 21st century arms control challenges: Drones, cyber weapons, killer robots, and WMDs. Washington University Global Studies Law Review, 13(3), 515–533.
Open Letter. (2015). Autonomous weapons: An open letter from AI and robotics researchers. Future of Life Institute. http://futureoflife.org/open-letter-autonomous-weapons/
Scharre, P. (2018). Army of none: Autonomous weapons and the future of war. New York: W.W. Norton.
Schmitt, M. (2013). Autonomous weapon systems and international humanitarian law: A reply to the critics. Harvard National Security Journal Feature, 1–37.
Sharkey, N. (2008). Cassandra or false prophet of doom: AI robots and war. IEEE Intelligent Systems, 23(4), 14–17.
Sharkey, N. (2010). Saying ‘no!’ to lethal autonomous targeting. Journal of Military Ethics, 9(4), 369–383.
Sharkey, N. (2012). The evitability of autonomous robot warfare. International Review of the Red Cross, 94(886), 787–799.
Sharkey, N. (2017). Why robots should not be delegated with the decision to kill. Connection Science, 29(2), 177–186.
Sparrow, R. (2007). Killer robots. Journal of Applied Philosophy, 24(1), 62–77.
Sparrow, R. (2009). Building a better WarBot: Ethical issues in the design of unmanned systems for military
applications. Science and Engineering Ethics, 15(2), 169–187.
Springer, P. (2018). Outsourcing war to machines: The military robotics revolution. Santa Barbara: Praeger.
Tamburrini, G. (2016). On banning autonomous weapons systems: From deontological to wide consequentialist reasons. In N. Bhuta, S. Beck, R. Geiss, H.-Y. Liu, & C. Kress (Eds.), Autonomous weapons systems: Law, ethics, policy. Cambridge: Cambridge University Press.
TASS [Russian News Agency]. (2018a). Russia’s Okhotnik attack drone to become prototype of sixth generation fighter, July 20. https://tass.com/defense/1014154
TASS [Russian News Agency]. (2018b). Russia’s Okhotnik heavy drone makes first ground run, November 23. https://tass.com/defense/1032118
Vallor, S. (2016). Technology and the virtues: A philosophical guide to a future worth wanting. New York: Oxford University Press.
Vallor, S. (2018). Robots with guns. In J. Pitt & A. Shew (Eds.), Spaces for the future: A companion to philosophy of technology. New York: Routledge.
Wagner, M. (2014). The dehumanization of international humanitarian law: Legal, ethical, and political implications of autonomous weapon systems. Vanderbilt Journal of Transnational Law, 47(5), 1371–1424.
Walsh, J. (2015). Political accountability and autonomous weapons. Research and Politics, 2(4), 1–6.
Further Reading Del Monte, L. A. (2018). Genius weapons: Artificial intelligence, autonomous weaponry, and the future оf warfare. New York: Prometheus Books. Payne, K. (2018). Strategy, evolution, and war: Apes to artificial intelligence. Washington, DC: Georgetown University Press. Singer, P. W. (2009). Wired for war: The robotics revolution and conflict in the twenty-first century. New York: Penguin Books. Wallace, R. (2018). Carl von Clausewitz, the fog-of-war, and the AI revolution: The real world is not a game оf go. Cham: Springer.
B
Balance of Power

Glen M. E. Duerr and Michael Wilt
Cedarville University, Cedarville, OH, USA

Keywords
Power · Realism · Liberalism · International Relations · Buck-passing · USA · China · World War II
Introduction

Balance of power theory has been a focal point of international relations (IR) theory, especially across the various generations of realism – one of the main theories in this field of academic study. Under realism's three "generations" – classical realism, structural realism, and neoclassical realism – power remains a major consideration in any calculation. For any scholar seeking to oppose the arguments of realists, balance of power is an important point of rebuttal or repurposing in order to undercut a power-centric position. Thus, balance of power has been used in a variety of ways with a range of different definitions. Most revolve around the idea of balancing the amount and accumulation of power among great powers in world politics. Past events – such as World War II between 1939 and 1945 – have pointed to the importance of understanding
balance of power as a consistently fluid process that affects systems drastically through balancing. Numerous other periods are also important in light of balance of power politics. From Thucydides' History of the Peloponnesian War to contemporary calculations of China's rise vis-à-vis the United States, balance of power considerations are very important to some analysts. Moreover, a discussion of balance of power politics also informs other related areas of study, including buck-passing and bandwagoning – options for states when considering what to do when confronted by a powerful actor in world affairs.
Definitions

Balance of power theory has been a staple of IR scholarship because of its rootedness in power, one of the main points of contention between theorists and practitioners. The term has been used in a wide variety of ways and in reference to multiple situations and scenarios. In fact, Inis L. Claude, Jr. noted that since antiquity, various persons have struggled to define balance of power, and that the "trouble with the balance of power is not that it has no meaning, but that it has too many meanings" (Claude 1962). Balance of power, as affirmed above, is a multifaceted term that requires a variety of elements to fully comprehend. Conceptually, balance of power is normally used in reference to a systemic
concept based on the major unit of IR study: states. States are the main actors within the international system, and they are the most influential component when it comes to "balancing," at least as a primary point of reference for realists and liberals alike. A realist system in particular is premised on anarchy, which breeds distrust among states. States are self-seeking and pursue their own safety and security; each state understands that other states are likewise pursuing their own protection and interests. States will then seek to balance against one another, especially when there are differences in the accumulation of power. Within this balance of power system is a drive toward equilibrium – a point of stability and restructuring within the system. Hans Morgenthau wrote in Politics Among Nations that "whenever the equilibrium is disturbed either by an outside force or by a change in one of the other elements composing the system, the system shows a tendency to reestablish either the original or a new equilibrium" (Morgenthau 1948). Because states are ever-changing in their interests and policies and because the equilibrium can be disturbed at any moment, the balance of power concept itself is dynamic, subject to change and fluidity. Understanding the term power is also important when considering how to balance it. Power, too, is multifaceted, and Morgenthau lays out a comprehensive list of the components of power in Politics Among Nations. The list includes, but is certainly not limited to, the following:

• Wealth is seen, especially in the modern context, as the most obvious and important component, as it incorporates both nominal gross domestic product (GDP) and GDP at purchasing power parity (PPP).
• Geography is seen as one of the most important components for balancing, as it relates to both a state's size and its location in proximity to other states.
• Trade relations are another key component that influences alliances.
• Population trends and size determine a state's ability to field a large military.
• Natural resources within a given state, such as raw materials, oil, coal, and steel, allow states either to become more self-sufficient or to seek connections with other states that have the resources they need.
• Industrial capacity flows from both population and natural resources, as both are key components of the ability to develop industries.
• Military preparedness and military development are key components in determining the strength and capability of a state; military preparedness comes in the form of technological advancements, leadership capabilities, and both the quantity and quality of the armed forces.
• Nationalistic tendencies of a state – not only to seek its own honor but to export its own ideals onto other states – play a key role in "balancing."
• The quality of a state's diplomacy is considered "the most important factor," according to Morgenthau, because it transforms a nation's power by giving it "direction and weight" or, in other words, purpose and conviction.
• And, last but certainly not least, is the quality of the government's structure, organization, efficacy, durability, and execution of laws (Morgenthau 1948).

Balancing is the action associated with balance of power, which refers to the goal of states "to oppose power concentrations or threats" (Ikenberry 2009) in a variety of ways. Thus, balance of power can be broader than simply one state accumulating power against another. Methods of balancing include the formation of alliances through treaties or trading partnerships, forceful negotiations to persuade a state to adopt a new policy, or ultimately the pursuit of war and conflict in the hope of checking another state's growth. The United States, for example, in 2019 is the most powerful
country by most economic (nominal GDP) and military (perceived power through technology, equipment, and troop size) metrics. Yet Washington also maintains numerous alliances and trading partnerships. The United States still needs the North Atlantic Treaty Organization (NATO) and the other 28 states within the alliance. Such a combination is formidable to any potential peer competitor. Additionally, the United States maintains multilateral trade relationships such as the North American Free Trade Agreement (NAFTA) and the Dominican Republic–Central America Free Trade Agreement (DR-CAFTA). These agreements, among numerous others, provide the United States with additional elements of power and factor into a balance of power calculation. Such balancing policies have been pursued by nearly every state, with European countries being the most notable examples. Moreover, balancing can also be a policy for a state; Inis L. Claude, Jr. argues that it is a "policy of prudence" – of "stability, survival, protection of national rights and interest" (Claude 1962). In The Tragedy of Great Power Politics, John Mearsheimer underscores the opposite trait to "balancing," which he calls "buck-passing," an allusion to President Truman's famous slogan, "The buck stops here" (Mearsheimer 2001, 269).
World War II: An Example

World War II was the "poster child" for balance of power – and buck-passing. By the close of World War I, Germany was severely hampered by a range of different war-related factors. At home, Germany was fiscally bankrupt, depleted of resources, and militarily spent. The Treaty of Versailles sanctioned the German government both militarily and economically. In the aftermath of 1918, Weimar Germany could not build a large standing army and was thus rendered largely defenseless, let alone able to strike another state. Further, the economy was encumbered by the Treaty of Versailles' reparations payments system – so much so that it officially took Germany over nine decades to repay. Germany, however, was determined to make a comeback – especially in
the aftermath of Hitler's democratic ascent to power following the 1933 parliamentary election and his subsequent accumulation of power granted via President Paul von Hindenburg. France at the time retained the second-largest army in Europe – second only to the Soviet Union's, which numbered over three million persons (Mearsheimer 2001, 305). France also maintained an extensive defensive fortification on its eastern front, known as the Maginot Line. The Soviet Union, too, was a recognizable great power following World War I. Though tattered from the destruction of war, the revolutions of 1917, and the subsequent Russian Civil War, the Soviet Union focused on increasing its ability to become a global hegemon. The United Kingdom was the other major power in the European theater, since both the Austro-Hungarian and Ottoman Empires dissolved after 1918. Arguably, Italy maintained a place among the larger powers in Europe, but Rome remained significantly behind the United Kingdom, Germany, France, and the USSR. Germany, however, became a realized threat after the accession of Hitler to the position of Chancellor of Germany in 1933. Though the three major powers of Europe – the United Kingdom, France, and the Soviet Union – hoped to check Germany's rise to power, they went to great lengths to ensure they could buck-pass. The United Kingdom, at the time, maintained a strong industrial sector, as well as a population of over 48 million people. Moreover, its Royal Navy and Royal Air Force were each well developed, as was the United Kingdom's relationship with the United States (Tucker 2004, 191). The Soviet Union straddled both Eastern Europe and Siberian Asia. Though nearly 200 million persons strong, the Soviet Union was still reeling from centuries of backwardness in comparison to the other main European powers, which made it difficult to boost its production and industrial capacity.
Furthermore, Joseph Stalin – the head of the Soviet government following the death of Vladimir Lenin – relied on horrifying methods to garner success, imposing communism and harshly forcing subservience on restive republics such as Ukraine with
the Holodomor starvation genocide of the early 1930s. When Hitler accumulated power and began to engage in aggressive military action by taking the Sudetenland in 1938 – an area with a high concentration of ethnic Germans along the border areas of Czechoslovakia – the British Prime Minister, Neville Chamberlain, sought a peace agreement with Hitler as a means of "buck-passing" and avoiding an outright confrontation over balance of power considerations. The United Kingdom passed the buck on to France in the hope that France could secure a military victory. The United Kingdom did this for several reasons: (1) geography was an important factor, as the English Channel severed the United Kingdom from direct contact with Germany; (2) the United Kingdom was also interested in maintaining its colonies abroad, specifically with the outbreak of war on the African and Asian continents; and (3) the United Kingdom believed – mistakenly, as it turned out – that France could defeat Germany with little to no assistance. The Soviet Union, too, passed the buck not only to France but also to the United Kingdom. The Soviet Union hoped it would be able to avoid going to another costly and time-consuming war. Moreover, the thought of Germany and France weakening each other through another war delighted the Soviet Union, which was looking to bolster its power and prestige in order to become the world's hegemon. When Hitler commanded the invasion of the rest of Czechoslovakia in March 1939, the Allied Powers in Europe – France and the United Kingdom – realized that the buck-passing strategy was not going to satiate Hitler's desire for power. This culminated in September 1939, when both Nazi Germany and the Soviet Union acted upon their nonaggression pact (also known as the Molotov-Ribbentrop Pact) to divide Eastern Europe between the two countries, starting with Poland. World War II officially started (at least in Europe) when the United Kingdom and France declared war on Germany in the aftermath of the invasion of Poland.
It was at this point that the neoclassical realist scholar Randall Schweller notes the "deadly imbalance of power" among the states
of Europe that descended the continent into conflict (Schweller 1998). It was not until the defeat of France in 1940 that the full gravity of the situation came to be felt. The German campaign that led to the defeat of France was swift and astonishing. The French – possessing one of the most powerful and capable militaries in the world, along with the defenses of the Maginot Line – should have at least put up a strong fight against the Germans. Instead, the French lost decisively, mainly due to mismanagement of resources and weaponry, failures of French intelligence, and the lack of coordinated efforts to prepare for such an invasion ahead of time (Tucker 2004, 50–60). Overall, the French were unable to successfully balance the rise of Germany alone. Though France received British support economically and militarily, it was not enough to ensure a victorious outcome. The defeat of France was a wake-up call for the United Kingdom and the other great powers in Europe that eventually felt the wrath of Hitler's desire to accumulate power. The Battle of Britain brought extensive bombing by the Luftwaffe and even more extensive destruction across the southeastern United Kingdom. Moreover, Hitler's invasion of the Soviet Union in 1941–1943 left the western Soviet Union devastated. However, the arrival of the United States into World War II helped to shift the momentum of the war in Europe. Following the devastating surprise attack on Pearl Harbor by the Japanese on December 7, 1941, American forces were invigorated to put an end to the world war – a boon for the Allied side. The entry of the United States also pulled together the efforts of the Allied powers – the United States, the United Kingdom, and the Soviet Union – in a more organized fashion.
The inclusion of the United States on the Allied side allowed for greater organization and pooling of resources among the Allied forces, organized military power, and organized military strategy. This new alliance sought not only to check German expansion but also to seek its full and decisive defeat (Spykman 1942).
Ultimately, the balancing efforts of the Allied Powers concluded in victory over Hitler's Germany. The combination of natural resources, manpower, industrial strength and resolve, and geography allowed the Allied Powers to defeat Hitler's Germany after nearly six years of fighting. The United States, Canada, and the United Kingdom were able to invade Normandy, France, in a decisive, albeit costly, victory for the Allied side in a campaign known as Operation Overlord. Further battles continued as the American, Canadian, and British forces pushed from the west. Moreover, the Soviet forces pushed in on Nazi Germany from the east, forcing the German army to fight on two fronts. Following the Western Allied invasion of Germany and the Battle of Berlin, the Allied forces were able to declare official victory over the German forces. Such a victory led to the establishment of a new balance of power within Europe almost overnight. World War II remains a classic case study in balance of power politics, as well as in related areas of IR theory. Historians and laypersons alike still critique the decision by Chamberlain to try to appease Hitler, yet the idea of buck-passing seemed attractive to the British Prime Minister as a mechanism for avoiding another continental catastrophe. Hindsight, as they say, is twenty-twenty, and so the lessons of balance of power politics remain central to realist calculations of power.
Conclusion

Overall, balance of power theory still holds in the modern world as states seek to empower themselves and garner strength, prestige, and power relative to other states in the global theater. Some states have flatly rejected balance of power considerations in the hope of building the United Nations into an effective mode of global governance (Keohane and Nye 2012). Others argue that enemy construction is a central reason for conflict and therefore attempt to build bridges to peace. Yet wars and conflicts remain. Among the most powerful states in the world, especially the United States and China, as well as resurgent or
growing powers like Russia and India, balance of power considerations remain central to foreign policy decisions. World War II was a clear demonstration of balance of power theory – and buck-passing – at work, as the major European powers sought to check the rise of Hitler's Germany. Culminating with the entrance of the United States into Europe, the tide changed and the balancing efforts worked, as Hitler's Germany came to an abrupt end by May of 1945. Although new mechanisms of peace such as the United Nations were built in the aftermath of the conflict, the institution retains numerous realist calculations for watching changes in balance of power politics. In modern-day Europe, despite decades of peace, the European states are still mired in a balance of power struggle. "Brexit" – Britain's exit from the European Union (EU) – has led, and will continue to lead, to reverberations throughout economic and political institutions and relationships within the European theater. Germany's balance of power position will change with Brexit, as the other member states will likely rely further on German Chancellor Angela Merkel, or her successor, to lead Europe into the next decade. Furthermore, France could gain considerable standing within the EU, as it would remain the EU's sole permanent, nuclear-weapon-holding member of the UN Security Council. The United Kingdom, however, will possibly bear the heaviest setback fiscally and politically. Economically speaking, the United Kingdom will be forced into new negotiations with other states to salvage trade partnerships and relations. The United Kingdom will rely on these new negotiations to foster a stronger political standing in Europe and among trading partners elsewhere in the world – a difficult task.
Altogether, Brexit has ushered in a change to the balance of power within Europe that could shift away from the United Kingdom toward Germany and France if the EU remains a steadfast institution; if the EU falters, the United Kingdom's independent position could be strengthened. With the rise of China in the East Asian theater, balance of power remains important in the twenty-first century. The United States remains a powerful actor in the Asia-Pacific region, alongside a growing India. A resurgent Russia recently
annexed Crimea, and thus the calculus of balance of power politics remains in flux. For various American allies such as Japan, South Korea, Vietnam, and Australia, there are shifting dynamics in East Asia, wherein balance of power considerations remain at the forefront of national foreign policy decisions.
Cross-References

▶ Emerging Powers
References

Claude, I. L. (1962). Power and international relations. New York: Random House.
Ikenberry, G. J. (2009). After victory: Institutions, strategic restraint, and the rebuilding of order after major wars. Princeton: Princeton University Press.
Keohane, R. O., & Nye, J. S. (2012). Power and interdependence (4th ed.). New York: Pearson.
Mearsheimer, J. J. (2001). The tragedy of great power politics. New York: W.W. Norton & Company.
Morgenthau, H. (1948). Politics among nations: The struggle for power and peace. New York: Alfred Knopf.
Schweller, R. L. (1998). Deadly imbalances: Tripolarity and Hitler's strategy of world conquest. New York: Columbia University Press.
Spykman, N. J. (1942). America's strategy in world politics: The United States and the balance of power. San Diego: Harcourt, Brace and Company.
Tucker, S. C. (2004). The second world war. New York: Palgrave Macmillan.
Further Reading

Fearon, J. D. (1994). Signaling versus the balance of power and interests: An empirical test of a crisis bargaining model. Journal of Conflict Resolution, 38(2), 236–269.
Schweller, R. L. (1998). Deadly imbalances: Tripolarity and Hitler's strategy of world conquest. New York: Columbia University Press.
Bed Diplomacy

David Andrew Omona
Uganda Christian University, Mukono, Uganda

Keywords
Bed · Conflict · Diplomacy
Introduction

Bed diplomacy is an informal kind of diplomacy in which a peaceful relationship between two or more parties is hatched, enacted, cemented, maintained, and reenacted through marriage. This form of establishing, maintaining, and reenacting relationships between people of diverse traditions has been part of human interaction since antiquity. Whereas from antiquity women were blamed for all the ills of life – as seen in the Greek poet Hesiod's myth of the great woman "Pandora," who is believed to have "opened the lid of a jar containing all plagues and diseases of the world and let them out" (Pomeroy et al. 2004, p. 72) – they have also invariably been the source of enacting, maintaining, and reenacting peaceful relationships between people who are, or would otherwise have been, enemies. Moreover, marriage is a mark of responsibility and a symbol of maturity, and it has been used over the years as a means of forging political, economic, and military alliances between people who are not kinsmen (Talbot 1967, p. 193). Although some relationships established through marriage have failed to achieve their intended goals, marriage has helped to maintain peaceful coexistence between peoples. The power of marriage to unite and maintain peace was well understood by ancient empires, kingdoms, and chiefdoms, which invariably used marriage to build, maintain, and reenact friendships. This entry places the bed at the center of the marriage relationship because it is on the bed that marriage is consummated. The entry then analyzes how bed diplomacy can be applied at the preconflict, active conflict, and postconflict stages to address conflicts.
Preconflict Bed Diplomacy

Preconflict bed diplomacy is a preemptive approach to dealing with conflict. It aims at preventing a conflict from occurring once it is detected or its possibility is assumed. The use of bed diplomacy at this level helps to nip conflict in
the bud before it surfaces. This has been done through marrying women from different territories and raising children who are conscious of the values of peace. This approach to addressing conflicts was well known to emperors, kings, and chiefs. According to McMahon, the practice of sovereigns marrying many women

. . .was the rule rather than the exception in royal courts throughout the world, including China, Japan, Korea, Vietnam, Siam, Laos, Java, Arabia, Persia, Mongol Central Asia, Mughal India, Ottoman Turkey, Nigeria, Mayan and Aztec regimes, ancient Ireland and Iceland, and ancient Biblical Kingdoms, among others. (McMahon 2013, p. 917)
The above reveals that multiple marriage is not an exclusively African practice but a widespread one. Sovereigns neutralized prospective enemies and secured peaceful relationships through marriage to women from across their neighboring communities, empires, kingdoms, or chiefdoms as a way of safeguarding their hold on power. In some instances, it might not be the emperor, king, or chief, but their children who are made to marry into other royal families for this purpose. Whenever an emperor, king, or chief, or one of their children, marries from a community, the members of that community know they will benefit from the dividends that come as a result of the marriage. Whenever an enemy attempts to attack, they will all rally in support of the emperor, king, or chief who married from their community. King Solomon of Israel, who unlike his father King David was no military genius, used marriage to pacify his relations with the neighboring kingdoms and chiefdoms. He married seven hundred wives of noble birth and three hundred concubines (1 Kings 11:3; Betzig 2005, pp. 331–332). Although most Biblicists tend to look at King Solomon's marriage to many women in terms of how the women turned his heart away from the Lord, he was strategic in trying to pacify potential enemies of his kingdom. Besides, these marriages helped to sustain poor women using state funds, convert gentile women to Judaism, and increase wisdom by creating many Jewish families (Cohen 1981, pp. 24–37). By and large, marrying these women helped King Solomon to neutralize the would-be enemies of Israel by
building strong alliances and increasing trade and commercial activities (1 Kings 10:28–29). In the sixth century BC, when the Greek states were tired of fighting each other, they began in earnest to establish formal mechanisms for avoiding war. The tyrants who were in charge of these states conducted foreign policy by creating pacts of friendship or marriage alliances with other tyrants or with the top aristocrats (Pomeroy et al. 2004, p. 89). Through such marriage arrangements, they were able to abate conflicts with their neighbors through soft power rather than hard power. Whereas this form of diplomacy starts as a private affair (Bound et al. 2007, pp. 22–23), it eventually diffuses into a public diplomacy by subsuming all those within a community into the relationship. To safeguard relationships between royals, formal documentation was produced to guide the continuation of the tradition. In 507 AD, for example, when King Clovis of the Franks was preparing to march against the Visigoths under the leadership of King Alaric II, King Theodoric the Great of Italy tried to use their relationship by marriage to mediate. At the center of this marriage relationship was Theodoric the Great: King Clovis' sister married Theodoric, and King Alaric married Theodoric's daughter. In a letter to King Clovis, King Theodoric wrote

The holy laws of kinship by marriage [adfinitatis iura divina] have purposed to take root among monarchs for this reason: that this tranquil spirit may bring the peace which peoples long for. For this is something sacred, which it is not right to violate by any conflict. For what hostages will assure good faith, if it cannot be entrusted to affection? Let rulers be allied by family [sociantur proximitate], so that separate nations may glory in a common policy, and tribal purposes join together through special channels of concord. (Crisp 2003, p. 2; cf. Moorhead 1992, p. 186)
Whereas King Theodoric the Great tried to use "bonds of kinship to control, administer, or influence the diplomatic and military relations among the rulers of Western Europe" – because "kinship both by blood and marriage is an important and meaningful connection that regulates one's behavior" (Crisp 2003, pp. 2–3) – the effort ended in disaster. King Clovis did not obey the pact; he went ahead
and killed King Alaric II. Importantly, the existing tradition of kinship bonds shown in the above letter indicates a very strong regard for marriage ties in creating alliances. Perhaps this arrangement did not work because the relationship between King Clovis and King Alaric II ran through King Theodoric the Great of Italy rather than directly through marriage between the two kings. Had that been the case, the two might have sought a peaceful way of dealing with the crisis rather than resorting to war. At times, relationships could be established when local rulers and very influential persons intentionally contracted and exchanged their children in marriage to concretize their bond. Through such bonds, they would formally establish political, military, or economic links. This helped to create alliances that boosted "the political prestige and the military strength of the chief contractors who often were lineage or community heads" (Uchendu 2006). History has shown that, over the years, inter-royal marriages have been the principal method of establishing and maintaining peace between territories. Such arrangements assisted in pacifying Europe, and the complex web of relationships among premodern European royal families attests to this. For example, Ferdinand III (1201–1254), King of Leon and Castile, was the son of Alfonso IX, King of Leon; his mother was Berengaria, the elder daughter of Alfonso VIII, King of Castile. Her mother was a daughter of Henry II of England, and her sister Blanche became the mother of St. Louis of France. The marriage between King John II of Castile and Queen Isabella of Portugal united the two kingdoms (Nykanen 2014, p. ii), and the marriage of their daughter Queen Isabella of Castile to King Ferdinand II of Aragon cemented the bond between their kingdoms and led the two monarchs to co-rule their kingdoms (Nykanen 2014, p. 1), while leaving the inhabitants of each kingdom to maintain their own languages.
Further still, the marriage of Princess Catherine of Aragon to Prince Arthur of Britain, and subsequently to King Henry VIII after the death of Prince Arthur (Karlie 2016), notwithstanding the squabbles that followed thereafter,
helped King Ferdinand II of Aragon to secure a much-needed alliance with Tudor Britain against France. The marriage of Queen Alexandrina Victoria of Britain to her first cousin Prince Albert of Saxe-Coburg and Gotha in 1840, and the marriages of their nine children into other European royal families, led Queen Victoria to be regarded as the grandmother of Europe (Veldman and Williams 2018). Most of these boys and girls were married off not because they wanted to be, but for the good of their parents' rule. In all this, we see the power of the bed uniting entities, helping them to create alliances and to work toward their joint progress for the good of the citizenry. Of course, marriage does not create peaceful relationships between people only when royal families are the primary parties to it. At the local level, marriage between ordinary people works to unite different ethnicities. Whenever marriage across borders occurs, even if the communities concerned have had differences, the bond between the couple will unite the communities. This clearly brings to the fore the spirit of Ubuntu, since people are people because of others. Through this, the negative spirit of "otherness" is replaced with a positive spirit of "togetherness" imbued in people, rekindled as a result of the marriage between their kin. Such a spirit is energized when the couple has a child, which immediately transforms the woman into a mother. At the center of this metamorphosis is the bed. As a mother, a woman commands respect in the community. Motherhood makes women the first teachers of the children they give birth to. As principal parties, they take an active role in their children's upbringing. If a woman raises her children well and makes them know the value of peace, these children will grow up to be peace lovers. For a woman to raise children well, the bed has to be peaceful.
If the relationship between the couple is not cordial, the children will form a negative picture of marriage. And if children grow up in an environment where violence is the order of the day, they will come to believe that violence is the only way of relating to others, leading to a non-peaceful society.
The above assertion, therefore, indicates that women's pacification of communities through positive childcare, responsible mothering, and nurturing of children in ways that prepare and socialize them towards peaceful coexistence is a key aspect of their role (Nwoye 2013). In most precolonial societies, a culture of peace, tolerance, and an antiwar tradition were embedded in and transmitted through folktales, proverbs, poetry, songs, and dance. Traditionally, women were often seen as the transmitters of these cultural values to their progeny and to future generations through such artistic expressions (Isike and Uzodike 2011, p. 42). Mohamed Abdi Mohamed candidly brought this out by citing a Somali proverb: "The values with which children are brought up precede their actual birth" (Mohamed 2003), because these values "are transmitted by mothers even while the child is still in the womb." As a result, Somalis believe that, "before becoming adults, we attend a basic school, and that school is mother" (Mohamed 2003, p. 102). Indeed, in different precolonial societies, women used songs, proverbs, and poetry to transmit the positive social capital values upon which peace is predicated. These values include patience, tolerance, honesty, respect for elders, communality and mutuality, compassion, regard for due discretion, gentleness, modesty, self-control, moderation, flexibility, and open-mindedness (Nwoye 2013). This is possible only when the bed is in order. If the bed is hot, it will be difficult to express the values of patience, tolerance, honesty, respect for others, and so forth. Bed diplomacy also played a pivotal role in securing a smooth and flourishing fur trade between settlers and Aboriginal peoples in the Hudson Bay and Montreal regions of Canada. This relationship is well captured by Jay Nelson (n.d.)
in a paper entitled "'A strange revolution in the manners of the country': Aboriginal-settler intermarriage in 19th century British Columbia." When the London-based Hudson's Bay Company (HBC) sent out its employees, its initial stance was that these male employees should not intermarry with the Aboriginal population. The fear was that intermarriage with non-whites would dilute the white race. However, given the lack of white
ladies, the men resorted to marrying the local women. The strong stance of the far-off executives in London could not be enforced because the managers at the post in Hudson Bay were themselves parties to intermarriage. However, unlike the HBC, which discouraged its staff from intermarrying with the Aboriginals, the Montreal-based North West Company's (NWC's) liberality in allowing its employees to intermarry with the Aboriginals worked to its advantage. "Recognizing the values of such union in securing trade ties and acculturation traders with Aboriginal languages and customs" (Nelson n.d., p. 26) helped the company to entrench its business at all levels of the Aboriginal community. The cordial relationship was not the result of only a few administrative staff of the NWC; rather, it was entrenched by giving employees at all levels the freedom to intermarry with the local women. This linked the NWC to Aboriginal communities along all levels of the trade hierarchy. The NWC's graciousness in accepting responsibility for maintaining the wives and families of its employees further worked to its advantage. As Nelson asserts:

Thus, from the mutual beneficial economic symbiosis of the early Pacific coast fur trade, a unique and relatively egalitarian institution of marriage, largely mirroring Aboriginal marriage rites and customs, emerged as the primary economic and social foundation of Aboriginal-settler interaction. The insulated social and economic world of the fur trade, which encouraged and required both Aboriginal and female autonomy, provided an environment in which individual traders were able to transcend racist sensitivity prevailing in their home countries. (Nelson n.d., p. 23)
Indeed, the settler employees' resort to intermarriage with Aboriginal women helped to secure the peace required for mutual interaction and for business to boom, given the freedom this intermarriage afforded them. Using the bed to hatch and maintain this relationship helped the fur business to progress uninterrupted, because both parties knew that, beyond the fur trade, they had become relatives by blood. The parents of the women who married settlers also did not stand against the fur business, because they knew
if they did, the survival of their daughters would be at stake. Moreover, the local community also stood to benefit from the sale of fur to the settlers. Hence, the mutual give-and-take relationship that ensued as a result of marriage worked to bind these people together for the good of both communities.
Active Conflict Bed Diplomacy

In his antiwar comedy Lysistrata, the fifth-century Greek comic playwright Aristophanes cogently describes how Lysistrata, the main character, organized women to barricade themselves in the acropolis and go on a sex strike to persuade their husbands to stop the Peloponnesian War, a war that pitted Athens against Sparta and her allies. Because the pain of erection became unbearable for the men, they agreed to stop the war and sign a peace agreement (Aristophanes 2002; Raghuram 2016). The author seemed to have understood what women are capable of doing to control men's ego. The reverse can also be true: on several occasions men have brought their wives to order by suspending sexual intercourse with them until they came to their senses and stopped being cantankerous. There are several modern examples of antiwar and anticonflict campaigns led by women following in Lysistrata's footsteps. For example, in 2003 women in Liberia declared a sex strike as part of a peace movement to end the Second Liberian Civil War (Garau 2017). Many women supported the cause and participated in the strike. Most prominent among them was Ellen Johnson Sirleaf, who eventually became President of Liberia. This strike created the publicity required for peace to be realized after heinous years of war in Liberia. In 2006, women in Colombia organized the "Strike of Crossed Legs" in protest against violence that had made life very difficult for ordinary Colombians (Agbedahin 2014, p. 12; Garau 2017). As a result, crime dropped by 26.6% in Colombia (Garau 2017). That aside, the women of Barbacoas, Colombia, also declared a sex strike from June to August 2011, demanding the road connecting their town to the rest of the
country to be paved (Agbedahin 2014, p. 12; Garlow 2011). This strike became unbearable because even the wife of the town's mayor left her matrimonial bed and moved to another room. Even when the authorities agreed to start work on the road, the women did not relent; only when the actual work started did they return to their bedrooms, affording the men the opportunity to resume normal conjugal rights with their wives (Garlow 2011). Similar sex strikes followed in the Philippines, Togo, and Ukraine (Garau 2017). In the Ukrainian case, women declared a sex strike specifically against Russian men in protest against Russia's attempt to annex Crimea (Engineer 2014; Raghuram 2016). Earlier, in 2009, Kenyan women had called for a week-long sex strike to protest the political stalemate between President Kibaki and the opposition leader Raila Odinga. These women's groups reasoned that "As the politicians argued over policies and procedures, the women and children were the ones being disproportionately affected by corruption and poverty" (Global Nonviolent Action Database 2009).
To this end, the G10 women's group directly laid out their demands:
• That the two principals respect the people and nation of Kenya by ending forthwith the little power games that undermine the dignity, safety, and democratic spaces of our country
• That the president and prime minister give respect, full intent, interpretation, and observation to the spirit and letter of the National Accord and Reconciliation
• That the two principals show commitment, good faith, and leadership in the implementation of the accord by making the interests of the nation paramount
• A responsive, sensitive, and people-driven leadership and coalition government that is decisive, clear about the country's priorities, willing to sacrifice individual ambition for the greater good of the nation, and which represents a force that inspires confidence among the country's people
• That the reform agenda be fast-tracked and given priority over all else
• That Vice-President Kalonzo Musyoka step aside and refuse to allow himself to be used to defeat the good intentions of the National Accord. (Global Nonviolent Action Database 2009)

Unfortunately, in the Kenyan case, when one section of women called for the sex strike, another section – the prostitutes – saw it in economic terms and announced that they were ready to serve men sexually (Agbedahin 2014, p. 13). Where there is disharmony within a group calling for strike action, achieving the intended result through a strike of this nature is not easy. By way of retaliation, men called for a 30-day sex strike to protest the pain women had caused them during the seven-day sex strike. Whichever side won, what is clear is that anything to do with the bed can cause a change in action. Indeed, after a few days, the principal parties to the crisis were able to sign for peace, and normalcy returned to Kenya (Global Nonviolent Action Database 2009). The power of women to cause a change of direction can further be seen in their success in getting President Lansana Conte of Guinea to accept a meeting with President Taylor. Isike and Uzodike (2011), quoting Fleshman (2003), observe that when the women realized "their strategy of focusing on human insecurity implications of conflict which worked with President Taylor was not working with President Conte, they changed tactics". Through a representative, these women emphatically told President Conte:

You and President Taylor have to meet as men and iron out your differences, and we the women want to be present. We will lock you in this room until you come to your senses, and I will sit on the key. (Fleshman 2003, p. 18)
The courage of the women's representative sent a shock wave through President Conte. He could not believe that these women could be so bold as to approach him. Isike and Uzodike (2011) report that after "a long silence," President Conte started laughing and commented: "What man do you think would say that to me? Only a woman could do such a thing and get by with it." Indeed, it was a bold and risky stand, though for the
sake of peace. "Crediting the women for changing his mind to attend the peace summit" (Isike and Uzodike 2011), the president said: "Many people have tried to convince me to meet with President Taylor, but only your commitment and your appeal have convinced" (Fleshman 2003, p. 18). This is a clear indication that women can cause a change in men's attitudes regardless of the dangers that come with it. Indeed, across the world, women's organizations, though limited in scope, have been mobilizing, lobbying, and campaigning against structural violence, unfair and oppressive laws, poverty, discrimination, and domestic violence. Such activities have far-reaching potential for peace (Lihamba 2003, p. 127). Through activities such as these, women's strong stand against wars and violence of all sorts has yielded great results. The examples of Lysistrata and women's groups around the world clearly bring out women's potential to propose means of ending wars. However, unless women fully understand their power, not only for their "personal development but also to fully play their role in the building of a society free of violence" (Lihamba 2003, p. 128), they will continue to leave issues of peacebuilding and conflict resolution to men – who will not find quick solutions unless they are denied that special bed right, the conjugal right. Commenting on what happens among the pastoralist communities in Kenya, Pkyala, Adan, and Masinde (2010) assert that whenever a quarrel or fight erupts among men, an older woman will come and either stand between them or remove her waistband and lay it between the belligerents. Once that happens, the fight or quarrel stops forthwith. That women have used their position as custodians of the bed to make peace is also seen in Somali women's use of their womanhood to foster peace.
To this end, Mohamed Abdi Mohamed asserts:

In some areas when war broke out between two clans, women sent envoys to both parties, in order to identify and establish contacts in both camps. And in some regions women organized and financed the bulk of the peace conferences. What is more, they confronted those men who were reluctant to join the peace process and at times even dragged
them into the conference hall. Women are able to do this because of their position in the community as wives, mothers, sisters and artists. As mothers, women bring up their children, inculcate in them basic decency and tolerance and explain to them the fatality of war. They make every effort to shield their sons from the lure of violence. As wives, some women try to remove their husbands from the war zone. They even threaten to leave them if they do not sever their links with conflict and war. Even those who are only engaged embrace these tactics with conviction, threatening their would-be husbands to dissociate themselves from the conflict or else. As sisters, there are signs that many women take issue with their brothers and eventually convince them that wars produce only death and destruction. As artists, women actively contribute to the search for peace by composing poems and songs that discourage violence and promote peace. Sometimes they organize poetic contests with emphasis on themes relating to peace and reconciliation. (Mohamed 2003, pp. 106–107)
Four main issues can be teased out of the above excerpt. Firstly, the power of women to cause a change in their husbands is real. It is equally true that men, in the quietness of the bed, can cause a change in the lives of their wives. The bed thus has the power to tame people's characters for the better. Secondly, children heed their mothers most. Whenever a mother tells a child not to participate in an action, the child, out of respect, will not participate. In Africa, people fear to go against the wishes of their mothers because a mother's curse is thought to be terrible. Thirdly, the influence of a sister in the life of her brother is powerful, all the more so when the home in which the children grew up is loving and peaceful. A woman's love for her brother will make her do whatever it takes to keep him alive. And fourthly, there is power in music, especially songs composed by women to celebrate men's courage or condemn their ill manners. Just as the song composed by women in praise of the young David after he killed Goliath made King Saul jealous (1 Sam 17:57–18:16), so is the power of women's peace songs over men. Peace songs composed by women like Yvonne Chaka Chaka, Brenda Fassie, and others helped in the fight against apartheid in South Africa. This, therefore, supports the saying that the power of the bed to bring order to an orderless state is real.
Postconflict Bed Diplomacy

Postconflict bed diplomacy is the application of bed diplomacy to re-establish relationships after conflicts. Given the fragility of the postconflict period, owing to the loss of mutual trust, cross-ethnic or cross-community marriage can help to resurrect relationships. Many of the marriage relationships described above that helped to address conflicts came after previous conflicts. Before the ancient kingdoms of the Middle and Far East, Asia, Europe, Africa, and the Americas resorted to peaceful relations, they may well have fought deadly wars and only thereafter realized the power of marriage in uniting them. Besides, some communities, after taking time to settle their disputes, resort to encouraging intermarriage. Where this is done, the women who marry across communities act as envoys of peace. It is understood that, through such marriages, those who were killed during the war could be replaced through childbearing.
Conclusion

The power of the bed to hatch, cement, maintain, and re-establish relationships is a tool that should not be taken lightly. From the foregoing discussion, it is clear that over the years people have used the power of marital relationships to abate war or bring peace between warring parties. This is not a recent innovation; it is something humans have been doing since antiquity. Indeed, when people marry across communities, such marriages can help to bind would-be enemies together, because the marriage of their kinsmen unites them and both communities benefit from the relationship. We have seen that the marriages of emperors, kings, chiefs, and members of royal families have worked to create peace and unite communities of diverse traditions. It is also true that when the bed is peaceful, that peace is reflected in the living room, the compound, and the community through harmonious living. Through such peace, the children born into these families will grow up seeing the value of
peaceful coexistence between people of diverse backgrounds, and thus become lovers of peace. On the other hand, if the bed is not peaceful, the children will grow up knowing and becoming violent. It is, therefore, the considered view of this entry that intermarriage between different communities should be encouraged without restrictions, so that the spirit of unity in diversity is inculcated in children and conflicts are avoided. It is also essential in intermarriage that children are brought up in a peaceful environment so that they will be peace-loving rather than violent.
Cross-References

▶ Indigenous Peacebuilding
▶ Peacebuilding
▶ Women, Peace, and Security
References

Agbedahin, K. (2014). Interrogating the Togolese historical sex strike. International Journal on World Peace, 31(1).
Betzig, L. (2005). Politics as sex: The Old Testament case. Evolutionary Psychology, 3, 326–346.
Bound, K., Briggs, R., Holden, J., & Jones, S. (2007). Cultural diplomacy. Leicester: iPrint.
Cohen, A. (1981). The politics of elite culture: Explorations in the dramaturgy of power in a modern African society. Berkeley: University of California Press.
Crisp, R. P. (2003). Marriage and alliance in the Merovingian Kingdom, 481–639. Ph.D. thesis, Ohio State University.
Engineer, C. (2014). Ukrainian women go on sex strike in protest against Russia's annexing of the Crimea. https://www.dailystar.co.uk/news/latest-news/371677/Ukrainian-women-go-on-sex-strike-in-protest-against-Russia-s-Crimea-takeover. Accessed 14 Apr 2017.
Fleshman, M. (2003). African women struggle for a seat at the peace table. Africa Recovery, United Nations Department of Public Information, 16(4), 16–19.
Garau, A. (2017). "Sex strike" may happen in Kenya to make men vote. http://allthatsinteresting.com/kenya-sex-strike. Accessed 14 Apr 2017.
Garlow, S. S. (2011). Women end sex strike in Colombia. https://www.pri.org/stories/2011-10-13/women-end-sex-strike-colombia. Accessed 14 Apr 2018.
Global Nonviolent Action Database. (2009). Kenyan women sex strike against government's paralysis. https://nvdatabase.swarthmore.edu. Accessed 13 Apr 2018.
Isike, C., & Uzodike, U. O. (2011). Towards an indigenous model of conflict resolution: Reinventing women's roles as traditional peace-builders in neocolonial Africa. African Journal on Conflict Resolution, 11(2), 33–58. Durban: ACCORD.
Karlie. (2016). Henry VII and Catherine of Aragon: The king and the pauper princess. https://henrytudorsociety.com/2016/11/28/henry-vii-and-catherine-of-aragon-the-king-and-the-pauper-princess/. Accessed 18 Nov 2018.
Lihamba, A. (2003). Women's peace-building and conflict resolution skills, Morogoro Region, Tanzania. In Women and peace building in Africa: Case studies on traditional conflict resolution practices. Paris: UNESCO.
McMahon, K. (2013). The institution of polygamy in the Chinese imperial palace. The Journal of Asian Studies, 72(4), 917–936.
Mohamed, A. M. (2003). The role of Somali women in the search for peace. In Women and peace building in Africa: Case studies on traditional conflict resolution practices (pp. 75–110). Paris: UNESCO.
Moorhead, J. (1992). Theodoric in Italy. Oxford: Clarendon Press.
Nelson, J. (n.d.). 'A strange revolution in the manners of the country': Aboriginal-settler intermarriage in 19th century British Columbia.
Nwoye, C. M. A. (2013). Role of women in peace building and conflict resolution in African traditional societies: A selective review. http://www.afrikaworld.net/afrel/chinwenwoye.htm. Accessed 7 Apr 2018.
Nykanen, L. (2014). Queen Isabella and the Spanish Inquisition 1478–1505. Honors thesis, University of Central Florida, Orlando.
Pomeroy, S. B., Burstein, S. M., Donlan, W., & Roberts, J. T. (2004). A brief history of ancient Greece. New York/Oxford: Oxford University Press.
Raghuram, N. (2016). No rights, no sex: The powerful history of women going on strike. https://broadly.vice.com/en_us/article/gyxbw3/no-rights-no-sex-the-powerful-history-of-women-going-on-strike. Accessed 14 Apr 2017.
Talbot, P. A. (1967). Tribes of the Niger Delta. London: Frank Cass.
Uchendu, E. (2006). Women-women marriage in Igboland. ResearchGate. http://www.researchgate.net/publication/273122433. Accessed 7 Apr 2018.
Veldman, M., & Williams, E. T. (2018). Victoria: Queen of United Kingdom. https://www.britannica.com/biography/Victoria-queen-of-United-Kingdom. Accessed 18 Mar 2019.
Further Reading

Aristophanes. (2002). Lysistrata and other plays (A. H. Sommerstein, Trans.). Penguin Classics.
Muñoz, E. C. (Fwd.). (2003). Women and peace in Africa: Case studies on traditional conflict resolution practices. Paris: UNESCO.
Biopolitics

Dorottya Mendly
Department of International Relations, Corvinus University of Budapest, Budapest, Hungary

Keywords
Body · Health · Demography · Migration
Introduction

The notion of "biopolitics" is extremely popular in contemporary social science research. Popularity entails numerous rival conceptualizations and layers of meaning, as well as many exciting research agendas. This piece aims to introduce readers to the main areas in which the concept of "biopolitics" has been used in the past decades and to sketch the outlines of this diverse research agenda. In a more general sense, biopolitics refers to an intersectional field at the frontier of biology and politics. Biology means, according to its etymology, the study of life itself. This broad definition should first be narrowed, in this case, to the study of human life and, more specifically, the study of human life through the body. With the idea of biopolitics, an ancient question resurfaces: are humans inherently political? Aristotle, notably, imagined men as living beings with political capacities and treated questions of biological existence separately. This tradition then remained intact for centuries, until modern man, "whose politics" placed "his existence as a living being in question" (Foucault 1980, pp. 142–143). A notable moment, when this amalgamation of biology and politics became a key concern, came around the turn of the nineteenth and twentieth centuries. It was the Swedish political scientist,
Rudolf Kjellén, who used the term for the first time, creating a neologism in which his era's political debates were finely condensed. As Lemke (2011), among others, notes, informed by the then-fashionable evolutionism and philosophy's new concern with life, the majority of the epoch's political thinkers shared an "organicist" view of states. This means that they saw states as "collective subjects," as natural as any living creature and therefore possessing comparable life instincts and needs, a single body, and a soul. Any form of politics and any policy that served these biologically justified needs of the state was "good" politics, and anything that went against the "laws of biology" was considered harmful. These lines of thought culminated in the dreadful biopolitical regime of Nazi Germany, which has been the example of a violent and exclusionist form of biopolitics ever since. Since then, one of the key debates in biopolitics has taken place between those who see it as pertaining to totalitarian regimes and those who argue that it is a key political logic in any modern state. The key theorist of the latter approach is Michel Foucault, the hugely influential French scholar, who developed his concepts of biopower and biopolitics as an integral part of his many works on power, during his lectures at the Collège de France (2003, 2008). He was mindful of early modern politics' focus on the individual body: how it is controlled, surveilled, and disciplined in and by a growing number of modern social institutions, supervised by states (1972, 1995). On the other hand – and this is his great contribution to the theory of biopolitics – he developed these insights further and showed how politics' focus on the individual body was itself motivated by societal considerations and, later, how a new focus and form of power had changed the game by the nineteenth century in (Western) Europe.
At the center of these developments, he found the emergence of states’ novel interest in “the population” as such. Not unrelated to the coincidental development of industrial capitalism, the sovereign’s key concern came to be, instead of “making die and letting live,” to “make live and
let die” (Foucault 2003, p. 241) and, instead of disciplining the individual body, to manage and regulate the collective body, the population, so that it can perform at its maximum. Power, thus, became refocused from “man-as-body” to “manas-species” (2003, p. 242), a unit which could then be assessed, measured, improved, and monitored. Apart from the shift in the logic of power, this process was accompanied by the development of new instruments, used for the above purposes, such as the statistical methods of sociology and demography, processing various fertility, mortality, and morbidity data. These scientific tools became important instruments in the hands of states, which, based on these, launched a series of policies and programs, aiming to optimize the population, through targeted natalist policies, advertising public hygiene and healthcare, and supporting welfare funds as well as social and health insurance. In this sense, thus, biopolitics is the regulation and management of a collective body, which is among the most important tools in modern states’ toolkit of exercising power.
Biopolitics as a Form of Governance

This very concise summary of Foucault's extremely influential approach to biopolitics makes it hard to miss how and why the concept is so intimately tied to contemporary understandings of politics as the governance of a complex set of issue areas (Levi-Faur 2012). Foucault developed this concept as part of his more encompassing theory of governmentality, which refers to an "art of government" emerging in the same historical context, working through, among others, the abovementioned new forms of power. It reconfigures (governmentalizes) the state itself and aims to shape the conduct of subjects, resting on a complex set of new forms of knowledge and governmental apparatuses (Foucault 1991). Biopolitics is an integral part of this, as its basic rationale is to render populations governable. Seeing this connection is especially important in this section, the aim of which is to
provide an overview of pressing contemporary problem areas and the related governance solutions. It shows how the concept of biopolitics is useful in making sense of these and in placing them into a wider interpretive framework. First, it is important to stress that, while many tend to focus on the central role of states, biopolitics should not be imagined (and analyzed) exclusively at the state level. A complex set of actors both below and above states take part in the optimization, management, and perfection of populations, from international organizations to civil actors and, importantly, to individuals themselves. The new biopolitical methods in governing health are a case in point in this respect. In this sector, ever-smarter technological innovations appear every day, along with new public awareness and attitude-shaping programs, created and executed by governments, civil organizations, and the for-profit sector (French and Smith 2013; Gilmore 2016; Rich and Miah 2017; Ajana 2017). One should not be deceived by the fact that such collective endeavors are focused on individuals' bodies, as, e.g., in the measurement of steps taken or the quality of sleep, or in the monitoring of one's heart rate and pulse through smartphone applications, smart watches, and other devices used for these purposes. In the age of Big Data, the host of the individual data ultimately has access to a great pool of information, available for different types of usage and intervention. Not unrelated to this, the boundary between individual and collective concerns is generally rather thin – Foucault himself pointed to this by analyzing sexuality especially closely (1980). Importantly, beyond communities like Quantified Self, companies like Fitbit, or apps like Strava, many international actors participate in making biopolitics a global phenomenon.
For instance, the biopolitical apparatuses and practices of international organizations (IO) have been at the center of much interesting research of late. The question in this sense might be approached from two main directions: interpreting in biopolitical terms the relations between IOs and states or between IOs and populations
and/or individuals directly. In the first case, emphasis is put on how IOs' – and more specifically intergovernmental organizations' (IGOs') – "biopolitical techniques seek to secure institutional arrangements through which authorities can regulate, administer and control" national processes (Merlingen 2003, p. 368; also Dean 1999). Merlingen puts his emphasis on IGOs' policies toward states which "are in need" of improvement: "monitoring countries, comparing their behaviour to international institutional standards of normal statehood and developing the meticulous knowledge through which countries can be corrected and controlled" are among the techniques most frequently applied by a wide set of IOs (2003, p. 369). These measures are often understood to be part of "power-neutral," "management-like," and "professional" governance activities. A biopolitical framework, however, makes their intimate relation to power quite clear. The other approach is just as interesting, as IOs can direct their policies straight toward populations, leaving out states as intermediaries. For instance, in an enlightening example, Zanotti and her colleagues analyze the United Nations' (UN) peacebuilding activity in a disciplinary and biopolitical framework. They argue that when official materials denote sport as a vehicle of peacebuilding, carrying various positive effects for conflict-struck societies, such discourses and practices are also deeply biopolitical, as they "focus on governing the processes under which populations live together" (2015, p. 192). The goal, as in any biopolitical activity, is "to reinforce, control, monitor, optimize and organize the forces under it: a power bent on generating forces, making them grow and ordering them" (ibid., p. 194). Such activities also sit comfortably in the neoliberal agenda, generally speaking (Hayhurst 2009).
The key point in this sense is that neoliberalism requires, and aims to produce, resilient subjects who are able to take care of themselves and need no costly interventions or investments from their states. This line of thinking in terms of biopolitics is also apparent in corporate frameworks, as it seems
that corporations, too, have “discovered” the importance of having a resilient workforce, able to achieve higher levels of work performance – meaning higher levels of profit (Gleadle et al. 2008; Joseph 2013; Yoon et al. 2019). Such accounts can be understood as linking biopolitics explicitly to the most recent forms of capitalism – something that Foucault did not do with such enthusiasm. A key direction in these interpretations was famously set by Michael Hardt and Antonio Negri in their book series (2000, 2004, 2009), which explains the changing logic and functioning of postmodern politics in the age of global capitalism. An important distinction they make – one with which they depart from a Foucauldian understanding – is between biopolitics, which they link directly to production (a new, complex form, integrating production in the classic sense with broader social production), and biopower, which “stands above society” and “imposes its order” on it (2004, pp. 94–95). Their work has been greatly influential inside and outside academia and has contributed to the spread of the term in intellectual debates.
Dividing and Ordering Populations: Further Salient Fields of Biopolitics
A symbolic entry point for this section is the popular expression “the body politic”: according to Encyclopedia Britannica, “an ancient metaphor by which a state, society, or church and its institutions are conceived of as a biological (usually human) body” (Encyclopedia Britannica). A classic example of this understanding is the front cover of Thomas Hobbes’ Leviathan: Abraham Bosse’s famous engraving features the sovereign, whose body is made up of a multitude of human bodies, all looking up to the crowned head, the sovereign himself. The key point here is that the (ideal) society is seen in the tradition of modern European political theory as a homogeneous whole, a mass connected by certain attributes, contained within and by the state borders,
and is to be preserved as such through political means. Ordering and classifying “the body politic” remains a key biopolitical consideration, following in this sense also Kjellén’s and his contemporaries’ views. This section briefly discusses some characteristic fields within this problematic, such as nationality, human rights, demography, migration, and colonialism. For this, the approach of another immensely important theoretician should be introduced: Giorgio Agamben. One of the most important contemporary philosophers, Agamben took up many of Foucault’s ideas and reworked them – as he did with biopolitics, “the growing inclusion of man’s natural life in the mechanisms and calculations of power” (1998, p. 71). While Foucauldian accounts talked about complex, “socialized” forms of power, executed and reproduced by a multitude of actors, Agamben’s work focuses more on the classic theme of sovereign power. In the case of biopolitics, therefore, he sees the issue in the sovereign’s logic and practices: dividing societies, deciding who is inside and outside, who is included or excluded (1998, 2005). He reaches back to Antiquity’s distinction between zoe and bios, the first meaning the mere fact of biological existence (with no political relevance for antique philosophers) and the second meaning an inherently political form of life. Agamben’s famous contribution was a third form, bare life (homo sacer, as he calls it), which he sees as an important foundation on which the modern state’s sovereign order rests.
While it refers to a status in which one is actively stripped of political capacities, importantly, the “location” of this form of existence is not outside the political community but rather on its margins, in a “zone of indistinction.” The typical examples are concentration camps as well as modern-day asylum facilities – places where law and even basic human rights are not directly applicable and where people are reduced to their bare existence, prone to the abuses of, among others and importantly, sovereign power (see also Edkins 2000; Edkins and Pin-Fat 2005). Nations and nationality become a central concern in this sense through their very
conception. Agamben sees nation-states originating in the Declaration of the Rights of Man and of the Citizen, for it represents “the originary figure of the inscription of natural life in the juridico-political order of the nation-state,” since nationality and citizenship are formulated through the act of birth (Agamben 1998, p. 75). Those who are thus born into citizenship enjoy the rights of citizens and of men in general, while this system of modern nation-states clearly cannot properly deal with those who are excluded from this circle. The asylum seeker becomes the symbolic contemporary subject of such situations, but most of these conclusions also apply to migrants more generally. A more historical example with long-term effects is provided by the study of colonialism and colonial systems. The great recent advances in postcolonial studies allow us to see more clearly how these systems functioned, how biopolitics was essential in keeping them running, and how we (on both ends of this historical relationship) still have to live with this heritage. A basic observation in this sense is that it was no accident that racism acquired its “scientific” bases in the eighteenth century. As colonial encounters became more and more frequent, a series of European scholars, from Carl von Linné, the father of taxonomy, to Georges-Louis Leclerc de Buffon, who established the theory of humans’ climatic determinism, worked on delineating a system of racial hierarchy based on biology. Colonizers and colonial administrators did not hesitate to put these theories into practice. A key concern seems to have evolved around the question of sexuality and the “mixing of races”: while the colonial experience was special in each case, a general feature seems to be the colonizers’ desire to keep relations as racially “tidy” as possible and to construct clear systems of hierarchy, with meticulous regulations for those cases where the first requirement did not hold.
Ann Laura Stoler (2000), for instance, shows that, even as such regulations could take different forms in different contexts, addressing the question of métissage (i.e., the blending of peoples and cultures) in this way was important, e.g., in drawing the symbolic borders of nations.
Racism is, thus, a practice inherent to modern states, aiming “to fragment, to create caesuras within the biological continuum addressed by biopower” (Foucault 2003, p. 255). This implies not only “negative” types of interventions, the likes of which an openly racist state may engage in (as with concentration camps or ethnic cleansing) to preserve a clean, pure, and homogeneous population. It also implies those “positive” measures which single out certain groups of the population and aim to support their welfare and reproduction while taking a “passive” stance toward other groups, neglecting them, or “letting them die.” This, importantly, has a strong connection to the allocation of resources within societies: it has been shown that, as competition for resources becomes fiercer, following the dynamics of capitalism’s cycles, the biopolitical agendas of governments, especially toward the peripheries, sharpen existing social tensions and create new ones, inducing what might be called a “biopolitical panic” (Melegh 2016; see also: Antal 2019). Despite a clear convergence in fertility, the population of the Global South is expected to grow at a considerable pace until world population reaches 10–11 billion, while the Global North visibly declines in both fertility rates and population (UN 2019). Persistent disparities between the world’s macro-regions in demographic terms therefore suggest that such problems will not go away in the near future, which may render the knowledge of biopolitics timely, pertinent, and valuable.
Conclusion
A final point which should be mentioned, and from which some meaningful conclusions can be drawn, is what Lemke calls “ecological biopolitics” (2011, p. 3). In his comprehensive account of biopolitics, he categorizes this approach as one of the more overtly “political” ones, meaning that the practices and ideas he refers to treat a broadly understood biology as an object of political regulation. He identifies the ecological discourse emerging in the late 1960s and early 1970s as a particular
manifestation of this approach, urging politics to act for the protection of our natural environment. With this, he draws attention to the fact that the ecological crisis, which is unfolding in front of our eyes, does not only have “indirect” biopolitical relevance – through migration and the related, ongoing, and future racist political programs and exclusivist attempts at drawing the borders of political communities, etc. It points to the fact that the relationship between “the biological” and “the political” is a living one. In the form of the man-made climate catastrophe, it is on our doorstep, begging to break down yet another barrier raised by our modern ways of thinking, namely, that our concept of nature “is based on Cartesian dualism and the assumption that the natural and the social are ontologically different” (Dingler 2005, p. 210). On these bases, throughout the past centuries, a system of knowledge has been built, separating humans from the ecosystem, imagining the relationship only in terms of dominance and exploitation. This imaginary has also figured in the responses that global governance has been giving to the crisis. This is a very different take on what biopolitics might mean; and while it is not necessarily advantageous to stretch concepts too wide, the connections between these different domains may suggest much to think about. If one can believe the key theorists of biopolitics introduced above, the inclusion of the body, the questions of health, and the laws of biology have been key concerns of politics in the modern era – arguably even one around which the most important developments have evolved. At the same time, however, understanding the place of humans in their environments in terms of harmony and treating ecological aspects as a horizontal concern have been strangely missing from the agenda in the last several hundred years. 
This situation has recently been addressed by green political theory, which calls into question the general anthropocentrism of our thinking: “the idea that humans are the apex of evolution, the centre of value and meaning in the world, and the only beings that possess moral worth” (Eckersley 2007, p. 251). It is noteworthy that such views seem to be trickling into popular discussions as well. The study of biopolitics, as
a deeply critical endeavor, still has much to process in this sense, and we can expect the research agenda to remain as exciting and dynamically changing as ever.
Cross-References
▶ Health Security
▶ Refugees
References
Agamben, G. (1998). Homo sacer: Sovereign power and bare life (D. Heller-Roazen, Trans.). Stanford: Stanford University Press.
Agamben, G. (2005). State of exception (K. Attell, Trans.). Chicago: The University of Chicago Press.
Ajana, B. (2017). Digital health and the biopolitics of the Quantified Self. Digital Health. https://doi.org/10.1177/2055207616689509.
Antal, A. (2019). Kivételes állapotban: A modern politikai rendszerek biopolitikája [In a state of exception: The biopolitics of modern political systems]. Budapest: Napvilág Kiadó.
Dean, M. (1999). Governmentality: Power and rule in modern society. London: Sage.
Dingler, J. (2005). The discursive nature of nature: Towards a postmodern concept of nature. Journal of Environmental Policy & Planning, 7(3), 209–225.
Eckersley, R. (2007). Green theory. In T. Dunne, M. Kurki, & S. Smith (Eds.), International relations theories: Discipline and diversity. Oxford: Oxford University Press.
Edkins, J. (2000). Sovereign power, zones of indistinction and the camp. Alternatives: Social Transformation and Humane Governance, 25(1), 3–25.
Edkins, J., & Pin-Fat, V. (2005). Through the wire: Relations of power and relations of violence. Millennium – Journal of International Studies, 34(1), 1–24.
Encyclopedia Britannica. Body politic. Written by Joëlle Rollo-Koster. https://www.britannica.com/topic/body-politic. Accessed 26 Sept 2019.
Foucault, M. (1972). The archaeology of knowledge and the discourse on language (A. Sheridan, Trans.). London: Routledge.
Foucault, M. (1980). The history of sexuality, Vol. 1: An introduction. New York: Vintage Books.
Foucault, M. (1991). Governmentality. In G. Burchell, C. Gordon, & P. Miller (Eds.), The Foucault effect: Studies in governmentality (P. Pasquino, Trans.; pp. 87–104). Chicago: University of Chicago Press.
Foucault, M. (1995). Discipline and punish: The birth of the prison (A. Sheridan, Trans.). New York: Vintage Books.
Foucault, M. (2003). Society must be defended: Lectures at the Collège de France, 1975–76 (D. Macey, Trans.). New York: Picador.
Foucault, M. (2008). The birth of biopolitics: Lectures at the Collège de France, 1978–79 (G. Burchell, Trans.). Houndmills: Palgrave Macmillan.
French, M., & Smith, G. (2013). ‘Health’ surveillance: New modes of monitoring bodies, populations, and polities. Critical Public Health, 23(4), 383–392.
Gilmore, J. N. (2016). Everywear: The quantified self and wearable fitness technologies. New Media & Society, 18(11), 2524–2539.
Gleadle, P., Cornelius, N., & Pezet, E. (2008). Enterprising selves: How governmentality meets agency. Organization, 15(3), 307–313.
Hardt, M., & Negri, A. (2000). Empire. Cambridge: Harvard University Press.
Hardt, M., & Negri, A. (2004). Multitude. New York: The Penguin Press.
Hardt, M., & Negri, A. (2009). Commonwealth. Cambridge: Harvard University Press.
Hayhurst, L. M. C. (2009). The power to shape policy: Charting sport for development and peace policy discourses. International Journal of Sport Policy and Politics, 1(2), 203–227.
Joseph, J. (2013). Resilience as embedded neoliberalism: A governmentality approach. Resilience, 1(1), 38–52.
Lemke, T. (2011). Biopolitics: An advanced introduction (E. F. Trump, Trans.). New York: New York University Press.
Levi-Faur, D. (2012). From “big government” to “big governance”? In D. Levi-Faur (Ed.), The Oxford handbook of governance (pp. 3–18). Oxford: Oxford University Press.
Melegh, A. (2016). Unequal exchanges and the radicalization of demographic nationalism. Intersections: East European Journal of Society and Politics, 2(4), 87–108.
Merlingen, M. (2003). Governmentality: Towards a Foucauldian framework for the study of IGOs. Cooperation and Conflict, 38(4), 361–384.
Rich, E., & Miah, A. (2017). Mobile, wearable and ingestible health technologies: Towards a critical research agenda. Health Sociology Review, 26(1), 84–97.
Stoler, A. L. (2000). Sexual affronts and racial frontiers: European identities and the cultural politics of exclusion in colonial Southeast Asia. Comparative Studies in Society and History, 34(3), 514–551.
United Nations, Department of Economic and Social Affairs, Population Division. (2019). World population prospects 2019. https://www.un.org/development/desa/publications/world-population-prospects-2019-highlights.html. Accessed 30 April 2020.
Yoon, S. J., Chae, Y. J., Yang, K., & Kim, H. (2019). Governing through creativity: Discursive formation and neoliberal subjectivity in Korean firms. Organization, 26(2), 175–198.
Zanotti, L., Stephenson, M., & Schnitzer, M. (2015). Biopolitical and disciplinary peacebuilding: Sport, reforming bodies and rebuilding societies. International Peacekeeping, 22(2), 186–201.
Biosecurity and Biodefense

Christopher Long
Department of International Relations, School of Global Studies, University of Sussex, Brighton, UK

Keywords
Bioterrorism · Dual-use · Biotechnology · Medical countermeasures
Introduction
There is no single agreed-upon definition of biosecurity, but it can generally be understood as society’s collective responsibility to safeguard the population from the dangers presented by pathogenic microbes (Fidler and Gostin 2008, p. 4). Crucially, these dangers can arise from natural sources – such as the emergence of a novel influenza virus whose pandemic potential is increased by intensified global circuits of circulation and exchange – or from deliberate acts involving biological weapons and biological terrorism. In response to these threats, biodefense efforts utilize tools such as vaccines, therapeutics, and detection methods, in coordination with data collection, analysis, and intelligence gathering, to prevent or mitigate biological attacks against people and agriculture (Ryan and Glarum 2008, p. 19). In recent years, a whole panoply of unique institutions and organizations has been developed along these lines, primarily in the USA. This has included the emergence of a unique category of medicine termed the “medical countermeasure”: security technologies created to combat a range of threats such as anthrax, smallpox, and botulism, developed in coordination with private industry and stockpiled to respond to any attack. Advances in the biological sciences have played a key role in the emergence of these new technologies which, as will be investigated, also provide the basis for new and disturbing biological weapons.
This entry will begin with a brief history of the use of disease as a weapon before turning to the efforts that characterize biodefense today. It will look primarily at the creation of the largest civilian biodefense system set up to date, in the USA. Through an analysis of the development of this apparatus, the key political and conceptual issues that shape the arena of biosecurity and biodefense are detailed. This includes the role of technologies and research of dual-use concern, the issues of attribution and the strategic limitations that arise with the use of biological weapons, the bridging of public health and national security in the search for biosecurity, and the socio-technical nature of knowledge necessary to take advantage of advances in the life sciences in the development of biological weapons and new defenses.
History of Biological Weapons
It may be fair to say that pathogens and biological toxins have been used as weapons in conflicts throughout history. The mobilization of disease as a weapon has paralleled the scientific knowledge and understanding developed regarding the nature and workings of pathogens (Ryan and Glarum 2008, p. 7). As early as 600 B.C., filth, cadavers, animal carcasses, and contagion were recognized as having a debilitating effect on opposition personnel. During the siege of Caffa in 1346, diseased cadavers were hurled into the besieged city to spread plague and panic (Riedel 2004, p. 400). Smallpox developed a notorious reputation, in part as a result of its accidental and deliberate spread to unsuspecting and susceptible populations with little to no immunity, particularly during the first contact between European colonizers and the New World. In the eighteenth century, British armed forces in North America distributed blankets previously used by patients infected with smallpox. The weaponization of smallpox in this fashion against American Indians created epidemics, killing more than 50% of many affected tribes. One key factor that complicates our understanding of the deliberate use and weaponization
of disease throughout history is the level of responsibility that can be attributed to particular actors. Biological agents favored in deliberate attacks are naturally occurring, and without microbiological and epidemiological data, outbreaks often cannot be separated from natural endemic or epidemic cases (see Christopher et al. 1997). Another complicating factor is the contemporary level of understanding of the microbes themselves. Terms such as “miasmas” and “malaria” reflect the perception and understanding of the origin and spread of disease that predominated before disease could be attributed to particular biological organisms (Ryan and Glarum 2008, p. 7). The idea that microorganisms share our environment and cause disease – the basis of the “germ theory” of disease – remained controversial well into the nineteenth century. The arguments of this theory, advanced by Louis Pasteur and Robert Koch, would eventually be formally endorsed by the French Academy of Sciences in 1864 (Levy 2002, p. 16). With this shift in scientific understanding, the development and use of biological weapons would take an altogether more programmatic shape. The First and Second World Wars saw heavy state investment in biological weapons as an added dimension of the military arsenal. Initially such efforts were directed against animals and their central role in the logistical process. The Second World War would see attacks and experiments on military personnel and civilians as well. One of the most notorious organizations in this regard was the Japanese Unit 731. Numerous attacks and experiments utilizing agents such as plague and anthrax were responsible for the deaths of thousands (Ryan and Glarum 2008, p. 9). An unusual outbreak of inhalational tularemia in 1942, shortly before the battle of Stalingrad, sickened troops on both sides.
The potential for blowback against one’s own troops is one strategic limitation of biological weapons. Accounts from former Soviet scientists such as Kanatjan Alibekov (Ken Alibek) later revealed that the Soviets had weaponized this disease a year before (see Alibek and Handelman 1999). From 1943 onward the
British and US biological weapons programs came into effect. British efforts focused on the viability and dissemination of Bacillus anthracis, the spore-forming bacterium that causes anthrax (Ryan and Glarum 2008, p. 10). Testing of the delivery of the spores via a conventional bomb was conducted at Gruinard Island off the coast of Scotland. The island would remain contaminated for decades, until an extensive decontamination program in the 1980s and 1990s removed the possibility of infection and death. The potential for long-term contamination of a territory or area, preventing its habitation, represents another strategic limitation on the use of biological agents as weapons of war. State development of biological weapons continued during the Cold War. This period also saw a number of allegations between states accusing one another of the deliberate use of biological agents. These included statements in the Eastern European press alleging the use of biological weapons by Great Britain in Oman in 1957, Chinese allegations that the USA caused a cholera epidemic in Hong Kong in 1961, and, in 1969, accusations by Egypt that “imperialistic aggressors” had used biological weapons in the Middle East, more specifically in a cholera epidemic in Iraq in 1966 (Riedel 2004, p. 403). Such accusations highlight the difficulty of placing concrete responsibility for the emergence of disease at an actor’s door and the way in which blame is mobilized politically within wider conflicts. Concern regarding the widespread development and potential use of these weapons grew in the international arena. As a result, the 1972 Convention on the Prohibition of the Development, Production, and Stockpiling of Bacteriological (Biological) and Toxin Weapons and on Their Destruction, also known as the Biological Weapons Convention (BWC), was created (Riedel 2004, p. 403).
The BWC only permits the development, production, and stockpiling of pathogens or toxins for prophylactic, protective, or other peaceful purposes. Unfortunately, the BWC suffers from a number of issues including a lack of clear guidelines on inspections, control of disarmament, adherence to the protocol,
enforcement, and the issue of how to deal with violations. Controversies also abound regarding the definition of “defensive research” and the quantities of pathogens necessary for benevolent research (Riedel 2004, p. 403). For all its limitations, the BWC represents a concerted effort to achieve consensus at the international level regarding biosecurity. The USA terminated its offensive biological weapons program by executive orders in 1969 and 1970, and its entire arsenal of biological weapons was destroyed between May 1971 and February 1973 (Riedel 2004, p. 404). Despite the widespread signing of the BWC in 1972, many countries did not follow suit, continuing to work on activities prohibited by the convention. An epidemic of anthrax in 1979 would illuminate the massive size and scale of the Soviet program. Reports of an anthrax outbreak in Sverdlovsk, a city of 1.2 million people 1400 km east of Moscow, appeared in the Western press in 1980. The emergence of gastrointestinal anthrax was attributed at first to the consumption of contaminated meat and, in the case of cutaneous anthrax, to contact with diseased animals. This explanation for the 96 cases of human anthrax – 79 gastrointestinal and 17 cutaneous, leading to 64 deaths – was heavily questioned and debated (Meselson et al. 1994, p. 1203). An investigation by independent scientists mapped the geographical distribution of human and animal cases in conjunction with wind and meteorological conditions. It concluded that the outbreak resulted from the wind-borne spread of an aerosol of anthrax pathogen (Meselson et al. 1994, p. 1206). In May 1992, President Boris Yeltsin, who had been the chief Communist Party official of the Sverdlovsk region in 1979, confirmed that the Soviet military was responsible for the release (Meselson et al. 1994, p. 1203). The largest documented outbreak of human inhalational anthrax was thus the result of an accidental release from a military microbiology facility on Monday, April 2, 1979.
The accidental release of anthrax from a biological weapons production facility bore testament to the industrial scale and capability of the Soviet biological weapons efforts. These efforts, organized under Biopreparat,
employed and trained around 60,000 people in biological weapons work over a 30-year period (Koplow 2003, p. 86). Accounts from former Soviet scientists such as Vladimir Pasechnik, Ken Alibek, and Sergei Popov also testified to the scale and capability of the Soviet biological weapons efforts. Such revelations, in combination with advances in the biological sciences, the emerging interest of terrorist groups in biological agents following the end of the Cold War, and the attacks of September 11, 2001, and the anthrax letters in their wake, would drive investment by the US government in the largest civilian biodefense apparatus seen to date.
The Emergence of Civilian Biodefense in the USA
Advances in the Life Sciences and the “Dual-Use Dilemma”
Biological agents have been divided into three categories – A, B, and C – by the Centers for Disease Control and Prevention (CDC) in the USA. The categorization is based on the level of public health importance. High-priority agents pose a risk to national security because they can be easily disseminated and transmitted from person to person, result in high mortality rates, have the potential to cause panic and disruption, and require special action for public health preparedness (Ryan and Glarum 2008, p. 39). Category A agents are of the most concern to terrorism and defense experts, as they have the greatest potential for harm and disruption. Included in this category are diseases such as anthrax, smallpox, plague, tularemia, botulism, and viral hemorrhagic fevers such as Ebola. Advances in the biological sciences in general, and the molecularization of biology in particular, have ushered in a new era regarding the development of biological weapons and the need for defenses against them. Genetic engineering – also known as gene splicing, recombinant DNA technology, and genetic modification – opens up the possibility that terrorists may use this molecular knowledge to create a new class of biological agents and expand the biological weapons (BW)
paradigm (Petro et al. 2003, p. 161). The possibilities opened up by this technology have led to a new classification of genetically modified BW agents as a separate category of BW. This category of weapon has influenced the US government’s understanding of the threat of bioterrorism and the medical countermeasure development strategy deployed in response. Potential modifications of traditional agents include antibiotic resistance, increased aerosol stability, or heightened pathogenesis (Petro et al. 2003, p. 162). It may also be possible to make traditional pathogens harder to detect. These molecular possibilities have also driven fears that potential terrorists may use biotechnology to generate an entirely new class of fully engineered agents, referred to as advanced biological warfare (ABW) agents (Petro et al. 2003, p. 162). Future agents may be rationally engineered to “target specific human biological systems at the molecular level” (Petro et al. 2003, p. 162). In a move away from traditional agents, engineered agents may target specific biochemical pathways critical for physiological processes. The capabilities of such ABW agents are limited only by the extent of parallel advances in biotechnology, and they would pose significant problems for the development of new defenses. Molecular technologies have also opened up the possibility of synthesizing viral genomes, facilitating the creation and reconstruction of viruses from scratch (Lentzos and Silver 2012, p. 133). Technologies such as genetic engineering pose a “dual-use dilemma” because it is difficult to prevent their misuse without foregoing their beneficial applications (Tucker 2012, p. 1). Further, it has been recognized that many of the technologies with the potential to do the most good are also capable of causing the most harm.
“Dual-use” refers to “materials, hardware, and knowledge that have peaceful applications but could also be exploited for the illicit production of nuclear, chemical, or biological weapons” (Tucker 2012, p. 2). In contrast to, say, nuclear technology, the pathogenic bacteria and viruses that are used in biotechnology are readily available from natural sources; have numerous legitimate applications in
research and industry; are present in many types of facilities, such as hospitals and universities; and are impossible to detect at a distance (Tucker 2012, p. 2). These factors make the use of biological agents and biotechnologies a particularly pressing dilemma and would significantly shape the US government’s biodefense efforts.
Concern Regarding Terrorist Use of Biological Weapons in the USA
Concern regarding the ability of terrorists to shape and enhance the killing power of biological weapons rose to prominence in the USA in the 1990s. Foremost in shaping these concerns were the activities of the Japanese religious cult Aum Shinrikyo. In March 1995, the cult attacked the Tokyo subway with the chemical nerve agent sarin, killing 12 people. A subsequent investigation revealed that between 1990 and 1994 the group had attempted to produce a number of biological agents, including anthrax and botulinum toxin. On nine occasions the group attempted to disperse what it had produced, but these attempts had no effect (Leitenberg 2001, p. 140). These failures occurred despite the fact that the group had access to virtually unlimited funds, had four years to work undisturbed, and could draw on a dozen people with graduate training. Further, despite the expenditure of several million dollars, the group was unable to obtain any information concerning biological weapons from scientists who had worked in the former Soviet Union’s industrial-scale biological weapons program (Leitenberg 2001, p. 140). The Aum case was interpreted in government circles as representing a new avenue opened up to potential terrorists by the increased power of biotechnology. A key conceptual issue recently raised in this area may explain the group’s failure: the effective and efficient use of biotechnology has been characterized as dependent upon knowledge of both a social and a technical character (Vogel 2008, pp. 239–240).
Such knowledge must be developed over time through experimentation, without which efforts are bound to fail. The actions of the Aum group stoked fears that terrorists might gain access to widespread technology that could make the job of biological weapons
production much easier. The US government's perception of the insecurity posed by bioterrorism in the 1990s was also influenced by Iraq's biological weapons development program and by the revelations from former Soviet scientists, noted above, as to the scale and capability of the Soviet biological weapons program. Of particular concern was the Soviets' use of genetic engineering technologies to create enhanced strains of plague (Miller et al. 2002, p. 303) and anthrax (Garrett 2002, p. 359). The Soviet program was recognized as having carried out the first applications of new genetic engineering technologies to "improve" biological agents (Dando 2007, p. 79). In order to understand the weaponization of biological agents, past and future, by states and terrorists alike, the Central Intelligence Agency (CIA) embarked on Project Clear Vision in 1997. This project tested a Soviet-style bomblet and explored the military implications of gene splicing (Miller et al. 2002, pp. 295–296). The initial response of the US government to this threat focused on funding for public health infrastructure, research and development, and state preparedness, including the stockpiling of antibiotics and other medicines. Importantly, the Department of Health and Human Services, until this time an exclusively public health institution, would receive funding in its budget to implement counterterrorism measures. This marked the first time that the public health system had been integrated directly into the national security system. One of the key political challenges in the area of biosecurity is the management of the effects of this integration of national security and public health. Concerns have arisen around this integration, particularly regarding the prioritization of national security at the expense of public health.
Following the terrorist attacks of September 11, 2001, letters filled with "weapons-grade" anthrax, later traced back to the US Army Medical Research Institute of Infectious Diseases (USAMRIID) (Ryan and Glarum 2008, p. 282), would kill five people and spread disease and panic in Washington, DC, and in areas as far afield as Florida. These events significantly intensified the response premised upon the integration of public health and national security and set in motion the development and stockpiling of an
entirely new discursive category of medicine that would signify the merging of these two areas: the medical countermeasure.

The Development of US Civilian Biodefense

In June 2002, the Public Health Security and Bioterrorism Preparedness and Response Act was signed into law, establishing the Strategic National Stockpile (SNS) to store medical countermeasures, extending and replacing the National Pharmaceutical Stockpile created in 1998. The National Strategy for Homeland Security of 2002 set out the decision to develop broad-spectrum vaccines, antimicrobials, and antidotes. This would augment the SNS, which at that time already contained a sufficient antibiotic supply to begin treatment for 20 million persons exposed to Bacillus anthracis and was projected to contain enough smallpox vaccine for every US resident by the end of that year. In 2004, the Project BioShield Act was signed, delivering US$5.6bn over ten years to incentivize and encourage the private sector to partner with the US government to develop medical countermeasures against biological, chemical, radiological, and nuclear attacks (Ryan and Glarum 2008, p. 256). It was also set up to provide a novel mechanism for federal acquisition of those newly developed countermeasures. Such mechanisms are necessary because markets have historically failed to inspire socially optimal levels of drug and vaccine innovation and consumption in the biodefense arena (see Hoyt 2012). This novel mechanism provides private industry with a guaranteed, government-backed market for the sale of medical countermeasures. Unfortunately, this incentive structure was not sufficient to entice large and experienced pharmaceutical companies to work in this area, while the smaller and less experienced biotech companies that did prove interested lacked the experience and resources to drive a potential product through the extremely arduous development pathway.
These issues presented themselves starkly in the failure of the first Project BioShield contract, for a new anthrax vaccine. In response, the US government created the Biomedical Advanced Research and Development Authority
(BARDA) in 2006, following the passage of the Pandemic and All-Hazards Preparedness Act. BARDA addresses the shortcomings of Project BioShield by providing companies with a range of financial and technical support mechanisms throughout the drug development pathway (see Elbe et al. 2015). Over the ten years from 2004 to 2014, the combined efforts of Project BioShield and BARDA invested just over $3bn in the stockpiling of 75,025,000 medical countermeasures to address smallpox, anthrax, and botulism (Gottron 2014, p. 8). Project BioShield, BARDA, and the production, stockpiling, and dissemination of medical countermeasures sit within the mitigation arm of the US government's comprehensive emergency management of bioterrorism and biodefense. Comprehensive emergency management consists of mitigation, preparedness, response, and recovery (Ryan and Glarum 2008, p. 260). The BioWatch and BioSense programs and the Cities Readiness Initiative (CRI) represent further preparedness tools. BioWatch provides early warning of a biological attack by sampling the air in high-risk cities for six particular pathogens, while BioSense collects nationwide public health data to identify peaks or trends in disease occurrence (Ryan and Glarum 2008, p. 264). The CRI facilitates the dissemination of medical countermeasures during an emergency. A response element is provided by the Laboratory Response Network (LRN) of federal and state public health laboratories, which consists of sentinel, reference, and national laboratories. The Federal Bureau of Investigation's Hazardous Materials Response Unit (HMRU) and the National Guard WMD Civil Support Teams (CSTs) also carry out emergency response and recovery. This array of institutions and organizations represents the most advanced effort to date to implement civilian biodefense and protect populations from deliberate biological attack.
Key Developments and Future Issues

Various experiments have taken place over the last decade that have sent shock waves through the
biosecurity community. In 2011, scientists mutated the H5N1 avian influenza virus into a version that was transmissible through the air between ferrets, the standard laboratory model for influenza transmission in humans. Ron Fouchier, the scientist responsible, was criticized for deliberately creating a mammalian strain of pandemic flu. The details of the experiment were termed a cookbook for terrorists and were initially withheld from publication. Earlier, in 2002, synthetic genomics had been used to recreate the poliovirus, prompting fears that terrorists might use this technology to recreate other, more deadly viral agents. In 2005, the Spanish influenza virus, responsible for killing 50 million people worldwide between 1918 and 1919, was also recreated using this tool, with the aim of understanding the genetic basis of its virulence so as to guide the development of effective antiviral drugs. In principle, it is now possible for scientists to reconstruct any virus for which an accurate genetic sequence exists. This has particularly disturbing implications for the area of biodefense. In March 2017, it was announced that synthetic biology had been used to successfully synthesize the horsepox virus in the search for a safer vaccine against smallpox: because smallpox and horsepox belong to the same family of orthopoxviruses, the virus could be created from scratch using the publicly available horsepox genome. With the complete genome sequence of multiple strains of the variola virus, the causative agent of smallpox, available on the Internet since the early 1990s (Koblentz 2017, p. 3), fears have arisen that similar efforts could be used to reintroduce this disease among an extremely vulnerable population. Smallpox was declared eradicated by the World Health Organization in 1980.
The British medical photographer Janet Parker was the last recorded person to die from the disease, following a laboratory accident in the UK in 1978. Along with the reconstruction of organisms, the editing of their genetic complement has become easier as a result of the emergence of CRISPR (clustered regularly interspaced short palindromic repeats) together with CRISPR-associated (Cas) proteins. CRISPR represents a “powerful, efficient, and reliable tool
for editing genes in any organism” (Caplan et al. 2015, p. 1421). This offers up the prospect that potential bioterror pathogens could be enhanced with respect to their infectivity and virulence. Such technologies serve to intensify the “dual-use dilemma” and present a particular difficulty that must be addressed in any future investment in biodefense measures and medical countermeasure development efforts.
Conclusion

This entry has analyzed the history of the use of biological weapons and the development of civilian biodefense in the USA in relation to key conceptual issues that fundamentally shape the area of biosecurity and biodefense. The historical use of biological weapons revealed the strategic limitations that arise in any deployment, specifically from blowback and long-term contamination. Biotechnological advances have served to raise fears that existing pathogens could be enhanced and that previously eradicated diseases could be reconstituted in synthetic form. However, the socio-technical nature of the knowledge required for the effective use of these technologies complicates the ease with which non-state actors can mobilize them for nefarious purposes. Paradoxically, the proliferation of biodefense efforts and laboratories across the USA following the anthrax letters may even increase the risk of infection that populations face in the future, whether through accidental or deliberate means.
Cross-References
▶ Biopolitics
▶ Bioterrorism
▶ Health Security
References

Alibek, K., & Handelman, S. (1999). Biohazard. London: Random House.
Caplan, A. L., Parent, B., Shen, M., & Plunkett, C. (2015). No time to waste – The ethical challenges created by CRISPR. EMBO Reports, 16(11), 1421–1426.
Christopher, G. W., Cieslak, T. J., Pavlin, J. A., & Eitzen, E. M., Jr. (1997). Biological warfare: A historical perspective. JAMA, 278(5), 412–417.
Dando, M. (2007). The impact of scientific and technological change. In A. Wenger & R. Wollenmann (Eds.), Bioterrorism: Confronting a complex threat (pp. 77–92). Colorado: Lynne Rienner.
Elbe, S., Roemer-Mahler, A., & Long, C. (2015). Medical countermeasures for national security: A new government role in the pharmaceuticalization of society. Social Science & Medicine, 131, 263–271.
Fidler, D. P., & Gostin, L. O. (2008). Biosecurity in the global age: Biological weapons, public health, and the rule of law. Stanford: Stanford University Press.
Garrett, L. (2002). Betrayal of trust. Oxford: Oxford University Press.
Gottron, F. (2014). The Project BioShield Act: Issues for the 113th Congress. Washington, DC: Congressional Research Service.
Hoyt, K. (2012). Long shot: Vaccines for national defense. Cambridge, MA: Harvard University Press.
Koblentz, G. D. (2017). The de novo synthesis of horsepox virus: Implications for biosecurity and recommendations for preventing the reemergence of smallpox. Health Security, 15(5), 1–9.
Koplow, D. A. (2003). Smallpox: The fight to eradicate a global scourge. London: University of California Press.
Leitenberg, M. (2001). An assessment of the biological weapons threat to the United States. In J. Rosen & C. Lucey (Eds.), Emerging technologies: Recommendations for counter-terrorism (pp. 132–155). Hanover: Institute for Security Technology Studies.
Lentzos, F., & Silver, P. (2012). Synthesis of viral genomes. In J. B. Tucker (Ed.), Innovation, dual use, and security (pp. 133–146). Cambridge, MA: MIT Press.
Levy, S. B. (2002). The antibiotic paradox. Boston: Perseus Publishing.
Meselson, M., Guillemin, J., Hugh-Jones, M., Langmuir, A., Popova, I., Shelokov, A., et al. (1994). The Sverdlovsk anthrax outbreak of 1979. Science, 266(5188), 1202–1208.
Miller, J., Broad, W. J., & Engelberg, S. (2002). Germs: Biological weapons and America’s secret war. New York: Touchstone.
Petro, J. B., Plasse, T. R., & McNulty, J. A. (2003). Biotechnology: Impact on biological warfare and biodefense. Biosecurity and Bioterrorism, 1(3), 161–168.
Riedel, S. (2004). Biological warfare and bioterrorism: A historical review. Baylor University Medical Center Proceedings, 17(4), 400–406.
Ryan, J., & Glarum, J. (2008). Biosecurity and bioterrorism. Oxford: Butterworth-Heinemann.
Tucker, J. B. (2012). Introduction. In J. B. Tucker (Ed.), Innovation, dual use, and security (pp. 1–16). Cambridge, MA: MIT Press.
Vogel, K. M. (2008). Biodefence: Considering the sociotechnical dimension. In S. J. Collier & A. Lakoff (Eds.), Biosecurity interventions: Global health and security in question (pp. 227–255). New York: Columbia University Press.
Further Readings

Elbe, S. (2010). Security and global health. Cambridge, UK: Polity Press.
Enemark, C. (2017). Biosecurity dilemmas: Dreaded diseases, ethical responses, and the health of nations. Washington, DC: Georgetown University Press.
Guillemin, J. (2005). Biological weapons: From the invention of state-sponsored programs to contemporary bioterrorism. New York: Columbia University Press.
Hoyt, K. (2015). Medical countermeasures and security. In S. Rushton & J. Youde (Eds.), Routledge handbook of global health security (pp. 215–225). Abingdon: Routledge.
Katona, P., Sullivan, J. P., & Intriligator, M. D. (Eds.). (2010). Global biosecurity. Abingdon: Routledge.
Lakoff, A., & Collier, S. (Eds.). (2008). Biosecurity interventions: Global health and security in question. New York: Columbia University Press.
Rushton, S., & Youde, J. (Eds.). (2015). Routledge handbook of global health security. Abingdon: Routledge.
Bioterrorism

Animesh Roul
Society for the Study of Peace and Conflict, New Delhi, India

Keywords
Bioterrorism · Cults · Jihadists · Non-state actors · Pathogens
Introduction

The threat of the intentional or deliberate use of disease pathogens or biological agents, emanating from both rogue state actors and violent non-state actors (NSAs), remains a major concern for national and international security. By violent non-state actors, we generally mean armed insurgent groups, criminal syndicates, apocalyptic religious cults, and jihadi terrorist groups and
individuals. Rogue state actors are those countries that behave irresponsibly and dishonestly and may develop, stockpile, and use biological weapons or pathogens against the civilian populace to kill or terrorize them. These countries might also play a proliferator role by transferring bioweapon materials to terrorist organizations for geopolitical purposes. Historically, state actors have not been averse to using chemical weapons against civilians in conflict, although more recently a “chemical weapons taboo” seems to have taken hold across most of the international community (Price 1995), with a few notable exceptions. There is no empirical evidence suggesting that states have resorted to biological weapons to settle political or military scores in the post-World War II era. However, a few countries of concern have developed or stockpiled biological weapons in their secret arsenals, which could play a major role in any bioterrorism event. Discourse on bioterrorism, by contrast, is always focused on non-state actors. There have been instances where these actors attempted to acquire or develop, and even threatened to use, biological weapons against their perceived enemy targets. However, with very few exceptions (e.g., the Rajneesh cult in the USA in 1984, or the Aum Shinrikyo sect in Japan), these non-state actors have never perpetrated attacks using disease pathogens or even weaponized them. The objective behind bioterrorism may not be mass casualties; it can instead be aimed at major sociopolitical and economic disruption. Over the years, the capabilities and intentions of NSAs, primarily armed terrorist groups across the world, have shifted towards more violent, destructive, and spectacular methods.
It can very well be argued that if biological materials, technologies, and delivery systems were acquired by terrorist groups, they probably would not hesitate to use them against perceived enemies, maximizing impact through the fears associated with biological weapons. Terrorist groups, for their part, wish to survive and endeavor to thrive through continuous innovation and improvisation. Trends show that terrorist
groups have always improvised their tactics and methods, from knife and machete attacks to suicide bombings. In the face of this continuous upgrading of terror tactics, the use of a biological weapon or deadly pathogen by a terrorist group or an inspired extremist against the civilian population or targeted individuals may not remain a distant prospect. Biological weapons, or for that matter any deadly weapon system or technology, could be lethal in the hands of terrorists, violent doomsday (religious) cults, and organized criminal syndicates. No terrorist group, including transnational jihadist groups such as Al Qaeda and the Islamic State, has so far achieved any success in producing or employing destructive and disruptive bioweapon systems or materials. However, their demonstrated intention and interest in acquiring WMDs (weapons of mass destruction/disruption), including biological weapon materials, the related know-how, and the means of delivery, pose a major challenge. Much of the existing knowledge in the study of bioterrorism is speculative. The bioterrorism and bio-risk literature has mostly portrayed hypothetical scenarios and the aftermath of a prospective bioterror event, including consequence management (Wright 2007). Experts have discussed at length the improbable nature of bioterrorism events; the reasons commonly cited are the volatile character of, and the inherent difficulties associated with, the acquisition and successful use of the preferred pathogens. Policy scholars and scientists have also discouraged government agencies from exaggerating the risks of bioterrorism (Leitenberg 2005). However, there are several assessments of the likelihood of bioterrorism events in the light of past suspected bio-crime and bioterrorism efforts by violent non-state actors.
This literature deliberated on the possibilities in the wake of the Japanese cult group Aum Shinrikyo's futile attempts to develop and use biological weapons in 1990–1994 and following the 2001 anthrax letters in the USA. The Al Qaeda terrorist group's interest in acquiring WMDs also contributed to the spurt of literature on bioterrorism (Carus 2001; Cole 2009; Kellman 2007).
Definition

Before we dive into the issue of bioterrorism, it is imperative to know what a biological agent is. Simply speaking, biological agents are pathogens and their byproducts (e.g., viruses, bacteria, fungi, rickettsiae, and biotoxins) which can cause disease and death in living beings, including humans, animals, and plants. Plant pathogens and pests thus fall within the scope of the notion of biological agents. As far as bioterrorism is concerned, there are several definitions from medical, legal, or military perspectives, but no single commonly accepted one. However, the most prevalent definition encompasses a few common elements: the intentional or deliberate use of a disease pathogen to harm, kill, or disrupt the human environment by non-state actors, such as individual terrorists or terrorist groups, who may be motivated and inspired by political or religious causes. By human environment, we mean here broadly all living organisms, as alluded to above, as well as air, water, and the food supply chains. The vital distinction from biowarfare is that acts of bioterrorism are directed at nonmilitary targets, i.e., the civilian population. Bioterrorism as a subject should also be treated differently from bio-crimes, which may be nonpolitical or nonreligious and are often motivated by profit or retribution (Wheelis and Sugishima 2006).
Agents of Bioterror

Potential bioterrorism agents are classified into three broad categories by the US Centers for Disease Control and Prevention (CDC). Category A lists the most harmful disease pathogens, such as anthrax, botulism, plague, smallpox, tularemia, and viral hemorrhagic fevers like Ebola, Marburg, Lassa, and Machupo. Category B lists less severe pathogens such as brucellosis, Q fever, ricin, encephalitis, and food-borne agents such as Salmonella typhi. Category C lists lesser-risk agents as well as emerging and disruptive pathogens such as Nipah virus and hantaviruses. These
high-priority bioterror agents are mostly virulent, infectious, and disruptive in nature, and can be easily acquired or produced in secret laboratories. However, past evidence suggests that non-state actors such as Islamist terrorists and separatist ethnic insurgents have also shown interest in less severe biological pathogens and other poisonous biomaterials beyond these listed agents. For instance, a Tamil rebel group in Sri Lanka issued threats to use biological materials against the native Sinhalese in the early 1980s. The group threatened to spread bilharziasis (schistosomiasis) through infected snails in Sri Lankan rivers and allegedly plotted to use antiplant agents targeting rubber plantations and tea gardens (Carus 2001). Similarly, in 2003, a series of biotoxin-related threats surfaced in Europe and the USA. In one such instance, members of the Al Qaeda-linked Ansar al-Islam were alleged to be in possession of ricin in Wood Green, North London (Roul 2008).
Bioterror Events in the Past

The history of bioterrorism is mostly replete with conspiracies, hoaxes, and threats rather than the actual use of deadly pathogens for mass destruction. Even though several of these bioterror events remained unconfirmed allegations or failed attempts, a few involved the actual use of biological or toxic agents against civilians. Such events are discussed briefly below to better understand bioterrorism aimed at disruption rather than extensive destruction or mass human fatality. Four significant bioterrorism events in history are discussed in the following paragraphs.

The Rajneesh Cult (the USA, 1984): In 1984, the Rajneesh (Osho) cult, a religious sect based in The Dalles, Oregon (USA), intentionally and indiscriminately contaminated the salad bars of at least ten restaurants with Salmonella typhimurium bacteria. Over 750 people were affected by gastrointestinal illness in two waves between September 9 and October 10. Although no fatalities occurred, about 45 of the victims were hospitalized and treated for
moderate to severe illness. Subsequent investigation revealed the cult members' role in deliberately contaminating the salad bars. During the investigation, a vial of Salmonella identical to the outbreak strain was found on the premises of the religious group. Subsequently, members of the group admitted to the crime, which was intended to disrupt a local election, and to a plan to release the bacteria into a city water supply tank (McDade and Franz 1998; Torak et al. 1997).

Aum Shinrikyo (Japan, 1990–1995): In the early 1990s, the Japanese cult Aum Shinrikyo (its name means “supreme truth”), led by Shoko Asahara, attempted, unsuccessfully, to procure, produce, and disperse biological pathogens such as anthrax and botulinum toxin targeting the civilian population in Japan. Shoko Asahara and his followers believed in doomsday prophecies such as the end of the world and of the human race, a coming third world war, and a nuclear Armageddon. This apocalyptic cult later committed the deadly Tokyo subway attack using sarin gas, a chemical nerve agent, in March 1995, which killed 13 people and affected thousands more, directly and indirectly, for years to come through the ill effects of the gas (Vale 2005). Notwithstanding its success in using a chemical weapon such as sarin nerve gas, Aum Shinrikyo's multiple bioterror attempts between April 1990 and March 1995 failed to cause any actual damage. These failures also proved, to some extent, that acquiring, stockpiling, and effectively disseminating biological agents is a difficult task for non-state actors. However, the cult's efforts to produce bioweapons on a large scale, coupled with its dangerous intent to use aerosolized biological agents such as anthrax, Ebola, and botulinum toxin against the civilian population, remain a dangerous instance of bioterrorism in the last century.
In April 1990, cult members attempted to disseminate botulinum toxin targeting the civilian population near two naval bases at Yokohama and Yokosuka and at several Tokyo landmarks, including Narita Airport, the National Diet (Parliament), and the Imperial Palace. In October 1992, cult members, including the founder
Asahara, attempted to acquire the Ebola virus during a visit to Zaire. In June 1993, the cult again attempted to disperse botulinum toxin, using a car-mounted spraying device during Crown Prince Naruhito's wedding (Olson 1999). After these unsuccessful attempts with botulinum toxin, the cult moved on to anthrax, attempting between June and August 1993 to release an anthrax strain near its Tokyo headquarters. The Aum cult was also suspected of having plotted assassination and sabotage attempts using botulinum toxin, without any success. Finally, in March 1995, shortly before the sarin attack, the cult disseminated botulinum toxin at the Kasumigaseki subway station in Tokyo. This too failed to have any impact (Ballard et al. 2005).

“Amerithrax”: US Anthrax Letter Attacks, 2001: In the USA, postal letters containing anthrax bacteria were mailed to media and congressional offices in late 2001. These infectious cargoes resulted in five deaths from inhalational anthrax, i.e., infection affecting the respiratory system, primarily the lungs. At least 17 other people who either inhaled the spores or touched the anthrax-laced mail (resulting in cutaneous infection) suffered illness and, in some cases, long-term harm. Besides the deaths and disabilities, the anthrax letters caused immense financial and health-related disruption in the USA (United States Department of Justice 2010). The letters warned recipients to take penicillin (an antibiotic effective against bacterial infection) and carried the message “Death To America Death To Israel; Allah Is Great,” mimicking an Islamist motive as a deception. By 2007, the investigation into the anthrax mail terror had identified Bruce E. Ivins, a microbiologist at the United States Army Medical Research Institute of Infectious Diseases (USAMRIID), as the person behind the attacks. Before he could be prosecuted, Ivins committed suicide. With his death, some questions remain unanswered.
These include Ivins' role in the actual production of the highly infectious anthrax spores and his motive for using the rhetoric of Islamist terrorists in the letters (Engelberg 2011).
Bioterrorism: Threat Scenarios

Even though the history of the actual use of biological pathogens by violent non-state actors remains sparse and largely one of ineffectiveness, jihadist groups and far-right groups have shown interest in bioterrorism in past decades. The argument against the likelihood of this type of bio-violence mostly centers on the premise that technological challenges would hinder these groups from weaponizing pathogens and using them successfully. This is also somewhat substantiated by the lack of actual terrorist events involving biological weapon materials in the last couple of decades. It is also plausible that at least some terrorist groups have not considered bioterrorism capable of contributing significantly to their aims. However, jihadist strategy on weapons of war, credible evidence of these groups' focus on seizing or acquiring WMDs, and their willingness to use such weapons to inflict mass fatality or disruption make bioterrorism (or, for that matter, chemical and radiological terror) a compelling issue for policy discourse.
Jihadists and the Threat of Bioterrorism

Several Islamist ideologues and jihadists have propounded the use of biological and chemical weapons as a legitimate act of war for the mass killing of apostates and nonbelievers. In 2003, the Saudi cleric Nasir bin Hamd al-Fahd brought out a treatise on the legal status of using weapons for mass killings, especially against nonbelievers (Al-Fahd 2013). In February 2009, anti-Western Islamic clerics such as the Kuwaiti professor Abdullah Nafisi reignited debate about the possibility of chemical and biological terrorism when he recommended biological and nuclear attacks on the USA (Al Jazeera 2013). Newer evidence suggests that groups like Al Qaeda and the Islamic State are more than capable of using nonconventional methods of war, such as chemical and biological weapon materials, targeting the
civilian population or the military. Their intention to use these types of weapons has been made clear in the available jihadist literature. Whether in Pakistan or Syria, religious extremists want to take over the state and its military arsenals, industries, and infrastructure. Such a mindset among these zealots has increased the specter of bioterrorism. A prominent example of related endeavors was al Qaeda's plan to develop chemical and biological weapons under a program known as the “Yogurt Project” or “Project al-Zabadi.” This program reportedly had a starting budget of US$2,000–4,000 and was handled by Abu Khabab al-Masri, an Al Qaeda commander and former scientist in the Egyptian chemical weapons program (Cullison 2004). Traces of this effort are present in Al Qaeda's “Encyclopedia of Jihad,” which provides early insights into the strategy and operational aspects of the group and its network. The Encyclopedia of Jihad was found in 1999 in the home of Khalil Deek, an al-Qaeda-linked businessman, when he was arrested in connection with an alleged plot to bomb Jordan's main airport in the capital, Amman, on the eve of the millennium. The 11th volume of the Encyclopedia offers guidance on how to disperse potentially lethal biological organisms and poisons, including botulinum toxin, anthrax, and ricin. This volume also details targets such as water and food supplies and explains how to maximize panic and fear by poisoning medicines. Another relevant treatise, considered the jihadist chemical and biological weapons manual, is Abu Hadhifa al-Shami's “A Course in Popular Poisons and Deadly Gases” (ACLU 2015). Beyond strategizing about the possible use of such weapons, al Qaeda actively sought to acquire or develop an effective capability. Reports surfaced in the 1990s that associates of the al Qaeda leader Osama Bin Laden had attempted to purchase anthrax, plague, and other agents from Kazakhstan and the Czech Republic.
There are confirmed reports of al-Qaeda's interest in acquiring crop dusters to disseminate biological agents over cities or population centers (Salama and Hansell 2005). The examples of Abdur Rauf's and Menad
Benchellali, and their interest in anthrax and ricin, also suggest that al-Qaeda pursued chemical and biological terrorism and trained its network members to carry out such acts. Rauf, a Pakistani microbiologist, was on a mission to obtain anthrax spores and equipment for an al-Qaeda bio-laboratory in Afghanistan, in order to weaponize the pathogens. Similarly, Menad Benchellali, an al-Qaeda-trained terrorist, set out to weaponize ricin before his arrest in early 2004 (Warrick 2004). One of al-Qaeda's influential leaders, Anwar al-Awlaki, rationalized the use of biological agents, citing classical Islamic scholars primarily to remove moral and Islamic legal barriers to using these weapons against civilians (noncombatants). He once observed that "the use of chemical and biological weapons against population centres is allowed and strongly recommended due to the effect on the enemy" (Lister and Cruickshank 2012). He cited Islamic scholars to argue that it is permissible to use poison or other methods of mass killing against "disbelievers" in a war. Al-Awlaki set out this advice in the eighth issue of al-Qaeda's "Inspire" magazine, in an article entitled "Targeting the Populations of Countries at War With Muslims," in which he justified the killing of civilians (women and children included) and the use of chemical and biological weapons (Al Qaeda in Arabian Peninsula 2011). Like al-Qaeda, the Islamic State organization has demonstrated interest in biological and chemical weapons. In 2014, information gathered from a seized laptop belonging to a Tunisian Islamic State militant indicated the group's interest in acquiring or developing a biological weapon capability. A 19-page document in Arabic found on the laptop described the development of biological weapons, including how to weaponize bubonic plague from infected animals (Gold 2014).
The instructions found on the computer describing the benefits of biological agents indicated IS approval of work to weaponize bubonic plague along with other bacteria and viruses that would have an even more significant effect than a localized chemical attack. More alarming still, the laptop also carried a message of religious approval for the use of such weapons. It
reportedly read, "If the Muslims can't overwhelm the infidels in any other way, they are allowed to use weapons of mass destruction to kill everyone and erase them and their descendants from the earth" (Ynet News 2014). IS's interest in bioweapons resurfaced in several propaganda materials in July–August 2018, when the pro-IS social media account AlAbd-AlFaqir Media circulated a video titled "Bio-Terror." The outlet subsequently released more posters threatening: "We will make you fear the air that you breath" and "We will fight you with the same weapon you used to kill innocents" (TRAC 2018). Most recently, a Spanish IS-aligned group disseminated explosives and biological toxin manuals on April 30, 2021 (SITE 2021). These jihadist efforts to acquire or develop biological and toxin weapons have not yet succeeded, however; at the least, the groups have not demonstrated any bioterror capability on the ground.
Bioterrorism Threat and Right-Wing Extremism
Radical American and European right-wing individuals and groups have openly discussed tactics for waging biological warfare, primarily in scenarios of possible future physical attacks. The case of Anders Behring Breivik, a Norwegian right-wing extremist, raised concerns about the bioterror threat. His online manifesto, titled "2083: A European Declaration of Independence," revealed his views on the use of weapons of mass destruction (WMDs) and on how to change the system and society. The manifesto deals with conventional as well as chemical, biological, and nuclear weapons. Breivik's attention focused primarily on anthrax and ricin; he envisioned a large-scale anthrax attack that could kill over 200,000 people (NTI 2011; Karmon 2020). There have been a number of similar instances of messaging and chatter on bioterrorism and other forms of violence within social, political, and religious groups of the extreme right. In the
USA, the emerging threat of white supremacist groups brought concerns regarding biological and chemical terror threats back to the fore. One recent threat surfaced over social media channels, where plans were discussed to weaponize the coronavirus (Covid-19) via saliva-laced items to be used against non-White people (Walker and Winter 2020). However, such instances fall under the broad category of bio-scare or bio-risk rather than actual, fully fledged bioterrorism.
Conclusion
Any large-scale bioterrorism event would overwhelm government health infrastructure, resulting in widespread chaos and tragic consequences for civilian populations. As a US Office of Technology Assessment report rightly observed, biological weapons used under optimal conditions could have an impact similar to that of a small nuclear device (OTA 1993). This observation rings true at a time when every country in the world, however sophisticated its public health system, has been grappling with coronavirus disease since early 2020. Although the world has not yet witnessed a mass-fatality bioterrorist event, the prospective threat exists, and it would be simplistic and imprudent for strategic thinkers or policymakers to overlook the danger of future bioterror events targeting population centers or public spaces.
Cross-References ▶ Biosecurity and Biodefense ▶ Emerging and Re-emerging Diseases ▶ Health Security
References
Al Qaeda in Arabian Peninsula. (2011). Inspire. Al-Malahem Media, 8, 41–47.
Al-Fahd, Nasir Bin Hamad. (2013). A treatise on the legal status of using weapons of mass destruction against infidels. Accessible at https://ahlussunnahpublicaties.files.wordpress.com/2013/04/42288104-nasir-al-fahd-the-ruling-on-using-weapons-of-mass-destruction-against-the-infidels.pdf
Al-Jazeera TV. (2013). Abdallah Al-Nafisi terror speech against the United States. Accessible at https://www.youtube.com/watch?v=I6G2BvB4TPw. Accessed 02 May 2021.
American Civil Liberties Union. (2015). Survey of prevalent Al-Qa'ida manuals. Accessible at https://www.aclu.org/files/fbimappingfoia/20150309/ACLURM016597.pdf
Ballard, T., Pate, J., Ackerman, G., McCauley, D., & Lawson, S. (2005). Chronology of Aum Shinrikyo's CBW activities. James Martin Center for Nonproliferation Studies, Middlebury Institute of International Studies. Available at http://www.nonproliferation.org/wp-content/uploads/2016/06/aum_chrn.pdf
Carus, W. S. (2001). Bioterrorism and biocrimes: The illicit use of biological agents since 1900. Washington, DC: Center for Counterproliferation Research, National Defense University. Accessible at https://fas.org/irp/threat/cbw/carus.pdf
Cole, L. A. (2009). The anthrax letters: A bioterrorism expert investigates the attack that shocked America. New York: Skyhorse.
Cullison, A. (2004). Inside Al-Qaeda's hard drive. The Atlantic. Accessible at http://www.theatlantic.com/magazine/archive/2004/09/inside-al-Qaeda-s-hard-drive/303428
Engelberg, S. (2011, October). New evidence adds doubt to FBI's case against anthrax suspect. ProPublica. Available at https://www.propublica.org/article/new-evidence-disputes-case-against-bruce-e-ivins
Gold, D. (2014). Seized Islamic State laptop reveals research into weaponising the bubonic plague. Vice News. Accessible at https://news.vice.com/article/seized-islamic-state-laptop-reveals-research-into-weaponizing-the-bubonic-plague. Accessed 11 Sept 2015.
Karmon, E. (2020). The radical right's obsession with bioterrorism.
International Institute for Counter-Terrorism. Accessible at http://www.ict.org.il/images/The%20Radical%20Right%20and%20Bioterrorism.pdf
Kellman, B. (2007). Bioviolence: Preventing biological terror and crime. Cambridge: Cambridge University Press.
Lister, T., & Cruickshank, P. (2012). From the grave, al-Awlaki calls for bio-chem attacks on the U.S. CNN. Accessible at http://security.blogs.cnn.com/2012/05/02/from-the-grave-al-Awlaki-calls-for-biochem-attacks-on-the-u-s/
McDade, J. E., & Franz, D. (1998). Bioterrorism as a public health threat. Emerging Infectious Diseases, 4(3), 493–494. Available at https://www.hsdl.org/?view&did=444969
NTI. (2011). Norway killer wrote of anthrax attacks. Accessible at https://www.nti.org/gsn/article/norway-killer-wrote-of-anthrax-attacks/
Olson, K. B. (1999). Aum Shinrikyo: Once and future threat? Emerging Infectious Diseases, 5(4), 413–416. https://doi.org/10.3201/eid0504.990409
Price, R. M. (1995). A genealogy of the chemical weapons taboo. International Organization, 49(1), 73–103.
Roul, A. (2008). Is bioterrorism threat credible? CBW Magazine, 1(3). Accessible at https://idsa.in/cbwmagazine/IsBioterrorismThreatCredible_aroul_0408
Salama, S., & Hansell, L. (2005). Does intent equal capability? Al-Qaeda and weapons of mass destruction. Nonproliferation Review, 12(3). Accessible at http://cns.miis.edu/npr/pdfs/123salama.pdf
SITE. (2021, April). Spanish IS-aligned group disseminates explosives, biological toxin manuals. https://ent.siteintelgroup.com/Guide-Tracker/spanish-is-aligned-group-disseminates-explosives-biological-toxin-manuals.html
The United States Department of Justice. (2010, February). Amerithrax investigative summary. Available at https://www.justice.gov/archive/amerithrax/docs/amx-investigative-summary.pdf
Torok, T. J., Tauxe, R. V., Wise, R. P., Livengood, J. R., Sokolow, R., Mauvais, S., Birkness, K. A., Skeels, M. R., Horan, J. M., & Foster, L. R. (1997). A large community outbreak of salmonellosis caused by intentional contamination of restaurant salad bars. JAMA, 278(5), 389–395. Accessible at https://www.cdc.gov/phlp/docs/forensic_epidemiology/Additional%20Materials/Articles/Torok%20et%20al.pdf
TRAC. (2018, August). Islamic State supporters Al Faqeer (AF Media) depicting bioterror threatening: "We will fight you with the same weapon you used to kill innocents." Accessible at https://www.trackingterrorism.org/chatter/cgi-islamic-state-supporters-al-faqeer-af-media-depicting-bio-terror-threatening-we-will-fig
U.S. Congress, Office of Technology Assessment. (1993, August).
Proliferation of weapons of mass destruction: Assessing the risk, OTA-ISC-559. Washington, DC: U.S. Government Printing Office.
Vale, A. (2005). What lessons can we learn from the Japanese sarin attacks? Przegląd Lekarski, 62(6), 528–532. Available at https://pubmed.ncbi.nlm.nih.gov/16225116/
Walker, H., & Winter, J. (2020). White supremacists discussed using coronavirus as a bioweapon. Huffington Post. Available at https://www.huffpost.com/entry/white-supremacists-coronavirus-bioweapon_n_5e76a0ebc5b6f5b7c5458af2
Warrick, J. (2004). An Al Qaeda 'chemist' and the quest for ricin. Washington Post. Accessible at https://www.washingtonpost.com/archive/politics/2004/05/05/an-al-qaeda-chemist-and-the-quest-for-ricin/72d0f492-b8f3-4e98-bd74-369fa5bb2761/
Wheelis, M., & Sugishima, M. (2006). Terrorist use of biological weapons. In M. Wheelis, L. Rózsa, & M. Dando (Eds.), Deadly cultures: Biological weapons since 1945 (pp. 284–303). Cambridge: Harvard University Press.
Wright, S. (2007). Terrorists and biological weapons: Forging the linkage in the Clinton Administration. Politics and the Life Sciences, 25(1–2), 57–115. https://doi.org/10.2990/1471-5457(2006)25[57:TABW]2.0.CO;2
Ynet News. (2014, August). ISIS laptop reveals project to build biological weapons. Available at http://www.ynetnews.com/articles/0,7340,L-4566367,00.html. Accessed 01 Sept 2014.
Bubonic Plague
Animesh Roul
Society for the Study of Peace and Conflict, New Delhi, India
Keywords
Bubonic · Plague · Pandemic · Yersinia pestis
Introduction
The plague, otherwise notorious as the Black Death or the Pestilence, and often regarded as a curse from God, has its place in every religious scripture. For Christians it was divine punishment; for Muslims, a symbol of self-sacrifice (martyrdom). In Hindu scripture (the Bhagwat Purana), the plague was known as Mahamari, the "great death," caused by rats or rodents (Park 2000). Derived from the Latin word plaga, meaning a blow or sudden strike, the plague is described in detail, including its clinical manifestations, in Thucydides' The History of the Peloponnesian War (Crawley 2013; Rao 1994). It was recorded in the wake of the political struggle between Athens and Sparta: plague broke out in its most lethal form, causing many deaths in Athens, and brought total disruption of community ties and massive demoralization in society. Its impact on military and economic strength undermined the civil and religious institutions of the time (Smith 1997).
Plague Pandemics
The history of bubonic plague is documented among the great pandemics, and the disease is one of the
historic diseases, having killed over 200 million people worldwide so far, and it remains synonymous with dread and fear. Plague has had a devastating effect on human civilizations since 542 B.C., when the disease affected the Nile valley and Egypt during the Pharaonic era; archaeozoological evidence suggests that the Nile rat was the original carrier of the disease (Panagiotakopulu 2004), although it had previously been argued that the plague had a Central Asian origin. Arguably, bubonic plague was one of the most feared and deadly infectious pestilences known to humankind until the late nineteenth century. There have been three plague pandemics, or major human outbreaks, so far. The world witnessed the first plague pandemic in AD 541, during the reign of the Byzantine emperor Justinian I; for almost six decades it caused widespread casualties (Rosen 2007; Retief and Cilliers 2005). The outbreak mostly affected the Mediterranean region and ravaged Constantinople (present-day Istanbul, Turkey). This pandemic killed tens of millions of people and contributed to the fall of the Roman Empire. Its spread was chronicled by the Byzantine historian Procopius, according to whom the epidemic entered Europe in AD 541 from the port of Pelusium (in Egypt). The second pandemic, in the fourteenth century, devastated most of Europe and is estimated to have killed more than 25 million people. Notoriously and controversially termed the Black Death, the disease lasted from 1347 to 1351, unfolding as the most violent epidemic in recorded history (Mackenzie 2001). It is widely believed that the spread of the disease to Europe was connected to one of the first recorded incidents of biological warfare, documented in the memoirs of the Italian Gabriele de' Mussi. According to this work, soldiers of the Golden Horde (Mongols) catapulted corpses of plague victims into the besieged Genoese trading port of Caffa (now in Ukraine) on the Black Sea (Wheelis 2002; Derbes 1966).
From here, the disease spread to Italy, Spain, England, France, and North Africa and soon engulfed Europe. Even after the great pandemic wore out, plague
remained endemic among the rodent population, the primary reservoir of this deadly disease. As a result, the epidemic lingered for centuries, surfacing sporadically in Asia and Europe throughout the seventeenth century. The Great Plague of London (1665–1666), in which 70,000 people died, was the deadliest reminder. The third pandemic, driven by increasing human mobility in the era of the steamship, started in the later part of the nineteenth century (1893–1894) in China, reached Indian shores in 1896, and subsequently spread to other major port cities of the world. India had experienced plague before, in 1612, primarily affecting the city of Agra, but it was in the modern, third-pandemic era that India experienced the epidemic in its most virulent form, along with the United States and many South and Central Asian countries (Dhanukar and Hazra 1994). This pandemic reportedly killed more than 12 million people in China and India alone. In Egypt, it entered through the main ports, Alexandria and Port Said, around 1900. According to the World Health Organization, the third pandemic was considered active until 1959, by which time fatalities had diminished substantially, to about 200 per year. During this period, scientists identified and cultured the plague bacillus and developed a crude vaccine. Alexandre Yersin first cultured the plague bacillus in Hong Kong in the mid-1890s; it is now known as Yersinia pestis (formerly Pasteurella pestis) (Solomon 1995). A French scientist, while investigating the bubonic plague in Bombay, discovered the connection between the rat, the rat flea (Xenopsylla cheopis), and the plague bacillus. Later, Waldemar Haffkine developed an anti-plague vaccine, also in Bombay (now Mumbai in Maharashtra state, India).
However, by this time plague had spread around the globe, Australia excepted, and endemic foci had established themselves in rodent populations on almost every continent, irrespective of climatic conditions. Though the pandemic subsided gradually, it was international regulations on rat control in ports and ships that mostly restricted the
spread of the disease. Several other preventive and control measures followed with the development of modern science, including the use of disinfectants such as dichloro-diphenyl-trichloroethane (DDT), adopted in the later period of the third pandemic.
Bubonic Plague Etiology
What is plague, and how are humans susceptible to it? The answers to these questions depict the etiology of plague. It is a zoonotic disease primarily spread to humans from its natural hosts, i.e., rats. The infection cycle involves the causative organism (Yersinia pestis, hereafter Y. pestis), the reservoirs (rats), the vector (fleas), and the human host. The most common carrier is the wild rat (Tatera indica), but transmission occurs when disturbances in the environment facilitate contact between wild rats and house rats (Rattus rattus), brown sewer rats (Rattus norvegicus), or field rats (Rattus argentiventer). The bacterium is also transmitted to rabbits and squirrels through rat fleas (Xenopsylla cheopis and X. brasiliensis). There are cases where unsuspecting huntsmen kill and eat infected or sick animals (such as wildcats or marmots) and thus become infected; this type of transmission is often termed sylvatic plague. There are three principal clinical manifestations of plague: bubonic, pneumonic, and septicemic. Bubonic plague is characterized by swollen lymph nodes (buboes), mainly in the groin and less often in the neck and armpits, depending on the site of the flea bite; it cannot spread from person to person. Pneumonic plague involves the lungs and is highly infectious: it can spread among humans because the plague bacillus is present in the sputum of infected persons. Septicemic plague is very rare and occurs only when the bacilli invade the bloodstream (Park 2000). The disease starts with the rapid onset of fever and other systemic manifestations of gram-negative bacterial infection. Among the three principal types of plague, pneumonic plague is the most fatal, and patients who do not receive
treatment within 18 h of the onset of symptoms are unlikely to survive: the patient goes through shock, multiple organ failure, and death. Bubonic plague, which is of greatest interest to this volume, recalls the Great Pestilence that wreaked havoc for centuries. It should be noted that the other manifestations, the septicemic and pneumonic forms, may be primary or secondary to bubonic disease. Irrespective of the mortality rates of the three types of plague infection, bubonic plague draws more fear and dread for various reasons, including the physical manifestations of the disease on the human body. Arguably the most common form, bubonic plague is typically caused by the bite of an infected flea (the vector) that has previously fed on a plague-infected animal, but it can also result from the consumption of infectious fluids. During an incubation period of 2–6 days, Y. pestis bacteria are transported from the initial bite site to the nearest lymph nodes, which become swollen and tender, forming a bubo. Simultaneously, symptoms of blood poisoning or toxemia appear, including severe headache, chills, and fever accompanied by physical fatigue. The final stage consists of shock and respiratory arrest, which occurs in 50–90% of untreated bubonic plague cases. Naturally occurring plague is endemic in many regions of the world, where the enzootic cycle involves numerous species of rodents and small mammals. According to one estimate, over 340 species of mammals can host fleas, and nearly 30 varieties of fleas can transmit Y. pestis. When fleas feed on infected animals, they suck out blood containing Y. pestis bacteria, which then clog the upper gut of the flea. When the flea subsequently attempts to bite other animals or humans, it regurgitates the bacteria-laden blood into the wound, infecting the victim.
Undoubtedly, developments in modern science and the pharmaceutical industry have played a significant role in the successful management of outbreaks and the treatment of plague. For years, bubonic plague has been successfully treated with potent bactericidal antibiotics, such
as streptomycin, gentamicin, doxycycline, and ciprofloxacin. However, a case of a multiple antibiotic-resistant strain of Y. pestis occurred in Madagascar in 1995, drawing attention to the long-term effectiveness of antibiotic treatment against plague infection. At present, the antibiotic choices available can counter the emergence of resistant strains. Although plague as a disease no longer causes alarm and has mostly been wiped out from the major urban centers of the world, it still occurs around the world. The most endemic countries, according to the WHO (World Health Organization), are the Democratic Republic of the Congo, Madagascar (Africa), and Peru (South America). From 2010 to 2015, there were 3248 cases of plague reported worldwide, including 584 deaths (WHO 2017b). The ensuing sections examine three brief case studies of bubonic plague outbreaks, chosen for their historical importance and other intriguing aspects: an urban epidemic (in London); the complexity of disease manifestations after years of quiescence (both bubonic and pneumonic cases in Surat, India); and the endemic challenges related to plague (in Madagascar).
The Great Plague of London
England was a plague-endemic country with a history of outbreaks of varying intensity from the time of the Black Death, in 1348, up to 1665. Various descriptions of the disease symptoms and manifestations from 1665, together with recent DNA analysis of mass graves from the Bedlam burial ground dating to that period, support the theory that Y. pestis was responsible for the 1665 epidemic: a significant proportion of samples tested positive for Y. pestis (Independent 2016, September 8). The controversy around the origin of the outbreak notwithstanding, in the years preceding the London Plague a massive plague outbreak devastated cities in the Netherlands, with nearly 35,000 deaths in Amsterdam alone. Researchers
believe that the Dutch outbreak may have had a connection with the London outbreak, through either human or animal contact. The bubonic plague that ravaged seventeenth-century London (between 1665 and 1666) started in the city's overcrowded northwestern outskirt of St. Giles-in-the-Field. This was the worst plague outbreak since the Black Death of the fourteenth century. What started in St. Giles-in-the-Field eventually engulfed the whole of London within 4 months and killed approximately 68,500 people, as per available records. In total, the death toll reached somewhere between 75,000 and 100,000 by the end of this devastating pestilence (Porter 2009). The worst affected areas lay around the edge of the City of London: besides St. Giles-in-the-Field, also Cripplegate, Holborn, Bishopsgate, St. Botolph, and a few suburbs south of the Thames, for example Southwark (Porter 2009). The Great Plague exposed London's, and to that effect England's, basic and struggling public health system and inconsistent relief measures, mostly leaving the plague-affected at the mercy of the prevalent parish and monastery system. The plague also disrupted the burgeoning trade and commerce of London, which was already becoming a hub during that period.
Plague in Surat, India, 1994
Epidemics are not new to India or Indians. However, the reemergence of the disease in India in 1994 (in Surat, in Gujarat state), after several years of quiescence, provoked a public health debate in the country. The plague that ravaged the "Diamond City" of Surat and scenic hamlets in Shimla affected economic and political activity in the country and posed serious questions about the management of emerging and reemerging infectious diseases. During the third plague pandemic, India experienced the epidemic in 1895–1896, and it continued for at least two decades, killing approximately ten million people. One estimate
posits that it may have lasted until 1950, with the number of deaths at 12.5 million (Ramalingaswami 1996). It is believed that after 1950, the emergence of many broad-spectrum antibiotics and disinfectants like DDT and Gammexane contained the spread and transmission of plague. The ensuing period of quiescence (1967–1993) raised hopes that eradication might be complete, and, along with that hope, complacency in the health administration. Following this quiescence, India experienced plague outbreaks that took a heavy toll on the country's health infrastructure and urban management. Empirical studies show that in India plague occurs either in spring or in autumn, but is usually interrupted by the hot Indian summer (Cohn 2002); the Surat outbreak confirmed this pattern, as the disease broke out in the month of September. It is believed that the 1993 earthquake in Beed district disturbed the territorial equilibrium between wild rats and house rats, which facilitated fleas jumping between hosts; a flood on the river Tapti during that time aggravated this nature-driven development (Parasuraman and Unnikrishnan 2000). Meanwhile, the growth of the city skyline, haphazard planning, increasing slums, and unhygienic conditions all combined to enable the epidemic. Surat came under the grip of the plague in both forms, bubonic and pneumonic. Official figures recorded 752 cases and 44 deaths by the end of September 1994 (CDC 1994a). Although Surat remained the epicenter of the outbreak, the disease spread to other parts of the country, primarily through unrestricted human movement. In early September 1994, a bubonic plague outbreak occurred in Mamla village in Beed district of Maharashtra state; when the panic-stricken inhabitants fled the area, they carried the disease to other parts of India.
Besides Surat, cases were reported from Maharashtra, Karnataka, Uttar Pradesh, Madhya Pradesh, and New Delhi (CDC 1994b). Further research confirmed the association of Y. pestis with the epidemic in Surat and Beed, and demonstrated that Y. pestis had an enzootic existence in the region (Panda 1996).
Plague in Madagascar (1995–2017)
Bubonic plague remains a significant public health challenge in Madagascar, where it is endemic, especially in the central and northern highlands. The disease found its way to Madagascar in 1898 via the port of Toamasina, reaching the island by steamboat from India, and by 1921 it had reached the capital, Antananarivo. The disease mostly remained under control between 1928 and 1990. However, a first major outbreak occurred in 1991 around the coastal town of Mahajanga, and from 1995 to 1998 outbreaks of bubonic plague occurred there annually: a total of 1,702 clinically suspected cases of bubonic plague were reported, including 515 laboratory-confirmed cases of Y. pestis infection. The epicenters of the outbreaks were usually crowded and unhygienic districts with proximity between human and rodent populations. Available records show that from 1998 to 2016 a total of 13,234 suspected cases were recorded, mainly from the central highlands; 27% were confirmed cases and 17% were presumptive cases. Patients with bubonic plague represented 93% of confirmed and presumptive cases, and patients with pneumonic plague 7% (Andrianaivoarimanana et al. 2019). Besides Mahajanga, Mandritsara was worst affected in 2013, when bubonic plague killed 39 villagers despite an early warning in October from the International Committee of the Red Cross (ICRC) about the impending risk of a plague epidemic (The National 2013). The last major plague outbreak occurred at the beginning of August 2017, primarily affecting the capital Antananarivo and the central port city of Toamasina. The outbreak lasted until November 22, 2017, with a total of 2,348 confirmed, probable, and suspected cases of plague (1,791 cases of pneumonic, 341 cases of bubonic, and one case of septicemic plague), plus 215 unspecified cases. Over 200 people died of the infections (WHO 2017a). The rise in pneumonic plague cases caused concern about the possible future reemergence and rapid spread of plague in urban settings.
Conclusions
In the words of Ira Klein, "Plague was a savage cause of death and, equally, of social turmoil and conflict between State and populace" (Klein 1988). Rightly observed; yet bubonic plague outbreaks have also taught an ever-complacent humankind valuable lessons in public health, disease surveillance, and epidemic management, lessons we had better keep in mind even in the present day.
Cross-References ▶ Antimicrobial Resistance ▶ Emerging and Re-Emerging Diseases ▶ Epidemics
References
Andrianaivoarimanana, V., Piola, P., Wagner, D. M., Rakotomanana, F., Maheriniaina, V., Andrianalimanana, S., Chanteau, S., Rahalison, L., Ratsitorahina, M., & Rajerison, M. (2019). Trends of human plague, Madagascar, 1998–2016. Emerging Infectious Diseases, 25(2), 220–228.
Centers for Disease Control and Prevention. (1994a). International notes update: Human plague-India. Morbidity and Mortality Weekly Report, 43(41), 761–762. Retrieved from https://www.cdc.gov/mmwr/preview/mmwrhtml/00032992.htm
Centers for Disease Control and Prevention. (1994b). International notes update: Human plague-India. Morbidity and Mortality Weekly Report, 43(39), 722–723. Retrieved from https://stacks.cdc.gov/view/cdc/26892
Cohn, S. K., Jr. (2002). The Black Death: End of a paradigm. The American Historical Review, 107(3), 725.
Crawley, R. (2013). Thucydides: The history of the Peloponnesian War. Accessible at https://www.gutenberg.org/files/7142/7142-h/7142-h.htm
Derbes, V. (1966). De Mussis and the great plague of 1348: A forgotten episode of bacteriological warfare. Journal of the American Medical Association, 196(1), 59–62.
Dhanukar, S. A., & Hazra, A. (1994). Return of the ancient scourge. Science Reporter, 31(11), 21.
Independent. (2016). Cause of 1665 Great Plague of London confirmed through DNA testing. Retrieved from https://www.independent.co.uk/news/science/archaeology/plague-cause-discovered-great-1665-london-crossrail-dna-testing-don-walker-bubonic-a7231956.html
Klein, I. (1988). Plague, policy, and popular unrest in British India. Modern Asian Studies, 22(4), 723–755.
Mackenzie, D. (2001). Did bubonic plague really cause the Black Death? New Scientist. Accessible at https://www.newscientist.com/article/mg17223184-000-did-bubonic-plague-really-cause-the-black-death/#ixzz6GgxWqcE1
Panagiotakopulu, E. (2004). Pharaonic Egypt and the origins of plague. Journal of Biogeography, 31(2), 269–275. https://doi.org/10.1046/j.0305-0270.2003.01009.x
Panda, S. K. (1996). The 1994 plague epidemics of India: Molecular diagnosis and characterisation of Y. pestis isolates from Surat and Beed. Current Science, 71(10), 794–799.
Parasuraman, S., & Unnikrishnan, P. V. (2000). India disasters report: Towards a policy initiative (p. 291). New Delhi: Oxford University Press.
Park, K. (2000). Preventive and social medicine (16th ed., p. 220). Jabalpur: Banarasidas Bhanot.
Porter, S. (2009). The great plague (pp. 57–58). Stroud: Amberley Publishing.
Ramalingaswami, V. (1996). The plague outbreaks in India. Current Science, 71(10), 781.
Rao, M. (1994). Plague: The fourth horseman. Economic and Political Weekly, 29(42), 2720–2721.
Retief, F. P., & Cilliers, L. (2005). The epidemic of Justinian (AD 542): A prelude to the Middle Ages. Acta Theologica Supplementum, 7, 115–127.
Rosen, W. (2007). Justinian's flea: The first great plague and the end of the Roman empire. New York: Penguin.
Smith, C. A. (1997). Plague in the ancient world: A study from Thucydides to Justinian. The Student Historical
153 Journal 1996–1997, Loyola University, New Orleans. (28). Retrieved from, http://people.loyno.edu/~history/ journal/1996-7/documents/PlagueintheAncientWorld_ AStudyfromThucydidestoJustinian.pdf Solomon, T. (1995, June). Alexandre Yersin and the plague Bacillus. The Journal of Tropical Medicine and Hygiene, 98(3), 209–212. The National. (2013). 39 die in bubonic plague outbreak in Madagascar. Retrieved from https://www.thenational. ae/39-die-in-bubonic-plague-outbreak-in-madagascar1.319378 Wheelis, M. (2002). Biological warfare at the 1346 siege of Caffa. Emerging Infectious Diseases, 8(9), 971–975. https://doi.org/10.3201/eid0809.010536. WHO. (2017a). Plague, Madagascar: WHO disease outbreak news. Retrieved from https://www.who.int/csr/ don/27-november-2017-plague-madagascar/en/ WHO. (2017b). Plague: Key facts. Accessible at https:// www.who.int/news-room/fact-sheets/detail/plague
Further Reading Christopher, W. (1996). Plagues: Their origins. Flamingo: History and Future. Gage, K. L., & Kosoy, M. Y. (2005). Natural history of Plague: Perspectives from more than a century of research. The Annual Review of Entomology, 50, 505. https://doi.org/10.1146/annurev.ento.50.071803. 130337. Orent, W. (2004). Plague: The mysterious past and terrifying future of the World’s most deadly disease. New York: Free Press. Shannon, G. W., & Cromley, R. G. (2013). The great plague of London, 1665. Urban Geography, 1(3), 254. https://doi.org/10.2747/0272-3638.1.3.254.
Ceasefires
Robert A. Forster
Political Settlements Research Programme, Edinburgh Law School, University of Edinburgh, Edinburgh, UK
Keywords
Cessation of hostilities · Conflict resolution · Peace accord · Peace processes
Introduction
The term "ceasefire" is the antonym of the military expression "open fire" and signifies a call to terminate hostilities. Ceasefire agreements are regularly announced as part of a peace process and can suggest a level of commitment between warring parties to seek an end to armed conflict. Ceasefire periods can also be used as cover by groups to remobilize, rearm, and maneuver. A ceasefire can be announced unilaterally or follow an agreement between warring parties. Ceasefires can be verbal or written, and their terms can be public or secret. Third party mediation can lead to a ceasefire, or, alternatively, ceasefires can be imposed on parties by United Nations Security Council (UNSC) resolutions under Chapter VII of the United Nations Charter. The scope of ceasefires may be general and encompass an entire conflict zone and all parties
active in it, or the ceasefire can be specific, wherein the locations and actors are limited.
Nomenclature of Ceasefires
There is no commonly recognized or legal definition of the term "ceasefire," which came into popular use in media and government documentation in the post-Second World War era. In scholarly work, the outcomes of studies dictate the definition of a "ceasefire." As a result, definitions range from a break in fighting, to a specific conflict outcome, to a component of peace agreements, to a distinct agreement type (Åkebo 2016: 19). The Peace Agreement Access Tool (PA-X) (2018), consisting of over 1500 peace agreements, differentiates between ceasefire provisions contained within peace agreements, which can be signed at any point during a peace process, and ceasefire agreements, which have the primary purpose of limiting violence and often feature in the early stages of peace talks. A practical definition of a ceasefire agreement is a negotiated agreement that "defines the rules and modalities for conflict parties to stop fighting" (Chounet-Cambas 2011; Barsa et al. 2016). In practice, the term "ceasefire" overlaps with other terms such as "cessation of hostilities," "truce," and "armistice." The applied meaning of all these terms is to provide for a suspension of
hostilities between belligerent parties during armed conflict (Azarova and Blum 2012). Although used interchangeably, the terms "armistice," "ceasefire," "cessation of hostilities," and "truce" have varied meanings under international law (Wählisch 2015: 966). Of these terms, "truce" and "armistice" have long-standing precedent pertaining to interstate conflicts. The "white flag of truce" is a ubiquitous symbol for an immediate reprieve in hostilities on the battlefield to attend to the dead and wounded, to surrender, or to begin negotiations (Article 32, Hague Convention 1907). According to Article 36 of the 1907 Hague Conventions, an armistice "suspends military operations by mutual agreement between the belligerent parties," with the specific intention of negotiating a more permanent agreement. Another defining feature in the nomenclature is the longevity of a ceasefire, which can be either temporary or permanent (Barsa et al. 2016: 9). Truces are regularly preliminary and as such maintain a local scope that allows field commanders or other local actors to implement them for humanitarian purposes, civilian evacuation, or prisoner exchanges. Terms such as "armistice" and "cessation of hostilities," by contrast, regularly refer to more permanent arrangements, the latter being a less formal iteration of the former. In some conflict zones, war-weariness may lead to a cessation of hostilities (a de facto armistice) that puts an end to fighting but does not progress toward a political solution, leading to the formation of semipermanent boundaries between warring parties (Mac Ginty and Gormley-Heenan 2010). Cyprus and Korea are two prominent examples of such arrangements in the period following World War II. Other examples include the "frozen conflicts" in Ossetia, Abkhazia, Nagorno-Karabakh, and Moldova that arose after the dissolution of the Soviet Union.
Lastly, due to the political repercussions for the parties involved, ceasefires are often not referred to in those terms, but rather as "codes of conduct" or "humanitarian pauses," or even more generically as "joint statements," "memorandums," "declarations," or "peace accords." From 2008 to 2010, the Government of Nepal entered into separate ceasefire agreements with eight
different Nepalese rebel groups, but none of these agreements were referred to as ceasefires. In summary, "armistice," "ceasefire," "cessation of hostilities," and "truce" are regularly used interchangeably, and although their meanings have diverged from their original purposes, they all refer to uni- or multilateral agreements to suspend hostilities.
Purpose of Ceasefires
In addition to suspending hostilities, the purpose of a ceasefire agreement is defined by its scope, degree of inclusion, and the implementing actors. Preliminary ceasefires can be utilized in conflict situations as a momentary reprieve for humanitarian purposes, as confidence-building measures, or in response to pressure from third party governments, international organizations, or nongovernmental organizations. The Joint Understanding on Humanitarian Pause for Aceh, signed by the Indonesian Government and the Free Aceh Movement in April 2000, had the stated primary aim of delivering "humanitarian assistance to the population of Aceh affected by the conflict situation." Similarly, the purpose of the Local Ceasefire Agreement Mostar-Bijela signed on December 16, 1993, was to "allow free movement of humanitarian convoys of UNHCR or other international agencies escorted by [UN Protection Force] units." The opening of "humanitarian corridors," "peace zones," and other demilitarized areas is a regular feature, appearing in ceasefires signed in Bosnia, Burundi, Central African Republic, Democratic Republic of Congo, Guinea-Bissau, Indonesia, Mozambique, Nicaragua, Republic of Congo, Rwanda, Sierra Leone, South Sudan, Sudan, and Syria. Such mechanisms can be used to provide for the safe evacuation of civilians, wounded and surrendering troops, and access for humanitarian aid convoys and personnel, or to send in international monitors to ensure adherence to international humanitarian law. The success of such zones has been varied. Ceasefires can also be announced to mark festivals and religious occasions. On December 20, 1999, the Revolutionary Armed Forces of
Colombia – People's Army (FARC-EP) instituted a unilateral 20-day truce to "allow Colombians to celebrate the end of the year and the start of the new Millennium with their families and friends." More recently, an eight-hour ceasefire was attempted in the Filipino city of Marawi on Mindanao to allow for a brief respite during the Islamic festival of Eid al-Fitr on June 25, 2017. Preliminary ceasefires can lead to more permanent ceasefires connected to broader peace processes. Ceasefire agreements are often seen as a minimal step before belligerents can enter into negotiations and determine a pathway to a sustainable settlement (Mac Ginty 2006). The Agreement on Confirmation of Commitment to Ceasefire signed on July 27, 1994, between parties in the Nagorno-Karabakh conflict states that the purpose of the agreement is "to preserve the conditions for signing a comprehensive political agreement." However, governments are at times reluctant to recognize a ceasefire for fear of conferring legitimacy on rebel groups. Moreover, since peace processes ebb and flow and are rarely linear, a ceasefire is not always necessary. The Colombian peace process between the Colombian Government and the FARC-EP from 2012 to 2016 demonstrates how a peace process can continue without a ceasefire, which was only secured after the provision of greater political assurances with the signing of the Agreement on Bilateral and Final Ceasefire, End of Hostilities, and Surrender of Weapons on June 23, 2016. Similarly, the 1990–1996 peace process between the Government of Guatemala and the Unidad Revolucionaria Nacional Guatemalteca (URNG) did not include a ceasefire agreement until the Definitive Ceasefire was signed as the first of five substantial agreements in December 1996.
Before this took place, however, the URNG required greater assurances on matters related to human rights, indigenous rights, internally displaced persons, economic and land reform, as well as the strengthening of civilian control over the Guatemalan military. Nonetheless, ceasefires can serve as a suitable entry point for parties to enter negotiations (Chounet-Cambas 2011). Following the change in regime that threatened two-decade-long ceasefires
between the Myanmar Government and numerous ethnic militias, a new round of ceasefires was signed between the 11 remaining armed groups and state- or union-level government negotiation committees from 2011 onward (Oo 2014). Most of these ceasefires followed similar templates; however, initial concessions allowed for the inclusion of a greater range of items during the second round of negotiations. An initial ceasefire signed on December 2, 2011, between the Myanmar State-level Peace Committee and the Shan State Army-South (SSA-S) included provisions for the SSA-S to open liaison offices in towns and cooperate in narcotics prevention, as well as a commitment from both sides to cease fire. A later agreement signed on May 19, 2012, between the SSA-S and the Union-level Peace Committee broadened concessions to include guarantees by the state to preserve and promote Shan literature and culture, help SSA members "earn adequate means of livelihood," and set up a special industrial zone under SSA control. In summary, introducing political concessions into ceasefire agreements is not an uncommon practice among conflict mediators. Ceasefires are also used as confidence-building measures, as the absence of violence can function as an indicator of the level of commitment to peace by the parties involved (Fortna 2004). Starting in June 1993, Azerbaijani and Nagorno-Karabakh leaders instituted nine temporary ceasefires that lasted between 3 and 11 days. At first confined to the cities of Stepanakert and Agdam, the agreements were later expanded to become universal. With regional pressure, these temporary ceasefires created the necessary momentum for the signing of the Agreement on Confirmation of Commitment to Ceasefire in July 1994 and a further Agreement on Strengthening the Ceasefire in February 1995.
However, the Nagorno-Karabakh case also highlights how ceasefires can be detrimental to peace processes, in that they may alleviate the necessity of coming to a political settlement and lead to the normalization of a state of "no war, no peace." Ceasefires may also be enforced by the UNSC following a decision between parties to cease fire, wherein UNSC resolutions specify the modalities and supervision of the suspension of hostilities.
Instances include Resolution 687 in relation to the First Gulf War as well as Resolution 1701 regarding the 2006 Lebanon Conflict (Bell 2009).
Content of Ceasefires
All ceasefire agreements contain ceasefire provisions, but not all ceasefire provisions are found in ceasefire agreements. Beyond this, the content of ceasefire agreements is dictated by the immediate needs of the parties involved and third party pressures. A survey of the 267 ceasefire agreements located on the Peace Agreement Access Tool (PA-X) (2018) identifies 11 items, including ceasefire provisions, that are included in 33% of the ceasefires listed. These 11 categories address three main areas, namely, humanitarian needs, security, and mechanisms mitigating conflict escalation.
Humanitarian Provisions
Addressing humanitarian needs in ceasefires can be a result of third party pressure as well as a response to local needs. Humanitarian provisions can also be used as confidence-building measures between parties to overcome commitment problems and build trust. Most commonly, humanitarian provisions address the issues of access and reconstruction, facilitation of the return of internally displaced persons (IDPs) and refugees, and prisoner release.
Return of Refugees and Internally Displaced Persons
Provisions for the return of refugees and IDPs focus predominantly on facilitating and creating conditions favorable for the resettlement of an area following conflict. Within ceasefire agreements, the prevention of refugee return is regularly listed as a ceasefire violation, in addition to the endangerment of refugees by attacks on or near camps or the use of forcible displacement as a tactic of war. In an effort to create the necessary conditions, ceasefires provide for security guarantees, the granting of mobility and access to infrastructure to IDPs, the deployment of peacekeepers,
the restoration of systems of governance, the provision of humanitarian assistance, and the commencement of demining activities. The Guinea-Bissau Ceasefire Agreement of August 26, 1998, specifies the reopening of Osvaldo Airport to facilitate the return of refugees and the delivery of humanitarian aid. Liberia's Lomé Ceasefire Agreement of February 13, 1991, goes as far as to commit security escorts and means of transportation by the Economic Community of West African States Monitoring Group (ECOMOG) to facilitate IDP return. Political conditions may also be necessary to facilitate returns. For this purpose, the Agreement on the Principles for a Peaceful Settlement signed on July 21, 1992, retracts a ban on political parties to facilitate the return of political exiles. In addition to creating the necessary conditions, Joint Commissions consisting of warring and third parties have been created to help facilitate returns in conflicts as diverse as Abkhazia, Myanmar, and the Republic of Congo. In contrast, some ceasefires facilitate the evacuation of persons from conflict zones and their movement to refugee camps, such as the September 20, 2015, truce in Zabadani, Kafraiya, and Al-Fu'ah, or the February 7, 2014, truce in Homs.
Humanitarian Aid, Access, and Reconstruction
Humanitarian aid, access, and reconstruction regularly serve as another suitable entry point for identifying common aims between warring parties. Several ceasefire agreements iterate the trope that there "cannot be development without an end to the war." Ceasefires from Myanmar highlight this trend to various degrees. The 2015 Nationwide Ceasefire Agreement states that the parties must "collaborate to carry out relief and rescue efforts and provision of medical supplies in the case of a natural disaster causing an emergency situation in a ceasefire area." On the other hand, the 2013 8-Point Agreement between the Union Peacemaking Working Committee and the Karenni National Progressive Party (KNPP) provides that "the government and KNPP [are to] cooperate for regional development." To facilitate better living conditions, ceasefires often prescribe the formation of Joint
Commissions, as occurred in Aceh, Nicaragua, Somalia, and the former Yugoslavia. To facilitate better conditions in the short term, ceasefires often negotiate access for organizations such as nongovernmental organizations (NGOs), UN agencies, and peacekeeping troops, and facilitate movement by issuing necessary licenses and/or fast-tracking such organizations and civilians through checkpoints. Granting access can be universal or for specific locations. The 1993 Agreement on the cessation of hostilities in Bosnia and Herzegovina, for example, guarantees freedom of movement, but emphasizes the use of marked corridors. Attacking third party organizations generally constitutes a ceasefire violation, as in the 2010 Ceasefire Agreement between the Government of Sudan and the Liberation and Justice Movement or the 1991 Lomé Ceasefire. Alongside humanitarian aid, ceasefires can broker the transfer of captured facilities, as well as the restoration of public utilities such as water, electricity, and telecommunications, in addition to guaranteeing access to such services. In Bosnia, the General Agreement to Halt the Conflict in Bosnia-Herzegovina of June 15, 1993, lists the use of utilities as a weapon, i.e., the act of shutting off water, as a ceasefire violation. Ceasefires can also emphasize the normalization of commercial activities such as the movement of goods and people, fishing, farming, and trading, as highlighted by ceasefires in Sri Lanka and Sudan. Additionally, ceasefires can provide approaches to longer-term development or the stable functioning of education, transportation, and health-care infrastructure as a mutually beneficial concession between parties.
The February 22, 2002, Agreement on a Ceasefire between the Government of the Democratic Socialist Republic of Sri Lanka and the Liberation Tigers of Tamil Eelam provides that the parties cooperate to "facilitate the extension of the rail service on the Batticaloa-line to Welikanda." This example is context-specific, but some ceasefires emphasize that disruption of schools, universities, hospitals, health centers, and industrial enterprises on both sides constitutes a ceasefire violation, particularly in Myanmar.
Prisoner Release
Prisoner release is a confidence-building measure with low decision costs and is regularly provided for in ceasefire agreements after 1990. Prisoners can be prisoners of war, civilians, political prisoners, and hostages, and their release can be universal or subject to restrictions, such as whether the prisoner has criminal charges pending. The Nationwide Ceasefire Agreement signed by the Myanmar Government and the Ethnic Armed Organizations in 2015 committed to releasing only those charged under the Unlawful Associations Act. Eight ceasefires signed between the Nepalese Government and Nepalese rebel groups provided for prisoner release dependent on lists of prisoners submitted by each group and only after government investigations. Prisoner release can be unilateral, as occurred in the 24-hour ceasefire in Arsal, Lebanon, signed in August 2014; reciprocal, where both parties match the numbers of the other; or according to the "all for all" principle, as adopted in ceasefire agreements related to Chechnya and Abkhazia. Since 2014, the Syrian conflict has also seen the development of prisoner release based on the principle of "whitening the prisons," as seen in the points of truce with the People's Protection Units of April 24, 2014. This principle pertains to releasing combatants and political prisoners not affiliated with the Syrian regime or the Islamic State. Rarely, prisoner release is undertaken in parallel to an amnesty. In addition to the warring parties, prisoner release is commonly facilitated by a neutral third party. Between 1990 and 2015, the International Committee of the Red Cross aided in the processing of released prisoners in Bosnia, the Central African Republic, the Democratic Republic of Congo, Somalia, and South Sudan. The UN Protection Force, on the other hand, facilitated the release of prisoners under the 1995 Ceasefire Agreement for Bosnia and Herzegovina.
Security Provisions
Security provisions regularly define which acts constitute a ceasefire violation, in addition to mechanisms designed to avoid confrontation. Ceasefire violations fall broadly into two
categories: military activities and human rights violations. Military activities include the use of arms, mobilization or recruitment of troops, attacks, the manufacture or procurement of arms, revenge attacks, reconnaissance, and disguising military vehicles and personnel as civilians or humanitarian personnel. Human rights violations, on the other hand, have a greater scope and include not only violence against civilians – such as harassment, enslavement, torture, unjustified detention, the taking of hostages, displacement of civilians, the confiscation of land, sexual violence, and extrajudicial killing – but also the limitation of mobility through the use of checkpoints or other means, as well as the disruption of government procedures and performance, or the disruption of services or elections. Security provisions may also include a list of what constitutes an exception to a ceasefire violation, such as defensive acts, peacekeeping activities, and police actions, including preventative patrols and investigations to combat crime. Other security provisions raise the financial and physical costs of violating a ceasefire. Cost-raising provisions include the separation of forces, the use of demilitarized zones, the cantonment of forces, and, occasionally, a partial merger of forces (Fortna 2004). Full or partial disarmament, demobilization, and reintegration (DDR) provisions can also be included as part of a ceasefire agreement, such as the withdrawal of heavy weapons beyond firing range (25 km) or the placement of all heavy and medium weaponry under third party supervision. Other measures can be instituted as confidence-building measures or to facilitate implementation, such as an exchange of information regarding personnel, armaments, and prisoners, as well as opening channels of communication between parties, from high-level mediation delegations down to field commanders.
Security provisions may also include the handover of buildings or strategic infrastructure captured or garrisoned during combat.
Mitigating Conflict Escalation
To ensure enforcement of a ceasefire, agreements regularly outline monitoring or enforcement mechanisms, commonly incorporating aspects of power sharing or third party verification. Additional
support mechanisms can be created, such as Independent Fact Finding Committees, as agreed upon in Mindanao. Joint verification committees (JVCs), consisting of three or more members, often contain representatives from the warring parties alongside neutral observers from civil society, the international community, religious groups, and academia. JVCs can be placed at multiple levels of military command as well as in specific geographic zones. The mandate of JVCs includes the inspection of vehicles entering the conflict zone in addition to investigating ceasefire violations, noting violation details, and regularly issuing reports to ensure transparency. Other enforcement mechanisms include the creation of jointly staffed posts, patrols, and checkpoints. International or regional organizations or guarantor countries may send observers who participate in joint activities or who are embedded into chains of command as liaison officers. Peacekeepers regularly facilitate monitoring and verification activities when deployed.
Limitations on Rhetoric and the Media
In addition to ensuring implementation, ceasefire agreements often contain clauses aimed at avoiding the escalation of armed conflict, including communication transparency and recognition of rhetoric and the media as a potential source of conflict reignition. Over 50 ceasefires list "hostile propaganda" for "inflammatory purposes," "sedition," or "media war" by any other means as a ceasefire violation. The Brazzaville Agreement on Cessation of Hostilities signed in the Central African Republic on July 23, 2014, provides for the parties "to desist from all propaganda, and discourse of hatred and division based on religious, tribal or partisan allegiance; and to put an end to acts of intolerance and media campaigns liable to provoke religious or political confrontation." Territorial restrictions may apply. The Guatemalan 1996 Agreement on a Definitive Ceasefire permits propaganda and political activities within cantonment areas, whereas one ceasefire from Liberia prohibits hostile propaganda "within and outside the country." Other agreements place the onus on warring parties and bid them to "commit themselves to exercise the utmost restraint" to avoid hostile
statements. In addition, freedom of the press is encouraged, and some ceasefires grant rebel groups the right to communicate with the press or, in the case of the Shan State Army-South, to register their own media platform. In ceasefires from the Ossetia and Moldova conflicts, on the other hand, the parties committed to set up joint media platforms as a means of guaranteeing compliance.
Sui Generis Provisions in Ceasefires
Lastly, the content of ceasefires is defined by the needs of the stakeholders involved. As a result, ceasefires regularly contain clauses particular to the local context that fall outside of broad classifications. One such example is the inclusion of a provision to investigate the cause of a helicopter crash in the Minutes of Disengagement between the Areas of Washafanah and al-Zawiya signed on November 12, 2015, near Tripoli, Libya. The helicopter, likely brought down by one of the parties, crashed, killing several high-ranking militia commanders and catalyzing a return to conflict following several months of ceasefire in the area. Another context-specific example is the prohibition of goods such as penlight batteries and binoculars, as well as limitations on the movement of fuel and construction materials into northeast Sri Lanka, in the February 22, 2002, Agreement on a Ceasefire between the Government of Sri Lanka and the Liberation Tigers of Tamil Eelam.
Conclusion
Regardless of the terminology used, a ceasefire can be a provision or an agreement in which parties commit to suspending hostilities temporarily or permanently in international and civil conflicts. The content of ceasefires is determined by the immediate needs of those party to the agreement, as well as third parties. Items regularly touched upon in ceasefire agreements include provisions with humanitarian aims, provisions with security aims, and provisions outlining mechanisms that mitigate conflict escalation. Ceasefires also regularly include political provisions and commitments to continue negotiations between warring parties, but a ceasefire can also be issued as a stand-alone agreement amid a larger peace process.
Cross-References
▶ Conflict and Conflict Resolution
▶ Disarmament
▶ Humanitarian Assistance
▶ Hybrid Conflict and Wars
▶ Insurgents and Insurgency
▶ International Diplomacy
▶ Mediation
▶ Peace Agreements
▶ Peace and Reconciliation
▶ Refugees
▶ Role of the Media
Acknowledgments This is an output of the Political Settlements Research Programme, funded by the Department for International Development (DFID), UK. The views expressed and information contained herein are not necessarily those of or endorsed by DFID, which can accept no responsibility for such views or information or for any reliance placed on them.
References
Åkebo, M. (2016). Ceasefire agreements and peace processes: A comparative study. New York: Routledge.
Azarova, V., & Blum, I. (2012). Suspension of hostilities. In R. Wolfrum (Ed.), Max Planck encyclopedia of public international law. Oxford: Oxford University Press.
Barsa, M., Holt-Ivry, O., & Meuhlenbeck, A. (2016). Inclusive ceasefires: Women, gender, and a sustainable end to violence. Inclusive Security. https://www.inclusivesecurity.org/publication/inclusive-ceasefires-women-gender-sustainable-end-violence/. Accessed 1 Nov 2017.
Bell, C. (2009). Ceasefires. In R. Wolfrum (Ed.), Max Planck encyclopedia of public international law. Oxford: Oxford University Press.
Chounet-Cambas, L. (2011). Negotiating ceasefires: Dilemmas and options for mediators (Practice Series). Geneva: Centre for Humanitarian Dialogue.
International Conferences (The Hague). (1907). Hague convention (IV) respecting the laws and customs of war on land and its annex: Regulations concerning the laws and customs of war on land. http://www.refworld.org/docid/4374cae64.html. Accessed 4 Dec 2017.
Mac Ginty, R. (2006). No war, no peace: The rejuvenation of stalled peace processes. Houndmills: Palgrave Macmillan.
Mac Ginty, R., & Gormley-Heenan, C. (2010). Ceasefires and facilitation of ceasefires. In N. J. Young (Ed.), The Oxford international encyclopedia of peace. Oxford: Oxford University Press.
Oo, M. Z. (2014). Understanding Myanmar's peace process: Ceasefire agreements. Geneva: Swisspeace.
Peace Agreement Access Tool (PA-X). (2018). University of Edinburgh. www.peaceagreements.eu
Wählisch, M. (2015). Peace settlements and the prohibition on the use of force. In M. Weller (Ed.), The Oxford handbook of the use of force in international law (pp. 962–987). Oxford: Oxford University Press.
Further Reading
Example of a comprehensive ceasefire agreement: Ceasefire Agreement (Lusaka Agreement). (July 10, 1999). Movement for the Liberation of Congo-Democratic Republic of Congo. PA-X. https://www.peaceagreements.org/masterdocument/319. Accessed 15 Nov 2017.
Example of a truce: Damascus Truce I between Bayt Sahem and Babila. (January 15, 2014). Syrian opposition in Bayt Sahem and Babila-Syrian Government of Bashar al-Asad. PA-X. https://www.peaceagreements.org/masterdocument/1527. Accessed 15 Nov 2017.
Fortna, V. (2004). Peacetime: Cease-fires and the durability of peace. Princeton: Princeton University Press.
Haysom, N., & Hottinger, J. (2010). Do's and don'ts of sustainable ceasefire agreements. http://peacemaker.un.org/sites/peacemaker.un.org/files/DosAndDontofCeasefireAgreements_HaysomHottinger2010.pdf. Accessed 4 Nov 2017.
Public International Law and Policy Group. (2013). The ceasefire drafter's handbook: An introduction and template for negotiators, mediators, and stakeholders. New York: PILPG.
Smith, J. D. D. (1995). Stopping wars: Defining the obstacles to cease-fire. Boulder: Westview Press.
Wählisch, M. (2015). Peace settlements and the prohibition on the use of force. In M. Weller (Ed.), The Oxford handbook of the use of force in international law (pp. 962–987). Oxford: Oxford University Press.
Center of Reform on Economics (CORE) Indonesia
Chad Patrick Osorio
University of the Philippines, College of Law, Quezon City, Philippines
Keywords
Center of Reform on Economics · Nongovernment organizations · Economic research · Partnerships · Poverty alleviation · Human security
Introduction

The Center of Reform on Economics (CORE Indonesia) is a nongovernment economic research institution based in Jakarta, Indonesia. The acronym is also a play on the word "core," signaling the think tank's aim of solving economic problems at their root. CORE Indonesia's primary outputs are research-based, but it also offers related services to consulting clients, including public policy research, public policy education, public policy communication, regional development, industrial strategy, and business advisory. It focuses on the national economic issues affecting Indonesia while also taking into account international relations and the country's membership in the Association of Southeast Asian Nations (ASEAN) Economic Community.

CORE Indonesia was founded in 2011 by a group of Indonesian scholars and academics. Its founding rested on two realizations: first, that Indonesia's tremendous economic potential remained untapped, with its natural resources and strategic geographic location underutilized; and, second, that despite relatively stable economic growth even in the face of international financial crises, public welfare in the country had not significantly improved. In particular, the founders noted the wide disparity in development between the island of Java, where the capital Jakarta is situated, and the areas outside the seat of government, as well as the growing gap between the rich and the poor. These scholars sought to understand the core cause of the problem and, instead of merely criticizing the government, offered sound economic advice and possible solutions and strategies to handle the consequences of past problems, address immediate dilemmas, and prevent future complications arising from unaddressed, potentially problematic circumstances.
Organization Background

Challenges and Approaches

Indonesia, with more than 13,000 islands, is the world's largest island country. It is located mainly
in Southeast Asia, but it also retains territorial jurisdiction over certain islands in Oceania. Combining the land areas of its various islands, it is the world's 14th largest country; combining land and maritime areas, it ranks 7th. It is the 4th most populous country in the world, with a majority of its population adhering to Islam. Because of this geography, Indonesians are highly diverse, ethnically and culturally. At the same time, because access to government and other services varies across the different islands, educational attainment and employment opportunities are similarly uneven.

In CORE Indonesia's appraisal of existing economic policies in the country, many have been promulgated to meet only short-term objectives, with the sole purpose of enhancing the ruling administration's image. These policies and strategies do little to address the core cause which lies at the heart of the problem of inequality and the nondistribution of welfare. Research in economics, particularly in the subfields of development economics and macroeconomics, has to take such factors into account. However, such variance makes analysis more complicated and requires careful nuancing in order to flesh out properly applicable recommendations for economic policies and strategies.

Such is the approach taken by CORE Indonesia. By being aware of these elements at play in the economic and political climate of the various areas of Indonesia, the think tank is able to provide balanced yet independent opinions, as well as policy and strategy alternatives, on current and future economic issues. Key to CORE Indonesia's approach is seeking fruitful partnerships with fellow academics, state managers, and business actors. Doing so facilitates the gathering of statistical and economic data per region, as well as an understanding of the legal framework within which these economic policies and strategies are to be applied.
At the same time, CORE maintains high visibility in the various news networks in the country. From providing forecasts on international investments based on the national security situation to commenting on rice distribution and importation,
CORE Indonesia is a fixture in the national media (Investment Rises in Indonesia 2018; News Desk 2018). It also cultivates good relations with international government entities and organizations such as the ASEAN Foundation, the Australian Government, and the Asian Development Bank (ADB). With the ASEAN Foundation, CORE offered capacity-building seminars to ensure readiness for economic integration prior to 2015 (The Visit of CORE Indonesia to ASEAN Foundation 2013). It also co-authored a paper published by the ADB on industrial policy in Indonesia from a global value chain perspective.

The members of CORE are as diverse as their subjects. Its founder, Dr. Hendri Saparini, is a member of the board of experts of a number of Indonesian organizations, including the Sharia Economic Community (Masyarakat Ekonomi Sharia) and the Association of Islamic Scholars in Indonesia. She has received multiple awards, having been named "Young Economist of Indonesia" (2009, Megawati Institute), one of the "100 Young Leaders of Indonesia" (2008, Justice and Welfare Party/PKS), and one of the "100 Most Influential Women in Indonesia" (2012, Globe Indonesia Magazine). Its other members are equally noteworthy, including current Executive Director Dr. Mohammad Faisal, Commissioner Rachmat Basuki, and Research Director Dr. Piter Abdullah Redjalam.

Ultimately, these local, national, and international partnerships, constant media exposure, and continuous efforts to reach out to the young people of Indonesia and educate them about economics help ensure that Indonesia's economic problems take center stage and that the policies CORE Indonesia puts forward are given proper attention and action.
Conclusion

Undeniably, economic research is important in discussions of global security, whether of traditional security concerns or nontraditional security issues. Economics and security are intersecting sociological elements which
rely on each other, and a change in one necessitates a corresponding change in the other. A traditional security problem, such as the looming threat of an international conflict, or a nontraditional one, such as transnational crime or transboundary haze pollution, is bound to affect the economy of a given country. Externally oriented sectors of the economy are likely to suffer, including tourism and export trade, as well as foreign investment. Internally, economic productivity may also be burdened by similar concerns. Cross-border problems like terrorism are also likely to affect not only the country that is the target of the attack but also neighboring ones in the region, particularly those connected by land or with porous maritime borders, as in the case of Indonesia and its ASEAN neighbors, like the Philippines. These are but some of the many ways in which security issues affect economics.

At the same time, economics affects rising security concerns. Poverty debilitates a country's potential to participate in international trade, thereby isolating it from economic opportunities and growth (Sewell 2008). With less financial and human-resource leeway to prevent and address both traditional and nontraditional security concerns, developing countries are left more vulnerable to such security threats. Because of this, they may be unable to prevent transnational problems before they escalate (National Security Strategy 2000).

The Center of Reform on Economics, while not directly conducting research on security issues, provides an important viewpoint from which to consider these concerns, especially for a country such as Indonesia. Preventing economic collapse, or even just working toward improving the economic conditions of the 4th most populous country in the world, contributes to the global effort to make Indonesia a better state partner in international endeavors, especially for security and peace.
References

Investment Rises in Indonesia. (2018). Retrieved from http://www.centerofrisk-sia.com/investment-rises-in-indonesia-because-the-national-security-situation-inindonesia-still-stable/

National Security Strategy for a Global Age. (2000). Retrieved from https://history.defense.gov/LinkClick.aspx?fileticket=j62D2rp4uJs%3D&tabid=9115&portalid=70&mid=20231

News Desk. (2018, January 15). Unreliable data prompted govt to import rice: Analyst. The Jakarta Post. Retrieved from http://www.thejakartapost.com

Sewell, J. W. (2008). Poverty: Combating the global crisis. Better World Campaign. Retrieved from http://www.globalproblems-globalsolutions-files.org/gpgs_files/pdf/bwc/2008_sewell_poverty.pdf

The Visit of CORE Indonesia to ASEAN Foundation. (2013, November 11). ASEAN Foundation. Retrieved from http://aseanfoundation.org/newsroom/the-visit-of-core-indonesia-to-asean-foundation
Further Readings

Center of Reform on Economics. (n.d.). Center of Reform on Economics home page. Retrieved from http://www.coreindonesia.org/

Counting the Cost. (2002). The Economist. Retrieved from https://www.economist.com/news/2002/10/17/counting-the-cost
Child Soldiers

The Child Soldier in History

Mary Manjikian
Regent University, Virginia Beach, VA, USA

Keywords
Conflict · Rehabilitation · Recruitment · Children
Introduction

What is a child soldier, and why is child soldiering wrong? Historically, children have been identified as participants in conflicts – from the medieval Children's Crusade, to the American Revolutionary War, and up to the twentieth century, when they defended the Soviet Union from German invaders during World War Two and fought in the Communist uprising in China. Today, data suggest that as many as 300,000 children may be involved in the conduct of
armed activities, serving in either direct combat or support roles. Child soldiers have been identified in 50 countries and nearly 60 nongovernmental organizations – including terrorist cells (Child Soldiers International 2017). Among this number, some 40% of child soldiers are currently found within African nations (Dudenhoefer 2016).

Today, most observers would agree that there is something uniquely horrifying about seeing a child participate in armed hostilities. However, the notion of children as innocent souls in need of protection is actually a relatively recent idea, generally regarded as Western in origin. Bernstein (2011) argues that the notion of children as innocent can only be traced back 300 years, to the 1700s, and that the notion has always been racialized. She suggests that a certain type of child – namely, a Western, white child – was regarded as innocent and needing to be protected from labor, military activities, and exposure to certain types of knowledge (Keches 2010). In contrast, other analysts note that, particularly in Africa, a "warrior culture" has always existed. Even today, in some African cultures, coming-of-age rituals often involve warlike activities, and young men may thus conceive of themselves as having a responsibility to engage in violence to protect their village and livestock herds (Burke and Hatcher-Moore 2017). In unstable parts of the world where people may exist in a state of constant low-level conflict, the borders between peacetime and wartime, and between civilians and soldiers, may not be as clear-cut. Thus, child soldiers have been identified in Rwanda, Burundi, the Central African Republic, South Sudan, Kenya, and Uganda, where they have been inducted to fight as part of the Lord's Resistance Army.
On the international level today, there is an emerging consensus that there is some category of people defined as "children" who should be protected from participation in direct armed hostilities – although nations and nongovernmental organizations may disagree on the age at which one formally leaves childhood behind and becomes an "adult." At the same time, there are
still many aspects of the child soldier controversy where there is no consensus – including the question of whether or not children can ever be regarded as ethically or legally culpable for their participation in armed hostilities. That is, the parties disagree as to whether child soldiers are always victims and always pawns or whether they can legitimately exercise their own agency to participate in armed conflict.
Types of Child Soldiers

In considering the status of child soldiers, analysts often distinguish between children who "enlist" on their own; children who are enlisted by their parents; children who are kidnapped by armed groups; and children who have been abandoned or orphaned and are taken into armed groups to be trained as soldiers. Dudenhoefer suggests that children may be drawn into a conflict, particularly in Africa, due to a number of factors. As African states lost the military support they had once been granted by the Soviet Union, which had territorial ambitions in Africa, they may have sought to fill the gap in their fighting forces by abducting or recruiting children as relatively cheap warriors. In addition, children may be recruited due to economic circumstances caused by wartime, such as food shortages or a lack of housing, and they may become available for mobilization when schools cease to function.

The term "soldiering" is also an incomplete description of the situation, since the United Nations notes that children drawn into conflict include both boys and girls, and that female children in particular may be trafficked and used for sex or child marriage by those engaged in conflict (UNICEF 2011). Thus, the UN discourages the use of the term "child soldier" and instead defines children associated with armed forces or conflict as "any person under the age of 18 years of age who is part of any kind of regular or irregular armed force or armed group in any capacity."
Preventing the Use of Child Soldiers

In the United States, the Department of State regards child soldiers who are forcibly recruited (i.e., captured or kidnapped and forced into soldiering) as victims of human trafficking, and figures are kept on forcible recruitment along with statistics on activities such as child labor and child prostitution, which might occur in the context of armed conflict (US Dept. of State 2017). The annual trafficking in persons report issued by the Department of State includes statistics on conflicts in which child soldiers are being utilized. The United States has attempted to sanction nations that use child soldiers by preventing them from receiving US foreign military aid (Atwood 2017).

In addition, customary international law includes provisions against the use of child soldiers, and within the context of just war ethics, the use of child soldiers has historically been problematic. The ethics of just war spell out the conditions under which a state is lawfully and ethically justified in instigating conflict (through the principles of Jus ad Bellum) and the conditions which a state must follow during conflict (through the principles of Jus in Bello). Many of these principles have been codified in international humanitarian law, having come into existence through the development of norms over the past several hundred years. Jus ad Bellum principles state that a country should not enter into a conflict if it has less than a "reasonable chance of success." That is, a country should not continue to fight once it is obvious that it is outmanned and outgunned but should instead surrender in order to end the conflict and create a peaceful solution. Doing so saves lives and prevents the senseless slaughter of humans (Internet Encyclopedia of Philosophy 2018).
However, we can identify many instances in which a nation has instead decided to "hold out to the last man" and, when conventional, adult soldiers became scarce, has turned to drafting children as young as 8 or 9, sending them into a war they are unlikely to win. Iran, for example, has been faulted for having contributed child soldiers to the Iran-Iraq War in the 1980s and, more
recently, for supplying child soldiers to fight in the Syrian conflict (Far 2017). Allowing children to fight in a conflict thus violates Jus in Bello principles, while recruiting child soldiers because adult soldiers are unavailable violates Jus ad Bellum principles.

However, the implementation of specific legislation and policies based on this law is complicated by differences in the interpretation of these provisions. First, there is no international consensus on what constitutes the age of majority, the point at which one ceases to be a child and becomes an adult. Here, Fox (2005) notes the existence of a two-tiered division, which distinguishes between those who are under age 15 and those who are under age 18. The distinction appears in the United Nations Convention on the Rights of the Child: the Convention defines a child as ". . . every human being below the age of eighteen years of age" in Article 1 but also describes the duty of states to refrain from recruiting anyone below the age of 15 into their armed forces in Article 38 (UN 1989). Activists have attempted to address this discrepancy through a campaign called "Zero Under 18," which seeks to alter the law so that no one under the age of 18 can be recruited into a military-type role (UN 2000).
Legal Culpability of Child Soldiers

In determining one's culpability in a conflict, as well as one's claim to protection, current international law focuses largely on the distinction between civilians and military personnel and does not make an exception for children participating in military action. Nor does it address the circumstances under which children may have been recruited into armed conflict (Bosch 2015). However, Rossi has suggested that child soldiers should be eligible for a specific "refugee status" and that they should be allowed to invoke an "infancy defense" when tried for acts of war committed as children. She argues that there is a general principle that one cannot be convicted if he or she did not understand the importance of the
consequences of his or her actions, and that children should be allowed to use the assumed innocence of youth to advance the claim that they did not understand the consequences of their actions (Rossi 2013). In addition, one can advance the claim that children who inflicted harm upon others may have been coerced into doing so (Grover 2008).

In 2016, the International Criminal Court (ICC) began hearing the case of The Prosecutor v. Dominic Ongwen (ICC-02/04-01/15). At the request of the Ugandan government, international legal proceedings were brought against Dominic Ongwen, a former child soldier in the Lord's Resistance Army (LRA) who became a commander in the LRA. The international community accused Ongwen of war crimes and crimes against humanity. Ongwen pleaded not guilty, in part on the grounds that he had been abducted from his village at the age of 14 and forcibly enlisted into the LRA, although he stayed with the organization for several years, including as an adult (ICC 2018). The case is ongoing as of 2018.
Rehabilitating and Caring for Child Soldiers

International organizations such as the United Nations, as well as advocacy groups such as Child Soldiers International, recognize the responsibility to help reintegrate former child soldiers back into their societies. However, these efforts face challenges: in many countries, former child soldiers have been detained or imprisoned because they are still viewed as dangerous to society (UNICEF 2011). Currently, in many locations, children, some now nearing adulthood, remain in administrative detention, including at the US prison at Guantanamo Bay and in Afghanistan, as illegal combatants in the "War on Terror." Hamilton et al. (2011) suggest that such detentions may be necessary on security grounds during the conduct of a conflict, but that administrative detention should not be used as a substitute for a legal trial of child combatants for their actions, nor should detention be ongoing or long term, and the tenets of the UN's Convention on the Rights of the Child
should be respected during administrative detention of anyone who is a child.
The Child Soldier in Literature and Film

In recent years, there has been increased interest in the phenomenon of child soldiering, due largely to the release of several autobiographies by former child soldiers. In A Long Way Gone: Memoirs of a Child Soldier, Ishmael Beah describes how he became a child soldier in the government army of Sierra Leone at age 13, while in War Child: A Child Soldier's Story, Emmanuel Jal tells the story of how he joined the Christian Sudanese Liberation Army at the age of nine. Both stories help to illuminate the circumstances under which children can be drawn into conflict, as well as the ways in which war can damage and shape their psyches. In 2015, Netflix released the film "Beasts of No Nation," directed by Cary Fukunaga. The film takes place in an unnamed African country and relays the story of a boy named Agu who is adopted into a national liberation army led by a charismatic commander. In addition, the nongovernmental organization Invisible Children ran a social media campaign calling for the capture of Joseph Kony, the elusive commander of the Lord's Resistance Army in Uganda, who is said to have recruited over 30,000 children into active combat. These graphic portrayals of the plight of children trained to participate in acts of war have raised public awareness of the issue, including the scope of the problem and the controversies associated with it.
Conclusion

Defining both childhood and the child soldier continues to be challenging, particularly given how differently childhood is understood in Western versus non-Western contexts. In addition, significant disputes still exist regarding how responsibility and culpability for their acts (legally, morally, and psychologically) may be
assigned to those who were children when such acts were committed. Although the issue of child soldiering has received more attention in the media and press as of late, the high profile of the issue has not led to the creation of significant legislation or a unified approach to tackling this problem.
Cross-References

▶ Critical Security Studies
▶ Failed States
References

Atwood, K. (2017). Rex Tillerson makes unilateral determination on child soldiers. CBS News. Available at https://www.cbsnews.com/news/text-tillerson-makes-unilateral-determination-on-child-soldiers/

Bernstein, R. (2011). Racial innocence: Performing American childhood from slavery to civil rights. New York: NYU Press.

Bosch, S. (2015). A legal analysis of how the International Committee of the Red Cross's interpretation of the revolving door phenomenon applies in the case of Africa's child soldiers. African Security Review, 24(1), 3–22.

Burke, J., & Hatcher-Moore, P. (2017, July 24). "If you are old enough to carry a gun, you are old enough to be a soldier." The Guardian. Available at https://www.theguardian.com/global-development/201/

Dudenhoefer, A.-L. (2016, August 16). Understanding the recruitment of child soldiers in Africa. ACCORD (African Center for Constructive Resolution of Disputes). Available at http://www.accord.org.za/conflict-trends/understanding-recruitment-child-soldiers

Far, T. S. (2017, November 30). Iran's child soldiers in Syria. Human Rights Watch. Available at www.hrw.org/news/2017/11/30/irans-child-soldiers-syria

Fox, M. J. (2005). Child soldiers and international law: Patchwork gains and conceptual debates. Human Rights Review, 7(1), 27–48.

Grover, S. (2008). Child soldiers as 'non-combatants': The inapplicability of the refugee convention exclusion clause. The International Journal of Human Rights, 12(1), 53–65.

Hamilton, C., Anderson, K., Barnes, R., & Dorling, K. (2011). Administrative detention of children: A global report. New York: UNICEF and Children's Legal Center, University of Essex (UK). Available at https://www.unicef.org/protection/Administrative-detention-discussion_paper-April2011.pdf

International Criminal Court. (2018). Ongwen case: Situation in Uganda. The Prosecutor v. Dominic Ongwen, ICC-02/04-01/15. Ongwen case update. The Hague: International Criminal Court (ICC). Available at https://www.icc-cpi.int/uganda/ongwen

Internet Encyclopedia of Philosophy. (2018). Just war theory. Available at http://www.iep.utm.edu/justwar/

Keches, K. A. (2010, April 29). The invention of childhood innocence: Professor says concept only dates to 19th century, and only applied to whites. The Harvard Gazette. Available at https://news.harvard.edu/gazette/story/2010/04/the-invention-of-childhood-innocence/

Rossi, E. (2013). A 'special track' for former child soldiers: Enacting a 'child soldier visa' as an alternative to asylum protection. Berkeley Journal of International Law, 31(2), 392–460.

UN. (2000). Optional protocol to the convention on the rights of the child on the involvement of children in armed conflict. New York: United Nations. Available at https://childrenarmedconflict.un.org/mandate/optional-protocol

United Nations. (1989). UN Convention on the Rights of the Child. New York: UN Committee on the Rights of the Child (CRC). Available at http://www.ohchr.org/EN/ProfessionalInterest/Pages/CRC.aspx

UNICEF. (2011). Child recruitment by armed forces or armed groups. New York: UNICEF. Available at https://www.unicef.org/protection/57929_58007.html

United States Department of State. (2017). Trafficking in Persons 2017 Report. Washington, DC: US Department of State. Available at https://www.state.gov/j/tip/rls/tiprpt/2017/
Further Reading

Child Soldiers International website. (2017). https://www.child-soldiers.org/

Briggs, B. (2017). 10 countries where child soldiers are still recruited in armed conflicts. ReliefWeb. Available at https://reliefweb.int/report/central-african-republic/10-countries-where-child-soldiers-are-still-recruited-armed

Invisible Children website. (2020). https://invisiblechildren.com/

Beah, I. (2008). A long way gone: Memoirs of a child soldier. New York: Sarah Crichton Books.

Jal, E. (2010). War child: A child soldier's story. New York: St. Martin's Griffin.
Cholera

Jonathan Kennedy
Barts and the London School of Medicine and Dentistry, Queen Mary University of London, London, UK

Keywords
Cholera · Inequality · Colonialism · Europe · India · Global South
Introduction

Cholera is a bacterial infection transmitted in water or food contaminated with a carrier's excreta. It enters the body through the mouth and the digestive system. The worst affected people lose a quarter of the body's fluid through violent diarrhea and vomiting. This results in severe dehydration, often turning the skin bluish-grey. Without treatment, cholera can kill in a matter of hours.

There are references to what is thought to be cholera in Sanskrit texts dating back to the fifth century BCE (Harris et al. 2012). However, cholera first spread from its endemic haunts in the Ganges Delta when the East India Company army invaded the Maratha Empire in 1817. From there, cholera travelled via trade routes to the rest of the world. Over the past two centuries, there have been seven pandemics that have resulted in tens of millions of deaths (Lee and Dodgson 2000). The first pandemic (1817–1823) reached China, Japan, Southeast Asia, East Africa, and the Near East but did not make it to Europe or North America. The second (1826–1837) killed hundreds of thousands in Russia and affected major cities in Europe and the United States. The third (1841–1859) impacted the same areas as well as parts of Latin America. The fourth wave (1863–1875) had the greatest geographical reach, impacting Europe and large parts of the Americas, Africa, China, Japan, and Southeast Asia. The fifth (1881–1896) was more limited, but once again affected much of Europe. The sixth pandemic (1899–1923) was largely confined to Asia and did not reach Western Europe or the Americas. Since 1961, the world has been in the midst of the seventh pandemic, which has hardly touched high-income countries but continues to infect about 2.9 million people and kill 95,000 a year, predominantly in sub-Saharan Africa and South Asia (Ali et al. 2015: 1).
Cholera is particularly interesting to historians and social scientists because it reflects social, economic, and political problems in the societies that it affects. As David Arnold (1986: 151) notes: Like any other disease, cholera has in itself no meaning: it is a micro-organism. It acquires meaning and significance from human context, from the
ways in which it infiltrates the lives of people, from the reactions it provokes, and from the manner in which it gives expression to cultural and political values.
This entry focuses on the link between cholera and security. It considers how military interventions facilitated the disease's spread, from British India in the nineteenth century to contemporary Yemen. It outlines how fears that cholera threatened political stability and economic growth led to sanitary reforms and the disappearance of cholera in nineteenth-century Europe, but also how cholera remains endemic in large parts of sub-Saharan Africa and South Asia precisely because it does not threaten the political or economic interests of domestic elites or high-income countries. This entry is organized into three sections. The first section, "Cholera and Colonialism in British India," focuses on the relationship between colonial conquest and cholera in British India. The second section, "Cholera, Political Upheaval, and Social Unrest in Europe," investigates how war and industrialization facilitated the spread of cholera in Europe in the nineteenth century and then explores how outbreaks frequently led to social unrest. The third section, "Cholera in the Contemporary Global South," describes the distribution of cholera in the world today, showing that the disease disproportionately affects the most vulnerable people in the poorest countries. It explores how major recent outbreaks in Haiti, Zimbabwe, and Yemen were precipitated by natural disasters, political crises, and wars. We conclude by considering whether the WHO's recently stated aim to reduce cholera's prevalence by 90% by 2030 is realistic.
Cholera and Colonialism in British India

Arnold (1986: 123) points out that there is a "literal correspondence between cholera and armed conflict in early colonial India" (also see Harrison 2019). The first pandemic occurred at a time of British military expansion. While the East India Company had been present in Bengal since the mid-1700s, it was only at the end of the century that it began to extend its power to the rest of the subcontinent. The crowded, insanitary, and
unhygienic conditions in barracks and camps, together with the movement of troops from endemic areas in Bengal to the rest of India, facilitated the spread of cholera. For example, the arrival of cholera in western India coincided with the British war against the Maratha Empire (1817–1818). Similarly, the First War of Indian Independence (1857–1858) resulted in a surge in cholera cases as British troops moved across the country to put down the uprising. Cholera was a serious problem for the British military in India: it was responsible for the deaths of over 8,500 British soldiers between 1818 and 1854, and one-third of all troop fatalities between 1859 and 1867 (Arnold 1986: 127). It is notable that the disease had a much more severe impact on rank-and-file soldiers than on officers. However, these numbers were small when compared to civilian deaths, with cholera killing an estimated 33–38 million people in British India between 1817 and 1947 (Arnold 1986: 120).

Cholera did sometimes strike elites – for example, Sir Thomas Munro, the governor of Madras, died from the disease in 1827 – but nonmilitary Europeans and upper-class Indians were spared the worst because of their better diets and living conditions. Cholera had an especially strong impact on the poor (Arnold 1986; Harrison 2019). In the first half of the nineteenth century, it was noted that cholera disproportionately affected urban slum-dwellers (Jameson 1820). Sanitary reform in major cities in the second half of the century – in Calcutta, for example, the construction of a new sewer system in 1865 and a filtered water supply in 1869 – led to a marked fall in the number of cholera cases in cities. Unlike in Europe, cholera in South Asia then became a disease that primarily affected the rural poor, particularly at times of famine. For example, cholera killed over two million people in Madras Presidency during the Guntur famine (1833) (Arnold 1986: 125).
There was a similar coincidence of cholera and famine in Madras in 1866 and 1877, and in Bombay in 1877 and 1900. The relative absence of extreme deprivation in the early twentieth century led to a fall in cholera mortality, but it increased again during the Bengal famine (1943–1944) (Arnold 1986). It must be noted that famines in India were a direct
consequence of apathy and inaction on the part of the colonial administration (Sen 1982). More generally, the poverty that created welcoming conditions for cholera was, to a large extent, created by British colonial policy, which decimated the Indian economy. India accounted for 24.4% of the world’s economic output in 1700, 16.0% in 1820, but only 4.2% just after independence in 1947 (Maddison 2007: 263). In the 1820s and 1830s, colonial administrators were concerned that cholera might generate anger and resistance to British rule (Arnold 1986; Harrison 2019). But in the absence of concerted efforts by the state to stop the spread of the disease – as occurred in Europe – these fears did not materialize. Rather, cholera had a similar impact on the Indian population in the nineteenth century as infectious diseases such as measles had on the Aztec and Inca empires of the sixteenth century. Indigenous explanations for the epidemics held the British colonizers responsible for breaking Hindu taboos or disrupting the Hindu cosmos, but the consequences of this divine wrath were understood to be borne by the indigenous population. There was a widespread belief in Northern India that the first epidemic occurred after cows were killed to feed British troops in a grove that was sacred to the son of a former Raja called Hurdoul Lal. The disease was seen as a manifestation of the gods’ anger and, whenever it resurfaced, Hurdoul Lal was worshipped in order to appease them (Arnold 1986). For the colonizers, cholera became “a convenient symbol for much that the west feared or despised about a society so different from its own” (Arnold 1986: 138). The early epidemics coincided with the arrival of Christian missionaries in British India, whose vilification of Hindu beliefs and practices had a deep and long-lasting impact on European thought.
The supposed link between Hindu pilgrimage and cholera was a particular concern, especially the 12-yearly Kumbh Mela festival, which involved ritual immersion in a sacred tank or river and the sipping of water. For example, the 1866 International Sanitary Conference declared India’s pilgrimages “the most powerful of all the causes which conduce to the development and propagation of cholera
epidemics” (quoted in Arnold 1986: 141). The Hajj, the Muslim pilgrimage to Mecca, was seen as the next leg in cholera’s journey from Asia to Europe (Low 2008). Such arguments overlook the fact that the Kumbh Mela and the Hajj predated the cholera pandemics by hundreds of years. They also ignore the important role that European military and economic activities played in the global spread of cholera. Nevertheless, such ahistorical, depoliticizing, and Orientalist tropes still influence contemporary global health (Kennedy 2016).
Cholera, Political Upheaval, and Social Unrest in Europe

Cholera arrived in Europe in the early 1830s via trade routes. The expansion of colonialism, improvements in transport, and the growth in the movement of people and goods between Europe and its colonies all contributed to make this possible (Lee and Dodgson 2000). Cholera outbreaks in Europe occurred at times of war, revolution, and political upheaval (Evans 1988). The disease first appeared on the continent while the effects of the 1830 revolutions were still being felt. Carl von Clausewitz died from it in 1831 while leading the Prussian army’s efforts to construct a cordon sanitaire. Cholera reached Britain in 1832, the year of the Great Reform Act. The next and most devastating cholera epidemic in Europe occurred in 1848, a year of revolutions across the continent. Further outbreaks coincided with the Crimean War (1854–1855), the dissolution of the German Confederation and the loss of independence of a number of German states after Prussia defeated Austria (1866), the overthrow of the French Second Empire (1871), and disturbances in the Polish regions of the Romanov Empire (1892). As in India, the mass movement of troops and civilians that these events involved enabled cholera to spread quickly across Europe. In Western Europe, the crowded and insanitary conditions of rapidly growing towns and cities provided fertile breeding grounds for cholera (Evans 1988; Lee and Dodgson 2000). Although data on the socioeconomic distribution of cholera
is limited, it is widely agreed that the urban poor were disproportionately affected because they were more likely to live in crowded, unhygienic, and insanitary conditions, and to work in occupations that brought them into contact with contaminated water. For example, in the middle of the nineteenth century, cholera mortality rates were almost 13 times higher in Rotherhithe, a working-class part of London, than in wealthy St. James, Westminster (205 cholera deaths per 10,000 people in 1849 versus 16) (Smith 1979: 231). Across Europe, cholera outbreaks were accompanied by public anger and riots, particularly in the 1830s – although cholera-related unrest persisted in Russia into the 1890s (Evans 1988; Cohn 2017). It is worth considering why this unrest took place and whom it targeted. In medieval Europe, various outsider groups were blamed for outbreaks of infectious disease (Ginzburg 2004). During the Black Death in the mid-fourteenth century, for example, it was rumored that the mass fatalities caused by bubonic plague were, in fact, the result of Jews poisoning wells. This rumor resulted in widespread pogroms. However, such outsider groups were not the target of public anger during cholera epidemics (Evans 1988; Cohn 2017). In the nineteenth century, popular anger was instead focused on the state and the medical profession (Evans 1988). In an interesting analogy, the state and the medical profession are once again the focus of suspicions about vaccine safety in contemporary Western Europe (Kennedy 2019). Cities in the mid-nineteenth century were characterized by stark inequalities between the bourgeoisie and the working classes who toiled in their factories. Cholera brought latent social tensions to the surface (Evans 1988). The public did not believe that the death and suffering they were witnessing were caused by a new disease.
Across Europe, it was widely held that the state and the medical authorities were poisoning the poor in order to reduce their numbers. In the UK, especially in the 1830s – when the notorious case of Burke and Hare (1828), who murdered 16 people in order to sell their corpses to the Medical School at the University of Edinburgh, was at the forefront of the public consciousness – doctors were accused
of killing the poor in order to use their bodies for research (Evans 1988). Just as they had during the Black Death in the fourteenth century, the state and medical authorities used cordons sanitaires, quarantine, isolation, and rapid disposal of bodies. However, the effect of such policies was far more intrusive because the “infrastructural power” of the state had increased markedly in the intervening centuries (Mann 1984). Restrictions on movement were unpopular because they prevented people from fleeing the outbreak and separated goods from markets, which led to steep increases in food prices (Evans 1988). Taking away corpses also caused anger because it violated customs of mourning and burial. The specific target of the public’s anger varied according to the political and economic system in each country (Evans 1988). In areas where feudal structures remained – Russia, Austria-Hungary, and parts of Prussia until the mid-nineteenth century – the nobility were attacked. For example, between June and September 1831, Hungarian castles were sacked and nobles killed. Soldiers and police were the focus of popular anger in most of Europe. In Britain, where doctors played more of a role in implementing the emergency measures, physicians faced the brunt of the attacks. In the second half of the nineteenth century, cholera cases in Western Europe fell markedly and, from the late 1890s onwards, the disease hardly had any impact on the continent. This was a result of intervention by increasingly powerful and well-resourced states to provide cities with clean water and effective sewage systems (Evans 1988). Several factors coalesced in the second half of the nineteenth century to make this possible. First, scientific advances made an important contribution. When cholera arrived in Europe, it was widely believed that it spread through exposure to bad air.
In 1854, John Snow demonstrated that it was a waterborne disease by tracing the source of an outbreak to a pump on Broad Street in Soho, central London. While Snow’s ideas were not immediately accepted, they came to provide the scientific justification for sanitary reforms. Second, there was growing consensus that something needed to be done to improve water supplies and sanitation
in cities. Christopher Bayly (1994: 26) neatly captures how cholera outbreaks shocked political elites: “It seemed as if the horrid filth and turbulence of the Orient had infected the seamy underworld of the European city.” This horror precipitated broad support from political and economic elites for state intervention to improve living conditions of the urban poor. Those of a progressive bent were motivated by compassion and solidarity. Conservatives were concerned that unhealthy plebs made neither productive workers nor good soldiers but might be tempted by the promise of revolution. Inequality in Western European cities peaked in the 1860s and from then on living and working conditions of the urban poor steadily improved (Piketty 2014). Third, where outbreaks did occur in the second half of the nineteenth century, the state’s response tended to be less contested than earlier efforts. This was in part because the state was now powerful enough to put down resistance, especially after the creation of professional police forces following the 1848 revolutions (Evans 1988). The advancement in scientific understanding of cholera transmission, combined with improvements in mass education, also contributed to the public’s greater level of acceptance. It is interesting to note that cholera outbreaks and the fear that they created led to the first efforts by national governments to cooperate on issues related to health (Birn et al. 2009). Similar to global health efforts today, the main concern was not the death and suffering of millions of people but the threat that infectious disease posed to political stability and economic prosperity in rich countries. Restrictions on movement of people and goods interfered with free trade and the first of a series of international sanitary conferences was organized in 1851 with the aim of reaching an international agreement on measures to control the spread of cholera, plague, and yellow fever. 
The conferences were largely ineffective because the British in particular consistently opposed any regulations that might hinder their economic interests. The conferences’ most marked impact was the decision in 1903 to set up L’Office International d’Hygiène Publique “to collect
and bring to the knowledge of the participating states the facts and documents of a general character which relate to public health and especially as regards infectious diseases, notably cholera” (Birn et al. 2009: 49). The “Paris Office,” the first international health organization, opened in 1909.
Cholera in the Contemporary Global South

It is easy to prevent and treat cholera. People do not contract cholera where they have access to clean water, sanitation, and vaccines. If contracted, it can be effectively treated with antibiotics and oral rehydration therapy. Nevertheless, while cholera has been absent from Western Europe and North America since the late nineteenth century, there are still 2.9 million cases and 95,000 deaths a year worldwide, predominantly in Sub-Saharan Africa and South Asia (Ali et al. 2015: 2). Cholera’s distribution must be understood in the context of the unequal distribution of economic and political power between high- and low-income countries (Lee and Dodgson 2000). This reflects the fact that by the 1890s, class-based inequality within countries had been surpassed by inequality between countries (Van Zanden et al. 2014). Cholera is endemic in areas where water and sanitation systems are inadequate. Around 844 million people do not have access to a basic drinking water source, more than 2 billion drink water from sources that are contaminated with feces, and 2.4 billion do not have basic sanitation facilities (Global Task Force on Cholera Control 2017: 7). The continued prevalence of cholera is a consequence of a political and economic system that fails to provide an enormous number of people in low-income countries with the basic resources needed to live healthy lives. A full explanation for why this is the case is beyond the scope of this entry, but important factors include the pro-market policies that high-income countries have forced on low-income countries through structural adjustment, which limits the state’s capacity to fund healthcare, hollows out the capacity of the
state, and undermines the broader determinants of health (Farmer 2004). Over the past decade, several acute cholera outbreaks have occurred where water and sanitation systems have been destroyed by man-made disasters, or by natural disasters made worse by an inadequate humanitarian response. In 2008–2009, a cholera outbreak in Zimbabwe resulted in 100,000 cases and nearly 5000 fatalities – the biggest recorded in African history (Chigudu 2019: 1). The epidemic was preceded by a profound political and economic crisis: between 2000 and 2008, Zimbabwe’s GDP nearly halved – the most extreme contraction of any peacetime economy (Pushak and Briceño-Garmendia 2011: 3). One of the consequences was the breakdown of water and sanitation systems, especially in densely populated townships in Harare (Chigudu 2019). Consequently, when the rainy season came, the feces of infected people contaminated drinking water. The Mugabe regime’s role in creating conditions conducive to the epidemic, and its failure to respond adequately, led to widespread public anger and protests. In January 2010, an earthquake struck Haiti, killing approximately 100,000 people and destroying vital infrastructure in what was already the poorest country in the western hemisphere (Sturcke 2010). At the time, there had never been a recorded cholera case in the country (Frerichs et al. 2012). But later that year, a cholera outbreak began which infected 800,000 people and killed over 9000 (UN 2016). The outbreak was traced to Nepalese UN peacekeepers, who were camped near a tributary of a river from which many Haitians collect drinking water (Frerichs et al. 2012). In 2017, war-torn Yemen experienced the largest recorded cholera outbreak in history, which infected over 1.2 million people and killed more than 2500 (Federspiel and Ali 2018: 1). The disease was markedly worse in Houthi-controlled areas because of the Saudi-led airstrikes and blockade of imports (Kennedy et al. 2017).
Airstrikes destroyed infrastructure, including hospitals, and hit civilian areas, displacing people into crowded and insanitary conditions. The blockade resulted in shortages of, among other things, food, medicine, and fuel, which is
used to pump water where the power network has been destroyed by the conflict.
Conclusion

In late 2017, the Global Task Force on Cholera Control (2017: 4) – a WHO-led coalition of UN agencies, NGOs, and academic institutions – vowed to reduce cholera cases by 90% by 2030. This goal is certainly achievable. In theory, cholera is easy to prevent and treat. Cholera was the chief public health problem for western European governments in the mid-1800s, but it had disappeared by the turn of the century due to improvements in urban sanitation and water supplies. All that is needed to eliminate cholera is to ensure that everyone has access to cholera vaccines, safe drinking water, and basic sanitation. The Global Task Force must, however, overcome some big obstacles if it is to achieve its goal. Cholera in the contemporary Global South is a manifestation of a deeper malaise, just as it was in mid-nineteenth-century India and Europe. Cholera disappeared from Europe because increasingly strong and well-resourced states intervened to improve the living conditions of the urban working class, a group that was crucial to the prosperity and security of nineteenth-century capitalist societies. Cholera outbreaks only occur in places where functioning health, water, and sanitation systems are absent, and people who have contracted cholera die when they do not have access to basic healthcare. Cholera remains widespread in many parts of Sub-Saharan Africa and South Asia because postcolonial states do not have the will or the capacity to resolve the problem. It is not a priority for donor countries because waterborne diseases such as cholera do not threaten them in the same way as airborne diseases do. Ebola is a good point for comparison. It affects far fewer people than cholera: the West African Ebola epidemic (2013–2016) – the largest recorded outbreak – resulted in 28,000 cases and over 11,000 fatalities (WHO Ebola Response Team 2016: 587). There are 100 times more cases of cholera and 10 times more deaths in the world every year (Ali et al. 2015: 1).
Nevertheless,
the Ebola outbreak led to a massive response from global actors, because it was framed as a potential threat to health in the Global North and the functioning of the world economy. Challenging the political and economic system that creates the inequalities highlighted in this entry is beyond the remit of global health actors. However, as long as these broader determinants persist, large parts of the world’s population will not have access to basic needs like clean water, basic sanitation, and healthcare, and will be susceptible to contracting or even dying from preventable and treatable infectious diseases such as cholera.
Cross-References

▶ Drinking Water
▶ Ebola
▶ Epidemics
▶ Water-Borne Diseases
References

Ali, M., Nelson, A. R., Lopez, A. L., & Sack, D. A. (2015). Updated global burden of cholera in endemic countries. PLoS Neglected Tropical Diseases, 9(6), e0003832.
Arnold, D. (1986). Cholera and colonialism in British India. Past and Present, 113, 118–151.
Bayly, C. (1994). Empire of the doctors. London Review of Books, 16(23), 26–27.
Birn, A.-E., Pillay, Y., & Holtz, T. H. (2009). Textbook of international health: Global health in a dynamic world. Oxford, UK: Oxford University Press.
Chigudu, S. (2019). The politics of cholera, crisis and citizenship in urban Zimbabwe: ‘People were dying like flies’. African Affairs, 118(472), 413–434.
Cohn, S. K. (2017). Cholera revolts: A class struggle we may not like. Social History, 42(2), 162–180.
Evans, R. J. (1988). Epidemics and revolutions: Cholera in nineteenth-century Europe. Past and Present, 120, 123–146.
Farmer, P. (2004). Pathologies of power: Health, human rights, and the new war on the poor. Berkeley: University of California Press.
Federspiel, F., & Ali, M. (2018). The cholera outbreak in Yemen: Lessons learned and way forward. BMC Public Health, 18(1), 1338.
Frerichs, R. R., Keim, P. S., Barrais, R., & Piarroux, R. (2012). Nepalese origin of cholera epidemic in Haiti. Clinical Microbiology and Infection, 18(6), E158–E163.
Ginzburg, C. (2004). Ecstasies: Deciphering the witches’ sabbath. Chicago: University of Chicago Press.
Global Task Force on Cholera Control. (2017). Ending cholera: A global roadmap to 2030. https://www.who.int/cholera/publications/global-roadmap/en/
Harris, J. B., LaRoque, R. C., Qadri, F., Ryan, E. T., & Calderwood, S. B. (2012). Cholera. Lancet, 379(9835), 2466–2476.
Harrison, M. (2019). A dreadful scourge: Cholera in nineteenth century India. Modern Asian Studies (online first).
Jameson, J. (1820). Report on the epidemick cholera morbus: As it visited the territories subject to the Presidency of Bengal, in the years 1817, 1818 and 1819. Calcutta: Government Gazette Press.
Kennedy, J. (2016). Why have the majority of recent polio cases occurred in countries affected by Islamist militancy? A historical comparative analysis of the political determinants of polio in Nigeria, Somalia, Pakistan, Afghanistan and Syria. Medicine, Conflict and Survival, 32(4), 295–316.
Kennedy, J. (2019). Populist politics and vaccine hesitancy in Western Europe: An analysis of national-level data. European Journal of Public Health, 29(3), 512–516.
Kennedy, J., Harmer, A., & McCoy, D. (2017). The political determinants of the cholera outbreak in Yemen. The Lancet Global Health, 5(10), e970–e971.
Lee, K., & Dodgson, R. (2000). Globalization and cholera: Implications for global governance. Global Governance, 6, 213–236.
Low, M. C. (2008). Empire and the Hajj: Pilgrims, plagues, and pan-Islam under British surveillance, 1865–1908. International Journal of Middle East Studies, 40(2), 269–290.
Maddison, A. (2007). The world economy: Historical statistics (Vol. 2). Paris: OECD.
Mann, M. (1984). The autonomous power of the state: Its origins, mechanisms and results. European Journal of Sociology, 25(2), 185–213.
Piketty, T. (2014). Capital in the twenty-first century. Cambridge, MA: Belknap Press.
Pushak, N., & Briceño-Garmendia, C. M. (2011). Zimbabwe’s infrastructure: A continental perspective. New York: World Bank.
Sen, A. (1982). Poverty and famines: An essay on entitlement and deprivation. Oxford, UK: Oxford University Press.
Smith, F. B. (1979). The people’s health, 1830–1910. Canberra: Australian National University Press.
Sturcke, J. (2010). Haiti earthquake: Up to 100,000 may have died. https://www.theguardian.com/world/2010/jan/14/haiti-earthquake-rescue-operation
UN. (2016). Secretary-general apologizes for United Nations role in Haiti cholera epidemic. https://www.un.org/press/en/2016/sgsm18323.doc.htm
Van Zanden, J. L., Baten, J., Foldvari, P., & Van Leeuwen, B. (2014). The changing shape of global inequality 1820–2000; exploring a new dataset. Review of Income and Wealth, 60(2), 279–297.
WHO Ebola Response Team. (2016). After Ebola in West Africa – unpredictable risks, preventable epidemics. New England Journal of Medicine, 375(6), 587–596.
Cities for Climate Protection Program

Avilash Roul
Indo-German Centre for Sustainability (IGCS), Indian Institute of Technology Madras (IITM), Chennai, India

Keywords
Mitigation · GHG inventory · Resilient city · Climate proof · Urban local bodies

While cities are more vulnerable to climate change, especially their urban poor and most marginalized inhabitants, urban local bodies have enormous potential to mitigate and adapt to climate change. Cities and towns are subject to multiple climate hazards depending upon their geographical location and climatic conditions, ranging from increased precipitation, frequent inland and coastal flooding, and more frequent and stronger cyclones and storms to prolonged periods of extreme heat and cold, rising sea levels, and storm surges. Urbanization is one of the key trends of this century and is expected to continue, with around 60% of the world population living in cities by 2030. Cities, as engines of economic growth, contribute significantly to emissions of greenhouse gases (GHG) (see chapter ▶ “Greenhouse Gas Emissions”), but at the same time, they are highly vulnerable to climate change impacts. In 1900, 10% of the world population lived in cities. In 2015, 54% (3.9 billion people) lived in cities. According to projections, the urban population will grow to 6.4 billion (66% of the world population) by 2050 (UN 2014). With the growth of the urban population, the global built-up area is correspondingly expected to triple. Although cities and towns cover less than 2% of the earth’s land surface, they consume 78% of the world’s energy to sustain 70% of the world economy, consequently emitting more than 70% of GHGs. Adaptation and mitigation in cities therefore provide significant opportunities, with cities having a key role to play in addressing climate change.
Genesis of Cities for Climate Protection

Even before climate change was deliberated at a global scale, a number of local governments in North America and Europe took cognizance of the need to reduce GHGs, especially carbon dioxide (CO2). While the 1992 UN Conference on Environment and Development (UNCED) agreed to address climate change and created the right environment for implementation of the United Nations Framework Convention on Climate Change (UNFCCC), the International Council for Local Environmental Initiatives (ICLEI), now “ICLEI-Local Governments for Sustainability,” had already launched the “Urban CO2 Reduction Project” in June 1991 to encourage urban local governments to reduce GHGs. The 2-year Urban CO2 Reduction Project was initiated by a select group of 14 municipalities in the United States, Canada, and Europe to develop a comprehensive municipal planning framework for GHG reduction, especially CO2, and strategic energy management (Lambright et al. 1996). Through a series of policy workshops, technical consultations, and research on the data gathered by each municipality, the Project sought to develop a generic framework for municipal energy policy that local governments could use to devise policies to reduce GHG emissions. Phase I of the Project concluded in June 1993, when municipalities submitted their “local action plans” to their governing councils for consideration and approval. The experience of the Urban CO2 Reduction Project led to the development of a five-milestone framework and a software product for municipal use.
Cities for Climate Protection: International Campaign

Based on these case studies, ICLEI expanded internationally by launching the “Cities for Climate Protection” (CCP) Campaign. At the Municipal Leaders’ Summit on Climate Change and the Urban Environment, held at the UN in January 1993 and co-sponsored by ICLEI and UN Environment (UNEP), formerly the UN Environment Programme,
ICLEI announced Phase II of the Project – Cities for Climate Protection – to strengthen local governments’ ability to develop and implement municipal energy policies that reduce local emissions of GHGs. The CCP engages local governments to achieve measurable reductions in the emissions that cause air pollution and climate change. The Campaign provides a framework that enables local governments to integrate climate protection policies with actions that address immediate municipal concerns such as cost reduction, urban infrastructure improvements, pollution control, and enhanced community liveability. The cities that participate in the campaign have to complete five key tasks: conducting an energy and emissions inventory; preparing a forecast of future emissions; setting an emissions reduction target; formulating a local action plan to achieve the target; and implementing policy measures and programs to reduce emissions of CO2 and methane (ICLEI 1993). The four stated goals of the CCP Campaign are (a) strengthening local commitments to reduce emissions of GHGs in urban areas; (b) disseminating planning and management tools to facilitate development of cost-effective CO2 reduction policies; (c) research and development of best practices and development of model municipalities; and (d) enhancing national and international ties so that municipal-level actions are included in national action plans and international deliberations. However, these tasks require significant scientific capacity, which urban local governments generally lack: they must produce analytical tools to track their own emissions, forecast changes over time, and assess the potential impact of various technical and policy measures on their targets. Over the years, the CCP has engaged over 650 municipal governments worldwide in its effort to reduce GHG emissions.
Through its various sectoral projects related to land-use policies, infrastructure and other service provisions, transportation management systems, building codes, and waste management, the CCP Campaign helps local governments directly influence and control many of the activities linked to climate change.
As part of the CCP Campaign, ICLEI provides technical support to each city to achieve the five milestones, including preparing emissions inventories, identifying emission reduction measures, fostering city-to-city exchanges of knowledge and experience through workshops and other training, and helping to broker funding for project implementation. ICLEI also seeks to bring together corporate staff, elected officials, commercial institutions, industry, and other stakeholders to build the much-needed consensus to achieve its goals.
Estimating Emissions Under Cities for Climate Protection

The purpose of the local inventory of GHG emissions is to help identify and quantify the sources of GHGs and the most effective opportunities for reducing such emissions. Local governments directly or indirectly influence, govern, and manage the activities in cities that determine GHG emissions, and they are therefore better placed than global or national actors to design effective local strategies for emission reduction. The CCP campaign is a performance-based initiative which requires methods that can be widely applied by individuals working in local governments, who are primarily interested in pragmatic techniques that can help them design effective emission reduction policies and programs (Kates et al. 1998). However, the CCP’s emphasis is limited to the reduction of carbon dioxide generated from fossil fuel combustion and methane from landfills. Although the CCP methodology includes both emission inventories and the quantification of emission reduction measures, the priority has been on emission reductions. In contrast, the aim of a similar project, the Association of American Geographers’ Global Change in Local Places (AAG-GCLP) project, was to understand the causes and dynamics of GHG emissions at the local scale and the capacity of local people to deal with climate change challenges (AAG 2003). For the CCP, urban places both big and small qualify, as long as even the smallest jurisdiction has made a commitment to tracking and reducing
emissions. Municipalities usually approach the task of GHG emission inventories and emission reduction analysis in terms of their political boundaries. The CCP addresses only the most important sources of GHG emissions. A key reason for developing a practical and standardized method for the CCP is that local governments with varying levels of time and resources will then be able to maintain and regularly update their GHG emissions inventories. Local governments join the CCP campaign by passing a resolution pledging to reduce GHG emissions from their local government operations and throughout their communities. ICLEI provides regionally specific tools and technical assistance to help local governments reduce their GHG emissions.
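The logic of the first three CCP milestones (inventory, forecast, target) can be illustrated with a minimal sketch. All sector names, activity figures, emission factors, and growth rates below are hypothetical, chosen only to show the arithmetic; real inventories use regionally specific factors supplied through tools such as ICLEI’s software.

```python
# Illustrative sketch of a simple municipal GHG inventory in the spirit of
# the CCP five-milestone framework. All numbers are hypothetical.

# Milestone 1: inventory -- activity data paired with an (assumed)
# emission factor expressed in tonnes of CO2-equivalent per unit.
inventory = {
    # sector: (activity, emission factor in tCO2e per unit of activity)
    "residential_electricity_MWh": (120_000, 0.5),
    "transport_fuel_kL": (45_000, 2.3),
    "landfill_waste_t": (80_000, 0.7),  # landfill methane expressed as CO2e
}

def total_emissions(inv):
    """Sum emissions across all sectors (tonnes CO2e)."""
    return sum(activity * factor for activity, factor in inv.values())

def forecast(base, growth_rate, years):
    """Milestone 2: business-as-usual forecast with compound annual growth."""
    return base * (1 + growth_rate) ** years

def required_cut(base, target_fraction, forecast_emissions):
    """Milestone 3: gap between the BAU forecast and a target expressed as a
    fraction of base-year emissions (e.g. 0.8 = 20% below base year)."""
    return forecast_emissions - base * target_fraction

base = total_emissions(inventory)
bau = forecast(base, growth_rate=0.02, years=10)
gap = required_cut(base, target_fraction=0.8, forecast_emissions=bau)

print(f"Base-year emissions: {base:,.0f} tCO2e")
print(f"Business-as-usual forecast: {bau:,.0f} tCO2e")
print(f"Reduction needed to hit target: {gap:,.0f} tCO2e")
```

The gap figure is what a local action plan (milestone 4) must then allocate across concrete measures, whose implementation and monitoring constitute milestone 5.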
Assessing Cities for Climate Protection

The CCP campaign has localized the policy of controlling GHG emissions rather than the problem of climate change (Lindseth 2004). The CCP campaign emphasizes local benefits without referring to climate change or the harm it causes nature. While the campaign frames climate change as a local issue, in that it entails local actors working on local projects to reduce GHG emissions, the motivation for action at the beginning of the campaign had an element of global awareness. The idea of climate change as a moral responsibility and a risk issue requiring immediate action was lost as the CCP entered the phase of local implementation. In most of the CCP cities in the United States, local politics and programs to control GHG emissions are motivated by co-benefits rather than by concern about global climate change. Rather than focus solely on the threat posed by climate change, ICLEI encourages local governments to join the CCP Campaign by stressing the co-benefits of taking action. Specifically, ICLEI frames municipal responses to climate change vis-à-vis other policy challenges cities face, such as air pollution or urban energy costs (Fay 2007), even though cities’ participation in the CCP means members agree to the normative statement that climate change poses a threat.
Programs Related to Cities and Climate Change A number of programs or projects related to climate change and cities have been implemented after the CCP. Toward the end of the decade 2001–2010, growing interest in the concept of urban climate change resilience (UCCR) (Bahadur et al. 2016), which recognizes the complexity of rapid urbanization and the uncertainties associated with climate change, led to the launch of a number of climate change resilience-building initiatives, including the Climate Investment Funds' Pilot Program for Climate Resilience, the Strengthening Climate Resilience Initiative funded by the Department for International Development of the United Kingdom, and the Rockefeller Foundation's Asian Cities Climate Change Resilience Network (ACCCRN). ICLEI itself has initiated programs, namely, Resilient City, Low-Carbon City, Smart City, EcoMobile City, and so on. C40 Cities is a global network of large cities taking action to address climate change by developing and implementing policies and programs that generate measurable reductions in both GHG emissions and climate risks. 100 Resilient Cities (100RC), supported by the Rockefeller Foundation, is dedicated to helping cities around the world become more resilient to the physical, social, and economic challenges that are a growing part of the twenty-first century. Similarly, the "Cities Fit for Climate Change" project is guided by the Leipzig Charter on Sustainable European Cities and the Memorandum "Urban Energies – Urban Challenges" of the German Federal Ministry for the Environment, Nature Conservation, Building and Nuclear Safety (BMUB). The German development agency GIZ has been implementing this project in three cities, Chennai in India, Santiago in Chile, and Durban (eThekwini) in South Africa, since 2016. The project aims at integrated and climate-sensitive urban development to facilitate the creation of a climate-proof urban development model that fosters new urban design.
Climate-proofing means that city development strategies, urban designs, land-use and master plans, and all related investments are resilient and adaptable to
the current and future impacts of climate change. Furthermore, they must take climate change mitigation considerations into account. The project supports innovative solutions for urban planning and makes cities fit for climate change. The intention is to make tackling climate change an integrated and strategic element of urban development. Plans, programs, and strategies and the associated investments are being made more resilient and adaptable to current and future impacts of climate change. The project supports implementation of the UNFCCC and the UN Habitat III process aimed at setting a new urban agenda, among other initiatives.
Conclusion Climate change continues to have severe impacts on a broad spectrum of infrastructure systems (water and energy supply, sanitation and drainage, transport, and telecommunication), services (including health care and emergency services), the environment, and ecosystem services. Urban dwellers are particularly vulnerable to disruptions in essential infrastructure services because many of these systems are interlinked. For any city, the scale of damage due to climate change is directly connected to the extent and effectiveness of urban planning, the quality of housing and infrastructure, the level of preparedness among the city's population, the availability of key emergency services, etc. However, for cities to be resilient to climate change, they must look beyond mitigation in the form of emission reduction. Land-use planning should be overhauled to mainstream climate change and disaster risk reduction. Residents of poor and informal settlements and slums, unless assisted, would in all probability lack the tenure and resources to vacate vulnerable areas in exchange for safer, more resilient ones. The Fifth Assessment Report of the Intergovernmental Panel on Climate Change (IPCC) has provided a long list of prescriptions for urban governments (IPCC 2014). Action in urban centers is essential to successful global climate change adaptation. Cities are composed of complex interdependent systems
that can be leveraged to support climate change adaptation and mitigation through effective city governments in the form of cooperative multilevel adaptive governance. This can enable synergies with infrastructure investment and maintenance, land-use management, livelihood creation, and ecosystem service protection. The direct and indirect linkages of the hinterland (peri-urban areas) with urban centers must be considered holistically in addressing both climate change mitigation and adaptation. Addressing political interests, mobilizing institutional support for climate adaptation, and ensuring voice and influence for those most at risk are important strategic adaptation concerns. Building the capacity of low-income groups and vulnerable communities and their partnership with local governments can be an effective urban adaptation strategy. Urban climate adaptation and resilience provide opportunities for both incremental and transformative development. Implementing effective urban adaptation and resilience is possible and can be accelerated further.
Cross-References
▶ Greenhouse Gas Emissions

References
AAG (Association of American Geographers). (2003). Global change and local places: Estimating, understanding, and reducing greenhouse gases. Cambridge: Cambridge University Press.
Bahadur, A., et al. (2016). Enhancing urban climate change resilience: Seven entry points for action. Sustainable Development Working Paper Series, No. 47. Manila: ADB, pp. 2–8.
Fay, C. (2007). Think locally, act globally: Lessons to learn from the cities for climate protection campaign. Innovations: Journal of Politics, 7, 1–12.
ICLEI. (1993). Cities for climate protection: An international campaign to reduce urban emissions of greenhouse gases. Toronto: ICLEI.
IPCC. (2014). Urban areas. http://www.ipcc.ch/pdf/assessment-report/ar5/wg2/WGIIAR5-Chap8_FINAL.pdf. Accessed on 2 May 2018.
Kates, R. W., et al. (1998). Methods for estimating greenhouse gases from local places. Local Environment, 3(3), 279–297.
Lambright, W. H., et al. (1996). Urban reactions to the global warming issue: Agenda setting in Toronto and Chicago. Climatic Change, 34, 463–478.
Lindseth, G. (2004). The Cities for Climate Protection Campaign (CCPC) and the framing of local climate policy. Local Environment, 9(4), 325–336.
UN. (2014). 2014 revision of world urbanization prospects. https://esa.un.org/unpd/wup/Publications/Files/WUP2014-Report.pdf. Accessed on 20 April 2018.

Further Readings
Carbon Disclosure Project (CDP). (2012). Measurement for management: CDP Cities 2012 global report including special report on C40 cities. Carbon Disclosure Project and C40 Cities. https://www.cdproject.net/Pages/CDP-Cities-Infographic.html
UN Habitat. (2011). Cities and climate change: Global report on human settlements. http://unfccc.int/resource/docs/convkp/conveng.pdf
UN Habitat. (2012). Developing local climate change plans: A guide for cities in developing countries. www.unhabitat.org/downloads/docs/11424_1_594548.pdf
UN Habitat. (2017). Guiding principle for city climate action planning: Toolkit for city-level review. https://unhabitat.org/the-guiding-principles/
UNEP. (2014). Climate finance for cities and buildings: A handbook for local governments. https://wedocs.unep.org/Climatefinance/cities
World Bank. (2012). Urban risk assessments: Understanding disaster and climate risk in cities. Washington, DC: World Bank.

Civil Liberties
Tuğba Bayar Department of International Relations, Bilkent University, Ankara, Turkey Keywords
Civil liberties · Balance of powers · Counterterrorism
Definition Civil liberties are the basic rights and freedoms derived from positive law. Civil liberties are defined in accordance with the legal status of the subject vis-à-vis a given state. The basis of civil liberties is defined by each state individually through a contract (such as a constitution) between the given state and the subjects under its jurisdiction. Civil liberties are essentially designed to balance the power relationship between the subject and the state. In other words, civil liberties protect individuals from the tyranny of the state. They guarantee protection of citizens from extrajudicial execution, arbitrary detention, and other similar arbitrary undertakings. Civil liberties represent a specific portion of human rights. The term human rights is the umbrella phrase for the common standard of rights inherent to all peoples and all nations equally, without any discrimination, as stated by the Universal Declaration of Human Rights. Human rights are inherent to all human beings, universally. They are not granted by any state, as they derive from natural law. Civil liberties represent a significant component of the human rights universe. Their scope and objectives are determined by each sovereign state individually. Their scope is broader in democracies and narrower in less democratic or nondemocratic countries. States are able to limit civil liberties in accordance with circumstances, such as a state of emergency. High-standard civil liberties require the state to refrain from interfering with them and to fulfill its protective responsibilities by establishing effective mechanisms, such as security personnel trained in human rights, or an acceptable level of the rule of law.
Introduction Civil liberty is a concept that is often mistaken for human rights or for civil rights. Human rights are rights inherent to all human beings, regardless of their age, skin color, gender, or ethnic, religious, national, mental, or any other status. All human beings are entitled to human rights equally, without any discrimination. The principle of universality is the main basis of human rights. Human rights include the right to life, the right to marriage and family, the right to work, and the right to shelter, as well as freedom from slavery and torture, freedom of opinion and expression, and so on. Civil rights are a specific set of rights designed to guarantee equal social protection and opportunities. These rights
are granted to citizens of a particular state. The source of civil rights is the constitution and relevant laws of the state of the individual's citizenship. For instance, consider the right to vote: who can vote, and from what age, is based upon individual national laws. Civil rights are political and social rights determined by a given state, and their purpose is to guarantee equal citizenship opportunities without discrimination. Therefore, the basic difference between human rights and civil rights is the reason for possessing them: human rights are possessed by virtue of being a human being, and civil rights are enjoyed by virtue of being a citizen. As for civil liberties, they denote the freedom to act as one sees fit, provided that this does not harm other people. Through civil liberties, individuals are granted the right to dissent from social norms, free from state interference. Civil liberties include freedom of speech, freedom of religion, freedom from discrimination, the right to privacy, due process, the right to vote, the right to a fair trial, the right to marriage and family, and the right to associate and assemble. The difference between civil rights and civil liberties lies in what right is affected and whose right is affected. For instance, admission to master's studies is not a guaranteed civil liberty; however, female university graduates have the equal legal right to be considered for admission to master's studies free from discrimination. The legal right to be considered for master's studies cannot be denied based upon gender. Hence, civil liberties are personal freedoms, and they are considered citizens' protection from their government. Marriage is a right; however, the age of marriageability differs from state to state. Although marriage is a right, the right to same-sex marriage is not protected by all states; while cousin marriage is legal in numerous countries, sibling marriage is universally illegal (see Stone 2014).
Civil liberties stem from the interpretation of constitutional and other legal rights and court cases. Civil liberties can also be interpreted as limitations on the behavior of the state. The state is the ultimate protector of human rights and freedoms. Yet, the state is also the holder of the utmost power in a society. Therefore, the subjects of a state require limitations on state power. Thus, civil liberties serve to prevent abuse of power. In this context, civil liberties can be assessed under two distinct categories: substantive liberties and procedural liberties. These two categories reflect practical and sensible limitations on the powers of a government. Substantive liberties are limits on the power of the government. Procedural liberties are limits on the ways in which the government can act. Substantive liberties become relevant for issues like freedom of religion and religious practices, about which the government may not pass laws. Procedural liberties mainly concern the executive and judicial branches of the government. For instance, courts must consider defendants procedurally innocent until proven otherwise. Unlike human rights issues, discussions of civil liberties were not prominent until recently. The counterterrorism measures implemented after the 9/11 attacks brought civil liberties concerns into the daily life of ordinary citizens. States, especially those that had been targets of terrorist attacks, started to introduce new security measures into society. These measures also meant restrictions of civil liberties (Deflem and McDonough 2015). Although citizens were ready to accept limitations of their civil liberties at the beginning, criticism of states' surveillance methods increased over time. People expect the state to provide security and civil liberties simultaneously, and in equal measure. Civil liberties became the focus of debate as a result of increasing surveillance in those states that became targets of terrorist attacks. As the degree of safety and security measures increased, citizens started to sacrifice more liberties to the state. From surveillance of shopping and banking activities to communications or searches of personal belongings, the increasing measures started to narrow the circle of civil liberties.
As the scholarly debates emphasize, security and civil liberties do not have a ranking; both matter for the individual. With the swelling of safety and security measures, individuals started to find it harder to enjoy the benefits of democracy as well as their rights and liberties (Freedom House 2014). On the other hand,
governments have partially perceived the support for civil liberties as a threat to national security (Finkelstein et al. 2017). Therefore, the two concepts, security and civil liberties, began to clash as competing notions. The trade-off between the two creates various complications, such as discrimination among the members of a society. In this sense, one of the most common criticisms of counterterrorism concerns terrorist profiling. Certain groups of people are classified as potential criminals according to their physical appearance, faith, or country of origin, and they are subjected to additional searches and inspections. These practices are discriminatory in nature and cause injustice and wrongfulness. By employing such practices, states place emphasis on security at the expense of harming the rights of a specific group of people. The inequalities are justified by governments utilizing the national security discourse. The egalitarian norm holds that basic civil liberties shall be distributed equally, without any exception. Maldistribution leads to a decrease in protections and mistrust of the state. In most cases, counterterror measures lead to a blurring of the separation of powers. The rising influence of the executive and legislative branches over the judiciary creates a loophole in the constitutional balance of power. The increase in the coercive power of the state is a factor that weakens democratic processes. It unavoidably gives rise to authoritarianism (Shor et al. 2018). The government reaction to terrorist violence becomes self-destructive when counterterrorism is pursued at the expense of civil liberties, since such implementations harm the state mechanism itself. This claim rests upon a literature of empirical case studies.
The promotion and protection of civil liberties rest heavily upon democratic and transparent structures, besides a balanced distribution of the state's powers (Goold and Lazarus 2019). The state limits civil liberties not only for terror-related reasons but also for others, as experienced during the Covid-19 pandemic, which caused a trade-off between public health and civil liberties. The emergency measures to prevent the spread of the virus and increasing insecurities prepared the ground for a willingness to sacrifice civil liberties. Closing schools, declaring curfews, and canceling arts and sports events and similar public gatherings were popular global measures for the protection of health services and public health. Yet, the elderly were locked in their homes, students were deprived of their right to education, and countless other sacrifices were the price of pandemic measures (D'cruz and Banerjee 2020; Sekalala et al. 2020). Protection and promotion of civil liberties is one of the duties of the state. At the same time, the maintenance of safety and security is a significant duty of the state, too. The state is, for this reason, expected to pursue its efforts against insecurity within the framework of the constitution and the relevant laws that underpin civil liberties. It is obvious that the sacrifice of civil liberties may strengthen the state's efforts against the source of insecurity (terror, virus, etc.), yet this does not release the state from its duty to protect those liberties.
Conclusion Civil liberties are central to the interconnected spheres of increasing insecurity and governmental measures. The marginalization of particular groups (like the elderly during the Covid-19 pandemic, or Muslim males with beards in the post-9/11 era) and the deprivation of civil rights emerge as purported remedies for approaching risks. Governments tend to fulfill their responsibilities regarding national security and public health by prioritizing measures over civil liberties. These measures lower governmental accountability for abuses. Simultaneously, a majority of the public is ready to sacrifice their civil liberties for increased protection and prevention against the approaching risks. According to universal human rights norms and principles, together with international human rights law, governments are obliged to provide safeguards and to fulfill their responsibility to respect and protect civil liberties simultaneously. Taking proportionate measures, upholding the nondiscrimination and equality principles, and providing the necessary transparency and accountability are not ranked in a hierarchical order. As all human rights, including civil liberties in their entirety, are interdependent, interrelated, and indivisible, both individual governments and global governance institutions and organizations must provide them all simultaneously, avoiding abuse of the rights and liberties of their subjects.
Cross-References ▶ Surveillance States
References D’cruz, M., & Banerjee, D. (2020). ‘An invisible human rights crisis’: The marginalization of older adults during the COVID-19 pandemic–An advocacy review. Psychiatry Research, 292, 113369. Deflem, M., & McDonough, S. (2015). The fear of counterterrorism: Surveillance and civil liberties since 9/11. Society, 52(1), 70–79. Finkelstein, E. A., Mansfield, C., Wood, D., Rowe, B., Chay, J., & Ozdemir, S. (2017). Trade-offs between civil liberties and National Security: A discrete choice experiment. Contemporary Economic Policy, 35(2), 292–311. Freedom House. (2014). Freedom in the world 2014: The annual survey of political rights and civil liberties. Rowman & Littlefield. Goold, B. J., & Lazarus, L. (Eds.). (2019). Security and human rights. Bloomsbury Publishing. Sekalala, S., Forman, L., Habibi, R., & Meier, B. M. (2020). Health and human rights are inextricably linked in the COVID-19 response. BMJ Global Health, 5(9), e003359. Shor, E., Baccini, L., Tsai, C. T., Lin, T. H., & Chen, T. C. (2018). Counterterrorist legislation and respect for civil liberties: An inevitable collision? Studies in Conflict & Terrorism, 41(5), 339–364.
Further Reading McIntyre, L., Michael, K., & Albrecht, K. (2015). RFID: Helpful new technology or threat to privacy and civil liberties? IEEE Potentials, 34(5), 13–18. Richards, N. (2015). Intellectual privacy: Rethinking civil liberties in the digital age. Oxford: Oxford University Press. Stone, R. (2014). Textbook on civil liberties and human rights. USA: Oxford University Press. Sullivan, H. J. (2015). Civil rights and liberties: Provocative questions and evolving answers. Routledge.
Civilian Control of Armed Forces Tuba Eldem Fenerbahce University, Istanbul, Turkey Keywords
Civil-military relations · Democratic control of armed forces · Civilian control of military · Guardianship dilemma · Civil-military problematique
Introduction The question of “civilian control” or “how to guard the guardians” has been a central issue within the subfield of civil-military relations (CMR) since Plato's Republic, written more than 2500 years ago. States need strong armies to defend their borders, but armed forces strong enough to protect the state also pose a threat to the civilian leadership. Enjoying important political advantages vis-à-vis the executive power, such as “a highly emotionalized symbolic status,” “a marked superiority in organization,” and, most importantly, “a monopoly of arms,” why do the armed forces ever obey civilian masters (Finer 1962, p. 6)? How can civilian leaders reliably get the military to obey when civilian and military preferences diverge? How can democratic civilian control be established, enhanced, and assessed? This entry differentiates between the concepts of civilian and democratic control, classifies the research on the basis of the method used to measure the extent of civilian control, and discusses civilian control strategies.
Conceptualization of Civilian Control There is a consensus in the literature that civilian control means more than the absence of military coups d'état. It is an essential feature of any democratic state and a concept vital to understanding civil-military relations. There is far less
agreement, however, as to what exactly civilian control is, what it entails, and even when it can be said to exist. Until the end of the Cold War, the term “civilian control” was used interchangeably with “political control,” referring to executive control of the military. Huntington (1957, p. 80), for instance, presumes that civilian control has “to do with the relative power of civilian and military groups” and is achieved to the extent that military power is minimized. According to Cottey, Edmunds, and Forster (1999), civilian control concerns the political function and position of the military – that is to say, their relationship with the institutions and patterns of political power in the society concerned. Civilian control, however, means more than the noninterference of the military in politics. Civilian control occurs when civilian officials exert sufficient power over the armed forces not only to “conduct general policy without interference from the military” but also “to define the goals and general organization of national defense, to formulate and conduct defense policy, and to monitor the implementation of military policy” (Aguero 1995, p. 19). Although in theory civilian supremacy is absolute and all-encompassing, meaning that all decisions of government, including defense and military policy, are to be made or approved by officials outside the professional armed forces, in reality, as Welch (1987, p. 12) puts it, civilian control is a matter of degree. Several students of CMR differentiate degrees of civilian control by distinguishing several decision-making areas, such as leadership or elite recruitment, public policy, internal security, national defense, and military organization (Colton 1979; Trinkunas 2011; Croissant and Kuehn 2009). Full civilian control exists if political leaders enjoy uncontested decision-making power in all areas, while in the typical military regime, the military leadership exercises control over all areas.
Another way of looking at the degrees of civilian control requires examining not only the scope of issues but also the means military officers use to gain greater involvement. Timothy Colton (1979), in his study of CMR in the Soviet Union, disaggregated civilian control along two dimensions: the scope of issues with which a military is concerned (internal, institutional, intermediate, societal) and the means it employs (official prerogative, expert advice, political bargaining, force) (p. 233). The professionalization of the management of war has created opportunities for military establishments to gain significant power and considerable autonomy, even in those liberal democracies in which civilian supremacy has been an established norm. Officers are involved in the formation of defense policies, provide expertise to civilian decision-makers, engage in bureaucratic or political bargaining to protect or enhance their organizational interests, and exercise authority over their internal functioning. Since a certain degree of institutional (Pion-Berlin 1992) or professional autonomy (Huntington 1957) is functional for military effectiveness, militaries usually enjoy some autonomous decision-making power concerning internal affairs. The prevalent view in the literature thus considers civilian control as “a regime of shared responsibility.” As Douglas Bland (1999) asserts, “civil control of the military is managed and maintained through the sharing of responsibility for control between civilian leaders and military officers” (p. 9). The civil authority “control[s] policies dealing with national goals, the allocation of defense resources, and the use of force,” while the military is given “a degree of ‘rightful’ and vested authority over such matters as military doctrine, discipline, operational planning, internal organization, promotion below general and flag grade, and the tactical direction of units in operations” (Bland 1999, p. 19). The bottom line for civilian control is who possesses final decision-making power, controls the defense policy agenda, defines the boundaries of the military's institutional autonomy, and monitors and sanctions the military's defense activities.
From Civilian to Democratic Control of the Armed Forces The concept of civilian control, which was widely used during the Cold War era largely as a response to the threat of praetorianism and the resultant need to enforce civilian executive control of the military, was replaced in the post-Cold War era by the concept of democratic control, signifying not only the subordination of the military to the civilian executive but also the restriction of political leaders' autonomy in military and defense policy. The concept of democratic control of armed forces assesses the compatibility of military relations with society in terms of democratic norms such as openness, transparency, accountability, legitimacy, and pluralism (Cottey et al. 2002; Born et al. 2006). Democratic control of armed forces can be defined as the international norms and standards governing the relationship between the armed forces and society, whereby the armed forces are subordinated to constitutionally designated authorities and subject to executive, legislative, judicial, and societal oversight. Several important structural changes that followed the end of the Cold War brought the democratic control of the armed forces to the center of academic and practical debates: (1) the emergence of a global democratic “zeitgeist” (Diamond 1993, p. 53); (2) the transition that took place in Central and Eastern Europe and the Western actors' desire to build stable democracies in the region; (3) the enlargements of the EU, NATO, and the Council of Europe and their respective conditionality criteria; (4) the use of democratic control norms as interstate confidence-building measures, such as the OSCE Code of Conduct on Politico-Military Aspects of Security; (5) the attachment of “security sector reform” to the agendas of the development, good governance, and peacebuilding communities; and (6) the transformation of both NATO and national militaries from conventional war fighters to peacebuilding and peacekeeping actors. These post-Cold War processes all contributed to the development of norms of democratic control at the regional and international levels.
The epistemic communities or expertise-based networks of professionals engaged in security sector reform, which have multiplied within the last decade in tandem with the abovementioned structural developments, have played a pivotal role in the elaboration of norms of democratic control of armed forces and security sector reform at the international level (Faleg 2012).
Operationalization of the Concept of Democratic Civilian Control of Armed Forces Democratic control can be assessed on five different dimensions, including political, institutional, legislative, judicial, and societal oversight of the armed forces. The first and core dimension of democratic control is the subordination of the military to the political control of the state's democratically elected authorities. Civilian authorities in a democracy should be able to act autonomously from the armed forces without fear of military disloyalty to the regime (Welch 1987, p. 13; Fitch 1998, p. 37). Military involvement in the political affairs of a state poses the classic, and by far most severe, threat to civilian control. The role of the armed forces should be limited to national defense, and the military should not take any role in extra-military areas of the state apparatus, including police, intelligence, state enterprises, and, of course, politics. Their role in politics is limited to influence, exerted through bureaucratic bargaining and expert advice. The second essential characteristic of democratic control is the institutional control of the armed forces in their professional area of expertise. For most experts, the most effective way is the establishment of a strong, well-staffed, civilian-led ministry of defense that devises, advises on, and manages defense policies and oversees military operations (Huntington 1957, pp. 428–455; Aguero 1995, p. 197; Stepan 1988; Pion-Berlin 1998, 2009, p. 563; Born and Lazzarini 2006; Bland 2001; Cottey et al. 2002). A competent, effective, and courageous civilian cadre that is knowledgeable about military affairs, and the existence of institutional platforms allowing strong cooperation and coordination with military leaders, are also considered indispensable for effective institutional control of the military organization (Born et al. 2006, p. 7; Croissant and Kuehn 2017).
The third dimension of democratic control is the legislative oversight of all executive decisions regarding the defense and security of the country. The organization, deployment, and use of armed forces; the setting of military priorities and requirements; the allocation of necessary resources; military promotions; and the definition of security threats should be scrutinized by the legislature in order to ensure popular support and legitimacy (Lunn 2003, p. 13; Born 2003, p. 39; Born et al. 2003, p. 6; Hänggi 2003, p. 16; Stepan 1988). It is the legislative body that “keeps the government accountable and secures a balance between the security policy and society by aligning the goals, policies, and procedures of the military and political leaders” (Born 2003, p. 39). Legislative oversight of the defense organization – primarily but not exclusively exercised through “the power of the purse” – should go beyond routine (rubber-stamp) approval of what the executive proposes (Huisman 2002, p. 2). The legislatures in advanced democracies perform such a role with the help of an independent and respected audit bureau, competent and suitably supported specialist committees, knowledgeable parliamentary staff, and “outside” expertise (Greenwood 2006, p. 31). The fourth dimension, judicial control, requires the subordination of the armed forces to the civilian rule of law, the minimization of the legal autonomy of the military, and the end of the jurisdiction of military courts over civilians. Military personnel should be held accountable for violations of military and criminal law, and they should not have special legal privileges by law or by practice (Fitch 2001, p. 63). In consolidated democracies, the military is also subject to the civilian justice system, and either there are no military courts (Sweden, Denmark, Finland, Norway, Austria, and Germany), or, if military courts do exist, they have almost no legal jurisdiction outside of narrowly defined internal breaches of military discipline (Stepan 1988, p. 97; Fitch 1998, 2001).
The final dimension of democratic control is public or societal control, referring to a popular perception of democratic control of the armed forces, with military staffs subordinated to civilian officeholders who are themselves clearly accountable to the elected representatives of the society at large. Effective public control, therefore, requires not only the existence of a well-developed participatory political culture that subjects the elected civilians' management and use of force to a deliberative process (Levy 2016) but also a security community representing civil society that nurtures an informed national debate on security issues (Hänggi 2003, p. 17; Born 2003; Born et al. 2004, 2006). Collective actors working outside formal institutions – such as expert communities, specialized think tanks, nongovernmental organizations, and universities – should be involved, and their judgments should receive proper weight from decision-makers. Debates should not be confined to the military organization but should extend to national security policy, including the nature of the threat, the operational aspects of military deployment, the very legitimacy of using force, and its utility in promoting the public good (Dauber 1998; Levy 2016, p. 83).
Measurement of Democratic Civilian Control
How can we measure the extent of democratic civilian control? When can the process of asserting civilian supremacy be said to be complete? How do we know that civilian control has been set firmly in place? Scholars use a variety of approaches, which differ in their objects of observation as well as in the causal assumptions they rely upon to assess the degree of civilian control of the armed forces. Some students of CMR look at the interaction and behavior of civilian and military leaders, others at their culture and attitudes, and still others at their institutional environment (Pion-Berlin 2001). Based on the data used, we can distinguish three main approaches: institutional, behavioral, and attitudinal.
Institutional
Alfred Stepan's (1988) prerogatives method, developed in his seminal study of post-transition regimes in Brazil and the Southern Cone, can be considered the most widely used institutionalist approach to measuring civilian control. Stepan considers the problem of civilian control on two dimensions: articulated military contestation and military institutional prerogatives. The dimension of articulated military contestation refers to the objection of the military to the policies of the civilian democratic leadership, often in such key areas as the legacy of human rights violations, control over the structure and mission of the military, the military budget, and military prerogatives (Stepan 1988, p. 68). The dimension of military institutional prerogatives refers to "those areas where, whether challenged or not, the military as an institution assumes they have an acquired right or privilege, formal or informal, to exercise effective control over its internal governance, to play a role within extra-military areas within the state apparatus, or even to structure relationships between the state and political or civil society" (p. 93). The dimension of articulated military contestation involves the kind of open contestation internal to Dahl's conceptualization of power. Since military power can derive from a series of prerogatives that it has acquired ideologically and politically, Stepan considers these prerogatives a form of latent, independent structural power within the polity, even in cases where there is almost no articulated conflict. Stepan identifies 11 potential institutional prerogatives of a military: (1) a constitutionally sanctioned independent role; (2) the military's relationship to the chief executive; (3) its role in the Cabinet; (4) its role in intelligence; (5) its role in the police; (6) military promotions; (7) its role in state enterprises; (8) coordination of the defense sector; (9) its role in the legal system; (10) the role of the legislature in defense and security; and (11) the role of senior civil servants or civilian political appointees in the defense sector (pp. 94–97).
Stepan ranks military prerogatives in each area as "low," "moderate," or "high/strong." Empirically, every military prerogative can be contested. The two dimensions create four possible scenarios: civilian control, when both civil-military contestation and military prerogatives are low; a regime of unequal civilian accommodation, when military prerogatives are high and military contestation is low; an untenable position for democratic leaders, when both military prerogatives and contestation are high; and, finally, an unsustainable position for military leaders, when military prerogatives are low and military contestation is high. Building on previous works by Alfred Stepan (1988) and Timothy Colton (1979), several scholars measure the degree of civilian control of the military by identifying the extent to which effective civilian institutions have been established, the residual prerogatives of the military, and the patterns of conflict that circumscribe civilian decision-making power in five areas: leadership recruitment, public policy, internal security, external defense, and military organization (Trinkunas 2005; Croissant et al. 2010). As with Stepan, civilian control over each area is measured as high, medium, or low. The institutional approach allows the researcher to identify the degree of civilian control in a given country at a given point in time, to track changes over time, and to identify differences in patterns across countries.
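Stepan's two dimensions and four scenarios amount to a small decision table. The following sketch is purely illustrative and is not drawn from the source: the function name, the argument order, and the "low"/"high" string labels are our own coding choices, not Stepan's.

```python
# Illustrative coding of Stepan's (1988) two dimensions -- articulated
# military contestation and military institutional prerogatives -- into
# his four scenarios. Inputs are an analyst's ratings ("low" or "high").
def stepan_scenario(contestation: str, prerogatives: str) -> str:
    scenarios = {
        ("low", "low"): "civilian control",
        ("low", "high"): "unequal civilian accommodation",
        ("high", "high"): "untenable position for democratic leaders",
        ("high", "low"): "unsustainable position for military leaders",
    }
    return scenarios[(contestation, prerogatives)]

# A polity with low contestation but high residual military prerogatives:
print(stepan_scenario("low", "high"))  # unequal civilian accommodation
```

The same table-driven pattern extends naturally to the five-area, high/medium/low coding used by Trinkunas (2005) and Croissant et al. (2010).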
Stepan's Institutional Model of Civilian Control. Source: Stepan (1988), p. 100
Behavioral Approaches
The behavioral approach, used by both structural and rational-choice theorists, measures the extent of civilian control by identifying whether civilians or the military triumph when their preferences diverge (Kemp and Hudlin 1992; Kohn 1994; Desch 1999; Feaver 1998, 2003). Civilian control is weak when military preferences prevail most of the time; the most extreme cases are states of military rule or incidents of military coups, in which the military either prefers its own rule to civilian rule or supports one group of civilians over another. When military preferences prevail some of the time, civilian control is still not firm, but such cases pose a less serious problem for CMR. Finally, civilian control is firm when civilian preferences prevail most of the time (Desch 1999, p. 5). Michael Desch, in his study Civilian Control of the Military (1999), determines the level of civilian control, the dependent variable of his study, by
using this approach. The central argument of his structural theory of civilian control is that the particular combination of internal and external threats faced by a state (the independent variables) determines the quality of civilian control (the dependent variable). Based on this theory, he expects civilian control to be strongest in times of high external threat and low internal threat and weakest in times of low external threat and high internal threat. Under indeterminate threat environments, such as low external and low internal threats or high external and high internal threats, the quality of civilian control will be determined by intervening variables such as civilian expertise in national security affairs, the orientation of the military, the mode of civilian control (objective or subjective), the cohesion of civilian and military institutions, and the convergence of civilian and military ideas, all of which are subject to the effects of different sorts of military doctrines (Table 1). Born et al. (2006), in their book Civil-Military Relations in Europe: Learning from Crisis and Institutional Change, also use the behavioral approach to analyze the extent of civilian control in 14 European countries. Arguing that civilian control in practice can best be studied in instances of conflict and crisis, the authors examine the behavior of civilian and military authorities under circumstances of tension, when civilian and military actors pursued competing interests or goals. For each country, the authors selected two cases: one concerned a short-term crisis, an event or controversy that occurred over a relatively limited time span and with limited impact, while the other related to an event or process with long-term implications for, or effects on, the institutional reform of the military, and thus a major impact on civil-military relations. The selected countries were then divided into three types of democracy – established, consolidated, and transitional – in order to facilitate clustering and comparative analysis. Comparisons between the case studies identified three distinct pressures in these three types of democracies: exploitation of the military for political purposes in transitional democracies; the process of civilianizing defense ministries in consolidating democracies; and cultural differences between the military and civilian elites in established democracies. Peter Feaver (2003) also uses a behavioral method in his seminal book Armed Servants: Agency, Oversight and Civil-Military Relations, in which he develops his agency model of US CMR. Treating CMR as a bargaining game of principal-agent relations, Feaver offers a two-dimensional matrix: (1) whether civilians monitor intrusively or non-intrusively and (2) whether the military "works" or "shirks." Working means that the military is acting the way civilians want; shirking means that the military is acting to accomplish its own purposes regardless of civilian preferences. Based on this framework, Feaver's analysis of Cold War cases from 1945 to 1980 shows that, for the most part, the military worked rather than shirked, and CMR during this period belongs in the "intrusive monitoring and working" cell (p. 152). The other three general types of CMR are (1) military working with non-intrusive civilian monitoring, (2) military shirking with non-intrusive civilian monitoring, and (3) military shirking with intrusive civilian monitoring.
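Both Desch's threat-based prediction (summarized in Table 1) and Feaver's monitoring/behavior matrix are two-by-two classifications, so each can be written down as a simple lookup table. The sketch below is illustrative only: the variable names and string labels are our own, not the authors'.

```python
# Desch's (1999, p. 14) structural prediction: the quality of civilian
# control as a function of the (external, internal) threat environment.
DESCH = {
    ("high", "high"): "poor",   # high external, high internal threats
    ("high", "low"): "good",    # high external, low internal threats
    ("low", "high"): "worst",   # low external, high internal threats
    ("low", "low"): "mixed",    # low external, low internal threats
}

# Feaver's (2003) agency-model cells: civilian monitoring style crossed
# with whether the military "works" or "shirks."
FEAVER = {
    ("intrusive", "works"): "intrusive monitoring, military working",
    ("intrusive", "shirks"): "intrusive monitoring, military shirking",
    ("non-intrusive", "works"): "non-intrusive monitoring, military working",
    ("non-intrusive", "shirks"): "non-intrusive monitoring, military shirking",
}

# Feaver locates most US Cold War CMR (1945-1980) in this cell:
print(FEAVER[("intrusive", "works")])
# Desch's strongest-control environment:
print(DESCH[("high", "low")])
```

Writing the matrices this way makes explicit that both frameworks classify an observed case rather than explain it; the explanatory work is done by the threat variables (Desch) or the principal-agent incentives (Feaver).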
Civilian Control of Armed Forces, Table 1 Desch's structural theory of civilian control. Source: Desch (1999), p. 14

                        High internal threats     Low internal threats
High external threats   Poor civilian control     Good civilian control
Low external threats    Worst civilian control    Mixed civilian control

Attitudinal/Cultural Approach
The attitudinal approach focuses on ideas, role beliefs, and culture, i.e., the set of values and assumptions held by a collective that helps its members make sense of the world and orient their choices. Military role beliefs, referring narrowly to "military conceptions of their role in politics" and more broadly to "the entire complex of attitudes that define officers' normative models of civil-military relations" (Fitch 1998, p. 61), have been underlined since Plato's Republic as a crucial variable shaping the extent to which civilian control is institutionalized (Bland 2004, p. 30; Dahl 1971, p. 50; Feaver 1999, p. 226; Finer 1962, p. 30; Fitch 1998, 2001; Nunn 1995; Welch and Smith 1974, p. 6). For the culturalists, analysis of military role beliefs is essential, because behavior alone cannot tell us whether the military's compliance with democratic norms stems from "(1) its internalization of democratic norms, (2) its perceptions that actions violating those norms are unnecessary, or (3) its judgments that such actions are desirable but politically unfeasible, given opposition from other actors, foreign or domestic" (Fitch 1998, p. 67). Fitch (1998) has offered perhaps the most comprehensive test of the importance of the officer corps' role beliefs in his work on the Ecuadorian and Argentinean armed forces. Based on in-depth interviews with military officers in Ecuador and Argentina, Fitch argues that there have been more changes in traditional military views than most scholars expected at the beginning of the democratic transition. Particularly in Argentina, he found strong evidence of real progress toward the institutionalization of role beliefs supporting a democratic model of CMR. On the other hand, he found that Ecuadorian military attitudes toward the political role of the military remain ambiguous, reflecting the conflicting pressures of an unstable democratic political context in an international environment that discourages overt military intervention in politics (Fitch 1998, pp. 61–105).
Civilian Control Strategies
Civilian control strategies can be classified into two broad categories: (a) those that affect the ability of the military to undermine control and (b) those that affect the disposition of the military to be defiant (Finer 1962; Feaver 1999, p. 225). Strategies that affect the ability of the military to undermine control include deploying the military far from the centers of political power, keeping the army divided and/or weak, creating parallel military forces capable of counterbalancing each other, dividing the lines of command and suppressing inter-branch communication, and developing a high degree of functional specialization. All of these strategies are inherently limited, since most of them carry the risk of eroding military effectiveness, i.e., the ability of the military to fulfill its missions and functions (Biddle and Long 2004; Brooks 2007; Brown et al. 2016; Pilster and Böhmelt 2011; Quinlivan 1999). Constitutional and administrative restraints that legally bind the military in a subordinate position are necessary yet limited, since they only restrain the military insofar as the military abides by them (Feaver 1999, p. 225). Measures reducing the military's disposition to intervene include cultivating "professionalism" and/or a "norm of obedience" (Huntington 1957). In his seminal book The Soldier and the State, Huntington (1957) differentiates between "subjective" and "objective" control, based on the level of a nation's "autonomous military professionalism." Objective control involves the maximization of military professionalism through the recognition of the military's organizational autonomy in its own professional sphere and a rigid separation of the military from the political sphere. The officer corps is considered professional to the extent that it exhibits the qualities of expertise, responsibility, and corporateness (p. 28). These professional traits are best nurtured in military organizations that ensure competitive entry, advancement based on seniority and merit, an advanced military education establishment, a general staff system, and the esprit de corps and skill of the officer corps (p. 28).
The maximization of military autonomy in these areas, considered the military's own professional sphere, would foster the development of a professional military ethic, which insists on individual and collective subordination to higher authority and opposes intervention in matters outside the military's sphere of professional expertise (p. 79). While objective control encourages an independent military sphere and the development of a professional military ethic conducive to civilian control, the subjective mechanisms of control prevalent in authoritarian regimes involve the maximization of the power of the ruling group in relation to the military, by politicizing the military and binding its interests to those of the ruling civilian regime (pp. 83–84). Subjective civilian control, therefore, encourages the political socialization of the military, so that its values mirror those of the state. Huntington found objective control superior to subjective control for maximizing military effectiveness. The norm of obedience can be cultivated in the army through two basic techniques: (a) altering the ascriptive characteristics of the military so that it will be filled by officers prone to obey and (b) altering the incentives of the military so that, regardless of their nature, the officers will prefer to obey. The ascriptive characteristics of the military can be altered through training and indoctrination in the norm of civilian supremacy (Farrell 2001), the military's integration with society as citizen-soldiers (Moskos and Wood 1988), the convergence of civilian and military values (Janowitz 1960), and recruitment based on political loyalty (Brooks 2007). If civilians cannot completely weaken the ability of the military to undermine control, they can seek to modify the disposition of the military to be "disobedient," either by cultivating political loyalty through bribery or by creating a set of incentives for the military that rewards subordination with autonomy. The disposition of the military to intervene can also be weakened by reinforcing the legitimacy of the civilian government and/or by adopting monitoring mechanisms – such as investigations, rules of engagement, audits, civilian staffs with expertise, and oversight responsibilities – which raise the costs of military insubordination or noncompliant behavior simply by making it more difficult for such action to go unnoticed.
Conclusion
Four conclusions can be drawn from the above analysis. First, the civil-military problematique, or the Guardianship Dilemma, which refers to the challenge of subordinating the military to civilian authorities without hampering its effectiveness, underpins the theory of civilian control. Second, civilian control means more than the absence of a military coup or other forms of overt military intervention: it refers to civilians' decision-making authority over relevant political issues. Thus, although in theory it is absolute and all-encompassing, in practice it is a matter of degree. Third, the concept of civilian control, which is operationalized and measured using institutional, behavioral, and attitudinal approaches, has been replaced by the concept of democratic control in the post-Cold War era. Finally, strategies reducing the military's disposition to intervene are preferred over those that affect the ability of the military to subvert control, since the latter are inherently limited due to their risk of eroding military effectiveness.
Cross-References
▶ Civil-Military Relations
▶ Norms
▶ North Atlantic Treaty Organization (NATO)
▶ Peacebuilding
▶ Post-Cold War Environment
▶ Security Sector Reform
References
Aguero, F. (1995). Soldiers, civilians and democracy: Post-Franco Spain in comparative perspective. Baltimore: Johns Hopkins University Press.
Biddle, S., & Long, S. (2004). Democracy and military effectiveness: A deeper look. Journal of Conflict Resolution, 48, 525–546.
Bland, L. D. (1999). A unified theory of civil-military relations. Armed Forces and Society, 26(1), 7–25.
Bland, L. D. (2001). Patterns in liberal democratic civil-military relations. Armed Forces and Society, 27(4), 525–540.
Bland, L. D. (2004). 'Your obedient servant': The military's role in the civil control of armed forces. In H. Born, K. Haltiner, & M. Malesic (Eds.), Renaissance of democratic control of armed forces in contemporary societies (pp. 25–36). Baden-Baden: Nomos Verlagsgesellschaft.
Born, H. (2003). Learning from best practices of parliamentary oversight of the security sector. In H. Born, P. H. Fluri, & S. Lunn (Eds.), Oversight and guidance: The relevance of parliamentary oversight for the security sector and its reform: A collection of articles on foundational aspects of parliamentary oversight of the security sector. Brussels/Geneva: Geneva Centre for the Democratic Control of Armed Forces.
Born, H., & Lazzarini, C. (2006). Preliminary report on civilian command authority over the armed forces in their national and international operations (CDL-DEM Study No. 389/2006). Strasbourg: European Commission for Democracy through Law.
Born, H., Fluri, P., & Johnsson, A. (Eds.). (2003). Parliamentary oversight of the security sector: Principles, mechanisms and practices (Handbook). Geneva/Belgrade: Inter-Parliamentary Union & Geneva Centre for the Democratic Control of Armed Forces.
Born, H., Haltiner, K., & Malesic, M. (Eds.). (2004). Renaissance of democratic control of armed forces in contemporary societies. Baden-Baden: Nomos Verlagsgesellschaft.
Born, H., Caparini, M., Haltiner, K. W., & Kuhlmann, J. (2006). Civil-military relations in Europe: Learning from crisis and institutional change. New York: Routledge.
Brooks, R. (2007). Introduction. In R. Brooks & E. A. Stanley (Eds.), Creating military power: The sources of military effectiveness (pp. 1–26). Stanford: Stanford University Press.
Brown, C. S., Fariss, C. J., & McMahon, R. B. (2016). Recouping after coup-proofing: Compromised military effectiveness and strategic substitution. International Interactions, 42(1), 1–30.
Colton, T. J. (1979). Commissars, commanders, and civilian authority: The structure of Soviet military politics. Cambridge, MA: Harvard University Press.
Cottey, A., Edmunds, T., & Forster, A. (1999). Democratic control of armed forces in central and Eastern Europe: A framework for understanding civil-military relations in postcommunist Europe (ESRC "One Europe or several?" Working Paper 1/99). Sussex: University of Sussex.
Cottey, A., Edmunds, T., & Forster, A. (2002). The second generation problematic: Rethinking democracy and civil-military relations. Armed Forces and Society, 29(1), 31–56.
Croissant, A., & Kuehn, D. (2009). Patterns of civilian control of the military in East Asia's new democracies. Journal of East Asian Studies, 9(2), 187–218.
Croissant, A., & Kuehn, D. (2017). Introduction. In A. Croissant & D. Kuehn (Eds.), Reforming civil-military relations in new democracies: Democratic control and military effectiveness in comparative perspectives (pp. 1–22). Cham: Springer International Publishing.
Croissant, A., Kuehn, D., Chambers, P. W., & Wolf, S. O. (2010). Beyond the fallacy of coup-ism: Conceptualizing civilian control of the military in emerging democracies. Democratization, 17(5), 950–975.
Dahl, R. A. (1971). Polyarchy: Participation and opposition. New Haven: Yale University Press.
Dauber, C. (1998). The practice of argument: Reading the condition of civil-military relations. Armed Forces & Society, 24(3), 435–446.
Desch, M. (1999). Civilian control of the military: The changing security environment. Baltimore: Johns Hopkins University Press.
Diamond, L. (1993). The globalization of democracy. In R. O. Slater, B. M. Schutz, & S. R. Dorr (Eds.), Global transformation and the third world. Boulder: Lynne Rienner.
Faleg, G. (2012). Between knowledge and power: Epistemic communities and the emergence of security sector reform in the EU security architecture. European Security, 21(2), 161–184. https://doi.org/10.1080/09662839.2012.665882.
Farrell, T. (2001). Transnational norms and military development: Constructing Ireland's professional army. European Journal of International Relations, 7(1), 63–102.
Feaver, P. D. (1998). Crisis as shirking: An agency theory explanation of the souring of American civil-military relations. Armed Forces and Society, 24(3), 407–434.
Feaver, P. D. (1999). Civil-military relations. Annual Review of Political Science, 2, 211–241.
Feaver, P. D. (2003). Armed servants: Agency, oversight, and civil-military relations. Cambridge, MA: Harvard University Press.
Finer, S. E. (1962). The man on horseback: The role of the military in politics. New York: Frederick A. Praeger.
Fitch, J. S. (1998). The armed forces and democracy in Latin America. Baltimore: The Johns Hopkins University Press.
Fitch, J. S. (2001). Military attitudes toward democracy in Latin America: How do we know if anything has changed? In D. Pion-Berlin (Ed.), Civil-military relations in Latin America: New analytical perspectives (pp. 59–87). Chapel Hill/London: The University of North Carolina Press.
Greenwood, D. (2006). Turkish civil-military relations and the EU: Preparation for continuing convergence (final expert report of an international task force). In S. Faltas & S. Jansen (Eds.), Governance and the military: Perspectives for change in Turkey (Harmonie Paper No. 19, pp. 21–68). The Netherlands: Centre for European Security Studies.
Hänggi, H. (2003). Making sense of security sector governance. In H. Hänggi & T. H. Winkler (Eds.), Challenges of security sector governance (pp. 3–23). Münster: Geneva Centre for the Democratic Control of Armed Forces.
Huisman, S. (2002). Assessing democratic oversight of the armed forces (Working Paper No. 84). Geneva: Geneva Centre for the Democratic Control of Armed Forces (DCAF).
Huntington, S. (1957). The soldier and the state: The theory and politics of civil-military relations. Cambridge, MA: Harvard University Press.
Janowitz, M. (1960). The professional soldier: A social and political portrait. London: Macmillan.
Kemp, K. W., & Hudlin, C. (1992). Civil supremacy over the military: Its nature and limits. Armed Forces and Society, 19(1), 7–26.
Kohn, R. (1994). Out of control: The crisis in civil-military relations. National Interest, 35(Spring), 3–17.
Levy, Y. (2016). What is controlled by civilian control of the military? Control of the military vs. control of militarization. Armed Forces & Society, 42(1), 75–98. https://doi.org/10.1177/0095327X14567918.
Lunn, S. (2003). The democratic control of armed forces in principle and practice. In H. Born, P. H. Fluri, & S. Lunn (Eds.), Oversight and guidance: The relevance of parliamentary oversight for the security sector and its reform: A collection of articles on foundational aspects of parliamentary oversight of the security sector (pp. 13–38). Brussels/Geneva: Geneva Centre for the Democratic Control of Armed Forces.
Moskos, C. C., & Wood, F. R. (Eds.). (1988). The military: More than just a job? Washington, DC: Pergamon-Brassey's.
Nunn, F. (1995). The South American military and (re)democratization: Professional thought and self-perception. Journal of Inter-American Studies and World Affairs, 37(2), 1–56.
Pilster, U., & Böhmelt, T. (2011). Coup-proofing and military effectiveness in interstate wars, 1967–99. Conflict Management and Peace Science, 28, 331–350.
Pion-Berlin, D. (1992). Military autonomy and emerging democracies in South America. Comparative Politics, 25(1), 83–102.
Pion-Berlin, D. (1998). The limits to military power: Institutions and defense budgeting in democratic Argentina. Studies in Comparative International Development, 33(Spring), 94–115.
Pion-Berlin, D. (Ed.). (2001). Civil-military relations in Latin America: New analytical perspectives. Chapel Hill/London: The University of North Carolina Press.
Pion-Berlin, D. (2009). Defense organization and civil-military relations in Latin America. Armed Forces and Society, 35(3), 562–586.
Quinlivan, J. T. (1999). Coup-proofing: Its practice and consequences in the Middle East. International Security, 24(2), 131–165.
Stepan, A. (1988). Rethinking military politics: Brazil and the southern cone. Princeton: Princeton University Press.
Trinkunas, H. A. (2005). Crafting civilian control of the military in Venezuela: A comparative perspective. Chapel Hill: University of North Carolina Press.
Welch, C. E., Jr. (1987). No farewell to arms? Military disengagement from politics in Africa and Latin America. Boulder: Westview Press.
Welch, C. A., & Smith, A. K. (1974). Military role and rule: Perspectives on civil-military relations. North Scituate: Duxbury Press.
Civil-Military Relations
William A. Taylor
Angelo State University, San Angelo, TX, USA

Keywords
Control of the military · Coup d'etat · Disarmament, demobilization and reintegration (DDR) · Economic development · Insurrection · Interagency cooperation · Military effectiveness · Military roles · Military service · Mutiny · Operational challenges · Post-conflict resolution · Private military security contractor (PMSC) · Rebellion · Revolution in military affairs (RMA) · Security sector reform (SSR)
Introduction
Civil-military relations are the vital connections between a government, its military, and the society that it seeks to protect. They encompass a broad range of relationships that occur at distinct levels and in discrete timeframes, including control of the military, military roles, military service, interagency cooperation, military effectiveness, and operational challenges. They also vary greatly depending on the specific civilization within which they exist; civil-military relations are therefore distinctive within a particular culture. An individual nation's ideology, political system, social fabric, historical traditions, norms and values, and government structure, among other factors, all influence its civil-military relations. For example, US civil-military relations differ significantly from those in Russia. Such a situation has led to the related notion of strategic culture, in which the way that a certain state tackles a global security studies issue assumes unique characteristics.
Background and Context
Civil-military relations are a foundational concept in global security studies and include six major categories: control, roles, service, cooperation, effectiveness, and challenges. The first major component of civil-military relations is control of the military. This consideration details what political authority, if any, commands armed forces during both peace and war. In a democracy, for instance, civilians exercise control of the military, albeit to varying degrees and with relative levels of success. By contrast, in an authoritarian regime, the armed forces reign supreme and receive little oversight from civilian leaders. Control of the military exists at several echelons, including command relationships during war and supervision during peace. On the battlefield, civil-military relations encompass coordination and disputes between generals and politicians regarding appropriate military strategy, as well as adherence by military commanders to orders from civilian authorities, including issues of restraint that arise over differing interpretations of limited or unlimited ends. During peacetime, control of the military governs how officers participate in crafting strategy, as well as funding and policy oversight of the military by civil establishments. It also entails the degree to which armed forces are involved in domestic politics: the more politicized the military leadership becomes, the less civilian rule exists. In its most extreme form, a partisan armed force could launch a coup d'etat and seize dominion militarily and politically. Civilian control of the military also entails investigations into situations where armed forces have defied civilian authority or plan to do so, including mutinies, insurrections, and rebellions. Civil-military relations also encompass the numerous mutable roles that civilian leaders assign to armed forces. Military roles transform in response to shifts in the international security environment, including revolutions in military affairs (RMAs).
These variations in military roles at specific points in time can result from such technological advances as the emergence of new weapons, including tanks, submarines, airplanes, nuclear weapons, or more recently the Internet, such social fluctuations as the creation of the levée en masse or institution of conscription under the banner of nationalism, or such strategic breaks as the operational concepts of blitzkrieg,
strategic bombing, amphibious operations, or network-centric warfare. In the contemporary international security environment, the spectrum of missions for a military is broad and includes everything from disaster relief during peacetime to strategic nuclear war, the apocalyptic apex of armed conflict. During peace, armed forces might undertake a range of options, including civil support, humanitarian assistance, peacekeeping, shows of force, or counterdrug operations, among many others. During war, functions are even more diverse. A military might undertake counterterrorism, counterinsurgency, limited conflict, and major theater war, among numerous other tasks. This broad gamut of encounters, ranging from peaceful interaction to general war, involves four broad categories, including peacetime military engagement, peace support, counterinsurgency, and major combat. Civil-military relations elucidate debates regarding which roles to prepare for and pursue, thereby ensuring a balanced approach wherein civilian leaders articulate clear and achievable vital national interests and military commanders use their experience to craft strategies to achieve them.
Civil-military relations also encompass assignments regarding military service. This characteristic involves policies regarding who serves in the military and how they serve. The first consideration examines whether armed forces broadly represent the society that they protect and studies whether certain segments of the populace are over-represented or under-represented within the nation's military. Civil-military relations seek symmetry between the military and society concerning who serves. When armed forces fail to embody the society they protect, a schism between citizens and soldiers forms, labeled by many observers as a civil-military gap.
At its most basic level, steadiness in this arena ensures that soldiers are citizens of their country; at its most specific point, this equilibrium safeguards that armed forces are not overly reliant on a particular demographic or geographic segment of the population to fill their ranks. In an extreme form of distortion, a society might rely exclusively on foreign fighters to serve in its military, thereby completely divorcing its citizens from its soldiers
and vice versa. The second deliberation examines the important dichotomy between compulsion and volunteerism regarding the manner in which a society provides personnel for its military. A country can resort to conscription to fill the military’s ranks or can rely exclusively on volunteers to do so; a nation can also use some combination of the two. This situation can also morph depending on circumstances. There can also be global trends during certain epochs. Most nations during the twentieth century relied on some form of draft, especially during the world wars. In contrast, many countries today depend on volunteers to fill the ranks. The contrast between peace and war also plays a significant role in military service. A nation could prefer to use volunteers during peacetime but might be forced to use conscription during hostilities, especially during total wars where the entire country must mobilize for conflict. Another important pillar of civil-military relations is interagency cooperation. The degree to which a state’s military works together with civilian agencies influences civil-military relations. A completely insular armed force might interact very little with civilian agencies or hold them completely subordinate to its own power, whereas a military that has proficiency in interagency cooperation will plan and coordinate with civilian agencies, supporting their overarching goals. This is true not only within a particular nation during planning for operations but also once it deploys soldiers into a foreign country. These forces will interact with large numbers of civilians in theater, including the population of the host country; local political, economic, and cultural leaders; as well as nongovernmental organizations and relief workers operating there. 
Interagency cooperation is a hallmark of sound civil-military relations, primarily because it ensures that armed forces consider and further such critical factors as politics, economics, and development in addition to purely military goals. It also contributes to integrating all instruments of national power, wherein the power wielded by generals and admirals joins with diplomatic, informational, and economic instruments of national power exercised by civilian agencies.
Another critical factor of civil-military relations is military effectiveness. Civil-military relations greatly impact effectiveness, which considers how states cultivate martial prowess and what characteristics make certain nations more successful than others at that endeavor. Balanced civil-military relations can enhance military effectiveness, ensuring that armed forces do not pursue solely bellicose ends without adequate contemplation of the political goals of any conflict. In taking a holistic view of military effectiveness, leaders must integrate fighting dominance and strategy within the broader concept of national sovereignty and grand strategy, wherein the military is only one tool available to civilian leaders. In the absence of stable civil-military relations, commanders might focus exclusively on operational objectives, which might prove successful at the tactical level but unravel at the strategic level due to inadequate consideration of the political, economic, and social dimensions of warfare. Accounting for civil-military relations with a balanced approach enhances military effectiveness and thereby contributes to sound grand strategy.
Civil-military relations also entail a number of operational challenges related to employing armed forces in the contemporary international security environment, including private military security contractors (PMSCs); economic development; security sector reform (SSR); disarmament, demobilization, and reintegration (DDR); building partner capacity; and post-conflict resolution. Private military security contractors (PMSCs) are personnel that a state has contracted through a commercial company to augment, or in an extreme form replace, uniformed service members. Often colloquially referred to as mercenaries, PMSCs present both opportunities and challenges for civil-military relations.
They can provide short-term battlefield prowess without the long-term requirements and costs of training, equipping, and providing veterans’ care in the aftermath of their service. This benefit is often referred to as surge capacity by proponents of PMSCs. They also, however, present many legal quandaries because private entities abide by different rules and regulations than those governing
the nation’s military service members. Because a business does not usually base its headquarters in the host nation, PMSCs are not subject to the laws of that country; prosecution of crimes, especially involving deadly force, therefore become mired in legal complexities. Politically, PMSCs can undercut vital national interests if they deviate from established norms and standards of conduct because civilians in the host nation understandably view them as official military forces regardless of significant distinctions between the two categories of soldiers. Another operational challenge of civil-military relations is economic development. Stability operations, a common contemporary mission, entail not only the use of force but also economic development pursued by armed forces. As a result, when practicing stability operations, a military must conduct economic development, which requires a great deal of interaction with civilians as well as significant noncombatant knowledge, especially in business, economics, and engineering. Often, reserve units fulfill such tasks due to their dual nature as citizens and soldiers and their specialized skills in such related fields as civil affairs, public relations, civil engineering, legal matters, and law enforcement. Disarmament, demobilization, and reintegration (DDR) is another operational challenge of civil-military relations. DDR seeks to provide reconciliation of past conflicts so as to prevent future violence and thereby provide a stable foundation upon which to rebuild. The first step involves deactivating past combatants and removing weapons from easy access within the society. Limiting the number of weapons in circulation often lessens the risks of a resumption of violence or retribution based on group grievances. The second phase includes getting soldiers out of military units, recording who they are, and documenting what skills they have. 
Demobilization deescalates tensions and transitions the society from war to peace. The third process, reintegration, converts former fighters into employed workers and enlists them in the reconstruction and rebuilding of the country. Doing so ensures that former soldiers become productive citizens; it also prevents
unemployment, which serves as combustible tinder for injustices that might reignite into further bloodshed. Another contemporary security challenge of civil-military relations is security sector reform (SSR), which restructures a foreign partner nation's security institutions to promote the effectiveness, legitimacy, and accountability of its military and police forces. Whereas DDR is a short-term measure immediately after conflict, SSR is a long-term endeavor that builds upon the success of the former. Security sector reform is especially important in nations where state institutions have a history of partiality, corruption, and repression and therefore lack the necessary legitimacy and professionalism to provide a firm foundation for reconstruction and recovery. Another contemporary operational challenge of civil-military relations is building partner capacity, wherein a military provides advice, training, and assistance to an ally to bolster that nation's capability to defend itself. Whereas SSR addresses underlying failings in terms of corruption and repression, building partner capacity equips and trains an ally's military to improve its overall effectiveness. Doing so ensures that the ally will be able to manage the security tasks before it with decreasing amounts of foreign assistance. Finally, post-conflict resolution is a critical contemporary operational challenge of civil-military relations. This endeavor is the most complex, yet most vital, task of ensuring sustainable peace. It requires robust civil-military relations in order to guarantee that all facets of society, civilian and military, are optimized for strength and harmony. Post-conflict resolution entails the various activities previously discussed, including economic development, DDR, SSR, and building partner capacity; in sum, it ensures that military success on the battlefield is translated into political and economic success on the home front.
Conclusion
Civil-military relations are essential to global security studies. The complex but vital relationships
between a government, its military, and the society that it protects are an indispensable consideration in the contemporary international security environment. Globalization has blurred the lines between battlefields and home fronts and has resulted in a contemporary international security environment that necessitates far more interaction between military and civilian personnel than in previous eras. The contemporary international security environment also has blended the realms of military and civilian operations, requiring armed forces to function within the noncombatant dominion and civilians to cooperate with soldiers within armed arenas. The six major facets of civil-military relations, including control, roles, service, cooperation, effectiveness, and challenges, all demonstrate this trend and will grow in importance. As a result, global security studies will continue to explore and expand the significance of civil-military relations.
Cross-References
▶ Army Recruitment of Ethnic Minorities
▶ Balance of Power
▶ Child Soldiers
▶ Civil Liberties
▶ Conflict and Conflict Resolution
▶ Disarmament
▶ Emerging Powers
▶ Ethics of Security
▶ Humanitarian Intervention
▶ Insurgents and Insurgency
▶ Peace and Reconciliation
▶ Protection of Civilians (POC)
▶ Role of the Private Sector
▶ Security and Citizenship
▶ Security Sector Reform
▶ Stability Operations
▶ State Legitimacy
▶ Strategic Culture
Clean Development Mechanism (CDM)
Dan Liu
School of Humanities and Social Science, North University of China, Taiyuan, China
Keywords
Kyoto Protocol · Flexibility mechanism · CERs · Common but differentiated responsibilities · Sustainable development · Environmental security
Definition
The Clean Development Mechanism (CDM) is one of the three market-based flexibility mechanisms established by the Kyoto Protocol to the United Nations Framework Convention on Climate Change (1997). It is an effective mechanism to reduce emissions by promoting sustainable
energy projects, which allows emissions trading among different countries and helps in achieving goals of emissions reduction (Nautiyal and Varun 2015: 121). CDM projects earn tradable, saleable certified emission reduction (CER) credits that can be used by industrialized (Annex I) countries and firms to meet a part of their emission reduction targets under the Kyoto Protocol. That is, the CDM allows industrialized (Annex I) countries and firms to offset their national emissions by investing in greenhouse gas (GHG) reduction activities in the developing world (non-Annex I nations) in return for emissions credits, known as CERs. The CDM is the main source of income for the UNFCCC Adaptation Fund, which was established to finance adaptation projects and programs in developing country Parties to the Kyoto Protocol that are particularly vulnerable to the adverse effects of climate change. The Adaptation Fund is financed by a 2% levy on CERs issued by the CDM (https://cdm.unfccc.int/about/index.html).
Introduction
The details of the CDM are in Article 12 of the Kyoto Protocol (https://unfccc.int/resource/docs/convkp/kpeng.pdf#page=12), as follows:
1. A clean development mechanism is hereby defined.
2. The purpose of the clean development mechanism shall be to assist Parties not included in Annex I in achieving sustainable development and in contributing to the ultimate objective of the Convention, and to assist Parties included in Annex I in achieving compliance with their quantified emission limitation and reduction commitments under Article 3.
3. Under the clean development mechanism: (a) Parties not included in Annex I will benefit from project activities resulting in certified emission reductions; and (b) Parties included in Annex I may use the certified emission reductions accruing from such project activities to contribute to compliance with part of their quantified emission limitation and reduction commitments under Article 3, as determined by the Conference of the Parties serving as the meeting of the Parties to this Protocol.
4. The clean development mechanism shall be subject to the authority and guidance of the Conference of the Parties serving as the meeting of the Parties to this Protocol and be supervised by an executive board of the clean development mechanism.
5. Emission reductions resulting from each project activity shall be certified by operational entities to be designated by the Conference of the Parties serving as the meeting of the Parties to this Protocol, on the basis of: (a) Voluntary participation approved by each Party involved; (b) Real, measurable, and long-term benefits related to the mitigation of climate change; and (c) Reductions in emissions that are additional to any that would occur in the absence of the certified project activity.
6. The clean development mechanism shall assist in arranging funding of certified project activities as necessary.
7. The Conference of the Parties serving as the meeting of the Parties to this Protocol shall, at its first session, elaborate modalities and procedures with the objective of ensuring transparency, efficiency and accountability through independent auditing and verification of project activities.
8. The Conference of the Parties serving as the meeting of the Parties to this Protocol shall ensure that a share of the proceeds from certified project activities is used to cover administrative expenses as well as to assist developing country Parties that are particularly vulnerable to the adverse effects of climate change to meet the costs of adaptation.
9. Participation under the clean development mechanism, including in activities mentioned in paragraph 3 (a) above and in the acquisition of certified emission reductions, may involve private and/or public entities, and is to be subject to whatever guidance may be provided by the executive board of the clean development mechanism.
10. Certified emission reductions obtained during the period from the year 2000 up to the beginning of the first commitment period can be used to assist in achieving compliance in the first commitment period.
Briefly speaking, to assist Annex I countries in their compliance efforts, the Kyoto Protocol establishes three flexibility mechanisms: International Emissions Trading (IET), Joint Implementation (JI), and the CDM. These flexibility mechanisms help Annex I countries (and private actors) meet their targets by purchasing emissions
offsets or credits abroad through international carbon markets (Schatz 2008: 707). The CDM was designed to meet a dual objective: to help developed countries fulfill their commitments to reduce emissions, and to assist developing countries in achieving sustainable development (https://cdm.unfccc.int/about/dev_ben/index.html).
Legal Governance of the CDM
As shown on the UNFCCC website (https://cdm.unfccc.int/EB/governance.html), the legal governing bodies and their rules on the CDM are as follows:
The CDM Executive Board
The CDM Executive Board (CDM EB) supervises the Kyoto Protocol's clean development mechanism under the authority and guidance of the Conference of the Parties serving as the Meeting of the Parties to the Kyoto Protocol (CMP). The CDM EB is fully accountable to the CMP. The CDM EB will be the ultimate point of contact for CDM project participants for the registration of projects and the issuance of CERs.
Panels/Working Groups/Teams
The CDM EB may establish committees, panels, or working groups to assist it in the performance of its functions. The CDM EB shall draw on the expertise necessary to perform its functions, including from the United Nations Framework Convention on Climate Change (UNFCCC) roster of experts. In this context, it shall take fully into account the consideration of regional balance (Rule 32 of the rules of procedure of the CDM EB).
A Designated National Authority
A designated national authority (DNA) is the organization granted responsibility by a Party to authorize and approve participation in CDM projects. Establishment of a DNA is one of the requirements for participation by a Party in the CDM. The main task of the DNA is to assess potential CDM projects to determine whether they will
assist the host country in achieving its sustainable development goals, and to provide a letter of approval to project participants in CDM projects. This letter of approval must confirm that the project activity contributes to sustainable development in the country, that the country has ratified the Kyoto Protocol, and that participation in the CDM is voluntary. It is then submitted to the CDM Executive Board to support the registration of the project. DNAs have additional roles to play, such as the submission of proposed standardized baselines for their country, among others. These responsibilities have increased as the CDM has evolved.
A Designated Operational Entity
A designated operational entity (DOE) is an independent auditor accredited by the CDM Executive Board (CDM EB) to validate project proposals or verify whether implemented projects have achieved planned greenhouse gas emission reductions. More specifically, the two key functions of DOEs are:
1. Validation: assessing whether a project proposal meets the eligibility requirements and subsequently requesting registration of the project by the CDM EB.
2. Verification/certification: verifying emission reductions from a project, certifying as appropriate, and recommending to the CDM EB the amount of certified emission reductions (CERs) that should be issued.
Usually, for large-scale projects, a DOE may only conduct either validation or verification of the same project. However, upon request, the CDM EB may allow a single DOE to perform both functions (validation and verification/certification).
The CDM Project Cycle
Generally, there are seven steps in the CDM project cycle, according to the UNFCCC website (https://cdm.unfccc.int/Projects/diagram.html):
Project Design
The project participant prepares the project design document, making use of the approved emissions baseline and monitoring methodology. Design steps entail:
• Project Design Document (CDM-PDD): The project design document form was developed by the CDM EB on the basis of Appendix B of the CDM modalities and procedures. Project participants shall submit information on their proposed CDM project using the CDM-PDD form.
• Proposal of a new baseline and/or monitoring methodology: The proposed new baseline methodology shall be submitted by the designated operational entity to the CDM EB for review and approval, prior to validation and submission for registration of the project.
• Use of an approved methodology: An approved methodology is a methodology previously approved by the CDM EB and made publicly available along with any relevant guidance. When an approved methodology is used, the designated operational entity may proceed with the validation of the CDM project activity and submit the CDM-PDD with a request for registration.
National Approval
The project participant secures a letter of approval from the Party. The designated national authority (DNA) of a Party involved in a proposed CDM project activity shall submit a letter indicating the following:
• That the country has ratified the Kyoto Protocol
• That participation is voluntary
• And, from host Parties, a statement that the proposed CDM project activity contributes to sustainable development (EB 16, Annex 6, paragraph 1)
Validation
The project design document is validated by an accredited designated operational entity, a private third-party certifier.
Validation is the process of independent evaluation of a project activity by a designated operational entity against the requirements of the CDM as set out in the CDM modalities and procedures and relevant decisions of the Kyoto Protocol Parties and the CDM EB, on the basis of the project design document.
Registration
The validated project is submitted by the DOE to the CDM EB with a request for registration. Registration is the formal acceptance by the CDM EB of a validated project as a CDM project activity. Registration is the prerequisite for the verification, certification, and issuance of CERs related to that project activity. Registration steps entail:
• Completeness check by the secretariat
• Vetting by the secretariat
• Vetting by the CDM EB
• If a Party or three members of the CDM EB request a review, the project undergoes review; otherwise it proceeds to registration
Monitoring
The project participant is responsible for monitoring actual emissions according to the approved methodology.
Verification
The designated operational entity verifies that emission reductions took place, in the amount claimed, according to the approved monitoring plan. Verification is the independent review and ex-post determination by the designated operational entity of the monitored reductions in anthropogenic emissions by sources of greenhouse gases that have occurred as a result of a registered CDM project activity during the verification period. Certification is the written assurance by the designated operational entity that, during the specified period, the project activity achieved the emission reductions as verified.
CER Issuance
The designated operational entity submits a verification report with a request for issuance to the CDM EB.
Issuance steps entail:
• Completeness check by the secretariat
• Vetting by the secretariat
• Vetting by the CDM EB
• If a Party or three members of the CDM EB request a review, the issuance request undergoes review; otherwise it proceeds to issuance
CDM Project Activities
The CDM project activities include four types, overviewed below in a brief description of the CDM project process, as introduced on the UNFCCC website for examples (https://cdm.unfccc.int/Projects/guides.html).
Example 1: CDM Projects
• Project Activity Design: The Guidelines for completing the CDM-PDDs and the Glossary of CDM Terms have been developed by the Executive Board on the basis of the CDM modalities and procedures and the subsequent decisions by the Board. Project participants shall submit information on their proposed CDM project activity using a project design document (CDM-PDD). Further project design requirements are detailed in the CDM Project Standard.
• Notification of CDM Prior Consideration: The submission of the "prior consideration of the CDM" form within 6 months of the project start date is a mandatory requirement for all projects which have already started before a PDD has been published for public comments or a new methodology/revision of methodology has been proposed.
• Proposal of a New Baseline and/or Monitoring Methodology: The new baseline methodology shall be submitted by a designated operational entity (DOE) to the CDM EB for review, prior to validation and submission for registration of this project activity, with the draft project design document (CDM-PDD), including a description of the project and identification of the project participants.
• Use of an Approved Methodology: An approved methodology is a methodology previously approved by the Executive Board and made publicly available along with any relevant guidance. In the case of approved methodologies, the designated operational entities may proceed with the validation of the CDM project activity and submit the project design document (CDM-PDD) for registration.
• Validation of the CDM Project Activity: Validation is the process of independent evaluation of a CDM project activity or program of activities (PoA) by a DOE against the requirements of the CDM rules and requirements, on the basis of the PDD (or PoA-DD and CPA-DDs).
• Registration of the CDM Project Activity: Registration is the formal acceptance by the EB of a validated project as a CDM project activity. Registration is the prerequisite for the verification, certification, and issuance of CERs related to that project activity.
• Certification/Verification of the CDM Project Activity: Verification is the periodic independent review and ex-post determination by the designated operational entity of the monitored reductions in anthropogenic emissions by sources of greenhouse gases that have occurred as a result of a registered CDM project activity during the verification period. Certification is the written assurance by the designated operational entity that, during a specified time period, a project activity achieved the reductions in anthropogenic emissions by sources of greenhouse gases as verified.
• Issuance of CERs: Issuance is the instruction by the CDM EB to the CDM Registry Administrator to issue a specified quantity of CERs, lCERs, or tCERs for a project activity or PoA, as applicable, into the pending account of the Board in the CDM registry, for subsequent distribution to accounts of project participants in accordance with the CDM rules and requirements.
Example 2: Small-scale CDM Projects
A project which is eligible to be considered as a small-scale CDM project activity can benefit from the "Simplified modalities and procedures for small-scale clean development mechanism project activities" (decision 4/CMP.1, Annex II), which were adopted by the COP/MOP at its first session. Note that paragraph 28 of decision 1/CMP.2 (Further guidance relating to the clean development mechanism) revises the definitions for small-scale CDM project activities referred to in paragraph 6 (c) of decision 17/CP.7. In order to reduce transaction costs associated with preparing and implementing a CDM project activity, the simplified modalities and procedures provide for the following:
• A simplified project design document (most recent version of the CDM-SSC-PDD)
• Simplified methodologies for baseline determination and monitoring plans
• Specific guidelines making simplified provisions for small-scale project activities
• Simplified provisions for environmental impact analysis
• The same DOE can validate as well as verify and certify emission reductions for a specific SSC CDM project activity
Example 3: Afforestation and Reforestation (A/R) CDM Projects
An A/R project would have to complete the following stages, with a view to the following considerations, as described below.
• A/R Project Activity Design: The project design document for afforestation and reforestation project activities (CDM-A/R-PDD) and the glossary of terms have been developed by the Executive Board on the basis of Appendix B of the CDM modalities and procedures for afforestation and reforestation project activities. Project participants shall submit information on their proposed CDM project activity using the project design document for afforestation and reforestation project activities (CDM-A/R-PDD).
• Proposal of a New A/R Baseline and/or Monitoring A/R Methodology: The new A/R baseline methodology shall be submitted by the designated operational entity to the Executive Board for review, prior to validation and submission for registration of this afforestation and reforestation project activity, with the draft project design document (CDM-AR-PDD), including a description of the project and identification of the project participants.
• Use of an Approved A/R Methodology: An approved A/R methodology is a methodology previously approved by the Executive Board and made publicly available along with any relevant guidance. In the case of approved methodologies, the designated operational entities may proceed with the validation of the CDM project activity and submit the project design document for afforestation and reforestation project activities (CDM-AR-PDD) for registration.
• Validation of the CDM A/R Project Activity: Validation is the process of independent evaluation of an A/R project activity by a designated operational entity against the requirements of the CDM as set out in decision 19/CP.9, the present annex, and relevant decisions of the COP/MOP, on the basis of the project design document for afforestation and reforestation project activities, as outlined in Appendix B.
• Registration of the A/R CDM Project Activity: Registration is the formal acceptance by the Executive Board of a validated project as an A/R CDM project activity. Registration is the prerequisite for the verification, certification, and issuance of CERs related to that A/R project activity.
• Certification/Verification of the A/R CDM Project Activity: Verification is the periodic independent review and ex-post determination by the designated operational entity of the monitored reductions in anthropogenic emissions by sources of greenhouse gases that have occurred as a result of a registered A/R CDM project
activity during the verification period. Certification is the written assurance by the designated operational entity that, during a specified time period, a project activity achieved the reductions in anthropogenic emissions by sources of greenhouse gases as verified.
• Issuance of CERs
Issuance is the instruction by the CDM Executive Board to the CDM Registry Administrator to issue a specified quantity of CERs, lCERs, or tCERs for a project activity or PoA, as applicable, into the pending account of the Board in the CDM registry, for subsequent distribution to the accounts of project participants in accordance with the CDM rules and requirements.
Example 4: Small-scale A/R CDM Projects
A project which is eligible to be considered a small-scale A/R CDM project activity can benefit from the simplified modalities and procedures adopted by the Conference of the Parties at its eighth session ("Simplified modalities and procedures for small-scale A/R clean development mechanism project activities"). In order to reduce the transaction costs associated with preparing and implementing a CDM project activity, the simplified modalities and procedures provide for the following simplifications:
• A simplified project design document (most recent version of the CDM-SSC-AR-PDD)
• Simplified methodologies for baseline determination and monitoring plans
• Several small-scale afforestation or reforestation project activities under the CDM may be bundled for the purpose of validation. An overall monitoring plan that monitors performance of the constituent project activities on a sample basis may be proposed for bundled project activities. If bundled project activities are registered with an overall monitoring plan, this monitoring plan shall be implemented and each verification/certification of the net anthropogenic removals by sinks achieved shall cover all of the bundled project activities. Note, however, the provision for avoidance of de-bundling of larger project activities, as provided in Appendix C.
• A single designated operational entity (DOE) may perform validation as well as verification and certification for a small-scale afforestation or reforestation project activity under the CDM or for bundled small-scale afforestation and reforestation project activities under the CDM
• Simplified provisions for environmental impact analysis
• Further guidance related to the registration fee for proposed A/R CDM project activities (Version 01, EB 36 Annex 21)
• A shorter review period for the registration of SSC CDM project activities

Conclusion
The CDM was established by the Kyoto Protocol in consideration of the sustainable development principle as well as the principle of common but differentiated responsibilities. Its purpose is to assist industrialized (Annex I) countries and firms in achieving their reduction commitments, to assist the developing world (non-Annex I nations) in attaining sustainable development through low-carbon technology transfer, and to achieve the Convention's ultimate goal of preventing dangerous anthropogenic interference with the climate system (Schatz 2008: 709). Theoretically, the benefits of CDM projects include investment in climate change mitigation projects in developing countries, the transfer or diffusion of technology in host countries, and the improvement of community livelihoods through the creation of employment or increased economic activity. Generally, the CDM may contribute to both environmental security and economic security. The CDM industry has expanded rapidly over the years, and statistics support this growth (Kang and Paik 2010: 32). However, without financial and technological bridges between developed and developing countries, such as the commitments detailed in the UNFCCC and the CDM, any global climate change policy will struggle to succeed (Wilder and Curnow 2001: 582). Thus the focus of the current climate change negotiations should be
to build the detailed financial resources and technology transfer bridges set out in the UNFCCC and the Paris Agreement, in accordance with the principle of common but differentiated responsibilities, in order to assist developed and developing countries in implementing these provisions. As the international governing framework transitions from the Kyoto Protocol to the Paris Agreement, the CDM or a successor mechanism will remain a component of the emerging governance architecture, both as a potential partner for current and emerging cap-and-trade schemes and as an important transitional staging post towards a more comprehensive emissions trading framework (Kelly 2018: 433). Indeed, further progress for the CDM will depend on a more secure legal environment, resting on the dual pillars of international law and national laws.
Cross-References
▶ Economic Security
▶ Environmental Security
▶ Environmental Security Complexes
▶ Greenhouse Gas Emissions
References
Clean Development Mechanism (CDM). Retrieved from https://cdm.unfccc.int/
Kang, N. H., & Paik, M. J. (2010). Clean development mechanism preferred: Flexibility mechanism in Kyoto protocol. Asian Business Lawyer, 6, 31–48.
Kelly, G. (2018). Assessing the climate governance contribution and future of the clean development mechanism. Nordic Journal of International Law, 87(4), 393–435.
Kyoto protocol to the United Nations framework convention on climate change. Retrieved from https://unfccc.int/resource/docs/convkp/kpeng.pdf#page=12
Nautiyal, H., & Varun. (2015). Clean development mechanism: A key to sustainable development. In P. Thangavel & G. Sridevi (Eds.), Environmental sustainability. New Delhi: Springer. https://doi.org/10.1007/978-81-322-2056-5_7
Schatz, A. (2008). Discounting the clean development mechanism. Georgetown International Environmental Law Review, 20(4), 703–742.
Wilder, M. (2001). The clean development mechanism, forum: The Kyoto protocol: Politics and practicalities. University of New South Wales Law Journal, 24(2), 577–582.
Further Reading
Addaney, M. (2018). The clean development mechanism and environmental protection in rapidly developing countries: Comparative perspectives and lessons from China and India. Tsinghua China Law Review, 10(2), 297–331.
Asia-Pacific Partnership on Clean Development and Climate. Retrieved from https://www.asiapacificpartnership.org/
Condon, A. (2016). The odd couple: Uniting climate change mitigation and sustainable development under the clean development mechanism. Law and Development Review, 9(1), 153–176.
Curnow, P. (2001). Supplementarity and the flexibility mechanisms under the Kyoto protocol: How flexible is "Flexible", notes and commentaries. Asia Pacific Journal of Environmental Law, 6(2), 165–182.
Ekardt, F. (2012). The clean development mechanism as a governance problem. Emerging carbon markets in the developing world: Trends and perspectives. Carbon & Climate Law Review, 2012(4), 396–407.
EU ETS. Retrieved from https://ec.europa.eu/enviorment/climat/emission/indexen.htm
Glossary of CDM terms. Retrieved from https://cdm.unfccc.int/filestorage/e/x/t/extfile-20200812172710158Glossary_CDM.pdf/Glossary_CDM.pdf?t=Q2V8cWpmZ3AzfDBBv9PotwQVKSLB4pfAyYve
Jiang, X. (2013). Legal issues for implementing the CDM in China. Berlin/Heidelberg: Springer. https://doi.org/10.1007/978-3-642-24737-8
Oberthür, S., & Ott, H. E. (1999). The clean development mechanism (Article 12). In The Kyoto protocol. Berlin/Heidelberg: Springer. https://doi.org/10.1007/978-3-662-03925-0_14
Penca, J. (2016). Transnational legal transplants and legitimacy: The example of clean and green development mechanisms. Legal Studies, 36(4), 706–724.
UN environment programme. Retrieved from https://www.unep.org/
UNFCCC-Documents and decisions of the CDM. Retrieved from https://unfccc.int/documents?search2=&search3=CDM
Climate Change and Public Health
Viktor Friedmann
Budapest Metropolitan University, Budapest, Hungary

Keywords
Climate change · Global security · Health security · Public health · Resilience · Vital systems security · Vulnerability
Introduction
Climate change has a rapidly growing impact on public health and hence on both national and human security. The warming of the Earth's atmosphere, driven by anthropogenic factors, transforms global and local climatic conditions and leads to more extreme weather patterns. Health security is affected by the resulting increase in the frequency of heat waves and other natural disasters, by worsening air pollution, and by indirect impacts on the prevalence of vector- and water-borne diseases, food security, and social stability, among other effects. Climate change also impacts public health security in a deeper and more fundamental sense. Public health as a governmental activity originally developed as a mechanism of security aiming to manage overall processes at the level of the population, based on the natural regularities they display and on predictions and interventions that these patterns make possible. Climate change, as the continuous and relatively rapid transformation of environmental factors, however, upends approaches to security that rely on notions of regularity, stability, equilibrium, or normality and on historical data for calculating future probabilities. Therefore, climate change should not only be seen as a new threat, or a threat multiplier, to public health, but also as a phenomenon transforming the concept of security that underlies the principles and practices of public health. Moreover, this challenge is not identical to the one posed by global pandemics, bioterrorism, or other single-event catastrophes that similarly fall outside the classical security logic of public health.
Climate Change as a Threat to Public Health Security
Public health, following Acheson's definition adopted by the World Health Organization (WHO), is "the science and art of preventing disease, prolonging life and promoting health through the organized efforts of society" (Detels and Tan 2015). As such, it is not primarily
associated with national security, although domestic or international stability might be affected by migration, violent conflict, or other forms of social upheaval resulting from a major public health breakdown, and armed forces might be impacted by infectious diseases. More typically, the referent object of public health security is the health of the individual or of the population, and thus it belongs above all to the realm of human security (McInnes 2014) (see entry ▶ "Human Security"). Climate change is driven by the warming of the planet caused by increasing atmospheric levels of CO2 and other greenhouse gases (GHGs) and leads to changes in local precipitation and temperature patterns as well as to more extreme weather conditions. It affects human health through multiple pathways, which are already taking a significant toll on populations around the world (see entry ▶ "Health Security"). It is estimated that already in 2002, as many as 150,000 deaths (5.5 million disability-adjusted life-years) could be attributed to climate change (Confalonieri et al. 2007, p. 407). According to the World Health Organization (WHO 2018, p. 24), by 2030 climate change could cause 250,000 additional deaths annually. Furthermore, the burning of fossil fuels, which is the foremost contributor to climate change, is in itself a major threat to human health (see entry ▶ "Air Pollution"). Household and ambient air pollution from fine particulate matter causes cancer and various cardiovascular and respiratory diseases responsible for an estimated 7 million deaths per year, or around 12.5% of all deaths (WHO and United Nations 2015, p. 12). The WHO (2018, p. 27) estimated that, due to its impact on air pollution levels, meeting the GHG reduction commitments undertaken in the Paris Climate Agreement for 2030 would prevent 138,000 premature deaths per year in the European region alone (see entry ▶ "Paris Agreement").
Global warming is not just a product of air pollution, but worsens it further (Kinney et al. 2015). For instance, increased temperatures have been shown to lead to higher concentrations of ground-level ozone, exposing more people to asthma and other respiratory diseases. The
presence of airborne allergens is also expected to grow due to potentially earlier and longer pollen seasons, an increase in pollen production, and the spread of certain invasive allergenic plants such as ragweed. As rainfall decreases in some areas, desertification and droughts make dust storms more prevalent, while forest fires can vastly contribute to the amount of particulate matter in the air (see entry ▶ "Desertification"). Smoke emitted by the wildfires that devastated Russia during the heat waves of 2010 resulted in up to 55,000 excess deaths (Kinney et al. 2015, p. 105). Linking such mortality and morbidity effects to climate change is always difficult due to the complexity and length of causal chains that include many natural and social mediating factors. Yet, many effects can be attributed to climate change with reasonable certainty. Extreme weather events, including hurricanes, typhoons, floods, droughts, or forest fires, provide one such example (Herring et al. 2019). Climate-related natural disasters claimed more than 2.5 million lives between 1980 and 2013 (Dangour et al. 2015, p. 179). Researchers attribute a 46% increase in the number of extreme weather events between 2000 and 2013 to climate change (WHO 2018, p. 10). Besides the physical trauma, such catastrophic events often take a toll on the mental health of survivors as well, in the form of post-traumatic stress disorder, pathological anxiety, or solastalgia, a form of mental distress caused by environmental change such as experiencing a post-disaster landscape (Albrecht et al. 2007; Doherty 2015). The magnitude and perceived threat of climate change can also induce anxiety, depression, or apathy (see entry ▶ "Post-Traumatic Stress Disorder (PTSD)"). Although not always listed among natural disasters, heat waves can be similarly deadly and are perhaps the most direct expressions of the global warming underlying climate change (Basu 2015).
Although warmer temperatures will also decrease cold-related mortality and morbidity, this positive effect is expected to be dwarfed by the negative health consequences of hotter temperatures. Prolonged overheating of the body – the core temperature of which must be
kept around 36–38 °C – leads to reduced physical capacity; exacerbates preexisting cardiovascular, respiratory, or kidney diseases; and can ultimately result directly in death by heat stroke. The 2003 European heat wave is estimated to have caused as many as 70,000 premature deaths (EASAC 2019, p. 15). A 2013 report by the European Commission concluded that without adaptation measures 26,000 additional deaths would be caused by heat waves in Europe by 2020 and 90,000 extra deaths per year by 2050 (EASAC 2019, p. 15). Heat waves most severely affect the elderly. The fact that the proportion of people over 65 is expected to reach 15.9% of the world population by 2050 and 22.6% by 2100 (28.1% and 30.4% in Europe, respectively) will further exacerbate the problem (United Nations n.d.). Heat waves provide a good demonstration of how social and natural factors mediate the impact of climate change. Since "almost all heat-related deaths are preventable," heat waves are "primarily a social rather than biological problem" (Richard C. Keller in: Basu 2015, p. 97). Rising heat levels translate into heat strain experienced by individuals through a large number of intervening factors. Urban communities are generally more exposed to heat due to the urban heat island phenomenon, which can raise local temperatures by up to 6 °C (Hanna 2011). Low-income groups usually have less access to air conditioning, are more likely to live on the top floor of buildings or even on the street, and often have less access to cool spaces. Workload and working conditions, social expectations about clothing, and the availability of information and assistance are all social factors that lead to differentiated impacts by heat waves even within a single society. Climate change is a global threat to health (see entry ▶ "Global Threats"), but in a different way than what originally motivated the creation of the concept of global public health security (WHO 2007).
There the primary concern was with the impact of increased interconnectedness and interdependence that facilitated the emergence and quick worldwide spread of infectious diseases and the transnational impact of man-made disasters. In other words, what was at stake was the
globalization and circulation of locally emerging health risks. Climate change, in contrast, is an underlying process of global scale and complexity that rewrites health hazards and risks all over the world, although in a locally highly differentiated manner. Local GHG emissions contribute to a global problem with impacts playing out often in regions far away from the main concentration of the causes (Barnett et al. 2008). Consider the case of diseases (Reisen 2015; Rose and Wu 2015). The original concern of global health security was that, due to the increasing volume and intensity of trade and transportation, diseases can easily travel across borders and hence local outbreaks can rapidly turn into transnational epidemics. Climate change, in contrast, is a global phenomenon that leads to less rapid but more permanent alterations in local disease patterns. Rising temperatures increase the prevalence of bacterial foodborne diseases such as salmonellosis as well as of waterborne, particularly diarrheal, diseases. Growing water scarcity leads to problems of hygiene, while floods can spread pathogens, especially as a result of sewage overflows or contamination of freshwater reserves (see entries ▶ “Drinking Water,” and ▶ “Water-Borne Diseases”). Changes in temperature and precipitation patterns also affect the geographical range and transmissibility of vector-borne diseases by transforming the global distribution of environments suitable for both vectors (e.g., ticks and mosquitoes) and pathogens (Lyme disease, malaria, dengue fever, West Nile fever, Zika, etc.; see entry ▶ “Malaria”) they carry (Caminade et al. 2019). 
Thus, vector-borne diseases can spread to new latitudes and higher elevations (e.g., the recent and projected spread of West Nile virus infections in North America and Southeastern Europe, or the spread of malaria to Kenya's highland capital, Nairobi), while they might become rarer or even disappear from locations where they are currently present (Alsop 2007; EASAC 2019, pp. 18–19; Reisen 2015). Although global, the impact of climate change is thus very uneven. To give a further example, changing precipitation patterns and growing temperatures as well as higher atmospheric CO2 concentrations might benefit agricultural production in
at least some northern countries. In much of the developing world, however, where malnutrition continues to be a major problem, crop yields are expected to fall significantly as a result of climate change (Dangour et al. 2015). Disaster-related crop loss, less favorable temperatures, decreased water availability, the changing geographical range of agricultural pathogens, potential reduction in the number of pollinating insects, loss of agricultural labor capacity due to rising temperatures, and the resulting increase in food prices all restrict the availability and affordability of food. Collapse or migration of fish stocks from areas around the equator due to warming seawaters is just one of the ways in which dietary choices in many parts of the world are expected to become less diverse and thus less healthy. Higher levels of atmospheric CO2 in themselves have also been found to decrease the nutritional value of fruits and vegetables, further aggravating the problem of malnutrition (see entry ▶ “Malnutrition”). Currently there are more than 820 million undernourished people in the world, a number that has been increasing since 2015 (FAO et al. 2019). While global demand for food is expected to grow by around 50% between 2015 and 2030, 30 countries are experiencing falling agricultural yields (Dangour et al. 2015, p. 182; Watts et al. 2018, p. 2489). Finally, climate change also poses threats to the systems and facilities of public health that are supposed to provide health security (Paterson et al. 2014). Extreme weather events, for instance, can damage or destroy public health facilities; cut their access to essential services, such as electricity or water; overwhelm their capacity to deal with an increased patient load; or even directly affect staff health and labor capacity. Certain areas and populations might become inaccessible, while the facilities themselves might find their supply chains disrupted as transport systems are hit by natural disasters.
Climate Change and the Logic of Public Health Security
More than just a novel threat to public health, climate change transforms the logic of security in
this context. Foucault (2007) identified public health as an application of (population) security, a new technology of power that emerged in the eighteenth century. The specificity of mechanisms of security from this perspective is that, in contrast with legal codes or disciplinary norms, they do not try to address phenomena through external standards. Instead, mechanisms of security start from an analysis of the observable patterns and calculable processes that characterize the phenomena to be regulated – for instance, the rate of births or deaths or the prevalence or spread of diseases in a population, etc. – and develop interventions, such as vaccination campaigns, on this basis (see entry ▶ “Biopolitics”). They rely on a history of observations to establish probabilities and statistical estimates, prepare forecasts, calculate risks, and keep developments within an acceptable range. Security, in other words, works with what is normal about natural processes as they play out at the level of the population in order to manage them. As opposed to the emergency politics of existential threats (Buzan et al. 1998), security here stands for a rather mundane modality of power. What makes the operation of this technology of power possible is the existence of observable regularities over time. In many areas, however, climate change questions the stability of patterns upon which such mechanisms of security could be erected. By introducing constantly shifting circumstances and normalities, it makes knowledge about the past much less useful and about the future much less reliable (Fagan 2017). One way to address this new constellation of uncertainty has been offered by linking the concept of “vital systems security” to the threat posed by climate change (Gilman et al. 2011). 
The concept describes a form of security that is concerned with the protection of critical artificial systems upon which our societies have come to depend – such as transportation, public health, and water and energy supply – from unpredictable and potentially catastrophic events (Collier and Lakoff 2015; Lakoff 2007). Its historical roots are in the civil defense preparations for nuclear attacks in the 1960s, but the same approach has also been applied to threats of terrorist attacks, natural disasters, or major epidemics.
Whereas population security deals with regularly occurring, calculable, and relatively low-impact risks by means of prevention, surveillance, probabilistic analysis, etc., vital systems security addresses itself to incalculable, single-event catastrophes by means of imaginative scenario planning and other non-probabilistic methods. The impact climate change has on the frequency of natural disasters and its potential for reaching tipping points that trigger catastrophic shifts make it a major challenge from the perspective of the security of vital public health systems. Vital systems security shifts the central goal of security from protection against threats – which are unpredictable and unpreventable – to decreasing the vulnerability of referent objects (systems, communities) (see entry ▶ "Vulnerability and Vulnerable Groups of People"). Since the exposure factor of vulnerability cannot be controlled, this shifts the focus to the object's resilience: its capacity to cope with and recover from catastrophic events by adapting to changes and maintaining its essential functions and organization (Brklacich et al. 2009) (see entry ▶ "Resilience"). The implications of climate change for public health are not fully covered, however, by either population or vital systems security. Most of the challenges described above (such as shifts in the geographical range of diseases, falling agricultural yields, and increasing levels of air pollution) are not low-probability, catastrophic events, yet neither are they the calculable and regular natural phenomena of population security. Instead, climate change makes what can be considered a normal or equilibrium state unstable and slowly shifting. In the new Anthropocene era, human influences have become decisive even for our planetary life-support system, blurring the distinction between nature and artifice (Dryzek 2019).
This has introduced the fragility, vulnerability, and instability that characterize modernity’s human-made structures into what had previously been considered to be the relatively stable environment in which they developed (McDonald 2018) (see entry ▶ “Environmental Security”). Although climate change does not necessarily create new risks or hazards, it aggravates and
redistributes them in a constantly changing manner. Since vulnerability is a matter of context, shifting exposure to hazards keeps redefining vulnerabilities. Heat waves, for instance, pose much less of a health risk for populations that have been physiologically acclimatized or that have already developed the adequate knowledge, infrastructure, and habits to cope with them (Basu 2015). Similarly, alteration of the geographical range of vector-borne diseases is particularly dangerous because it exposes populations that have not developed immunity and therefore are more sensitive to exposure (Sutherst 2004). Rather than merely managing what is regular and predictable or coping with the catastrophic and unpredictable, public health security gradually approximates a state of liquidity where “change is the only permanence, and uncertainty the only certainty” (Bauman 2012, p. viii). As a result, modeling possible futures and fostering the resilience of populations and vital systems are increasingly generalized as central elements of public health security in terms of a wide spectrum of everyday public health risks (see entry ▶ “Health System”).
Conclusion
This article has argued that in the context of the ecological instability introduced by the Anthropocene and, centrally, by climate change, public health security needs to operate simultaneously with three different logics of security: traditional population security, vital systems security, and a more liquid form of security that shares certain elements with both of these. In a new era of constant change, nonlinear developments, and potentially catastrophic shifts, public health security is increasingly reliant on reflexive and capacity-based knowledges and techniques, such as modeling or resilience-building, even in its everyday operations.
Cross-References
▶ Air Pollution
▶ Biopolitics
▶ Drinking Water
▶ Environmental Security
▶ Health Security
▶ Health System
▶ Human Security
▶ Malaria
▶ Malnutrition
▶ Vulnerability and Vulnerable Groups of People
▶ Water-Borne Diseases
References
Albrecht, G., Sartore, G.-M., Connor, L., Higginbotham, N., Freeman, S., Kelly, B., . . . Pollard, G. (2007). Solastalgia: The distress caused by environmental change. Australasian Psychiatry, 15(Suppl 1), S95–S98. https://doi.org/10.1080/10398560701701288
Alsop, Z. (2007). Malaria returns to Kenya's highlands as temperatures rise. The Lancet, 370(9591), 925–926. https://doi.org/10.1016/S0140-6736(07)61428-7
Barnett, J., Matthew, R. A., & O'Brien, K. (2008). Global environmental change and human security. In H. G. Brauch, Ú. O. Spring, C. Mesjasz, J. Grin, P. Dunay, N. C. Behera, . . . P. H. Liotta (Eds.), Globalization and environmental challenges: Reconceptualizing security in the 21st century (pp. 355–361). https://doi.org/10.1007/978-3-540-75977-5_24
Basu, R. (2015). Disorders related to heat waves. In B. Levy & J. Patz (Eds.), Climate change and public health (pp. 87–103). Oxford/New York: Oxford University Press.
Bauman, Z. (2012). Liquid modernity. Cambridge, UK/Malden: Polity Press.
Brklacich, M., Chazan, M., & Bohle, H. G. (2009). Human security, vulnerability, and global environmental change. In R. A. Matthew, J. Barnett, B. McDonald, & K. L. O'Brien (Eds.), Global environmental change and human security (pp. 35–51). Cambridge, MA: The MIT Press.
Buzan, B., Wæver, O., & de Wilde, J. (1998). Security: A new framework for analysis. Boulder: Lynne Rienner Pub.
Caminade, C., McIntyre, K. M., & Jones, A. E. (2019). Impact of recent and future climate change on vector-borne diseases. Annals of the New York Academy of Sciences, 1436(1), 157–173. https://doi.org/10.1111/nyas.13950
Collier, S. J., & Lakoff, A. (2015). Vital systems security: Reflexive biopolitics and the government of emergency. Theory, Culture and Society, 32(2), 19–51. https://doi.org/10.1177/0263276413510050
Confalonieri, U., Menne, B., Akhtar, R., Ebi, K. L., Hauengue, M., Kovats, R. S., . . . Woodward, A. (2007). Human health. In M. L. Parry, O. F. Canziani,
J. P. Palutikof, P. J. van der Linden, & C. E. Hanson (Eds.), Climate change 2007: Impacts, adaptation and vulnerability. Contribution of Working Group II to the fourth assessment report of the Intergovernmental Panel on Climate Change (pp. 391–431). Cambridge, UK: Cambridge University Press.
Dangour, A. D., Green, R., Sutherland, J., Watson, L., & Wheeler, T. R. (2015). Health impacts related to food and nutrition insecurity. In B. Levy & J. Patz (Eds.), Climate change and public health (pp. 173–193). Oxford/New York: Oxford University Press.
Detels, R., & Tan, C. C. (2015). The scope and concerns of public health. In R. Detels, M. Gulliford, Q. A. Karim, & C. C. Tan (Eds.), Oxford textbook of global public health (6th ed.). Retrieved from https://www.oxfordmedicine.com/view/10.1093/med/9780199661756.001.0001/med-9780199661756-chapter-1
Doherty, T. J. (2015). Mental health impacts. In B. Levy & J. Patz (Eds.), Climate change and public health (pp. 195–214). Oxford/New York: Oxford University Press.
Dryzek, J. S. (2019). The politics of the Anthropocene. Oxford/New York: Oxford University Press.
EASAC. (2019). The imperative of climate action to protect human health in Europe. Retrieved from https://easac.eu/publications/details/the-imperative-of-climate-action-to-protect-human-health-in-europe/
Fagan, M. (2017). Security in the anthropocene: Environment, ecology, escape. European Journal of International Relations, 23(2), 292–314. https://doi.org/10.1177/1354066116639738
FAO, IFAD, UNICEF, WFP, & WHO. (2019). The state of food security and nutrition in the world 2019. Rome: Food and Agricultural Organization.
Foucault, M. (2007). Security, territory, population: Lectures at the Collège de France, 1977–1978. Basingstoke/New York: Palgrave Macmillan.
Gilman, N., Randall, D., & Schwartz, P. (2011). Climate change and 'security.' In J. S. Dryzek, R. B. Norgaard, & D.
Schlosberg (Eds.), The Oxford handbook of climate change and society. Retrieved from https://www.oxfordhandbooks.com/view/10.1093/oxfordhb/9780199566600.001.0001/oxfordhb-9780199566600-e-17
Hanna, E. G. (2011). Health hazards. In The Oxford handbook of climate change and society. https://doi.org/10.1093/oxfordhb/9780199566600.003.0015
Herring, S. C., Christidis, N., Hoell, A., Hoerling, M. P., & Stott, P. A. (2019). Explaining extreme events of 2017 from a climate perspective. Bulletin of the American Meteorological Society. https://doi.org/10.1175/BAMS-ExplainingExtremeEvents2017.1
Kinney, P. L., Ito, K., Weinberger, K. R., & Sheffield, P. E. (2015). Respiratory and allergic disorders. In B. Levy & J. Patz (Eds.), Climate change and public health (pp. 105–127). Oxford/New York: Oxford University Press.
Lakoff, A. (2007). From population to vital system: National security and the changing object of public
209 health (ARC Working Paper, Vol. 7). Anthropology of the Contemporary Research Laboratory. Retrieved from http://mx1.www.anthropos-lab.net/wp/publications/ 2007/08/workingpaperno7.pdf McDonald, M. (2018). Climate change and security: Towards ecological security? International Theory, 10(2), 153–180. https://doi.org/10.1017/ S1752971918000039. McInnes, C. (2014). The many meanings of health security. In S. Rushton & J. Youde (Eds.), Routledge handbook of global health security (pp. 7–17). https://doi.org/10. 4324/9780203078563.ch1. Paterson, J., Berry, P., Ebi, K., & Varangu, L. (2014). Health care facilities resilient to climate change impacts. International Journal of Environmental Research and Public Health, 11(12), 13097–13116. https://doi.org/10.3390/ijerph111213097. Reisen, W. K. (2015). Vector-borne diseases. In B. Levy & J. Patz (Eds.), Climate change and public health (pp. 129–155). Oxford/New York: Oxford University Press. Rose, J. B., & Wu, F. (2015). Waterborne and foodborne diseases. In B. Levy & J. Patz (Eds.), Climate change and public health (pp. 157–172). Oxford/New York: Oxford University Press. Sutherst, R. W. (2004). Global change and human vulnerability to vector-borne diseases. Clinical Microbiology Reviews, 17(1), 136–173. https://doi.org/10. 1128/CMR.17.1.136-173.2004. United Nations. (n.d.). World population prospects. Retrieved July 25, 2019, from https://population.un. org/wpp/ Watts, N., Amann, M., Arnell, N., Ayeb-Karlsson, S., Belesova, K., Berry, H., . . . Costello, A. (2018). The 2018 report of the Lancet Countdown on health and climate change: Shaping the health of nations for centuries to come. The Lancet, 392(10163), 2479–2514. https://doi.org/10.1016/S0140-6736(18)32594-7. WHO. (2007). The world health report 2007: A safer future: global public health security in the 21st century. Retrieved from https://apps.who.int/iris/handle/10665/ 69698 WHO. (2018). COP24 special report: Health and climate change. 
Retrieved from https://apps.who.int/iris/han dle/10665/276405 WHO, & United Nations. (2015). Climate and health country profiles 2015: A global overview. Retrieved from https://apps.who.int/iris/handle/10665/208855
Further Reading Brklacich, M., Chazan, M., & Bohle, H. G. (2009). Human security vulnerability, and global environmental change. In R. A. Matthew, J. Barnett, B. McDonald, & K. L. O’Brien (Eds.), Global environmental change and human security (pp. 35–51). Cambridge, MA: The MIT Press. Gilman, N., Randall, D., & Schwartz, P. (2011). Climate change and ‘security.’ In J. S. Dryzek, R. B. Norgaard, & D. Schlosberg (Eds.), The Oxford handbook of climate change and society. Retrieved from
C
210 https://www.oxfordhandbooks.com/view/10.1093/ oxfordhb/9780199566600.001.0001/oxfordhb9780199566600-e-17 Levy, B., & Patz, J. (Eds.). (2015). Climate change and public health. Oxford/New York: Oxford University Press.
Collective Security Treaty Organization (CSTO)

Evgenii Gamerman
Institute for the Comprehensive Analysis of Regional Problems of the Far Eastern Branch of the Russian Academy of Sciences, Blagoveshchensk, Russia

Keywords
CSTO · Russia · Central Asia · Cooperation · Collective security · Kazakhstan · Kyrgyzstan · Belarus · Armenia · Tajikistan · Uzbekistan
Introduction

This article is devoted to the Collective Security Treaty Organization (CSTO), a regional international organization created by former Soviet republics to respond to the challenges and threats of the post-Soviet period, mainly in the military sphere. In the course of its development, the organization has passed through the basic stages of formation and institutional construction. It is also known informally as the "Tashkent Pact" and the "Tashkent Treaty." Its origins lie in the Collective Security Treaty, signed in Tashkent on May 15, 1992, by the heads of Armenia, Kazakhstan, Kyrgyzstan, Russia, Tajikistan, and Uzbekistan. In 1993, Azerbaijan, Belarus, and Georgia joined the Treaty, which entered into force on April 20, 1994. The Treaty was concluded for 5 years and allowed for its extension. On April 2, 1999, the presidents of Armenia, Belarus, Kazakhstan, Kyrgyzstan, Russia, and Tajikistan signed a protocol extending the term of the
agreement for the next 5-year period. Three states – Azerbaijan, Georgia, and Uzbekistan – refused to renew the treaty. On May 14, 2002, it was decided to transform the Collective Security Treaty into an international organization, the Collective Security Treaty Organization (CSTO). On October 7, 2002, the Charter and the agreement on the legal status of the organization were signed in Chisinau; both were ratified by all six members and entered into force on September 18, 2003. The decision on the full accession of Uzbekistan to the CSTO was signed on August 16, 2006, in Sochi (Paramonov, 2008). The member countries approved the creation of the Collective Rapid Reaction Forces on February 4, 2009, in Moscow. These forces were created to repel military aggression; to conduct operations against terrorism and extremism, transnational crime, and drug trafficking; and to eliminate the consequences of emergencies. On June 18, 2009, the Institute of the Collective Security Treaty Organization was established on the basis of the CSTO Secretariat to conduct fundamental and applied research on the role and place of the CSTO in the modern world. On December 19, 2012, Uzbekistan again suspended its membership in the CSTO. Since April 11, 2013, Serbia and Afghanistan have been observer states. The CSTO Collective Security Strategy for the period up to 2025 was adopted at the Collective Security Council session in Yerevan in 2016; it set out the concept of further strengthening the organization's aggregate potential with a view to making it an effective instrument of international policy for ensuring peace and security in the Eurasian region. The objectives of the CSTO are the strengthening of peace and of international and regional security, and the protection on a collective basis of the independence, territorial integrity, and sovereignty of member states.
The organization also aims to protect the territorial and economic space of the member countries of the treaty from any external military-political aggressors,
international terrorists, and large-scale natural disasters (Charter of the Collective Security Treaty Organization). The principles of its activity are the priority of political means over military ones, strict respect for independence, voluntary participation, equality of rights and obligations of member states, and noninterference in matters falling under the national jurisdiction of states.

The structure of the Collective Security Treaty Organization is as follows:

1. Collective Security Council (CSC)
2. The Council of Foreign Ministers (CFM)
3. The Council of Defense Ministers (CDM)
4. The Parliamentary Assembly
5. Committee of Secretaries of Security Councils (CSSC)
6. The Permanent Council
7. The Secretary-General and the Secretariat
8. The Joint Staff (headquarters)
9. The Military Committee
The supreme body is the Collective Security Council (CSC), which consists of the heads of the member states. Its functions are to consider the principal issues of the organization's activities, make decisions aimed at the realization of its objectives, and ensure the coordination and joint activities of the member states in matters affecting the implementation of the stated goals. The Council of Foreign Ministers (CFM) is a consultative and executive body of the Organization; its function is the coordination of interaction in the field of foreign policy. The Council of Defense Ministers (CDM) is a consultative and executive body; its functions are the coordination of interaction in the fields of military policy, military-technical cooperation, and military construction. The Parliamentary Assembly (PA CSTO) was established in November 2006. It works through plenary sessions and meetings of the Council of the PA and its Standing Committees, which are held twice a year. It considers the activities of the
Organization, the implementation of the decisions of CSC sessions, and the tasks of their legal support. It also discusses implementation of the program for the approximation and harmonization of legislation and the practice of ratifying international treaties. The chairmen of the Parliamentary Assembly have been B. Gryzlov (2006–2012), S. Naryshkin (2012–2016), and V. Volodin (since 2016) (Nikolayenko, 2004). The Committee of Secretaries of Security Councils (CSSC) is a consultative and executive body; its function is the coordination of interaction in ensuring the national security of member states. The Permanent Council of the CSTO consists of Permanent and Plenipotentiary Representatives appointed by states in accordance with domestic procedures; its function is the coordination of interaction in the implementation of decisions taken by the organs of the Organization between sessions of the Council. The Secretary-General of the Organization is its highest administrative official. He manages the Secretariat of the Organization, is appointed by decision of the Collective Security Council from among the citizens of the member states, and is accountable to the Council. The Secretariat is a permanent working body of the Organization, providing organizational, informational, analytical, and advisory support to CSTO bodies. The Joint Staff (headquarters) is a permanent working body. Its functions are the preparation of proposals on the military component of the CSTO and the organization and coordination of the practical implementation of decisions of the Organization's bodies on military cooperation. Its tasks also include the formation and functioning of collective CSTO forces, the joint training of personnel and specialists for the armed forces, the functioning of the CSTO crisis response center, and the development of a conceptual framework for the collective security system.
The Military Committee was established under the Council of Defense Ministers of the Organization. Its function is the prompt consideration of the planning and application of the forces and assets of the collective security system of the Organization. The members of the Military Committee are the chiefs of the general staffs of the participating countries (Babadzhanova, 2008). Since 2005, Russia has trained personnel for CSTO countries in its military schools at no cost. In June 2010, as a result of clashes between the Kyrgyz and Uzbek communities in Kyrgyzstan, the country found itself on the verge of civil war. Kyrgyz President Roza Otunbayeva appealed to the Committee of Secretaries of Security Councils with a request to introduce elements of the Collective Rapid Reaction Force (CRRF) into the country. The CSTO denied this request and did not deploy forces to Kyrgyzstan. However, Kyrgyzstan was assisted in the search for the instigators of the riots and in the control of information sources. As a result, the Organization was criticized, including by its own members (President of Belarus A. Lukashenko), for failing to respond adequately to precisely the kind of threats whose resolution was set as the main goal of the organization's creation and functioning. As a military-political bloc, the CSTO has never participated in combat operations. One of the most important activities of the CSTO is cooperation with international organizations and third countries. The Organization has developed cooperation with the UN, the OSCE, the SCO, and their specialized structures. In 2016, it was decided to establish a CSTO crisis response center, whose task is the information-analytical, organizational, and technical support of the adoption by CSTO bodies of relevant decisions for the purpose of crisis response.
Within the framework of military cooperation, since 2004, more than 30 exercises of various scales have been conducted on the territory of the member states of the Organization: “Frontier,” “Interaction,” “Indestructible Brotherhood,” “Thunder,” “Cobalt,” etc.
In 2007, an agreement on the peacekeeping activities of the CSTO was signed; it entered into force and was registered with the UN Secretariat in 2009. To participate in peacekeeping operations, CSTO peacekeeping forces have been established on a permanent basis, numbering about 3,000 servicemen and about 600 representatives of internal affairs bodies. They can be used both within the area of responsibility of the Organization and beyond it under a mandate of the UN Security Council (Bystrenko, 2015).
Conclusion

Thus, the Collective Security Treaty Organization was established by the former Soviet republics as an attempt to fill the vacuum that arose after the collapse of the Soviet Union. The large number of security threats that became acute in the 1990s required intensified cooperation between the countries of the region, including in the military sphere. The CSTO is an attempt to bring such interaction into a system. The organization has passed through all the stages of formation and institutionalization. However, it is not yet clear what role it will play in security issues, as it has in no way manifested itself in any of the conflicts on the territory of the countries of the former USSR.
References

Babadzhanova, Y. (2008). Analysis of military doctrines of the CSTO member states (Vol. 3). Moscow: MGIMO (U).
Bystrenko, V. I. (2015). DKB – CSTO – An uneasy path to collective security. Nauka i Mir, 2(18), 12.
Charter of the Collective Security Treaty Organization. Retrieved April 12, 2018, from http://www.odkb-csto.org/documents
Nikolayenko, V. D. (2004). Organization of the Collective Security Treaty (origins, formation, prospects). Moscow: Nauchnaya Kniga.
Paramonov, V., Strokov, A., & Stolpovskiy, O. (2008). Russia and China in Central Asia: Politics, economy, security. Bishkek: Alexander Kniazev Foundation.
Commission on Human Rights

Tuğba Bayar
Department of International Relations, Bilkent University, Ankara, Turkey

Keywords
Human rights · Human rights violations · United Nations
Definition

The United Nations Commission on Human Rights (UNCHR) was established as an intergovernmental body in 1946 for the protection of fundamental human rights and liberties. It was founded under Article 68 of the UN Charter, which permits the Economic and Social Council (ECOSOC) to set up functional commissions in economic and social fields for the promotion of human rights, and it was recognized as a subsidiary body of ECOSOC. It functioned as a plenary body in which its 53 member states met not only with each other but also with human rights advocates and nongovernmental organizations (NGOs) to raise their concerns about human rights, discuss current problems, and formulate solutions. The role of NGOs in the functioning of the CHR was quite comprehensive. The Commission's seats were distributed among regional groups: the African group with 15 seats, the Asian group with 11 seats, the Latin American and Caribbean group with 11 seats, the Western European group with 10 seats, and the Eastern European group with 5 seats. The annual sessions were held in Geneva, Switzerland. During its sessions, the Commission adopted numerous resolutions, set standards, and took decisions to improve human rights conditions around the globe. International protection of human rights is facilitated by nongovernmental organizations, intergovernmental organizations, multilateral treaties, international custom, and general principles of law; the Commission on Human Rights was one of the key institutions of this regime.
The Commission served as the first international forum established for setting the standards of modern human rights. During its first session in 1947, Eleanor Roosevelt was unanimously elected Chairman. The Commission played a crucial role in the formulation of the Universal Declaration of Human Rights (UDHR), adopted in 1948 by the United Nations General Assembly (UNGA) as Resolution 217. The main themes addressed by the Commission ranged from economic, social, and cultural rights and civil and political rights to racism, the right to self-determination, and the violation of human rights in Palestine. The Commission's duties included examining, monitoring, and publicly reporting on human rights conditions in given countries. Standard setting and implementation were two other significant responsibilities. From time to time, the Commission set up working groups to monitor the implementation of existing human rights standards. The UNCHR was one of the Charter-based bodies within the United Nations human rights system overseen by the Office of the High Commissioner for Human Rights (OHCHR), which works with four United Nations (UN) Charter-based bodies and ten treaty-based bodies. The Charter-based bodies are, besides the Commission on Human Rights (later replaced by the Human Rights Council), the Universal Periodic Review, the Special Procedures of the Human Rights Council, and the Human Rights Council Complaint Procedure; they are diverse human rights monitoring mechanisms. The Commission on Human Rights was one of the foremost United Nations-mandated commissions of inquiry and investigation used to detect and respond to grave violations of human rights and humanitarian law. The Commission made efforts to promote accountability for such violations. Impunity is one of the major impediments to the implementation of protective measures.
The Commission on Human Rights also sought solutions to overcome impunity. During the first 20 years following its establishment, the Commission concentrated on detecting
the human rights abuses, discussing ideals, and setting standards for particular fields of human rights. In this period, the Commission adopted a passive stance, without seeking sanctions against violators or conducting investigations. During this period the Commission was confronted with several major events, such as the human rights violations of the Franco regime in Spain, the first signals of the apartheid policy in South Africa, human rights violations in various Balkan states, and the 1959 events in Tibet. The Commission was highly careful not to disturb the balance between the principle of sovereignty and human rights. This sensitivity held it back from intervening in human rights cases occurring within states' own territories. The functioning of the Commission took a new and more active turn after ECOSOC adopted Resolution 1102 on March 4, 1966, by which it invited the Commission on Human Rights to consider human rights violations in all countries. The decade was marked by the decolonization of Africa and Asia, during which ethnic conflicts and racial tensions escalated rapidly. The Commission therefore began to conduct investigations, produce reports, and issue policy recommendations to improve human rights situations. Resolution 1102 was a result of petitions received from decolonizing territories under Portuguese administration, South Africa, and Southern Rhodesia (today known as Zimbabwe). The Commission on Human Rights responded by asking its sub-commission to deal with the violations and make recommendations to stop them. During the examination of the case, the sub-commission got in touch with the inhabitants to hear their complaints and to collect information about the violations. In subsequent cases, the Commission continued to employ observation and inquiry procedures.
The reports demonstrated that claims of exclusive domestic jurisdiction were no longer tenable: international human rights law was expanding where domestic systems remained inadequate. Hence, United Nations bodies started to take measures within their powers in order to
improve the conditions. These expanded powers triggered discussions regarding the abuse of power. In response to the case of Chile in 1975, the Commission set up an ad hoc working group. The case of Chile became a significant precedent. The overthrow of Salvador Allende and the establishment of a military junta under General Augusto Pinochet led to numerous human rights abuses to which the United Nations could respond immediately. Although the Commission on Human Rights was expanding its capabilities, dramatic political events and the politicization of the body prevented the Commission from becoming an efficient human rights organization. The major criticism was directed by Israel, which claimed the Commission was anti-Israel and therefore politicized. The tension was sparked by the Commission's affirmation of Palestine's right to self-determination and its legitimate right to resist the Israeli occupation and settlements. Political disagreements led the CHR to lose its direction over time. The last meeting of the Commission on Human Rights, after a life of 60 years, was held on March 27, 2006. The body was replaced by the United Nations Human Rights Council in the same year. The policy decision to replace the Commission with the Human Rights Council was taken at the September 2005 World Summit in New York and adopted by the UN General Assembly (UNGA) by a resolution on March 15, 2006. Discussions on replacing the CHR with a new body had begun by 2004. According to the UN report of the High-Level Panel on Threats, Challenges and Change, published on December 2, 2004, the Commission on Human Rights suffered from a legitimacy deficit that cast doubt on the overall reputation of the United Nations, and its capacity to perform its tasks had been undermined by eroding credibility and professionalism. Reform of the Commission was therefore found necessary.
On March 21, 2005, UN Secretary-General Kofi Annan introduced his report In Larger Freedom to the General Assembly as part of a UN reform agenda. In the report, Kofi Annan proposed replacing the CHR with a Human Rights Council that was supposed
to be smaller than the Commission and to be elected by a two-thirds majority of the Assembly. The decision was finalized on March 15, 2006, with Resolution 60/251, adopted by 170 votes, with the United States, Israel, the Marshall Islands, and Palau voting against and Iran, Venezuela, and Belarus abstaining. The responsibilities of the newly formed Human Rights Council are described as follows: it should address violations of human rights, including gross and systematic violations, and make recommendations thereon; it should also promote the effective coordination and the mainstreaming of human rights within the United Nations system. The General Assembly recognized the achievements of the Commission, while the necessity of redressing its shortcomings drove the replacement decision. Today, the United Nations Human Rights Council is composed of 47 UN member states elected by the UNGA. The work of the Council was reviewed in 2011 for further improvements. Human rights is one of the main concepts on which the UN Charter is based. The promotion and protection of human rights is one of the key purposes of the entire UN system. In order to realize this aim, the UN has created several bodies. Besides the Human Rights Council, the Office of the High Commissioner for Human Rights (OHCHR) has a leading responsibility to promote and protect human rights globally. In addition to these two bodies, there are several human rights treaty bodies that function as individual committees monitoring the human rights situation around the world. The UN Development Group's Human Rights Mainstreaming Mechanism (UNDG-HRM) has also played a fundamental role in the human rights mainstreaming efforts of the UN. The legal basis is broad, extending from the International Bill of Human Rights of 1948 to numerous international treaties on human rights.
Cross-References ▶ Human Rights and Privilege
Further Reading Donnelly, J., & Whelan, D. J. (2017). International human rights. London: Hachette UK. Evans, T. (Ed.). (1998). Human rights fifty years on: A reappraisal. Manchester: Manchester University Press. Nickel, J. W. (1987). Making sense of human rights: Philosophical reflections on the universal declaration of human rights. Berkeley: University of California Press. Tolley, H. (1987). The UN commission on human rights (p. 47). Boulder/London: Westview Press.
Committee on World Food Security (CFS)

Mary Ruth Griffin
Division of Natural Science, Walters State Community College, Greenville, TN, USA

Keywords
Global Strategic Framework for Food Security and Nutrition (GSF) · Civil Society Mechanism (CSM) · Non-governmental organizations (NGO) · High Level Panel of Experts (HLPE)
Introduction The United Nation’s (UN) Food and Agriculture Organization (FAO) was created in 1945 in response to the recognition that inadequate food supplies for growing populations in unstable developing countries could affect global security. Since the time of its development, recognition of the importance of food security for global stability has only increased, and today food security is viewed as a key component of government national security interests. Over time, this recognition led to a need for greater promotion and development of food security and nutrition goals, which coincided with the United Nation’s (UN) goals to focus on human welfare. Consequently, in 1974, the Committee on World Food Security (CFS) was established as an intergovernmental forum for review of policies concerning world food security.
The current CFS is composed of members, participants, and observers. Committee membership is open to all member states of the FAO, the World Food Program (WFP), and the International Fund for Agricultural Development (IFAD), as well as non-member states of the FAO that are member states of the United Nations. The FAO Conference instituted the CFS as a committee hosted in the FAO, with a Joint Secretariat composed of the FAO, the WFP, and IFAD. The CFS reports to the UN General Assembly through the Economic and Social Council (ECOSOC) and to the FAO Conference (CFS Structure 2018). The main roles of the CFS are to recommend and implement better coordination at the global, regional, and national levels; to promote policy convergence; to facilitate support and advice; to promote accountability; and to share best practices.
Reform of the CFS In 2008, during the height of the global food price crisis, member states of the CFS agreed at the Committee’s 34th Session to embark on an ambitious reform process. What spurred their action was the realization that current global governance and economic systems had failed to prevent millions of people from remaining food-insecure. In addition, a breakdown in these fragmented systems had now caused additional millions of people, many residing in countries that were once thought to be foodsecure areas, to face increasing hunger and food insecurity. Individuals, families, communities, and even entire countries had lost control over the factors that determined food security for them (McKeon 2015). As a result, there was intergovernmental agreement among the organization’s 127 member states that the CFS would become the main international forum dealing with food security and nutrition (CFS34 2008). In 2009, the member countries of the CFS adopted a series of reforms, which involved the inclusion of participants from both the civil society and the private sector. The goal was to provide a platform or voice for those often hit the hardest during times of production, physical, or economic
crisis. The 2009 CFS reform redefined its role to constitute "the foremost inclusive international and intergovernmental platform for a broad range of committed stakeholders to work together in a coordinated manner and in support of country-led processes toward the elimination of hunger and ensuring food security and nutrition for all human beings" (CFS35 2009). Through this reform process, civil society organizations (CSOs) secured the right to coordinate autonomously and to engage as official participants through the International Food Security and Nutrition Civil Society Mechanism (CSM). Civil society organizations can include both social movements and non-governmental organizations (NGOs); NGOs represent specific issues or interests of certain social groups (Duncan and Barling 2012). The inclusion of CSOs as participants in CFS activities gives them opportunities for active and more meaningful engagement in the procedures and debates leading up to any final decision-making. However, final voting authority remains with the nation states, because ultimately the states have the responsibility of enacting any policies supported. In 2010, the High Level Panel of Experts (HLPE) on food security and nutrition was established to facilitate and support debate and decision-making. Its purpose is to serve the CFS members and stakeholders by providing expertise from leading experts in their fields. The HLPE provides independent, scientific knowledge and evidence-based analysis on the multidisciplinary aspects of food security issues in order to keep the CFS up-to-date and abreast of emerging trends in food security. Work by the HLPE is transparent and is presented annually at the CFS Plenary (Gitz and Meybeck 2011). Following its reform process, the CFS is now composed of a Bureau and a more robust Advisory Group.
The Bureau serves as the executive arm of the CFS and consists of a chairperson and 12 member countries representing the inhabited continents and regions. The Advisory Group is composed of UN agencies; international agricultural research institutions; international and regional financial institutions such as the World Bank and the World Trade Organization; CSM members, which include groups representing smallholder family farmers, the urban poor, agricultural and food workers, and indigenous people; and, lastly, members from private-sector and philanthropic organizations (PSM) (CFS 2017). Since this time, the international community has come to view the CFS as the leading platform by which policy discussion and coherence about food security and nutrition take place. The CFS's role as a leader comes from its inclusivity of ideas from its member countries and from the CSM and PSM. Today the CFS is a uniquely inclusive global policy forum (Duncan 2015).

Global Strategic Framework for Food Security and Nutrition

As part of its role to provide policy coherence about food security and nutrition, the CFS is responsible for developing the Global Strategic Framework for Food Security and Nutrition (GSF). The GSF is viewed as a dynamic reference document that is annually approved by the CFS Plenary; however, the GSF is not a legally binding instrument. Through consensus among its stakeholders, the CFS endorses specific recommendations and offers guidelines for action at the global, regional, and country levels through the GSF. It is important to note, however, that its views are not necessarily shared by the FAO, the WFP, or IFAD, and as such its recommendations are viewed as voluntary. Instead, the GSF is intended to be flexible in order to better address major priorities associated with food security and nutrition. It provides an overarching framework and a "living" reference document with practical guidance and core recommendations for food security and nutrition strategies, policies, and actions. The intended users of the GSF are the CFS's member states; various intergovernmental and regional organizations; members of the financial sector; universities, along with research and extension organizations; smallholders and business enterprises; as well as communities, workers, and consumers (Duncan 2015).

The GSF, in line with the mandate of the CFS Plenary, draws on the Plenary's recommendations and a number of earlier frameworks and is intended to complement and ensure coherence between them. The earlier frameworks that help guide the GSF's development include, but are not limited to, the World Food Summit Plan of Action, the Rome Declaration on World Food Security, the Voluntary Guidelines on the Responsible Governance of Tenure of Land, Fisheries, and Forests in the Context of National Food Security (VGGT), and the Voluntary Guidelines to support the progressive realization of the right to adequate food in the context of national food security (VGRtF) (CFS 2017). Current CFS-supported policies and their subsequently endorsed documents on multidimensional topics include the following:

1. Twin-Track Approach
2. Promoting responsible investment in agriculture and food systems
3. Investing in smallholders
4. Addressing excessive food price volatility
5. Addressing gender issues in food security and nutrition
6. Increasing agricultural productivity and production in a socially, economically, and environmentally sustainable manner
7. Nutrition
8. Tenure of land, fisheries, and forests
9. Addressing food security and nutrition in protracted crises
10. Social protection for food security and nutrition
11. Food security and climate
12. Biofuels and food security
13. Food losses and waste in the context of sustainable food systems
14. Sustainable fisheries and aquaculture for food security and nutrition
15. Water for food security and nutrition
16. Sustainable agricultural development for food security and nutrition: What roles for livestock?
17. Sustainable forestry for food security and nutrition (CFS 2017)
Conclusion
Food security and adequate nutrition remain key concerns globally, regionally, and nationally in the twenty-first century. The CFS is unique as an international and intergovernmental agency in that it provides for participation from both civil society and the private sector. This inclusivity gives a voice to those most affected by the threats of food insecurity. It is anticipated that, by including more key stakeholders, the CFS will be better able to provide flexible and proactive policies and "best practices" suited to the particular needs of specific nations and regions. The work of the CFS is critical to lessening the damage caused by any future global or local food crisis.
Communism

Navagaye Simpson1,2 and Francis Grice2
1McDaniel College, Westminster, MD, USA
2Department of Political Science and International Studies, McDaniel College, Westminster, MD, USA

Keywords
Communism · Totalitarianism · Vladimir Lenin · Joseph Stalin · Mao Zedong · Kim Il-Sung · Karl Marx · Friedrich Engels · The Soviet Union · China · North Korea
Cross-References
▶ Food Insecurity
▶ Food Price Index
▶ Threats Which Disrupt Food Security
References
Committee on World Food Security. (2017). Global strategic framework for food security & nutrition. Retrieved July 1, 2018, from http://www.fao.org/cfs/home/products/onlinegsf/1/jp/
Committee on World Food Security – Structure. Retrieved July 1, 2018, from http://www.fao.org/cfs/home/about/en
Duncan, J. (2015). Global food security governance. London: Routledge.
Duncan, J., & Barling, D. (2012). Renewal through participation in global food security governance: Implementing the international food security and nutrition civil society mechanism to the committee on world food security. International Journal of Sociology of Agriculture and Food, 19(2), 143–161.
Gitz, V., & Meybeck, A. (2011). The establishment of the High Level Panel of Experts on food security and nutrition (HLPE): Shared, independent and comprehensive knowledge for international policy coherence in food security and nutrition. CIRED working paper no. 2011-30, hal-00866427.
McKeon, N. (2015). Food security governance. London: Routledge.
Report of the 34th Session of the Committee on World Food Security. Rome, Oct. 14–17, 2008.
Report of the 35th Session of the Committee on World Food Security. Rome, Oct. 14–17, 2009.
Introduction

Communism is one of the best known and most misunderstood political ideologies of the nineteenth, twentieth, and twenty-first centuries. In many parts of the world, the word has been adopted by members of the political right as a slogan to level against their counterparts on the left, often without much consideration given to the full meaning and history of the ideology. There are several reasons why Communism as a political creed has been demonized and its proponents abhorred. One of these is that states purporting to follow the tenets of the doctrine have formed the primary resistance to Western neoliberal Capitalism and democracy since the Soviet Union was established in 1917. The rise of a formidable opposition to democracy, and even to fascism, over the following years shook the Western world to its core. It helped to engender events such as the Red Scare in the United States, Operation Barbarossa in Europe, and the temporary emergence of the domino theory within international relations during the early-to-mid Cold War. This led to a state of affairs in many Western states where patriotism became entwined with anti-Communism, especially during the Cold War, and this legacy continues today. A second reason for the ideology having received a bad name is that efforts to implement the theory of Communism in a state have all too
often manifested as brutal totalitarian regimes under the reign of despotic leaders. Three infamous examples were Joseph Stalin in the Soviet Union, Mao Zedong in China, and Kim Il-Sung in North Korea; the regimes such leaders presided over killed as many as a hundred million people (Courtois et al. 1999). As a result, Communism has become recognized as one of the world's most notorious forms of nondemocratic rule. Violence has frequently plagued Communist states in part because the ideology has rarely been received positively by the citizenry, but has instead had to be forced upon them by victorious and sometimes charismatic revolutionaries. This has typically been done through a mixture of indoctrination, the purging of political rivals and dissenters against the new regime, and other forms of repression. The complexity of the relationship between Communism as an ideology and Communism as a government type has been made more challenging still by the fact that most leaders of Communist governments have regularly used the language of Communism as an ideology within their political discourse and justifications for their rule. Many even created adaptations of the theory as a means to explain or justify the context and actions of their regime. This article begins by outlining the basic tenets of Communist ideology as described by Karl Marx and Friedrich Engels and then traces its evolution through the adaptations, experiences, and approaches of the most prominent practitioner-theorists who followed them: Vladimir Ilyich Lenin, Joseph Stalin, Mao Zedong, and the Kim Family. It then looks at the future of Communism, including the quasi-Communist model of Socialism with Chinese Characteristics that continues to exist within China today.
Marx and Engels

Communism was originally conceived by Marx and Engels less as a form of government unto itself and more as the state of affairs that would come about when "capitalism and the conflict that private property was thought to bring [was replaced] by a socialism that would liberate the
working masses and restore to all of humanity an unspoiled soul” (Snyder 2010, p. 2). They further advanced a materialist concept of history that attempted to offer a scientific explanation for the human evolutionary process. This began with Despotism, in which one ruler economically exploited everyone else beneath him; evolved to Feudalism, in which the aristocracy joined the monarch as the exploiting classes; and at the time of their writing had reached Capitalism, in which the upper and middle classes – the bourgeoisie – had allied together in order to exploit the working class. They predicted that society would next advance to a period of Dictatorship of the Proletariat, in which the working classes would take over the levers of economic power to bring class consciousness to, and oppose counterrevolution by, the now overthrown bourgeoisie. It would then culminate in Communism, where government would wither away and equality would be enjoyed and self-enforced between all people within a truly classless society. Each phase would be driven forward by the growth of contradictions between the exploiting and exploited classes, which would erupt into a violent revolution that moved society from one phase to the next whenever the contradictions became too intense to continue. Marx and Engels wrote their theories with Western Europe in mind, the states of which had mostly entered into an economic system that closely resembled their Capitalist phase. Consequently, the two theorists focused most of their attention upon how these societies would transition through the next two stages of Communist history, with an emphasis upon how Capitalism would be overthrown. This included noting that despite holding a majority of the power in any given society where they thrived, the bourgeoisie were always the minority. The proletariat in contrast were nearly always in the overwhelming majority numerically but held relatively little power. 
Eventually, they predicted that the divide between the bourgeoisie and the proletariat would become so wide, and the oppression exerted by the former upon the latter would become so great, that the proletariat would rise up in revolution and emancipate themselves from the shackles of
Capitalism. Also notably, Marx and Engels envisioned that the transition from Capitalism to the Dictatorship of the Proletariat would occur first in industrial urbanized societies and then eventually reach less developed ones. They thought that it was neither desirable nor possible for a society to leapfrog from a rural Feudal society to the Dictatorship of the Proletariat. This would become a major issue that later practitioner-theorists would try to reconcile because Marxist revolutions emerged not within industrialized Europe as Marx and Engels predicted but within the predominantly agricultural regions of Russia, Eastern Europe, and Asia.
Lenin

Several small-scale experiments with Communism were attempted during the late nineteenth century, such as the Paris Commune. Yet the man who brought Marxist ideology to international prominence by instigating the October 1917 revolution and overthrowing the post-Tsarist provisional government was Vladimir Ilyich Lenin. Prior to Lenin's rise to power, few followers of Marxism conceived that a society such as Russia, which lacked an urban proletariat and was overwhelmingly agricultural, could host a Communist revolution. During his early revolutionary endeavors, Lenin sought to tackle this issue directly by creating "Vanguardism" as an addition to classical Marxist theory, and he continued to promote this adaptation after the revolution was achieved. Vanguardism involved the idea that "the party, acting on behalf of the proletariat, should place itself at the head of a worker-peasant revolutionary coalition under proletarian leadership. The revolution achieved by this coalition would, owing to peasant predominance, still necessarily be a bourgeois revolution. It would result in the setting up of a bourgeois-democratic dictatorship of workers and peasants; and this dictatorship would prepare the conditions in which the socialist revolution would become possible" (Carr 1959, p. 37). A second major theory
created by Lenin was Democratic Centralism, which stated that open discussion and the exchange of ideas within the party were appropriate while courses of action were being considered. Once a decision was made, however, members were expected to obediently and enthusiastically toe the line without hesitation or dissent (Joseph 2014, p. 154). Lenin and the Bolsheviks also followed the Marxist belief that revolution should not be restricted to just one state, because an international revolution was both achievable and imminent. They further felt that if a revolution were to start in a more developed European state, it would help the Bolsheviks with implementing Marxism within Russia. As a result, Lenin and the Bolsheviks were initially eager to help revolutionary movements in surrounding states make the move toward the Dictatorship of the Proletariat. This led them to foster and support Marxist movements in other states in both Europe and Asia (although, ironically, with the latter they recommended a watered-down version due to their predominantly agricultural societies). Lenin's implementation of his adapted version of Marxism involved the use of terror when it came to seizing and maintaining power. Like Marx and Engels before him and many Marxist practitioner-theorists after him, Lenin acknowledged that revolutions are inherently violent and embraced this destructive energy as a positive force that "was to be the midwife not just of revolution but of full communism as well" (Ryan 2007). He was also driven by a fear of an internal or external bourgeois counterrevolution overthrowing his regime. This led him to embrace such chilling security measures as deploying a secret police force, the All-Russian Extraordinary Commission (Cheka), to suppress political dissenters and undertaking violent purges of alleged class enemies and obstructionists.
As Lenin himself summarized: “There is no way of liberating the masses except by forcibly suppressing the exploiters. That is what the Extraordinary Commissions are doing, and therein lies their service to the proletariat” (Lenin 1918).
Stalin

Following the death of Lenin in 1924, existing power struggles within the Bolshevik Party intensified, leading to the eventual exile, arrest, and execution of many of its leading members. The man who made his way to the top was one of history's most nightmarish mass murderers: Joseph Stalin. Initially, Stalin did not deviate far from Lenin's teachings and practices, but within a few years he co-created and embraced the concept of Socialism in One Country. This abandoned the idea of fomenting and assisting Marxist revolution abroad in favor of building the Soviet Union into a strong and highly advanced socialist state (Shachtman 1932). Stalin turned a blind eye to international revolution partly in opposition to his rival Leon Trotsky's vision of permanent world revolution, but this was not the only reason for the change. Stalin also believed that Europe was going through a period of instability that would not end for many decades and felt that intervening in European affairs would shift attention away from more pressing problems at home (Shachtman 1932). Stalin was also a proponent of the two-stage theory, which posited that underdeveloped societies would first need to overcome the remnants of Feudalism within their own country before they could shepherd their society further toward Communism (Boer 2017). It is no secret that, in the name of consolidating power, Stalin was a brutal mass murderer who showed very little mercy to his population. Forced collectivization and industrialization were imposed upon a predominantly unwilling population using extreme state violence and oppression against anyone suspected of resisting these initiatives.
Furthermore, Stalin not only wanted to do away with intraparty discussion, but also suffered from an all-consuming paranoia, which led him to instruct his secret police – the much feared and hated NKVD – to implement a campaign of terror and show trials that came to be known as the Great Purge. This would lead to the execution, murder, forced deportation, and imprisonment of huge
portions of the population for overwhelmingly fictitious crimes against the state. The Great Purge was particularly remarkable because it took place during peacetime in Russia, within a society that was supposed to have been committed to the rational values of Marxism and the Russian revolutionary tradition (Shatz 1984, p. 1). Ultimately, Stalin was so inhumane a monster, and his rule so detrimental to Soviet society, that his successor, Nikita Khrushchev, felt compelled to denounce him. Khrushchev then tried to formally lead the country through a period of de-Stalinization in a desperate attempt to reverse some of the political and physical damage and keep the Communist experiment alive.
Mao Zedong

The next major practitioner-theorist of Communism, Mao Zedong, has the dubious claim to fame of being one of the few people in history who probably killed more people than Stalin. The Communists in China faced a conundrum similar to that encountered by the Bolsheviks in pre-revolutionary Russia, in that China was overwhelmingly populated by rural peasants rather than urban workers. To overcome this problem, Mao took the core tenets of Lenin's Vanguardism and added several new adaptations of his own. One of the most significant of these was the "Mass Line," which required party officials to go out to the peasant masses in the countryside and draw their strength and inspiration from them (Joseph 2014, pp. 161–162). Mao claimed to have pioneered this approach during the 1920s, when he worked as a Communist Party liaison to the autonomous peasant uprisings in Hunan against the local white landlords. His subsequent demands that the party and the urban middle classes visit the rural regions to learn from the peasantry stood as testament to his enduring determination to impose this adaptation upon the Chinese population. Another major adaptation that Mao embraced was Voluntarism, which held that the human spirit
could overcome all obstacles and that technology was secondary in significance to the sheer force of human will that could be mustered by China's peasantry (Joseph 2014, pp. 163–165). A third was Permanent Revolution, which claimed that revolution should not cease simply because a revolutionary party had seized power. Instead, it should continue indefinitely to sustain societal advancement and avoid reactionary retrenchment, potentially even after Communism itself is reached (Joseph 2014, pp. 171–172). Mao's attempt to implement this latter variation contributed to his decision to undertake the Great Leap Forward and the Cultural Revolution, which stand out as some of the worst atrocities and man-made disasters in all of history. The plan for the Great Leap Forward was to interconnect industry and agriculture by collectivizing basic human necessities and using "socialist economics to increase Chinese production of steel, coal, and electricity" (Mitter 2016, p. 55). Through this program, Mao wanted to increase China's agricultural and industrial output to a level that would one day surpass that of Britain. The result, however, was an unmitigated failure that saw the death of millions due to widespread famine, human rights abuses by the regime, and general social strife. During the Cultural Revolution some years later, Mao called forth young people from across the country to target political dissenters, alleged enemies of the state, and intellectuals by publicly humiliating, physically punishing, and even killing them. His goal was to fundamentally shift the culture of the Chinese people from its allegedly backward and reactionary roots to a new proletarian culture that promoted positive class consciousness and emancipation, as well as to rejuvenate the revolutionary spirit within China. It led to the destruction and closure of many vestiges of traditional civilization and learning across the country, including schools, universities, and temples.
Many young people joined pro-Mao Red Guard groups, which Mao tasked with shaking up the system; the resulting chaos eventually compelled him to deploy the army to restore order. The human toll of the Cultural Revolution was enormous, with an estimated one million Chinese killed at the Red Guards' hands.
Like Stalin, Mao enacted and presided over a raft of horrifying measures to control his population and exercise his will. Some of these actions were intended to deter resistance and punish dissent against Communist rule, but others had the goal of compelling the Chinese people to carry out the political, economic, and social transformations that Mao desired. According to the Black Book of Communism, a staggering 65 million Chinese people perished at the hands of Mao and his attempts to create and control a Communist regime in China (Edwards 2010).
The Kim Family

One of the other prominent adaptors of Marxist doctrine is the Kim Family regime, which has ruled over North Korea since the end of the Second World War and which continues to present as a hardline Communist regime today. In addition to continuing many of the traits of Communism advanced by Marx, Lenin, Stalin, and Mao, the Kim Family added Juche as a uniquely Korean variant of the theory. Kim Il-Sung, who ruled over the country from 1948 to 1994, explained the concept in his own words: "Juche means that the masters of the revolution and the work of construction are the masses of the people and that they are also the motive force of the revolution and the work of construction. In other words, one is responsible for one's own destiny and one has also the capacity for hewing out one's own destiny" (Kim Il-Sung 1975, p. 173). Juche also involves a number of other important components, including the messages that the North Korean people should be self-sufficient rather than reliant upon foreign help, that inequality does not disappear in the immediate aftermath of the revolution but must be tackled over a protracted period of time, and that the population should stand in perpetual readiness to defend the nation against outside aggression. These themes, and their imposition upon the North Korean population by the Kim Family regime, have contributed to an at least partially deliberate self-isolation by North Korea and its labeling as "the Hermit Kingdom." Juche also
takes Mao's ideas about the Mass Line to the next level by suggesting that there is an inseparable connection between the Supreme Leader and the will of the masses. This holds that the leader draws his power and understanding from the masses, and that the leader is in turn the mastermind of the revolution and the sole legitimate representative of the working classes. This notion has become combined with a cult of personality that ascribes a semidivine status to the Supreme Leaders and links them with mystical aspects of North Korean history. Following the ascent of Kim Jong-il to the post of Supreme Leader after the death of his father in 1994, a new political philosophy of Songun ("Military First") was added to the ideological framework of the regime. This philosophy prioritized the military as the leading edge of the revolution and effectively placed it higher in the political and social hierarchies than all other groups. Following his own assumption of power after the death of Kim Jong-il in 2011, Kim Jong-un partially abandoned Songun and replaced it with the philosophy of Byungjin ("Parallel Development" of the country's economy and nuclear weapons). This creed aimed to rebalance power within the country back toward the Party rather than the military, although the degree to which this has succeeded remains as yet unknown. As in Stalin's Soviet Union and Mao's China, violence is used regularly by the Kim Family regime against the population to deter resistance, coerce obedience, and compel participation in collectivized agriculture and state-run industries. Propaganda that induces the people to support Communism, to detest America, and, most importantly, to revere the Kim Family is also used for this purpose. It is so horrifyingly effective that many North Korean defectors still revere their former leader and his family even after going through the turmoil of fleeing their home country and settling permanently into a new life abroad.
The cult of personality surrounding the Supreme Leader, discussed above, along with other security measures such as political purges and the extremely harsh treatment of political dissenters and their families, has helped the Kim
Family regime to maintain a firm grasp on a North Korean state whose days would almost certainly otherwise be numbered (Grice 2017).
The Future of Communism

To date, no society has successfully realized the full transition to Communism as envisaged in Marx's political theory, and there is little reason to think that this situation will change. Whether the kind of hardline Communism that existed under the reigns of Lenin, Stalin, and Mao will reemerge in the future is difficult to predict. On the one hand, the growth of an international human rights regime and a general liberalization of the international system over the past few decades would make it harder for any government to institute terror and purges upon its population. This could in turn be seen as making the establishment and maintenance of this kind of regime exponentially more challenging. North Korea, of course, remains a hardline Communist state, one of the most isolated and repressive in the world, and its citizens suffer as a result. Yet the recent sanctions applied against it on the basis of its human rights abuses (which exist separately from those applied against it for its nuclear and ballistic missile tests) could be viewed as proof that the global community is becoming increasingly intolerant of this kind of activity. On the other hand, human rights abuses remain unchecked in many parts of the world, including Chinese Tibet and Russian Chechnya, which suggests that new Communist dictatorships could thrive in states that are powerful enough to shrug off economic, diplomatic, and possibly even military sanctions. The reemergence of hardline Communism is not, however, the only way that the ideology could appear as a form of government in the future. The current ruling regime in China purports to be operating under a comparatively new concept called Socialism with Chinese Characteristics, which originated under Deng Xiaoping. At its core, this philosophy recognizes that China is still a heavily agricultural and
peasant-based society and rejects the idea that the Capitalist phase can be skipped over. Instead, Socialism with Chinese Characteristics promotes the idea that China must go through the Capitalist period described by Marx, but that the exploitation and oppression usually associated with that stage of history can be softened by private and public industry working hand in hand (Liu 2007). This concept can be viewed either as a nuanced approach to the future of Communism or as a fundamental breach of traditional Communist values, which stand in direct opposition to anything related to Capitalism.
Conclusion

Communism is a complex concept to unpack because the ideology is in many ways different from the government type, yet the two phenomena are also inseparably intertwined: studying one is impossible without referencing the other. Violence is heavily associated with Communism as a government type, and some of this can be ascribed to the ideology of Communism, which acknowledges the existing oppression within Capitalist and Feudal societies, as well as the need for class-based violent revolution to progress society along its historical path. Yet the violent nature of most Communist governments cannot be explained away purely on this basis. Instead, other factors play a role, including the absence of a democratic mandate and the resentment of the population toward being compelled to work on collectivized farms and in state-owned factories. Many Communist governments have, in response, employed severe security measures, including extreme force and barbarity, political indoctrination, and other forms of state pressure and persecution, in order to push forward with collectivization and to both deter and punish acts of rebellion. Even when used for the ostensibly benign purposes of bringing power to disadvantaged and exploited people within society, these tactics are nevertheless still barbarous terror tactics. Ironically, though, despite fundamental differences in goals and ideological considerations, much of the oppression and subjugation of the individual to the state that is typically pursued within Communist governments could be argued to have more in common with Fascism than with the Communist creed that such governments are supposed to embody.

Cross-References
▶ Fascism
▶ Nondemocratic Systems
▶ Totalitarianism

References
Boer, R. (2017). Stalin: From theology to the philosophy of socialism in power. Singapore: Springer Nature. https://doi.org/10.1007/978-981-10-6367-1_2.
Carr, E. H. (1959). Socialism in one country. In Socialism in one country 1924–1926. London: Palgrave Macmillan.
Courtois, S., Werth, N., Panné, J., Paczkowski, A., Bartošek, K., & Margolin, J. (1999). The black book of communism: Crimes, terror, repression. Cambridge, MA: Harvard University Press.
Edwards, L. (2010). The legacy of Mao Zedong is mass murder. Retrieved from https://www.heritage.org/asia/commentary/the-legacy-mao-zedong-mass-murder
Grice, F. (2017). The improbability of popular rebellion in Kim Jong-un's North Korea and policy alternatives for the USA. Journal of Asian Security and International Affairs, 4(3), 263–293.
Joseph, W. A. (2014). Politics in China: An introduction. Oxford: Oxford University Press.
Kim Il-Sung. (1975). For the independent peaceful reunification of Korea. New York: International Publishers.
Lenin, V. (1918). Speech at a rally and concert. Speech presented to the All-Russia Extraordinary Commission staff, Moscow. Retrieved from https://www.marxists.org/archive/lenin/works/1918/nov/07b.htm
Liu, J. (2007). What is Socialism with Chinese Characteristics? Hunan University of Science and Technology. Congress Marx International V – Contribution: Paris-Sorbonne et Nanterre (3/6 October 2007), Paris.
Mitter, R. (2016). Modern China: A very short introduction (2nd ed.). New York: Oxford University Press.
Ryan, J. (2007). Lenin's The state and revolution and Soviet state violence: A textual analysis. Revolutionary Russia, 20(2), 151–172. https://doi.org/10.1080/09546540701633452.
Shachtman, M. (1932). The reactionary theory of socialism in one country. Retrieved from https://www.marxists.org/archive/shachtma/1932/05/9yrslo5.htm
Shatz, M. (1984). Stalin, the great purge, and Russian history: A new look at the new class. The Carl Beck Papers, 305, 1–45.
Snyder, T. (2010). Bloodlands: Europe between Hitler and Stalin. New York: Basic Books.

Further Reading
Brown, A. (2009). The rise and fall of communism. New York: HarperCollins.
Holmes, L. (2009). Communism: A very short introduction. Oxford, UK: Oxford University Press.
Priestland, D. (2009). The red flag: A history of communism. New York: Grove Press.
Conceptualizations of Conflict and Conflict Resolution It is a reasonable assumption that the readership of this entry would expect the discourse on conflict and conflict resolution to begin with an analysis or definition of both terms (conflict and conflict resolution). This is a reasonable expectation and one that is adhered to in the narrative.
Conflict
Conflict and Conflict Resolution Wendell C. Wallace The Centre for Criminology and Criminal Justice, Department of Behavioural Sciences, The University of the West Indies, St. Augustine, Trinidad and Tobago Keywords
Conflict · Disputes · Conflict resolution
Introduction Interpersonal and other forms of conflicts are essential, natural, and unavoidable human phenomenon in both homogeneous and diverse societies as daily life increasingly brings individuals with different backgrounds, cultures, values, morals, beliefs, ethics, personalities, and objectives into contact with one another (Ghaffar 2009; Turnuklu et al. 2009). This diversity often leads to conflicting situations by and between individuals, organizations, entities, and nation states. In the context of conflict, Doğan (2016) points out that the occurrence of conflicts in every environment in which humans are present appears to be quite normal. In fact, Doğan (2016) posits that the genesis of conflict is equivalent to the history of humanity. Instructively, just as conflicts have been in existence for centuries, so too are attempts at their resolution. But what is conflict and what is conflict resolution?
There are many different conceptualizations of conflict, as well as of its sources. For example, Tschannen-Moran (2001) points out that conflict refers to some form of friction, disagreement, or discord arising within a group when the beliefs or actions of one or more members are either resisted by or unacceptable to one or more members of another group. Further, Tschannen-Moran (2001) adds that conflict pertains to the opposing ideas and actions of different entities, resulting in an antagonistic state. For Hocker and Wilmot (1985), conflict is "a struggle between at least two interdependent parties who perceive incompatible goals, scarce resources, and interference from the other party in achieving their goals" (p. 23). Vlah (2010) operationalizes conflict as a legal form of exhibiting differences and a normal element of communication processes that also offers a possibility for personal and social improvement; Doğan (2016) sees conflict as a divergence naturally occurring in life; and Tesfay (2002) holds that conflict is an expression of hostility and antagonism. A broader conceptualization emanates from the Heidelberg Institute for International Conflict Research, which defines conflict as "the clashing of interests (positional differences) on national values of some duration and magnitude between at least two parties that are determined to pursue their interests and win their cases" (HIIK 2005). For some individuals, the very mention of the term conflict conjures up images of fury, fear, tension, anger, disappointment, distrust, hostility, damage, death, and destruction
(Doğan 2016). Others view conflict as abnormal, dysfunctional, pathological, and something to be avoided or minimized at all costs. Such misconceptions have, however, been dispelled by Ghaffar (2009), Hocker and Wilmot (1998), and Seval (2006), as cited in Doğan (2016), and by Weeks (2000), all of whom view conflict positively, as an integral component of personal development, intellectual revolt, excitement, and encouragement. This view is premised on two notions: (1) that conflict is an inevitable part of the daily regimen of life (Doğan 2016), and (2) that there are both productive and destructive conflicts. According to Hocker and Wilmot (1985), destructive conflicts often escalate and destroy relationships; they are conflicts in which the parties are unhappy with the outcomes and feel that they have lost. Productive conflicts, by contrast, leave the parties feeling satisfied and usually involve a collaborative transformation of the elements of the conflict (Hocker and Wilmot 1985).
Sources of Conflict
Any attempt to conceptualize conflict resolution must begin with the possible sources of conflict. These sources are numerous and nonexhaustive, and a multiplicity of authors have outlined them in the literature. They include, but are not limited to, incompatible goals, poor communication, lack of resources, competition for common but scarce resources, status differences, goal differences, interdependence, authority relationships, jurisdictional ambiguities, divergent views and causes, and roles and expectations (Auerbach and Dolan 1997; Champoux 2003; De Janasz et al. 2006; Ghaffar 2009; Mohamad Johdi and Raman 2011; Rahim 2001).
Types of Conflict
There are many different categorizations and classifications of conflict. Shahmohammadi (2014) holds that conflicts fall into four types: (1) interpersonal conflict, which occurs between two individuals; (2) intrapersonal conflict, which occurs within an individual and takes place in the person's mind; (3) intragroup conflict, which occurs among individuals within a team owing to incompatibilities and misunderstandings among them; and (4) intergroup conflict, which takes place when a misunderstanding arises among different teams within an organization. Shahmohammadi's (2014) list is not exhaustive, however: conflicts that occur between states, as well as between state and nonstate actors – armed conflict – must be added to the typology. There are, moreover, different types of armed conflict, identified by the Uppsala Conflict Data Program (UCDP) (n.d.), as cited by Ganiyu (2010, p. 13), as: (1) interstate armed conflict, which occurs between two or more states; (2) extrastate armed conflict, which occurs between a state and a nonstate group outside its own territory; (3) internationalized internal armed conflict, which occurs between the government of a state and internal opposition groups with intervention from other states; and (4) internal armed conflict, which occurs between the government of a state and internal opposition groups without intervention from other states.
Conflict Resolution
Like conflict, conflict resolution has many conceptualizations. For example, Kriesberg (2009) points out that conflict resolution relates to all strategies of conflict solution in domains of conflict, whether within or between families, organizations, communities, or countries. Batton (2002) refers to conflict resolution as a philosophy and set of skills that assist individuals and groups in better understanding and dealing with conflict as it arises in all aspects of their lives, while Sweeney and Caruthers (1996) define conflict resolution simply as "the process used by parties in conflict to reach a settlement."
As it relates to the resolution of conflicts, the extant literature notes that some supporters of conflict resolution tend not to believe in enforced settlements, holding instead that the consent and contentment of the parties to a conflict are central. On this approach, de Bono (1985) submits that solutions must emanate from within the disputing parties; the role of a third party is vital to the resolution process, but only insofar as the third party facilitates the interaction aimed at resolving the conflicting situation(s). A major challenge to understanding conflict and conflict resolution is that the terms conflict resolution and conflict management are sometimes used interchangeably and are therefore misused and misrepresented. The two are not one and the same and should not be used interchangeably, as they are markedly different concepts. Conflict management refers to the process of limiting the negative aspects of conflict while increasing its positive aspects (Carr 2013), whereas conflict resolution refers to strategies of conflict solution (Kriesberg 2009). Importantly, proponents of conflict resolution (as opposed to conflict management) place emphasis on identifying and discussing the fundamental issues, as they believe that conflict can be resolved. This is evidenced by the work of Mitchell (1989), who points out that by identifying and discussing the issues associated with the conflict, disruptive conflict behavior will cease, hostile attitudes and perceptions will be ameliorated, and the initial source of conflict will be removed, ensuring that no unsatisfied goals remain to convolute the future.
Conclusion
Doğan (2016) submits that conflict has very long antecedents, dating from the beginning of human history. Occurrences of conflict will probably never end, for as Doğan (2016) proffers, conflict will continue as long as there are differences in the values, beliefs, and cultures of people and groups. Conflict is not in itself bad; however, if conflicts remain unresolved, the consequences can be deleterious. In sum, conflict is a contradiction of people's current values, expectations, and goals (Penda 2005), while conflict resolution represents any attempt at reconciling those contradictions (Kriesberg 2009).
Cross-References ▶ Mediation
References
Auerbach, A. J., & Dolan, S. L. (1997). Fundamentals of organizational behaviour: The Canadian context. Toronto: ITP Nelson.
Batton, J. (2002). Institutionalizing conflict resolution education: The Ohio model. Conflict Resolution Quarterly, 19(4), 479–494.
Carr, K. (2013). Effects of violence prevention programs on middle and high school violent behaviors. Unpublished Master of Arts thesis, Northern Michigan University.
Champoux, J. E. (2003). Organizational behavior: Essential tenets (2nd ed.). Canada: South-Western.
de Bono, E. (1985). Conflicts: A better way to resolve them. London: Harrap.
De Janasz, S. C., Dowd, K. O., & Schneider, B. Z. (2006). Interpersonal skills in organizations (2nd ed.). New York: McGraw-Hill/Irwin.
Doğan, S. (2016). Conflicts management model in school: A mixed design study. Journal of Education and Learning, 5(2), 200–219.
Ganiyu, O. T. (2010). Preventing interstate armed conflict: Whose responsibility? Unpublished thesis, Jönköping International Business School, Jönköping University.
Ghaffar, A. (2009). Conflict in schools: Its causes & management strategies. Journal of Managerial Sciences, 3(2), 212–227.
Heidelberg Institute for International Conflict Research (HIIK). (2005). Conflict barometer 2005: Crises, wars, coups d'état, negotiations, mediations, peace settlements. http://www.rzuser.uniheidelberg.de/~lscheith/CoBa05.pdf
Hocker, J. L., & Wilmot, W. W. (1985). Interpersonal conflict (2nd rev. ed.). Dubuque: William C. Brown Publishers.
Hocker, J. L., & Wilmot, W. W. (1998). Interpersonal conflict (5th ed.). Madison: Brown and Benchmark.
Kriesberg, L. (2009). The evolution of conflict resolution. In J. Bercovitch, V. Kremenyuk, & I. William Zartman (Eds.), The Sage handbook of conflict resolution. Thousand Oaks: Sage.
Mitchell, C. R. (1989). The structure of international conflict. London: Macmillan.
Mohamad Johdi, S., & Raman, R. (2011). Conflict management in the MARA education institutions, Malaysia. National Seminar of Deans Council, Faculties of Education, Universities of Malaysia (SMDD2011), Universiti Putra Malaysia, Serdang, 27–28 September 2011.
Penda, I. A. (2005). The fundamental values of the European Union – From utopia to reality. Politička misao, 18(3), 157–172.
Rahim, M. A. (2001). Managing conflict in organizations (3rd ed.). Westport: Quorum Books.
Seval, H. (2006). Çatışmanın etkileri ve yönetimi. Manas Sosyal Bilimler Dergisi, 15, 245–254.
Shahmohammadi, S. (2014). Conflict management among secondary school students. Social and Behavioral Sciences, 159, 630–635.
Sweeney, B., & Caruthers, W. L. (1996). Conflict resolution: History, philosophy, theory and educational applications. School Counselor, 43, 326–344.
Tesfay, G. (2002). A study of factors that generate conflict between government secondary school teachers and educational managers in Addis Ababa Administrative Region. Unpublished thesis, School of Graduate Studies, Addis Ababa University.
Tschannen-Moran, M. (2001). The effects of a state-wide conflict management initiative in schools. American Secondary Education, 29(3), 2–32.
Turnuklu, A., Kacmaz, T., Turk, F., Kalender, A., Sevkin, B., & Zengin, F. (2009). Helping students resolve their conflicts through conflict resolution and peer mediation training. Procedia Social and Behavioral Sciences, 1, 639–647.
Uppsala Conflict Data Program (UCDP). (n.d.). http://www.pcr.uu.se/research/ucdp/definitions/definition_of_armed_conflict/
Vlah, N. (2010). Concept and structure of social conflict. Educational Sciences, 2(12), 373–385.
Weeks, D. (2000). The eight essential steps to conflict resolution. Osijek: Sunce.
Further Readings
Folger, J. P., Poole, S. M., & Stutman, R. K. (2016). Working through conflict: Strategies for relationships, groups, and organizations (7th ed.). London/New York: Routledge.
Hocker, J. L., & Wilmot, W. W. (2017). Interpersonal conflict (10th ed.). New York: McGraw-Hill Education.
Jeong, H.-W. (2008). Understanding conflict and conflict analysis. London: SAGE Publications Ltd.
Schellenberg, J. A. (1996). Conflict resolution: Theory, research, and practice. Albany: State University of New York Press.
Wagner-Pacifici, R., & Hall, M. (2012). Resolution of social conflict. Annual Review of Sociology, 38(1), 181–199.
Convention on Biological Diversity Emma Mitrotta School of International Studies, University of Trento, Trento, Italy Keywords
Biodiversity · Environmental protection · Sustainable use · Protected areas · Indigenous and local communities · Participation · Environmental security · Global security · Food security
Introduction
The Convention on Biological Diversity (CBD) was adopted at the United Nations Conference on Environment and Development (UNCED), held in Rio de Janeiro in June 1992. It entered into force on December 29, 1993, and enjoys almost universal membership, with the notable exception of the United States, which signed it in 1994 but never ratified it. In addition to the CBD, UNCED produced another treaty, the United Nations Framework Convention on Climate Change, and three soft law instruments: the Rio Declaration on Environment and Development, Agenda 21, and the Non-legally Binding Authoritative Statement of Principles for a Global Consensus on the Management, Conservation and Sustainable Development of All Types of Forests. Since the 1980s, the realization that all life on Earth is interconnected has been reflected in several soft law instruments, in particular the 1982 World Charter for Nature, the legal principles of the World Commission on Environment and Development, and the 1991 World Conservation Strategy resulting from the joint efforts of the International Union for Conservation of Nature (IUCN), the United Nations Environment Programme (UNEP), and the World Wildlife Fund (WWF) (Boyle 1996). The CBD builds on these instruments and, in its Article 2, defines biological diversity, or biodiversity, as "the variability among
living organisms from all sources including, inter alia, terrestrial, marine and other aquatic ecosystems and the ecological complexes of which they are part: this includes diversity within species, between species and of ecosystems." Hence, biodiversity encompasses genetic diversity within species, species diversity, and diversity of ecosystems. The CBD approach to conservation endorses, and goes beyond, the species-, habitat-, and issue-specific protection ensured by ad hoc regimes. It is the only global and comprehensive agreement dealing with the conservation of biodiversity and allowing for the sustainable use of biological resources, which are under the sovereignty of the states where they are located and are subject to a system of access- and benefit-sharing (Dupuy and Viñuales 2015). The CBD works as a framework convention: it provides broad goals and guiding principles that can be developed at the national level and further articulated through supplementary legal agreements, such as protocols, as well as soft law instruments, especially the decisions of the CBD Conference of the Parties (COP), the governing body of the Convention. The next sections provide a brief description of the CBD text and the institutional architecture it outlines, and discuss the extent to which the Convention contributes to strengthening human, environmental, and global security. Indeed, safeguarding and increasing the potential and diversity of nature through biodiversity conservation has implications for food security (see ▶ "Food Insecurity") and sustainable agriculture. These aspects are specifically addressed in the framework of the CBD Programme of Work on Agricultural Biodiversity and its cross-cutting "International Initiative on Biodiversity for Food and Nutrition."
The Convention on Biological Diversity The preamble of the CBD starts by recognizing the intrinsic value of biodiversity and spelling out a range of other values that benefit humans and ensure the maintenance of the biosphere.
Biodiversity conservation is defined as a common concern of humankind, which means that all states – including non-parties – have a legitimate interest in conserving biodiversity and, at the same time, the responsibility to achieve this purpose, individually and jointly. Moreover, as a common concern, biodiversity conservation and sustainable use have an inter-temporal character, by virtue of a direct reference to the benefit of both present and future generations. It is also acknowledged that states have sovereign rights over their own biological resources, together with the responsibility to use such resources sustainably and to conserve biodiversity, which is increasingly affected by human activities. Rights and responsibilities are two sides of the same coin and reconnect to the idea that biodiversity has to be protected both for its intrinsic value and for the benefits it provides to humankind. The principles of prevention and precaution are also included in the preamble, as is the principle of cooperation, which is not limited to an interstate dimension but encompasses intergovernmental organizations and the nongovernmental sector. The preamble further recognizes the contribution of other actors to the objectives of the Convention, including at the subnational level, such as women and indigenous and local communities. Arguably, by referring to different non-state actors, the CBD advances their role at the international level and confirms the idea that the international community goes beyond states, as evidenced by their increased participation at UNCED in Rio and in line with other instruments adopted on the same occasion, such as the Rio Declaration and Agenda 21. Article 1 defines the three main objectives of the Convention: the conservation of biodiversity, the sustainable use of its components, and the fair and equitable sharing of the benefits deriving from such utilization.
Sustainable use is defined in Article 2 as "the use of components of biological diversity in a way and at a rate that does not lead to the long-term decline of biological diversity, thereby maintaining its potential to meet the needs and aspirations of present and future generations." Arguably, sustainable use qualifies states' permanent sovereignty over natural resources, which is
reiterated in Article 3, together with state responsibility not to cause damage to the environment of other states or of areas beyond the limits of national jurisdiction. In so doing, the Convention restates verbatim Principle 21 of the Stockholm Declaration, without the developmental component introduced by Principle 2 of the Rio Declaration. Article 4 affirms that CBD provisions apply to biodiversity components located within the territory of state parties and to processes and activities carried out under their jurisdiction or control, both within their territories and beyond the limits of national jurisdiction, regardless of where the effects of such processes and activities occur. Hence, it indirectly extends the jurisdictional scope to all biodiversity, including components located in areas outside national jurisdiction (Matz-Lück 2008). Article 5 introduces the principle of cooperation, which has a general scope and is complemented by the procedural obligations to inform, notify, and consult foreseen in Article 14. Cooperation can also be advanced through the exchange of information (Article 17) and technical and scientific cooperation (Article 18), as well as through the technical and financial assistance that developed countries must provide to developing ones under Articles 12 and 20. Articles 6 to 9 specify the requirements of conservation and sustainable use that limit state sovereignty over biodiversity resources. Article 6 requires state parties to develop national strategies, plans, or programs for the conservation and sustainable use of biodiversity and to integrate these objectives into other sectoral and cross-sectoral planning and policy instruments. Article 7 requires states to identify which biodiversity components are important for conservation and sustainable use, to monitor them, and to identify and monitor any process or activity likely to impact their conservation and sustainable use.
Articles 8 and 9 illustrate possible conservation measures. Conservation in situ (Article 8) must be preferred to ex situ measures (Article 9), which can be used as a complementary tool to preserve biodiversity components outside their natural surroundings. Protected areas, buffer
zones, the ecosystem approach, the restoration of degraded ecosystems, the recovery of threatened species, the control of invasive alien species, the preservation of traditional knowledge, and management practices are identified among the in situ conservation measures. Article 10 provides details on the sustainable use of biodiversity components, including through traditional cultural practices (Article 10 (c)). According to Article 11, conservation and sustainable use of biodiversity components should be supported with social and economic incentives adopted at national level. Furthermore, parties should foster the understanding of biodiversity conservation through educational and awareness-raising programs both at national and international levels (Article 13). Article 14 requires states to introduce environmental impact assessment (EIA) procedures for projects that are likely to impact on biodiversity. This provision shows that states cannot disregard the negative consequences of their actions and promotes the precautionary principle, which is otherwise only explicitly recognized in the preamble to the CBD (Boyle 1996). Access to genetic resources and the consequent sharing of benefits are addressed in Article 15. This is among the main objectives of the CBD and is thus crucial for its implementation. This provision confirms state sovereign rights by affirming that “the authority to determine access to genetic resources rests with the national governments and is subject to national legislation.” Access should be granted on mutually agreed terms (Article 15(4)) and subject to obtaining the prior informed consent of the provider party (Article 15(5)), which should benefit, in a fair and equitable way, from the results of the research and development as well as from the benefits arising from the commercial or other uses of genetic resources (Article 15(7)). 
The provider state should facilitate access for other parties for environmentally sound uses and avoid imposing restrictions that would defeat the objectives of the CBD (Article 15(2)). The issue of access and benefit-sharing has been addressed in a specific agreement that is supplementary to the CBD, the Nagoya Protocol on Access and Benefit-Sharing
(Nagoya Protocol), adopted in 2010. This Protocol also deals with traditional knowledge associated with genetic resources covered by the CBD and the benefits deriving from their utilization. Article 16 regulates the transfer of technology and the protection of intellectual property, while Article 19 addresses biotechnological research activities on genetic resources and the distribution of benefits arising from them. These provisions are meant to favor developing countries that are usually provider countries and lack the technical and financial means to develop such knowledge and technologies. Pursuant to Article 19(3), the Cartagena Protocol on Biosafety was adopted in 2000 as a supplementary agreement to the CBD to deal with the movements of living modified organisms resulting from modern biotechnology from one country to another.
Institutional Design and Dispute Settlement Similar to other multilateral environmental treaties, the CBD relies on its COP, which is responsible for reviewing and enhancing the implementation of the Convention. Article 23 details the tasks of the COP, which include receiving and reviewing national reports as well as the advice of subsidiary bodies, the establishment of bodies necessary for supplying the scientific and technical advice needed for the implementation of the Convention, and adopting amendments, protocols, and annexes. The COP convenes every 2 years and has played a significant role in advancing the CBD regime through its decisions. Article 24 establishes the secretariat, which has administrative functions and coordinates the relationship with other international bodies. Given the complexity of ecological interdependencies, Article 25 foresees the establishment of a Subsidiary Body on Scientific, Technical and Technological Advice and details its tasks aimed at addressing the scientific and technical developments in this field. Article 26 requires state parties to periodically report on the implementation of the Convention; these reports are reviewed by the COP, which can
comment on weaknesses or failures of the parties. Moreover, the COP can rely on other methods foreseen in the Convention to strengthen implementation, including the development of training and capacity-building programs (Article 12), public education and awareness (Article 13), EIA (Article 14), the exchange of information (Article 17), and cooperation in its different forms (Birnie et al. 2009). The CBD foresees numerous incentives aimed at enhancing compliance at both the national and international levels, but there is no formal institutional mechanism for compliance control or enforcement as such. The compulsory method for addressing disputes concerning the interpretation or application of the CBD is negotiation (Article 27). Nevertheless, the parties can also resort to other mechanisms – namely, arbitration or the jurisdiction of the International Court of Justice – if so declared when ratifying, accepting, approving, or acceding to the CBD.
The Contribution of the CBD to Human, Environmental, and Global Security Biodiversity conservation through the implementation of the CBD can arguably contribute to human security, environmental security, and global security, not least by enhancing the internal stability of countries. Environmental security is meant to ensure the viability of ecological processes and the survival of human life on Earth, hence calling for the protection of the environment per se and encompassing the concept of human security, which deals with fundamental human needs such as food and shelter as well as physical and mental health. Instead, global security is used in a more traditional sense and refers to conflict prevention and peace in an interstate perspective. Indeed, these concepts overlap to a certain extent; human security can be included within both environmental and global security; moreover, the overlap emerges when considering environmental degradation caused by warfare as an environmental security threat or when environmental security threats contribute to increased conflict both within and between countries.
Therefore, the concept of environmental security deals with nontraditional (nonmilitary) security challenges deriving from environment-related stresses, such as climate change-related events or the scarcity of primary resources like food and water, which can have transboundary impacts and thus require states to unite their efforts. Arguably, the whole Convention contributes to human, environmental, and global security by way of its objectives. Biodiversity conservation and the sustainable use of biodiversity components enhance the preservation of natural resources and ecosystems, thus increasing their intrinsic value (environmental security); they also contribute to meeting "the food, health and other needs of the growing world population" (human security) and "to strengthen friendly relations among States and contribute to peace for humankind" (global security), as stated in the preamble. Moreover, the fair and equitable sharing of benefits arising from the utilization of genetic resources is conducive to internal stability and global security by fighting biopiracy and facilitating access to genetic resources. The security dimension also emerges in other provisions. Biodiversity conservation can be pursued through multiple (in situ and ex situ) measures, and protected areas represent a primary mechanism for this purpose (Article 8(a)). When extending across countries, transboundary protected areas (TBPAs) can boost environmental and global security. TBPAs are flexible frameworks that enable cross-border conservation efforts by applying to diverse natural spaces and interconnected ecosystems, which have to be governed through the integrated management of natural resources and the ecosystem approach; they also enable large-scale processes such as animal migration.
TBPAs are used for intergovernmental cooperation, but they are increasingly including governance mechanisms that facilitate the participation of subnational actors, including indigenous and local communities. They have also been employed to attain peace objectives (as peace parks) and to reconnect communities divided by externally imposed boundaries. Hence, this link between
biodiversity conservation, TBPAs, and peace has the potential to effectively foster environmental and global security. Moreover, the promotion of peace and global security is indirectly pursued through the cooperative attitude that pervades the Convention: in the preamble; in Article 5 for areas beyond national jurisdiction and on matters of mutual interest; between industrialized and developing countries in technical and financial terms (Articles 12, 16, 17, 18, 19, 20); through the exchange of information (Article 17); and through the obligations to consult with and notify potentially affected states of environmental dangers or damages (Article 14(1)(c) and (d)). These latter obligations also increase environmental security by aiming to initiate a dialogue between the interested parties on actions to prevent or minimize the expected danger or damage (Article 14(1)(d)). Similarly, environmental impact assessments for projects likely to impact biodiversity (Article 14(1)(a)) can effectively pursue environmental security by redefining, or even halting, projects that would cause disproportionate or irreversible damage. Biodiversity is key to strengthening food security, nutritional balance, and human health in both urban and rural contexts, and is inherently linked to sustainable agriculture and the adoption of the ecosystem approach. The CBD cross-cutting initiative on biodiversity for food and nutrition supports actions that promote and conserve the wider use of biodiversity – both plant and animal species – and provide opportunities for sustainable livelihoods, in order to tackle local problems of hunger and malnutrition, halt the erosion of food cultures, and counteract uniformity in the agricultural market and human diets (CBD COP Decision VIII/23, 2006).
This initiative is framed within the CBD Programme of Work on Agricultural Biodiversity and carried out in partnership with the Food and Agricultural Organization and the International Plant Genetic Resources Institute. This initiative stresses the importance of reconciling human health and ecosystem health and has the potential to help in the eradication of poverty. In fact, it was originally meant to contribute to achieving the Millennium Development Goals (MDG), namely, MDG n. 1 on the
eradication of poverty and hunger and n. 7 on ensuring environmental sustainability (CBD COP Decision VIII/23, 2006, Annex, B), and can arguably be connected to the more recent Sustainable Development Goals (SDGs). In particular, a direct link can be traced to SDG 2 ("zero hunger"), SDG 3 ("good health and well-being"), and SDG 15 ("life on land"); moreover, it is relevant to SDG 1 ("no poverty"), SDG 5 on gender equality (given the critical role that women play in the maintenance of diverse food systems, especially in local and traditional communities), SDG 6 on clean water and sanitation, SDG 12 on responsible consumption and production, and SDG 13 on climate action. Indigenous and local communities can play a key role in strengthening the links between biodiversity, food, and nutrition by maintaining and enhancing biodiversity conservation through their traditional knowledge and practices (CBD COP Decision VIII/23, 2006, Annex, D, Element 3) – including knowledge of the properties and uses of genetic resources – with positive repercussions for human and environmental security more generally. In this respect, Article 8(j) promotes the respect, preservation, and maintenance of these communities' knowledge and traditional practices for the conservation and sustainable use of biodiversity. This focus has been widened through the activity of the CBD COP and dedicated working groups; the Nagoya Protocol then laid the basis for the effective participation of indigenous and local communities in sharing the benefits from the utilization of genetic resources they hold where, in accordance with national legislation, they have "established rights" over those resources (Article 5(2)). International and regional human rights courts have contributed to establishing indigenous peoples' rights connected to the environment, such as the right to own and use traditional land and natural resources, via an extensive interpretation of certain treaty provisions.
In particular, these courts have built on the rights of minorities, the right to culture, the right to property, the right to life, and participatory rights in both their substantive and procedural dimensions (Fodella 2013). In this sense, human rights jurisprudence
has contributed to the development of international law through cross-sectoral fertilization, has enhanced the role of indigenous peoples as actors under international law, and has defined the collective dimension of rights that were originally attributed to individuals – thus strengthening not only indigenous collective rights but also group rights – and made those rights justiciable (Fodella 2013). Arguably, strengthening indigenous and local communities’ rights in relation to the conservation and sustainable use of biodiversity and the fair and equitable sharing of benefits arising from the use of such resources by the state or third parties has the potential to reinforce human security by empowering these communities and improving their living conditions, to strengthen environmental security due to the improved biodiversity status, and to enhance internal stability and thus contribute to global security.
Conclusion
Biodiversity conservation is pursued through a variety of international instruments dedicated to the protection of specific species and habitats and to environmental issues and areas. The CBD aims to address this fragmentation and establish "a comprehensive global regime for the protection of nature" that also deals with natural resources "located wholly within a State's own national boundaries" (Boyle 1996, p. 33). As a framework convention, the CBD defines broad goals and guiding principles that can be further developed at the national level and through supplementary legal agreements and can be adapted over time by way of an evolutive interpretation. Moreover, the CBD can contribute to achieving specific objectives through dedicated programs of work and initiatives, as in the case of food security, sustainable agriculture, and the role of local and indigenous communities. As explained, the implementation of this Convention and the progress it has prompted in international (environmental) law contribute to strengthening human security, environmental security, and global security.
Cross-References
▶ Endangered Species
▶ Environmental Security
▶ Food Insecurity
References
Birnie, P., Boyle, A., & Redgwell, C. (2009). International law and the environment. New York: Oxford University Press, 612 ff.
Boyle, A. (1996). The Rio convention on biological diversity. In M. Bowman & C. Redgwell (Eds.), International law and the conservation of biological diversity (pp. 33–49). London/The Hague/Boston: Kluwer Law International.
Dupuy, P.-M., & Viñuales, J. E. (2015). International environmental law. Cambridge: Cambridge University Press.
Fodella, A. (2013). Indigenous peoples, the environment, and international jurisprudence. In N. Boschiero, T. Scovazzi, C. Pitea, & C. Ragni (Eds.), International courts and the development of international law: Essays in honour of Tullio Treves (pp. 349–364). The Hague: T.M.C. Asser Press.
Matz-Lück, N. (2008). Biological diversity, international protection. In R. Wolfrum (Ed.), Max Planck encyclopedia of public international law (online edition). https://opil.ouplaw.com/view/10.1093/law:epil/9780199231690/law-9780199231690-e1562. Accessed 13 May 2018.
Ong, D. M. (2010). International environmental law governing threats to biological diversity. In M. Fitzmaurice, D. M. Ong, & P. Merkouris (Eds.), Research handbook of international environmental law (pp. 519–541). Cheltenham/Northampton: Edward Elgar Publishing.
Rayfuse, R. (2007). Biological diversity. In D. Bodansky, J. Brunnée, & E. Hey (Eds.), The Oxford handbook of international environmental law (pp. 362–393). Oxford: Oxford University Press.
Secretariat of the Convention on Biological Diversity. (2001). Handbook of the convention on biological diversity. London: Routledge.
Shine, C., & Kohona, P. T. B. (1992). The convention on biological diversity: Bridging the gap between conservation and development. Review of European Community & International Environmental Law, 1(3), 278–288.

Further Reading
Beyerlin, U. (2014). Universal transboundary protection of biodiversity and its impact on the low-level transboundary protection of wildlife. In L. J. Kotzé & T. Marauhn (Eds.), Transboundary biodiversity governance (pp. 105–127). Leiden/Boston: Brill.
Bowman, M., & Redgwell, C. (Eds.). (1996). International law and the conservation of biological diversity. London/The Hague/Boston: Kluwer Law International.
Brunnée, J., & Toope, S. J. (1997). Environmental security and freshwater resources: Ecosystem regime building. The American Journal of International Law, 91(1), 26–59.
Chandler, M. (1993). The biodiversity convention: Selected issues of interest to the international lawyer. Colorado Journal of International Environmental Law and Policy, 4(1), 141–175.
Glowka, L., Burhenne-Guilmin, F., Synge, H., McNeely, J. A., & Gündling, L. (1994). A guide to the convention on biological diversity (IUCN environmental policy and law paper no. 30). Gland: IUCN.
Koester, V. (2006). The nature of the convention on biological diversity and its application of components of the concept of sustainable development. The Italian Yearbook of International Law Online, 16(1), 57–84.

Conventions Against Corruption
Samuel Olufeso
University of Ibadan, Ibadan, Nigeria

Keywords
Corruption · Anti-corruption strategy

Introduction
This article provides an introduction to issues of corruption and anti-corruption strategy. It examines the strands, manifestations, and consequences of corruption in any state. It further interrogates national and international anti-corruption strategies employed to combat corruption in all its ramifications. It finally concludes with a discussion on how to prevent and tackle corruption.
Corruption
Volumes of articles within and outside academia have been written to address the issue. Also, conferences and symposia have been organized to address the nature, manifestations, strands, and the effects of corruption. It remains a daily occurrence in countries across the globe. No wonder
Amundsen (1999: 1) submits that corruption is not peculiar to any clime: it is ubiquitous. It is deep-rooted in poor countries of sub-Saharan Africa. It is widespread in Latin America. It is deep-seated in many of the newly industrialized countries and has reached alarming proportions in several post-communist states. Despite its ubiquitous nature across geographies, it also finds comfort within the private and public realms. As reported by Transparency International, "corruption is one of the greatest challenges of the contemporary world. It undermines good government, fundamentally distorts public policy, leads to the misallocation of resources, harms the private sector and development and particularly hurts the poor" (TI 1998). As a topical discourse with global visibility, its magnitude and proportions differ from country to country. In some climes, it is rare; in other places, it is systemic. Where it is rare, it is easily spotted as its culprits are punished. In places where it is systemic, it resembles a hydra-headed monster as incentives are created for its perpetuity. Consequently, corruption is fastened to the deployment of power and resources in a manner that advances personal, sectional, religious, and other parochial interests at the peril of a broad-based social, national, or global need. This is because power and resources are channeled toward private enrichment and personal aggrandizement (World Bank 1998). Theft, bribery, kickbacks, overinvoicing, patronage, extortion, nepotism, and other practices that stand in dissonance with the institutionalized norms of a society are tied to corruption (Chow 2005). Given its widespread nature, in many countries there exist euphemisms to describe these practices: egunje in Nigeria, mordida in Mexico, arreglo in the Philippines, baksheesh in Egypt, dash in Kenya, pot-de-vin in France, steekpenning in the Netherlands, tangente in Italy, or kenőpénz in Hungary.
All these euphemisms are argots employed to refer to bribes in money or favor, offered or given to a person in a position of trust to sway his/her disposition. At the same time, its universal presence makes corruption an ambiguous concept in social discourse with no established denotation. This implies
that there is no generally agreed definition or an encompassing meaning for the term. In fact, it has been viewed from variegated standpoints and perspectives by scholars, agencies, think tanks, commentators, and analysts across space. This has obstructed the attainment of a definitional homogeneity of the concept within academia and the active domain of administration. Regardless of these variations, Sorkaa (2002) aptly describes corruption as the erosion of ethics and accountability in public life. Expressing a corresponding opinion, Nye (1967) and Adesina (2016) see it as a moral impurity, as it involves a failure to conform to certain social standards. Edame (2001) and Aluko (2009) see corruption as an antisocial behavior that bestows improper advantages contrary to legal and moral precepts to advance the living conditions of its beneficiaries. Therefore, it remains the abuse of entrusted power for private gain (World Bank 1998). To cite concrete examples, corruption may involve (but is not limited to) asking, giving, or taking a free gift or a favor in exchange for the performance of a legitimate task; the perversion or obstruction of the performance of such a task; and the performance of an illegitimate task such as hoarding; collusive price-fixing; smuggling; transfer-pricing; inflation of prices; election-rigging; illegal arrest for harassment or intimidation purposes; abuse/misuse/nonuse of office, position, or power; dumping of obsolete machinery or outdated drugs; illegal foreign exchange transactions; legal but obviously unfair and unjust acquisition of wealth; "gilded crimes"; certificate forgery; false accounts and claims; and diversion of public, corporate, or private money or property to direct or indirect personal use (Odekunle 1986). Furthermore, corruption can be categorized into varying strands and forms. Its categorization is anchored in the institutional position of the participants, the nature of the transaction, or the underlying agenda.
To an extent, the most worthwhile categorization of the varying shades of corruption is provided by the United Nations Office for Drug Control and Crime Prevention (ODCCP). The entire spectrum includes bribery, embezzlement, theft, fraud, extortion,
exploiting a conflict of interest, influence peddling, insider trading, the offering or receiving of an unlawful gratuity, favor or illegal commission, favoritism, nepotism/clientelism, illegal political contributions, and money laundering (UN 2001). Transparency International divides it into three layers: petty (management), grand (leadership), and political (systemic) (TI n.d.). Petty corruption captures the daily abuse of entrusted power by lower-ranked and mid-ranked state officials in their engagements with citizens who often access basic goods in the public domain such as health, education, police departments, and other agencies of the state. Grand corruption entails deeds committed at the highest echelons of government that twist policies or the central functioning of the state, enabling leaders to profit at the expense of the public good. Political corruption reflects the intentional tweaking of policies, institutions, rules of procedure for the allocation of resources, and financing by political decision-makers who abuse their position to sustain their power, status, and wealth (TI 2002, 2013). As the act cuts across countries, so do its effects. In the developing world, it renders state institutions prostrate, dysfunctional, and, in a few cases, moribund; weakens and suffocates developmental efforts in all their ramifications; induces mediocrity in the public sector; perverts justice and fairness, coupled with a deterioration of law and order; and causes deprivation and chronic poverty. Relatedly, public resources are either allocated wastefully or siphoned, able and frank citizens feel exasperated, and the populace develops distrust of the government. The summative effects of these are the disappearance of foreign aid, incomplete and abandoned projects, low productivity, inept administration and management, and weakened legitimacy. Above all, corruption ruins economic development.
Eventually, this results in political instability, dilapidated infrastructure, battered health care, a battered education system, and further problems. Individuals and groups who wish to conduct their businesses honorably are discouraged and lose confidence in the rule of law. Consequently, corruption triggers mistrust in public establishments, weakens scrupulous doctrines
by recompensing those eager and able to “grease palms” or “wet ground,” and maintains unfairness.
Measures Against Corruption
Measures have been adopted to combat corruption on the international and national fronts. The measures include the approval of national and international agreements like the United Nations and African Union Conventions designed to tackle the entire spectrum of corruption, coupled with the formation of autonomous bodies known as anti-corruption agencies or ACAs (Agbiboa 2013; Ampratwum 2008). A small number of these ACAs have succeeded in curtailing systemic corruption in their countries, whereas the majority are toothless bulldogs that "bark" without "biting" corrupt elements in the society. This has further accentuated grand corruption in these societies. For instance, Hong Kong's Independent Commission Against Corruption (ICAC) and Singapore's Corrupt Practices Investigation Bureau (CPIB) exemplify successful ACAs in the world (Gregory 2015). A greater number of these bodies are nonetheless notorious for their failure, particularly in developing countries of Africa where corruption has adorned the toga of social acceptance in both private and public realms (Fjeldstad and Isaksen 2008; Heeks and Mathisen 2012). For example, Nigeria's Advance Fee Fraud and Fraud Related Offences Act of 1995, Corrupt Practices and Other Related Offences Act of 2000, and Money Laundering Act of 2003 were enacted by successive administrations ostensibly interested in wrestling with corruption, with little in the way of results to show for this. Corruption is the most sinister social debacle: it threatens the overall growth and development of a country, retards the pace of development in all areas, and widens the lacuna between the rich and the poor, as the rich live in opulence and luxury while the poor asphyxiate in chronic pauperism and broken fortunes. Combating this menace therefore requires more than legislation and awareness.
Hence, elevating awareness without ample enforcement and sanctions (as laws are meaningless without sanctions) will usher in
cynicism among the populace and, subsequently, worsen the situation. On this note, it is germane for any anti-corruption policy to create a balance of awareness, enforcement, and sanctions. The core theme of any anti-corruption framework communicated to the public must reflect that corruption has:
• Denied the populace timely access to government services
• Increased the cost of services
• Imposed a "regressive tax" on the poorest segments of the population
• Restricted economic and democratic development
• Constituted a high-risk/low-profit activity in the new context (e.g., corrupt persons are punishable by jail sentences and fines)
The challenge is how best to communicate this anti-corruption message to the population at large. This is where synergy comes in. The press, as a strong pillar in the fight against corruption, is needed to communicate these messages to the public. Hence, when everyone becomes informed and journalists are well-equipped with the necessary training to report instances of corruption or publish the central theme of an anti-corruption strategy, coupled with a vibrant judiciary, there will be no safe havens for corrupt elements in society.
International Conventions Against Corruption
This section dwells on various global conventions in force to stem the menace of corruption. These conventions set out universal statutory benchmarks and precepts aimed at wrestling with corruption and ensuring domestic action as well as global synergy, as both are needed to successfully take on the various strands and manifestations of corruption. Although these conventions may seem largely identical in appearance, they can differ substantially depending on the signatories and the precise tasks set out in them. Concerning their geographic range, some extend their catchment areas across continents, while others are regional and subregional in coverage.
On the global front, the UN Convention Against Corruption (UNCAC), adopted in 2003, became operational in 2005 with 154 signatories as of December 2011. It was put in place to prohibit corruption and mandates signatories to take a spectrum of measures to combat it. Signatories undertake to cooperate with one another in instances of cross-country corruption and to return pilfered possessions to their countries of origin. It is the most widespread anti-graft treaty with respect to geographical spread and issues addressed. Beyond this, the OECD Anti-Bribery Convention is the maiden and sole global anti-corruption framework aimed specifically at the "supply side" of the bribery deal. The instrument creates lawfully binding criteria to criminalize the inducement of foreign public officials in international business transactions and sets out concomitant measures to be taken, such as painstaking surveillance and far-reaching follow-up mechanisms, to guarantee effective and robust implementation. Correspondingly, the UN Convention against Transnational Organized Crime acknowledges the fight against corruption, coupled with allied acts, as an integral part of the fight against cross-border organized crime. The instrument expects signatories to take required measures to forestall and forbid corruption and to put money laundering in check. On the continental front, the African Union Convention is a holistic framework adopted to criminalize the hydra-nature of corruption, as it requires member states to work together to thwart, spot, punish, and eliminate it, and related doings, in both the public and private domains. The treaty outlines a blueprint for global collaboration and mutual legal aid to fight graft and recover stolen assets, as well as follow-up mechanisms to evaluate the progress made by each signatory.
The Organization of American States' (OAS) Inter-American Convention against Corruption remains the pioneering international anti-graft treaty at the regional level. Adopted in 1996 (it became effective in 1997), with 33 signatories as of 2012, it offers a range of deterrent strategies, provides for the proscription of certain actions of corruption, and sets out a sequence of provisions to support the collaboration between its signatories in aspects of legal aid and technical help,
repatriation and identification, asset recovery, and follow-up mechanisms. In Europe, there is a plethora of instruments targeted at wrestling with corruption. These include the Council of Europe Civil Law Convention (the maiden attempt to define common international rules in the field of civil law and corruption in an international treaty), the Council of Europe Criminal Law Convention, the European Union Convention against Corruption Involving Officials, and the Convention on the Protection of the European Communities' Financial Interests. These are all legal frameworks drafted to combat corruption. In consonance, the instruments help to criminalize acts that stand in discord with institutionalized rules, punish active and passive acts of corruption, encourage international collaboration in the prosecution of corrupt persons, and provide incentives for individuals who have suffered as a result of graft, among others. On the subregional front, the Southern African Development Community (SADC) Protocol exemplifies the maiden subcontinental treaty to fight corruption in Africa. Mimicking the AU convention, the protocol assists the signatories to prevent, detect, punish, and eradicate corruption in the private and public spheres. However, only marginal progress has been recorded in its implementation since the protocol became operational in 2005. In the West African subregion, there is the ECOWAS (Economic Community of West African States) Protocol on the Fight Against Corruption, adopted in 2001 to strengthen the fight against corruption in the region. Despite its laudable provisions aimed at arresting the systemic corruption that permeates the region, the protocol has yet to come into force due to a lack of ratifications since its adoption. In summation, all these protocols and treaties are designed to achieve better governance by safeguarding resources aimed at poverty alleviation, protecting valued assets in the pursuit of development, and fostering global teamwork in fighting the menace of corruption.
Cross-References
▶ Money Laundering
References
Adesina, S. O. (2016). Nigeria and the burden of corruption. Canadian Social Science, 12(12), 12–20.
Agbiboa, D. E. (2013). Between corruption and development: The political economy of state robbery in Nigeria. Journal of Business Ethics, 108(3), 325–345.
Aluko, Y. A. (2009). Corruption in Nigeria: Concepts and dimensions. In D. U. Enweremadu & E. F. Okafor (Eds.), Anti-corruption reforms in Nigeria since 1999: Issues, challenges and the way forward (IFRA Special Research Issue, 3, pp. 1–18). Ibadan: IFRA.
Ampratwum, E. F. (2008). The fight against corruption and its implications for development in developing and transition economies. Journal of Money Laundering Control, 11(1), 76–87.
Amundsen, I. (1999). Political corruption: An introduction to the issues. Bergen: Chr. Michelsen Institute, Development Studies and Human Rights.
Chow, G. C. (2005). Corruption and China's economic reform in the early 21st century (CEPS working paper no. 116). Princeton University. https://www.princeton.edu/ceps/workingpapers/116chow.pdf
Edame, G. (2001). Development, economy and planning in Nigeria. Benin: Harmony Books.
Fjeldstad, O.-H., & Isaksen, J. (2008). Anti-corruption reforms: Challenges, effects and limits of the World Bank support (IEG working paper 2008/7). Washington, DC: The World Bank.
Gregory, R. (2015). Political independence, operational impartiality, and the effectiveness of anti-corruption agencies. Asian Education and Development Studies, 4(1), 125–142.
Heeks, R., & Mathisen, H. (2012). Understanding success and failure of anti-corruption initiatives. Crime, Law and Social Change, 58(5), 533–549. https://link.springer.com/article/10.1007/s10611-011-9361-y
Nye, J. (1967). Corruption and political development: A cost-benefit analysis. American Political Science Review, 3(1), 570.
Odekunle, F. (1986). Nigeria: Corruption in development. Ibadan: Ibadan University Press.
Sorkaa, A. (2002). Development as ethics and accountability in government: The way forward for Nigeria. Inaugural lecture delivered at Benue State University, August 10.
Transparency International. (n.d.). FAQs on corruption. https://www.transparency.org/whoweare/organisation/faqs_on_corruption/9
Transparency International. (1998). http://www.transparency.de/mission.html
Transparency International. (2002). Anti-corruption handbook. Retrieved February 17, 2017, from http://www.transparency.org/policy_research/ach
Transparency International. (2013). http://transparency.org.au/index.php/about-us/mission-statement/
United Nations. (2001). United Nations manual on anti-corruption policy. Vienna: The Global Programme against Corruption, Centre for International Crime Prevention, Office for Drug Control and Crime Prevention, United Nations Office at Vienna.
World Bank. (1998). Special report on corruption in Africa. Washington, DC: World Bank.
Core-Periphery Model
Andrzej Klimczuk1 and Magdalena Klimczuk-Kochańska2
1 Independent Researcher, Bialystok, Poland
2 Faculty of Management, University of Warsaw, Warsaw, Poland

Keywords
Center-periphery model · Centripetal and centrifugal forces · Regional disparities · Regional polarization
Definition and Introduction
Core-periphery imbalances and regional disparities figure prominently on the agendas of several disciplines, owing to their enormous impact on economic and social development around the world. In sociology, international relations, and economics, this concept is crucial in explanations of economic exchange. A few countries play a dominant role in world trade (sometimes described as the "Global North"), while most countries have a secondary or even a tertiary position in world trade (the "Global South"). Moreover, when we discuss global, continental, regional, and national economies, we can identify regions and even smaller territorial units (such as subregions, provinces, districts, or counties) which have higher wages than some underdeveloped areas within the same larger area in focus.
Such regional inequalities and injustices are the main themes of the core-periphery model, which focuses on the tendencies of economic activities to concentrate around some pivotal points. It seeks to explain the spatial inequalities or imbalances observable on all levels or scales by highlighting the role of horizontal and vertical relations between various entities, from the level of towns and cities to the global scale. The existence of a core-periphery structure implies that in the spatial dimension (space and place), socioeconomic development is usually uneven. From such a geographical perspective, the regions known as the "core" are advanced in various areas, while other regions described as the "periphery" serve as social, economic, and political backstages, backyards, and supply sources or – in some cases – are even subject to degradation and decline. Furthermore, the level of development has a negative correlation with distance from the core. The states that have gone through the various stages of development earliest and at the fastest pace have become wealthy core regions and growth poles. Those countries and regions where these processes have been slower become or remain the poor periphery. The critical question raised in discussions of the core-periphery model concerns the results and outcomes of the disproportions and asymmetries in these relationships, as reflected in various indicators of regional development. The terms "center" and "core" are often used as synonyms.
Peripherality is perceived negatively, and peripheral areas are regions that may generate challenges for the core and may even be deemed to require political interventions from time to time (e.g., regions with a predominantly agricultural structure, regions deprived of natural resources, regions located far from the main transport routes, depopulated regions, and regions where large-scale enterprises have been liquidated resulting in mass unemployment and other social problems). The peripheries are associated with distance, difference, and dependence on external aid and the unfavorable phenomenon of marginalization and deprivation. At the same time, however, there are no uniform or
standardized development patterns that could close the development gap of the underdeveloped and developing countries and regions. Thus, there have been numerous attempts to identify the factors contributing to uneven development around the world. There is an intense focus on the conflicting relations between centers and peripheries, often reduced to a simple dualism of dominant centers and weak peripheries. This model is of interest to geographers, scholars of regional studies, town planners, economists, and sociologists, as well as to practitioners and experts in the field of development studies. Various theories and policy papers that will be discussed in subsequent sections of this chapter have tried to explain the spatial determinants of development. We will first describe the origins of the core-periphery model, which may be attributed to Raúl Prebisch (1950). Later we will present the human geography approach in the field of regional studies from John Friedmann (1966). Next, attention will be focused on select elements of world-systems theory as proposed by Immanuel Wallerstein (1974). Finally, at the end of the chapter, we discuss recent contributions of mainstream economics from Paul Robin Krugman (1981, 1991, 1998, 2011).
Raúl Prebisch's Manifesto (1950)
The core-periphery concept was developed in the 1950s by Prebisch within the framework of the United Nations Economic Commission for Latin America (ECLA; in Spanish, Comisión Económica para América Latina y el Caribe – CEPAL). Prebisch had started using the terminology of "core" and "periphery" as early as 1929. In his report for the ECLA titled "The Economic Development of Latin America and its Principal Problems" – often referred to simply as Prebisch's Manifesto – he describes these notions as two broad and contrasting regional categories, that is, the economically developed center and the undeveloped periphery. These terms are connected but also defined by various internal features such as wage
levels, production structures, export composition, and other similar attributes. Prebisch's concept is still often presented in the literature as the foundation of dependency theory. Prebisch found that productivity increases – wherever they occur – tend to help the manufacturing centers more than the agricultural sectors and areas that export primary goods and resources. Prebisch argued that theories and models stemming from the developed world (the center) were not applicable in the non-developed world (the periphery) due to different situations and historical experiences (Prebisch 1950). Importantly, the ideas of Prebisch had a tremendous impact on both economic policy and strands of development thinking all over the world. He highlighted that unequal exchange causes the flow of surplus value from periphery regions to core regions. Prebisch also noted that this issue went unnoticed for a long time, at least in the social sciences, due to the previously used terms and all the other variants of the rich-poor dichotomy.
The Core-Periphery Model of Regional Development by John Friedmann (1966)
The core-periphery model was also of interest to John Friedmann. He further developed this concept in 1966 by underlining the role of spatial distance from the core. His approach is sometimes interpreted in combination with the growth pole theory (focusing on input-output linkages) of François Perroux (1955) as well as with the later works of Albert O. Hirschman (1958) who, among others, described the "trickle-down effect" in the theory of unbalanced development. Moreover, it can be noted that Friedmann's model combines elements of the export-based approach presented by Douglass C. North (1955) and parts of Gunnar Myrdal's (1957) theory of cumulative and circular causation with the "spread effect" (whereby development spreads from the city to the suburbs and all adjoining areas) and the "backwash effect" (whereby the development of the city tends to draw resources and labor away from surrounding areas, which may degrade those places).
Core-Periphery Model
Friedmann’s version of the core-periphery model includes an explanation of why some inner-city areas enjoy considerable prosperity while others show signs of urban deprivation and poverty, even as urban areas in general have an advantage over peripheral rural areas. This model of regional development thus focuses on spatially diversified development. It recognizes the tendency of the most competitive entities to locate their manufacturing and service activities in the most developed regions. Economic centers (cores) dominate peripheral areas not only in the economic sphere but also in the political and cultural fields. The core, which is usually a metropolitan area, contributes to the development of the periphery while, at the same time, subordinating it in the social and economic dimensions. Centers typically have a high potential for innovation (improvement) and growth, which shapes the geographic diffusion of innovations (Rogers 1962, 2003). At the same time, according to Friedmann, peripheral regions experience lagging growth or even stagnation and may rely on growth driven mainly by the core area’s demand for resources. We should also mention a further division of regions proposed by Friedmann (1966), in which core regions and the periphery are divided into “upward transition regions” (advanced or early), “downward transition regions,” and “resource frontier regions.” Upward transition regions are areas whose growth is spread over small centers rather than concentrated at the core. Downward transition regions are characterized by depleted resources, low agricultural productivity, or outdated industry. Resource frontier regions are the newly “colonized” areas that are brought into production networks for the first time. For example, less accessible inner-city areas may experience a backwash effect with limited investment.
The effect is especially visible when the inner city is close to a newly developing central business district, concentrating a major poverty-wealth gap in a relatively tight space. Friedmann’s theory is sometimes described as similar to the “three-sector model” (or “Petty’s Law”) proposed in economics by Allan Fisher, Colin Clark, Jean Fourastié, and Daniel Bell (see
review by Ehrig and Staroske 2009). Friedmann’s version is called a “core-periphery four-stage model of regional development” that covers the following stages: pre-industrial, transitional, industrial, and postindustrial. The pre-industrial stage refers to the primary (agricultural) sector of the economy, which is characterized by economic activities limited to a small area and a small-scale settlement structure with small units. Each aspect of pre-industrial society is relatively isolated, small units stay dispersed, and economic actors such as the population and traders have low mobility. The transitional stage is marked by the increasing concentration of the economy in the core, fostered by capital accumulation and industrial growth. A dominant center appears within an urban system and becomes its growth pole. Trade and mobility increase at this stage, but the labor force’s space of daily existence is still local because the personal mobility of people stays limited. The periphery is at this point wholly subordinated to the center of political and economic dominance. In the industrial stage, manufacturing (the secondary sector) grows, with increasing employment of people who are migrating from rural to urban areas. This change subsequently also results in a shift from human labor to the mechanization and automation of production. The core-periphery model is therefore also used to describe changes in labor markets in the labor economics literature, where it is referred to as “dual labor market theory” and as “insider-outsider theory” (Klimczuk and Klimczuk-Kochańska 2016). In general, both theories assume that labor markets are divided into segments, which are distinguished from each other by separate systems of rules, job requirements, and different skills.
For example, human resource policies include a preference (in the primary segment) for recruiting white male workers to managerial positions by offering training, pay gains, promotion, and job security. At the same time, external labor markets are dominated by women and minorities and offer low-paying and low-status jobs. Furthermore, in the industrial stage, through a process of
economic growth and diffusion, other growth centers appear. The main reason for deconcentration is the increasing production costs related to labor and land in the core area. This diffusion is linked to increased interactions between elements of the urban system and the construction of transport infrastructure. The fourth stage, that is, the postindustrial stage, sees a growing demand for labor in services (the tertiary sector). It is assumed that this stage is characterized by the spatial integration of the economy and the achievement of equilibrium. The urban system becomes fully integrated, and inequalities are reduced significantly. The distribution of economic activities is focused on establishing specializations and a division of labor linked with strong flows along transport corridors. Friedmann believed that the allocation of economic activities should reach an optimum of balance and stability. That does not mean that trade and the mobility of the population should decrease. Insofar as different areas specialize in specific functions, there will be a division of labor between regions. An integrated model foresees a cyclical movement of the population caused mostly by the age factor: the youth studying in big cities, families settling in the suburbs, and older adults searching for competitive and peaceful rural environments. To sum up, according to Friedmann’s model, the development potential of a given region or country is determined by the stimulating effect of regional growth centers, the construction of infrastructure, and the provision of support from central areas to less developed regions. An advantage of the model is that its assumptions are applicable at different spatial scales, from the local and regional through to the national and global.
Core-Periphery Hierarchy in World-Systems Theory by Immanuel Wallerstein (1974)

The concept of the core-periphery model is also part of Wallerstein’s theory, which he proposed in the 1970s to explain the genesis and functioning
of capitalism while also seeking to interpret the phenomenon of globalization. This theory assumes that the world-system is a specific spatial and temporal entity, including various political and cultural units that function based on certain specific principles. An essential element of this theory is the core-periphery hierarchy, whereby discrepancies in interests and inequalities result from the domination of the vibrant center over the weak periphery. In other respects, this theory is quite similar to Prebisch’s and Friedmann’s approaches. In fact, it is often considered identical to Prebisch’s concept. However, in Wallerstein’s theory, center and periphery are inextricably linked together in both material and sociocultural terms. Thus, while dependency theory only suggests that one area is dependent on the other, here neither of the two would function the way it does without the other. Wallerstein shows that the core regions are innovative, play an active role in international trade, export capital, generate high incomes, and have high productivity and stable political systems. The core is the site of the exchange of products between the monopolized and free-market zones and of the flow of profits to the former. Peripheral areas are less innovative, have low incomes and productivity, are dependent on capital imports, play a minor role in international trade, and are politically unstable. Therefore, in this approach, peripheries are dependent on the centers and disadvantaged by unequal terms of trade. Moreover, Wallerstein (1974) distinguishes semi-peripheries, interpreted as a kind of buffer between the center and the periphery. Even if the semi-peripheral countries and regions experience the highest mobility, their prospective promotion to the status of a core region is decided primarily by international or governmental interventions.
Some of the semi-peripheries were previously central areas, while some have advanced from the periphery. In Wallerstein’s opinion, the countries of the periphery and the semi-peripheries that build their comparative advantage on cheap labor stand to lose the investment thus attracted. Labor costs will increase in
time on a global scale due to the depletion of the resources of the rural population. Thomas D. Hall et al. (2011) further extended and modified world-systems theory, e.g., with a view to pre-capitalist societies. The core-periphery differentiation here focuses on diverse sociopolitical groups conducting active exchange. The peripheries thus have a more significant impact on the center than is presented in the original concept of the core-periphery hierarchy. Moreover, semi-peripheries are characterized here as zones of innovation.
The Core-Periphery Model in New Economic Geography by Paul Robin Krugman (1991)

Krugman, a Nobel laureate economist, underlines that it is scandalous that economists ignored the core-periphery model for so many years (Krugman 1998: 13). He uses some categories and terminology especially from Wallerstein (Krugman 1981: 149) and combines the idea of the core-periphery model with some assumptions from classical location theories. The first of these was Johann Heinrich von Thünen’s (1825) model of the dual economy, which discusses the city center and its periphery. Other assumptions come from the works of Alfred Marshall (1879, 1890), who considered the significance of the relations between the development of industrial districts and large local markets. Drawing also on the theory of international trade, Krugman thus developed the model of new economic geography. In Krugman’s theory, the increase in income in the core region comes partly at the expense of the peripheral region. It is also essential that globalization processes lead to disproportions in development between regions and countries and that these disproportions exist because of the progress (deepening) of international integration processes. Standard international exchange models show that market integration can result in losses for a few countries but lead to an increase in the income of most countries involved in the exchange. The central element of the model is the mobility of manufacturing workers observed due to
interregional wage differentials. Moreover, companies tend to locate their production in the largest markets because doing so saves on shipping and other costs that would be incurred when selling at a distance. The size of a market results from the number of residents and their income levels. Thus, the crucial indicators refer to the quantity and quality of available jobs. If a more substantial number of manufacturing enterprises concentrate in one of the regions, this will increase the number of jobs and the availability of the goods produced there. As a result, the income of employees in this region increases, which leads to the migration of other employees to this area. The growing number of employees, and thus consumers, increases the size of the market for goods produced there. Considering transportation costs, the region concerned thus becomes the most favorable location for enterprises. New economic geography also describes two different forces: centripetal and centrifugal (Krugman 1991). The centripetal forces are related to agglomeration; among them are market size, the mobility of workers, and positive external effects. These forces result in a cumulative-circular, divergent, and asymmetric development model in which one region achieves core status while the other becomes periphery. The centrifugal forces are immobile factors, for example, natural resources, competition, and adverse external effects. If either of these forces is dominant, there will be profound interregional differences. Krugman (2011) also considers three factors that can change the relationship between centrifugal and centripetal forces: (1) economies of scale in industrial production, (2) transportation costs, and (3) demand for industrial goods.
With a view to these forces, it is possible to conceptualize the centripetal mechanism of “circular causation,” described by Krugman as a situation in which employees are initially attracted by enterprises, but later those same employees, as consumers, attract new companies to the region. Krugman convincingly argues that concentration processes are stronger than forces conducive to dispersion. This usually leads to polarization or at least to the creation of distinct variations in the level of
socioeconomic development in space. It is worth noting this newer strand of thinking that considers the spatial aspects of socioeconomic development. The theory has extensive influence across various fields of study, such as urban and regional studies, international trade, development studies, and industrial organization.
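Krugman’s circular-causation mechanism can be illustrated with a simple toy computation. The following sketch is a didactic simplification, not Krugman’s formal 1991 model: the two-region setup, the linear wage rule, and all parameter values are illustrative assumptions. Workers migrate toward the region offering higher wages, and wages rise with local market size, so a small initial advantage cumulates into a core-periphery split.

```python
# Toy two-region illustration of Krugman-style "circular causation":
# workers migrate toward the region with the higher wage, and the wage
# premium grows with a region's share of workers (its market size), so
# a small initial advantage compounds over time. All parameters here
# are illustrative assumptions chosen for clarity.

def simulate(share_a=0.55, pull=0.5, steps=200):
    """Return region A's final share of mobile workers.

    share_a: region A's initial share of the mobile workforce
    pull:    migration responsiveness to the wage gap (assumed)
    steps:   number of migration rounds to simulate
    """
    for _ in range(steps):
        # Wage gap is proportional to the difference in market size.
        wage_gap = share_a - (1 - share_a)
        # Migration flow scales with the gap and with how many workers
        # are still available to move in either direction.
        share_a += pull * wage_gap * share_a * (1 - share_a)
        share_a = min(max(share_a, 0.0), 1.0)
    return share_a

# Starting from a slight edge, region A attracts nearly the entire
# mobile workforce (core), while region B hollows out (periphery).
print(simulate(0.55))
```

Starting from any share above one-half, region A absorbs virtually all mobile workers, while any share below one-half hollows it out; the symmetric split at exactly one-half is an unstable equilibrium. This mirrors the argument in the text that concentration forces tend to outweigh dispersion forces.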
Conclusion

The spatial inequalities of socioeconomic development processes result in the emergence of marginalized areas (peripheries), which are mainly rural areas. Peripherality is a complex and multidimensional concept. It has a relative character: the identification and classification of a given area as a peripheral one depend on the adopted criteria and reference point. In general, assessments of peripherality are negative and emphasize traits such as backwardness, dependence, marginalization, and deprivation. States and regions use various mechanisms of public intervention under the slogan of striving for social and territorial cohesion. The effects of these efforts are, however, far from satisfactory. A review of selected theories and concepts of regional development allows us to indicate various causes of peripherality, although many of the theoretical concepts discussed relate to this only indirectly. The phenomenon of cumulative causation results in the simultaneous occurrence of negative phenomena in impoverished and peripheral areas, whose relations and interconnections lead to permanent exclusion and marginalization. It is challenging to escape such a “path dependence,” and it is virtually impossible to accomplish this without outside interference (Magnusson and Ottosson 2009). On the other hand, it should be noted that spatial unevenness is an inevitable feature of socioeconomic development. The spatial diversity of socioeconomic development, especially in international terms, may also lead to the use of the so-called latecomer advantage or leapfrogging, based on the economic benefits resulting from the omission of particular stages of development (Yap and Rasiah 2016). An economically backward entity (e.g., the state) may
avoid unfavorable processes and may focus on copying only tested ready-made solutions, without incurring the costs related to the quest to find these solutions (e.g., in terms of technology and innovation). From a historical perspective, the core-periphery model is related to processes of industrialization and urbanization that deepened the split between core and periphery areas. Regions with attractive geographic and communication locations benefited from industrialization and became core areas that drew in economic entities seeking economies of scale, exerted demand for an increasing amount of labor resources, attracted external capital, and effectively competed for these resources with the periphery. As a result, at the other extreme, peripheral regions were established that have lost the majority of their labor resources and which are not attractive to external capital due to the monofunctional structure of the local economy. Most of the rural areas are in this group, except for those located near large urban centers. Core areas also become clusters of economic activity, sources of innovation, and gatherings of the creators of innovation sometimes described as the “creative class” (Florida 2002, 2017). Excessive costs, especially of introducing technological innovations, are a barrier to their transfer to peripheral regions (Klimczuk and Klimczuk-Kochańska 2015). Costs including financial investments and the training of workers are effectively weakening the effects of diffusion (spread or spillover) of technical progress and knowledge. Insufficient endogenous potential (in terms of human capital and social capital) for absorbing innovation also intensifies adverse economic and social effects in peripheral areas. Further open discussion on the causes of peripherality is needed as well as more awareness of needs and potential positive responses to related social, economic, and political challenges. 
The role of global economic organizations such as the G20, the Organisation for Economic Co-operation and Development (OECD), and the World Economic Forum (WEF) may also merit attention in this respect. Most of the intergovernmental organizations concerned claim that they want to reduce global inequalities and resolve problems of
poverty, even as they are subject to criticism for policies and decisions that in effect serve to preserve the status quo (Held and McGrew 2007).
References Ehrig, D., & Staroske, U. (2009). The gap of services and the three-sector-hypothesis (Petty’s law): Is this concept out of fashion or a tool to enhance welfare? In D. Harrisson, R. Bourque, & G. Széll (Eds.), Social innovation, the social economy and world economic development (pp. 261–278). Frankfurt am Main: Peter Lang. Florida, R. (2002). The rise of the creative class: And how it’s transforming work, leisure, community and everyday life. New York: Basic Books. Florida, R. (2017). The new urban crisis: How our cities are increasing inequality, deepening segregation, and failing the middle class – And what we can do about it. New York: Basic Books. Friedmann, J. (1966). Regional development policy: A case study of Venezuela. Cambridge: MIT Press. Hall, T. D., Kardulias, P. N., & Chase-Dunn, C. (2011). World-systems analysis and archaeology: Continuing the dialogue. Journal of Archaeological Research, 19 (3), 233–279. Held, D., & McGrew, A. (2007). Globalization/anti-globalization: Beyond the great divide. Cambridge: Polity. Hirschman, A. O. (1958). The strategy of economic development. New Haven: Yale University Press. Klimczuk, A., & Klimczuk-Kochańska, M. (2015). Technology transfer. In M. Odekon (Ed.), The SAGE encyclopedia of world poverty (2nd ed., pp. 1529–1531). Los Angeles: SAGE. Klimczuk, A., & Klimczuk-Kochańska, M. (2016). Dual labor market. In N. Naples, R. Hoogland, M. Wickramasinghe, & A. Wong (Eds.), The Wiley-Blackwell encyclopedia of gender and sexuality studies (pp. 1–3). Hoboken: Wiley-Blackwell. Krugman, P. (1981). Intraindustry specialization and the gains from trade. Journal of Political Economy, 89(5), 959–973. Krugman, P. (1991). Increasing returns and economic geography. Journal of Political Economy, 99(3), 483–499. Krugman, P. (1998). What’s new about the new economic geography? Oxford Review of Economic Policy, 14(2), 7–17. Krugman, P. (2011). The new economic geography, now middle-aged. Regional Studies, 45(1), 1–7. 
Magnusson, L., & Ottosson, J. (Eds.). (2009). The evolution of path dependence. Cheltenham/Northampton: Edward Elgar. Marshall, A. (1879). The economics of industry. London: Macmillan. Marshall, A. (1890). Principles of economics. London: Macmillan. Myrdal, G. (1957). Rich lands and poor: The road to world prosperity. New York: Harper.
North, D. C. (1955). Location theory and regional economic growth. Journal of Political Economy, 63, 243–258. Perroux, F. (1955). Matériaux pour une analyse de la croissance économique [Materials for an analysis of economic growth]. Paris: ISEA. Prebisch, R. (1950). The economic development of Latin America and its principal problems. Lake Success: United Nations Department of Economic Affairs. Rogers, E. M. (1962). Diffusion of innovations (1st ed.). New York: Free Press. Rogers, E. M. (2003). Diffusion of innovations (5th ed.). New York: Free Press. von Thünen, J. H. (1825). Der isolirte Staat in Beziehung auf Landwirtschaft und Nationalökonomie [The isolated state in relation to agriculture and economics]. Hamburg: Friedrich Perthes. Wallerstein, I. (1974). The rise and future demise of the world capitalist system: Concepts for comparative analysis. Comparative Studies in Society and History, 16(4), 387–415. Yap, X.-S., & Rasiah, R. (2016). Catching up and leapfrogging: The new latecomers in the integrated circuits industry. Abingdon/New York: Routledge.
Further Reading Baldwin, R., Forslid, R., Martin, P., Ottaviano, G., & Robert-Nicoud, F. (2011). Economic geography and public policy. Princeton: Princeton University Press. Capello, R., & Nijkamp, P. (Eds.). (2010). Handbook of regional growth and development theories. Cheltenham/Northampton: Edward Elgar. Fujita, M., Krugman, P. R., & Venables, A. J. (2001). The spatial economy: Cities, regions, and international trade. Cambridge, MA/London: MIT Press. Geyer, H. S. (Ed.). (2006). Global regionalization: Core peripheral trends. Cheltenham/Northampton: Edward Elgar. Lang, T., Henn, S., Ehrlich, K., & Sgibnev, W. (Eds.). (2015). Understanding geographies of polarization and peripheralization: Perspectives from central and Eastern Europe and beyond. London: Palgrave Macmillan.
Countering Violent Extremism (CVE)

Daniel Koehler
German Institute on Radicalization and De-Radicalization Studies (GIRDS), Stuttgart, Germany

Keywords
Counter-radicalization · Violent extremism · CVE
Introduction

“Countering Violent Extremism” (CVE) is usually understood to be “an approach intended to preclude individuals from engaging in, or materially supporting, ideologically motivated violence” (Williams 2017, p. 153) or simply as “non-coercive attempts to reduce involvement in terrorism” (Harris-Hogan et al. 2015, p. 6). The term CVE is now widely used in international and national counterterrorism strategies and policies, even though it has been criticized as a “catch-all category that lacks precision and focus” (Heydemann 2014, p. 1). The term was used (sporadically) as early as 2005 but became influential in the United States around 2009/2010, advanced by Daniel Benjamin at the Office of the Coordinator for Counterterrorism under the Obama administration. Even though the Bush administration, after the terror attacks of September 11, 2001, had already introduced a “National Implementation Plan” including a pillar to “counter violent Islamic extremism” (CVIE), countering ideas and belief systems, or more specifically a radical or extremist ideology, was not popular in the US counterterrorism sphere until the Obama administration. This strategic shift is seen as having been inspired by the British concept of “preventing violent extremism” (PVE) as part of the “counterterrorism strategy” (CONTEST) and its pillar “PREVENT,” which had been developed since 2003 and was put in place with major support from the British government after the 7/7 London bombings in 2005. PREVENT has since seen a number of revisions, the first in 2006. The main goals of PREVENT were to challenge violent extremist ideology and support “mainstream” voices; disrupt those who promote violent extremism and support the institutions where they are active; support individuals who are being targeted and recruited to the cause of violent extremism; increase the resilience of communities to violent extremism; and address the grievances that ideologues are exploiting.
With these very broad goals of PREVENT, it becomes clear that P/CVE encompasses an equally broad array of activities, methods, programs, and initiatives, making the concept
vulnerable to the criticism of “meaning everything and nothing.” The following discussion aims to clarify the key components, program types, and methods usually employed under the P/CVE umbrella worldwide.
Scope of P/CVE Activities

A common classification used for P/CVE activities is the “public health model” of Caplan (1964), rooted in clinical psychiatry. “Primary” prevention in this model aims to prevent deviant behavior from occurring in a “non-infected” system. This includes activities aimed, for example, at general awareness raising, resilience, or community coherence. Primary prevention addresses societal issues and individuals before violent extremist groups and ideologies are encountered and specific risk factors are formed. “Secondary” prevention aims to avert the solidification of risk factors or a radicalization process in its early stages. “Tertiary” prevention aims to prevent recidivism to violent extremism or to other risk factors in the future, implying that an initial desistance or disengagement has been achieved. Naturally, very different methods and programs fall under these three categories, as working with long-term members of extremist groups to induce defection is a completely different task than teaching children about the risks posed by extremist groups. Another classification concept using a prevention-based terminology was introduced by Gordon Jr. (1983), who, in contrast to Caplan, only looked at a state of noninvolvement. “Universal” prevention in this concept aims to introduce wide-reaching, easy, and cheap measures of preventative care, while “selective” prevention introduces more differentiated methods targeting a group with a higher risk of being attracted to extremist groups. Additionally, “indicated” prevention aims at those individuals with a high risk of future involvement. Both Caplan and Gordon Jr. essentially based their models on the goal of disease control, which makes the translation to P/CVE problematic, as this transfer might imply a pathological nature of radicalization and violent extremism. This can
have a significant negative impact on P/CVE practitioners’ self-understanding and the cognitive opening of the participants in these activities. Furthermore, both classification schemes use a “prevention” terminology, even though activities at least under the “tertiary” pillar include what has also been called deradicalization, disengagement, or intervention programs. As intended by Caplan, every intervention in tertiary prevention essentially aims to prevent recidivism. This mechanism was echoed when such practical activities were seen as programs reducing the risk of terrorist recidivism (Horgan and Altier 2012). An objection to that framework was raised, for example, by Koehler (2016), who argued that preventing recidivism is just one necessary (and later) part of interventions, which must reduce individual physical and psychological commitment to the extremist group and ideology in the first place. Nevertheless, academics and practitioners have widely come to see activities intervening with (highly) radicalized individuals in order to achieve defection and avoid recidivism as tertiary prevention (e.g., Harris-Hogan et al. 2015). However, there is no unity regarding the inclusion of intervention (i.e., deradicalization or disengagement) within the P/CVE framework. The US Department of Homeland Security (DHS) CVE Task Force, for example, defines it as “proactive actions to counter efforts by extremists to recruit, radicalize, and mobilize followers to violence. Fundamentally, CVE actions intend to address the conditions and reduce the factors that most likely contribute to recruitment and radicalization by violent extremists” (https://www.dhs.gov/cve/what-is-cve (accessed March 10, 2018)). Policy frameworks from other countries and international organizations explicitly include the intervention aspect, for example, within the European Union’s counterterrorism strategy (see below).
It would therefore be accurate to see CVE as an umbrella category under which prevention-oriented initiatives (i.e., before a person radicalizes to the point of using violence) and intervention-oriented initiatives (i.e., deradicalization and disengagement of persons who are already radicalized to the point of using violence) are subsumed.
The first is commonly referred to as “counter-radicalization” or “preventing violent extremism” (PVE) programs and the latter as intervention, deradicalization, rehabilitation, or reintegration programs. Naturally, there is no clear distinction between prevention- and intervention-oriented methods and programs in practice, as radicalization processes are not linear but dynamic. Hence, the question of whether a person is yet “radical enough” for deradicalization is mostly impossible, and even futile, to answer, which is why most programs do not differentiate between the different terms and concepts as clearly as the academic discourse might suggest. In reality, prevention- and intervention-oriented tools form a blend of methods aiming to achieve effects on all levels: preventing further radicalization, decreasing physical and psychological commitment to the radical milieu and its thought pattern or ideology, preventing a return to violence and extremism, increasing resilience to extremist ideologies and groups, and assisting in building a new self-sustained life and identity. In consequence, as radicalization is a context-bound phenomenon “par excellence” (Reinares et al. 2008, p. 7), so is countering it.
A Short History of P/CVE, Countering Radicalization, and Deradicalization

P/CVE and activities associated with the concept (e.g., deradicalization) have increasingly become buzzwords among counterterrorism experts and policymakers around the world in recent years. However, the roots of some aspects of P/CVE lie in rehabilitation programs for civil war combatants (Disarmament, Demobilization, Reintegration – DDR), which have been conducted at least since 1989 (Muggah 2005, p. 244). Furthermore, whole terror groups have disavowed violent means on many occasions in the past (e.g., Ashour 2009; El-Said 2012; Ferguson 2010). As a related concept, the relatively young term “deradicalization” began to emerge and enter the international discourse mainly through Middle Eastern countries’ attempts to use theological debates on terrorist
prisoners, aiming to convince them to abandon militant jihadist ideology as part of the “Global War on Terror” initiated by the United States after the September 11 attacks. While state-run programs like those in Yemen (Johnsen 2006) and Saudi Arabia (Boucek 2007; El-Said and Barrett 2012), starting just a few years after 9/11, were pivotal in spreading the deradicalization concept to the general public (e.g., Time Magazine: Ripley 2008) and in sparking further academic interest (e.g., Bjørgo and Horgan 2009; Horgan 2009; Mullins 2010; Noricks 2009), some programs (governmental and nongovernmental) in Europe had already been working extensively on diverting right-wing extremists away from violence and terrorism since the mid-1990s (Bjørgo 1997; Bjørgo and Carlsson 2005). Very early on, though, leading experts found the “lack of conceptual clarity in the emerging discourse on deradicalization striking” (Bjørgo and Horgan 2009, p. 3). It seemed that the term was being applied to a wide array of policies and tools with “virtually no conceptual development in the area” (Horgan 2009, p. 17). With the outbreak of the Syrian Civil War in 2011, the emergence of terrorist semi-states like the so-called caliphate of the terror organization “Islamic State in Iraq and Syria” (ISIS) (Honig and Yahel 2017), and the global increase in “foreign fighter” travel movements to unprecedented levels (Hegghammer 2013), governments around the world have been under pressure to develop and implement various responses to recruitment for groups like ISIS (i.e., PVE) and to the perceived threat of returned and radicalized combatants (i.e., intervention, reintegration). In 2014, the United Nations Security Council (UNSC) adopted Resolution 2178 urging all member states to establish effective rehabilitation measures for returning fighters from Syria and Iraq (UNSC 2014).
In January 2016, the UN Secretary-General presented a "Plan of Action to Prevent Violent Extremism" to the General Assembly with more than 70 specific recommendations, including a call to introduce "disengagement, rehabilitation and counselling programmes for persons engaged in violent extremism" (UNSG 2016, p. 4). Furthermore, in December 2017, the UNSC adopted Resolution
2396, which continued to call for specific counterterrorism measures, including CVE activities such as counter-narrative campaigns and rehabilitation programs (UNSC 2017). Similarly, the revised "European Union Counterterrorism Strategy" places strong emphasis on "disengagement and exit strategies" (EU 2014, p. 11), and in 2016 the European Commission called the implementation of "deradicalization" programs, under the overall goal to prevent and fight radicalization, an "absolute priority" (EC 2016, p. 6). On December 4, 2015, the Organization for Security and Co-operation in Europe (OSCE) released a "Ministerial Declaration on Preventing and Countering Violent Extremism and Radicalization that Lead to Terrorism" with 22 comprehensive recommendations for member states, ranging from measures to counter terrorism financing to the exchange of best practices, including the reintegration and rehabilitation of prison inmates convicted of terrorism (OSCE 2015). The United States released its first CVE strategy report under the Obama administration in August 2011, but it was not until after the Boston Marathon bombing in 2013 that actual funding was provided to CVE projects on the ground. In September 2014 the White House announced the "Three City Pilot" program, identifying Boston, Minneapolis, and Los Angeles as lead cities in the US CVE sphere. Furthermore, the US Department of Homeland Security (DHS) started to host a CVE Task Force including the Federal Bureau of Investigation (FBI), the Department of Justice (DOJ), and the National Counterterrorism Center (NCTC). This CVE Task Force also rolled out the first grant program for civil society organizations in January 2017. Hence, it is fair to say that programs and strategies that could roughly be described as CVE-specific or CVE-related, even though vastly different in nature, have gained global significance in the fight against terrorism, recruitment into violent extremism, and violent radicalization.
However, terminology remains unclear and potentially inhibits development in the field, as Altier et al. (2014, p. 647) found “that existing research
remains devoid of conceptual clarity” with synonymous and inconsistent use of different terms.
International Networks and Organizations in the P/CVE Field

As P/CVE has increasingly become a standard component of counterterrorism policies around the world, a number of think tanks and international organizations focusing on tools and approaches under the CVE umbrella have come into existence. This small selection of some of the most prominent networks gives an impression of how much this field has grown into a vibrant international community of practitioners, policy advisers and makers, academics, law enforcement, and community leaders. Within the European Union, the Radicalization Awareness Network (RAN), officially launched on September 9, 2011, includes nine thematic working groups to foster exchange in CVE-related working fields, such as prison and police, education, communication and narratives, youth, families and communities, and health and social care. RAN also includes a working group on exiting extremist or terrorist milieus. RAN is by far the largest and most relevant network in this area in Europe and is supported with significant resources by the European Commission's Directorate-General Home. At the core of RAN lies the RAN Centre of Excellence, which provides training courses and regular working papers, in addition to policy briefs. Going beyond political boundaries, the Against Violent Extremism (AVE) network, financed by Google Ideas and organized by one of the largest think tanks in the CVE field, the British Institute for Strategic Dialogue (ISD), is a leading private industry-supported hub for former extremists/terrorists, entrepreneurs, policymakers, and academics, which aspires to bring together partners from all related fields for potential project-related collaboration. A third example of CVE networks, in this case focused on policymakers, is the Policy Planners Network (PPN), which is also run by ISD and was established in 2008 as an intergovernmental
network intended to improve policy and practice to counter extremism and radicalization. The PPN includes representatives from 12 governments: the United Kingdom, Canada, France, Germany, the Netherlands, Denmark, Sweden, Belgium, Spain, Finland, Norway, and Australia (state of Victoria). The network also cooperates with the European Commission and the Counterterrorism Coordinator (CTC) at the Council of the EU. A fourth example of CVE networks is active at the regional or local level. The Strong Cities Network (SCN) was launched at the United Nations in September 2015 as a global network of mayors, municipal-level policymakers, and practitioners working to build social cohesion and community resilience in order to counter violent extremism in all its forms. The SCN is also led by ISD. In addition to CVE think tanks (like ISD or the British Quilliam Foundation) and networks, international actors like the Hedayah International Center of Excellence for Countering Violent Extremism, launched in December 2012, must be named. Headquartered in Abu Dhabi and established by the Global Counterterrorism Forum, Hedayah develops training courses, resources, and materials for practitioners and policymakers in the CVE field. For example, it maintains a library of CVE counter-narrative campaigns. Numerous additional hubs, networks, or relevant institutions, some more research-oriented than others, could be named. With a quickly evolving and changing field like CVE, however, it is important to note that these networks and organizations bring together many actors with differing understandings of P/CVE in an attempt to create a field of activity and knowledge that has yet to develop a strong evidence base for good practices.
CVE Methods and Approaches

Based on policy documents, comparative studies, and evaluations of programs, as well as press reporting about CVE, it is possible to identify a core field of CVE methods and activities. The vast majority of CVE activities are situated in the primary prevention sphere (i.e., CVE-related).
Typically, CVE initiatives and policies aim to address and reduce drivers of violent radicalization processes, for example, marginalization, discrimination, social and economic inequalities, or collective and individual grievances. It must be noted that the list of factors addressed in the primary prevention CVE space generally rests on the assumption of a causal relationship between these factors and involvement in extremist and terrorist milieus. Widely employed CVE tools here include education (e.g., about human rights or the threat posed by violent extremist groups), community engagement and resilience building, youth empowerment programs, gender equality, professional skill development, and strategic communication (so-called counter-narrative campaigns). In addition, measures taken to reduce recruitment activities, for example, drying up sources of terrorism financing, reforming legal frameworks, ensuring respect for human rights in prison systems, or ensuring accountability for human rights violations, are also commonly included under the CVE umbrella. In the secondary and tertiary fields of CVE activity, various forms of counselling, mentoring, rehabilitation, reintegration, deradicalization, and disengagement methods are usually employed. These methods and tools aim to facilitate a process of turning from a position of perceived deviance or conflict with the surrounding environment (e.g., support for violent extremism) towards moderation and equilibrium. A key conceptual question is to what extent the intervention focuses on the extremist or radical ideology and worldview (deradicalization) rather than on illegal behavior (disengagement). Horgan and Braddock (2010, p. 152), for example, define disengagement as: the process whereby an individual experiences a change in role or function that is usually associated with a reduction of violent participation.
It may not necessarily involve leaving the movement, but is most frequently associated with significant temporary or permanent role change. Additionally, while disengagement may stem from role change, that role change may be influenced by psychological factors such as disillusionment, burnout or the failure to reach the expectations that influenced initial
involvement. This can lead to a member seeking out a different role within the movement
and deradicalization as: the social and psychological process whereby an individual’s commitment to, and involvement in, violent radicalization is reduced to the extent that they are no longer at risk of involvement and engagement in violent activity. Deradicalization may also refer to any initiative that tries to achieve a reduction of risk of re-offending through addressing the specific and relevant disengagement issues. (Horgan and Braddock 2010, p. 153).
More specifically, Braddock (2014, p. 60) points out that deradicalization is a “psychological process through which an individual abandons his extremist ideology and is theoretically rendered a decreased threat for re-engaging in terrorism.” Even though most tertiary CVE programs are active within the prison and post-release reintegration environment, multiple additional types of programs (e.g., family counselling programs, community-based rehabilitation programs) exist around the world (for an overview see Koehler 2016).
Conclusion and Future Challenges

CVE programs and tools face two major challenges. The first is a lack of scientifically sound evaluations, partially due to the inherently complex task of proving causality between an intervention and a nonevent, but also due to a notoriously widespread lack of transparency and of access to programs for researchers (Horgan and Braddock 2010). Second, the CVE field has seen a strong increase in funding and attention from policymakers around the world. As a result, programs and initiatives have significantly expanded in scope, size, and number, even though the necessary backing by evidence-based theories, concepts, and tools has not developed at a similar pace. This detachment of practical CVE activities from established research has created a risk of building and funding CVE programs that are ineffective at best and counterproductive at worst. Hence, the question of necessary quality standards has been brought up in the debate (Koehler 2017).
As this debate is ongoing, it is clear that CVE in its various forms and approaches has found its place within counterterrorism policies around the world and will likely remain a standard component of comprehensive strategies to combat terrorist and violent extremist recruitment and activities. While a securitization of the overall CVE field must be avoided, certain subdomains, such as prison-based CVE, naturally must be conducted with the risk-reduction perspective of security agencies in mind. On the other hand, community-based CVE programs are essential tools to empower communities and to reduce friction caused by an overreach of law enforcement and intelligence agencies. It must be noted that CVE is still a comparatively young field, and even though it is uniquely diverse in the number of methods and program types it includes, the overall understanding of the various practical and theoretical problems connected to CVE remains mostly rudimentary. However, the increase in academic publications focusing on CVE and the growing interest of policymakers in quality assurance in this field provide reason for cautious optimism regarding the professionalization and sustainability of such activities in the future.
Cross-References

▶ Conflict and Conflict Resolution
▶ Disarmament
▶ Peace and Reconciliation
References

Altier, M. B., Thoroughgood, C. N., & Horgan, J. G. (2014). Turning away from terrorism: Lessons from psychology, sociology, and criminology. Journal of Peace Research, 51(5), 647–661. https://doi.org/10.1177/0022343314535946.
Ashour, O. (2009). The deradicalization of jihadists: Transforming armed Islamist movements. New York/London: Routledge.
Bjørgo, T. (1997). Racist and right-wing violence in Scandinavia: Patterns, perpetrators, and responses. Oslo: Aschehoug.
Bjørgo, T., & Carlsson, Y. (2005). Early intervention with violent and racist youth groups. NUPI Paper No. 677. Norwegian Institute for International Affairs, Oslo. Retrieved from: https://www.files.ethz.ch/isn/27305/677.pdf.
Bjørgo, T., & Horgan, J. (2009). Leaving terrorism behind: Individual and collective disengagement. London/New York: Routledge.
Boucek, C. (2007). Extremist reeducation and rehabilitation in Saudi Arabia. Terrorism Monitor, 5(16), 1–4.
Braddock, K. (2014). The talking cure? Communication and psychological impact in prison de-radicalisation programmes. In A. Silke (Ed.), Prisons, terrorism and extremism: Critical issues in management, radicalisation and reform (pp. 60–74). London: Routledge.
Caplan, G. (1964). Principles of preventive psychiatry. New York: Basic Books.
EC. (2016). Communication from the Commission to the European Parliament, the European Council and the Council delivering on the European Agenda on Security to fight against terrorism and pave the way towards an effective and genuine Security Union (COM(2016) 230 final). European Commission. Retrieved from https://ec.europa.eu/home-affairs/sites/homeaffairs/files/what-we-do/policies/european-agenda-security/legislative-documents/docs/20160420/communication_eas_progress_since_april_2015_en.pdf.
El-Said, H. (2012). Clemency, civil accord and reconciliation: The evolution of Algeria's deradicalisation process. In H. El-Said & J. Harrigan (Eds.), Deradicalising violent extremists: Counter-radicalisation and deradicalisation programmes and their impact in Muslim majority states (pp. 14–49). London: Routledge.
El-Said, H., & Barrett, R. (2012). Saudi Arabia: The master of deradicalization. In H. El-Said & J. Harrigan (Eds.), Deradicalising violent extremists: Counter-radicalisation and deradicalisation programmes and their impact in Muslim majority states (pp. 194–226). London: Routledge.
EU. (2014). Revised EU strategy for combating radicalisation and recruitment to terrorism (5643/5/14). Brussels. Retrieved from http://register.consilium.europa.eu/doc/srv?l=EN&f=ST%209956%202014%20INIT.
Ferguson, N. (2010). Disengaging from terrorism. In A. Silke (Ed.), The psychology of counter-terrorism (pp. 111–123). London: Routledge.
Gordon, R. S., Jr. (1983). An operational classification of disease prevention. Public Health Reports, 98(2), 107–109.
Harris-Hogan, S., Barrelle, K., & Zammit, A. (2015). What is countering violent extremism? Exploring CVE policy and practice in Australia. Behavioral Sciences of Terrorism and Political Aggression, 8(1), 6–24. https://doi.org/10.1080/19434472.2015.1104710.
Hegghammer, T. (2013). Number of foreign fighters from Europe in Syria is historically unprecedented. Who should be worried? The Washington Post – The Monkey Cage, 27.
Heydemann, S. (2014). Countering violent extremism as a field of practice. United States Institute of Peace Insights, 1 (Spring 2014).
Honig, O., & Yahel, I. (2017). A fifth wave of terrorism? The emergence of terrorist semi-states. Terrorism and Political Violence, 1–19. https://doi.org/10.1080/09546553.2017.1330201.
Horgan, J. (2009). Walking away from terrorism: Accounts of disengagement from radical and extremist movements. London/New York: Routledge.
Horgan, J., & Altier, M. B. (2012). The future of terrorist de-radicalization programs. Georgetown Journal of International Affairs, 13, 83–90.
Horgan, J., & Braddock, K. (2010). Rehabilitating the terrorists? Challenges in assessing the effectiveness of de-radicalization programs. Terrorism and Political Violence, 22(2), 267–291. https://doi.org/10.1080/09546551003594748.
Johnsen, G. (2006). Yemen's passive role in the war on terrorism. Terrorism Monitor, 4(4), 7–9.
Koehler, D. (2016). Understanding deradicalization: Methods, tools and programs for countering violent extremism. Oxon/New York: Routledge.
Koehler, D. (2017). How and why we should take deradicalization seriously. Nature Human Behaviour, 1, 0095. https://doi.org/10.1038/s41562-017-0095.
Muggah, R. (2005). No magic bullet: A critical perspective on disarmament, demobilization and reintegration (DDR) and weapons reduction in post-conflict contexts. The Round Table, 94(379), 239–252.
Mullins, S. (2010). Rehabilitation of Islamist terrorists: Lessons from criminology. Dynamics of Asymmetric Conflict, 3(3), 162–193. https://doi.org/10.1080/17467586.2010.528438.
Noricks, D. M. E. (2009). Disengagement and deradicalization: Processes and programs. In P. K. Davis & K. Cragin (Eds.), Social science for counterterrorism: Putting the pieces together (pp. 299–320). Santa Monica: RAND Corporation.
OSCE. (2015). Ministerial declaration on preventing and countering violent extremism and radicalization that lead to terrorism. Belgrade: Organization for Security and Co-operation in Europe. Retrieved from https://www.osce.org/cio/208216?download=true.
Reinares, F., Alonso, R., Bjørgo, T., Della Porta, D., Coolsaet, R., Khosrokhavar, F., . . . De Vries, G. (2008). Radicalisation processes leading to acts of terrorism. Retrieved from http://www.rikcoolsaet.be/files/art_ip_wz/Expert%20Group%20Report%20Violent%20Radicalisation%20FINAL.pdf.
Ripley, A. (2008, March 13). Future revolutions. 4. Reverse radicalism. Time Magazine.
UNSC. (2014). Resolution 2178 (2014) (S/RES/2178 (2014)). United Nations Security Council, New York. Retrieved from: https://www.un.org/sc/ctc/news/document/sres2178-2014-addressing-the-growing-issue-of-foreign-terrorist-fighters/.
UNSC. (2017). Resolution 2396. United Nations Security Council, New York. Retrieved from: https://www.un.org/securitycouncil/content/sres23962017.
UNSG. (2016). United Nations plan of action to prevent violent extremism. New York: United Nations Secretary General. Retrieved from https://www.un.org/counterterrorism/ctitf/sites/www.un.org.counterterrorism.ctitf/files/plan_action.pdf.
Williams, M. J. (2017). Prosocial behavior following immortality priming: Experimental tests of factors with implications for CVE interventions. Behavioral Sciences of Terrorism and Political Aggression, 9(3), 153–190. https://doi.org/10.1080/19434472.2016.1186718.
Further Reading

Koehler, D. (2016). Understanding deradicalization: Methods, tools and programs for countering violent extremism. Oxon/New York: Routledge.
Marsden, S. (2017). Reintegrating extremists: Deradicalisation and desistance. London: Palgrave Macmillan.
Horgan, J. (2009). Walking away from terrorism: Accounts of disengagement from radical and extremist movements. London/New York: Routledge.
Critical Infrastructure

Murat Bayar
Department of Political Science and Public Administration, Social Sciences University of Ankara, Ankara, Turkey

Keywords
Cyber · Terrorism · Sea level rise · Climate change
Introduction

States cooperate on a large number of issues ranging from trade and poverty alleviation to economic security. While the dominant theories in international relations highlight interstate militarized disputes and nuclear proliferation as the primary threats against international security, global terrorism and climate change have increasingly posed new types of threats in the twenty-first century. Critical infrastructures, such as ports, subways, and nuclear plants, which are essential for the well-being and livelihoods of societies, are affected by both of these issues. Accordingly,
there is a growing international effort (e.g., the European Union’s conceptualization of European critical infrastructures) to address the role of critical infrastructures in sustaining physical and economic security, the human-caused and natural threats against them, and the mechanisms through which states can adapt to, if not mitigate, these new threats. In general, infrastructure refers to underlying physical and organizational facilities and systems that are needed for the operation and management of a society. Infrastructures exist in virtually all sectors, including commerce, manufacturing, energy, finance, transport, water, communications, food, health care, education, and national defense. While all infrastructures are needed for a society to function, critical infrastructures (CIs) are “physical and information technology facilities, networks, services and assets which, if disrupted or destroyed, would have a serious impact on the health, safety, security or economic well-being of citizens or the effective functioning of governments in the Member States” (European Commission 2004, p. 3). In this regard, certain facilities and systems are categorized as critical by respective authorities due to their vital roles in serving the society.
Operationalization of Critical Infrastructures

The criticality of an infrastructure can be determined by its proportions (e.g., number of services, size of population served), time (e.g., duration of outage), and the quality of the service delivered (e.g., food and water quality) (Fekete 2011). For instance, the Switzerland Federal Office for Civil Protection (2015) has listed the oil and power supply subsectors (under the energy sector), banks (financial services), information technologies and telecommunications, the water supply, and rail and road traffic (transport) as being of very high criticality. It is important to note that the level of criticality depends on a country's economic structure: banking is vital and shipping negligible for Switzerland, while both may occupy different positions in another country.
Yet, the European Commission (EC) (2013) has argued against the sectoral approach, in which each sector is handled individually with regard to risk assessment methodologies and risk ranking. In contrast, the EC has asked for a systems approach in which interconnectedness and interdependencies with other sectors are taken into account. The 2013 document focused on four CIs that were selected on the basis of their cross-border nature, diversity, and stakeholder cooperation: Eurocontrol (the EU's air traffic management network), Galileo (the EU's global navigation satellite system), the electricity transmission grid, and the European gas transmission network. Since, by definition, CIs are essential for the operation of a society, their protection against potential threats is particularly important for authorities. The 9/11 attacks targeted, among others, the financial infrastructure (World Trade Center) in New York, NY, whereas the 2004 Madrid and 2005 London terrorist attacks hit transportation infrastructures. Furthermore, sea level rise, a consequence of climate change, threatens coastal cities all around the world. Accordingly, these growing threats have urged countries at risk to develop new measurements and solutions for CI protection. The literature operationalizes CI protection in the context of threat, risk, and vulnerability. Threat can be defined as a natural or human-caused event or actor that has the potential to inflict damage on infrastructure (US Department of Homeland Security 2013), whereas risk is a function of threat, likelihood, and consequences (Kaplan and Garrick 1981). Finally, vulnerability is the weakness of a system in coping with disruptive events (McEntire 2005). While threat and vulnerability are expressed in the range of 0–1, consequences can be measured in terms of human casualties and injuries, monetary value, and the loss of public support (Lewis et al. 2011).
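The quantitative framing above (threat and vulnerability as values in 0–1, consequences in measurable units) can be illustrated with a small sketch. The multiplicative combination used below (risk as the product of threat, vulnerability, and consequence) is one common operationalization assumed here for illustration, not a formula taken verbatim from the cited sources; the asset names and numbers are invented.

```python
# Illustrative risk scoring for critical infrastructure assets.
# Assumption: risk = threat * vulnerability * consequence, a common
# multiplicative operationalization; all values below are invented examples.

def risk_score(threat: float, vulnerability: float, consequence: float) -> float:
    """Threat and vulnerability are expressed in [0, 1];
    consequence is measured here in monetary terms (USD)."""
    if not (0.0 <= threat <= 1.0 and 0.0 <= vulnerability <= 1.0):
        raise ValueError("threat and vulnerability must lie in [0, 1]")
    return threat * vulnerability * consequence

# Hypothetical assets: (name, threat, vulnerability, consequence in USD).
assets = [
    ("power grid substation", 0.10, 0.60, 500_000_000),
    ("port container terminal", 0.05, 0.40, 200_000_000),
    ("municipal water supply", 0.02, 0.30, 800_000_000),
]

# Rank assets by expected loss to prioritize protection spending.
ranked = sorted(assets, key=lambda a: risk_score(*a[1:]), reverse=True)
for name, t, v, c in ranked:
    print(f"{name}: expected loss USD {risk_score(t, v, c):,.0f}")
```

A ranking like this makes the criticality trade-offs discussed above concrete: an asset with a moderate threat level can still dominate the list if its consequences are large enough.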
Global and Domestic Threats to Critical Infrastructures

In its Strategic National Risk Assessment, the US Department of Homeland Security (DHS) (2011)
developed a comprehensive list of threats categorized as natural, technological/accidental, and human-made. Natural threats range from earthquakes and floods to human pandemic outbreaks, whereas technological/accidental threats include biological food contamination and chemical substance release. Finally, examples of human-caused threats include cyberattacks against data and terrorist attacks using explosives. The DHS also specified national thresholds for these threats based on existing data and expert opinions. For instance, a human pandemic outbreak has a critical threshold at a 25% infection rate across the population, whereas a cyberattack has a critical threshold at a data compromise resulting in losses of at least USD 1 billion. Yet, these measurements do not account for the psychological consequences of events for individuals, institutions, and society. The DHS (2013) has proposed a four-step approach to conducting Threat and Hazard Identification and Risk Assessment (THIRA): identify threats, put threats in context (i.e., describe them and how they relate to the community), set capability targets (e.g., evacuate the area within 2 h, reopen highways within 10 h), and evaluate the required human and material resources. The likelihood of events is to be determined by a combination of the historical track record of occurrence, expert opinion, and intelligence data, where available. The DHS has suggested that communities should only consider threats that are plausible and of high potential magnitude, with impact on the affected geographical area, its population (through casualties, injuries, and illnesses), and the economic value of the infrastructure and supply chains affected.
This approach largely overlaps with the assessment of the Germany Federal Ministry of the Interior (2009), which has recommended that companies evaluate the success of their CI protection policies with the "SMART" criteria: these should be specific, measurable (and monitored), accepted, realistic, and time-bound (signaling timeframes and deadlines). Employing the event tree method, McGill and Ayyub (2007) have distinguished between protection vulnerabilities and response vulnerabilities. The former concept corresponds to all elements of
vulnerability that take place between the initiation of a threat event and the infliction of damage on the target. It is expressed as the joint probability of adversary success, exposure of the target to damage (given adversary success), and damage (given the exposure of the target). Security systems cope with the threat event at this stage through detection, engagement, and neutralization. The second type involves elements of vulnerability that are manifested during emergency response and recovery actions. It is expressed as the aggregated and joint probabilities of the potential (if unmitigated) loss (given the damage) and the actual loss (given the effectiveness of response and recovery capabilities). It should be noted that response vulnerabilities partly overlap with the concept of resilience. Noy (2009) has found that natural disasters, including those that damaged infrastructure, have a much worse effect on the output levels of developing countries and smaller economies than on those of developed countries and bigger economies. This finding can be puzzling at first, since interdependencies and cascading effects are expected to be higher in the latter group. However, Noy has noted that so are the literacy rate, institutional quality, and preparedness levels against disasters. While natural disasters and ensuing infrastructure breakdowns have implications for macroeconomic indicators, socioeconomic and political factors can also lead to the disruption or damaging of CIs. Flynn (2014) has stated that the main sources of risk to CIs include increasing demand that exceeds CI capacity, underinvestment in maintenance, adverse climate change effects, interdependencies due to globalization, and a lack of political coordination at the local and national levels. He has also indicated that acceptable resilience levels can be determined by a combination of CI performance and the time elapsed between disruption and recovery.
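The event-tree decomposition of protection vulnerability described above can be sketched numerically: the probability that an initiated threat event inflicts damage is the product of the conditional probabilities along the chain (adversary success, exposure given success, damage given exposure). The chain structure follows McGill and Ayyub's description; the specific numbers, layer model, and function names below are invented for illustration.

```python
# Sketch of the event-tree view of protection vulnerability: the probability
# that an initiated threat event damages the target is the product of the
# conditional probabilities along the chain. Numbers are invented examples.

def protection_vulnerability(p_adversary_success: float,
                             p_exposure_given_success: float,
                             p_damage_given_exposure: float) -> float:
    """Joint probability that an initiated threat event damages the target."""
    return (p_adversary_success
            * p_exposure_given_success
            * p_damage_given_exposure)

# A layered security system reduces adversary success through detection,
# engagement, and neutralization; here each layer is assumed to independently
# stop the adversary with the given (hypothetical) probability.
def p_success_through_layers(p_stop_per_layer: list[float]) -> float:
    p_success = 1.0
    for p_stop in p_stop_per_layer:
        p_success *= (1.0 - p_stop)
    return p_success

# Hypothetical stop probabilities for detection, engagement, neutralization.
p_success = p_success_through_layers([0.5, 0.3, 0.2])
v = protection_vulnerability(p_success, 0.8, 0.9)
print(f"P(adversary success) = {p_success:.3f}")
print(f"Protection vulnerability = {v:.3f}")
```

The independence assumption between layers is a simplification; an actual event-tree analysis would enumerate branches and condition each probability on the preceding outcomes.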
As a major regional/international effort to cope with these threats, the European Commission’s Green Paper (2005) outlined the European Programme for Critical Infrastructure Protection (EPCIP) and the Critical Infrastructure Warning Information Network (CIWIN). While the former
program aims to coordinate CI protection without compromising the competitiveness of European industries, the latter network seeks to facilitate communication and the flow of information at the European Union (EU) level. The Green Paper also defined European critical infrastructures (ECIs) as CIs that, if disrupted or destroyed, would have a substantial impact on more than one member state. In this regard, ECIs are distinguished from national CIs, which primarily concern individual home states (European Commission 2005). The EU's initiative to address CI protection at the regional scale also signifies a step forward from the national approach of the US Department of Homeland Security.
Conclusion

Overall, the protection of CIs involves the identification and mitigation of, and adaptation to, preexisting and emerging natural and human-made threats, such as, inter alia, sea level rise, cyber intrusions, and chemical terrorist attacks. For example, as many as 24 out of 28 EU member countries (as of April 2018), as well as 3 out of 6 formal candidates for membership in the EU, are coastal countries vulnerable to sea level rise. Efforts to tackle the implications of this necessitate international cooperation in order to minimize risks and increase the resilience of critical infrastructures, upon which global trade and security are built.
References

European Commission. (2004). Communication from the Commission to the Council and the European Parliament: Critical infrastructure protection in the fight against terrorism. Brussels, 20.10.2004, COM(2004) 702 final. Retrieved February 11, 2018 from http://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:52004DC0702&from=EN
European Commission. (2005). Green paper on a European Programme for Critical Infrastructure Protection. Brussels, 17.11.2005, COM(2005) 576 final. Retrieved February 11, 2018 from http://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:52005DC0576&from=EN
European Commission. (2013). Commission staff working document on a new approach to the European Programme for Critical Infrastructure Protection making European Critical Infrastructures more secure. Brussels, 28.8.2013, SWD (2013) 318 final. Retrieved February 11, 2018 from http://ec.europa.eu/energy/sites/ener/files/documents/20130828_epcip_commission_staff_working_document.pdf
Fekete, A. (2011). Common criteria for the assessment of critical infrastructures. International Journal of Disaster Risk Science, 2(1), 15–24.
Flynn, S. E. (2014). Understanding the challenges & opportunities of the resilience imperative. Keynote speech at Transforming the resilience for cognitive, cyber-physical systems conference, Denver.
Germany Federal Ministry of the Interior. (2009). Protection of critical infrastructures – Baseline protection concept: Recommendation for companies. Retrieved February 11, 2018 from http://www.bbk.bund.de/SharedDocs/Downloads/BBK/DE/Publikationen/PublikationenKritis/Basisschutzkonzept_engl.pdf?__blob=publicationFile
Kaplan, S., & Garrick, B. J. (1981). On the quantitative definition of risk. Risk Analysis, 1(1), 11–27.
Lewis, T., Mackin, T. J., & Darken, R. (2011). Critical infrastructure as complex emergent systems. International Journal of Cyber Warfare and Terrorism, 1, 1–12.
McEntire, D. A. (2005). Why vulnerability matters: Exploring the merit of an inclusive disaster reduction concept. Disaster Prevention and Management, 14(2), 206–222.
McGill, W. L., & Ayyub, B. M. (2007). The meaning of vulnerability in the context of critical infrastructure protection. In Critical infrastructure protection: Elements of risk (pp. 25–48). Fairfax, VA: George Mason University, School of Law. https://cip.gmu.edu/wp-content/uploads/2016/06/ElementsofRiskMonograph.pdf
Noy, I. (2009). The macroeconomic consequences of disasters. Journal of Development Economics, 88, 221–231.
Switzerland Federal Office for Civil Protection. (2015). The Swiss Programme on critical infrastructure protection. Retrieved February 11, 2018 from https://www.shareweb.ch/site/Disaster-Resilience/resilience-and-related-topics/Documents/FOCP_Critical-Infrastructure-Protection.pdf
U.S. Department of Homeland Security. (2011). Strategic National Risk Assessment. Retrieved February 11, 2018 from http://www.dhs.gov/xlibrary/assets/rma-strategicnational-riskassessment-ppd8.pdf
U.S. Department of Homeland Security. (2013). Threat and Hazard Identification and Risk Assessment Guide – Comprehensive Preparedness Guide (CPG) 201 (2nd ed.). Retrieved February 11, 2018 from http://www.fema.gov/media-library-data/8ca0a9e54dc8b037a55b402b2a269e94/CPG201_htirag_2nd_edition.pdf
Further Reading French, G. S. (2007). Intelligence analysis for strategic risk assessments. In Critical infrastructure protection:
C
256 Elements of risk (pp. 12–24). Fairfax, VA: George Mason University, School of Law. https://cip.gmu. edu/wp-content/uploads/2016/06/ElementsofRisk Monograph.pdf O’Rourke, T. D. (2007). Critical infrastructure, interdependencies, and resilience. The Bridge, 37(1), 22–29. Weichselgartner, J. (2001). Disaster mitigation: The concept of vulnerability revisited. Disaster Prevention and Management, 10(2), 85–94.
Critical Security Studies Wendell C. Wallace1 and Scott N. Romaniuk2 1 Department of Behavioural Sciences, Centre for Criminology and Criminal Justice, The University of the West Indies, St. Augustine, Trinidad and Tobago 2 International Centre for Policing and Security, University of South Wales, Pontypridd, UK Keywords
(Non-)traditional security · Aberystwyth School · Copenhagen School · Institutions · International relations (IR) theory · Security theory
Introduction Security is a central concept and a necessary condition for the continued existence of humankind. Security lies at the fulcrum of the creation of numerous regional and international institutions, such as the Caribbean Community (CARICOM), the United Nations (UN), the North Atlantic Treaty Organization (NATO), and the European Union (EU). Security has always been given prominence in the academic disciplines of International Relations and Politics; however, beginning with the seminal work of Buzan (1983), the concept of security began attracting a multiplicity of authors and researchers from disparate fields and different parts of the global community, in search of different, enhanced, and/or contemporary models of security beyond the traditional concepts that had prevailed before the end of the Cold War. Robinson (2010) points out that the work of Buzan et al. (1998) led to a diversification and a
broadening of the concept and foci of security to include political, economic, societal, and ecological elements that were notably absent prior to the end of the Cold War. Historically, security studies has long been recognized as a subdiscipline of International Relations (IR); it is also entrenched within a plethora of social science disciplines, though none more preeminent than IR. Like many other disciplines, "security studies" went critical, undergoing a change in its ontology and epistemology (McCormack 2009). For influential proponents of "critical security studies" (CSS), such as Keith Krause, Michael C. Williams, and Ken Booth, among many other widely studied scholars, to be "critical" meant positioning oneself against the established Realist and Positivist traditions associated with traditional security, that is, the conventional study of security with its primary focus on the state as the referent object and on military forces. It is in this regard that, in the twenty-first century, the term "Critical Security Studies" has come to occupy a prominent place within the lexicon of IR and security studies (Browning and McDonald 2011). This entry analyzes the CSS approach with a focus on: (1) the "Copenhagen School," which calls for a broadening of the concept of "security" and highlights the process of "securitization" of political issues, and (2) the Aberystwyth or "Welsh School," which draws on Marxism and Critical Theory with the aim of creating a self-conscious approach that is emancipatory in nature. Instructively, it should be noted that CSS is increasingly employed as a means of analyzing contemporary relations between nation-states as well as real events and occurrences between nations.
What Is Critical Security Studies? According to Robinson (2010), the genesis of CSS appears to be the ending of the Cold War, which ushered in an era of new(er) scholars, authors, and academicians who argued that there was a lacuna in the intellectual field of IR, with specific reference to security studies. Many of
these new(er), more "radical" scholars, authors, and academicians were disillusioned by the politics of the Cold War and, in line with the historical antecedents of academic contestation, challenged the existing assumptions of security in order to break the concept free of its chains and narrow corridors. These challenges were based on critical interpretations of preexisting notions of state-centricity and of conventional claims that state sovereignty was equal to security (see Robinson 2010). These new(er) scholars, authors, and academicians emphasized the growing nature of globalization, inter-connectivity, and interdependence, the danger of weapons proliferation, and the diversification of threats to people's daily lives as reasons for a definition of security that focused less on the state and military power and more on economic, social, political, and environmental issues (see Robinson 2010). CSS was therefore a contemporary signpost that demonstrated a "critical turn" away from the established, confined security studies tradition and a new "critical" positioning within security studies. Instructively, there is no one universally accepted definition of CSS (Peoples and Vaughan-Williams 2015). Krause and Williams (1997: xiii) argue against attempts to place a precise theoretical label on CSS, as they opined that "our appending of the term critical to security studies is meant to imply more an orientation towards the discipline [of security studies] than a precise theoretical label." However, according to Fotion et al.
(2007: 1) “[y]ou cannot discuss any subject until you make it clear what it is you are talking about.” With this in mind, it is important to proffer conceptualizations regarding the term “critical security studies.” In his discourse on CSS, Williams (2005) pointed out that CSS provides a deeper (recognition that security is derived from societal assumptions about the nature of politics), broader (recognition that security extends beyond the threat and use of military force), and more focused (emancipatory) approach to the understanding of security, when compared to traditional security (see Newman 2010 for support on the broadening and deepening of traditional security studies to CSS). For some, CSS is not a coherent, unified
body of scholarship; rather, it is a disparate body of scholarship, including, but not limited to, feminism, critical theory, postcolonialism, constructivism, and critical geopolitics, all of which share a similar opposition to conventional security studies (Dexter 2013). The term "critical security studies" has been conceptualized in both narrow (Booth 2005) and broad (Krause and Williams 1997) terms. For Booth (2005), CSS refers to a theoretical commitment, a critical and permanent exploration of the ontology, epistemology, and praxis of security with the aim of enhancing security through emancipatory politics, while for Buzan et al. (1998), CSS is concerned with individuals and the deconstruction of previous security constructs. Bilgin (1999) attempts to operationalize CSS and points out that CSS favors an explicitly normative security agenda based on human emancipation, in opposition to Cold War security agendas which, under the guise of objective theories, saw the security of states as being of paramount importance. Bilgin (1999) and others questioned the statism of Cold War security and saw CSS as a radical, more contemporary way to reconceptualize security practices by encouraging changes in world politics whereby the ideas and experiences of the poor, the disadvantaged, the voiceless, the unrepresented, and the powerless were represented. Further, CSS aimed to debunk existing security practices that were statist and militaristic in character, while at the same time conceiving emancipatory alternatives. In sum, CSS is an academic field within security studies that rejects mainstream state-centric, sovereignty-based, militarized security concepts and proposes divergent views based on nonstatist and nonmilitarized paradigms.
History of Critical Security Studies The early history of security was dominated by what Peoples and Vaughan-Williams (2015) refer to as traditional security studies. Traditional security studies refers to realist, liberal, peace studies, and (military and) strategic studies perspectives on the study of security that were state-centric (Diskaya 2013; Peoples and Vaughan-Williams 2015). These academics have been described as traditionalists (traditional security scholars) by Diskaya (2013) and were generally mainstream IR theorists from the USA whose notions of security were guided by Cold War viewpoints of threat, defense, protection, and security. However, with the cessation of the Cold War, security experts were from that point forward required to rethink traditional concepts of security, and this facilitated the deepening and broadening of traditional conceptualizations of security so as to include emergent threats that fell outside strict war-based and state-centric definitions, understandings, characteristics, and values (Newman 2010). The traditional notions of security were therefore displaced by newer notions that were deemed critical, and it is argued that security studies took a "critical turn," moving away from a state-centric and military-focused approach to security. This critical turn away from conventional security studies was facilitated mainly by British and partly continental European security theorists who saw themselves as CSS experts. This newer notion was grounded in a variety of emerging schools of thought and moved toward a more expansive conception of security encompassing economic, social, political, and environmental issues. These newer notions of CSS challenged the prior state-centric orthodoxy of conventional international security stances that were based upon the military defense of territory against "external" threats (Diskaya 2013). CSS also challenged Neorealist scholarship (Robinson 2010) and has been involved in the broadening and deepening of the security agenda (Paris 2001; Newman 2010). According to Robinson (2010), by the beginning of the 1990s, a growing number of researchers had begun embracing CSS.
These critical analyses of security were conducted in the context of the transformation of "the bipolar international system that existed from 1945 to 1989" (The Times 2016). As momentum grew exponentially and the field of CSS began expanding, it attracted increasing numbers of academicians, scholars, and interested persons who began to see traditional
security studies as anachronistic or paradoxical in nature. The name “critical security studies” was then developed and adopted for the field by the participants at a small conference: “Strategies in Conflict: Critical Approaches to Security Studies” at York University, Toronto, in May 1994 (Mutimer 2007).
Major Schools of Critical Security Studies There are two major schools of thought regarding CSS (as presented below): (1) the Copenhagen School and (2) the Aberystwyth or "Welsh School." These two frameworks are the archetypal examples of CSS: 1. The Copenhagen School – security is about survival. Copenhagen School theorists argue that, within IR, something becomes a security issue when it is presented as posing an existential threat to some referent object, a threat that must be dealt with immediately and by extraordinary measures; this object, however, is not necessarily the state. Though the Copenhagen School shared the traditional military understanding of security with traditional security scholars, its conceptual apparatus differed in combining neorealist and social constructivist concepts, setting it apart from the traditionalists (Diskaya 2013). In sum, the "Copenhagen School" calls for a broadening of the concept of security, taking it beyond traditionalist paradigms through an Adlerian constructivist perspective, and highlights the process of securitization of political issues as its main theoretical contribution (Robinson 2010). 2. The Aberystwyth (Welsh) School – according to Diskaya (2013), the Aberystwyth School of security studies, or CSS, works within the tradition of critical theory, which has its roots in Marxism. Diskaya (2013) points out that CSS is based on the pioneering work of Ken Booth and Richard Wyn Jones, who were heavily influenced by Gramscian critical theory, the Frankfurt School's critical social theory, and radical international relations theory. CSS is
diametrically opposed to traditional security studies and critiques the approach for its state-centric nature. Instructively, Booth (1991) and Jones (1999) not only criticized traditional security approaches but also made human emancipation the focus of the new CSS. In sum, the Aberystwyth (Welsh) School draws on Marxism and Critical Theory to create a self-consciously activist approach that emphasizes emancipation.
Conclusion The study of security has evolved beyond its traditional moorings of states and military power to include resource scarcity, environmental degradation, and population displacement. CSS is now viewed as the contemporary offshoot of security studies, as it has broadened and deepened the nature and scope of the previously limited concept of security studies. Though CSS encompasses many schools of thought, opponents and proponents, and conceptualizations, Krause and Williams (1997), in their seminal work Critical Security Studies: Concepts and Cases, proposed a broad and flexible understanding of the term, one which still appears to hold currency.
Cross-References ▶ Actors and Stakeholders in Non-traditional Security ▶ Ethics of Security ▶ Global Security ▶ Global Threats ▶ Globalization and Security ▶ Human Security ▶ Military-Focused Security ▶ Ontological Security ▶ Post-colonialism and Security ▶ Regional Security Organizations ▶ Security and Citizenship ▶ Security Deficit ▶ Security Discourse ▶ Security State ▶ Supranational Actors
References Bilgin, P. (1999). Security studies: Theory/practice. Cambridge Review of International Affairs, 12(2), 31–42. Booth, K. (1991). Security and emancipation. Review of International Studies, 17, 313–326. Booth, K. (2005). Critical security studies and world politics. Boulder, CO: Lynne Rienner. Browning, C., & McDonald, M. (2011). The future of critical security studies: Ethics and the politics of security. European Journal of International Relations, 19(2), 235–255. Buzan, B. (1983). People, states, and fear: The national security problem in international relations. Chapel Hill: University of North Carolina Press. Buzan, B., Wæver, O., & de Wilde, J. (1998). Security: A new framework for analysis. Boulder, CO: Lynne Rienner. Dexter, H. (2013). PL7505 brief intro to critical theory and critical security studies. https://www.slideshare.net/HelenDexter/pl7505-brief-intro-to-critical-theory-and-critical-security-studies Diskaya, A. (2013). Towards a critical securitization theory: The Copenhagen and Aberystwyth schools of security studies. E-International Relations. http://www.e-ir.info/2013/02/01/towards-a-critical-securitization-theory-the-copenhagen-and-aberystwyth-schools-of-security-studies/#_ftn6 Fotion, N., Kashnikov, B., & Lekea, K. J. (2007). Terrorism: The new world disorder. London: Continuum. Jones, R. W. (1999). Security, strategy, and critical theory. Boulder, CO: Lynne Rienner. Krause, K., & Williams, M. C. (1997). Critical security studies: Concepts and cases. Minneapolis: University of Minnesota Press. McCormack, T. (2009). Critical security studies: Are they really critical? Arena Journal, 32, 139–151. Mutimer, D. (2007). Critical security studies: A schismatic history. In A. Collins (Ed.), Contemporary security studies (pp. 53–74). Oxford: Oxford University Press. Newman, E. (2010). Critical human security studies. Review of International Studies, 36, 77–94. Paris, R. (2001). Human security: Paradigm shift or hot air? International Security, 26(2), 87–102.
Peoples, C., & Vaughan-Williams, N. (2015). Critical security studies: An introduction (2nd ed.). Abingdon: Routledge. Robinson, D. (2010). Critical security studies and the deconstruction of realist hegemony. Journal of Alternative Perspectives in the Social Sciences, 2(2), 846–853. The Times. (2016, February 9). Global discord. https://www.thetimes.co.uk Williams, P. (2005). Critical security studies. In A. J. Bellamy (Ed.), International society and its critics (1st ed., pp. 135–150). Oxford: Oxford University Press.
Further Reading Bellamy, A. J. (Ed.). (2005). International society and its critics (1st ed.). Oxford: Oxford University Press.
Collins, A. (2015). Contemporary security studies (4th ed.). Oxford: Oxford University Press. Kaliber, A. (2016). Critical security studies: An introduction. Global Affairs, 1(4–5), 486–487. Kaltofen, C. (2013). Engaging Adorno: Critical security studies after emancipation. Security Dialogue, 44(1), 37–51. Smith, S. (2005). The contested concept of security. In K. Booth (Ed.), Critical security studies and world politics (pp. 27–62). Boulder, CO: Lynne Rienner.
Cyber Diplomacy Mihai Sebastian Chihaia1 and Jan Rempala2 1 Department of Political Science, International Relations and European Studies, Alexandru Ioan Cuza University of Iasi, Iasi, Romania 2 Technology Practice, FleishmanHillard, Brussels, Belgium Keywords
Cyber diplomacy · Cyberspace · Diplomacy · Cybersecurity
Introduction There are two broad ways of looking at cyber diplomacy: one from a public diplomacy standpoint, the other from a geopolitical perspective. It is cyber diplomacy in the latter sense, however, that should be considered the more precise version of the term. The geopolitical perspective of cyber diplomacy is concerned with the creation of a "diplomacy of cyberspace" and the pursuit of a state's national cyber-interests at the bilateral and multilateral levels. The public diplomacy aspect captures only one facet of cyber diplomacy, which is more commonly referred to as digital diplomacy. Digital diplomacy, broadly speaking, utilizes the Internet as a tool to further public diplomacy goals and outreach. That these two ways of looking at cyber diplomacy developed concurrently has inevitably led to conflation in the literature and among practitioners.
Cyber Diplomacy: Definition and Theory In 2017, Andre Barrinha and Thomas Renard published the article "Cyber-diplomacy: The making of an international society in the digital age," the first holistic examination and peer-reviewed journal publication in the West to distinguish cyber diplomacy from e-diplomacy/digital diplomacy. Barrinha and Renard propose that cyber diplomacy can be seen as the pursuit of a state's national interest in the realm of cyberspace. Or, to quote: "Cyber-diplomacy can be defined as diplomacy in the cyber domain or, in other words, the use of diplomatic resources and the performance of diplomatic functions to secure national interests with regard to the cyberspace." They argue this logic can usually be seen in national strategies regarding cyberspace or cybersecurity. Common themes around which cyber diplomacy revolves are Internet freedom, Internet governance, cybercrime, and cybersecurity (Barrinha and Renard 2017, p. 355). Attempts to define cyber diplomacy are usually interlinked with explaining what "diplomacy" means, and Barrinha and Renard propose to look at the "diplomacy of cyberspace." Their definition is couched in the English School's concept of diplomacy, understood as the "attempt to adjust conflicting interests by negotiations and compromise," which is central to international relations. From this English School perspective, Barrinha and Renard position their definition of cyber diplomacy as the result of an ever-evolving process in international politics and society.
The Conflation of Cyber Diplomacy Cyber diplomacy is not public diplomacy meets the Internet. In the run-up to 2016, when Shaun Riordan published his piece on the issue (Riordan 2016), the debate surrounding diplomacy in the digital sphere was strewn with terms used synonymously and interchangeably. Terms like e-diplomacy, digital diplomacy, mass
diplomacy, and diplomacy 2.0 were used alongside cyber diplomacy to refer to the same things. Much of the confusion and conflation can be attributed to experts pinning the Internet's application to preexisting tools of diplomacy, notably as a revolutionary way of conducting public diplomacy. For many experts and practitioners, cyber diplomacy at the time was simply the utilization of digital tools to conduct diplomacy. Examples of this conflation can be seen in early academia. In 2002, Cyber-Diplomacy: Managing Foreign Policy in the Twenty-First Century was one of the first academic sources to use the term "cyber diplomacy"; however, its central premise was focused on how the Canadian Department of Foreign Affairs and International Trade could best use information technology to communicate. Jurgen Kleiner's 2008 The Inertia of Diplomacy also has a section titled "cyber diplomacy" but repeats the conflation, treating cyber diplomacy as simply public diplomacy on the Internet; Kleiner fails even to provide a definition of what cyber diplomacy means. It is therefore not difficult to understand why there was no recognition of this conflation; simply looking at the various terms and definitions of the past reveals a heavy focus on communication capabilities.
Digital Versus Cyber Diplomacy Traditional practitioners of diplomacy soon adapted to the new communication capabilities the Internet gave rise to. To describe this phenomenon, the interchangeable usage of e-diplomacy and digital diplomacy came to dominate the characterization of what was happening – governments utilizing social media. The United States Department of State can be seen as one of the pioneers of this trend. The United States opened an Office of eDiplomacy in 2003, which would dedicate itself to exploring "The New Diplomacy" model (Snyder 2003). This model was based on innovative thinking and the use of new media to expand the reach of US foreign policy. Hence the term
e-diplomacy, or digital diplomacy, was born. E-diplomacy, as noted by scholars (Al-Muftah et al. 2018; Hanson 2012), does not have a single definition, but it is generally considered the use of communication and information technology for the purpose of attaining foreign policy goals. With the framework of a "diplomacy of cyberspace" in mind, by contrast, new ways of looking at diplomacy and cyberspace emerge; two key ones are the format of negotiations and the role of the "cyber diplomat," which have led to the institutionalization of the diplomacy of cyberspace.
The Institutionalization of Cyber Diplomacy The prominence of cyber issues and the need to cooperate on them have also led governments to establish dedicated departments in their foreign ministries and to appoint ambassadors or representatives for cyber affairs – so-called cyber diplomats. The main aim in creating these specialized departments was to centralize all aspects of cyber issues into one unit that coordinates the government's work in this regard. At the same time, it made multilateral engagements easier to conduct, with one voice speaking on behalf of the state. This is also part of the engagement process, which aims at building partnerships at the bilateral level as well as within international formats. Supporting other countries in developing cybersecurity policies and capacity building has been a feature of the international cooperation framework. Engagement is thus conducted bilaterally, via talks and dialogues, or multilaterally, as in the case of the United Nations. However, due to the intertwining of private actors and stakeholders in cyberspace and technology, the role of a cyber diplomat is much less traditional than that of their classical counterpart. A cyber diplomat has to take into account a wide range of nongovernmental viewpoints and experts when engaging in talks. In fact, cyber diplomats consult regularly with these nonstate actors in order to keep up to date with the
latest technological developments. This important aspect of engagement requires working with civil society and industry to get their input and create a multistakeholder model for approaching cooperation and challenges in cyberspace. By gathering ideas from these multiple directions, a more comprehensive framework can be achieved. Cyber diplomacy is much less about a public affairs officer running a Twitter account and more about the overarching framework of what the rules are – for example, rules regarding how Twitter runs its platform. We can think of cyber diplomacy as a chessboard on which a multitude of different actors, policies, and platforms are at play. Any area that has relevance to cyberspace will be affected by cyber diplomacy, such as trade policy, security, freedom of governance, and freedom of speech. "It's paramount nation states make sure that cyberspace doesn't become the Wild West, where everybody can do whatever they want. ... What's happening now, in a global perspective, is that we're trying to establish the parameters of what behavior should be punished in cyberspace." – Tiirmaa-Klaar, Estonian cyber ambassador (Maack 2019)
Cyber diplomacy was born out of the necessity to regulate the emerging battlefield in cyberspace. Cyberattacks, hacking, cybercrime, cyber espionage, IP theft, and disinformation are all problems that still require some sort of international rule set. After all, how can a state retaliate against actors when there is no process by which to attribute attacks to them? The need for cyber diplomacy follows the same logic that evolved around airspace or the maritime domain. These were all areas that at one point had no set of governing norms; it was only via diplomatic negotiations that international society was able to create an overarching set of standards and ultimately agree upon laws in these sectors.
International Practices of Cyber Diplomacy The recent trends in cyberspace development coupled with the growing number of cyberattacks
have outlined the need to take measures to tackle these issues at the international level and to maintain a climate of stability and cooperation. Efforts have been undertaken under different frameworks, such as at the UN level through the UN Group of Governmental Experts (UNGGE). There has been general agreement at the UN level that international law can and should be applied in cyberspace. However, the process of applying this in practice has stalled, with governments unable to reach a consensus. It is important to mention that there have been several initiatives and formats aimed at creating norms and rules in cyberspace, not only at the state level (the Global Commission on the Stability of Cyberspace) but also driven by or involving industry, such as the Digital Geneva Convention (Microsoft, 2017) or the Charter of Trust (Siemens, 2018). Besides the UN, the Organization for Security and Cooperation in Europe (OSCE) also stands out with its initiative to develop voluntary confidence building measures that apply in cyberspace. In concrete terms, the OSCE member states initially agreed (2013) on a first set of measures that set up communication and dialogue avenues and contributed to more transparency. This was followed by a second set (2016) aimed at strengthening cooperation among states (Matino 2018, p. 2). These measures have proven useful in offering concrete guidelines for states to follow, but the fact that they are voluntarily agreed upon shows their limitations. However, these kinds of initiatives complement each other and should not be downplayed.
The EU has established itself as an important cyber diplomacy actor through different tools: working with partners and helping them develop capabilities, institutions, and policies in the cyber area; increasing resilience in the face of cyber threats; and developing institutional frameworks to deal with cyber diplomacy at bilateral, regional, and global levels (EU Cyber Direct 2019, p. 12). The EU supports the application of existing international law in cyberspace and has been very active in cyber capacity building projects.
Disagreement and Fragmentation of Norms in Cyberspace Cyber diplomacy is also a process for spreading normative values and for creating a common core framework for conduct in cyberspace. However, states hold conflicting or differing views on what this framework should look like. Such conflict runs the risk of fragmenting norms and causing international disagreement (Homburger 2019). Cyber diplomacy, as part of foreign policy, encompasses not only international norm building in cyberspace but also partnerships and confidence building measures (Bendiek 2018, p. 2). Cyber analysts and scholars have already identified such a fragmentation happening on the international stage. Beginning in 1998, the Russian Federation requested that the UN investigate developments in information and telecommunications in the context of international security. It is widely held that the Russian Federation was concerned that these new technological developments would affect a state's ability to maintain stability and thus pose a security risk. This proposal framed cyber conduct first and foremost as a "security" issue (Stauffacher 2019), with the United States interpreting it as an attempt to constrain its development of cyber technology. The UN established a Group of Governmental Experts (UNGGE) in 2004 to generate an international consensus on the Russian proposal; however, the group failed for lack of consensus. So far there have been six GGEs: 2004/2005, 2009/2010, 2012/2013, 2014/2015, 2016/2017, and 2019/2021. The 2019/2021 period also saw the establishment of an Open-Ended Working Group (OEWG). This now-split process embodies how contentious the topics surrounding cyber diplomacy are. The OEWG was born out of Russian and Chinese disagreement with the GGE process in the run-up to 2011.
In 2011, both nations began promoting a UN Draft Code of Conduct for Information Security, which argued that information content poses a serious threat to national security and that states must enact measures to maintain sovereign control over their sphere of information. This vision opposed what
scholars see as the "Western model" of cybersecurity, which emphasizes the free flow of information and content with minimal restrictions. The OEWG was formally established by one of two resolutions stemming from the 2018 UN General Assembly, taking input from the 2016/2017 UN GGE. Unlike the UN GGE, which was initially limited to 15 member states (25 for 2019/2021), the OEWG is open to all UN member states; the GGE's restricted membership had been a serious point that Russia brought against the process. In terms of actual norm creation, the 2013 and 2015 reports were the only ones to produce some sort of breakthrough in cyber diplomacy, whereby all states agreed to apply international law to cyberspace. The following 2017 report failed, however: while agreement existed on applying international law to cyberspace, no consensus could be reached on creating an "attribution council," which would have laid the groundwork for allowing the international community to identify and attribute cyberattack perpetrators. Attribution is an important layer in regulating state actions in cyberspace and adhering to a set of laws; being able to hold responsible states that undertake cyberattacks is an important milestone in the effort to establish a framework of laws in cyberspace. Both the UN GGE 2019/2021 and the OEWG 2019/2021 have largely overlapping mandates and continue to cause friction at the international level. Whether the OEWG will prove more successful is doubtful, as it too must work on the basis of consensus to produce a final report.
Conclusion

Cyber diplomacy is still a rather new concept at the theoretical level, and states are still developing practice in this regard; there are therefore many angles from which to look at it. Cyber diplomacy must involve a wide range of actors who cooperate with each other in the process of ensuring a secure and regulated cyberspace. In this process, it is essential for governments, industry, and civil society to engage in dialogue and exchange relevant information and expertise. Future regulations should continue to incorporate input from multiple stakeholders. International fora will become ever more essential in the near future as governments look to reach consensus on relevant issues pertaining to cyberspace. This entry is intended to give a brief overview of the concept of cyber diplomacy and the nexus of relevant aspects surrounding it. Further research is required to delve deeper into the process of cooperation in cyberspace at the international level, as well as into overcoming the current challenges.
Cross-References ▶ Origins of Cyber-warfare
References

Al-Muftah, H., Weerakkody, V., Rana, N. P., Sivarajah, U., & Irani, Z. (2018). Factors influencing e-diplomacy implementation: Exploring causal relationships using interpretive structural modelling. Government Information Quarterly, 35(3), 502–514. https://doi.org/10.1016/j.giq.2018.03.002.
Barrinha, A., & Renard, T. (2017). Cyber-diplomacy: The making of an international society in the digital age. Global Affairs, 3(4–5), 353–364. https://doi.org/10.1080/23340460.2017.1414924.
Bendiek, A. (2018, April 19). The EU as a force for peace in international cyber diplomacy. Retrieved from https://www.swp-berlin.org/en/publication/the-eu-as-a-force-for-peace-in-international-cyber-diplomacy/
EU Cyber Direct. (2019). Cyber diplomacy in the European Union. Retrieved from https://eucyberdirect.eu/wp-content/uploads/2019/12/cd_booklet-final.pdf
Hanson, F. (2012, October 25). The history of eDiplomacy at the U.S. Department of State [Report]. Retrieved from https://www.brookings.edu/research/the-history-of-ediplomacy-at-the-u-s-department-of-state/
Homburger, Z. (2019). The necessity and pitfall of cybersecurity capacity building for norm development in cyberspace. Global Society, 33(2), 224–242. https://doi.org/10.1080/13600826.2019.1569502.
Maack, M. (2019). What the hell is a 'cyber diplomat'. Retrieved from https://thenextweb.com/eu/2019/05/24/what-the-hell-is-a-cyber-diplomat/
Matino, L. (2018, May 2). Give diplomacy a chance: OSCE's red lines in cyberspace. Retrieved from https://www.ispionline.it/en/pubblicazione/give-diplomacy-chance-osces-red-lines-cyberspace-20377
Riordan, S. (2016, May 12). Cyber diplomacy vs. digital diplomacy: A terminological distinction [CPD Blog]. Retrieved from https://www.uscpublicdiplomacy.org/blog/cyber-diplomacy-vs-digital-diplomacy-terminological-distinction
Snyder, J. T. (2003, July 24). The new diplomacy: The virtual consulate model. Retrieved from https://2001-2009.state.gov/r/adcompd/rls/27561.htm
Stauffacher, D. (2019, May). UN GGE and UN OEWG: How to live with two concurrent UN cybersecurity processes. Retrieved from https://ict4peace.org/wp-content/uploads/2019/11/ICT4Peace-2019-OEWG-UN-GGE-How-to-live-with-two-UN-processes.pdf
Further Reading

EU Cyber Direct. Retrieved from https://eucyberdirect.eu/wp-content/uploads/2019/12/cd_booklet-final.pdf
Schulzke, M. (2018). The politics of attributing blame for cyberattacks and the costs of uncertainty. Perspectives on Politics, 16(4), 954–968.
UNODA Fact Sheet: Developments in the field of information and telecommunications in the context of international security. Retrieved from https://www.un.org/disarmament/ict-security/
Ziolkowski, K. (Ed.). (2015). Peacetime regime for state activities in cyberspace: International law, international relations and diplomacy. Tallinn: NATO Cooperative Cyber Defence Centre of Excellence.
D
Deforestation

Hasan Volkan Oral
Department of Civil Engineering & EPPAM, Istanbul Aydin University, İstanbul, Turkey

Keywords
Forest removal · Environmental security · Forest clearance · Global warming · Population growth
Introduction

According to the Cambridge Dictionary (2018), global security is the protection of the world against war and other threats. With the development of the Internet, the content of these threats has changed, and in the early 2000s they came to be considered under the heading of information technology (Smith et al. 2006). In 2019, trade wars, cybersecurity breaches, and the indirect impacts of climate change became the hot topics of global security concern (Global Trends 2019). Kirchner and Sperling (2008) treated these threats in a more accessible and understandable way, grouping them into regional and global categories, while Hough (2018) categorized them as military, economic, social, environmental, health, natural, accidental, and criminal threats. Zapolskis (2012) framed this aspect of global security as environmental security, defined in terms of how various environmental factors (climate, resources, etc.) and processes can affect the security of states and societies. Environmental security examines the relationships between different environmental issues, their effects, and various security problems. The environment is considered an integral part of this security concept, together with the dimensions of economic, social, energy, or information security. Zurlini and Müller (2008) stated that environmental security is also central to national security, comprising the dynamics and interconnections among humans and natural resources. Moreover, environmental security deals with environmental problems such as the depletion and degradation of tropical forests, soil erosion, fuel wood shortage, air and water pollution, extinction of species, and reduction in biological diversity (Akbulut 2014). The degradation of tropical forests, also known as deforestation, is one of the most significant environmental problems in the world today; accordingly, deforestation is a serious global threat. The twenty-first century is a period in which environmental problems are noticed more than ever before in the world and their results can be observed at their most lethal level. The rapidly growing demographic structure and globalization are leading to several environmental issues through uncontrolled urbanization, industrialization, deforestation, and loss of useful agricultural land (Singh and Singh 2017). Added to these issues, other environmental topics such as food and agricultural scarcity, biodiversity richness, species extinction, and the causes and consequences of climate change fall within the area of environmental geopolitics (O'Lear 2018). Globalization, specifically, has enormous effects on these environmental problems. For instance, economic globalization has provided free market economics with ideological and political victories that have led to a substantial change in the world's economic structure. It has also introduced an international division of labor, owing to low costs and comparatively unregulated work standards in developing countries (Akbulut 2014). Among these problems, deforestation can be described as the removal of forest ecosystems by human hand. As a result, plants that can photosynthesize diminish and the amount of carbon dioxide (CO2) gas in the atmosphere increases, soil erosion accelerates, natural disasters such as flooding take place more often, and the nutrient level of the soil is altered. Increasing CO2 leads to global warming, and the impact of this increase is more dangerous than the accumulation of CO2 in the atmosphere resulting from fossil fuel emissions. A principal cause of deforestation is the search for alternative living space driven by human population growth; the removal of the Amazon rainforests is one of the most comprehensive examples of this problem. The primary aim of this entry is to provide accessible and understandable information about deforestation and its causes, and to present its relationship with global warming. The secondary aim is to define its relationship with global security through global warming. For these purposes, the definition of deforestation, the factors affecting the occurrence of this problem, and the relationship between deforestation, climate change, and global warming are given in this entry.
Definition of Deforestation

Deforestation is a subset of the bigger land-use change problem. Also known as forest degradation or forest clearance, it can be defined as the conversion of forest land to non-forest land (IPCC 2000). The causes of deforestation can be grouped as follows:

(a) Population growth
(b) Opening spaces in forests for urbanization and farming
(c) Forest economics

Population Growth
Like all living creatures in the world, humans reproduce in order to sustain their generations under favorable conditions and to increase their numbers. Towards the end of the eighteenth century, humankind, enriched after the Industrial Revolution, began shifting from rural areas to large cities. In 2018, the world population was 7.7 billion people; today, 55% of the world's population lives in urban areas, a proportion that is expected to increase to 68% by 2050 (UNDES 2018). As a result, alternative habitats need to be sought in order to meet the growing population's needs, such as nutrition and shelter.

Opening Spaces in Forests for Urbanization, Farming, and Cattle Ranching
Housing is one of the most basic human needs. Since the beginning of civilization, human beings have met the need for accommodation with the possibilities offered by nature itself. Humankind, which established cities together with the agricultural revolution, continued its life in cities in parallel with the development of civilization. The human population in cities has been expanding since the beginning of the twentieth century, and the cities became too small for their inhabitants. This expansion drives urbanization, which results from population migration out of rural areas (WHO 2018). As a result of urbanization, the search for alternative living places has begun. In addition, the economic needs of the population have grown due to its rapid expansion. For this reason, forests have been removed to make way for new production facilities and to increase agricultural production. Consequently, new farms for the production of milk and dairy products are established on these lands. Converting cleared forest lands to pasture is another major type of environmental problem, known as cattle ranching: these ranches are constructed on lands acquired from removed forests. According to the Food and Agriculture Organization (FAO) (2018), the link between deforestation and cattle ranching is strongest in Latin America. In Central America, forest area has been reduced by almost 40% over the past 40 years; over the same period, pasture areas and the cattle population increased rapidly.

Forest Economics
Forests have been among the most important economic subsistence sources for humans throughout the ages. One of the best examples of this is the use of timber: harvesting timber to create commercial items such as paper, furniture, and homes is the most common and well-known example. There is ample global supply for the foreseeable future, and although there is a worldwide trend towards deforestation, it is generally due to clearing land for agriculture rather than logging for timber. Nevertheless, illegal logging remains a concern (Ramage et al. 2017). Illegal logging, which covers illegal activities ranging across forest ecosystems and industries to timber and non-timber forest products, seriously threatens forests, especially in the tropical regions of the world. For instance, in countries such as Cambodia, Indonesia, and Bolivia, illegally logged production is estimated to exceed 80% (Lee et al. 2018). Extracting oil from palm trees is another example of using forests as an economic subsistence source. A large proportion of palm oil expansion occurs at the expense of biodiversity and ecosystems in the countries where it is produced. For instance, large areas of forest in Sumatra, Indonesia, have been replaced by cash crops like oil palm and rubber plantations (egu.eu 2017).
In addition, this type of production is the largest cause of deforestation in Indonesia and other equatorial countries, clearing expanses of tropical rainforest. Indonesia's endangered orangutan population, which depends upon the rainforest, has dwindled by as much as 50% in recent years (Miniscalco 2019).
Deforestation and Climate Change

Briefly, deforestation results from human activities that partially remove forest carbon stocks without regeneration within a reasonable time frame (on the order of a decade). That is, the rate of biomass carbon removal is greater than the rate of regrowth, resulting in a gradual decline in overall biomass carbon stocks (De Fries et al. 2007). Deforestation releases CO2 to the atmosphere, partly through soil respiration, because carbon stored in the organic matter of trees and soils is oxidized during the processes of removing the forests. Carbon dioxide fluxes from deforestation are highly uncertain components of the contemporary carbon budget, owing to changes in forest soils. Deforestation frequently occurs in the tropics and accounts for about 2 Gton C/year of CO2 released to the atmosphere. During combustion of any fossil fuel, carbon is oxidized and CO2 is released; fossil fuel combustion releases about 5 Gton C/year, a value rising exponentially with time, driven by population and economic growth. Together, the two processes release carbon to the atmosphere at a rate of about 7 Gton C/year (Archer 2007). It is estimated that 25% of the world's total greenhouse gas production comes from deforestation alone. Furthermore, forests around the world store more than double the amount of CO2 found in the atmosphere. This means that when areas are deforested, the CO2 stored in those trees is released into the atmosphere (Climate and Water 2019).
Types of Deforestation

There are four types of deforestation (Learn ArcGIS 2013; Gichucho et al. 2013), as follows:

(a) Land clearing to prepare for livestock grazing or expansion of crop planting
According to FAO (2018), the deforestation process starts when roads are cut through the forest, opening it up for logging and mining. Once the forest along the road has been cleared, commercial or subsistence farmers move in and start growing crops. As a result, forest soils become too nutrient-poor and fragile to sustain crops for long. After 2 or 3 years, the soil is depleted and crop yields fall; the farmers then let the grass grow and move on, and the ranchers move in.

(b) Commercial logging and timber harvests

Logging operations in a forest should be well synchronized and productive within work cycles, but in practice often occur in a scattered and unsystematic manner. Lack of harvest preparation, low recovery rates, and improper working techniques in felling and crosscutting result in low extraction intensity (FAO 2001). For instance, one of the leading causes of rainforest destruction is logging. Many types of wood used for furniture, flooring, and construction are harvested from tropical forests in Africa, Asia, and South America. By buying such wood products, people in places like the United States and Europe are connected to, and accelerate, the destruction of rainforests (EC 2013).

(c) Slash-and-burn forest cutting for subsistence farming

Slash-and-burn agriculture typically refers to land uses where a cropping period is rotated with a fallow period long enough to enable the growth of dense, woody vegetation, and where the biomass is eliminated from the plot by cutting, slashing, and burning it prior to the next cultivation cycle. It is generally considered an extensive land use, maintained through time by expansion over uncultivated land following population growth, in contrast with more intensive land uses, where the biomass is incorporated into the soil through plowing or other practices.
Slash-and-burn agriculture is a widely adopted and sometimes inescapable strategy for practicing agriculture in forested landscapes. Most staple annual crops require full exposure to the sun in order to grow; hence, areas of forest need to be cleared to establish new fields. This offers excellent sanitary conditions for crops because their main competitors (weeds) and threats (pests and diseases) are destroyed, apart from wild animals if some forest remains around the field (Pollini 2014).

(d) Natural events such as volcanic eruptions, stand windthrow from hurricanes, catastrophic forest fires, or changes in local climate and rainfall regimes

Volcanic eruptions and subsequent lava flows sometimes burn large tracts of forest, while the gases released by the activity can diminish wildlife (Butler 2019). The combination of forest fires with land-use change and climate change could speed destruction in areas like the Amazon and contribute to emissions of CO2 that contribute to global warming (Doyle 2017). According to Bennett and Barton (2018), theories connecting forests with rainfall peaked in popularity at the end of the nineteenth century, a period when scientists expressed alarm that deforestation caused regional declines in precipitation. Forests were considered to create rain within a locality and region. By the early twentieth century, scientific consensus had shifted to the view that forests did not play a significant role in determining rainfall. The forest-rainfall connection reemerged in the 1980s alongside advances in climate modelling and growing fears of anthropogenic global warming and tropical deforestation. Employing new data and theories, supply-side advocates have once again placed a strong forest-rainfall connection into scientific prominence.
Conclusion

This entry has discussed the causes and impacts of deforestation and defined the factors that allow those impacts on the environment to occur. Deforestation is one of the most important problems of the twenty-first century.
Today, the biggest impact of deforestation can be seen in climate change and global warming, and a decrease in deforestation can also lessen the impact of these problems. Tree planting should be made a state policy by governments, especially in third-world and developing countries. To combat deforestation, the first measure that must be urgently launched is controlling the rapidly growing human population. Cattle ranching, illegal logging, and the opening of huge tracts of land in tropical forests must be urgently stopped. In addition, sustainable development principles need to be applied in a complementary manner in the fight against deforestation. Beyond the environmental concerns, deforestation is also one of the main discussion topics in global security. As a rule of thumb, the concept of security relates to individuals being able to continue their daily lives comfortably without encountering problems. This philosophy also applies in the political sphere and is closely related to the existence of nations. People's fates are closely tied to the geography they inhabit, and countries likewise share interests with their geography.
Cross-References ▶ Deforestation ▶ Ecosystems
References

Akbulut, A. (2014). Environmental degradation as a security threat: The challenge for developing countries. International Journal of Human Sciences, 11(1), 1227–1237.
Archer, D. (2007). Methane hydrate stability and anthropogenic climate change. Biogeosciences Discussions, 4(2), 993–1057.
Bennett, B. M., & Barton, G. A. (2018). The enduring link between forest cover and rainfall: A historical perspective on science and policy discussions. Forest Ecosystems, 5(5), 2–9. https://doi.org/10.1186/s40663-017-0124. Accessed 22 Dec 2018.
Butler, R. A. (2019). Deforestation. https://rainforests.mongabay.com/08-deforestation.html. Accessed 01 Dec 2019.
Cambridge Dictionary. (2018). Global security. https://dictionary.cambridge.org/dictionary/english/global-security. Accessed 24 Dec 2018.
Climate and Water. (2019). Characteristics of world weather and climate. https://www.climateandweather.net/global-warming/deforestation.html. Accessed 12 Jan 2019.
De Fries, R., et al. (2007). Land use change around protected areas: Management to balance human needs and ecological function. Ecological Applications, 17(4), 1031–1038.
Doyle, A. (2017). Forest fires stoke record loss in world tree cover. https://www.scientificamerican.com/article/forest-fires-stoke-record-loss-in-world-tree-cover/. Accessed 29 Jan 2019.
European Committee (EC). (2013). The impact of EU consumption on deforestation: Comprehensive analysis of the impact of EU consumption on deforestation. https://ec.europa.eu/environment/forests/pdf/1.%20Report%20analysis%20of%20impact.pdf. Accessed 01 Aug 2019.
European Geosciences Union. (2017). Deforestation linked to palm oil production is making Indonesia warmer. https://www.egu.eu/news/355/deforestation-linked-to-palm-oil-production-is-making-indonesia-warmer/. Accessed 02 Apr 2019.
FAO. (2001). Commercial timber harvesting in the natural forests of Mozambique (Forest Harvesting Case Study, 18). http://www.fao.org/3/a-y3061e.pdf. Accessed 01 Feb 2019.
FAO. (2018). Cattle ranching and deforestation (Livestock Policy Brief 03). http://www.fao.org/3/a-a0262e.pdf. Accessed 29 May 2018.
Gichucho, C., et al. (2013). Land cover change and deforestation in gazette Maji Mazuri Forest, Kenya. International Journal of Science and Research (IJSR), 2, 563–566.
Global Trends. (2019). Security threats. https://trends.sustainability.com/security-threats/. Accessed 29 Sept 2019.
Hough, P. (2018). Understanding global security (2nd ed.). London: Routledge Publications.
IPCC. (2000). Summary for policymakers: Land use, land-use change, and forestry. https://www.ipcc.ch/pdf/special-reports/spm/srl-en.pdf. Accessed 05 Apr 2018.
Kirchner, E. J., & Sperling, J. (2008). Global security governance (1st ed.). London: Routledge.
Learn ArcGIS. (2013). Compare roads and deforestation. https://learn.arcgis.com/en/projects/get-started-with-arcmap/lessons/compare-roads-and-deforestation.htm. Accessed 29 Sept 2019.
Lee, Y. H., et al. (2018). Profit sharing as a management strategy for a state-owned teak plantation at high risk for illegal logging. Ecological Economics, 149, 140–148.
Miniscalco, E. (2019). Is harvesting palm oil destroying the rainforests? https://www.scientificamerican.com/article/harvesting-palm-oil-and-rainforests/. Accessed 14 Jan 2019.
O'Lear, S. (2018). Environmental geopolitics. Maryland: Rowman and Littlefield Publications.
Pollini, J. (2014). Slash and burn agriculture. In P. B. Thompson & D. M. Kaplan (Eds.), Encyclopedia of food and agricultural ethics. New York: Springer.
Ramage, M. H., et al. (2017). The wood from the trees: The use of timber in construction. Renewable and Sustainable Energy Reviews, 68(1), 333–359. https://doi.org/10.1016/j.rser.2016.09.107. Accessed 15 July 2019.
Singh, R. L., & Singh, P. K. (2017). Global environmental problems. In R. Singh et al. (Eds.), Principles and applications of environmental biotechnology for a sustainable future (Applied environmental science and engineering for a sustainable future). Singapore: Springer.
Smith, M., et al. (2006). Countering security threats in service-oriented on-demand grid computing using sandboxing and trusted computing techniques. Journal of Parallel and Distributed Computing, 66, 1189–1204.
UNDES. (2018). 68% of the world population projected to live in urban areas by 2050, says UN. https://www.un.org/development/desa/en/news/population/2018-revision-of-world-urbanization-prospects.html. Accessed 28 May 2018.
WHO. (2018). Climate change and human health: Urbanization and health. http://www.who.int/globalchange/ecosystems/urbanization/en/. Accessed 28 May 2018.
Zapolskis, M. (2012). The concept of environmental security in international relations: Definition, features, implications. Politologija, 65(1), 114–116.
Zurlini, G., & Müller, F. (2008). Environmental security. In Encyclopedia of ecology. New York: Elsevier.
Further Reading

Land Use, Land-Use Change and Forestry. http://www.ipcc.ch/ipccreports/sres/land_use/index.php?idp=49. Accessed 07 Nov 2019.
Democratic Security

Max Steuer
Department of Political Science, Comenius University, Bratislava, Slovakia

Keywords
Democracy · Abuses of fundamental rights · Majority rule · Rule of law · Threat perceptions · Council of Europe · Anti-terrorist legislation

The concept of democratic security attempts to reconcile the tension between democracy, which is inseparable from fundamental rights guarantees (Schaffer 2015), on the one hand, and due consideration given to security at both the national and the individual level on the other. As such, it opens up room for more multilevel analyses (international, state, and individual) of concrete strategies and institutions that aim to show the inseparability of human rights and security. Among these strategies and institutions are state constitutions, international conventions and other binding and nonbinding documents of international law, as well as specific security strategies, including those drafted by nongovernmental organizations or in academia. In addition, the concept makes it possible to look at the practice of reconciling democracy and security in concrete decisions and actions taken by governments as well as other domestic and international actors. This contribution begins by elaborating on the meaning(s) of democratic security, with the aim of providing a hands-on conceptualization that can be used in empirical analyses and of minimizing the risks entailed in bringing a relatively new and not-so-frequently used concept into the discussion about (global) security. Secondly, it introduces a few avenues where the concept can be applied, illustrating its particular relevance for several contemporary societal dilemmas, which entail an at least seeming trade-off between adhering to democratic principles and providing people with security guarantees. Thirdly, it explores a few other avenues where the concept has relevance beyond what is widespread in contemporary studies. In conclusion, it argues for more discussion of this concept, the meaning of which is insufficiently captured by the better-known concept of human security in security studies.
The Meanings of Democratic Security

In order to grasp the meaning of democratic security, this contribution draws inspiration from the concept-building tool presented by Calise and Lowi (2010) in conjunction with the existing (rather scarce) scholarship on the concept. Democratic security is a type of security that emphasizes its democratic elements, that is, elements of shared rule and decisions, accountability, and protection of fundamental rights. Thus, if we imagine security as defined by two axes – individual versus collective, and democracy versus autocracy – democratic security could be placed in the quadrant marked by "democracy" and "collective." In other words, democratic security emphasizes involvement, joint decisions, and respect for constitutional boundaries as well as for the international commitments of the decision-making actors, over the application of immediate hard measures in the name of effectiveness and the removal of (real or imagined) threats. It is quite clear that behind this lies a substantive conceptualization of democracy that goes beyond majority rule. A minimalist conceptualization of democracy would limit the scope of application of democratic security, even though it could still be used more narrowly in evaluating whether concrete security policies are executed with the approval of democratically elected majorities. This understanding of democratic security should not be confused with the Colombian policy of the same name, which arguably does not meet democratic standards in several ways (Flores-Macías 2014; Mason 2003). Rather, properly understood, it builds on the claim that "security policy cannot be excluded from democratic decision-making processes without both destroying democracies and corrupting security policy" (Johansen 1991, p. 210). Thus, it cannot simply be labelled "security in democracies," as asserted by one contribution that operates with the concept in its title but does not elaborate on it in its body (Hayes 2012, p. 64). Instead, other projects centered around the "democratic governance of security" or the "democratic security agenda" are more useful for the development of the concept.
The former was developed by Loader and Walker in their account of the four “pathologies” of modern security governance that risk the effective undermining of democracy (“paternalism, consumerism, authoritarianism and fragmentation”) (Loader and Walker 2007, pp. 196–215) and the ways to overcome them.
The "democratic security agenda" departs from the foundations of democratic peace theory when arguing that "'hard security' [. . .] can no longer guarantee stability, democratic norms and practices are vital foundations for lasting peace" (Directorate of Policy Planning 2015, p. 2). Democratic peace can be viewed as the normative foundation of democratic security, but in addition, the democratic practice of security policy-making remains a vital element of the concept as well. Therefore, the relevance of the concept of democratic security does not depend on the validation of democratic peace theory. The claim that the tension between democracy and security is irrevocable because "security practice inherently organizes social and political relations around enemies, risks, fear, anxiety" (Huysmans 2014, p. 4) could be seen as a critique of the democratic security agenda. However, empirically there are enemies of democracy operating both within and outside the contexts of democratic regimes (e.g., authoritarian political leaders or political parties striving to overthrow the regime). While normatively one could envision democracy operating in a context without such enemies (and thus without the "risks, fear [and] anxiety" which they may trigger in individuals who believe in democracy), empirically this has never been the case, and it is unlikely ever to become one, given that democracy also entails respect for the diversity of worldviews and cannot suppress all authoritarian tendencies. Consequently, it seems more useful to determine which criteria security policies and practices need to meet in order to be considered compatible with democracy. As the next section shows, democratic security can be utilized for the purposes of empirical analysis without making a normative judgment as to whether democratic security is the only, or the most, desirable approach to security.
At the same time, a normative analysis holding that the only sustainable approach to security unfolds through democracy remains a scholarly alternative as well.
The Application of Democratic Security

Democratic security may be applied to answer a range of research questions inquiring into the decision-making behind, as well as the implementation of, policies affecting the standards of fundamental rights protection. The 2015 report of the Secretary General of the Council of Europe, followed by the 2016 and 2017 reports, stipulates an even broader conceptualization, whereby the "delivering of democratic security" is determined by the presence and flourishing of an "efficient and independent judiciary, free media, vibrant & influential civil society, legitimate democratic institutions and inclusive societies" (Jagland 2015, p. 6, cf. also 2016, 2017, 2018). While this conceptualization signals a commitment to an understanding of democratic foundations as intertwined with security guarantees, it is slightly too broad for the concept to make a meaningful difference between democratic security and democracy itself. At both the global and state levels, there is a proliferation of security measures that are questionable from the perspective of the democratic procedures of their adoption. This can be read not only as part of a "global trend of autocratization" (Lührmann et al. 2018) but also in the context of the increasing complexity of issues that have direct implications for individual rights in the digital era. At the level of the universal international organization, the United Nations, the process of appointing member states to the Human Rights Council has resulted in outright authoritarian regimes occupying seats in this body. While this result can be seen as in line with procedural democracy and the unifying mission of the United Nations, the capacity of these members to shape the global agenda of human rights protection despite their constant violation of international human rights norms may legitimize policy measures that undermine democratic security on the global stage.
For example, China, as a member of the United Nations Human Rights Council, successfully advocated for the adoption of a resolution calling for "a community of shared future for human beings" (UN Human Rights Council 2018). This phrase is part of the "dictionary" of Xi Jinping's efforts towards making China a global actor (Xiaochun 2018; Zongze 2018), sidelining human rights abuses by its state organization. While this resolution might be seen as a "drop in the ocean" of international politics, it indicates the capacity of autocratic actors with no concern for domestic democratic security measures to present their agenda on the global stage offered by organizations such as the United Nations. In the following, four examples of the application of democratic security will be detailed. These examples by no means exhaust the subject of study, surveillance policies being a notable case not covered. The first example addresses the war on terror, which gained prominence particularly after the attacks of 9/11. In the process of ensuring national (and even contributing to global) security, democratic procedures were often sidelined. Globally, the "counter-terrorism mandate" came to include phenomena such as "organized crime, drug trafficking, and illegal immigration" (Crelinsten 1998). In the USA, executive powers have increased in a seldom accountable process of decision-making on security policies. As Starr-Deelen (2017) describes, congressional oversight that should represent the citizens of the state has been limited, due to external factors or voluntary decisions of the legislative body. Consequently, the President of the United States possesses greater de facto executive powers than before 2001, which complicates the prevention of their abuse. In turn, nothing prevents the office-holder from adding these powers to the "arsenal" of a "global constitutional breaching experiment" (Havercroft et al. 2018), which President Donald Trump began to carry out while in office. At the same time, democratic states are not the only ones which may use limited or even nondemocratic procedures to centralize decision-making powers.
The implementation process of Chinese counter-terrorism policy (Tai-Ting Liu and Chang 2017) mirrors the US case in that, while it unfolds in a nondemocratic regime, it demonstrates features similar to policy-making in the democratic regime. Setting aside the problem of Chinese legislation not being adopted in a substantively democratic process, this legislation may not only empower but outright legitimize (domestically as well as internationally) policy measures that are at odds with human rights protection. Hence, neglecting democratic security can inspire other states to engage in similar practices under the guise of formal rule of law. The concept of democratic security can help understand another policy area more thoroughly: the banning of political parties because of the threat they represent to the democratic regime. The logic of these bans goes as follows: if parties which demonstrably try to overthrow the regime get to power, the whole regime will be endangered. Therefore, in order to protect the regime (and all its citizens), such parties cannot be allowed to come to power, and party bans through legal means guarantee that. This measure is rarely discussed directly in connection with the concept of security (cf. Tyulkina 2015, who does not devote separate discussion to security considerations related to party bans). Disentangling its logic, however, points to how party bans follow the premises of democratic security, given that they aim to prevent the violation of fundamental rights and the overthrow of democratic regimes. At the same time, the parties which are banned often have next to no chance of building a legislative majority, so the rationale of this preventive measure to protect the "security of the democratic regime" can be questioned. The results of empirical analyses of the effectiveness of the bans in minimizing the presence of antidemocratic ideas in the public space are mixed as well (Bale 2007; Bértoa and Bourne 2017; Navot 2008). Applying the concept of democratic security makes it possible to link the discussion on party bans to a notion of security which includes the existing normative considerations for upholding the values of the rule of law and fundamental rights that may be undermined by (at least some) decisions on bans.
At the same time, it takes into consideration that certain political parties may indeed pose security threats to the continuation of democratic values, and that actions taken against them would be justified not "simply" by the normative preference of incumbent power-holders for a democratic regime, but by the security risks entailed in these parties gaining control of any state institution.
The third example concerns actions taken against instances of extreme speech in democracies. Extreme speech can be considered a security risk given that its most radical form (incitement to violence) can feed into violent actions against individuals or societal groups (mostly minorities). Interestingly, while major international and European human rights instruments enshrine the protection of the individual's "liberty and security" (emphasis added), they allow prohibiting speech (when certain other conditions are met), among others, for the protection of "national security" (see Hare 2009, pp. 63–74). The emphasis on the "security of the nation" rather than "human security" already indicates the tension between the individual's rights and the interests of a larger social group. Rather than limiting speech in order to protect an individual's security, it is the "nation" or the "public" that is declared to be protected by the limitation. Regardless of whether the perspective of individuals' rights or the "security of the nation" is employed, if the legitimacy of restricting speech for security reasons needs to be subjected to critical scrutiny, democratic security is a useful conceptual starting point for empirical analysis. The question with respect to concrete measures and decisions (for instance, the state of emergency declared in France after the terrorist attacks in 2015) is whether they operated with the presumption of remaining within the boundaries of the democratic regime, or whether they were ready to overstep these boundaries, sacrificing (some elements of) democracy in the name of security. These sacrifices might not be apparent, but they emerge from a more nuanced analysis of the reasoning behind the policies adopted, the evaluation of the approach chosen to implement them, as well as their practical effects.
Such an analysis may show that even a formal commitment to democratic security does not always result in practicing it in the organization under study (Steuer 2016). The final example moves from the domestic to the international (supranational) context, discussing the democratic underpinnings of the Common Foreign and Security Policy of the EU. As research in this area has demonstrated (Sjursen 2013), the specific decision-making structures of international and/or supranational organizations entail several challenges from a democratic perspective. This debate could be moved one step further by asking whether and to what extent the democratic deficits occurring at the level of decision-making about common security policy instruments undermine the legitimacy of these instruments and, eventually, open up room for unaccountable actions that run contrary to the fundamentals of democratic security both at the interstate and the community levels.
Unexplored Terrains of Democratic Security

As the examples have shown, democratic security can be applied both to assess the process of decision-making about various security-related policies and to evaluate the outcome of these policies from a democratic perspective. It enables the observer to uncover discrepancies between the presented purposes of a policy and its actual intent from the perspective of the foundations of democracy. While an extensive approach to democracy is preferable in order to provide a more complex evaluation, narrower conceptualizations focusing only on the majoritarian authorization of security measures may yield results as well. Ordinary domestic legislation on security-related matters (e.g., procedures for declaring a state of emergency, surveillance vs. privacy-related laws), documents such as the European Security Strategy, and international conventions on matters such as nuclear security, cybercrime, or health security can all be analyzed from a democratic perspective, looking at the procedure of their adoption and the consequences they entail for democratic standards locally, regionally, and internationally. Another avenue of research on democratic security, already implicit in the Council of Europe's broad conceptualization of democratic security, is the role of the media in monitoring and documenting the implementation of security policies. Democratic security requires that the media have broad access to decision-making procedures and the implementation of security policies, so that they can critically assess them and point to potential shortcomings. In turn, vibrant and free media enhance security in democracies by reducing the risk of uncontrolled concentration and abuses of power. Additionally, respected media help expose factually incorrect information, hoaxes, and demagoguery, and contribute to the sustainability of the democratic regime and the marginalization of antidemocratic actors. Last but not least, democratic security may be employed to assess the justifiability of various counterterrorism initiatives (Art and Richardson 2007) in a democratic context. As opposed to the notion of "smart militant democracy" (Walker 2011), it retains the focus on the commitment to democratic standards rather than on efficiency, with attention paid to avoiding an extensive departure from these standards. If policies limiting fundamental rights and bypassing processes of majoritarian decision-making (e.g., by fast-tracking legislative processes) are adopted without extensive evidence and demonstration of a causal nexus between the need for the policy and the neutralization of an (evidence-based) security threat, then democratic security cautions against their implementation on the grounds of their potential to backfire and infringe upon democratic principles.
Conclusion

The concept of democratic security is suitable both for the empirical evaluation of the presence of democratic elements in decision-making on security policy and of the execution of security measures in line with democratic principles, and for a normative position on the inherent link between democracy and security and the impossibility of either persisting without the other. Without the minimum security guarantees represented by law enforcement and respect for the constitutional foundations of the political community, a democratic regime is likely to fall apart soon due to the ever-present efforts to undermine it; conversely, without respect for democratic foundations in the implementation of security measures, these measures may become counterproductive and trigger (at best) a weakening of democratic standards.
The exact form and degree of these effects are likely to vary across specific policy areas and concrete communities, which calls for more single-case and comparative case studies examining them. Democratic security is better suited than human security to encompass the broad range of characteristics of a substantive definition of democracy, which includes not only the protection of fundamental rights but also majority rule, openness, and accountability of decision-making procedures. Thus, it is useful for studies of security policies at various levels of inquiry.
Cross-References

▶ Authoritarianism
▶ Civil Liberties
▶ Countering Violent Extremism (CVE)
▶ Emerging Powers
▶ Human Security
▶ Militant Democracy
▶ Role of the Media
▶ Rule of Law
References

Art, R. J., & Richardson, L. (2007). Democracy and counterterrorism: Lessons from the past. Washington, DC: US Institute of Peace.
Bale, T. (2007). Are bans on political parties bound to turn out badly? A comparative investigation of three 'intolerant' democracies: Turkey, Spain, and Belgium. Comparative European Politics, 5(2), 141–157. https://doi.org/10.1057/palgrave.cep.6110093.
Bértoa, F. C., & Bourne, A. (2017). Prescribing democracy? Party proscription and party system stability in Germany, Spain and Turkey. European Journal of Political Research, 56(2), 440–465. https://doi.org/10.1111/1475-6765.12179.
Calise, M., & Lowi, T. J. (2010). Hyperpolitics: An interactive dictionary of political science concepts. Chicago: University of Chicago Press.
Crelinsten, R. D. (1998). The discourse and practice of counter-terrorism in liberal democracies. Australian Journal of Politics & History, 44(3), 389–413. https://doi.org/10.1111/1467-8497.00028.
Directorate of Policy Planning. (2015). Council of Europe debates on democratic security (2015–2017). Concept paper. Strasbourg: Council of Europe. Retrieved from https://rm.coe.int/CoERMPublicCommonSearchServices/DisplayDCTMContent?documentId=090000168046e800. Accessed 1 Oct 2018.
Flores-Macías, G. A. (2014). Financing security through elite taxation: The case of Colombia's "democratic security taxes". Studies in Comparative International Development, 49(4), 477–500. https://doi.org/10.1007/s12116-013-9146-7.
Hare, I. (2009). Extreme speech under international and regional human rights standards. In I. Hare & J. Weinstein (Eds.), Extreme speech and democracy (pp. 62–80). Oxford: Oxford University Press.
Havercroft, J., Wiener, A., Kumm, M., & Dunoff, J. L. (2018). Editorial: Donald Trump as global constitutional breaching experiment. Global Constitutionalism, 7(1), 1–13. https://doi.org/10.1017/S2045381718000035.
Hayes, J. (2012). Securitization, social identity, and democratic security: Nixon, India, and the ties that bind. International Organization, 66(1), 63–93. https://doi.org/10.1017/S0020818311000324.
Huysmans, J. (2014). Security unbound: Enacting democratic limits. Abingdon: Routledge.
Jagland, T. (2015). State of democracy, human rights and the rule of law in Europe: A shared responsibility for democratic security in Europe. Strasbourg: Council of Europe. Retrieved from https://wcd.coe.int/ViewDoc.jsp?Ref=SG%282015%291&Language=lanEnglish&Ver=original&BackColorInternet=C3C3C3&BackColorIntranet=EDB021&BackColorLogged=F5D383. Accessed 1 Oct 2018.
Jagland, T. (2016). State of democracy, human rights and the rule of law in Europe: A security imperative for Europe. Strasbourg: Council of Europe. Retrieved from https://rm.coe.int/1680646af8. Accessed 1 Oct 2018.
Jagland, T. (2017). State of democracy, human rights and the rule of law in Europe: Populism – how strong are Europe's checks and balances? Strasbourg: Council of Europe. Retrieved from https://rm.coe.int/state-of-democracy-human-rights-and-the-rule-of-law-populism-how-stron/168070568f. Accessed 1 Oct 2018.
Jagland, T. (2018). State of democracy, human rights and the rule of law in Europe: Role of institutions, threats to institutions. Strasbourg: Council of Europe. Retrieved from https://rm.coe.int/state-of-democracy-human-rights-and-the-rule-of-law-role-of-institutio/168086c0c5. Accessed 1 Oct 2018.
Johansen, R. C. (1991). Real security is democratic security. Alternatives: Global, Local, Political, 16(2), 209–241.
Loader, I., & Walker, N. (2007). Civilizing security. Cambridge, UK: Cambridge University Press.
Lührmann, A., Mechkova, V., Dahlum, S., Maxwell, L., Olin, M., Petrarca, C. S., …, Lindberg, S. I. (2018). State of the world 2017: Autocratization and exclusion? Democratization, 25(8), 1321–1340. https://doi.org/10.1080/13510347.2018.1479693.
Mason, A. (2003). Colombia's democratic security agenda: Public order in the security tripod. Security Dialogue, 34(4), 391–409. https://doi.org/10.1177/0967010603344002.
Navot, S. (2008). Fighting terrorism in the political arena: The banning of political parties. Party Politics, 14(6), 745–762. https://doi.org/10.1177/1354068808093409.
Schaffer, J. K. (2015). The co-originality of human rights and democracy in an international order. International Theory, 7(1), 96–124.
Sjursen, H. (Ed.). (2013). The EU's common foreign and security policy: The quest for democracy. London: Routledge.
Starr-Deelen, D. G. (2017). Counter-terrorism from the Obama administration to President Trump: Caught in the fait accompli war. Basingstoke: Palgrave.
Steuer, M. (2016). The Council of Europe and democratic security: Reconciling the irreconcilable? Politikon: IAPSS Journal of Political Science, 29, 267–279. https://doi.org/10.22151/politikon.29.16.
Tai-Ting Liu, T., & Chang, K. (2017). In the name of integrity and security: China's counterterrorist policies. In S. N. Romaniuk, F. Grice, D. Irrera, & S. Webb (Eds.), The Palgrave handbook of global counterterrorism policy (pp. 667–689). London: Palgrave Macmillan. https://doi.org/10.1057/978-1-137-55769-8_31.
Tyulkina, S. (2015). Militant democracy: Undemocratic political parties and beyond. London: Routledge.
UN Human Rights Council. (2018). Resolution A/HRC/37/L.36. Retrieved from http://undocs.org/A/HRC/37/L.36. Accessed 1 Oct 2018.
Walker, C. (2011). Militant speech about terrorism in a smart militant democracy. Mississippi Law Journal, 80(4), 1395–1453.
Xiaochun, Z. (2018). In pursuit of a community of shared future: China's global activism in perspective. China Quarterly of International Strategic Studies, 4(1), 23–37. https://doi.org/10.1142/S2377740018500082.
Zongze, R. (2018). Building a community with a shared future: Meliorating the era of strategic opportunity in China. China International Studies, 69, 5–27.
Democratic Transitions

Aries A. Arugay
University of the Philippines-Diliman, Quezon City, Philippines

Keywords
Democratization · Elite pacts · Political revolution · Regime change · Third Wave
Definition

A democratic transition is a process of democratization whereby a state undergoes regime change away from a particular type of authoritarianism to a more liberal and/or democratic one.
Introduction

A democratic transition describes either a specific phase in a country's democratization or a particular political regime installed after authoritarian rule. Democratization is a process through which authoritarian rulers are replaced by leaders selected in a free, open, and fair election. Its specific phases comprise bringing about the end of the authoritarian regime, the inauguration of the democratic regime, and then the consolidation of the democratic system (Huntington 1991). Scholars have conventionally broken down the process of democratization into several stages, of which democratic transition and democratic consolidation are the basic ones. The democratic transition is the period from the overthrow of the authoritarian regime until the holding of elections and the adoption of a new democratic constitution. "Democratic consolidation is the process of achieving broad and deep legitimation, such that all significant political actors, at both the elite and mass levels, believe that the democratic regime is the most right and appropriate for their society, better than any other realistic alternative they can imagine" (Diamond 1999, p. 65). This entry discusses the nature of a democratic transition as part of the democratization process, including its emphasis on elite pacts or negotiations. It then examines the linkages between a country's democratic transition, its security environment, and how it pursues its own security goals and objectives. This entry argues that a democratic transition is a particularly volatile phase in which the use of force abroad can be deployed to generate political legitimacy by unifying elites, mobilizing mass support, and consolidating the power of contingent democrats. This argument has far-reaching implications for the established democratic peace thesis in the field of security studies in particular and international relations (IR) in general.
Democracies might not actually exhibit peaceful behavior toward allies and adversaries alike, especially if they are still in the democratic transition phase.
Democratic Transition as a Negotiated Process of Democratization

The study of democratic transitions emphasizes the role of political processes, elite initiatives, and deliberate or strategic choices that account for the shift from authoritarian rule to liberal democracy. This agent-oriented approach stands in sharp contrast to the structurally biased theories of modernization and political development (Lipset 1959). Studying democratization as a process called for a historical approach marked by holistic consideration of different countries as case studies, which provided a better basis for analysis than the search for functional requisites. Rustow's (1970, p. 340) seminal study sketched a general route that all countries travel during democratization, including a transition phase as a "historical moment when the parties to the inconclusive political struggle decide to compromise and adopt democracy that gives each some share in the polity." The fourth phase in Rustow's route, the second transition or habituation phase, involves the conscious adoption of the democratic rules agreed upon during that historical moment. The dynamism of a democratic transition transcended previous thinking about democratization as simply the identification of "prerequisites," often in the form of structural conditions such as economic development, industrialization, and modernization. In order for a democratic transition to occur, political leaders or strategic elites must arrive at a consensus to shift to a more democratic form of government. This defies the previous argument that democracy requires favorable structural and cultural qualities. Instead, the critical factor is democratic craftsmanship, as democratization is now determined by elite dispositions, calculations, and pacts (O'Donnell and Schmitter 1986). Democratic transitions have been the focal phase in the democratization process since the late 1970s.
Democratic Transitions, Conflict, and Security

Whether or not democracies can foster peace remains one of the most important topics of debate
within IR. There is a vibrant exchange of ideas on how democracy as a form of governance can engender peace among nations (Thompson and Tucker 1997). Indeed, the democratic peace literature has come a long way since the articulation of its seminal ideas by philosophers such as Adam Smith, Baron de Montesquieu, and, most importantly, Immanuel Kant. The democratic peace thesis has guided the foreign and security policies of major powers as well as international institutions. As more and more nations have joined the "community of democracies," there is recognition that democratic peace is relevant for international politics. However, the introduction of democratic rule to some countries might pose a challenge to some of the claims of democratic peace. If the institutions and norms of democratic states are indeed mainly responsible for their pacific behavior, then it might not be appropriate to include countries that are undertaking democratic transitions. Thus, the main puzzle seems to be whether nations on the road to democracy are pacific or antagonistic in their relations with other members of the international system and how this antagonism can be minimized.

Opening the Floodgates: Theorizing on Democratic Transition and War

Rather than pacifying, democratization as a process induces governments to exhibit aggressive behavior against other countries. The turbulent nature of democratic transitions tends to release a set of dynamics that increases the likelihood of war. Borrowing insights from comparative politics, Mansfield and Snyder (1995) argued that the combination of weak institutions, disgruntled elites, and the susceptibility of masses to nationalist appeals pushes fledgling democratic regimes to be war-prone. Mansfield and Snyder's (1995) aim is not only to qualify the established relationship between democracy and war.
The more controversial claim of the authors concerns the spillover of the uncertain and turbulent nature of domestic politics into the foreign policies of states. Because of weak and contested institutions, this "rocky" transitional period influences states to adopt belligerent positions that ultimately push them to fight wars. To a
certain extent, Mansfield and Snyder have turned the argument of democratic peace on its head, since the critical factor is less a state's regime type and more the stability of its institutions. By focusing on democracy as a process rather than a finished product, both authors have posed a significant challenge to the democratic peace thesis. The main hypothesis is that "states that have recently undergone regime change in a democratic direction are much more war-prone than states that have undergone no regime change, and are somewhat more-prone than those that have undergone a change in an autocratic direction" (Mansfield and Snyder 1995, p. 8). The "adolescent" nature of the regime motivates elites to mobilize the masses in their favor in their struggle for power. As the new political dispensation is still being shaped, they resort to nationalist appeals to further their parochial interests. The resort to nationalism automatically pits the democratizing regime against another country. And because of weak institutions and decentralized authority, the accountability mechanisms or "whips" in democracies are weak, if not nonexistent, in this period. Democratic transitions generate a "political impasse" that prevents the formation of stable political coalitions with predictable policy programs and a sustainable source of support. First, the transition widens the political spectrum, providing previously marginalized and disgruntled groups with opportunities for political participation. Echoing Huntington's (1968) powerful argument made almost four decades earlier, the authors argued that the absence of strong institutions to mitigate differences of interest will likely result in chaos and violence. Second, the unpredictable nature of politics reduces the time horizons of political actors and compels them to refuse compromises.
This produces political gridlocks that might jeopardize regime survival unless elites in power can create an effective diversion that mobilizes the society against a common external enemy. Finally, competitive mass mobilization arises because no elite group is powerful enough to control the masses. Often, elites will try to use coercive institutions on a quid pro quo basis to gain the upper hand against other elites, with the military often the ally of choice. In
exchange, political leaders often have to cater to military interests, which can end up engendering aggressive foreign policies. Using cases of democratic transitions of major powers, Mansfield and Snyder revealed that democratizing states have political leaders who employ several strategies that tend to engender belligerent behavior in the international system. First, elites tend to perform "logrolling," or giving each group what it wants. One major interest group that elites often logroll with is the military. Second, ruling coalitions are often formed by "squaring the circle." By trying to please all the interests of the coalition members, leaders often resort to aggressive foreign policies as a unifying tool. Finally, as also espoused by diversionary theory, elites attempt to shore up public approval through policies that improve national prestige, which can include fighting wars.

Conceptual, Theoretical, and Methodological Critiques

Mansfield and Snyder's study generated renewed interest in the connections between democracy and security. IR scholars focused on several aspects of their theory and launched credible critiques. The theory claimed that one of the major reasons why democratization would lead to war-proneness is the "policy incoherence" that characterizes regimes in transition. Weede (1996) argued that the authors were not able to sufficiently show that such incoherence is lower, if not absent, among mature democracies. This means that the theory was silent on decision-making processes, specifically the formulation of foreign policy, in mature as compared to adolescent democracies. Weede's critique asks for an elaboration of Mansfield and Snyder's claim that the pattern of unstable coalitions generates logrolling and nationalist prestige strategies. What are the main differences in policymaking between old and young democracies that predispose the latter to fight more wars than the former? This question was left unanswered by the progenitors of the theory.
On another matter, he was also not convinced that democratization should be privileged over other types of regime change since the statistical test
also revealed that autocratization is related to war. Weede thus represents the position that all types of regime transition should be treated alike, given that political instability is the defining condition in these types of political change. As another critique, Walt (1996) argued that revolutions tend to increase the intensity of security competition by altering perceptions of intent and beliefs about the offense-defense balance of the revolutionary state as well as of other major powers in the international system. Elite conflict after a revolution can lead the government to adopt aggressive foreign policies. His study utilized a qualitative approach by looking at seven cases: the French, Russian, Iranian, American, Mexican, Turkish, and Chinese Revolutions. These cases were chosen on several grounds that included revolution type (from above or below), geographical location, ideology, and whether these revolutions led to interstate war. Walt did not pass judgment on whether these revolutions were also processes of democratization, but one can observe that some of his cases are radical changes in the democratic direction. Walt's study shows that regime change of the drastic kind can release the same set of mechanisms that would lead a particular state to be war-prone. Wolfe's (1996) critique qualifies the nature of democratic transitions and how some types, more than others, increase the probability of war. He argued that Mansfield and Snyder's theory is appropriate to more turbulent transitions and may not be applicable to postcommunist countries. Wolfe is of the opinion that democratic transitions in Central and Eastern Europe have generally contributed to peace. Moreover, he believed that the level of social development is critical in evaluating the instability of regime change towards democracy. This is an important point, since not all democratic transitions are the same in terms of the level of uncertainty.
Theories of democratization, for example, have made strides in differentiating between transformation and replacement modes of transition. The former is described as a managed, calibrated, and negotiated transition, while the latter is a rupture of the political system, which can be more chaotic. By not recognizing the differences between
modalities of democratic transitions, Mansfield and Snyder may have overestimated the impact of their unstable and uncertain nature. Thompson and Tucker (1997) went to greater lengths in arriving at a more comprehensive critique. They concluded that there are loopholes in the soundness of Mansfield and Snyder's theory. First, while the authors consider the root cause to be immature institutions, they also discussed "other processes" that are present in transitioning regimes. As such, could institutional weakness be an intermediate cause, with other factors related to domestic politics (societal cleavages, social development, or political culture) actually responsible for the causation? Second, Thompson and Tucker concur with Weede that regime change or regime instability should be the explanatory variable. Finally, they questioned the study's causal arrow: could it be the case that democratization has external sources as well? Thompson and Tucker challenged the study's distinction between domestic and foreign frontiers as regards processes of democratization. They argued that domestic actors do not simply "project their preferred strategies and policies" without being influenced by the outside environment. The study by Ward and Gleditsch (1998) generally agreed with the logic behind the theory that immature democracies undergoing rapid changes in institutions may be inclined to fight wars. They focused on the neglected aspects of the domestic political structure as well as the process of democratization. This study tested whether the direction, intensity, and nature of a democratic regime have an impact on the likelihood of war. They did not define democratization as merely the presence or absence of change. Sensitive to theories found in comparative politics, they posit that democratic transitions move along a continuum with varying magnitude, direction, and duration.
Their statistical results, however, revealed that faster and more intense transitions toward democracy decrease the probability of war, while more protracted, contested, and slower transitions tend to confirm the hypothesis of Mansfield and Snyder. Thus, democratization per se does not cause war; it is how the transition proceeds, and how fast, that affects the likelihood of war.
The conclusions of Mansfield and Snyder have found more agreement among scholars who study civil wars. Hegre et al. (2001) tested the democratization-and-war hypothesis and found that hybrid regimes tend to be involved in civil wars. They agree with the logic of the theory that regimes in political transition are more susceptible to experiencing civil wars, and they concluded that the longer the duration of a transition, the slower the net decrease of violence within the country. In an attempt to integrate theories on enduring rivalries with those on democratization and war, Hensel et al. (2000) were able to show that rivalries in which one member is experiencing regime change are more conflict-prone than rivalries containing nondemocratic states. In his study of ethnic wars, Mousseau (2001) confirmed the basic theoretical claim of Mansfield and Snyder: democratization has a destabilizing effect on multiethnic nations because the regime is still incapable of accommodating rising ethnic demands. Adamson’s (2001) study of Turkey’s interventionist policy on Cyprus validated the hypothesis that democratizing states are more war-prone. There is substantial evidence that the fledgling democratic government in Turkey was pressured by the press, public opinion, and opposition elites to adopt a more aggressive and radical policy position against Greece on Cyprus. The government formed in 1973 was an unstable coalition, since the political system had relatively high levels of party fragmentation. The prominent role of the military and its strong lobbying for intervention in Cyprus was also a critical factor. Adamson also highlighted that the fact that Greece was likewise undergoing a democratic transition did not lead to a more peaceful resolution of the crisis. The implications of her study resemble those of Mansfield and Snyder: democratic peace theory has to be sensitive to cases where democratization has led to war.
Conclusion The conflict propensity of countries undergoing democratic transition remains relevant in global security
studies. Some regime changes to democratic rule, especially recent ones such as those in Iraq and Afghanistan, were imposed by foreign actors. Whether a transition is endogenous or the result of external intervention has implications for the linkage between democratic transitions and war. More contemporary cases of democratic transitions might not resemble those of the “third wave of democratization.” This group of democratic transitions might reveal contradictory results, given that most of these countries did not engage in interstate war; however, the same causal mechanisms have led to more intrastate conflict and violence. In this sense, recent democratization might also be sensitive to the changing nature of warfare in the international system. These academic debates have far-reaching implications for domestic and international security policy. For example, Mansfield and Snyder (2007, p. 265) believed that their findings could guide the foreign policies of developed states that engage in democracy promotion abroad. Their main recommendation is for democratization to be subjected to “careful timing and sequencing”: in cases where the institutional requisites for successful consolidation are not yet in place, it is best to see that they are developed before encouraging mass political contestation. This admonition for the international community to support a “step-by-step” approach to democratization implies that the priority should be the establishment of institutions that foster effective governance before political participation is extended to the citizenry. Democratizing regimes are also advised to adopt more pragmatic policies, such as amnesty for crimes committed by authoritarian elites and their agents, in order to neutralize those actors under the new democratic regime. 
Finally, it was recommended that the scope of political competition in this transition phase remain limited and that domestic political leaders be accountable to international institutions rather than to democratic actors (Mansfield and Snyder 2007). Cases from the Arab Spring, along with Iraq and Afghanistan, are relevant for further studying the linkages between democratic transitions and security. According to the theorizing, the current
approach of marginalizing authoritarian elites and other nondemocratic actors is unwise and could lead to disastrous outcomes. Allowing mass opinion to find its way through newly established institutional channels could only drive political leaders to international war. Both scholars find their conclusions relevant in the Middle East, where both old and new elites are scrambling for the votes of the masses through a combination of nationalist, ideological, and religious appeals. This will likely lead to international conflict, given that the rule of law is still weak, the bureaucracies remain corrupt, and accountability is absent in these transitioning regimes. The literature on democratic transitions and security is still in its infancy, and there is much space for refinement, modification, and innovation. While democracy as an ideal may have reached the point of being the universally preferred mode of political governance, there remain parts of the world that have not undergone democratization. Rather than undermine the democratic peace thesis, this nuanced theorizing introduced a country’s democratic transition as an influential factor in states’ propensity for conflict and insecurity. By basing its arguments on the significance of democratic institutions, this literature supported the institutional basis for democratic peace: without strong institutions to check democratic governments, the pacific effect of democracy will not be present. This is a relevant and significant contribution to the existing scholarship on democratic peace.
Cross-References ▶ Civil-Military Relations ▶ Democratic Security ▶ Democratization ▶ Legitimacy in Statebuilding ▶ Nondemocratic Systems ▶ Peacebuilding ▶ Post-Cold War Environment ▶ Post-colonialism and Security ▶ State Legitimacy
References Adamson, F. B. (2001). Democratization and the domestic sources of foreign policy: Turkey in the 1974 Cyprus crisis. Political Science Quarterly, 116(2), 277–303. Diamond, L. (1999). Developing democracy: Toward consolidation. Baltimore: Johns Hopkins University Press. Hegre, H., Ellingsen, T., Gates, S., & Gleditsch, N. P. (2001). Toward a democratic civil peace? Democracy, political change, and civil war, 1816–1992. American Political Science Review, 95(1), 33–48. Hensel, P. R., Goertz, G., & Diehl, P. F. (2000). The democratic peace and rivalries. Journal of Politics, 62(4), 1173–1188. Huntington, S. P. (1968). Political order in changing societies. New Haven: Yale University Press. Huntington, S. P. (1991). The third wave: Democratization in the late twentieth century. Norman: University of Oklahoma Press. Lipset, S. M. (1959). Some social requisites of democracy. American Political Science Review, 53(1), 69–105. Mansfield, E. D., & Snyder, J. (1995). Democratization and the danger of war. International Security, 20(1), 5–38. https://doi.org/10.2307/2539213. Mansfield, E. D., & Snyder, J. (2007). Electing to fight: Why emerging democracies go to war. Cambridge: MIT Press. Mousseau, D. Y. (2001). Democratizing with ethnic divisions: A source of conflict? Journal of Peace Research, 38(5), 547–567. O’Donnell, G., & Schmitter, P. (1986). Transitions from authoritarian rule: Tentative conclusions about uncertain democracies. Baltimore: Johns Hopkins University Press. Rustow, D. A. (1970). Transitions to democracy: Toward a dynamic model. Comparative Politics, 2(3), 337–363. Thompson, W. R., & Tucker, R. (1997). A tale of two democratic peace critiques. Journal of Conflict Resolution, 41(3), 428–454. Walt, S. (1996). Revolution and war. Ithaca: Cornell University Press. Ward, M. D., & Gleditsch, K. S. (1998). Democratizing for peace. American Political Science Review, 92(1), 51–61. Weede, E. (1996). The effects of democratization on war. International Security, 20(4), 176–207. Wolf, R. (1996). The effects of democratization on war. International Security, 20(4), 176–207.
Further Reading Dassel, K., & Reinhardt, E. (1998). Domestic strife and the initiation of violence at home and abroad. American Journal of Political Science, 43(1), 56–85. Linz, J. J., & Stepan, A. (1996). Problems of democratic transition and consolidation. Baltimore: Johns Hopkins University Press. Mansfield, E. D., & Snyder, J. (2002). Democratic transitions, institutional strength, and war. International Organization, 56(2), 297–337.
Democratization Kürşad GÜÇ Department of International Relations, Faculty of Political Sciences, Ankara University, Ankara, Turkey Keywords
Democracy promotion · Democratic peace · Democratic reconstruction · Intervention · Peace operations
Introduction Democracy has been one of the most prominent phenomena in both the political and intellectual spheres for nearly two centuries. Since the early nineteenth century, the number of democracies has steadily increased. Samuel Huntington (1991) asserts that the popularity of democracy has been reflected in the regime transitions of states in three waves. The third wave of democratization, from 1974 to 1990, and the end of the Cold War brought about a historical turning point: during this period, the number of democracies, for the first time in history, surpassed the number of autocracies. According to Freedom House (Freedom in the World 2017), as of 2017, 87 of the 195 countries assessed (45%) had democratic regimes, while the numbers of nondemocracies and partial democracies stood at 49 (25%) and 59 (30%), respectively. The democratization processes of countries have taken different shapes and followed different motivations for centuries. Nonetheless, transitions from autocracy to democracy have essentially occurred in two ways. First, autocratic countries have turned into democratic ones through internal processes such as popular uprisings and revolutions. The second way of democratization has come with foreign aid and interventions: external actors, either other countries or international organizations, guide a specific country on the way to becoming a democracy. In this context, foreign aid and/or interventions for democratization have mostly taken the form of an imposition.
Especially after the Cold War, during the period of triumph of liberal democratic values, democratic states or international organizations have imposed democratic regimes on post-conflict and fragile nondemocratic states. The basic motivation behind this tendency was the belief that democracy is the only type of governance conducive to both internal and global peace and stability. Therefore, most democratizations in war-torn societies since the end of the Cold War have been conducted either by powerful democratic states such as the United States or by international organizations like NATO (see chapter ▶ “North Atlantic Treaty Organization (NATO)”) and the United Nations. Because the first type of democratization is the subject of another entry in this project (see chapter ▶ “Democratic Transitions”), this entry will focus only on democratization stemming from external interventions. In this sense, first, the main motivations that underlie interventions for democratization will be discussed. Afterwards, prominent democratizations through interventions will be evaluated, examining their scopes, contexts, examples, successes, and failures, as well as their impacts on regional and global security.
Democracy as a Source of Peace and Stability In the literature, a number of scholars have examined the impact of democracy on peace, and many hold that democratic states are less prone to violence. It is claimed that the experience of human beings over the last two centuries has proved the arguments of liberals: democratic states are indeed less inclined to use violence as a method of conducting policy, not only in domestic life but also in the international arena. The increasing number of democratic states since the end of the Cold War is indeed seen as one of the most important components of contemporary peace and of the low likelihood of major war today. Gleditsch and Hegre (in Gates et al. 1996, pp. 1–2) provide three levels of analysis. First is the national level: democratic states live in peace
and are peaceful – at least more peaceful than nondemocratic states. Second is the dyadic level: democratic states maintain peace among themselves and do not wage war against each other. Finally, at the systemic level, an international system composed mainly of democratic states is a more peaceful system. At the national level, liberal democratic states are founded on individual rights such as equality under the law, free speech, private property, and elected representation. Because governments are elected by the citizens who endure the burdens of war, these factors become the main barriers to a state’s waging war (Doyle 1986, p. 1151). For liberal theorists, because wars bring about a state of siege, austerity measures, and civilian casualties, citizens in democratic states are likely to exercise greater control over governments so as not to allow them to go to war. The ambitions of ruling elites for violence are restrained by democratic processes and institutions (Burchill 2009, p. 61). Besides, one of the principal features of democracies is dispersed power, meaning multiple veto players that also prevent war (Jervis 2001, p. 4). The only beneficiaries of wars are military aristocrats and arms companies, whereas the masses and the citizens who fight on the battlegrounds are exposed to the catastrophic effects of wars. As Schumpeter indicates, democracies do not pursue minority interests and do not tolerate the costs of military expansion (Doyle 1986, p. 1153). For this reason, it is difficult to motivate large segments of democratic societies to support their governments in going to war. In contrast to democratic regimes, autocratic ones are regarded as riskier in terms of resorting to violence. Doyle (1986, p. 1157) claims “Authoritarian rulers both stimulate and respond to an international political environment in which conflicts of prestige, interest, and pure fear of what other states might do all lead states toward war. 
War and conquest have thus characterized the careers of many authoritarian rulers and ruling parties...” The legitimacy of autocratic regimes stems from oppression, the patronage of privileged groups, and, commonly, national heroism. Citizens and masses do not have sufficient rights to direct the policies of ruling elites. As domestic violence is a method for autocratic
regimes to stay in power, the international conflicts and wars in which these states are involved are also sources of autocrats’ legitimacy. As instances of this stance, Jervis (2001, p. 5) points out that, although there are some contrary examples, it is difficult to construe the expansionism of Germany under the Nazis and of the USSR without reference to their domestic regimes. Democracy is also considered to have a preventive effect on internal conflicts. In spite of negative examples of civil wars in democratic states, such as the American Civil War (1861–1865) and the Lebanese Civil War (1975–1990), a significant percentage of internal armed conflicts have emerged in nondemocratic states. Furthermore, no civil war has appeared in any mature democratic state since the end of the Cold War. On the other hand, some note that although Iraq has been an electoral democracy since the fall of Saddam and the withdrawal of the United States, it has witnessed one of the bloodiest ongoing civil wars of the first decade of the twenty-first century for nearly 10 years. In response to this kind of critique, Tertrais (2012, p. 12) emphasizes that transition periods from autocracy to democracy may be painful and that transitional states are more inclined to war than autocracies until an entire democratic structure is established. At the dyadic level, democratic states are thought not to fight each other. Democratic states operate through compromise, nonviolence, and respect for law, and when these values affect foreign policies, they enhance peace, in particular between democracies (Jervis 2001, p. 4). Since the end of the eighteenth century, a “liberal democratic society,” which Kant called the “pacific union,” has been taking shape among democratic states (Doyle 1997, p. 260). The members of this society do not use force against each other as an instrument of foreign policy. In this sense, Doyle (1986, p. 
1156) holds that liberal democracies are unique in exercising peaceful restraint, and therefore a separate peace emerges between them. The notable absence of war among liberal democracies for almost 200 years is the most relevant indicator of this separate peace (Doyle 1997, p. 260). Although some realists claim that a balance
of power among leading states, international hegemony by a superpower, or imperialist cooperation can also provide international stability and peace, as they did in certain periods of history, Doyle (2000, p. 35) asserts that none of these logics explains the distinct peace among liberal democratic states for more than 150 years. At the systemic level, it is argued that an international system composed of democratic states is likely to be more peaceful and to restrain any attempt at war. This claim goes one step beyond the dyadic-level arguments. As mentioned above, at the dyadic level there are peaceful relations between at least two liberal democratic states, which establish a “separate liberal democratic society” in the international system. At the systemic level, however, if the international system were made up entirely of liberal democracies instead of containing a separate democratic society, the world could be free from violence. The expansion of liberal democratic peace zones throughout the world is Fukuyama’s main hope for the post-Cold War era; he claims that an international order consisting only of liberal democracies is less likely to undergo wars, “since all nations would reciprocally recognize one another’s legitimacy” (Burchill 2009, pp. 59–61). Nonetheless, the systemic-level arguments are less plausible than both the national-level and the dyadic-level ones. Although the latter have been experienced many times over almost two centuries, the former has not: human beings have never experienced a global international system made up only of democratic states.
Democratization Through Promotion and Intervention Western liberal democratic states have long assisted democracy-seeking societies. The support for democratization in various countries has been conducted by both official structures and NGOs (Hearn 1997). The United States Agency for International Development (USAID) and the World Bank (see chapter ▶ “World Bank”) are the prominent instances of official bodies in democracy assistance. Apart from official donors for democratization, nongovernmental agencies
have also taken part in democracy promotion activities. The Ford Foundation and the National Endowment for Democracy (NED) in the United States, the Westminster Foundation for Democracy in the United Kingdom, and political bodies called “Stiftungen” in Germany are the outstanding nongovernmental foundations aiding democracy in related countries (Hearn 1997). Besides, the European Union (EU) has led democratization in the eastern part of the continent. All of these institutions and foundations intensified their democracy assistance activities after the end of the Cold War. For example, the eastward enlargement of the EU created a democratization wave in former Soviet countries which previously lacked liberal democratic structures. Moreover, bodies like USAID, NED, and the World Bank supported civil societies and governments in the third world in order to create a functioning liberal democratic order based on free markets, human rights, free elections, and multiparty systems. Despite the abovementioned democracy assistance processes, the 1990s witnessed bloody civil wars in nondemocratic countries throughout the world. The international community was thus obliged to find fundamental solutions to end civil wars threatening global security. In this context, the logic of liberal democratic reconstruction in war-torn societies came to the fore, and this tendency became the central strategy of United Nations peace operations during the 1990s. The reason why liberal democratic values became the rising stars of the 1990s was the end of the Cold War and the triumph of the West over the Eastern Bloc. Francis Fukuyama (2002) described this new era as “the end of history” because humanity had reached the highest point in its course; there was no better type of governance to attain. The popularity of liberal democracy spread all over the world at the beginning of the 1990s. 
This time span was a landmark in world history: the number of democracies exceeded the number of autocracies (Hewitt et al. 2012, p. 19). A number of undemocratic states, whether war-torn or stable, converted into liberal democracies. This liberal
transition also shaped the UN’s new perception of peace. An Agenda for Peace was first published by the Secretary-General of the UN, Boutros Boutros-Ghali, in 1992 to reflect the organization’s new policy statement on peace operations. As to what methods were to be implemented to establish peace, An Agenda for Peace partly rested on an optimistic view “that global trends favour liberal ideas” (Peou 2002, pp. 53–54). An Agenda for Peace therefore called for democracy to provide security for war-torn societies: “Democracy at all levels is essential to attain peace for a new era of prosperity and justice” (An Agenda for Peace 1992, para. 82). The link between peacebuilding (see chapter ▶ “Peacebuilding”) and liberal ideas was explicit in all of the UN’s policy statements during this period. As Secretary-General Boutros-Ghali stated in 1993: Without peace there can be no development and there can be no democracy. Without development, the basis for democracy will be lacking and societies will tend to fall into conflict. And without democracy, no sustainable development will occur; without such development, peace cannot long be maintained. (in Heathershaw 2008: 600)
As a result of this logic in favor of democracy, the UN conducted quite a number of peace operations throughout the world in order to rebuild post-conflict societies in line with liberal democratic values. With reference to the accepted relation between democracy and peace, UN peace operations aimed not only to end armed conflicts but also to instill democratic cultures in war-torn countries. In this regard, UN peace operations during the 1990s aimed to sow the seeds of democracy in various fragile and collapsed states sinking into civil wars, such as Angola, Nicaragua, Cambodia, Somalia, Mozambique, Rwanda, Liberia, and Haiti. Nevertheless, the UN’s strategy to mobilize democracy in those countries mostly remained limited to democratic procedures rather than founding democratic principles. In other words, the UN expected to complete democratization in deeply divided war-torn societies merely by conducting and monitoring multiparty elections. The lack of democratic principles and structures such as civil societies, a culture of
consensus, free markets, and functioning governmental institutions rendered elections meaningless in those countries. The Bosnian civil war (1992–1995) was one of the turning points in the UN’s democratization strategy. It came to be understood that, instead of simple procedures, success in democratization would only be reached by building a post-conflict state and society from scratch in line with democratic principles. This strategy was called “statebuilding” (see chapter ▶ “Legitimacy in Statebuilding”). According to this strategy, lasting peace and stability in war-torn societies would only be gained through full democratization: all aspects of democracy, such as a comprehensive constitution, a liberal economy, guaranteed human rights, free and fair elections, multiparty systems, and the rule of law (see chapter ▶ “Rule of Law”), should be reconstructed by the international community on behalf of host societies. As Gromes (2009, p. 109) points out, statebuilding efforts for democratization in Bosnia have been successful in various respects. The illegal and undemocratic structures of the political parties which escalated ethnic hatred during the civil war have been dramatically weakened. Although the police still do not properly match the criteria of the rule of law, they are far from conducting violence against minority or opposition groups as they did during the civil war. Moreover, politically motivated violence has also been reduced. In addition, despite the existence of political interference, the judicial structure of the state has mostly become independent. Along with these achievements, much progress has also occurred in the media, the payment system, the armed forces, and the civil services. However, the power-sharing governmental structure of the state, which reinforces ethnic affiliations, and the ongoing presence of multinational peacekeeping forces and international assistance in governing raise doubts about the self-sustainability of peace in Bosnia. 
In addition to Bosnia, the democratic reconstruction model was used in East Timor, Sierra Leone, and Kosovo in 1999. In Kosovo, after the NATO-led intervention, the US Under-Secretary of State, Marc Grossman, explicitly stated the new
standards being imposed on Kosovo: functioning democratic institutions, the rule of law, a market economy, and property rights (Bendana 2005, p. 9). The objective has been to create democratic stability in host states so as to secure not only the war-torn countries themselves but also their regions and the world. However, the extent to which these stated objectives were attained remains controversial. For example, in East Timor, when the international mission ended in 2002, the achievements in the country were considered a great success. But in 2006 a new round of violence broke out among the security forces created during the previous peace missions. The then UN Secretary-General Kofi Annan therefore acknowledged that the peace mission had been withdrawn too early to establish functioning institutions in East Timor, and he had to recommend the deployment of a new peace mission to the country (Paris 2010, p. 342). Despite the weaknesses of the democratization initiatives above, none of them undermined the logic of democratization through foreign interventions as the Afghanistan and Iraq cases have done. Afghanistan and Iraq were invaded by US-led coalitions to fight global terrorism and aggressive autocrats in 2001 and 2003, respectively. Both operations reportedly aimed to overthrow totalitarian regimes and construct democratic ones in their place. Thus, it was assumed that the Afghan and Iraqi people, rid of oppressive regimes, would easily embrace democratic values. However, in the process that followed the occupations, both countries fell into chaos, hindering democratization efforts. Iraq and Afghanistan, contrary to expectations, have been fertile grounds for elements threatening not only regional but also global security, such as Al-Qaeda and ISIL. Therefore, as Laurence Whitehead (2009, p. 
215) clearly expresses, the Iraqi case has shown “the Dark Side” of the logic and practice of Western democratization through intervention. The dramatic failures of democratization attempts through military interventions in Afghanistan and Iraq have had a chilling effect on the subject, both intellectually and politically, in the twenty-first century. “Bringing democracy” to a
so-called nondemocratic society is today approached with great suspicion by the international community. Instead of military intervention, other kinds of democracy promotion strategies have come to the fore. An international consensus has emerged in favor of indirect democracy assistance that lets nondemocratic societies become democratic on their own, through measures such as the creation and empowerment of civil society, the integration of the local economy into the global marketplace, and the support of political parties for multiparty political life.
Conclusion Democracy has indeed been an indisputable political reality of modern times. Despite some disputes, the general tendency in favor of democracy throughout the world reinforces this phenomenon. Positive effects of democracy such as prosperity, development, security, accountability, predictability, and stability reinforce democracy as a center of attraction. Although a large number of scholars and theorists criticize the logic of democratic peace, it is empirically evident that democracies are less prone to war than nondemocracies. Besides, the most prosperous countries are mostly governed by democratic regimes. These factors have gradually popularized democracy over nearly two centuries, and eventually, with the end of the Cold War, the number of democracies surpassed the number of autocratic regimes. The triumph of liberal democratic values during the 1990s brought about the logic that democracy should be spread throughout the globe, even forcefully. In this sense, liberal agencies have conducted democracy promotion activities, including military interventions, in countries destroyed by civil wars. Despite some successes in various countries, the logic of democratization through international military intervention has ultimately failed to satisfy expectations. The Iraq and Afghanistan cases have been the breaking points of democratization, since they have become centers of structures that pose threats to global security rather than contributing to global peace. To conclude, democratization through foreign intervention has fallen into disrepute on the
grounds that it is likely to do more harm than good for global security.
Cross-References ▶ Democratic Transitions ▶ Legitimacy in Statebuilding ▶ North Atlantic Treaty Organization (NATO) ▶ Peacebuilding ▶ Rule of Law ▶ World Bank
References An Agenda for Peace: Preventive diplomacy, peacemaking and peace-keeping, Report of the Secretary-General, UN document A/47/277-S/24111. (1992). Retrieved April 15, 2018, from http://www.un-documents.net/a47-277.htm Bendana, A. (2005). From peacebuilding to state building: One step forward and two steps back? Development, 48(3), 5–15. Burchill, S. (2009). Liberalism. In S. Burchill, A. Linklater, R. Devetak, J. Donnelly, T. Nardin, M. Paterson, C. Reus-Smit, & J. True (Eds.), Theories of international relations. New York: Palgrave Macmillan. Doyle, M. W. (1986). Liberalism and world politics. American Political Science Review, 80(4), 1151–1169. Doyle, M. W. (1997). Ways of war and peace. New York: W.W. Norton. Doyle, M. W. (2000). Peace, liberty, and democracy: Realists and liberals contest a legacy. In M. Cox, G. J. Ikenberry, & T. Inoguchi (Eds.), American democracy promotion: Impulses, strategies, and impacts. New York: Oxford University Press. Freedom in the world. (2017). Freedom House. Retrieved March 06, 2018, from https://freedomhouse.org/report/freedom-world/freedom-world-2017 Fukuyama, F. (2002). The end of history and the last man. New York: Perennial. Gates, S., Knutsen, T. L., & Moses, J. W. (1996). Democracy and peace: A skeptical view. Journal of Peace Research, 33(1), 1–10. Gromes, T. (2009). A case study in “institutionalisation before liberalisation”: Lessons from Bosnia and Herzegovina. Journal of Intervention and Statebuilding, 3(1), 93–114. Hearn, J. (1997). Foreign aid, democratization and civil society in Africa: A study of South Africa, Ghana and Uganda. Brighton: Institute of Development Studies. Discussion paper 368. Heathershaw, J. (2008). Unpacking the liberal peace: The dividing and merging of peacebuilding discourses. Millennium: Journal of International Studies, 36(3), 597–621.
287 Hewitt, J. J., Wilkenfeld, J., Gurr, T. R., & Heldt, B. (2012). Peace and conflict 2012 executive summary. Retrieved November 11, 2017, from https://cidcm.umd.edu/ publications/peace-and-conflict-2012 Huntington, S. P. (1991). The third wave: Democratization in the late twentieth century. Norman: University of Oklahoma Press. Jervis, R. (2001). Theories of war in an era of leading power peace “Presidential Address, American Political Science Association, 2001”. The American Political Science Review, 96(1), 1–14. Paris, R. (2010). Saving liberal peacebuilding. Review of International Studies, 36(2), 337–365. Peou, S. (2002). The UN, peacekeeping, and collective human security: From an agenda for peace to the Brahimi report. International Peacekeeping, 9(2), 51–68. Tertrais, B. (2012). The demise of Ares: The end of war as we know it? The Washington Quarterly, 35(3), 7–22. Whitehead, L. (2009). Losing the ‘force’? The ‘dark side’ of democratization after Iraq. Democratization, 16(2), 215–242.
Further Reading Carothers, T. (2009). Democracy assistance: Political vs. developmental? Journal of Democracy, 20(1), 5–19. Diamond, L. (2008). The spirit of democracy: The struggle to build free societies throughout the world. New York: Times Books. Gates, S., Knutsen, T. L., & Moses, J. W. (1996). Democracy and peace: A skeptical view. Journal of Peace Research, 33(1), 1–10. Gelpi, C. F., & Grieco, J. M. (2008). Democracy, interdependence, and the sources of the liberal peace. Journal of Peace Research, 45(1), 17–36. Kurki, M. (2011). Governmentality and EU democracy promotion: The European instrument for democracy and human rights and the construction of democratic civil societies. International Political Sociology, 5(4), 349–366. Owen, J. M. (1994). How liberalism produces democratic peace. International Security, 19(2), 87–125. Reinsberg, B. (2015). Foreign aid responses to political liberalization. World Development, 75, 46–61. Sørensen, G. (1992). Kant and the democratization. Journal of Peace Research, 29(4), 397–415.
Desalination

Bilge Bas
Istanbul Bilgi University, Istanbul, Turkey

Keywords
Desalination · Water scarcity · Reverse osmosis · Brine
Introduction
Although nearly 70% of the Earth's surface is covered with water, the amount of freshwater that people can access is limited. Freshwater scarcity is a very important and growing problem, driven by increasing population and water demand. In the World Economic Forum's 2018 Global Risks Report, water crises were listed among the ten largest global risks in terms of impact for the next decade (WEF 2018). Low water quality due to inadequate sanitation also contributes to the freshwater scarcity faced by 3.6 billion people (nearly half of the world's population). Population and water demand projections show that water conditions will become more severe in the future, with 4.8–5.7 billion people expected to be affected by 2050 (WWAP 2018). Desalination is a method of producing freshwater that was developed as a solution to water scarcity problems. In the following, the main definitions and features related to desalination are presented; desalination is then analyzed and discussed in terms of its environmental impacts and sustainability, its cost, and its role in water security.
Definition
Desalination is the separation of the salt content from saline groundwater and surface waters, using energy, with the aim of producing water for domestic and industrial use. The desalination process yields produced water as the product and concentrated water as wastewater. This concentrated water, with its high salt content, is called brine.
Desalination Technologies
Desalination technologies can be classified into two main categories: thermal and membrane systems. In thermal desalination systems, the salt content of the water is separated through evaporation and distillation processes using thermal energy. Multistage flash (MSF), multi-effect distillation (MED), and vapor compression (VC) systems are widely used thermal desalination systems. The evolution of desalination technology has continued with the increasing use of membrane systems, because these systems consume less energy/fuel for brackish water and seawater desalination than thermal systems. Membrane systems use a semipermeable membrane and applied pressure, which acts against the natural osmotic pressure created by the dissolved solids in saline water. These systems can be classified according to their membrane pore size as reverse osmosis (RO) (~0.0001 µm), nanofiltration (NF) (~0.001 µm), ultrafiltration (UF) (~0.01 µm), and microfiltration (MF) (~0.1 µm). Decreasing pore size allows the removal of smaller dissolved solids (Frenkel 2010).

World Usage
The worldwide cumulative capacity of desalination facilities, that is, the sum of the capacities of installed and contracted projects, reached 114.9 million m3/d by mid-February 2020, of which 97.2 million m3/d corresponds to installed capacity. This capacity is drawn from different water resources: seawater (57%), brackish water (20%), and other sources (23%). Reverse osmosis accounts for 69% of the total installed capacity, followed by MSF (17%) and MED (7%). Of the water produced, 59% is used for drinking water and 36% by industrial facilities; the remaining consumers of desalinated water are tourist facilities (2%), irrigation (2%), and military, demonstration, and other purposes (1%) (Eke et al. 2020).
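The pore-size classification quoted above (Frenkel 2010) can be expressed as a small lookup. The sketch below is illustrative only; the function name and the retention rule (a membrane retains particles larger than its nominal pore size) are simplifying assumptions, since real membrane selection also depends on charge, pressure, and fouling behavior.

```python
# Nominal membrane pore sizes in micrometers, as quoted in the text (Frenkel 2010).
MEMBRANE_PORE_SIZES_UM = {
    "reverse osmosis (RO)": 0.0001,
    "nanofiltration (NF)": 0.001,
    "ultrafiltration (UF)": 0.01,
    "microfiltration (MF)": 0.1,
}

def coarsest_membrane_for(particle_size_um: float) -> str:
    """Pick the coarsest membrane class whose nominal pore size is smaller
    than the particle, i.e., the cheapest-to-run class that would retain it.
    (Simplified rule for illustration only.)"""
    candidates = [(pore, name) for name, pore in MEMBRANE_PORE_SIZES_UM.items()
                  if pore < particle_size_um]
    if not candidates:
        raise ValueError("particle is smaller than the finest (RO) pore size")
    return max(candidates)[1]  # largest qualifying pore size wins

# A ~0.05 um colloid passes MF (0.1 um pores) but is retained by UF.
print(coarsest_membrane_for(0.05))
```

Under this simplified rule, dissolved salts (on the order of 0.001 µm and smaller) are retained only by reverse osmosis, which is consistent with RO's dominant role in seawater desalination described above.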
Environmental Impact and Sustainability
Desalination plants can be regarded as heavy industrial plants that cause a series of environmental impacts during both their construction and operation phases. Energy/fuel consumption; the process chemicals used; the air, water, and noise emissions generated; coastal area usage; and visual pollution
are among the pollutant sources, pollutants, and impacts related to desalination systems (Höpner and Windelberg 1997; Bas et al. 2011).

Water Quality
The most important effect of desalination plants on water quality arises from the discharge of brine effluents. The most widespread method of brine disposal is discharge through marine outfalls, as this is generally the most feasible disposal alternative. Discharged brine changes the existing water quality of the marine environment to a degree that is directly related to the chemical content of the brine, the outfall system, and the hydrodynamic conditions of the region (Bas et al. 2011). The change in water quality takes the form of increased water temperature (where thermal technologies are used), increased salinity and chemical concentrations, decreased dissolved oxygen (DO), and increased turbidity (Falzon and Gingell 1990; Ahmed et al. 2001; Gleick et al. 2006; Raventos et al. 2006; Dupavillon and Gillanders 2009).

Marine Sediments
RO processes require pretreatment of the feed water, and these treatment processes involve the use of various chemicals (Darwish et al. 2013). As a result, the brine produced contains heavy metals such as mercury, barium, cadmium, cobalt, chromium, copper, iron, manganese, nickel, phosphorus, lead, titanium, vanadium, and zinc. Heavy metals discharged to the marine environment accumulate in marine sediments because of their high specific weight compared to seawater. Field studies conducted in different regions of the world demonstrate the contribution of desalination plant outfalls to heavy metal concentrations in marine sediments (Alshahri 2017; Sadiq 2002; Lin et al. 2013). Heavy metals accumulated in marine sediments negatively affect the benthic organisms that use these sediments as their habitat (Lattemann and Höpner 2008).

Air Pollution
The high electricity consumption during the operation phase of RO plants is the most important source of air pollution associated with
desalination plants. The average electricity consumption of a mid-sized RO plant is 5 kWh/m3 of treated water (Lattemann and Höpner 2008). Depending on the capacity and energy use of the plant, the amount of CO2 and other greenhouse gases produced (the carbon footprint, expressed as CO2-e) is in the range of 0.73–7.80 kg CO2-e/m3, or 0.98–1.31 kg CO2-e/kWh, for various RO plants with capacities of 125,000–2,700,000 m3/d and electricity consumption rates of 2.3–4.7 kWh/m3 (Lattemann et al. 2013).

Noise Pollution
Both the construction and operation phases of an RO plant are sources of noise pollution. For this reason, such plants need to be constructed far from residential areas, and noise mitigation measures should be applied. Chavand and Evenden (2010) measured internal noise levels of a large SWRO plant in Australia at 76–101 dB(A) and outside noise levels at 37 dB(A).

Coastal Area Usage and Visual Pollution
Large SWRO plants require extensive sites, and their construction takes place in coastal areas. This necessitates the use of large coastal areas, changes the type of coastal land use, and converts the coastal zone into an industrial zone (Einav et al. 2002).

Marine Organisms
Marine organisms are affected by the presence of desalination systems in coastal areas during both the construction and operation phases. The construction of seawater intake structures and brine outfall structures degrades the seafloor and harms benthic organisms (Gordon et al. 2012). Because of its high density, discharged brine forms a dense layer at the sea bottom. This dense layer harms benthic organisms through exposure to the high salt content and chemicals of the discharged brine. This has been demonstrated in various field and laboratory studies focusing on the impact of brine discharges on marine organisms, covering benthic organisms, fish, and seagrasses. The negative impact of brine discharge is observed in the form of decreased abundance, richness, and diversity of communities (Del-Pilar-Ruso et al. 2007; Riera et al. 2011); decreased survival rates of organisms (Yoon and Park 2011); a decreased number of embryos (Dupavillon and Gillanders 2009); and the disappearance of communities, especially close to discharge points (Fernández-Torquemada et al. 2005). However, where marine species are resistant to increased salinity, or are acclimated to naturally high ambient salinity as in the Red Sea, it is possible that they are not affected negatively by brine discharges (Tomasko et al. 1999; Talavera and Quesada-Ruiz 2001; Van der Merwe et al. 2014).
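As a rough cross-check of the energy and emission figures quoted above, the following back-of-the-envelope sketch estimates the annual CO2-e emissions of a hypothetical RO plant. The plant capacity is an assumption chosen for illustration; the per-m3 energy use and the per-kWh emission factor are taken from the ranges cited above (Lattemann and Höpner 2008; Lattemann et al. 2013) and depend heavily on the local electricity mix.

```python
# Illustrative estimate only; the plant size is hypothetical.
capacity_m3_per_day = 100_000        # assumed mid-sized SWRO plant
energy_use_kwh_per_m3 = 5.0          # average RO figure quoted in the text
emission_factor_kg_per_kwh = 1.31    # upper end of the cited CO2-e range

daily_energy_kwh = capacity_m3_per_day * energy_use_kwh_per_m3
annual_co2e_tonnes = daily_energy_kwh * 365 * emission_factor_kg_per_kwh / 1000

print(f"Daily electricity demand: {daily_energy_kwh:,.0f} kWh")
print(f"Annual emissions: about {annual_co2e_tonnes:,.0f} t CO2-e")
```

At roughly a quarter of a million tonnes of CO2-e per year for a single assumed plant under these assumptions, the estimate illustrates why coupling desalination with renewable energy sources has attracted so much attention.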
Brine Management
Brine is the waste produced by desalination processes and should be handled using the most suitable method. The most common method of brine disposal is discharge to surface waters (freshwater bodies, rivers, and coastal waters) using outfall systems. Deep well injection, land application, evaporation ponds, and zero liquid discharge (ZLD) techniques are the other brine disposal methods (Panagopoulos et al. 2019).
Desalination Costs
Desalination is a developing technology, and technological development brings financial improvements: the unit cost of desalination has followed a decreasing trend over the last three decades. The typical cost of desalinating 1000 gallons of seawater varies between US$2.00 and US$12.00, depending on a large number of factors such as the desalination technology, the feed and product water quality, the chosen brine management method, the energy source, and the location of the plant (WRE 2012). A typical breakdown of seawater desalination costs consists of 30–40% direct capital costs, 20–35% energy, 15–30% other operation and maintenance (O&M) costs, and 10–20% indirect capital costs (http://www.iwa-network.org/desalination-past-present-future/).
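Because capacities elsewhere in this entry are given in cubic meters, the quoted cost range is easier to interpret after a unit conversion. The sketch below converts US$ per 1000 US gallons into US$ per m3; the gallon-to-liter factor is the standard definition, and the resulting range inherits the roughness of the quoted figures.

```python
US_GALLON_LITERS = 3.785411784   # definition of the US gallon in liters

def per_m3(cost_per_1000_gallons: float) -> float:
    """Convert a cost in US$ per 1000 US gallons to US$ per cubic meter.
    1000 US gallons = 3.785411784 m3."""
    m3_per_1000_gallons = 1000 * US_GALLON_LITERS / 1000  # liters -> m3
    return cost_per_1000_gallons / m3_per_1000_gallons

low, high = per_m3(2.00), per_m3(12.00)
print(f"US${low:.2f} to US${high:.2f} per m3")  # roughly US$0.53 to US$3.17 per m3
```

In other words, the cited US$2.00–12.00 per 1000 gallons corresponds to roughly US$0.5–3.2 per m3 of product water.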
Role in Water Security
Desalination is an important source of water, especially for arid countries, giving them the opportunity to sustain their water security. Provided that such arid countries have a coastline, desalination offers a vast, practically unlimited, and steady source of water. As a precipitation-independent water supply method, it contributes to users' drought resistance capacity. However, the role of desalination in water security is controversial because of the paradigm shift in water governance in recent years. Supply diversification and the loading order are two paradigm shifts considered important for the sustainability of water security. Desalination has the advantage of providing a distinct water resource that contributes to supply diversification. However, it is an energy-dependent technology, and its high water-energy trade-offs are inconsistent with the loading-order and water-energy-nexus points of view (Williams 2018). Increasing awareness of climate change has raised questions about the environmental impacts of water production and supply techniques. The loading-order concept ranks water resource choices according to their sustainability in terms of environmental and energy footprints. This has led researchers to develop desalination technologies coupled with renewable energy technologies such as solar power (Ghermandi and Messalem 2009). In addition, measures to reduce these impacts are addressed in administrative programs aimed at increasing the energy efficiency of desalination plants, increasing renewable energy usage, and putting demand-response programs into action (Bender et al. 2005). Another concern is that desalination provides only a technological fix for local water supply problems: it can mask systemic problems in water security that stem from socioeconomic and institutional factors, preventing their solution and limiting water security (McEvoy 2014).
Public perception of desalinated water use is a further factor affecting the level of water security that can be reached. Owing to water quality, energy consumption, cost, and environmental issues, willingness to use desalinated water
in household and industrial activities varies across different regions of the world. Public perception is therefore considerably important for reaching water security goals (McEvoy 2014; Dolnicar and Schäfer 2009).
Conclusion
Desalination is a promising technology for solving water scarcity problems, especially in arid regions where precipitation-independent water supply methods are needed for water security. However, desalination must be evaluated in full detail as a water supply method because of its environmental impacts, high energy requirements, and costs. It is important to be aware that desalination is not a panacea for the global water scarcity problem; it can, however, be used as a solution to local water scarcity problems where other water resource options are not viable. Research and development are also crucial for taking the available technology to the next level with innovative methods that are promising in terms of environmental and financial performance.
Cross-References ▶ Air Pollution ▶ Drinking Water ▶ Ecosystems ▶ Greenhouse Gas Emissions ▶ Solar Energy
References
Ahmed, M., Shayya, W. H., Hoey, D., & Al-Handaly, J. (2001). Brine disposal from reverse osmosis desalination plants in Oman and the United Arab Emirates. Desalination, 133(2), 135–147.
Alshahri, F. (2017). Heavy metal contamination in sand and sediments near to disposal site of reject brine from desalination plant, Arabian Gulf: Assessment of environmental pollution. Environmental Science and Pollution Research, 24(2), 1821–1831.
Bas, B., Erturk Bozkurtoglu, S. N., & Kabdasli, S. (2011). An experimental study on brine disposal under wave conditions. 14th International Congress of the International Maritime Association of the Mediterranean (IMAM), Genoa, Italy.
Bender, S., Doughman, P., Hungerford, D., Korosec, S., Lieberg, T., Merritt, M., Rawson, M., Raitt, H., Sugar, J., Fromm, S., & Kennedy, K. (2005). Implementing California's loading order for electricity resources. California Energy Commission, Staff Report, USA.
Chavand, V., & Evenden, C. (2010). Noise assessment of a desalination plant. 20th International Congress on Acoustics 2010, Sydney, Australia, 23–27 August.
Darwish, M., Hassabou, A. H., & Shomar, B. (2013). Using Seawater Reverse Osmosis (SWRO) desalting system for less environmental impacts in Qatar. Desalination, 309, 113–124.
Del-Pilar-Ruso, Y., De la Ossa Carretero, J. A., Giménez Casalduero, F., & Sánchez Lizaso, J. L. (2007). Spatial and temporal changes in infaunal communities inhabiting soft-bottoms affected by brine discharge. Marine Environmental Research, 64(4), 492–503. https://doi.org/10.1016/j.marenvres.2007.04.003.
Dolnicar, S., & Schäfer, A. I. (2009). Desalinated versus recycled water: Public perceptions and profiles of the accepters. Journal of Environmental Management, 90(2), 888–900.
Dupavillon, J. L., & Gillanders, B. M. (2009). Impacts of seawater desalination on the giant Australian cuttlefish Sepia apama in the upper Spencer Gulf, South Australia. Marine Environmental Research, 67(4–5), 207–218.
Einav, R., Hamssib, K., & Periyb, D. (2002). The footprint of the desalination processes on the environment. Desalination, 152(1–3), 141–154.
Eke, J., Yusuf, A., Giwa, A., & Sodiq, A. (2020). The global status of desalination: An assessment of current desalination technologies, plants and capacity. Desalination, 495, 114633. https://doi.org/10.1016/j.desal.2020.114633.
Falzon, L., & Gingell, B. (1990). A study of the influence of the effluent from the Tigne RO plant on algae growth (B.Sc. dissertation). Malta University, Malta.
Fernández-Torquemada, Y., Sánchez-Lizaso, J. L., & González-Correa, J. M. (2005). Preliminary results of the monitoring of the brine discharge produced by the SWRO desalination plant of Alicante (SE Spain). Desalination, 182, 395–402.
Frenkel, V. S. (2010). Seawater desalination: Trends and technologies. In M. Schorr (Ed.), Desalination, trends and technologies. InTech. https://doi.org/10.5772/13889. Retrieved from http://cdn.intechopen.com/pdfs-wm/13756.pdf. ISBN 978-953-307-311-8.
Ghermandi, A., & Messalem, R. (2009). Solar-driven desalination with reverse osmosis: The state of the art. Desalination and Water Treatment, 7, 285–296.
Gleick, P. H., Cooley, H., & Wolff, G. (2006). With a grain of salt: An update on saltwater desalination. In H. Cooley, P. H. Gleick, D. Katz, E. Lee, J. Morrison, M. Palaniappan, A. Samulon, & G. Wolff (Eds.), The world's water 2006–2007: The biennial report on freshwater resources (pp. 51–89). Washington, DC: Island Press.
Gordon, H. F., Viscovich, P. G., Thompson, A. L., Costanzo, S. D., West, E. J., & Boerlage, S. F. E. (2012). The effects of Gold Coast desalination plant operations on the marine environment. IDA Journal of Desalination and Water Reuse, 4(2), 12–21.
Höpner, T., & Windelberg, J. (1997). Elements of environmental impact studies on coastal desalination plants. Desalination, 108(1–3), 11–18.
Lattemann, S., & Höpner, T. (2008). Environmental impact and impact assessment of seawater desalination. Desalination, 220(1–3), 1–15. https://doi.org/10.1016/j.desal.2007.03.009.
Lattemann, S., Rodriguez, S. G. S., Kennedy, M. D., Schippers, J. C., & Amy, G. L. (2013). Environmental and performance aspects of pretreatment and desalination technologies. In N. Lior (Ed.), Advances in water desalination (pp. 79–195). https://doi.org/10.1002/9781118347737.ch2.
Lin, Y.-C., Chang-Chien, G.-P., Chiang, P.-C., Chen, W.-H., & Lin, Y.-C. (2013). Potential impacts of discharges from seawater reverse osmosis on Taiwan marine environment. Desalination, 322, 84–93.
McEvoy, J. (2014). Desalination and water security: The promise and perils of a technological fix to the water crisis in Baja California Sur, Mexico. Water Alternatives, 7(3), 518–541.
Panagopoulos, A., Haralambous, K.-J., & Loizidou, M. (2019). Desalination brine disposal methods and treatment technologies – A review. Science of the Total Environment, 693, 133545. https://doi.org/10.1016/j.scitotenv.2019.07.351.
Raventos, N., Macpherson, E., & Garcia-Rubies, A. (2006). Effect of brine discharge from a desalination plant on macrobenthic communities in the NW Mediterranean. Marine Environmental Research, 62(1), 1–14.
Riera, R., Tuya, F., Sacramento, A., Ramos, E., Rodríguez, M., & Monterroso, Ó. (2011). The effects of brine disposal on a subtidal meiofauna community. Estuarine, Coastal and Shelf Science, 93(4), 359–365. https://doi.org/10.1016/j.ecss.2011.05.001.
Sadiq, M. (2002). Metal contamination in sediments from a desalination plant effluent outfall area. Science of the Total Environment, 287(1–2), 37–44.
Talavera, J. L. P., & Quesada-Ruiz, J. J. (2001). Identification of the mixing processes in brine discharges carried out in Barranco del Toro Beach, south of Gran Canaria (Canary Islands). Desalination, 139(1–3), 277–286.
Tomasko, D. A., Blake, N. J., Dye, C. W., & Hammond, M. A. (1999). Effects of the disposal of reverse osmosis seawater desalination discharges on a seagrass meadow (Thalassia testudinum) offshore of Antigua, West Indies. In S. A. Bortone (Ed.), Seagrasses: Monitoring, ecology, physiology and management (pp. 99–112). Boca Raton: CRC Press.
Van der Merwe, R., Röthig, T., Voolstra, C., Ochsenkühn, M., Lattemann, S., & Amy, G. L. (2014). High salinity tolerance of the Red Sea coral Fungia granulosa under desalination concentrate discharge conditions: An in situ photophysiology experiment. Frontiers in Marine Science, 1, 1–8.
WEF (World Economic Forum). (2018). The global risks report 2018 (13th ed.). Geneva. http://www3.weforum.org/docs/WEF_GRR18_Report.pdf
WRE (Water Reuse Association). (2012). Seawater desalination costs. White Paper. https://watereuse.org/wp-content/uploads/2015/10/WateReuse_Desal_Cost_White_Paper.pdf
WWAP (United Nations World Water Assessment Programme)/UN-Water. (2018). The United Nations World Water Development Report 2018: Nature-based solutions for water. Paris: UNESCO.
Yoon, S., & Park, G. (2011). Ecotoxicological effects of brine discharge on marine community by seawater desalination. Desalination and Water Treatment, 33, 240–247. https://doi.org/10.5004/dwt.2011.2644.
Further Reading
Books
Latteman, S. (2017). Development of an environmental impact assessment and decision support system for seawater desalination plants. London: CRC Press. ISBN 9781138474635.
Voutchkov, N., Fawell, J., Payment, P., Cunliffe, D., Lattemann, S., & Cotruvo, J. (2010). Desalination technology. Boca Raton, FL: CRC Press/IWA Publishing. ISBN 1843393476.
Williams, J. (2018). Diversification or loading order? Divergent water-energy politics and the contradictions of desalination in southern California. Water Alternatives, 11(3), 847–865.
Web Sites International Desalination Association (IDA). (2018). website: http://idadesal.org/
Documentaries Build It Bigger: Drought-Proofing Australia, Season 4, Episode 2. https://www.imdb.com/title/tt1876674/ How It’s Made: Train Rails/Desalinated Water/Racing Wheelchairs/Parquetry, Season 15, Episode 3. (https:// www.imdb.com/title/tt1713205/?ref_=ttep_ep3)
Desertification

K. B. Usha
Jawaharlal Nehru University, New Delhi, India

Keywords
Anthropogenic disaster · Desertification · Dry lands · Human well-being · Land degradation neutrality · Sustainable development goals
Introduction
Desertification has become one of the most pressing global environmental and socioeconomic concerns of the twenty-first century. It is commonly understood as land degradation caused by anthropogenic changes in the soil and environment of arid and semiarid regions. Today, the damaging effect of desertification is recognized worldwide as equal to that of global warming and climate change. Desertification is even treated by the scientific community as a security issue, considering the multifaceted threats and risks it generates (Brauch 2003) for the lives of humans and nonhumans. The latest studies show nearly two-fifths of humanity affected by land degradation, which severely threatens human survival and well-being (Stam 2018). This chapter outlines the concepts and definitions of desertification, its causes and consequences, mitigation efforts, and policy challenges. It concludes by examining recent debates on combating desertification and the highly politicized nature of the issue.
Context
The term desertification was first coined by Louis Lavauden, a French scientist and explorer, in 1927 (Darkoh 2003). Andre Aubreville, a botanist and ecologist, popularized the concept in his book Climate, Forests, and Desertification, published in 1949 (cited in Kannan 2012). However, it was only in the 1970s, in the context of the drought in the Sahel, the semiarid savannah zone of Africa, that a debate on the issue was unleashed. During 1972–1974, desertification came to be recognized as an issue of global scale. The Sahel experienced the longest drought recorded in modern history. This tragedy, with its multiple impacts on human lives and biodiversity, has been interpreted as the result of unwise, irrational, and unsustainable land use practices. Urged by the Sahel tragedy, the United Nations passed a resolution in 1974 and called for an international conference on desertification, held in Nairobi, Kenya, in 1977. Ninety-four countries
participated in this UN Conference on Desertification (UNCOD) and adopted a Plan of Action to Combat Desertification (PACD). In 1991, the United Nations Environment Programme (UNEP) found that, despite the modest success of local efforts to alleviate land degradation, the problem of desertification had intensified in arid, semiarid, and subhumid areas. Following UNEP's findings, the question of addressing desertification effectively became a major concern for the United Nations Convention to Combat Desertification (UNCCD), initiated on the eve of the Rio Earth Summit in 1992. With the UNCCD concluded in 1994, the UN General Assembly declared 17 June to be the “World Day to Combat Desertification and Drought” on the basis of a resolution adopted by 194 participant countries. Since then, despite criticism from the scientific community over the logical and empirical shortcomings of the concept of desertification (Mortimore 1989; Swift 1996), it has been institutionalized at the global level, especially on the UN platform, with the aim of searching for remedies and solutions to alleviate desertification and land degradation. Currently, the UNCCD is the legally binding international instrument for addressing land-related issues. Given the current complexity of unsustainable land use, climatic conditions, and environmental change, scientists have predicted further aggravation of the situation, threatening the subsistence and future of human life. The international community has recognized the need for firm commitment and continuous efforts to address desertification, land degradation, and the multifarious disastrous effects of these phenomena on the environment, biodiversity, and human beings at the local, regional, and global levels.
Concept and Definitions
After the Sahel tragedy, desertification became a concept widely used for comprehending the natural and human-induced changes in land-based ecosystems that lead to multidimensional catastrophic consequences. It has generally been recognized as a process of land degradation with alarming consequences. It is not about the
expansion of already-existing desert areas. A universally agreed definition of the evolving concept of desertification is not available, and the concept has been defined in many different ways by the scientific and policy communities. Since humans are both triggers and victims of desertification, Fouad Ibrahim suggests a more human-oriented definition: “Desertification is the degradation of the dry lands production systems which have developed as a result of centuries-long interactions between the human communities and their environments” (Ibrahim 1993, p. 5). A widely accepted definition of desertification was conceptualized at the 1994 UNCCD meeting, which defined desertification as “the degradation of land and vegetation, soil erosion and the loss of top soil and fertile land in arid, semi-arid and dry subhumid areas, caused primarily by human activities and climate variations” (UNCCD 1994, p. 4). The UNCCD elaborates land degradation as:

The reduction or loss, in arid, semi-arid and dry subhumid areas, of the biological or economic productivity and complexity of rainfed cropland, irrigated cropland, or range, pasture, forest and woodlands resulting from land uses or from a process or a combination of processes, including processes arising from human activities and habitation patterns, such as: (i) soil erosion caused by wind and/or water; (ii) deterioration of the physical, chemical and biological or economic properties of soil; and (iii) long-term loss of natural vegetation. (ibid., p. 5)
These definitions help in understanding the multiple causes and consequences of desertification and its ecological and human impact, and they may help in finding ways of alleviating the problem. The UN-supported Millennium Ecosystem Assessment (MEA) by the World Resources Institute considers desertification a process of land degradation, namely “the reduction in the capacity of the land to perform ecosystem goods, functions and services that support society and development,” occurring across the dry lands collectively (Millennium Ecosystem Assessment 2005). The Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services (IPBES) defines desertification “as land degradation in arid, semi-arid and dry sub-humid areas (collectively called dry lands) because of human activities and climatic
variations” (Scholes 2018, p. 17). As these definitions show, a wide range of issues, such as human intervention in nature, land use, agricultural practices, topography, climate change, soil erosion, deforestation, and loss of biodiversity, are involved in the processes of desertification and land degradation. The intensity of the problem is felt most in areas near deserts, but other areas are also affected. In spite of several attempts to prevent and combat desertification through measures such as the restoration of vegetation, land degradation remains a vital issue in many regions of the globe, including China, Russia, India, America, Africa, West Asia, and Central Asia. Thus, desertification has emerged as an international political issue of great significance in many human development initiatives, especially under the UN platform. As assessments of land-related issues in different regions have been made, critics have argued that the concept of desertification is no longer analytically useful for policy purposes. They argue that the concept is vaguely defined and that the concept of “land degradation” is therefore preferred, unless desertification actually creates desert-like conditions in the affected areas (Lijuan Miao 2015). Given the complexity of the problem, the UNCCD has recognized land degradation as the better concept, one that captures the various aspects of the problem, and has placed it among the important priorities for achieving sustainable development. However, some scientists see the UN definition as too broad and suggest that desertification be viewed in terms of an irreversible loss of land productivity. According to them, whether the change is permanent on a human time scale is important in defining desertification. Thus, disagreement between the policy and scientific research communities over the concept of desertification is visible.
The scientific community alleges a political agenda behind the institutionalization of post-Sahel desertification in the UN system (Cortner 1989). Some scholars claim that new scientific knowledge from climatology made the concept irrelevant in the current context. Roy H. Behnke and Michael Mortimore observe that desertification research has been “targeted
and deeply involved in formulation of public policy.” They further state: If scientists require clarity in the concepts they employ, the politicians and administrators who create and manage large institutions have other, very pragmatic requirements. In the search for money and support, they need a problem that is dramatic enough to command immediate attention, simple enough to be quickly grasped, and general enough to satisfy diverse interest groups; they need. . . a development narrative—a powerful story line with clear, broadly applicable policy implications and urgent funding needs. (Behnke and Mortimore 2016, pp. 5–6)
These scholars indicate that a bias in favor of the policy community, ignoring scientifically informed knowledge, was reflected in the institutionalization of the concept of desertification on a global scale. However, whatever the tension between the policy and scientific communities, the problem of desertification poses a major challenge to both groups in contemporary times.
Causes and Consequences of Desertification Desertification is not a new issue. As the outcome of natural processes and of developments related to human intervention in nature, desertification has existed throughout recorded history. Researchers have identified several causes of this phenomenon. Desertification is driven mainly by economic activities, such as property development, industry, and agriculture. A range of processes such as soil erosion, wind erosion, salinization, droughts, and wildfires cause desertification. Overgrazing of livestock creates conditions of land degradation. Injudicious farming practices, poor environmental awareness, mismanagement of water resources, oil exploration, and mineral mining, among others, are underlying human factors causing desertification and land degradation (Squires and Heshmati 2013). Human mismanagement of land may decrease rainfall and moisture and transform land into desert-like conditions. Soil contamination due to the overuse of pesticides and chemicals in agricultural lands is
another factor that causes desertification. The overuse of chemicals leads to land salinity, with high concentrations of salt leading to degradation (Kumar 2015). Climate change and desertification are interlinked. Desertification can be exacerbated by climate through changes in the spatial and temporal patterns of temperature, precipitation, solar insolation, and winds. Climatic variations can influence drought patterns. Greenhouse gas emissions (CO2) from fossil fuels, the main driver of climate change, can worsen the effects of desertification. Scientists have theorized that increased atmospheric dust produced by overgrazing, rangeland burning, and overcropping can reduce local rainfall or may cause global climatic shifts. A dense pall of dust can reduce precipitation, as is happening in northern India and Pakistan (Kannan 2012). China is also one of the countries severely affected by dust and sand storms and desertification (Lu Qi 2005). The Kalmykia republic in the Russian Federation is an example of anthropogenic intervention: 70 years of the Soviet socialist development model, disregarding ecological consequences, contributed to the overexploitation of natural resources and to land degradation, desertification, soil erosion, and related issues. By 1990, almost the whole of Kalmykia had undergone desertification, and 13% of its territory had been transformed into true desert (Zonn 1995, p. 347), with lasting human and ecological consequences. Desertification is recognized as a global problem that generates far-reaching consequences: social, cultural, economic, and political. It affects all continents and a great number of countries, including China, India, Russia, Australia, the USA, and European countries. The negative impacts of desertification on both the environment and humans matter the most.
Desertification causes a decline in agricultural production, degradation of land and ecosystems, water crises, environmental problems, and loss of well-being for people, i.e., human security issues. It generates public health crises, loss of livelihood, and other socioeconomic impacts. Vulnerability depends on related factors such as the age, gender, disability, immune status, and access to healthcare services of the individuals
affected (World Information Transfer 2009). Migration to other areas in search of better livelihood opportunities may generate conflicts. The worst-affected people are found in Africa and Asia. The reciprocal influence between development and environmental problems, each aggravating the other, worsens desertification. The report of the World Commission on Environment and Development, Our Common Future (April 1987), identified desertification, along with population growth, deforestation, and water pollution, as one of the “four most urgent global environmental requirements.” The Commission linked desertification to problems of food security, social welfare, political stability, and mankind’s ability to achieve the Commission’s goal of “sustainable development” (WCED 1987). Scholarly research points to the possibility of food insecurity in the future, given the need to feed a world population of 9 billion by 2050 on available and decreasing land resources (Juntti 2014). According to the Global Environmental Facility (GEF), more than 2.6 billion people in 100 countries are vulnerable to the process of desertification, and more than 33% of Earth’s surface is affected by it. The estimate presented at the 2018 Plenary of the IPBES indicates, “Currently, degradation of the Earth’s land surface through human activities is negatively impacting the well-being of at least 3.2 billion people, pushing the planet towards a sixth mass species extinction, and costing more than 10 per cent of the annual global gross product in loss of biodiversity and ecosystem services” (IPBES, 2018 cited in Scholes 2018). Thus, desertification forms an issue with wider implications for biodiversity, eco-safety, poverty eradication, socio-economic stability, and sustainable development across the globe.
Mitigation Strategies: Institutions, Strategies, and Programs The UNCCD, adopted in Paris on 17 June 1994 and in force since 26 December 1996, is the globally recognized institution and platform that has the legal authority for addressing desertification issues. Ratified by 196 countries, the UNCCD identifies land degradation and desertification as among the most pressing environmental concerns of the contemporary world. The UNCCD’s decision-making structure consists of several institutions. The Conference of the Parties (CoP), established in 1997, is the highest decision-making body. The CoP includes 196 countries and the European Union as its committed members. It has met biennially since 2001 and, as of 2017, had held 13 sessions. This is the body that evaluates country reports and makes the necessary suggestions, recommendations, and amendments to facilitate implementation. Besides the CoP, the other institutions in the UNCCD structure are the Secretariat, the Committee on Science and Technology (CST), the Committee for the Review of the Implementation of the Convention (CRIC), the Global Mechanism (GM), and National Action Programmes. In its 10-year strategy (2007–2018), adopted in 2007, the UNCCD formulated a global goal of “zero net degradation” to be achieved through global partnership and shared responsibility. The Global Mechanism (GM) was established in 1998 for funding sustainable land management practices in member countries. The GEF was adopted in 2010, taking into account the scientific evidence linking desertification to climate change and other related issues such as carbon emissions. The UN Conference on Sustainable Development (“Rio + 20”), held in June 2012, also called for a target of “zero net land degradation” (Juntti 2014). The 2030 Agenda for Sustainable Development, adopted by the 193 Member States of the United Nations General Assembly at the Sustainable Development Summit on 25 September 2015, gives priority to desertification and land degradation.
In the Sustainable Development Goals (SDGs), which build on the Millennium Development Goals targeted for achievement by 2015, Goal 15, “Life on Land,” aims to “protect, restore and promote sustainable use of terrestrial ecosystems, sustainably manage forests, combat desertification, and halt and reverse land degradation and halt biodiversity loss” (United Nations 2017). This Goal’s target 15.3 specifically refers to the need to “combat desertification, restore degraded land and soil, including land affected by desertification, drought and floods, and strive to achieve a land degradation-neutral world” by 2030 (United Nations 2017). The goal of land degradation neutrality (LDN) has been acknowledged as a new paradigm for managing land degradation and thereby achieving the SDGs by 2030. The UNCCD conceptualizes LDN as “a state whereby the amount and quality of land resources necessary to support ecosystem functions and services and enhance food security remain stable or increase within specified temporal and spatial scales and ecosystems” (UNCCD 2017). National governments, many international NGOs, and other agencies supported by the UN predominantly use the UNCCD’s conceptualization in their commitment to comprehend the issue and implement the goals of Agenda 2030. Under this goal, the rate of land deterioration would be counterbalanced by the rate of land improvement. The UNCCD signed a memorandum of understanding with the World Future Council to jointly combat desertification issues. As Monique Barbut, Executive Secretary of the UNCCD, explains, “Desertification is a silent, invisible crisis that is destabilizing communities on a global scale. It is important to identify and promote laws and policies that successfully protect, monitor and regulate combating desertification. We look forward to working closely together with the World Future Council” (Petersen 2017). The signatory nations have formulated national action programs to mitigate the various effects of drought and desertification. For instance, the Government of India is committed to achieving land degradation neutrality by 2030.
According to India’s Environment Minister, Harsh Vardhan, the country’s new National Action Programme (NAP) for combating drought and desertification, framed around national circumstances and development priorities, focuses on sustainable land and resource management for livelihood generation at the community level. It aims to make local lands healthier and more productive, providing a better homeland and a better future for their inhabitants. The 2017 World Desertification Day slogan,
“Our Land, Our Home, Our Future,” underlines the central role that productive land can play in turning the growing tide of migrants abandoning their unproductive land into communities and nations that are stable, secure, and sustainable in the future. The Indian government has also launched initiatives such as the Soil Health Card Scheme to help farmers improve productivity through the judicious use of resources, and earmarked a fund of ₹840.52 crore over the last 3 years (Press Information Bureau 2017). Similar national action programs have been implemented by all the member countries and regions in Asia, Africa, North America, South America, and Europe. The theme of the 2018 World Desertification Day is “Land has true value – invest in it.” National governments are in the process of introducing national action plans according to this theme. Thus, by now, sources of funding, scientific studies, research publications, and evidence are available to combat desertification. However, several complex and contested issues centered on combating desertification still prevail as challenges to policy making.
Policy Challenges It is generally considered that desertification can be prevented by restoring land and soil functions through conservation, protection, and restoration of vegetation cover and water availability. Educating people about engaging with nature and about their responsibility for land and nature, through training and awareness-raising activities, is also required. Considering the long-lasting impact of desertification, its prevention is an important social challenge. One possible way of reducing land degradation is planting trees to increase moisture levels and slow down wind erosion. Local farmers are the main players in the fight against desertification. Experience in many parts of the world shows that local people’s involvement and indigenous knowledge can make a difference in mitigating desertification more effectively. It is actually the local communities that are leading innovations to address the
problems of desertification. Desertification is a global problem, but the better solutions are predominantly local. The complexity and interdisciplinary nature of desertification, and the ambiguous and uncertain linkages among its causes and consequences, pose multiple challenges to environmental managers and policy makers. The issue therefore calls for a comprehensive policy framework that can address the various issues at local, national, regional, and global levels. This requires inter-sectorial collaboration, improvement of the knowledge base, and innovative assessment models. Structural inequalities, such as the gendered division of labor and discriminatory approaches to women and nature, have to be addressed, and, accordingly, policy should also take into account the gendered consequences of desertification.
Conclusion Desertification attracted attention as a global issue in the context of the Sahel drought tragedy of the 1970s. Desertification is a contested issue on which hundreds of definitions, including contradictory ones, are available. The conceptual and definitional issues of desertification generate challenges for policy making and implementation. Since the end of the 1970s, the UNCCD has become the legal program for initiating strategies to combat desertification according to local, national, and regional circumstances and development priorities. The multiple causes and consequences of desertification processes are widely discussed and identified in the scientific and policy communities, with practical suggestions to mitigate the effects of desertification in various parts of the world. Besides the UNCCD, mitigating desertification is among the important priorities in the programs of the UNEP, the Millennium Development Goals, and the SDGs. However, desertification is a complex issue; it has become a contested concept on which the divergent opinions of the policy and scientific communities can be seen. Since 2006, the GEF has invested a huge amount of money in several projects toward mitigating desertification. Several competing factors, such as scientific knowledge, political will, corporate interests, practical experience, traditional knowledge, and know-how, may have wider implications for effective mitigation efforts by various stakeholders and institutions. Given today’s neoliberal development model, corporate interests, and the accumulation of profit disregarding the depletion of natural resources, it remains an open question how far the nature-based solutions suggested to mitigate desertification might succeed.
Cross-References ▶ Anthropocene ▶ Ecosystems ▶ Environmental Security ▶ Food Insecurity
References Behnke, R. H., & Mortimore, M. (2016). Introduction: The end of desertification? In R. H. Behnke & M. Mortimore (Eds.), The end of desertification? Disputing environmental change in the drylands (pp. 1–36). Berlin: Springer. Brauch, H. G. (2003). Desertification: A new security challenge for the Mediterranean? In J. L. William & G. Kepner (Eds.), Desertification in the Mediterranean. A security issue (pp. 11–86). Dordrecht: Springer. Cortner, H. J. (1989). Desertification and the political agenda. Population and Environment, 11(1), 31–41. Darkoh, M. B. K. (2003). Desertification in the drylands: A review of the African situation. Annals of Arid Zone, 42(3&4), 289–307. Ibrahim, F. (1993). A reassessment of the human dimension of desertification, desertification after the UNCED, Rio 1992 (September 1993). GeoJournal, 31(1), 5–10. Juntti, M. (2014). Desertification. In P. G. Harris (Ed.), Routledge handbook of global environmental politics (pp. 506–519). London: Routledge. Kannan, A. (2012). Global environmental governance and desertification: A study of Gulf Cooperation Council countries. New Delhi: Concept Publishing Company. Kumar, P. S. (2015). Soil salinity: A serious environmental issue and plant growth promoting bacteria as one of the tools for its alleviation. Saudi Journal of Biological Sciences, 22(2), 123–131. Retrieved from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4336437/ Lijuan Miao, P. Y. (2015). Future climate impact on the desertification in the dry land Asia using AVHRR
GIMMS NDVI3g data. Remote Sensing, 7, 3863–3877. Retrieved from https://pdfs.semanticscholar.org/9a18/6b6995e8065b21dd2e8f97d96af6e323d7f4.pdf Lu Qi, W. S. (2005). Desertification and dust storms in China: Impacts, root causes and mitigation strategies. Chinese Forestry Science and Technology, 5(3), 22–35. Retrieved from http://www.academia.edu/19901796/Desertification_and_dust_storms_in_China_impacts_root_causes_and_mitigation_strategies Millennium Ecosystem Assessment (MEA). (2005). Ecosystems and human well-being: Desertification synthesis. Washington, DC: World Resources Institute. Retrieved from https://www.millenniumassessment.org/documents/document.355.aspx.pdf Mortimore, M. (1989). Adapting to drought: Farmers, famines and desertification in West Africa. Cambridge: Cambridge University Press. Petersen, M. (2017). UN desertification chief signs partnership agreement with World Future Council. Retrieved from https://www.worldfuturecouncil.org/un-desertification-chief-signs-partnership-agreement-world-future-council/ Press Information Bureau, Government of India, Ministry of Environment, Forest and Climate Change. (2017, June 16). ‘Nation committed to achieve land degradation neutrality by 2030’: Dr Harsh Vardhan Environment Minister’s Statement on Eve of World Day to Combat Desertification. Retrieved from http://pib.nic.in/newsite/PrintRelease.aspx?relid=165692 Scholes, R. E. (2018). Summary for policymakers of the thematic assessment report on land degradation and restoration of the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services. Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services (IPBES). Retrieved from https://www.ipbes.net/sites/default/files/downloads/spm_ldr_unedited_advance_28march2018.pdf Squires, V. R., & Heshmati, G. A. (2013). Introduction to deserts and desertified regions in China. In G. A.
Squires (Ed.), Combating desertification in Asia, Africa and the Middle East: Proven practices (p. 476). Dordrecht: Springer. Stam, C. (2018, March 27). Land degradation threatens wellbeing of two-fifths of humanity, major report warns. Euractiv. Retrieved from https://www.euractiv.com/section/agriculture-food/news/land-degradation-threatens-wellbeing-of-two-fifths-of-humanity-major-report-warns/ Swift, J. (1996). Desertification: Narratives, winners and losers. In M. Leach (Ed.), The lie of the land: Challenging received wisdom on the African environment (pp. 73–90). Martlesham: James Currey. UNCCD. (1994). United Nations convention to combat desertification in those countries experiencing serious drought and/or desertification, particularly in Africa. New York: United Nations. Retrieved from https://www2.unccd.int/sites/default/files/relevant-links/2017-01/UNCCD_Convention_ENG_0.pdf
UNCCD. (2017). Global land outlook. Bonn: Secretariat of the United Nations Convention to Combat Desertification. Retrieved from https://static1.squarespace.com/static/5694c48bd82d5e9597570999/t/59e9f992a9db090e9f51bdaa/1508506042149/GLO_Full_Report_low_res_English.pdf United Nations. (2017). Transforming our world: The 2030 agenda for sustainable development. Retrieved from https://sustainabledevelopment.un.org/content/documents/21252030%20Agenda%20for%20Sustainable%20Development%20web.pdf WCED. (1987). Our common future. Oxford: Oxford University Press. World Information Transfer. (2009). Special focus: Desertification: Its effects on people and land. World Ecology Report: Critical Issues in Health and Environment, 21(1), 1–5. Retrieved from https://worldinfo.org/wp-content/uploads/library/wer/english/2009_Spring_Vol_XXI_no_1.pdf Zonn, I. S. (1995). Desertification in Russia: Problems and solutions (an example in the Republic of Kalmykia-Khalmg Tangch). Environmental Monitoring and Assessment, 37, 347–363.
Disarmament Tamer Kasikci and Mustafa Yetim Department of International Relations, Eskisehir Osmangazi University, Eskisehir, Turkey Keywords
Arms Control · Non-proliferation of Nuclear Weapons (NPT) · Strategic Arms Limitation Talks (SALT) · Strategic Arms Reduction Talks (START) · Conference on Disarmament
Introduction Since one of the main objectives of International Relations is to prevent future wars by understanding and explaining their causes, disarmament has been regarded as a fundamental issue in the discipline. Disarmament has been defined as “the reduction or withdrawal of military forces and weapons” (The Concise Oxford Dictionary 1997). Disarmament attempts can be classified according to the logic behind them. There is the pure disarmament call, an idealistic goal for all international actors, based on
abolishing all kinds of military tools forever. There are also general and complete disarmament (GCD) demands, which call for the complete abolition of nuclear, chemical, and biological weapons and only the reduction of conventional weapons. Another type, limited negotiated disarmament (LND), rests upon multilateral agreements prohibiting a certain kind of weapon (such as biological weapons in the Biological Weapons Convention/1972) or covering a certain region (such as the Outer Space Treaty/1966). There are also disarmament attempts resulting from compulsory circumstances. One well-known example is forcible disarmament, the reduction of the military power of a country defeated in a war, like the disarmament of Germany after World War I. In addition, states occasionally have to reduce their arms due to structural factors in the international system. For example, increasing defense and military costs have brought about arms reductions, especially in conventional weapons. Lastly, as seen in many cases especially after the Cold War, in order to end civil wars, the parties go through a post-conflict disarmament process, the abolition of all military tools under the supervision of the international community (Cooper 2006). Although the concepts of arms control and disarmament are used interchangeably in international relations, their contents are somewhat different. Arms control is related to reducing arms to decrease the possibility of war; in the arms control process, the main objective of states is to create a secure environment and stability in an anarchic world. The main goal of disarmament, on the other hand, is to establish a world in which there are still conflicts between states but using force is not one of the ways to solve these crises (Pilisuk 2007). Even though disarmament remains an unrealistic ideal today, it has a long history.
In the twelfth century, the Church attempted to ban Christians from using crossbows in warfare (Croft 1996). In an example of self-imposed restriction, the Japanese rejected the use of all kinds of firearms for almost two centuries, from the sixteenth to the eighteenth century (Pilisuk 2007). In the West, the idea of disarmament gained momentum with World War I. Since the arms race was widely accepted as one of the major causes of the war, disarmament became a popular notion after the war; more than fifty states even came together in the first World Disarmament Conference in 1932 to discuss possible reductions and limitations on armaments (Webster 2006). Demands for disarmament waned with the rise of revisionist military powers such as Germany and Italy during the interwar period. All approaches to preventing a new major war, including the disarmament attempts, proved unsuccessful with the outbreak of World War II. Once again in the postwar period, disarmament became a popular concept, especially in the newly founded United Nations, which was determined to make disarmament one of the major objectives of the organization. The development of nuclear weapons, initially by the USA during World War II and then by Soviet Russia in 1949, both complicated the disarmament issue and made it more urgent. The destructive capability of nuclear weapons, tragically demonstrated by the atomic bombs used by the USA in Hiroshima and Nagasaki during World War II, showed that these weapons had the potential to destroy all humankind in a possible World War III (Krieger 2007). For that reason, reducing the number of nuclear weapons, or even achieving the “zero point” of destroying all nuclear weapons, has been the major issue in disarmament talks since the beginning of the Cold War. In this period, promising developments on disarmament occurred in the talks between nuclear powers, such as the Strategic Arms Limitation Talks (SALT) and the Strategic Arms Reduction Talks (START). With the end of the Cold War, the process of disarmament entered a new phase.
During the Cold War, the main actors behind disarmament attempts were states and multilateral institutions, especially intergovernmental organizations like the UN; after the Cold War, with the empowerment of the nongovernmental organization (NGO) arena, NGOs have gained the leading role in this process. In addition, disarmament demands during the Cold War were constructed around threats to states and military security. After the Cold War, the issue of human security gained new momentum, and disarmament talks focused on economic security (Cooper 2006). In the contemporary world, the disarmament issue is generally handled by multilateral institutions, especially the UN. Disarmament has been one of the major issue areas of the UN since its foundation. Indeed, the very first General Assembly resolution concerned the establishment of a commission to deal with the “elimination from national armaments of atomic weapons and of all other major weapons adaptable to mass destruction” (Krause 2008). The UN has therefore always been at the center of the global disarmament agenda. Three distinct mechanisms deal with the disarmament issue within the UN System: the First Committee of the General Assembly, the 65-nation Conference on Disarmament, and the Disarmament Commission (Godsberg 2012). The milestone of the disarmament issue in the UN System is the Treaty on the Non-Proliferation of Nuclear Weapons (NPT), which entered into force in 1970 after a 2-year signing process. The NPT is based on three pillars: (1) nonproliferation, meaning that all nuclear weapon holders agree not to transfer nuclear weapon technology to any other state; (2) peaceful uses, meaning that all states have the right to develop nuclear power for peaceful purposes only; and (3) disarmament, meaning that all states agree to negotiate nuclear and general disarmament (Krause 2008). The major problem in disarmament is gathering information about states’ weapon stocks. Governments are generally reluctant to share information about their military stocks, and it is always possible to manipulate the numbers. Thus it is always difficult to be sure whether states keep their promises on disarmament (Thakur 2011).
Even on the nuclear issue, the exact number of nuclear-possessor countries and the size of their arsenals are not clear. For example, even though there is no formal declaration, there is a common suspicion over Israel’s possession of nuclear weapons. There are several
international institutions that keep data sets on the military stocks of all countries. The UN Secretariat, the International Atomic Energy Agency (IAEA), and the Organization for the Prohibition of Chemical Weapons (OPCW) in the arena of nuclear and chemical weapons, and the UN Conventional Arms Register in the arena of conventional weapons, keep records of military stocks (Thakur 2011). Disarmament attempts in the UN have still not been very successful, for several reasons. Firstly, the veto power of the UN Security Council’s permanent members, which are also the major military powers and all hold nuclear weapons, prevents the issuing of binding resolutions on disarmament. Secondly, the Conference on Disarmament, the central mechanism on the disarmament issue, takes decisions by consensus, which gives every member a veto; for that reason it is very difficult to reach decisions in the conference. Lastly, the work of the General Assembly’s First Committee cannot change the stance of the major powers in particular on the issue of disarmament (Godsberg 2012). The effectiveness of the current nuclear arms control regime has also been challenged by several factors. Firstly, the major nuclear powers are unwilling to reduce their nuclear arsenals. Secondly, there are some nonnuclear signatories, like North Korea and Iran, with a strong tendency to build their own nuclear capacity. Thirdly, several important powers, such as India, Israel, and Pakistan, still prefer not to sign the NPT. Fourthly, there are terrorist groups with the interest and ability to acquire nuclear weapons. Lastly, even though there is strong pressure against the military use of nuclear power, nuclear energy is still seen as an inexpensive, safe, and environment-friendly alternative to fossil fuels, which sustains demand for nuclear power (Sauer 2006; Thakur 2011).
Conclusion War is the deadliest and most brutal way of problem-solving in international relations, and weapons of any kind are the main tools for war-making. For that reason, armament and how to limit it have always been fundamental issues in international relations. Despite the fact that pure and general disarmament attempts have not been successful after many decades, it is not reasonable to give up on them, since disarmament is one of the major ways to make the world a more secure place. It should also be kept in mind that there have been some promising developments in disarmament, especially on certain weapons such as biological and chemical weapons and land mines.
References Arms Control Association. (2018). Arms control and proliferation profile: Israel. https://www.armscontrol.org/factsheets/israelprofile. Accessed 05 Aug 2019. Cooper, N. (2006). Putting disarmament back in the frame. Review of International Studies, 32(2), 353–376. Crawford, T. W. (2008). Arms control and arms race. In W. A. Darity Jr. (Ed.), International encyclopedia of the social sciences (pp. 175–180). Macmillan Reference. Croft, S. (1996). Strategies of arms control: A history and typology. Manchester: Manchester University Press. Godsberg, A. (2012). Nuclear disarmament and the United Nations disarmament machinery. ILSA Journal of International and Comparative Law, 18(2), 581–595. Krause, K. (2008). Disarmament. In T. G. Weiss & S. Daws (Eds.), The Oxford handbook on the United Nations (pp. 287–299). Oxford: Oxford University Press. Krieger, D. (2007). Nuclear disarmament. In C. Webel & J. Galtung (Eds.), Handbook of peace and conflict studies (pp. 106–120). Oxon: Routledge. Pilisuk, M. (2007). Disarmament and survival. In C. Webel & J. Galtung (Eds.), Handbook of peace and conflict studies (pp. 94–105). Oxon: Routledge. Sauer, T. (2006). The nuclear nonproliferation regime in crisis. Peace Review, 18(3), 333–340. Thakur, R. (2011). Nuclear nonproliferation and disarmament: Can the power of ideas tame the power of the state? International Studies Review, 13(1), 34–45. Webster, A. (2006). From Versailles to Geneva: The many forms of interwar disarmament. Journal of Strategic Studies, 29(2), 225–246.
Further Readings
Bull, H. (1961). The control of the arms race: Disarmament and arms control in the missile age. New York: Praeger.
Burns, R. D. (2009). The evolution of arms control: From antiquity to the nuclear age. Oxford: Praeger Security International.
Cooper, N., & Mutimer, D. (Eds.). (2012). Reconceptualising arms control: Controlling the means of violence. Oxford/New York: Routledge.
Lodgaard, S. (2011). Nuclear disarmament and non-proliferation: Towards a nuclear-weapon-free world? Oxon: Routledge.
Sheehan, M. (1998). Arms control: Theory and practice. Oxford: Blackwell.
Disruptive Technologies in Food Production: The Next Green Revolution Jose Ma. Luis Montesclaros Centre for Non-Traditional Security Studies, Nanyang Technological University, Singapore, Singapore Keywords
Agtech · Green revolution · Technology
Introduction In the 1960s and 1970s, agriculture underwent its First Green Revolution (FGR). This was revolutionary because it transformed the food production landscape, allowing for significantly higher yields through greater use of improved production inputs. Disruptive technologies, in this entry, refer to technologies which can allow for similar improvements in crop yields or in other metrics such as resource use efficiency (Teng 2017).
Insights from the First Green Revolution One can say that the FGR focused on extensive, rural agriculture. New seeds allowed for higher-yielding varieties of rice, such as IR8, which had the special property of being semidwarfed, for greater resistance to lodging. There were also other new inputs (fertilizers, pesticides), greater mechanization (tractors, threshers, harvesters), and intensified infrastructure investments (irrigation, farm-to-market roads). Over time, average rice yields increased by more than 100%, from 2.03
tonnes per hectare (t/ha) in 1965 to 4.2 t/ha in 2007, with similar growth observed in other crops such as maize and wheat (Global Rice Science Partnership 2013). The FGR was triggered by international recognition of the challenge of food insecurity. Prior to the FGR, there were food shortages in many parts of the world in the 1930s and 1940s, as a result of various political processes as well as collateral damage from the Second World War, in which soil quality also suffered (UN FAO 1948). At the same time, droughts were occurring worldwide, while economies could not afford to finance the import of the food needed to fill gaps in domestic production. High inflation rates made food less affordable and, hence, even less accessible. These challenges brought to the fore the importance of food security. Former US President Nixon stressed that “the primary responsibility lies with each nation, for seeing that its nation has the food needed for health and life;” the international community, led by the United States, had earlier established the Food and Agriculture Organization of the United Nations (UN FAO) in recognition of this (UN FAO 2017).
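The arithmetic behind the cited yield figures is easy to verify. As a minimal sketch (the function name and printed message are illustrative, not from the source), the relative increase from 2.03 t/ha to 4.2 t/ha works out to roughly 107%, consistent with the "more than 100%" claim:

```python
# Hypothetical check of the yield figures cited above:
# 2.03 t/ha in 1965 vs. 4.2 t/ha in 2007.
def percent_increase(old: float, new: float) -> float:
    """Return the percentage change from old to new."""
    return (new - old) / old * 100

rice_growth = percent_increase(2.03, 4.2)
print(f"Rice yield growth 1965-2007: {rice_growth:.0f}%")  # ≈ 107%
```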
Structural Changes Requiring Another Green Revolution Over recent decades, a number of structural changes have occurred which point to the need for another green revolution. On the supply side, annual growth in yields has slowed down for some crops, given less favorable growing environments (temperatures, humidity) and irregularities in precipitation. On the demand side, the number of consumers is expected to grow to nine billion people by 2050, alongside increased urbanization, whereby as much as two-thirds of the global population is expected to live in cities, up from approximately half of the world population today. Given a limited amount of land and water, competing uses for these inputs (such as from the industry/manufacturing and energy sectors) to meet the needs of a growing population could lead to increased prices for these inputs. The challenge this poses is that, at higher prices, poorer
individuals may not be able to obtain their needed amounts of food. (For further information on threats, please refer to the entries on “Threats Which Disrupt Food Security” and “Food Prices and Economic Access to Food.”) In combination, the structural changes above have led to a new challenge today: ensuring that the needs of a growing urban consumer base are met in a more productive and resource-efficient manner, utilizing fewer natural resources and requiring less land per unit of food produced.
Next Green Revolution in Intensive Agriculture To address these challenges, several new technologies are starting to disrupt agriculture, which can collectively be referred to as Next Green Revolution (NGR) technologies. First among these is digitized agriculture, which allows farms to be run using computers to optimize the amount of inputs utilized. This allows farmers to focus more on strategic decisions in relation to running the farming enterprise. Environmental sensors are used to observe and report the plants’ growth amid environmental conditions (lighting, temperature, air pressure, humidity) and rate of water and nutrient use. The computer then processes this information to provide recommendations on whether any of these production factors need to be increased or decreased, in order to reduce waste and boost yields. To complete the loop, the computer then implements its recommended combinations of farming inputs, through technology known as “variable rate input.” The farmer’s role then is to set objectives or goals for computers to seek through optimization. This data-driven process is referred to as the Internet-of-Things (IoT), and its application to agriculture is referred to generally as IoT-enabled farming (Montesclaros et al. 2019). Another innovative solution is vertical indoor farming, also known as contained farming. To save on space, alternative irrigation systems such as hydroponics and aeroponics are utilized, some of which do not even require the use of soil and instead replace soil with essential nutrients either mixed with water that is at the base of the plant or sprayed
on top of it (He 2015; He et al. 2016). Indoor crop production requires the use of light-emitting diode (LED) lamps to replace sunlight, as well as temperature control. From the perspective of sustainability, the use of vertical farming has also been seen as a way of allowing for nature to “recover,” as this results in less land and forests being cleared for extensive agriculture (Despommier 2010). In combination, technologies covered under digitized agriculture can more effectively be implemented indoors where there is greater scope to control the environment. They also offer potentially profitable uses of space and the possibility that agriculture could be a viable enterprise for cities. These could potentially help raise production yields and reduce food lost to spoilage during transport. They also shorten the supply chain, with fewer actors involved, thereby allowing local farmers to capture more value which used to go to transporters and middlemen, consequently increasing the viability of local food production.
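The sense–optimize–actuate loop of digitized agriculture described above can be sketched in outline. This is a hypothetical illustration: the class, function names, and threshold values are invented for exposition and do not correspond to any real farm-management API.

```python
# Illustrative sketch of IoT-enabled farming's control loop:
# sensors report conditions, the computer recommends adjustments,
# and "variable rate input" actuation applies them.
from dataclasses import dataclass

@dataclass
class Reading:
    temperature_c: float
    humidity_pct: float
    nutrient_ppm: float

def recommend(reading: Reading, target_nutrient_ppm: float = 800.0) -> dict:
    """Compare sensed conditions with (hypothetical) targets and
    suggest input adjustments to reduce waste and boost yields."""
    delta = target_nutrient_ppm - reading.nutrient_ppm
    return {
        # Positive delta -> dose more nutrient solution; negative -> dose less.
        "nutrient_adjustment_ppm": delta,
        "increase_ventilation": reading.humidity_pct > 85.0,
    }

def control_loop(readings):
    """One pass of the loop: each sensor reading yields an actuation command."""
    return [recommend(r) for r in readings]

commands = control_loop([Reading(24.5, 90.0, 650.0)])
print(commands)
```

In a real deployment, the farmer's role reduces to setting the targets (here, `target_nutrient_ppm`) while the loop runs continuously.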
The Next Green Revolution in Extensive Agriculture Not all plants are suitable for intensive agriculture. Given the capital-intensive nature of these systems, lower-priced products may not allow investors to recover their initial investments. Moreover, land rental costs mount as more land is required. For instance, it is unlikely that rice, wheat, and maize, which were the focus of the first green revolution, can be grown indoors. For plants grown primarily through extensive agriculture, the Next Green Revolution has a different face. Unlike intensive agriculture, where temperatures can be controlled, extensive agriculture can only alter the way crops adapt to the environment (Montesclaros et al. 2019). An important technology in this regard is the use of drones, which can be seen as the outdoor counterpart of indoor agriculture’s environmental sensors, crop analytics, and variable rate input. Drones fulfil these same functions. First, drones can be used to record temperature, humidity, air pressure, wind, and precipitation. Next, they can be used for crop
analytics. To do so, they gather raw information through snapshots or videos of growing environments. This information is analyzed to infer the height of crops and to compare growth across plots of land. Based on the computed heights and growth rates of plants, along with information on color, crop analytics applications can then estimate the impacts of varying conditions on crop growth. They can also identify potential pests and diseases affecting plants and spot weeds as well (Lambert et al. 2017). Finally, drones can fulfil the role of variable rate input through targeted application of pesticides on the plants, while a relative of drones, agbots, is now being developed to identify weeds and remove them mechanically. Together with drones, one can also integrate the use of nanotechnology to improve pest removal and soil fertilization. Given that environments cannot be changed, another new development is altering the traits of plants to allow them to grow better under different climatic conditions. Biotechnology has been used to draw out desired traits of plants, such as plant quality, virus resistance, or tolerance to extreme environmental conditions (e.g., droughts, submergence). A technological development for achieving this is gene editing (Georges and Ray 2017). A popular tool for this is referred to as clustered regularly interspaced short palindromic repeats (CRISPR) and CRISPR-associated proteins (Cas), an application better known as CRISPR-Cas9. These new plant breeding techniques can develop the desired traits in a shorter time period when compared to traditional breeding techniques (Demirci et al. 2018). Unlike genetically modified organisms (GMOs), plant varieties produced using gene editing do not contain genetic material from other organisms; rather, these varieties are only the result of the recombination of genes within the same organism. As such, so far, there has been less stigma attached to them.
The technology has been applied to rice, maize, opium poppy (grown for medical purposes), soybean, tobacco, tomatoes, and even flowers for traits such as higher yields and resistance to extreme weather and to herbicides, too.
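The plot-comparison step of the drone-based crop analytics described earlier can likewise be sketched. All plot labels, heights, and thresholds below are hypothetical; a real system would infer heights from drone imagery rather than take them as given.

```python
# Toy illustration of comparing crop growth across plots of land,
# as drone-based crop analytics does with inferred crop heights.
def growth_rates(h_before, h_after, days):
    """Per-plot growth in cm/day between two drone surveys."""
    return {p: (h_after[p] - h_before[p]) / days for p in h_before}

def lagging_plots(rates, threshold_fraction=0.5):
    """Flag plots growing at less than a fraction of the field-mean rate,
    which may indicate pests, disease, or weed pressure."""
    mean = sum(rates.values()) / len(rates)
    return [p for p, r in rates.items() if r < threshold_fraction * mean]

rates = growth_rates({"A": 30.0, "B": 31.0}, {"A": 44.0, "B": 34.0}, days=7)
print(lagging_plots(rates))  # plot "B" is growing far slower than plot "A"
```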
Conclusion Given today’s challenges of climate change and rapid urbanization, another green revolution is needed to ensure that the needs of a growing urban consumer base are met in a more resource-efficient manner, utilizing fewer natural resources per unit of food produced. This entry has framed the disruptive technologies needed for this revolution by distinguishing between farming systems, namely intensive and extensive systems. Digitization, through environmental sensing, crop analytics, and variable rate input, is applied to both systems, but in different ways, with the former using static technologies and the latter utilizing roaming drones. Another key difference is that, unlike in intensive agriculture, environments are not easily controlled in extensive agriculture, therefore requiring biotechnological developments that allow crops to thrive under diverse growth environments.
References
Demirci, Y., Zhang, B., & Unver, T. (2018). CRISPR/Cas9: An RNA-guided highly precise synthetic tool for plant genome editing. Journal of Cellular Physiology, 233(3), 1844–1859. https://www.ncbi.nlm.nih.gov/pubmed/28430356. Accessed 16 April 2018.
Despommier, D. (2010). The vertical farm: Feeding the world in the 21st century. Macmillan. https://books.google.com.sg/books/about/The_Vertical_Farm.html?id=0DxTK0jW35sC&source=kp_cover&redir_esc=y. Accessed 14 June 2018.
Georges, F., & Ray, H. (2017). Genome editing of crops: A renewed opportunity for food security. GM Crops and Food, 8(1), 1–12. https://doi.org/10.1080/21645698.2016.1270489. Accessed 13 March 2018.
Global Rice Science Partnership. (2013). Rice almanac (4th ed.). http://books.irri.org/9789712203008_content.pdf. Accessed 14 June 2018.
He, J. (2015). Farming of vegetables in space-limited environments. Cosmos, 11(1), 21–36. World Scientific Publishing Company. https://doi.org/10.1142/S0219607715500020.
He, J., See, X. E., Qin, L., & Choong, T. W. (2016). Effects of root-zone temperature on photosynthesis, productivity and nutritional quality of aeroponically grown salad rocket (Eruca sativa) vegetable. American Journal of Plant Sciences, 7, 1993–2005. https://doi.org/10.4236/ajps.2016.714181.
Lambert, J. P. T., Hicks, H. L., Childs, D. Z., & Freckleton, R. P. (2017). Evaluating the potential of unmanned aerial systems for mapping weeds at field scales: A case study with Alopecurus myosuroides. Weed Research, 58, 35–45. https://www.tandfonline.com/doi/abs/10.1080/01431161.2018.1448484?journalCode=tres20. Accessed 16 April 2018.
Montesclaros, J. M. L., Babu, S. C., & Teng, P. S. (2019). IoT-enabled farms and climate-adaptive agriculture technologies: Investment lessons from Singapore. IFPRI Discussion Paper 1805. Washington, DC: International Food Policy Research Institute (IFPRI). https://doi.org/10.2499/p15738coll2.133079. Accessed 1 March 2019.
Mukhopadhyay, S. S. (2014). Nanotechnology in agriculture: Prospects and constraints. Dovepress Journal, 2014/7, 63–71.
Teng, P. (2017). Knowledge intensive agriculture: The new disruptor in world food? RSIS Commentary No. 17124, 23 June 2017. https://www.rsis.edu.sg/rsis-publication/nts/co17124-knowledge-intensive-agriculture-the-new-disruptor-in-world-food/#.Wqdk6mpuaM8. Accessed 13 March 2018.
UN FAO. (1948). The state of food and agriculture, 1948: A survey of world conditions and prospects. Washington, DC: Food and Agriculture Organisation of the United Nations. http://www.fao.org/docrep/016/ap636e/ap636e.pdf. Accessed 12 Dec 2017.
UN FAO. (2017). Redressing hunger and malnutrition in the wake of the crisis – 1954-1955 – FAO 70th anniversary. http://www.fao.org/70/1945-55/en/. Accessed 12 Dec 2017.
Diversity Sutapa Ghosh Department of Sociology, Barrackpore Rastraguru Surendranath College, Barrackpore, India Keywords
Diversity · Multiculturalism · Race · Ethnic diversity
Introduction Diversity denotes the inclusion of many different forms of things or people in something, a range of different things or people. It is also the mixture of races and religions that make up a
group of people. The concepts of “pluralism” and “diversity” go hand in hand. Social diversity refers to the existence of different forms or varieties of social phenomena that are dissimilar from and contrast with each other. Pluralism, on the other hand, denotes “more than one.” The Indian scenario represents pluralism as the existence of diversity in different social phenomena: cultural, geographic, racial, religious, linguistic, and ethnic. This entry explores the meaning and concept of diversity as well as its various dimensions; it also seeks to trace the theoretical connotations of diversity. Diversity is like a garland made up of various colorful and fragrant flowers. Culture, gender, religion, lifestyle, and even the family play a vital role in making society more diverse. As society proceeds from mechanical solidarity to organic solidarity, diversity increases and has an impact on social interaction and the integration of societies. Other factors, such as increased cross-border mobility, less rigid gender roles, rising standards of living, and the process of individualization, also create an ambiance for diversification. Social interaction, self-conception, and the internal integration of societies give rise to various lifestyles, value systems, and sensibilities. Sociologically, diversity refers to the existence of collective disparities in the aforesaid categories. The roots of these variations may be several: racial diversity arises when the difference is biological, religious diversity occurs on the basis of religion, linguistic diversity emerges when language becomes the basis, and so on. Sociologists have also noted diversity in family patterns, indicating five types: organizational, cultural, class, life-course, and cohort. In the organizational dimension, families assign domestic duties to their members in numerous ways.
In an “orthodox” family, the woman plays the role of “housewife” and the husband acts as “breadwinner,” while single-parent or dual-career families portray diversity. Culture plays a pivotal role in encouraging diversity of family benefits and values; cultural varieties in the family have paved the way for the women’s movement, and the concept of feminism has arisen in this way. Class diversity is found among the poor, the working class,
middle, and upper classes. Life-course diversity refers to differences in family experiences over the life course. Cohort diversity represents the connections between generations, which have now, probably, become weaker than in the past. Apart from this, sexual diversity is found in family organizations: both heterosexual families and homosexual partnerships are accepted in western as well as eastern societies (Giddens 2010).
Sociological Perspectives of Diversity In the tradition of classical sociology, we hardly come across any specific reference to diversity. Nevertheless, the roots of present sociological discourses on diversity, especially those focusing on diversity in family patterns, gender diversity, the diversity of marginalized sections, etc., can be traced to the thinking of the founding fathers of sociology. The concern of contemporary sociology with diversity essentially centers on modern social institutions and the processes therein, which forcibly or otherwise push people towards the margins of modern society. The contributions of the founding fathers that touch upon the deprived and underprivileged segments of society are analyzed here to gain insight into the continuing debate over diversity. Auguste Comte, the founding father of the discipline, describes the process of evolution through three successive stages of development – the Theological, the Metaphysical, and the Scientific or Positive. As thinking, interaction, and social configuration alter across these stages, diversification emerges. As civilization gathers momentum in its movement from military society towards developed industrial society, one, probably, observes the ethos of diversity coming into existence: society proceeds from the homogeneous to the heterogeneous (Coser 1996). Herbert Spencer elevated the sociological concept of progress. The evolution of society involves increasing complexity of social structure and associated cultural symbols, and this complexity increases the capacity of the human species to adapt and survive in its environment
(Turner 1998, p. 81). As a champion of the concept of liberty, Spencer espouses the notion of diversity, as we understand it today, quite vociferously. To him, Each member of the race . . . must not only be endowed with faculties enabling him to receive the highest enjoyment in the act of living, but must be so constituted that he may obtain full satisfaction for every desire, without diminishing the power of others. . . . (Spencer 1851, p. 250)
In Emile Durkheim’s sociology of morality, the division of labor, and suicide, we find a continuous tension between the relative importance of the individual and the collectivity. In every aspect of Durkheimian sociology, we find a great concern with morality. Morality is something that binds individuals into a collectivity and imparts in them a sense of respect for the whole, which ultimately results in the formation of societies in which individuals claim justice and rights. So morality results in some form of diversity on the basis of which a society comes into being. Moreover, it shows that unity lies in the various sources of diversity (Haralambos and Heald 2006). Like Spencer, Durkheim also shows that society is progressing from a simple to a complex, or modern, state. The division of labor plays an active role in the process of transformation from a small, homogeneous society to a large, heterogeneous modern one. With the increase in the level of the division of labor and specialization, the society characterized by mechanical solidarity gets transformed into one characterized by organic solidarity. As a result of the increase in the level of the division of labor in modern society, the degree of collective conscience also declines, which ultimately results in an upsurge of individuality. In this situation “it becomes a source of disintegration” for the individual as well as for society (Durkheim 1893a, p. 105). This leads to a moral crisis which may usher in a stage of anomie at the social level. In such a situation, there is hardly any moral or social control over the lives of individuals. Needless to say, in this chaotic social condition, individuals’ rights, liberty, and freedom suffer sheer violation. The violation of the rights of individuals,
perhaps, leads to the violation of the rights of women, children, minorities, and other marginalized groups at the cross-sections of any given society. Therefore, it can be said that diversification among the members of society may, probably, create paths of hatred, intolerance, and disharmony, hampering social solidarity and cohesion. In this situation, according to Durkheim, the state cannot be “a spectator of social life” (1893b, p. 72). In such an anomic social condition, the state should function as a protector of rights and ensure self-realization for the individual. Unlike Spencer and other liberal philosophers, Durkheim, perhaps, considers the issue of individuals’ freedom and rights as a function of the all-important issues of social integration and cohesion. For Durkheim, the fullest development of individuality is achieved when individuals become able to satisfy the needs of the collectivity. Rights of the individual, or individuality, hence, in the Durkheimian parlance, probably, develop and are protected insofar as they are functional for the cause of social integration. In the face of the upsurge of individuality in modern society, Durkheim views the state, the division of labor, and other institutions as agencies which can tune individuals’ aspirations to the needs of society. Max Weber, in his treatise The Protestant Ethic and the Spirit of Capitalism (2003), attempts to trace the path leading to modern society. Lutheranism, for him, laid the necessary foundation for the growth of human rationality, which ultimately created the background for the emergence of individualism. An individual powered by his competence and primary qualities of character adopts a job in which an impersonal and specialized function is required.
This job is business activity, which is regarded as a “calling.” Such roles are organized into a highly bureaucratic structure in which individuals are subjected not only to the informal ethical discipline of institutional patterns but also to a rigorous system of formally organized hierarchical authority (Weber 2003, p. 162). Rational individuals then read the Bible on their own, and these rational beings are eager for economic development. As a result, capitalism, the economic foundation of modern society,
comes into effect. The rational basis of modern industrial capitalism liberates individuals from the bonds of conservatism. With this substantive transformation, the concept of diversity has gained significance. The upsurge of individualism may encourage diversity and promote the concept of security. The whole sociological discourse on diversity, perhaps, is embedded in it. Coser, the conflict functionalist, opines that conflict does not always produce violence, nor does revolution take place every time. Conflict may slacken for the time being because grievances are redressed and a positive perspective is developed for the betterment of society and for the sake of the collective conscience. In this way, social solidarity is maintained in a diverse situation (Turner 1998). In the eighteenth century, liberal theory challenged the idea of subordination and emphasized the functioning of the state to protect the rights of citizens. Sociologists belonging to this school of thought also talked about the principle of noninterference, whereby individuals would have the free space to develop their individuality to the fullest extent. It was a time when rights were considered a luxury and were enjoyed only by middle- and upper-class people, primarily males (Gadda 2008). Later, women came to challenge the prevailing inequality in institutions. Women’s movements arose against the oppression of patriarchy, and feminism provided the necessary ideological backup (Friedan 1963). To resist patriarchal domination and oppression, the women’s movement began in the middle of the nineteenth century. The ideology of the movement was oriented towards achieving an equal share of scarce resources (e.g., wealth, power, income, and status) for women. Women activists viewed gender inequality as the result of the patriarchal and sexist pattern of the division of labor.
They firmly believed that gender equality could be produced by re-patterning the key institutions – law, work, family, education, and media. As the women’s movement gathered momentum, women became aware of their rights. The footprint of gender diversity appears here.
India: Shrine of Diversity and Unity The ethos of India is “unity in diversity.” Beginning with geographical variations, one can, perhaps, see the differences between various regions. India is home to many different mother tongues. The people of this country display different racial categories, showing that the population has its roots in different racial groups. The policy of secularism has created room for religious diversity, and this generates cultural pluralism in India. India is the habitat of a number of races. Depending on physical features, Risley classified the races into seven categories (Bhasin 1983, p. 20): the Turko-Iranian type, the Indo-Aryan type, the Scytho-Dravidian type, the Aryo-Dravidian type, the Mongolo-Dravidian type, the Mongoloid type, and the Dravidian type. Language is an essence of any culture, and “India is a linguistic madhouse” (Sen 2012, p. 5). Language comprises much more than the characteristics of a particular speech – grammar, structure, phonetics, and pronunciation – it is also a matter of social identity and group loyalty (Oommen and Venugopal 1998). The people of India demonstrate a huge amount of diversity in the languages and dialects spoken. On the basis of language, the country can be divided into four linguistic regions: the Southern Region, the Eastern Region, the Western Region, and the North-Central Region. Since time immemorial a variety of religions have flourished in India, but its outlook has always been secular. Initially, Indian society was dominated by Hinduism, but as time passed different religions such as Islam, Christianity, Buddhism, Jainism, Sikhism, Zoroastrianism (the Parsis), and Judaism also fashioned their domains. India has a rich and diversified cultural heritage comprising customs, traditions, beliefs, and morals. It is a potpourri of music, art and craft, literature, dance, etc. Ethnic diversity is found in the caste system and among the tribes; the country gives shelter to many tribes.
So India is a country where heterogeneous peoples and vivid diversity in the form of religions, races, and languages are found. Yet this diversity, possibly, fabricates unity, integrity, and
homogeneity in the society. Indian people have developed a notion of tolerance irrespective of caste, religion, language, etc., and this provides security for the country’s citizens. However, this idea of secularism and tolerance is, probably, now in doubt. Evidence shows that diversity sometimes obstructs the path of peace and harmony, and people fight against each other. Communalism, intolerance, and disharmony often hamper the growth and development of the country. These create a feeling of insecurity within the country and have also threatened global security. The Maobadi, the extremists of Kashmir, and Hindu-Muslim hatred all create an environment of tension, threat, and ethnocentrism.
Japan: Homogeneous or Diversified Society Japan is recognized as one of the most homogeneous countries in the world. Japan stands as a cornerstone of advanced technology and a pillar of success, though race remains a serious national issue. If one looks deeply, however, it will be found that racial disparity has prevailed in Japan since the 1950s, signifying that the period following the American attack raised differentiation and an antagonistic atmosphere for non-Japanese. In spite of the changing demographic scenario, marginal groups in Japan are identified as socially peripheral groups. As a result, the harmonized society intertwines many half-Japanese people and foreigners alike, and their children face challenges in combating racial discrimination. Due to cultural segregation, many biracial children are, perhaps, still influenced by both cultures on the basis of their different phenotypes. According to census statistics, 98.5% of the population of Japan are Japanese, with the rest being foreign nationals dwelling in Japan, although this does not depict a clear picture of ethnicity – the Ainu, Ryukyuans, Burakumin, and naturalized immigrants are counted simply as “Japanese.” Refusing to collect data that reflect ethnic identities, the Japanese government asserts that Japanese citizens belong to the same race and that no disparity arises from racial segregation. According to the United Nations’ 2008 Diène report, communities most affected by racism and xenophobia in Japan include:
• The national minorities of Ainu and people of Okinawa
• People and descendants of people from neighboring countries (Koreans and Chinese)
• The new immigrants from other Asian, African, South American, and Middle Eastern countries (Source: https://sites.google.com/a/richland2.org/japan%2D%2D-flemming%2D%2D-foti-6/culture?tmpl=%2Fsystem%2Fapp%2Ftemplates%2Fprint%2F&showPrintDialog=1. Accessed on 15/03/2019)
History shows that, from the commencement of Japanese colonialism, racial orthodoxy against other Asians was customary in Imperial Japan. An Investigation of Global Policy with the Yamato Race as Nucleus, a classified report of the Ministry of Health and Welfare completed on July 1, 1943, argues that, just as the family has a conspicuous hierarchy yet promotes coherence and reciprocity, the Japanese, being a racially superior people, were destined to govern Asia “eternally” as the head of the family of Asian nations. There are numerous minority groups who are Japanese citizens, for instance, the Ainu (an aboriginal people primarily residing in Hokkaido), the Ryukyuans (who may or may not be considered ethnically Yamato people), Koreans (known as “Zainichi”), Chinese, and the citizen descendants of immigrants. This reveals that Japan is more ethnically diverse than most of us apprehend. Though minorities have been denied recognition, the picture is changing through efforts to protect and restore the unparalleled cultures of Japan’s minority groups.
China: Depot of Ethnic Diversity China is very diverse in nature. Like India’s, China’s geographical diversity plays a pivotal role in promulgating different cultures. Religion is not officially recognized by China’s administration, yet very
few people practice religions such as Christianity, Buddhism, Taoism, Islam, and Judaism. The Chinese government has identified 56 ethnic groups, several of which have their own languages. China is a storehouse of culture. The core of China consists mainly of the Han ethnicity; other ethnic groups include the Manchus, Mongols, Qiang, and Tibetans, while Muslims are found in Northwestern China. Cultures and lifestyles sometimes differ according to regional diversity. The Chinese government has therefore set up five province-level autonomous regions – Xinjiang (mainly for Uyghurs), Tibet (mainly for Tibetans), Ningxia (mainly for Hui), Guangxi (mainly for Zhuang), and Inner Mongolia (mainly for Mongols) – to show respect for their cultures (Source: https://www.quora.com/How-diverse-is-China-in-terms-of-culture-and-language. Accessed on 06/05/2018).
Afghanistan: Repository of Cultural Diversity
As an Islamic country, Afghanistan portrays its cultural diversity in distinctive ways. The people of Afghanistan express their own cultures and beliefs depending on region, locality, and tradition. The country is a multiethnic and largely tribal society. Its population comprises the following ethnolinguistic groups: Pashtun, Tajik, Hazara, Uzbek, Aymāq, Turkmen, Baloch, Pashai, Nuristani, Gujjar, Arab, Brahui, Pamiri, and a few others. Since 1747, when the country came into existence, a strict division (caste system) among the people of Afghanistan has determined their standard of living. Gender diversity plays an important role in illustrating the situation of women in the country; under Taliban rule, the status of women worsened. At present, the people of Afghanistan face ethnic tensions that generate deep discomfort in the country's diverse communities. Pointing to the societal and cultural heterogeneity of the country, Ravan Farhadi declares that Afghanistan is a marginal state because no ethnic group makes up more than a third of the population.
Bangladesh: Stocks of Culture
Although Bangladesh is regarded as a homogeneous country, its constitution has not yet recognized the linguistic and cultural diversity of its inhabitants, and multiculturalism in Bangladesh has never been formally appreciated. One can claim Bangladesh to be ethnically homogeneous, as 98 percent of its population speaks Bengali and a staggering 87 percent are Muslims. Yet Bangladesh is divided into distinct cultural areas and is home to four major religions: Christians, Sikhs, and atheists form a minuscule part of the population, while Hindus and Buddhists form a significant minority. The four major tribes, the Chakmas, Marmas, Tipperas, and Mros, can be distinguished by their differences of dialect, dress, and customs. Looking beyond the constitution and its loopholes, one can witness an extent of diglossia and multilingualism that varies according to class, ethnicity, gender, and religion across the regions of this so-called "homogeneous" country.
Pakistan
The two-nation theory gave birth to Pakistan, but it has failed to ensure harmony in the country. It was linguistic diversity that led Bangladesh to emerge as a sovereign country. The ongoing conflict between various communities often obstructs the integrity of the country. Racial segregation and gender disparity further hinder the growth of the nation, and the security of the people is thus hampered.
America: "The Melting Pot"
This country inherited its ethos, that is, its language, legal system, and other cultural aspects, from the British. Immigrants from all the European countries came and settled in the USA in the beginning; now people from all over the world go there to make their fortunes. America is truly "the melting pot," assimilating the cultures, values, and beliefs of all and becoming a land of multiculturalism, diverse races, and ethnicities. Diversity and America are complementary terms now.
Concluding Remarks
Sociologists have raised their voices in various discourses on the value of diversity. The question sometimes divides social researchers into two groups, one emphasizing the jeopardizing consequences and the other the favorable conditions that emerge from increasing social diversity. There is strong doubt about whether diverse societies are capable of ever being held together. This is exhibited in empirical studies conducted in the USA showing that ethnic diversity can be associated with low levels of trust among citizens and in social institutions. Other studies observe, however, that the heterogeneous character of a population and an aura of tolerance have facilitated regional economic development. In this view, diversity is associated with creative thinking, liberality, and vigor: a multicultural settlement attracts the cultural and economic elite and serves as a breeding ground for new ideas. Diversity, perhaps, thus fabricates unity and integrity. At the same time, however, identity formation is a principal modus operandi of diversity, and the genesis of identity may create an atmosphere of separatism. In India, various states have been formed on the basis of linguistic, ethnic, and regional diversity. The impulse of separatism brought forth new states such as Chhattisgarh, Uttarakhand, and Jharkhand, carved from Madhya Pradesh, Uttar Pradesh, and Bihar, respectively. Each state is formed on the basis of diversity, which leads to independent identity formation as well as a separatist outlook. This separatist view, perhaps, refuses to take responsibility for others; it also breaks down the concepts of humanity, liberty, and brotherhood. For this reason, global security is in question. A peaceless situation prevails even in advanced countries, where assailants often take the lives of innocent people, including school children. One can also find the severe, meager, and wretched conditions of the Rohingya in Myanmar and of the people of Syria and Afghanistan. The world's safety and security are in a vulnerable position, threatened, probably, by the notion of diversity.
Cross-References
▶ Multiculturalism
▶ Societal Identity and Security
References
Bhasin, M. K. (1983). People of India: An investigation of biological variability in ecological, ethno-economic and linguistic groups. New Delhi: Kamala Raj Enterprises.
Coser, L. A. (1996). Masters of sociological thought: Ideas in historical and social context. Jaipur: Rawat Publications.
Durkheim, E. (1893a). The division of labor in society. Cited by L. D. Edles (2015) in Sociological theory in the classical era: Text and readings. Los Angeles: Sage.
Durkheim, E. (1893b). The division of labor in society. Cited by L. D. Edles (1958) in Professional ethics and civic morals. Illinois: The Free Press.
Giddens, A. (2010). Sociology. Cambridge: Polity.
Haralambos, M., & Heald, R. M. (2006). Sociology: Themes and perspectives. New Delhi: Oxford University Press.
Sen, P. K. (2012). Indian society: Continuity and change. New Delhi: Pearson.
Spencer, H. (1851). Social statics; or the conditions essential to human happiness specified and the first of them developed. London: John Chapman. http://oll.libertyfund.org/title/273
Weber, M. (2003). The protestant ethic and the spirit of capitalism. New York: Dover Publications.
Further Reading
Ahmadi, D. (2018). Diversity and social cohesion: The case of Jane-Finch, a highly diverse lower-income Toronto neighbourhood. Urban Research & Practice, 11(2), 139–158. https://doi.org/10.1080/17535069.2017.1312509. Accessed 10 Apr 2018.
Dincer, O. C., & Wang, F. (2011). Ethnic diversity and economic growth in China. Journal of Economic Policy Reform, 14(1), 1–10.
Dube, S. C. (1990). Indian society. New Delhi: National Book Trust.
Freund, J. (1969). The sociology of Max Weber. New York: Vintage Books.
Friedan, B. (1963). The feminine mystique. New York: Dell.
Gadda, A. (2008). Rights, Foucault and power: A critical analysis of the United Nations Convention on the Rights of the Child (Edinburgh working papers in sociology). Edinburgh: University of Edinburgh.
Oommen, T. K., & Venugopal, C. N. (1998). Sociology for law students. New Delhi: Eastern Book Company.
Putnam, R. D. (2007). E pluribus unum: Diversity and community in the twenty-first century. Scandinavian Political Studies, 30(2), 137–174.
Ram, A. (2002). Indian social system. Jaipur: Rawat.
Sturgis, P., Brunton-Smith, I., Kuha, J., & Jackson, J. (2014). Ethnic diversity, segregation and the social cohesion of neighbourhoods in London. Ethnic and Racial Studies, 37(8), 1286–1309. https://doi.org/10.1080/01419870.2013.831932.
Turner, J. H. (1998). The structure of sociological theory (6th ed.). Cincinnati: Wadsworth Publishing Company.
Doctors Without Borders – Médecins Sans Frontières

S. Paul
Department of Peace and Conflict Studies and Management, Sikkim University, Gangtok, Sikkim, India

Keywords
Humanitarian relief · Endemic diseases · Armed conflicts · Natural disasters · Refugees · Internally displaced people
Introduction
The medical relief organization Médecins Sans Frontières (MSF), also known as Doctors Without Borders, is often referred to as "the cowboys of humanitarian aid" (Jefferis 2005). It is known for its untiring efforts to assist people in war-torn regions and in developing countries affected by endemic diseases. In 2015, over 30,000 MSF personnel, mostly local doctors, nurses and other medical professionals, logistical experts, water and sanitation engineers, and administrators, provided medical aid in over
70 countries. The organization has offices in 21 countries/entities: Australia, Austria, Belgium, Brazil, Canada, Denmark, France, Germany, Greece, Holland, Hong Kong, Italy, Japan, Luxembourg, Norway, Spain, South Africa, Sweden, Switzerland, the United Kingdom, and the USA. In addition, MSF has an international office in Geneva, an Access to Essential Medicines Campaign office in Geneva, and two UN liaison offices, one in Geneva and one in New York City. Seven MSF branch offices operate in Argentina, the Czech Republic, India, Ireland, Mexico, South Korea, and the United Arab Emirates. MSF focuses its work on four areas: armed conflicts, natural disasters, neglected people, and refugees and internally displaced people. It provides assistance to populations in distress, be they victims of natural or man-made disasters, with everything they need, from psychological care to lifesaving nutrition. MSF has set up health care facilities in almost all parts of the world where its assistance is needed. Its actions are guided by medical ethics and the principles of independence and impartiality, and it is ready to offer assistance in any country based on an independent assessment of people's needs.
Historical Evolution
MSF was founded in 1971, in the aftermath of the Biafra secession attempt, during the Nigerian Civil War of 1967–1970, by a small group of French doctors and journalists who sought to expand access to medical care across national boundaries irrespective of race, religion, creed or political affiliation. The Nigerian military formed a blockade around the breakaway region. At the time, France was the only major country supportive of Biafrans and the conditions within the blockaded area were unknown to the world. A number of French doctors volunteered with the French Red Cross to work in hospitals and feeding centers in besieged Biafra. One of the cofounders of the organization was Bernard Kouchner, who would later in life become Minister of Foreign Affairs of France. After entering the
country, the volunteers, in addition to Biafran health workers and hospitals, were subjected to attacks by the Nigerian army and witnessed civilians being murdered and starved by the blockading forces. The doctors publicly criticized the Nigerian government and the Red Cross for their seemingly complicit behavior and disagreed with the policy of not interfering in the politics of countries undergoing internal armed conflict. These doctors concluded that a new aid organization was needed that would ignore political and religious boundaries and prioritize the welfare of victims. The civil war in Biafra thus resulted in the founding of GIMCU, or Groupe d'Intervention Médical et Chirurgical d'Urgence (Emergency Medical and Surgical Intervention Group). A second, similarly complex humanitarian emergency was the result of Cyclone Bhola in East Pakistan (now Bangladesh). The crisis led to the establishment of SMF, or Secours Médical Français, i.e., French Medical Relief. On December 20, 1971, MSF was born from the merger of GIMCU and SMF, with Kouchner as its first director (Shampo and Kyle 2011, 1) (Table 1).
Examples of the Activities of MSF in Various Countries and Areas
Sudan: MSF has been providing medical humanitarian assistance in Sudan since 1979 in the context of civil war. One of the most dangerous diseases present there, to which as much as one
half of the Sudanese population was exposed, was visceral leishmaniasis (locally known as kala azar). In March 2010, MSF set up its first kala azar treatment center in Eastern Sudan, providing free treatment for this deadly disease. If left untreated, the disease has a fatality rate of 99% within 1–4 months of infection. Since the treatment center was set up, MSF has cured more than 27,000 kala azar patients with a success rate of approximately 90–95%. MSF has been providing necessary medical supplies to hospitals and training South Sudanese health professionals to help them deal with kala azar (MSF 2010).
Syria: Syria remains one of the most complex and volatile humanitarian crises in the world today. In the war-torn country, International Humanitarian Law (IHL) has been regularly overlooked, amounting to an absence of due care on the part of the parties to the conflict to avoid civilian casualties. Even those civilians who manage to flee the front lines or besieged areas and reach the border are finding it increasingly difficult, or at times impossible, to seek refuge abroad. Border restrictions and closures are forcing people to return to the places in Syria that they have fled or to camp out in the desert with no facilities or resources, at risk of violence, disease, and hunger. From the early stage of the conflict in Syria, MSF sought permission to extend its medical assistance to all parts of the country, but permission has not been granted. This has resulted in MSF's medical support being limited to regions controlled by opposition forces, or restricted to cross-frontline and/or cross-border support to
Doctors Without Borders – Médecins Sans Frontières, Table 1 The work of MSF from 1972 to 1989 (Source: Bortolotti 2004; MSF 2009)

Year | Location and causes | Work
1972 | Nicaragua, earthquake | Relief mission
1974 | Honduras, Hurricane Fifi causes major floods | Set up a long-term medical relief mission
1975–1979 | War between South Vietnam and North Vietnam | Set up a refugee camp mission in Thailand for Cambodian refugees
1976–1984 | Lebanese Civil War | Assisted surgeries in the hospitals in various cities in Lebanon; MSF helped those in need of medical aid without regard to religious background
1984 | Ethiopia, famine | Set up nutrition programs
1989 | Cambodia | Started a long-term relief mission to help survivors of war and to reconstruct the country's health care system
medical networks in government-controlled areas, undertaken without official consent. In the opposition-controlled regions close to the border with Turkey, MSF was able, between 2012 and 2014, to maintain six fully functional hospitals and five outpatient clinics staffed directly by MSF national and international medical staff (MSF 2015b). With much difficulty, MSF continues to operate six health facilities in the north of Syria and to support up to 150 other health facilities with medical aid (MSF 2016). Despite all the challenges, including the destruction of some of the health centers concerned, MSF performed more than 1000 surgeries inside Syria in 2012 and more in 2013. MSF teams are also working in the neighboring countries that are providing assistance to refugees (Table 2).
Myanmar and the Rohingya refugees and IDPs: Due to ongoing government repression and intercommunal violence, Rohingya refugees have been fleeing Myanmar/Burma in large numbers to Bangladesh. Up to 700,000 Rohingya refugees fled to Bangladesh following targeted violence in neighboring Rakhine state in Myanmar. In 2016, MSF continued to provide healthcare to vulnerable people in Bangladesh, including a large number of Rohingya refugees from Myanmar. In early November, MSF conducted six retrospective mortality surveys in different sections of the refugee settlements in Cox's Bazar, Bangladesh, just over the border from Myanmar, contributing these key findings to the international community's assessment of the situation (MSF 2017). The major MSF project locations in Cox's Bazar are Rubber Garden near Kutupalong, Balukhali, Balukhali 2, Tasnimarkhola, Jamtoli, Hakimpara, and Moynarghona. In terms of health challenges in the area, measles and diphtheria were seen as the key public

Doctors Without Borders – Médecins Sans Frontières, Table 2 MSF's vaccination work in Aleppo Governorate (North) since 2016 (Source: MSF 2016)

Activities | Numbers
Immunization program | 35,907 children
Tetanus |
Measles |