Ron Iphofen Editor
Handbook of Research Ethics and Scientific Integrity
Handbook of Research Ethics and Scientific Integrity With 22 Figures and 25 Tables
Editor Ron Iphofen Chatelaillon Plage, France
ISBN 978-3-030-16758-5    ISBN 978-3-030-16759-2 (eBook)    ISBN 978-3-030-16760-8 (print and electronic bundle)
https://doi.org/10.1007/978-3-030-16759-2

© Springer Nature Switzerland AG 2020

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Preface
This handbook was originally envisaged as a "one-stop shop" for contemporary information, issues, and challenges in the fields of research ethics and scientific integrity. Our central aim is for interested readers to find comprehensive coverage of these issues both within their "home" discipline and in relation to similar concerns in other disciplines. It should also stand as a "first port of call" for anyone seeking to explore such related issues further. These topics and issues are foundations that researchers, reviewers, and policymakers can build on when pursuing the best ways to practice and promote ethical research and integrity in science. Given the information now available on the Internet, it is perhaps impossible for any one handbook to fulfill these promises entirely. Some readers will find all they need here; others we would encourage to begin here and use these foundational chapters to build their knowledge and good research practice.

Each chapter aims to be as comprehensive, detailed, accurate, and up to date as we could make it, while recognizing that, in a field as fluid as research ethics, the sites, settings, and practices can change quite rapidly. We have striven for chapters to be as authoritative and as internationally referenced as possible so that readers can be assured of the accuracy and relevance of the content. Some chapters do focus on their "own" local geographical region or country, but hopefully with enough lessons learned that can be applied more generally.

To structure the chapters, we asked each author to supply a summary abstract outlining the topics covered in their chapter, followed by an introduction anticipating their approach to those topics. We wanted some background coverage – such as a historiography and key writers or contributors to developments in the field.
We also sought a balanced discussion about current debates together with some horizon scanning for future issues they might anticipate as presenting difficulties. We also asked if they could propose "solutions" to the dilemmas involved where possible. In order to allow a reasonable degree of scholarly freedom, authors were not confined to the rigid application of these topics as sectional subheadings.

But the parts in the handbook do follow a planned sequence. First, we look at how research in general is governed and monitored in order to forestall the typical forms of misconduct (Part II). Then we move to consider the key topics that are regularly "revisited" when ethics in research is mentioned (Part III). This is followed by a part that
addresses the ways in which the variety of research methods manage both the topics and the various forms of guidance and governance (Part IV). The next part covers recognition of the specific needs and concerns that researchers face with different kinds of subjects or participants (Part V). The final part draws all this together, looking at how a selected range of different disciplines have incorporated these aforementioned concerns – in what ways they are similar and how they differ (Part VI).

The benefit to readers lies in the ability to quickly source comprehensive information, key protagonists, the main issues, and the principal challenges in the field. We expect our readers to include experienced researchers keen to assess their own perspectives in relation to others in the field, as well as novice researchers aiming to scope the field, perhaps for the first time. Both should find the handbook of enduring interest and practical benefit. It will save a great deal of time in sourcing the disparate material available in these fields. We also hope the handbook will be of value to research advisors, funding agencies, policymakers, postgraduate supervisors and lecturers, and the research reviewers who serve on research ethics review committees and research governance boards.

Much of this material is concerned with topics and issues of longstanding interest and concern – but much of it looks at them in ways that strive to accommodate the innovations occurring in this flexible and dynamic field. Those experienced in the field might find some of this material new and challenging; but that is as it should be, since ethics is, or should be, about engaging debate on the pressing moral issues of our time – and nowhere more so than in scientific and scholarly research.

Chatelaillon Plage, France
March 2020
Ron Iphofen
Acknowledgments
It is chastening, when nearing the print production of a volume such as this, to recall just how long the process took, how many people were involved, and, not least, how dependent an editor is on loyal, reliable, and supportive colleagues. The concept for this handbook began in 2017. Now, 3 years later, we have something we can genuinely be proud of.

So I first wish to thank the commissioning team at Springer Nature – Mokshika Gaur, Floor Oosting, and Christopher Wilby – for understanding and helping tease out the essential nature of the direction we needed to take for the handbook. Crucial to the whole production process – contacting authors, spotting errors, and keeping everyone on target in a polite, amiable, and efficient way (not easy) – was Shruti Datt. So many thanks, Shruti – it truly is "our" handbook.

For such a communal enterprise, the commitment I had from the authors was remarkable. Not enough career credit goes these days to books, and even less to book chapters – so to ensure that these works really did "add value" to our understanding of ethics and integrity in research, it was vital that the chapter authors delivered so thoroughly. I have learned that readers value committed, informed, and readable materials, whether book or book chapter, and are less concerned than the career hierarchy with the impact factor of a prestigious journal. So to all our authors, sincere thanks for your hard work and commitment to the project.

Peer review was conducted throughout in a comprehensive and constructive manner by my esteemed friends and colleagues on the editorial board – the quality of the final product owes much to their mentoring, critical commentary, and practical suggestions.
Finally, I cannot thank my wife, Carol, enough for her loyal support, her recognition of the sacrifice of family time this endeavor necessitated, and her understanding of how important this handbook was to my overall mission to help ensure research is conducted with integrity for the benefit of us all. Very many thanks to you all.
Contents
Volume 1

Part I  Introduction . . . 1

1  An Introduction to Research Ethics and Scientific Integrity . . . 3
   Ron Iphofen

Part II  Regulating Research . . . 15

2  Regulating Research . . . 17
   Ron Iphofen

3  Research Ethics Governance . . . 33
   Mihalis Kritikos

4  Organizing and Contesting Research Ethics . . . 51
   Mark Israel

5  Research Ethics Codes and Guidelines . . . 67
   Margit Sutrop, Mari-Liisa Parder, and Marten Juurik

6  Protecting Participants in Clinical Trials Through Research Ethics Review . . . 91
   Richard Carpentier and Barbara McGillivray

7  Publication Ethics . . . 107
   Deborah C. Poff and David S. Ginley

8  Peer Review in Scholarly Journal Publishing . . . 127
   Jason Roberts, Kristen Overstreet, Rachel Hendrick, and Jennifer Mahar

9  Research Misconduct . . . 159
   Ian Freckelton Q. C.

10  Dual Use in Modern Research . . . 181
    Panagiotis Kavouras and Costas A. Charitidis

Part III  Key Topics in Research Ethics . . . 201

11  Key Topics in Research Ethics . . . 203
    Ron Iphofen

12  Informed Consent and Ethical Research . . . 213
    Margit Sutrop and Kristi Lõuk

13  Privacy in Research Ethics . . . 233
    Kevin Macnish

14  Biosecurity Risk Management in Research . . . 251
    Johannes Rath and Monique Ischi

15  Benefit Sharing . . . 263
    Doris Schroeder

16  Internet Research Ethics and Social Media . . . 283
    Charles Melvin Ess

17  Research Ethics in Data: New Technologies, New Challenges . . . 305
    Caroline Gans Combe

18  A Best Practice Approach to Anonymization . . . 323
    Elaine Mackey

19  Deception . . . 345
    David Calvey

Part IV  Research Methods . . . 369

20  Ethical Issues in Research Methods . . . 371
    Ron Iphofen

21  On Epistemic Integrity in Social Research . . . 381
    Martyn Hammersley

22  Ethical Issues in Data Sharing and Archiving . . . 403
    Louise Corti and Libby Bishop

23  Big Data . . . 427
    Anja Bechmann and Jiyoung Ydun Kim

24  Ethics of Ethnography . . . 445
    Martyn Hammersley

25  Experimental Design . . . 459
    Jonathan Lewis

26  Ethics of Observational Research . . . 475
    Meta Gorup

27  Creative Methods . . . 493
    Dawn Mannay

28  Ethics of Discourse Analysis . . . 509
    Meta Gorup

29  Feminist Research Ethics . . . 531
    Anna Karin Kingston

Volume 2

Part V  Subjects and Participants . . . 551

30  Acting Ethically and with Integrity for Research Subjects and Participants . . . 553
    Ron Iphofen

31  Ethical Issues in Community-Based, Participatory, and Action-Oriented Forms of Research . . . 561
    Adrian Guta and Jijian Voronka

32  "Vulnerability" as a Concept Captive in Its Own Prison . . . 577
    Will C. van den Hoonaard

33  Researcher Emotional Safety as Ethics in Practice . . . 589
    Martin Tolich, Emma Tumilty, Louisa Choe, Bryndl Hohmann-Marriott, and Nikki Fahey

34  Research Involving the Armed Forces . . . 603
    Simon E. Kolstoe and Louise Holden

35  Research Ethics, Children, and Young People . . . 623
    John Oates

36  Older Citizens' Involvement in Ageing Research . . . 637
    Roger O'Sullivan

37  Disability Research Ethics . . . 655
    Anne Good

38  The Ethics of Research and Indigenous Peoples . . . 675
    Lily George

39  Ethical Research with Hard-to-Reach Groups . . . 693
    John Sims

40  Queer Literacy . . . 707
    Mark Edward and Chris Greenough

41  Ethics and Integrity for Research in Disasters and Crises . . . 719
    Dónal P. O'Mathúna

Part VI  Disciplines and Professions . . . 737

42  Ethics and Integrity in Research . . . 739
    Ron Iphofen

43  A Professional Ethics for Researchers? . . . 751
    Nathan Emmerich

44  Sociology and Ethics . . . 769
    Lisa-Jo K. van den Scott

45  Ethical Considerations in Psychology Research . . . 783
    John Oates

46  Ethics and Scientific Integrity in Biomedical Research . . . 803
    Léo Coutellec

47  The Ethics of Anthropology . . . 817
    Marc Brightman and Vanessa Grotti

48  Constructive and Enabling Ethics in Criminological Research . . . 835
    Vasileios Karagiannopoulos and Jane Winstone

49  Ethical Dilemmas in Education Research . . . 851
    Ros Brown

50  Ethics in Political Science Research . . . 873
    Daniela R. Piccio and Alice Mattoni

51  Ethics and the Practice of History . . . 889
    Catherine J. Denial

52  Consuming Images, Ethics, and Integrity in Visual Social Research . . . 899
    Helen Lomax

53  Research Ethics in Economics and Finance . . . 917
    Caroline Gans Combe

54  Ethics Inside and Outside the Physics Lab . . . 937
    Marshall Thomsen

55  Responsible Conduct of Research (RCR) . . . 955
    Philip R. DeShong

56  Engineering Research and Ethics . . . 967
    Michael Davis

57  Quest for Ethical and Socially Responsible Nanoscience and Nanotechnology . . . 983
    Costas A. Charitidis

58  Business Ethics Research and Research Ethics in Business Research . . . 999
    Deborah C. Poff

59  Research Ethics and Scientific Integrity in Neuroscience . . . 1013
    Jon Leefmann and Michael Jungert

60  Linguistics: Community-Based Language Revitalization . . . 1037
    Nariyo Kono

61  Ethics and Integrity in Nursing Research . . . 1051
    Edie West

62  Holocaust as an Inflection Point in the Development of Bioethics and Research Ethics . . . 1071
    Stacy Gallin and Ira Bedzow

63  Research Ethics in Sport and Exercise Science . . . 1091
    Julia West

64  Research Ethics in Investigative Journalism . . . 1109
    Yvonne T. Chua

Index . . . 1127
About the Editor
Dr. Ron Iphofen is an Independent Research Consultant, a Fellow of the UK Academy of Social Sciences, the Higher Education Academy, and the Royal Society of Medicine. Since retiring as Director of Postgraduate Studies in Health Sciences at Bangor University, his major activity has been as an adviser to several agencies within the European Commission for both the Seventh Framework Programme (FP7) and Horizon 2020, and a range of governmental and independent research institutions internationally such as in France for L’Agence nationale de recherche (ANR), in Canada for the Social Sciences and Humanities Research Council (SSHRC), and in Ireland for the National Disability Authority (NDA) of the Ministry of Justice. He was Vice Chair of the UK Social Research Association and convenes their Research Ethics Forum. He has advised the UK Research Integrity Office and the UK Parliamentary Office of Science and Technology among many others. He has advised on several major EC projects including the RESPECT project (on pan-European standards in the social sciences) and SECUR-ED (on passenger transport security). He currently leads a 3-year EU-funded project influencing policy on research ethics and scientific integrity across all nonmedical sciences: the PRO-RES Project. Ron founded the gerontology journal Quality in Ageing and Older Adults. His books include Ethical Decision Making in Social Research: A Practical Guide, Palgrave Macmillan (2009/2011), and the book series Advances in Research Ethics and Integrity, Emerald (2017); he coedited with Martin Tolich The SAGE Handbook of Qualitative Research Ethics (2018).
List of Associate Editors
Pamela Andanda University of Witwatersrand Johannesburg, South Africa
Robert Dingwall Dingwall Enterprises Ltd. Nottingham, UK
Marian Duggan University of Kent Canterbury, UK
Nathan Emmerich Australian National University Canberra, Australia
Anne Good Disability Federation of Ireland (DFI) Dublin, Ireland
Martyn Hammersley The Open University Milton Keynes, UK
François Hirsch Member of the Inserm Ethics Committee Paris, France
Mark Israel Australasian Human Research Ethics Consultancy Services Perth, Australia
Kath Melia Emeritus Professor University of Edinburgh Edinburgh, UK
John Oates The Open University Milton Keynes, UK
Arleen L. Salles Centre for Research Ethics & Bioethics (CRB) Uppsala, Sweden
Jackie Leach Scully Disability Innovation Institute University of New South Wales Kensington, Australia
Margit Sutrop University of Tartu Tartu, Estonia
Colin J. H. Thomson Australasian Human Research Ethics Consultancy Services Pty Ltd. Canberra, Australia
Yanya Viskovich Data Privacy Lawyer Zurich, Switzerland
Contributors
Anja Bechmann  DATALAB, Center for Digital Social Research, Aarhus University, Aarhus N, Denmark
Ira Bedzow  Biomedical Ethics and Humanities Program, New York Medical College, New York, NY, USA
Libby Bishop  Data Archive for the Social Sciences (GESIS), Cologne, Germany
Marc Brightman  Department of Cultural Heritage, University of Bologna, Ravenna, Italy
Ros Brown  Ilkley, West Yorkshire, UK
David Calvey  Department of Sociology, Manchester Metropolitan University, Manchester, UK
Richard Carpentier  Research Ethics Board, Children Hospital of Eastern Ontario and Hôpital Montfort, Ottawa, ON, Canada; Département de médecine, Faculté de médecine et des sciences de la santé, Université de Sherbrooke, Sherbrooke, QC, Canada
Costas A. Charitidis  School of Chemical Engineering, R-Nanolab, National Technical University of Athens, Athens, Greece
Louisa Choe  Sociology, University of Otago, Dunedin, New Zealand
Yvonne T. Chua  Department of Journalism, University of the Philippines, Quezon City, Philippines
Louise Corti  UK Data Archive, University of Essex, Colchester, UK
Léo Coutellec  Research Laboratory in Ethics and Epistemology (R2E), CESP, INSERM U1018, Université Paris-Saclay, Paris, France
Michael Davis  Humanities Department, Illinois Institute of Technology, Chicago, IL, USA
Catherine J. Denial  Knox College, Galesburg, IL, USA
Philip R. DeShong  Department of Chemistry and Biochemistry, University of Maryland, College Park, MD, USA
Mark Edward  Department of Performing Arts, Edge Hill University, Ormskirk, Lancashire, UK
Nathan Emmerich  Australian National University, Canberra, Australia
Charles Melvin Ess  Department of Media and Communication, University of Oslo, Oslo, Norway
Nikki Fahey  Graduate Research School, University of Otago, Dunedin, New Zealand
Ian Freckelton Q. C.  Law Faculty, University of Melbourne, Melbourne, Australia; Queen's Counsel, Melbourne, VIC, Australia; Law and Psychiatry, University of Melbourne, Melbourne, VIC, Australia; Forensic Medicine, Monash University, Melbourne, VIC, Australia
Stacy Gallin  Maimonides Institute for Medicine, Ethics and the Holocaust, Freehold, NJ, USA; Center for Human Dignity in Bioethics, Health and the Holocaust, Misericordia University, Dallas, PA, USA
Caroline Gans Combe  INSEEC U. Institut National d'Etudes Economiques et Commerciales, Paris, France
Lily George  Victoria University of Wellington, Wellington, New Zealand
David S. Ginley  Materials and Chemistry Science and Technology, National Renewable Energy Laboratory, Golden, CO, USA
Anne Good  Chairperson Disability Research Hub, Disability Federation of Ireland (DFI), Dublin, Ireland
Meta Gorup  Ghent University, Ghent, Belgium
Chris Greenough  Department of Secondary Education, Edge Hill University, Ormskirk, Lancashire, UK
Vanessa Grotti  Robert Schuman Centre for Advanced Study, European University Institute, Florence, Italy
Adrian Guta  School of Social Work, University of Windsor, Windsor, ON, Canada
Martyn Hammersley  Open University, Milton Keynes, UK
Rachel Hendrick  Glasgow, UK
Bryndl Hohmann-Marriott  Sociology, University of Otago, Dunedin, New Zealand
Louise Holden  British Army, Andover, UK
Ron Iphofen  Chatelaillon Plage, France
Monique Ischi  Department Integrative Zoology, University of Vienna, Vienna, Austria
Mark Israel  Australasian Human Research Ethics Consultancy Services, Perth, WA, Australia; Murdoch University, Perth, WA, Australia; University of Western Australia, Perth, WA, Australia
Michael Jungert  Center for Applied Philosophy of Science and Key Qualifications, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
Marten Juurik  Centre for Ethics, University of Tartu, Tartu, Estonia
Vasileios Karagiannopoulos  Institute of Criminal Justice Studies, University of Portsmouth, Portsmouth, UK
Panagiotis Kavouras  School of Chemical Engineering, R-Nanolab, National Technical University of Athens, Athens, Greece
Jiyoung Ydun Kim  DATALAB, Center for Digital Social Research, Aarhus University, Aarhus N, Denmark
Anna Karin Kingston  School of Applied Social Studies, University College Cork, Cork, Ireland
Simon E. Kolstoe  University of Portsmouth, Portsmouth, UK
Nariyo Kono  Center for Public Service and University Studies, Portland State University, OR, USA
Mihalis Kritikos  Institute of European Studies-Vrije Universiteit Brussel (VUB), Brussels, Belgium
Jon Leefmann  Center for Applied Philosophy of Science and Key Qualifications, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
Jonathan Lewis  Institute of Ethics, School of Theology, Philosophy and Music, Faculty of Humanities and Social Sciences, Dublin City University, Dublin, Ireland
Helen Lomax  School of Education and Professional Development, University of Huddersfield, Huddersfield, England
Kristi Lõuk  Centre for Ethics, University of Tartu, Tartu, Estonia
Elaine Mackey  Centre for Epidemiology Versus Arthritis, Faculty of Biology, Medicine and Health, University of Manchester, Manchester, UK
Kevin Macnish  Department of Philosophy, University of Twente, Enschede, The Netherlands
Jennifer Mahar  Pembroke, MA, USA
Dawn Mannay  School of Social Sciences, Cardiff University, Wales, UK
Alice Mattoni  Department of Political and Social Sciences, University of Bologna, Bologna, Italy
Barbara McGillivray  Department of Medical Genetics, University of British Columbia, Vancouver, BC, Canada
Dónal P. O'Mathúna  School of Nursing, Psychotherapy and Community Health, Dublin City University, Dublin, Ireland; College of Nursing, The Ohio State University, Columbus, OH, USA
Roger O'Sullivan  Ageing Research and Development Division, Institute of Public Health in Ireland and Ulster University, Belfast/Dublin, Ireland
John Oates  The Open University, Milton Keynes, UK
Kristen Overstreet  Arvada, CO, USA
Mari-Liisa Parder  Centre for Ethics, University of Tartu, Tartu, Estonia
Daniela R. Piccio  University of Torino, Turin, Italy
Deborah C. Poff  Leading with Integrity, Ottawa, ON, Canada
Johannes Rath  Department Integrative Zoology, University of Vienna, Vienna, Austria
Jason Roberts  Ottawa, ON, Canada
Doris Schroeder  School of Health Sciences, University of Central Lancashire, Preston, UK
John Sims  Substance Misuse Service, Betsi Cadwaladr University Health Board, Caernarfon, Wales, UK
Margit Sutrop  Department of Philosophy, University of Tartu, Tartu, Estonia
Marshall Thomsen  Department of Physics and Astronomy, Eastern Michigan University, Ypsilanti, MI, USA
Martin Tolich  University of Otago, Dunedin, New Zealand
Emma Tumilty  Institute for Translational Sciences, University of Texas Medical Branch at Galveston, Galveston, TX, USA
Will C. van den Hoonaard  Department of Sociology, University of New Brunswick, Fredericton, NB, Canada
Lisa-Jo K. van den Scott  Department of Sociology, Memorial University of Newfoundland, St. John's, NL, Canada
Jijian Voronka  School of Social Work, University of Windsor, Windsor, ON, Canada
Edie West  Nursing and Allied Health Department, Indiana University of Pennsylvania, Indiana, PA, USA
Julia West  School of Sport and Exercise Science, University of Worcester, Worcester, UK
Jane Winstone  Institute of Criminal Justice Studies, University of Portsmouth, Portsmouth, UK
Part I Introduction
1

An Introduction to Research Ethics and Scientific Integrity

Ron Iphofen

Contents
Introduction . . . 4
Selectivity . . . 5
Observations . . . 6
Reasons for Concern . . . 6
An Anti-science Political Agenda . . . 9
Conclusion: Extending the Area of Sanity . . . 12
References . . . 13
Abstract
This chapter outlines the aims for the handbook. A main aim is to be a first point of contact for contemporary information, issues, and challenges in the fields of research ethics and scientific integrity. It is aimed at researchers, reviewers, and policymakers to help them pursue the best ways forward in seeking ethics and integrity in all research across disciplines, methods, subjects, participants, and contexts. The authors form a global network of scholars, practitioners, and researchers with a range of experience and insights that scope a challenging field but one that is vital to the maintenance of research standards and public confidence in science. Fact-based policymaking remains under threat from political and ideological pressures. Scientists and researchers in all disciplines and professions hold a clear responsibility to protect their subjects, research participants, and society from pressures, interests, and prejudices that risk undermining the value of their work. This overview outlines how the handbook is constructed and how readers might gain from it.
R. Iphofen (*) Chatelaillon Plage, France e-mail: [email protected] © Springer Nature Switzerland AG 2020 R. Iphofen (ed.), Handbook of Research Ethics and Scientific Integrity, https://doi.org/10.1007/978-3-030-16759-2_62
Keywords
Research ethics · Scientific integrity · Policymaking · Evidence · Conflicts of interest · Multidisciplinary · Research governance · Ethics review
Introduction

While many common elements of ethics and integrity are shared by disciplines and research professions, there are also some specific and unique challenges to each discipline in terms of how it addresses research ethics and its own scientific integrity. Dealing with this requires the perspective and insights of those versed in the relevant discipline. But there are also those with expertise in understanding the problems that commonly occur across disciplines, such as consent, privacy, data management, fraud, plagiarism, and so on. These generic issues need to be understood and then unveiled in specific cases. As a consequence, the sectionalized handbook reflects these intra-, inter-, and multidisciplinary concerns. However, while one must be receptive to the possibility of using ideas and good practice across jurisdictions and disciplines, one must equally be able to resist poorly conceived overgeneralizations or deliberate or reckless "ethical imperialism" (Schrag 2010). The five sections that divide the handbook are intentionally structured in sequence to build understanding: from the overarching problems of research governance, through specific recurring issues, topics, and research methods, to a consideration of how these problems relate to different categories of research subject or research participant, concluding with a focus on the concerns within a selection of key disciplines. Together with the members of the Editorial Advisory Board, we have drawn upon a global network of colleagues to author these chapters – seeking to recruit the best in the field in each case. We linked with these scholars, practitioners, and researchers to ensure they could remain connected and clear on the mission to deliver relevant and topical coverage as we incrementally constructed the handbook.
The central aim and attraction of this major reference work is to provide advice and guidance across a wide range of disciplines and professions to help all readers in "thinking through" and applying their approach to difficult questions related to the principles, values, and standards they need to bring to their research practice. Researchers can usually benefit from help in identifying and articulating their immediate concerns, such as "How do I make my research as ethical as possible and ensure that I behave with integrity?" They need that for their own benefit, for the benefit of their research subjects or participants, and to meet the demands of research ethics review committees and the publishers and editors of journals. Similarly, journal editors need to ensure that they are publishing ethically "reliable" evidence; policymakers and ethics reviewers need to ensure their regulations and requirements are reasonable and facilitative of robust research; funding agencies need to have faith that their money is well spent, in keeping with their missions and in the hands of reliable agents. The discrete literatures on research ethics, research integrity, and publication ethics need to be brought together and their interconnectedness demonstrated.
1
An Introduction to Research Ethics and Scientific Integrity
5
Most contributors to and editors of this handbook share a special concern for how reviewers on research ethics committees understand their role. It is vital that, while they seek to preserve standards and ensure the rights of both participants and researchers are met, they also bear in mind the many different ways in which knowledge can be gained and the interests of society, groups, and communities advanced through the range of valid research methods now available. The expertise required of such reviewers is wide and varied. They must be informed about rapid developments in most disciplines and research professions, able to “test” the worthiness of an approach at the same time as facilitating its ethical implementation. They are also tasked with conducting a cost/benefit calculus as well as ensuring their assessment is proportionate to those risks and potential benefits. Ethics review is not just about participants and researchers but also about the ultimate users or beneficiaries of the research for which the reviewers give a favorable opinion enabling the research to proceed.
Selectivity

Reviewers of book proposals are often critical not so much of what a book covers as of what it omits. We trust that we have not “neglected” certain topics here, but this could never have been a complete encyclopedia and, as with all handbooks, its purpose needs to be extended in sensible ways. In all works of this nature, there has to be a decision about what to include and what can be “left out” – for now. To write about what has been excluded would imply a deliberate, targeted policy of omission; rather it has to do with the limits imposed on even large collections and the problem of selection. Most of the items we present here seemed essential to the collection, but their inclusion still depended upon recruiting the appropriate authors to deliver the material. Some topics were left out simply because we could not find just the “right” author for our goals. This does not deny the relevance of such topics and their need to be covered elsewhere. In some cases, they might already be well covered elsewhere, but there may still be a need for the succinct “first port of call” we are promising in this collection. Readers are welcome to offer suggestions for topics, disciplines, and issues to be covered in a further collection. In any case, the assumption that even an encyclopedia is comprehensive and all-inclusive is naïve. It is certainly the case that hardback collections would struggle to ensure comprehensiveness. Moreover, the assumption that such limitations can be overcome by collections found on the Internet or the World Wide Web is a myth; they suffer the same limitations of inclusion/exclusion and of assessing the quality of authorship (Iphofen 2017). The trouble is that one only discovers what has been left out when one searches for something specific and cannot find it.
Additionally, I recall my own astonishment, as a young student, that encyclopedia articles were actually written by a real person – a named scholar or group of scholars and hopefully eminent in their fields. I somehow thought they were written by a committee of experts and approved by an even more expert editorial board. We make no such claims for comprehensive coverage here, nor need we apologize for failings of inclusivity.
6
R. Iphofen
Observations

This was a big project – but it could not have been smaller to be effective. We have striven to minimize excessive repetition of coverage. Some repetition is in part unavoidable with a discipline-based approach as opposed to a topic- or subject-based approach. It is possible that some readers from some disciplines will not see some issues as “of their concern.” Here we are promoting a focus on as many disciplines as possible with the idea that – if you wish to read about the issues for the discipline of economics, let’s say – you would come here first. (And for many readers that might be all they need, depending on their level of interest.) The reviewers of the original proposal for this work appreciated how comprehensive it is to have chapters on art, history, literature, physics, and so on, but they suggested that it seems odd to have only one chapter for medicine and one for the life sciences when each of those areas has so many issues to cover. Public health research ethics alone would have enough material for one large handbook even without considering integrity issues. Our response was that some of these proposed chapters might not work – it depended on finding authors who operate in the field. For example, I suspect few art researchers are at all interested in human research ethics – but that does not mean they shouldn’t be! Perhaps there is an artist/researcher asking these questions; I tried but could not find one, so we did not deliver such a chapter. Medicine and life sciences were suggested simply so as not to omit the umbrella area. The difference between disciplines and professions with regard to ethics and integrity is addressed in an overview chapter for Part VI – on “Disciplines and Professions”. We do need to ask why investigative journalists adopt practices that most social scientists cannot or would not be permitted to engage in.
If one looks at the Committee on Publication Ethics (COPE) case examples and their deliberative work, one can see that many differences of opinion remain in publication ethics and that they are not consistent within or between disciplines. The selection I proposed for this final section was made both to be as widespread as possible and to test some unusual areas – such as music. Can music research offend the participant or bring harm to subjects? And economists who have told me there are no ethics issues in econometric or macroeconomic research clearly have not read the work of the Freakonomics authors (Levitt and Dubner 2005, 2009).
Reasons for Concern

Before becoming acquainted with the engaged writing this handbook has to offer, there may be some who wonder why ethics and integrity in research actually matter. They might consider it an indulgence of moral philosophy or something that scientists and researchers simply have to address in order to be allowed to continue to pursue their activities – at worst an administrative hurdle that simply has to be dealt with. Perhaps it is just something that researchers “ought” to do to prove that they are morally responsible, trustworthy, reliable, and worth funding.
It is not merely for any of those reasons that ethics and integrity matter. Our health, well-being, freedom, and human rights – even the survival of humans, other species, and the planet as we know it – depend upon getting the integrity and ethics of research right. The reason is that there should never be challenges to scientific findings that could lead anyone to doubt the integrity of those findings and of those who produced those results. It is entirely possible that erroneous conclusions could be drawn from rigorous research efforts, but that should never be as a consequence of malicious intent, conflicts of interest, or simple incompetence. Just as important is how those findings are disseminated, applied, or used. Modern digital technology permits the collection of massive amounts of data about people, their thoughts, beliefs, and actions. It is crucial to the maintenance of the integrity of the science of information technology that great care is taken about how such data are gathered, how they come to be used, and who is permitted to use them. The kinds of things we should be concerned about are many and varied across research fields, professions, and disciplines. In historical research, there are some who deny the Holocaust. To do so requires rigorous evidence that challenges the extensive documentary archives and oral testimony that demonstrate the facts of the extermination of six million Jews and others stigmatized and denigrated by the Nazis before and during WWII (Evans 2002). Statist interventions cannot be permitted to undermine valid and balanced research – thus the draft legislation proposed in Poland in 2018 called for fines or prison sentences of up to 3 years for anyone intentionally attempting to attribute the crimes of Nazi Germany to the Polish nation, for example, by referring to “Polish death camps” (Gross 2018). Such a denial cannot merely be based on apparently blind prejudice and empty challenges to existing evidence and literature. 
In environmental sciences, there are those who challenge the evidence for climate change and/or question human responsibility for it; those who continue to provide the overwhelming evidence to the contrary must be beyond reproach in both their motives and their methods, and the same must apply to those who challenge such evidence – a requirement of responsible behavior must apply to all who comment on such issues. In health and medical research, distortions of scientific results have resulted from the massive influence of corporate financial interests, primarily from pharmaceutical companies (Goldacre 2001). In research into social inequality, the measurable evidence base against which growing inequality is assessed is constantly challenged. If there are purported social, ethnic, and cultural differences in something we label as “intelligence,” for example, we must be sure that the concept itself is valid and that such differences can be measured accurately; otherwise, we should not dwell on such inequalities unless they are shown to be important and to make a difference to policy and social action. In food and nutrition sciences, there is extensive industrial lobbying that undermines independent assessment of what constitutes the healthiest food sources for humans (Ioris 2015). There is heavy interference in any attempts to identify detrimental practices in agribusiness – such as chlorinated chicken and the introduction of a terminator gene in GM products. In public health, debate on whether or not vaccination offers the best/most effective method for enhancing population immunity to specified infections and diseases has gone on for over a century and continues today
(Fuhrman 2013). If there are claims that challenge a dominant scientific perspective, we must be clear about the grounds on which such claims can be made. So, what exactly is the status of any evidence for, say, opposing the facts of climate change; denying the Holocaust; challenging the extent of environmental pollution; linking “intelligence” to genetics or ethnicity; rejecting vaccination as a means of immunization; and so on? Are such challenges entirely divorced from prejudice, personal or corporate biases, or state and financial interests? Of course this is not meant to imply that all research for major corporations or the military is automatically unethical. Some might hold such a view and consider that all research should be funded by public bodies or foundations – anything else being considered corrupt and untrustworthy. At the same time we would also want to recognize that diversity in funding sources carries potential benefits because it prevents the creation of groupthink and other monopolies of thought and practice. Neither would we want to suggest that large corporations or the military are not concerned about the ethics of the research they commission. National defense is a legitimate purpose of government, and we would all prefer that the military act on the basis of the best available research evidence. Public customers can also exercise undue influence over researchers, especially when a civil servant might have to tell a minister that the commissioned research has produced the “wrong” answer in terms of the minister’s ideology. The difficulties of addressing these issues become more complicated with the growing awareness of their interconnectedness and their nonlocalized relevance. Perhaps they were never truly separate but might have been seen as such for the benefit of disciplinary reductionism and the convenience of professional domain protections. And local problems were left as such to minimize the dangers of spreading them elsewhere.
However, in a globalized world, these problems cannot remain isolated – hence disease, terrorism, fundamentalism, racism, migration, etc. are all necessarily of growing international concern, and the disciplines which study them must necessarily be interconnected. This creates further problems in that flaws in any one field can undermine the integrity of all: one fraudulent paper casts doubt, or can be used to cast doubt, on the motives of researchers, and one unguarded e-mail undermines the integrity of scientists. Without robust defense of research integrity, the cumulative effect can be to discredit science and scientific expertise in general and the honor of those who practice it. No one should be in a position to cast doubt on the sincerity, integrity, and honor of researchers in order to undermine their evidence. This means that researchers must ensure their integrity as far as they are able. The only challenges to scientific findings should be scientific – there should be no valid grounds to question findings in an ad hominem way, and part of the resistance to such challenges lies in the hands of researchers themselves. Most importantly, policymakers should have no grounds – at least none based on the ethics and integrity of knowledge production – to question the reliability of findings. Thus, policy outcomes based on scientific evidence should never be open to challenge by those who would cast doubt on the integrity of the people who produced the evidence on which the policy was based. While the science can be “wrong” on the grounds of rigorous and valid evidence, the scientist must strive
never to be “morally wrong” – whenever they fail in this, however rarely, the status of valid evidence is put at risk. This does not mean that maintaining scientific integrity, and judging whether or not it has been maintained, are straightforward matters. Neither does it mean that scientific findings should be accepted uncritically. The reason we need to think ethically and with integrity is that the consequences of scientific findings – the advancement of knowledge and innovations in technology and method – must be critically assessed in as balanced and detached a way as possible and their implications considered. If you invent the means for nuclear fission, whether for weapons or as a source of energy, it is vital to consider that it might be used and how it might be used – for either or both of those purposes. The assessment of the ethical impact of any research findings, including technological or methodological innovation, is one of the most challenging sets of issues we confront. All innovation must face the ethical hurdle of what has become known as the Collingridge (1980) dilemma: impacts cannot be easily predicted until a technology is extensively developed and widely used, yet controlling or changing it is difficult once the technology has become entrenched. It might seem wise to recall the aphorism that “just because we CAN do something does not mean that we SHOULD,” but simply suggesting the application of a precautionary principle is not enough, since it can prove obstructive of potentially beneficial outcomes. It is necessary to think more profoundly: to question, from the outset, the reasons for investigating a particular field of enquiry, who might conduct that investigation, what their motives might be, how they go about doing it, and, once they have produced results, what they do with that knowledge – not in an ad hominem manner, but in terms of the rationale for engaging in the research and any applications that might arise from it.
The more we engage with these issues, the less easily actions based purely on ideology, prejudice, and opinion will be able to influence policymaking that affects the health of human populations, the effective production and use of energy, and the survival of other species and of the planet as we know it. During the preparation of this handbook, examples of ignorance of research evidence and a determination to sidestep reliable science were all too easily found in policies implemented by the US administration of 2016–2020.
An Anti-science Political Agenda

Of course, the dangers we have in mind here can occur in any epoch and in any country. During my own career, combined political and scholarly forces almost dismantled the main funding body for social science research in the UK: the Social Science Research Council (SSRC). It was only saved by a change of name to the Economic and Social Research Council (ESRC) – removing the “science” label since, in the opinion of its critics, the disciplines it covered were not “entitled” to be regarded as “scientific” (Posner 2006). More recently, while this handbook was in preparation, the Union of Concerned Scientists (UCS), a nonprofit alliance of scientists in the USA (see: https://www.ucsusa.org/about-us), had collected evidence
on the multiple ways in which they saw the administration of President Trump attacking science and seeking to undermine the scientific process. The UCS called upon the administration and a then Republican-dominated Congress to ensure that science-based agencies be led by officials with a strong track record of respecting science as a critical component of decision-making. Nevertheless, the administration selected leaders for science-based agencies who were seen as unqualified, conflicted, and/or openly hostile to the mission of their agency. Some appointees held views in opposition to the agencies they were appointed to lead; others, put in charge of overseeing the regulation of industry, came directly from the industries they were supposed to regulate – a clear conflict of interest. Some agency leaders had been openly hostile to scientists. For example, Scott Pruitt, initially appointed to head the Environmental Protection Agency (EPA), was entangled in a decision to hire Definers Public Affairs (DFA) to deal with the agency’s press coverage. But DFA had previously been involved in targeting EPA staff who expressed personal views not in line with those of the Trump administration by submitting Freedom of Information Act (FOIA) requests for their emails. This sent a chilling signal to staff not to speak out against any wrongdoing within the EPA. The administration attempted to dismantle science-based policymaking by undermining and hollowing out bedrock public health and environmental laws – such as the Clean Air Act and the Endangered Species Act – and other regulations, all with strong scientific foundations, that had enabled agencies to collect and draw freely upon scientific data to carry out the statutory responsibilities established by these laws. The Department of the Interior removed the term “climate change” from its new strategic plan.
At the EPA, political appointee John Konkus explicitly removed reference to “climate change” from all grants issued by the agency. Pruitt also directed that no individuals receiving EPA grant funding could serve on the Science Advisory Board, which thus excluded many qualified experts in the sciences from serving. The UCS has shown how federal scientists have been attacked, censored, reassigned to undertake tasks not affiliated with their expertise, and prevented from attending conferences in their field of expertise. Perhaps the most devastating impact of all, however, is that these actions created a hostile work environment for agency scientists that stoked fear, resulted in self-censorship, lowered staff morale, led to the departure of experts from public sector agencies, and sent a chilling message to scientists across the country that their work was not valued. This steadily developing anti-science culture seems part of the populist movements that have emerged globally in recent years. Balanced evidence untarnished by biased opinion is a threat to political forces motivated and supported by the impassioned emotions of an ideology steeped in prejudice. Even the UK Secretary of State for Justice, Michael Gove, in June 2016 could claim: “I think the people of this country have had enough of experts...,” and while there are limits to scientific expertise, the moral climate must be such that a more balanced set of arguments can be promoted. The economics professor Richard Portes argued that “...distrust has been encouraged by those who have vested interests in discrediting experts because they want to advance a particular agenda – be that in the field of economics, climate change, health or whatever – which may conflict with what expert opinion would
be... In too many cases, politicians and representatives of interest groups say they’re looking for evidence-based policy, when in truth they’re looking for policy-based evidence. If the evidence that comes from experts doesn’t accord with their view of the world, they’re prepared simply to shelve it” (Portes 2017). There are many other UK examples – an extreme right-wing clique in the UK Parliament can call itself the European Research Group (ERG) when all its focus is on anything that negates the value of the European Union. There is a think tank with the neutral-sounding title of the Institute of Economic Affairs which promotes a right-wing agenda and refuses to disclose who funds it. Not that there are no similar organizations on the “left” and “center.” The ability to promote such an agenda and to exploit public dissatisfactions was well illustrated by the Cambridge Analytica scandal (Cadwalladr 2018). This spinoff company from an academic research group cut across the worlds of market research, psychological profiling, and political lobbying. Their skills in data mining had been sought and used by political and state defense agencies in the USA and UK and by large corporations worldwide – including in Russia. They were, in essence, research “mercenaries” – employed to gather data which could influence opinion on a massive scale and immune to the need for transparency, balance, and justice in their exploitation of data. One could argue that all independent research agencies are in some sense “mercenary” – willing to work for whoever can pay – but there are codes of research practice (such as that of the European Society for Market Research, ESOMAR 2016) which place responsibilities to “society” over and above those to a “client.” Cambridge Analytica, by contrast, were paid to ensure certain outcomes in the US presidential election of 2016 and the UK referendum on membership of the EU in 2016 (Cadwalladr 2019).
The implications of the Cambridge Analytica saga for public willingness to trust science and to cooperate in research are, as yet, unclear. Assurances about confidentiality are of course a key plank for ensuring informed consent, one of the basic pillars of conducting ethical research. Yet the links that Cambridge Analytica had with Facebook indicate the considerable power that such large social media organizations have gained and their ability to influence public opinion. Facebook continues to lobby globally and stridently against data privacy laws (https://www.theguardian.com/technology/2019/mar/02/facebook-global-lobbying-campaign-against-data-privacy-laws-investment?CMP=Share_iOSApp_Other). And “Research has found that fake news spreads faster on Twitter than does real news. Fake news stories are 70 per cent more likely to be retweeted than true stories, and it takes six times longer for true stories to reach any 1500 people than it does for fabrications. Serious data breaches at Facebook and Cambridge Analytica, allowing the mass profiling of users for political campaigning, are not only a fundamental breach of privacy but a grave threat to the future of democracy” (Hutton and Adonis 2018). More, not less, dialogue with the public is needed in order to help people understand the ethical standards that underpin research conduct and data linking for research purposes. In an attempt to address such concerns, the UK Nuffield Foundation established in 2018 the “Ada Lovelace Institute” to examine the profound ethical and social issues arising from the use of data, algorithms, and artificial
intelligence and to ensure they are harnessed for social well-being (see: http://www.nuffieldfoundation.org).
Conclusion: Extending the Area of Sanity

Throughout this handbook there are examples of how research ethics processes themselves can be employed to constrain research. Such constraints range from a lack of understanding of other disciplines, which leads to very limited notions of justice or autonomy being used in research ethics, to ethical guidelines being exploited to act in state interests, to the blocking, in the guise of research ethics, of research that criticizes the state. One example of the latter can be found in the Code of Research Ethics for the University of Malaya. The appended Explanatory Notes on Sensitive Issues state that “In the context of national security, sensitive issues mean any issue that can cause prejudice, hatred, enmity or contempt between or towards any ethnic or religious group and can affect public safety, national security and/or the integrity of the Government and is generally connected with the following acts or behaviour: (a) Questioning the implementation of certain government policies pertaining to economic development, education and social matters” (Accessed at: https://www.um.edu.my/docs/default-source/umrec/code-of-research-ethics.pdf?sfvrsn=2). At the University of Hertfordshire in the UK, university policy with regard to research ethics clearly implies the need to comply with UK immigration policy. On the issue of rewards for participation, it states: “Human Participants resident in the UK, to whom payment is to be made or who are to receive other rewards, must undergo right to work checks, prior to taking part in the study or activity, in accordance with current Home Office regulations and will not be permitted to participate in a study where this would place them in breach of Home Office regulations” (https://www.herts.ac.uk/__data/assets/pdf_file/0003/233094/RE01-Studies-Involving-Human-Participants.pdf).
While one could argue that this represents an institutional concern to comply with the law, it sets up a conflict with research methodology and perhaps even compromises academic freedom and ethical research seeking to understand and explain the problems facing migrants. It certainly places a requirement on the researcher that threatens to damage the quality of the research. Every step in the right direction, no matter how small, builds upon the core values of science: to share knowledge in a cooperative manner, to hold a healthy skepticism about all theories and findings, to seek knowledge that has a universal relevance and applicability, and, again, to ensure no personal or vested interests obscure or prevent new knowledge from being aired. Only in an incremental way can we accomplish George Orwell’s plea to “...extend the area of sanity little by little.” Just because we cannot do everything does not mean that we should do nothing, and neither is it possible for “good science” to provide all the answers we need. At least we may be able to give the lie to the ludicrous notion of “alternative facts” based on “fake news.” News has always been prone to fakery since its core purpose and motive are not science or even the
provision of balanced evidence; it still has to “sell” its product, and it is necessarily “manufactured” (Cohen and Young 1973). But if they can resist the constraints of some newspaper owners and some editors, journalists are still capable of seeking truths within an infrastructure that could limit their best goals – to do some good. Once a few stones are maliciously pushed down the hill, it is hard to stop an avalanche of evil intent. But conducting research that is ethical, free from challenges to the integrity of individuals, is one way of resisting the worst consequences of that avalanche. If we can agree on our fundamental values and find the best ways to implement them, there will be few opportunities for falsehoods to be purveyed, and reliable, robust, and honorable research findings will be able to be shared and applied in the interests of a safe, secure, and sustainable world.
References

Cadwalladr C (2018) Our Cambridge Analytica scoop shocked the world. But the whole truth remains elusive. The Guardian, 23 December. https://www.theguardian.com/uk-news/2018/dec/23/cambridge-analytica-facebook-scoop-carole-cadwalladr-shocked-world-truth-still-elusive
Cadwalladr C (2019) Cambridge Analytica a year on: ‘a lesson in institutional failure’. The Guardian, 17 March. https://www.theguardian.com/uk-news/2019/mar/17/cambridge-analytica-year-on-lesson-in-institutional-failure-christopher-wylie
Cohen S, Young J (1973) The manufacture of news: a reader. Sage, London
Collingridge D (1980) The social control of technology. Frances Pinter, London
ESOMAR (2016) https://www.esomar.org/what-we-do/code-guidelines
Evans RJ (2002) Telling lies about Hitler: the Holocaust, history and the David Irving trial. Verso, London
Fuhrman J (2013) Super immunity. HarperCollins, London
Goldacre B (2001) Bad pharma. HarperCollins, London
Gross J (2018) Poland’s death camp law is designed to falsify history. Financial Times, 6 February. https://www.ft.com/content/1c183f56-0a6a-11e8-bacb-2958fde95e5e
Hutton W, Adonis A (2018) Saving Britain: how we must change to prosper in Europe. Abacus, London
Ioris AAR (2015) Cracking the nut of agribusiness and global food insecurity: in search of a critical agenda of research. Geoforum 63:1–4. https://doi.org/10.1016/j.geoforum.2015.05.004
Iphofen R (2017) Epistemological metaphors: orders of knowledge and control in the Encyclopedist myths of cyberspace. International Journal of Humanities and Cultural Studies (ISSN 2356-5926) 4(2):122–141. https://www.ijhcs.com/index.php/ijhcs/article/view/3128
Levitt SD, Dubner SJ (2005) Freakonomics: a rogue economist explores the hidden side of everything. Allen Lane, London
Levitt SD, Dubner SJ (2009) SuperFreakonomics: global cooling, patriotic prostitutes, and why suicide bombers should buy life insurance. Harper Collins, New York
Portes R (2017) Who needs experts? The London Business Review, 9 May. https://www.london.edu/faculty-and-research/lbsr/who-needs-experts
Posner M (2006) Social sciences under attack in the UK (1981–1983). La revue pour l’histoire du CNRS 7. http://histoire-cnrs.revues.org/547; https://doi.org/10.4000/histoire-cnrs.547
Schrag Z (2010) Ethical imperialism: institutional review boards and the social sciences, 1965–2009. The Johns Hopkins University Press, Baltimore
Part II Regulating Research
2
Regulating Research Governance and Ethics Across the Spectrum

Ron Iphofen
Contents

Introduction: Separating Governance from Ethics
Ethics and Integrity in Research
What Exactly Is Research? And Why Does It Matter?
The Research Spectrum
The Challenge to Research Ethics Review
Conclusion: Influencing Policy
References
Abstract
Effective research ethics review must retain its independence from corporate interests. Institutionalized risk aversion can hinder scientific advance. The task of research ethics review processes is to safeguard researchers and their subjects/participants through enhanced risk awareness. Ethics and integrity are difficult to separate, overlapping issues of concern. The maintenance of ethical practice and scientific integrity relies upon an effective partnership between all stakeholders in research. To ensure equitable treatment of data gathering, management, and communication, the definition of what constitutes research should be broadened. But the problem of how to regulate or monitor the practices of agencies across the research “spectrum” remains. Large data-gathering corporations remain protective of their supposed self-regulation. Science, politics, and policymaking are inextricably linked, which further enhances the responsibility of scientists to fully understand and engage with the consequences of their research.
R. Iphofen (*)
Chatelaillon Plage, France
e-mail: [email protected]

© Springer Nature Switzerland AG 2020
R. Iphofen (ed.), Handbook of Research Ethics and Scientific Integrity, https://doi.org/10.1007/978-3-030-16759-2_52
Keywords
Research governance · Research ethics · Regulation · Compliance · Science and policymaking
Introduction: Separating Governance from Ethics

Governance is a broad term for the processes and systems used to guide institutions, companies, or even countries toward desired outcomes. There is a strong argument that the governance of a research project and the review of its adherence to ethical obligations should be kept quite separate. It is relatively easy to establish regulations governing how researchers should behave; the difficulty is in ensuring their compliance. Most researchers do not behave badly, but as with all transgressions, the few who misbehave create problems for the majority who don’t. The problem is how we can know what they are actually doing, since those responsible for monitoring research practice are rarely able to join researchers “in the field” – literally, or figuratively as “in the laboratory.” Effective monitoring of researchers’ behavior is a problem for the funding agency, the regulators, and the research-managing institution; it is primarily a problem of research governance.

It is of no help to the ethics reviewers of a research proposal if they are caught up in ensuring that regulations are followed or in seeking to protect the reputations of funders and corporate bodies such as universities. Instead, it is vital that they distance themselves from corporate concerns and think about how researchers can get their jobs done safely and with the highest concern for the interests of their research participants. When researchers complain about “obstructive” research ethics committees (RECs), it is usually because the REC has failed to balance research governance with independent ethics review. The severest criticisms have come from the USA, where RECs are constituted as IRBs (institutional review boards) (Schrag 2010, 2011; van den Hoonaard 2011; van den Hoonaard and Hamilton 2016).
IRBs have a combined brief to protect research subjects from harm but also to protect the research institution from any legal liabilities, and they must always have an eye on insurance indemnity premiums in case errors in research processes lead participants to bring a grievance claim. Necessarily this makes them “risk averse.” A truly independent REC requires heightened “risk awareness,” but excessive risk aversion could truly obstruct many scientific advances, which often require a degree of “managed risk.”

Good researchers will always think about the ethical issues of their project from the outset – from the inception of a research idea, through its conduct, to the communication of findings. What researchers need from ethics reviewers and/or advisors are the extra insights that can anticipate problems and concerns they might not have thought of. Traditionally, these were ethics experts, but increasingly research participants or their representatives get involved in assisting with planning research, as they are often experts within their own community who can anticipate problems for their constituency. Researchers may need additional advice, guidance, and, possibly, ongoing mentoring. Facilitative guidance from a REC could raise
researchers’ awareness of risk without becoming obstructive. At times research can, even must, challenge existing norms and regulations, and if it is unreasonably held back, then some of the risks that are a necessary element in the advancement of knowledge will not be taken, and the pursuit of social justice can be undermined.

Governance is an essential element in the legitimate management of research. But what must be guarded against is the pretense that a committee concerned with governance can also offer truly independent ethical advice. Governance must be done by an effective management process – if the institutional risk in conducting a project is too great, then it probably will not be allowed to take place. But the ethics of a project can be assisted by a REC owing no allegiance to any corporate body or any vested interests. This is best done by volunteers acting independently in the “public interest” – which means the interests of researchers and research participants and, of course, the community or society at large. In doing so RECs can truly “advise and guide” and perhaps also “warn” against dangers not originally perceived by the researcher.

One cannot overemphasize the value of transparency in understanding why ethically satisfactory research might be blocked for reasons of institutional liability, high insurance risk, or reputation management. Without a clear distinction between governance and independent ethics appraisal, institutions can use ethics approval to block research without ever having to face challenges over their judgments. Parties both higher and lower in institutional hierarchies may have vested interests in avoiding excessive transparency, or even just a public debate, since it could undermine ideological positions as well as threaten corporate image. Blanket bans on research in countries where the UK Foreign Office advises against travel offer a case in point.
University insurers simply apply such a ban, and there is no pressure to encourage universities to critique it and, possibly, develop some alternative insurance scheme. Evidently aid NGOs and journalists are able to work in the same countries with a more nuanced understanding of risk and their own customized insurance provision. An open and honest debate about these issues, and the way in which they contribute to the systematic creation of ignorance, cannot be engaged in while governance and ethics are rolled up together.

There are lessons for the integrity of research in all of this. Fraud, corruption, plagiarism, and the misuse of data (violations of research integrity) cannot be prevented by legislation alone. Someone with illicit intent will always find a way around the law if there is both incentive and opportunity to do so. Scientific integrity relies upon the maintenance of a practice culture which proscribes such bad behavior – researchers knowing when there are risks of stepping beyond moral and legal guidelines and seeking the advice of all those involved to resolve their problems. This suggests that the monitoring of scientific integrity should be a joint accomplishment of the research institution and the professional association to which a researcher belongs. However, since research outcomes do depend on a tacit partnership between all research collaborators, ethical research will necessarily be a mutual accomplishment of all these stakeholders: researchers, participants, funders, sponsors, and reviewers (Iphofen 2011, 2017).
Ethics and Integrity in Research

The combined coverage of research ethics and scientific integrity in this Handbook is intentional. Too often they are separated for what might only be regarded as administrative purposes. Work that is unethical lacks integrity – and any work that suffers in terms of its integrity cannot be regarded as an example of ethical research practice. A few chapters in this first section of the Handbook argue for their interconnectedness but also strive to define their differences. This concern reflects similar considerations made elsewhere (Parry 2017); hence we see no need to repeat this excellent discussion in this collection.

Definitions of what counts as integrity in research vary across countries, institutions, and disciplines. The variety might reflect whether emphasis is placed on detection and punishment or on education and culture. The dominant position in many jurisdictions dwells on notions of “misconduct”; where the definition has legal status (as in the United States) and is meant to hold researchers and institutions accountable, the acts and degree of intention associated with misconduct may be tightly demarcated. Where definitions are intended to promote broader values, the field may be conceived broadly. The All European Academies’ (ALLEA 2017) European Code of Conduct for Research Integrity refers to reliability, honesty, respect, and accountability. The authors of the influential Singapore Statement on Research Integrity (WCRIF 2010) argued for fundamental principles relating to honesty, accountability, professional courtesy and fairness, and good stewardship of research, as well as 14 professional responsibilities that together ought to transcend legitimate national and disciplinary differences. Research integrity has often been driven down to an administrative concern with fabrication, falsification, and plagiarism (FFP) and so gets caught up with research governance issues.
Research ethics is perhaps best seen as the overarching set of moral values, virtues, principles, and standards which should act as the guide for “good” research practice. Ethical decision-making in research attempts to balance the potential for benefit against any harms that may arise for research subjects, participants, and, indeed, society as a whole. In many respects the conceptual difference between ethics and integrity in research is reflected in the distinction between the aims and purposes of the variety of codes and guidelines to be found in the field (see the EU-funded PRO-RES project: http://prores-project.eu/). There is conventional agreement that codes are often backed by sanctions and lie in the realm of professional compliance agreements. Guidelines lack regulatory force but operate in an aspirational manner, advising and warning practitioners about ethical research practices. Again there may be some parallels, with scientific integrity linked to sanctionable codes and a good practice culture in research ethics supported by a range of advisory guidelines. Though the distinction is not hard and fast, these issues are explored in the chapters that follow.
What Exactly Is Research? And Why Does It Matter?

One could be forgiven for thinking this an excessive concern with definitions, but these are not just matters of semantics. Not only does the ethics/integrity distinction matter, so too does what we actually mean by “research.” Experts with experience of serving on
UK National Health Service (NHS) Hospital Trust research (R&D) review committees know that one of the more frequent items of contention was whether a proposal being considered was “research” or “merely” audit. The distinction mattered in the concern that the proposers may have been trying to minimize the rigor of review by claiming to be conducting an audit when they were clearly engaging in research. If a proposed action was only audit, then it was not sent for ethics review. All proposals that were clearly “research” were subjected to some form of ethics review. At times the R&D review committee might have to consider whether an audit proposal was a disguised research project seeking to avoid the extra hurdle of ethics review – which would delay the project and require a great deal more effort from the proposers than they deemed necessary – especially since they viewed the questions they wanted to ask as self-evidently worth asking and their intentions, of course, as honorable.

There is a valid concern that audit can deliver valuable insights and information. It is a form of service evaluation, and if evaluation is not research, then it is hard to know what is. Since audit/service evaluation is frequently additional to the recognized treatment or delivery of care (“care as usual”), it comprises an additional intervention not required by treatment but supplementary to it. Hence it could raise as many moral questions as research per se does and perhaps should therefore be subject to the same level of ethics review or at least ethics monitoring.

In most definitions, the concept of research varies along a continuum from the basic notion of “a careful or diligent search,” “disciplined enquiry,” and the “systematic collecting of information about a particular subject” to an investigation “using explicit research methods . . . aimed at the discovery and interpretation of facts, new knowledge, a revision of accepted theories or laws in the light of new facts, or practical application of such new or revised theories or laws.” The argument is often made that audit cannot constitute research since it is not subject to the same design constraints that ensure the production of new and effective knowledge: no comparative analysis and a lack of controlled or accurately measured variables and/or randomization. Of course, any good audit would clearly have fulfilled at least the basic definitions of research, while a poor audit could quite easily be identified and roundly condemned within an existing managerial structure. Indeed, if a service evaluation is to be useful and effective, it ought to be comparable to rigorous research at least in terms of accurate baseline and outcome measures and the awareness, if not complete control, of potentially confounding variables.

Often, definitions of what actually constitutes research merely serve various forms of administrative convenience. In the health and care context, it is recognized that the number of service evaluations that need to be conducted could overwhelm an ethics review system. And funding bodies with an obligation to assess the value and quality of research adopt narrower definitions for similar reasons, as well as because of the limits on the availability of their funds. For example, the UK’s Research Excellence Framework (REF) (https://www.ref.ac.uk/) defines research as “a process of investigation leading to new insights effectively shared.” The availability to others is in keeping with open science initiatives and so includes work of relevance to the needs of commerce, industry, culture, society, and the public and voluntary sectors. However, it excludes audit in the form of routine testing and
analysis of materials, components, and processes which might be of use for the maintenance of national standards. Once again this could be seen as overly restrictive, since such evaluations could be of great benefit in terms of innovative technologies and methods and the assurance of quality – something highly sought after at the European level.

Some challenges as to how one defines research have arisen as a result of the European Commission PRO-RES project mentioned earlier, which aims to promote ethics and integrity in nonmedical research (see http://prores-project.eu/). PRO-RES is one of a cluster of similar EU-funded projects furthering the goal of scientific integrity. Its particular focus is on building a framework that can have the same level of policy influence that the Oviedo Convention and the Helsinki Declaration have in the biomedical field. Nonetheless it is highly ambitious, on many fronts, to try to meet the interests of the full range of nonmedical disciplines, professional associations, research agencies, and EU countries, as well as to address the kinds of conflicts that frequently occur, more or less publicly, in the scientific arena, such as domain protectionism, priority and replication disputes, wheel reinventions, and all forms of misconduct. But the full realization of the task to be confronted arises when adherence to ethical codes is absent, fought against, or even seen as not applicable to some disciplines.

An initial PRO-RES task was to establish a code or set of guidelines that constituted a foundational statement on the values, virtues, principles, and standards to be sought after in research and the vices to be avoided. This statement drew from all the existing codes and guidelines in the nonmedical fields and was endorsed by most of the consortium partners – but not all. One of the key partners, the European Policy Centre (EPC) (https://www.epc.eu/), whose CEO and Chief Economist is Fabian Zuleeg – expert and well-respected researchers – announced that they could not endorse or follow all the principles that were collectively aspired to. To do so would undermine their business model, in that their competitors would be even less likely to espouse such principles and so gain a market advantage. In a similar vein, when a set of ethics guidelines was being established for the UK Social Research Association in 2003, Professor Paul Spicker, now of the Centre on Constitutional Change at Robert Gordon University in Scotland, pointed out how difficult it would be to conduct balanced and accurate public policy research if principles of privacy, confidentiality, fully informed consent, and the avoidance of covert observation were rigorously adhered to (Spicker 2007). How can a code, guidelines, or foundational statement that researchers need only “partially” follow be taken seriously?

As above in the case of audit, analysts and/or advisers in the public and private sector may not carry the label “researchers,” but they certainly collect data and synthesize research findings from other sources. That is research by any other name. They then add their own judgment and experience to make recommendations based on that research to funders or commissioners or employers, and this is often done behind closed doors due to their commissioners’ requirements, market sensitivity, and concern for knowledge ownership (intellectual property). The main difference between them and academic researchers is that they are not, or rarely, likely to be
subject to any form of research ethics review, nor are they subject to those supposedly fundamental principles of science – openness, transparency, and the noncommercially tied sharing of knowledge.

In economic policy research, for example, it would be difficult to follow a “do no harm” maxim. Policy experimentation in itself can do harm. There are always trade-off considerations – doing more harm to one group to benefit another or to achieve other policy objectives. There is also a normative and/or political element. For example, some people might think that austerity is morally wrong, but advocating it comes from a normative, ostensibly legitimate, economic judgment call. And many policy research institutes promote a certain angle on the world – more or less transparently. It would be very useful if such organizations did follow an ethics code or guidelines, but these would need to be adapted to their specific needs to secure buy-in. Their full research findings or outputs are less likely to be public unless there were gains in doing so, and so an assessment of their “integrity” or research conduct – what has come to be seen as a key requirement of “responsible” scientific research – is unlikely to be sought. The problem would be how to incentivize them to follow such guidelines. In the world of consultancy and advice, time is short, and unless you are obliged or unless it gives you a competitive advantage, some organizations would be highly unlikely to follow any guidelines. Public research money is not a sufficient driver, as they rarely rely on this.

It goes without saying that this issue is much broader than the conduct of public policy researchers and advocacy organizations, since data gathering is part of the modus vivendi of most large Internet-based corporations. All responsible bodies collect data and make use of it – in this sense they are “doing” research. Obviously some data are more “useful” (marketable and applicable) than others.
The scale of profit realized by companies such as Google, Amazon, Facebook, and the like is testimony to the value of such data and surely to its accuracy and effectiveness. But they too allow themselves only to be subject to their own internal mission statements – not to global standards or principles valued by academic researchers and their funding bodies – nor are there currently any adequate legal limits on their actions (Zuboff 2019). It cannot be allowed that standards in research apply to some and not all, such that Facebook can “get away with” breaching the security of personal information for 30 million users, unconsented psychological experimentation, failing to control racist advertisements, allowing employers to exclude women from ads, secretly lobbying for the surveillance-expanding US “cybersecurity” bill CISA while claiming they weren’t, tracking nonusers without permission, making illegal preferential data deals with device makers, and many more unconsented forms of surveillance, background deals on data sharing, and so on (Greer 2019).

There is also the danger of quality-researcher “migration” from ethics-appraised academic research in universities to better-paid commercial research organizations in which innovation might even override integrity – or at the least avoid the more formalized ethical hurdles. Even smaller, well-respected research agencies have pointed out the difficulty of recruiting higher-quality candidates when there is an international market for data-capture talent from large corporations.
So there is an issue to be confronted about the ability to pick and choose which ethical principles are to be followed and which not – how can that display integrity and the pursuit of accepted generic principles for research? The EPC is a respected think tank which stresses its principles and values of independence, transparency, and multi-stakeholder working (https://www.epc.eu/about.php). As such it offers a good example of how such an organization can demonstrate its integrity. But the principles to be upheld do differ between think tanks and advocacy agencies – some couldn’t or wouldn’t claim to be nonpartisan. More concerning are the many dubious players in the field that don’t adhere to any rules, such as lobby organizations disguised as think tanks. Their only regulation is through the court of public opinion or from their funders. Even then, for many funders this actually means NOT adhering to such principles: transparency is avoided, funding is politically biased, and findings are reported highly selectively. This also takes us back to the incentive issue: what is the advantage to such organizations of adhering to principles of ethics and integrity?
The Research Spectrum

As hinted at earlier, perhaps then we need a broader definition of research, one which allows us to capture the risky fringes of data gathering – those that might pose even more danger to individuals, communities, and societies than well-meaning academic or scholarly research does, more by accident than design. This could be something like: “Research is any gathering and/or manipulation of information or data that is employed to understand, explain, anticipate (predict), manage, and/or change the actions or behavior of humans, animals, plants, and inanimate materials.” With such a definition, we could argue that all research should be subject to accepted societal norms and values, applied through a variety of forms of regulation and control at personal, institutional, local, national, and international levels, and subject to ongoing monitoring to ensure safe and beneficial outcomes.

This may be a broad definition which appears all-inclusive – and such definitions have always been flawed in leaving nothing out. But, in part, that is the point. If data are being gathered and used – for whatever purpose – it has now been comprehensively shown that all data applications and manipulations must be viewed with caution since they do entail risks. What Google, Amazon, Facebook, and the whole range of social media sites have done, even with their use of so-called “consented” surveillance, is adequate evidence of the dangers involved. Clearly the risks involved, when they can be anticipated, might vary across the kinds of data gathered, the proposed uses to which they might be put, and, consequently, the degree to which sanctionable regulations might need to be, or can realistically be, applied. Trust is a delicate principle, all too easily broken. The personal control over the holding of information depends on preciously guarded relationships. At each succeeding “meta” level, trust in the safeguarding of data is increasingly threatened.
When we get to the level of the global gathering, manipulation, and application of data, we can no longer simply rely on “trusting” in the ethical use of that information.
There has always been a danger that applying sanctions to a lack of professional integrity implies misconduct to be an attribute of the less ethical individual researcher, while the infrastructural (organizational and institutional) pressures that lead to “bad” research actions are ignored. Given the range of definitions outlined earlier, it is clear that research actions lie along a continuum from the “pure” (blue skies, and subject to no pressure other than the unbiased seeking of knowledge – rare these days) to the “impure” (funded to deliver a precise outcome desired by the funder). They also lie along a continuum of structural dependency – from “low” (where the researcher can act as independently as the unfettered seeking of knowledge allows) to “high” (where the researcher can only conduct research that is permitted by the structure upon which they are dependent). One could imagine a whole range of other continua or dimensions making up the research spectrum, such as “funding,” “governmental,” “mission-focused,” “market-oriented,” “technologically disruptive,” “methodologically innovative,” and so on. We would need to construct a “researcher/agency profile” according to where the individual or their agency is positioned along these continua to form a “spectrum” that guides us to understanding the degree of integrity to be expected of any agency conducting “research” – however defined.

Perhaps the role of a journalist offers a useful illustrative example. A good investigative journalist would have to be a rigorous researcher, but only as pure and disinterested as their employing news organization permits. In addition to the independence of the researcher, one would have to assess the degree of independence available to the news outlet. Even though investigative journalists, like all journalists, do have their own code of conduct, it is hard to say just how rigorous that can be when they are subject to the forms of structural dependence that define their careers.
Less “investigative” journalists operate more at the level of opinion, though even then one assumes they seek some reasonable data to support their opinions. Even so, “investigative” journalists working for the more salacious tabloids would be positioned differently on the spectrum from those employed by more “serious” media outlets. (These issues are discussed more fully in Chap. 64, “Research Ethics in Investigative Journalism,” in Part VI.)

The mathematician Hannah Fry has recently argued for a Hippocratic Oath for scientists – especially mathematicians – as a way of having them pay closer attention to the consequences of hidden algorithms and invisible processes that gather and manipulate personal data with varying degrees of permission and which perform functions that have real-life consequences for individuals, their communities, and society.

Although most agree that the best way to deliver ethical research lies in a personal commitment and a communal culture of best practice, it is increasingly clear that for some forms of research and for some types of researcher and research organization, regulation is necessary. It seems patently unfair that some research becomes subject to the constraints of formalized review, approval processes, sanctions, and regulations while other, potentially much more harmful “research” escapes oversight, primarily because no one has thought about or sought out how it might be controlled. We need to find something equivalent to the conventional models for monitoring ethical research that
can be applied to think tanks, advocacy agencies, lobbying groups, and the more murky “advisory bodies” with mass behavior change as their raison d’être. It could be in everyone’s interests if an overall industry body could be established to draw up a code of conduct similar to market research codes, but the content would have to recognize the balance of obligations between societal needs and meeting those of the financing client. Transparency would seem to be a core principle – who owns, who funds, and to what aim? The body could award a quality kitemark to organizations fulfilling the code’s requirements; that might be used as a minimum requirement by funders but would also be a marketing tool that many would sign up to, to show that they are in the credible part of the research spectrum. Such voluntary forms of regulation further the virtues of research – a culture of ethical awareness that ensures the safety, well-being, and dignity of researcher and researched alike – when sanctioned regulation appears impossible.

Some research areas evidently lend themselves to the need for regulation. Personal data, AI and robotics, and food and agricultural science are cases in point. But even here it took a very long time for personal data protections to be effectively regulated via the EU’s General Data Protection Regulation (GDPR) and then for national Data Protection Agencies to implement the legislation. Even now, with the unknown consequences of AI and robotics research, there is a hope that self-regulation minimizes the potential for serious harm. The harmful implications of big data research have yet to be thoroughly understood, but the warning signs via Cambridge Analytica and Facebook are clear. Both the European and the UK Parliaments have sidestepped the opportunity to take effective regulatory action. The authorities that must address these issues must not be reluctant merely because the task is daunting.
It appears to be easy to establish regulatory mechanisms for the gentler forms of “pure” academic research – but they may be considerably less dangerous. If some research is worth monitoring, then all should be – in ways suited to their forms and function.
The Challenge to Research Ethics Review

As discussed earlier, while research ethics review has many overlaps with research governance, there is some feeling not only that the field has become overly bureaucratized but, worse, that there is no guarantee of safeguarding the public. Meeting the requirements of a committee can be done rather cynically if the individual researcher lacks moral integrity! And no review system can possibly be an effective moral safeguard throughout the life of a research project.

The moral order no longer changes slowly, so ethics remains a highly fluid field. The structural constraints and pressures on the pursuit of knowledge are themselves subject to constant and exponential change. Careers depend upon publications of certain kinds in the most appropriate outlets, while the very structure of academic publishing itself is undergoing radical transformation. Career progress depends in turn on the capture of substantial funding, again from more prestigious sources, as a demonstration of the worth of their research. Institutions need both as part of their corporate
2
Regulating Research
27
image and certainly cannot then tolerate anyone sullying their reputation by whistleblowing about even the slightest misdemeanors. But the pressure to misbehave in science also comes from constant changes in legislation as political establishments struggle to reconcile principles of human rights held high in the contemporary world. The problem with rights is that they frequently stand in opposition to each other. Rights to freedom, life, speech, and so on can run counter to one another and to a right that has grown increasingly precious – privacy. To illustrate: due to growing concern over public safety and security as a consequence of terrorist action and organized crime, the European Commission took on a mission from the European Parliament to fund research into security and surveillance. Across the Seventh Framework Programme (FP7) and Horizon 2020, more than 200 projects in the field were funded to the tune of many millions of euros. At the same time, the European Parliament was urged to enhance rights to privacy, leading to the danger of privileging privacy over the public interest (Erdos 2013). Already-funded research aimed at protecting the general public is being stifled by bewilderment about what is or might be allowed in the interests of protecting the “data subject.” Now that the enhanced privacy imperative has taken on the force of law, it becomes even more difficult to conduct research into the actions of individuals in positions of authority – thereby restricting the political freedoms once thought so vital to democratic accountability. Perhaps then it should come as no surprise that there has been growing international concern about the impact, on the social sciences and humanities in particular, of systems for the governance of research ethics that are inappropriately designed for the kinds of radical methodologies that have traditionally informed the development of social research and of practices in the arts and humanities.
The kinds of “evidence-building” procedures devised in social research, for example, typically challenge common values and assumptions – perhaps necessarily so, as authority structures (governments, corporations, state bureaucracies, and so on) seek to protect themselves from unwanted critical intrusion. Indeed, even the general public is increasingly reluctant to respond to the most typical form of research engagement, the questionnaire-based survey – hence the pressure to devise innovative techniques to “get at” knowledge and information that, in turn, may be sought primarily for the public benefit. The dilemmas and contradictions abound. Some social scientists regard most forms of regulation as anti-democratic, restrictive of academic freedom, and inherently unethical. To take just one example, an exaggerated emphasis on “informed consent” may obstruct investigative research that exposes institutional malpractice or articulates problems that elites would prefer to suppress. Most researchers have encountered ethics review procedures that fail to meet the basic requirements of due process and result in unreasonable rejections of research proposals, inconsistent decisions, and costly delays. These have borne particularly heavily on early-career researchers, whose job insecurity is compounded by this unpredictability. Such global concerns led to an invitation-only summit – The Ethics Rupture – in Fredericton, New Brunswick, Canada, in October 2012. This brought together leading researchers from Canada, the USA, the UK, Brazil, Italy, New Zealand, and
28
R. Iphofen
Australia who were committed to enhancing the practice of research ethics and to suggesting innovative alternatives to the status quo. The meeting resulted in the New Brunswick Declaration (see Box 1), which champions the protection of research participants and researchers and declares as a truism that, even without formal ethics review, research should respect persons, do no harm, and privilege benefit over risk. The Declaration is aspirational, holding that formal ethics review will only reach its full potential when policies, procedures, and committees treat researchers in the same manner as researchers are expected to treat research participants – a culture of mutual respect. The point is to place the key responsibility for good research behavior onto researchers themselves. Members of formal ethics review committees are often overworked, underappreciated, and systematically under-resourced. This demonstrates a lack of respect from their host institutions, and the outcomes are devastating: it prevents ethics committees from nurturing researchers, especially novice researchers, both within the review process and in preparing them for the field. So formal ethics review itself requires sustained and ongoing review: it must embrace a plurality of ethics oversight, privileging researchers’ professional codes of practice while recognizing the situational and partial character of formal ethics review. Researchers have to be (and must be seen to be) competent and trustworthy, with the capacity to act reflexively and responsibly in the field when new ethical considerations arise. The signatories to the Declaration are committed to providing critical analysis of new and revised ethics codes and to highlighting best practice and exemplary and innovative research ethics review processes.
The hope is to encourage a move away from a predominantly compliance culture, where ethics is seen as a matter for regulation that provokes responses of cynicism, disengagement, and avoidance. Achieving this, however, does require some effort to develop a more united voice, although it is not necessary for every discipline to agree on every detail. On the other hand, many current codes and guidelines do draw heavily, if often implicitly, on one another. Researchers are faced with occasionally confusing decisions about which code might be most appropriate to their current research focus. In an era of multimethodologies, interdisciplinary initiatives, and problem-focused research, such confusion does not seem to promote either efficiency or effectiveness, let alone good ethical practice.
Conclusion: Influencing Policy

Crises around scientific issues have been a regular fixture on the political landscape for some time. The neat dichotomy that supposedly separates politics from science is a fallacy. There are so many events that it is hard to select an illustrative few – perhaps the concerns over bovine spongiform encephalopathy (BSE), or “mad cow disease,” that plagued the UK, amid global fears, from the mid-1980s to the mid-1990s; the long-standing battle over GM crops; and the controversy linked to the MMR vaccine and subsequent concerns over vaccination in general show just how close science always is to politics.
Although scientists and science advisers offer ostensibly technical advice, it is inflected by their own world view and vested interests, no matter how magnanimous or utilitarian these may be. Much of the US and UK governments’ rationale for declaring war on Iraq supposedly depended on technical expert advice. But did Saddam Hussein really have weapons that posed a threat? What were the technical specifications of those weapons? How real was the threat? Subsequent events suggest not. The responses to questions about Iraq’s weapons capability could not have been anything but political, even if couched in technical jargon: “. . . to suggest that science-related political crises could have been averted if only political actors had understood the scientific issues in depth is one approach. But such crises might also have been avoided had scientific advisers understood the political implications of their advice. . .” (Farrands 2003). What matters is that the nutritional and health evidence about diet, for example, and the subsequent governmental policy advice and legislation should not be overly influenced by lobbying from agri-industry or food corporations. The evidence, from as impartial a science as possible, should be the deciding factor. Similarly, climate change research and policies should not be biased by the interests of fossil fuel producers. Ultimately it is the politicians and the policymakers who must make the ethically responsible judgment calls about which research evidence has been produced with integrity. Just two examples from the UK illustrate the complex issues involved and why some form of control is essential. A senior member of an independent panel that advises the government on drug-related issues resigned due to “political vetting” of members.
Alex Stevens, professor in criminal justice at the University of Kent, said that the exclusion of suitably qualified applicants meant that the Advisory Council on the Misuse of Drugs was losing its independence. Victoria Atkins, the UK Government minister for crime, safeguarding, and vulnerability, vetoed the appointment of Niamh Eastwood, a highly respected lawyer and executive director of the drugs charity Release, on the basis of tweeted disagreements with government ministers (Wise 2019). And a top UK think tank, the Institute of Economic Affairs, was funded by fossil fuel interests for decades to promote papers and reports that denied global warming and human responsibility for it, challenging independent research and the overwhelming scientific support for human-induced climate change (Pegg and Evans 2019). The chapters that follow in this opening section illustrate the range of difficulties involved in regulating research. There is a need for flexibility and for ongoing debate as the moral order changes, society progresses, and new technologies set us new challenges. The variety in the range and nature of data is yet to be fully understood, so data protection legislation still requires interpretation in many of its aspects: what counts as the “public interest,” and how can that be balanced with the demands for privacy and/or requirements for transparency? Clearly we cannot legislate for everything, but we do rely on experts for heightened awareness of ethical decision taking in context. Maybe ethics review experts should be more involved in the legislative aspects – advising regulators on when legislation might be appropriate and when not. National and institutional variations in ethics appraisal processes are
inevitable – posing problems for both interdisciplinary and international research collaborations. Administrative systems, no matter how coherent and efficient, cannot guarantee ethical practice. They certainly might help minimize institutional risk and facilitate ethical practice as “being seen to be done.” In that sense they meet a demand for political correctness that may not be required of, say, journalists or artists or novelists, all of whom are equally capable of ethically risky practices – which may entail a variety of forms of “research.” It is necessarily the case that moral responsibility has to be devolved to all practitioners who are practicing ethics “in the field.” Values must be embedded in process and in procedure – in governance and ethical review – since a lack of clarity over values undermines the whole process. It might be ideal if ethical scrutiny were systematized as a reflective and reflexive practice engaged in by all researchers individually and overseen by professional bodies, with institutional ethical scrutiny reserved for research that evidently contains high-risk interventions. The chapters that follow in this section raise many more questions of this nature and even suggest a few solutions.

Box 1 The New Brunswick Declaration
A Declaration on Research Ethics, Integrity and Governance resulting from the 1st Ethics Rupture Summit, Fredericton, New Brunswick, Canada 2013

As signatories of the New Brunswick Declaration, we:

1. Seek to promote respect for the right to freedom and expression.
2. Affirm that the practice of research should respect persons and collectivities and privilege the possibility of benefit over risk. We champion constructive relationships among research participants, researchers, funders, publishers, research institutions, research ethics regulators and the wider community that aim to develop better understandings of ethical principles and practices.
3. Believe researchers must be held to professional standards of competence, integrity and trust, which include expectations that they will act reflexively and responsibly when new ethical challenges arise before, during, and long after the completion of research projects. Standards should be based on professional codes of ethical practice relevant to the research, drawn from the full diversity of professional associations to which those who study human experience belong, which include the arts and humanities, behavioural, health and social sciences.
4. Encourage a variety of means of furthering ethical conduct involving a broad range of parties such as participant communities, academic journals, professional associations, state and non-state funding agencies, academic departments and institutions, national regulators and oversight ethics committees.
5. Encourage regulators and administrators to nurture a regulatory culture that grants researchers the same level of respect that researchers should offer research participants.
6. Seek to promote the social reproduction of ethical communities of practice. Effective ethics education works in socially-embedded settings and from the ground up: it depends on strong mentoring, experiential learning and nurturance when engaging students and novice researchers with ethics in research settings.
7. Are committed to ongoing critical analysis of new and revised ethics regulations and regimes by: highlighting exemplary and innovative research ethics review processes; identifying tensions and contradictions among various elements of research ethics governance; and seeing that every venue devoted to discussing proposed ethics guidelines includes critical analysis and research about research ethics governance.
8. Shall work together to bring new experience, insights and expertise to bear on these principles, goals, and mechanisms.

The Declaration can be found at: van den Hoonaard, W.C. & Hamilton, A. (eds.) (2016) The Ethics Rupture: Exploring Alternatives to Formal Research Ethics Review, Toronto: University of Toronto Press, pp. 431–432. ISBN 978-1-4426-4832-6
References

ALLEA (2017) European Code of Conduct for Research Integrity. http://www.allea.org/wp-content/uploads/2017/05/ALLEA-European-Code-of-Conduct-for-Research-Integrity-2017.pdf
Erdos D (2013) Mustn’t ask, mustn’t tell: could new EU data laws ban historical and legal research? UK Const. L. Blog, 14 Feb 2013. Available at http://ukconstitutionallaw.org
Farrands A (2003) It’s just politics in a lab coat. The Times Higher, 22 Aug. https://www.timeshighereducation.com/news/its-just-politics-in-a-lab-coat/178727.article. Accessed 3 Oct 2019
Greer E (2019) Mark Zuckerberg has to go. Here are 25 reasons why. The Guardian, 8 May. https://www.theguardian.com/commentisfree/2019/may/08/mark-zuckerberg-has-to-go. Accessed 7 Oct 2019
Iphofen R (2011) Ethical decision making in social research. Palgrave Macmillan, London
Iphofen R (2017) Governance and ethics: maintaining the distinction. TRUST Project e-Newsletter 3:1
Parry J (2017) Developing standards for research practice: some issues for consideration, Ch 8. In: Iphofen R (ed) Finding common ground: consensus in research ethics across the social sciences. Emerald Group Publishing Ltd, Bingley, pp 77–102
Pegg D, Evans R (2019) Revealed: top UK thinktank spent decades undermining climate science. The Guardian, 10 Oct. https://www.theguardian.com/environment/2019/oct/10/thinktank-climate-science-institute-economic-affairs. Accessed 13 Oct 2019
Schrag ZM (2010) Ethical imperialism: institutional review boards and the social sciences. The Johns Hopkins University Press, Baltimore
Schrag ZM (2011) The case against ethics review in the social sciences. Res Ethics 7(4):120–131
Spicker P (2007) The ethics of policy research. Evid Policy 3(1):99–118
van den Hoonaard WC (2011) The seduction of ethics: transforming the social sciences. University of Toronto Press, Toronto
van den Hoonaard WC, Hamilton A (eds) (2016) The ethics rupture: exploring alternatives to formal research ethics review. University of Toronto Press, Toronto
WCRIF (2010) Singapore statement on research integrity. https://wcrif.org/guidance/singapore-statement
Wise J (2019) Advisory drugs panel is losing independence, says academic who quit. BMJ 367:l5913, 7 Oct. https://doi.org/10.1136/bmj.l5913. Accessed 13 Oct 2019
Zuboff S (2019) The age of surveillance capitalism: the fight for a human future at the new frontier of power. Profile Books, London
3 Research Ethics Governance: The European Situation

Mihalis Kritikos
Contents
Introduction . . . 34
Historical Development . . . 35
Legal Aspects . . . 36
Organizational/Procedural Aspects of Ethics Appraisal . . . 38
The Role of the Commission’s Ethics Service . . . 42
Centralization Versus Multilevel Governance/Learning Effects . . . 43
Concluding Remarks . . . 47
References . . . 50
Abstract
The chapter analyzes the gradual emergence of ethics and integrity as a new object of regulatory attention at the European Union (EU) level, especially in relation to research activities funded by the EU. The chapter first examines the reasons behind the gradual development of an institutional and legal governance framework on research ethics at the EC level. It is argued that the research ethics committees created for the purposes of EU-wide ethical evaluations constitute a sui generis institutional structure that highlights both the opportunities and the limitations that this supranational rule-making platform offers. Their operation seems to constitute a delicate political exercise that is based on a vaguely defined subsidiarity test. Furthermore, the chapter seeks to answer whether the process for the establishment of an EU-wide institutional framework for the ethical review of research proposals indicates a tendency toward centralized Community ethical standards or instead reflects the need for multilevel regulatory control of ethical standards and principles beyond the national level. The chapter further

M. Kritikos (*)
Institute of European Studies-Vrije Universiteit Brussel (VUB), Brussels, Belgium
e-mail: [email protected]

© Springer Nature Switzerland AG 2020
R. Iphofen (ed.), Handbook of Research Ethics and Scientific Integrity, https://doi.org/10.1007/978-3-030-16759-2_1
34
M. Kritikos
analyzes the various legal and sociopolitical features of “ethics” in the EU’s research initiatives and policies and identifies the inherent limitations of the gradual “Europeanization” of the process for the ethical scrutiny of EC-funded research proposals.

Keywords
Ethics · Ethics appraisal · EU Framework Programme for Research · Proceduralization · Subsidiarity · Ethical pluralism · Europeanization · Epistemic power
Introduction

Ethics has become an essential part of all research activities funded by the European Union (EU), from beginning to end, and compliance with the EU’s ethics acquis is considered an indispensable requirement for the achievement of scientific excellence. Compliance with commonly agreed ethical rules and practices involves adherence to fundamental ethical principles as embedded in legal norms and codes of conduct. The incorporation of ethical norms as a prerequisite for research funding has been an arduous but efficient process of expanding EU action in a field of public interest that is traditionally dominated by local practices, professional self-regulation, and national competences. The need for an ex-ante ethical evaluation of research proposals submitted for EU funding became clear only with the adoption of the Fifth Framework Programme for research, although the consideration of Ethical, Legal, and Social Aspects (ELSA) was initiated in the frame of the Second Framework Programme (FP2/1987–1991), and the Fourth Framework Programme (1994–1998) became associated with the integration of socioeconomic dimensions through its “Targeted Socio-Economic Research” program. Within the ongoing Framework Programme for Research (Horizon 2020), all EU-funded research must comply with fundamental ethical principles, including those reflected in the EU Charter of Fundamental Rights, independently of the locus of the research. EU-funded research has become globalized and collaborative, which makes it necessary to deal with the issue of the extraterritorial reach of EU norms, hence the need to safeguard that EU-funded projects comply with fundamental ethical principles even in non-EU jurisdictions.
Article 34 of the Annotated Model Grant Agreement of Horizon 2020 states that ‘the beneficiaries must carry out the action in compliance with (a) ethical principles (including the highest standards of research integrity) and (b) applicable international, EU, and national law’. Through a thorough, multistep procedure known as ethics appraisal/review, the scientific community’s research funding is made conditional upon compliance with a series of ethical principles and requirements. Over the years, ethics review has become increasingly significant, to the point that, nowadays, proposals submitted for EU funding are also assessed in terms of their ethical soundness by panels of independent experts in various domains of research ethics. It needs to be mentioned that EU ethics-related norms have developed an extraterritorial application, as the
Annotated Grant Agreement of Horizon 2020 states that ‘activities carried out in a non-EU country (and funded by the EU) must comply with the laws of that country and be allowed in at least one EU Member State’. The focus of this chapter is on the procedural modalities, institutional design, and normative footprint of research ethics throughout the various European Framework Programmes (FP) for Research and Technological Development, the European Union’s main funding mechanism for research. The European Commission uses an ex-ante approach for the assessment of the actual integration of socio-ethical narratives into scientific and research activities across the whole range of EU funding mechanisms. A special procedure has been built in at the outset of the evaluation phase so as to facilitate the identification of ethical parameters and risks and their evaluation. This ex-ante ethics assessment allows researchers to proceed with their proposed research whilst making sure that it meets all legal and ethical requirements and standards.
Historical Development

Although the EU is required to respect the ethical positions of its Member States (Article 6(3) TFEU), this does not mean that the incorporation of ethical norms has not taken place at EU level. In fact, research ethics has been a long-standing object of EU institutional attention, and all major EU research funding programmes are based on the results of the established ethics appraisal procedures, designed around the operation of EU-wide ethics review panels set up by the various Commission services and executive agencies. More specifically, in the early 1990s, it was recognized that a more structured approach was needed at EU level with respect to dealing with ethical issues raised by the performance of research in the areas of biotechnology, clinical trials, and other forms of medical research. This led to the opening up of ethics debate and review at EU level as a way of managing public concerns over technological breakthroughs and the gradual recognition of ethics expertise as an appropriate tool for advising policymakers on the soft impacts of science and technology, at national and European levels. Under this prism, ethics review became an essential procedural step and requirement in the frame of the evaluation of research proposals submitted for EU funding. It was in the Council Decision on the Fourth Framework Programme [1] that for the first time there was an explicit reference to the need for the Community’s Research, Technology, and Development (RTD) activities to take ethical considerations into account. Moreover, the Decision makes reference to the need for research on biomedical ethics, ‘to address general standards for the respect of human dignity and the protection of the individual in the context of biomedical research and its clinical application’. In other words, since 1994, in the context of the Fourth EU
[1] https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:31994D1110; https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A31994D0763
Framework Programme, funding for research into the Ethical, Legal, and Social Aspects of emerging sciences and technologies became available, and the need for ethics review received legal recognition. The Fifth Framework Programme contained an entire article, under the title “Respecting an ethical framework,” stating that “Full respect of human rights and fundamental ethical principles will be ensured throughout all activities in the specific programme in accordance with Article 6 of the European Parliament and Council Decision on the Fifth framework programme.” These principles include animal welfare requirements in conformity with Community law. Article 7 stated that “All research activities conducted pursuant to the fifth framework programme shall be carried out in compliance with fundamental ethical principles, including animal welfare requirements, in conformity with Community law.” Article 10 classified ethics compliance as a ground for rejecting a project by stating that “Any project which contravenes the ethical principles laid down in the relevant international conventions and regulations shall not be selected.” [2] Within the Sixth Framework Programme on Research, ethics review became codified, while the Science and Society projects supported a wide range of studies and participatory events in areas including gender, ethics, young people, and citizens’ participation. The preambles to the Council decision on the Sixth Framework Programme introduced the respect of fundamental ethical principles as one of the main goals of this supranational funding instrument. Activities under FP6 and the Seventh Framework Programme had to be conducted ‘in compliance with ethical principles, including those reflected in the Charter of Fundamental Rights of the European Union’.
Legal Aspects Notwithstanding the absence of a formal treaty competence in relation to research ethics and the lack of a common definition of ethics at the EU level (EU primary law does not include the term ‘ethics’ and ethics is not an autonomous concept of EU law), the Commission’s ethics review procedure does not take place in a legal void. Indeed, a series of successive EU legal instruments on research funding, including the EU research framework rules that set the grounds for the EU research programs and the respective rules of participation and model grant agreements, make explicit reference to, and highlight the importance of oversight and compliance with, ethical principles and standards. All activities carried out under the EU research funding program Horizon 2020 are checked for their ethical relevance and are expected ‘to comply with ethical principles and relevant national, EU, and international legislation, for example, the Charter of Fundamental Rights of the European Union and the European Convention on Human Rights’. Participants in EC-funded research projects should conform to national
[2] Fifth Framework Programme for Research, Technological Development and ... technological development and demonstration activities (1998–2002), OJ L26, 01/02/1999; No 182/1999/EC.
legislation and applicable codes of conduct and seek the approval of the relevant ethics committees prior to the start of planned research activities. Article 19.1 of the Horizon 2020 Regulation [3] states that “all the research and innovation activities carried out under Horizon 2020 shall comply with ethical principles and relevant national, European Union and international legislation, including the Charter of Fundamental Rights of the European Union and the European Convention on Human Rights and its Supplementary Protocols.” Within this frame, applicants must describe the ethical implications of the research, and, where required by national legislation or rules, participants must seek the approval of the relevant ethics committees prior to the start of the activities that may raise ethical issues. Moreover, the funding of certain fields of research is not permitted, ‘including those involving human cloning for reproductive purposes, modification of the genetic heritage of human beings which could make such changes heritable, and the creation of human embryos solely for the purpose of research or for the purpose of stem cell procurement’. Article 34 on Ethics of the Model Grant Agreement sets out a number of conditions and issues, including the obligation to comply with ethical principles, activities raising ethical issues, activities involving human embryos or human embryonic stem cells, and the consequences of non-compliance. Furthermore, ethics-related norms have gradually become incorporated into EU law through secondary legislation, as well as soft law, on clinical trials [4], medical devices [5], good clinical practice [6], data protection [7], animal
[3] Regulation (EU) No 1291/2013 of the European Parliament and of the Council of 11 December 2013 establishing Horizon 2020 – the Framework Programme for Research and Innovation (2014–2020) and repealing Decision No 1982/2006/EC (Text with EEA relevance), OJ L 347, 20.12.2013, pp. 104–173.
[4] Regulation (EU) No 536/2014 of the European Parliament and of the Council of 16 April 2014 on clinical trials on medicinal products for human use and repealing Directive 2001/20/EC (Text with EEA relevance), OJ L 158, 27.5.2014, pp. 1–76.
[5] Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices, amending Directive 2001/83/EC, Regulation (EC) No 178/2002, and Regulation (EC) No 1223/2009 and repealing Council Directives 90/385/EEC and 93/42/EEC (Text with EEA relevance), OJ L 117, 5.5.2017, pp. 1–175.
[6] Commission Directive 2005/28/EC of 8 April 2005 laying down principles and detailed guidelines for good clinical practice as regards investigational medicinal products for human use, as well as the requirements for authorization of the manufacturing or importation of such products (Text with EEA relevance), OJ L 91, 9.4.2005, pp. 13–19.
[7] Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data and repealing Directive 95/46/EC (General Data Protection Regulation) (Text with EEA relevance), OJ L 119, 4.5.2016, pp. 1–88.
M. Kritikos
experimentation8, genetically modified organisms9, and the use of human biological tissues10. Such norms are also to be found in EU research framework programs, the opinions of the European Group on Ethics in Science and Technology (EGE)11, and the judgments of the Court of Justice of the European Union. In the governance and ethics fields, Science and Society actions have led the European Commission to adopt a series of guidance notes for researchers, including a Code of Conduct for Responsible Nanosciences and Nanotechnologies Research. Although the European Charter for Researchers is not legally binding, it states that researchers should not only “adhere to the recognised ethical practices and fundamental ethical principles” but also “need to be aware that they are accountable [on] ethical grounds, towards society as a whole.”12
Organizational/Procedural Aspects of Ethics Appraisal

Over the years, ethics review has become increasingly prominent, and research ethics has evolved into an essential component of most of the recent policy and legal initiatives in the field of new and emerging technologies. This section proceeds by providing a brief overview of the management of ethical concerns in research at EU level. Thereafter, the ethics review procedure as established at the EU level is examined, including its institutional features, legal underpinnings, and organizational arrangements. The institutional features and organizational mechanics of the ethics review procedure are of particular importance for the assessment of the way ethical concerns in the field of scientific research are managed at the EU level. Their importance lies in the way these features have shaped an EU ethics narrative and have gradually led to a de facto harmonized approach to research ethics in Europe. It is argued that the European Commission has employed the proceduralization paradigm in the field of its research ethics review structures (Spike 2005), so as to tackle the pathologies (Sass 2001) associated with such committee evaluation frameworks (Holland 2007). The proceduralization model fits well with the
8. Directive 2010/63/EU of the European Parliament and of the Council of 22 September 2010 on the protection of animals used for scientific purposes (Text with EEA relevance), OJ L 276, 20.10.2010, pp. 33–79.
9. Regulation (EC) No 1829/2003 of the European Parliament and of the Council of 22 September 2003 on genetically modified food and feed (Text with EEA relevance), OJ L 268, 18.10.2003, pp. 1–23.
10. Directive 2004/23/EC of the European Parliament and of the Council of 31 March 2004 on setting standards of quality and safety for the donation, procurement, testing, processing, preservation, storage, and distribution of human tissues and cells, OJ L 102, 7.4.2004, pp. 48–58.
11. EGE is the external advisory body which, since its inception in 1991, has provided the Commission with independent advice on the ethics of science and new technologies. It convenes the interservice group on Ethics and EU Policies, coordinating Commission activities in the fields of ethics and bioethics.
12. Annex, Commission Recommendation 2005/251/EC on the European Charter for Researchers and on a Code of Conduct for the Recruitment of Researchers, O.J. 2005, L 75/67.
3
Research Ethics Governance
consensus model that has been associated with the institutionalization of ethics. To this end, the ethics review service of the Commission has developed detailed procedural rules regarding the formulation, designation, and working methods of ethics review panels. These include the appointment of members and chairs of panels, the duties and responsibilities of members, the allocation of reviewing tasks (e.g., the appointment of rapporteurs), the format for reports, the procedures to be adopted with regard to decision-making, and the conditions for interaction with applicants for research funding as well as with local and/or national research ethics services/actors and other authorization bodies. A system for screening research proposals, remote and central ethics reviews, follow-up, and audit has also been developed. To assist panel members, background papers, assessment forms, templates, and checklists are provided to support the review process. This special emphasis on procedures is also evident in the various initiatives to ensure the consistent operation of the EU ethics review procedure. The proceduralized approach of the Commission has also provided a set of EU-wide procedural standards and mechanisms for identifying what the ethical parameters should be in relation to research where such concerns have been raised. Proceduralization becomes especially evident in the way the ethics self-assessment and panel consensus forms are framed: they promote a tick-box approach in which specific aspects of the research protocol are initially identified as inherently more ethically risky, in effect requiring further ethics reflection, monitoring, and assessment.
It needs to be noted that the proceduralization paradigm has been introduced at the EU level as an alternative form of organization of ethics appraisal for deliberation, inclusion, and reflection (Stirling 2005). This administrative model of organization departs from the traditional “Community Method” of regulation through legislation (Black 2000). The nonhierarchical and open-ended structure that proceduralism suggests leaves space, in principle, for deliberation and reflection (Jasanoff 2005). The main “merit” of proceduralization, deriving from its all-encompassing character, is the need for wider reflection regarding the potential ethical implications of particular research projects, in order to gather a diversity of perspectives and reach a solution that would not only comply with the traditional biomedical example of ethics scrutiny (Hoeyer et al. 2005) but would also be responsive to a wider array of concerns (Dingwall 2008). More specifically, under Horizon 2020, it is the Commission that “shall systematically carry out ethics reviews for proposals raising ethical issues” by verifying “the respect of ethical principles and legislation.”13 From a procedural perspective, this “process of the ethics review [has to be] as transparent as possible.”14 The process to assess and address the ethical dimension of funded activities is called the ethics appraisal procedure. The procedure ensures that ‘all research activities
13. Art. 14, para. 1 Regulation 1290/2013/EU.
14. Art. 14, para. 2 Regulation 1290/2013/EU.
carried out under the Horizon 2020 Framework Programme are conducted in compliance with fundamental ethical principles and standards, relevant European legislation, international conventions and declarations, national authorizations and ethics approvals, proportionality of the research methods, and the applicants’ awareness of the ethical aspects and social impact of their planned research’. The ethics appraisal procedure concerns all activities funded in Horizon 2020 and includes the ethics review procedure, conducted prior to the start of a research project, in addition to ethics checks and audits. The latter apply only in some cases, since not all projects go through ethics checks and audits. The procedure can be initiated either by the experts or by the Commission services when there is a need for preventive and/or corrective measures. The aim of this mechanism is to minimize or even eliminate the likelihood that the research protocol, once it is being implemented, will pose any risk to the integrity and safety of research participants, the environment, or public health. The primary concern lies with safeguarding the ethical soundness of the protocol under consideration in terms of procedural compliance and legal reflection on behalf of the applicant. The process begins with the ethics self-assessment performed by the applicants/scientists themselves, in the form of completing an Ethics Issues Table – a checklist in the administrative Part A of the application. A scientific evaluation is carried out first, and all proposals above threshold and considered for funding then undergo an ethics review carried out by independent ethics experts working in a panel. The review begins with an ethics pre-screening of proposals with no declared ethics issues, in which the absence of ethics issues declared by the applicant must be confirmed.
It is performed by ethics experts and/or qualified staff, who review the information provided in the application/self-assessment. The aim of the pre-screening is to list the (potential) ethical issues. If there is at least one confirmed ethical issue, the proposal is subject to a complete ethics screening that mainly assesses the ethical aspects of its objectives, methodology, and potential impact. The ethics experts and/or qualified staff notably identify all proposals that require (ethical) approval at the national level (e.g., with regard to data protection, the conduct of clinical trials, or animal welfare). Each proposal is screened by at least two independent ethics experts. The possible outcomes of the ethics screening are:
1. The proposal is “ethics-ready,” which means that the grant agreement (GA) can be finalized.
2. “Conditional clearance,” where experts formulate requirements that will become contractual obligations. (These requirements constitute the conditions to be fulfilled, and, on this basis, the grant preparation can be finalized.)
3. “Ethics Assessment,” recommended by the screening panel for a limited number of proposals with complex ethical issues (e.g., severe intervention on humans) prior to the signature of the GA; if appropriate, the panel lists the additional information to be provided.
4. No ethics clearance (“negative ethics opinion”), which means that the proposal cannot be selected and funded, for reasons that need to be explicitly stated.
In specific cases, the ethics experts may also recommend an Ethics Assessment, whereas proposals involving the use of human embryonic stem cells (hESCs) automatically proceed to the second step, the Ethics Assessment. For a limited number of proposals (e.g., severe intervention on humans, lack of an appropriate ethics framework in the country where the research will be performed, etc.), the ethics screening can be followed by an Ethics Assessment prior to the signature of the grant agreement. The Ethics Assessment is an in-depth analysis of the ethical issues flagged by the ethics screening experts or by the Commission, and of all proposals involving human embryonic stem cells; it is carried out by a panel consisting of at least four independent ethics experts. It takes into account, when available, the analysis done during the ethics screening as well as the information provided by the applicants in response to the results of the ethics screening. On the basis of the assessment, experts formulate requirements, some to be fulfilled before the signature of the grant agreement (GA) and others becoming contractual obligations (Annex I). The experts may also recommend an ethics check and indicate the appropriate timing. Alternatively, the experts may consider that the elements submitted are not sufficient and request a second Ethics Assessment, indicating the weaknesses to be addressed and the information to be provided. The signature of the GA is postponed until the results of the second Ethics Assessment are available. In addition, there are cases where the panel can decide not to grant ethics clearance due to the severity of ethics issues with which the applicant has not engaged sufficiently.
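The branching logic of the screening step described above can be sketched as a simple decision function. This is a deliberately simplified illustration of the flow, not the Commission's actual procedure: the field names, the boolean flags, and the exact ordering of the checks are all this sketch's assumptions.

```python
from enum import Enum


class Outcome(Enum):
    """The four possible results of the Horizon 2020 ethics screening step."""
    ETHICS_READY = "ethics-ready"                      # GA can be finalized
    CONDITIONAL_CLEARANCE = "conditional clearance"    # requirements become contractual
    ETHICS_ASSESSMENT = "ethics assessment"            # in-depth review before GA signature
    NO_CLEARANCE = "negative ethics opinion"           # proposal cannot be funded


def screen(proposal: dict) -> Outcome:
    """Illustrative sketch of the screening decision flow (field names invented)."""
    # hESC proposals automatically proceed to the Ethics Assessment step.
    if proposal.get("uses_hesc"):
        return Outcome.ETHICS_ASSESSMENT
    # No confirmed ethics issues: the proposal is ethics-ready.
    if not proposal.get("confirmed_ethics_issues"):
        return Outcome.ETHICS_READY
    # Severe issues the applicant has not engaged with: no clearance.
    if proposal.get("severe_unaddressed_issues"):
        return Outcome.NO_CLEARANCE
    # Complex issues (e.g., severe intervention on humans): in-depth assessment.
    if proposal.get("complex_issues"):
        return Outcome.ETHICS_ASSESSMENT
    # Otherwise, clearance conditional on requirements that enter the GA.
    return Outcome.CONDITIONAL_CLEARANCE
```

The sketch makes one point of the prose concrete: every path through the screening ends in exactly one of the four named outcomes, and only some of those paths lead to the second, in-depth Ethics Assessment step.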
The complexity of the ethical issues raised by the proposed research, in combination with the applicant’s lack of awareness of the structural challenges of an ethical nature or even his/her unwillingness to comply adequately with the ethics-related requirements, can also lead to the withdrawal of the grant in very extreme cases. It needs to be mentioned that the applicant is always given a chance to rectify any omissions. The requirements become contractual obligations and are consequently included in Annex 1 of the grant agreement, unless it is considered that they should be fulfilled before the grant signature. These conditions may include regular reporting to the Commission/Executive Agency, the appointment of an independent ethics advisor or ethics board that may be tasked to report to the Service/Executive Agency on compliance with the ethics requirements, an ethics check or audit and their most suitable timeframe, submission of further information/documents, and adaptation of the methodology to comply with the ethical principles and legal rules. Following the conclusion of the ethics review, an ethics check can be undertaken at the initiative of the research funding service (the European Commission, the Research Executive Agency, or the European Research Council Executive Agency). The objective of the procedure is to assist the researchers in dealing with the ethics issues raised by their research and, if necessary, to take preventive and/or corrective measures, primarily on the basis of the requirements of the ethics reports and, when available, the reports of the ethics advisor/board. This procedure is also
followed in cases where clearance is not granted after a further/second Ethics Assessment. On-site visits can also be organized. In the case of a substantial breach of ethical principles, research integrity, or relevant legislation, an ethics audit can be undertaken. The procedure is foreseen in the GA (Article 22). The checks and audits can result in an amendment of the GA. In severe cases, they can lead to a reduction of the grant, its termination, or any other appropriate measures, in accordance with the provisions of the grant agreement.
The Role of the Commission’s Ethics Service

The management of this multilevel procedure is the responsibility of the ethics sector of the European Commission, located in the Directorate-General for Research and Innovation (DG Research and Innovation). Its institutional position has recently been upgraded via its positioning in the Unit of the Scientific Advice Mechanism, which sits under the Director-General him/herself. This institutional positioning has reinforced its horizontal character and its autonomy as an assessment mechanism. Beyond this centralized service, a plurality of agencies accountable to the European Commission (REA, ERCEA, EASME, etc.) is involved in the ethics screening and appraisal procedures and follows the established rules and practices in this domain. This Commission service coordinates the ethics review of research projects, which is implemented by independent ethics experts either individually or in panels. The service has been quite instrumental in adjusting various ethics norms and standards to the particularities of several scientific domains and in bringing forward the best research practices by issuing guidance notes. It is also responsible for the organization of the joint meetings of all National Ethics Councils (NECs) of the EU Member States, whose conclusions feed into the ethics review procedure in various ways. The ethics sector also acts as a liaison between those who receive framework program funding, competent national authorities, and relevant ethics committees. It is also responsible for appointing members to ethics screening and review panels, as well as for coordinating the entire evaluation process.
Although ethics review of research proposals may take place before any selection decision by the Commission, such review is mandatory where research proposals involve some form of intervention with respect to human beings, or involve human embryonic stem cells, human embryos, or nonhuman primates. One of the tasks of this service is to analyze, through ethics reviews, whether applicants are aware of the ethical challenges related to the proposed research and whether they plan to take into account ethical principles, legal standards, and best research practices in the relevant domain. The established system of ethics pre-screening, screening, enhanced delegation to individual programs, remote and central reviews, follow-up, and audit is accompanied by increased training programs for both applicants and Commission staff. Its institutional positioning has rendered this service a point of reference in the field of research ethics, given also its ability not
only to enforce the panel opinions but also to monitor and audit the ethical soundness of the project during its implementation. It should be noted that the European Commission’s ethics sector does not actually undertake the ethical appraisal itself. The aforementioned scrutiny of research proposals is carried out by the so-called ethics panels, consisting of independent experts chosen by this service. The Commission’s role is to organize the ethics appraisal process, relying on the skills of independent external experts from different disciplines such as law, sociology, philosophy and ethics, religious and educational studies, psychology, information technology, medicine, and molecular biology. Each expert reads the proposals and then meets with the other experts on the nominated panel to review the ethical issues raised by such proposals. The aim is to reach a consensus on how best to deal with such issues and then to produce a report on how they should be managed. Typically, such reports include a list of identified ethical issues, details of the way in which they have been handled by the applicants, and any requirements and/or recommendations that the panel has for how such issues should now be dealt with by the applicants. These reports are then sent to applicants, and the Commission takes the reports into account when decisions are made to fund projects. This is done by including the key findings from the report as requirements in the grant agreements entered into by those applicants whose research projects are funded by the EU. The experts’ panels are geographically and gender balanced. Their composition depends on the nature of the proposals under review. The panels comprise a variety of disciplines such as law, sociology, psychology, philosophy and ethics, medicine, molecular biology, chemistry, physics, engineering, and veterinary sciences.
Centralization Versus Multilevel Governance/Learning Effects

The Commission’s ethics review panel procedure, which provides evaluations of sensitive ethical issues raised by selected proposals for EU research funding, highlights both the opportunities and the limitations of harmonization and standardization efforts with regard to EU-wide ethics oversight of research initiatives. The opportunities that the established ethics appraisal procedure offers stem from the procedural design of this mechanism. This emphasis on procedural modalities can be attributed to the capacity of the proceduralization model to respect ethical plurality while at the same time to
remain sufficiently flexible to respond to the fast-moving nature of technological developments. This procedural approach should be seen as particularly important where Member States cannot agree ex ante on the substantive content of aspects of research ethics, such as on the terms of use of stem cells, whether genes should be modified, and/or how fundamental ethical principles should be interpreted in a specific research context. It is an approach which provides the necessary institutional space for reflection, from which consensus may emerge and practical solutions can be found to set the ethical terms and conditions for the performance of the planned research. More specifically, proceduralization has proved particularly important in areas where nonscientific considerations regarding the funding of research are of particular concern, given the diverse range of research protocols under consideration and the ethical plurality that dominates several technological domains. This procedural focus is all the more necessary at the EU level in the domain of research ethics, given its capacity to facilitate consensus and to compensate for the lack of ethics structures for particular scientific domains in most Member States. It needs to be mentioned that the EU-level ethics review procedure cannot stand in lieu of the special comitology procedure that has been established to approve (or not) human embryonic stem cell research at the EU level. The procedural character of this mechanism, by offering ample space for discussion and compromise, has decisively contributed to the handling of controversial ethical issues raised by research projects and to the strengthening of compliance with research ethics standards. It has also contributed to the finding of common ground between values across different cultures, which may influence the nature and contents of EU countries’ and third-country (i.e., non-EU) law on the issue.
Its flexible and open-ended character is assisted by the lack of a strict methodological protocol, which allows panels to develop self-generated working practices and methods of evaluating specific types of research, including resort to a range of scientific, legal, and ethics sources and procedural options. This extraterritorial influence has been facilitated by the EU’s general approach that the rights of EU data subjects should be protected regardless of where the data is processed, as enshrined in the General Data Protection Regulation. This process-based model is suitable in the context of research ethics because ethics is not a static field: it evolves as science evolves, and ongoing dialogue as a process is required. As a result of the effectiveness of this procedural, discursive framework, the ethics review panels have gradually become more than advisory bodies that provide technical and ethical information at an institutional level. Situated at the intersection between law, ethics, and science, these ad hoc entities have become important actors in their own right with respect to resolving, defining, and legitimating the boundaries of ethical acceptability regarding innovative EU-funded research. Beyond their purely procedural role, the ethics review panels have gradually come, through their reports, guidelines, and advice, to strive toward the creation of a culture of responsible research and of an ecosystem of best practices to which researchers can resort when looking for guidance, not only when designing their scientific work but, more importantly, throughout the implementation of the
proposed research. The Commission’s guidance to researchers on how to comply with the ethical principles, the ethics panels’ requests for further information on specific aspects of the research protocol, and the assessment of the respective requirements can also be considered of a substantive nature. The opportunity which researchers are given to respond comprehensively to ethics issues after screening or assessment (in cases where further/second assessments are recommended) cannot safely be described as mechanistic. Even if the researchers initially tick the ethics issues checklist as part of the ethics self-assessment, the process never ends there. In addition, where there are serious ethics issues, researchers can be called to a meeting in Brussels, and this engaging process does not fit the description of being mechanistic in nature. All these processes entail serious deliberation on ethics issues and can foster individual reflexivity. Given that this procedure operates only at the EU level, it relies entirely on the local implementation of the ethical requirements that it puts forward. The reports of the ethics review panels contain requirements grounded in specific pieces of EU legislation, such as the Clinical Trials Directive, the Animal Welfare Directive, or the General Data Protection Regulation, that require local compliance. Therefore, the local implementation of legal requirements that have been drafted and agreed at the EU level has led to a gradual embedding of a de facto EU-based approach to research ethics. This embedding has long been expected, given not only the implanting of ethical compliance requirements in all legal documents related to the implementation of Horizon 2020 but also the adoption of a series of EU-level hard- and soft-law instruments that directly or indirectly touch upon ethical/moral issues.
Due to its normative influence, the ethics review procedure has evolved incrementally in both size and scope, and there has been a significant increase in the number of research projects requiring ethics review and oversight in recent years. This Europeanization effect has been facilitated by the so-called ethics creep (Haggerty 2004), whereby ethics panels have unintentionally expanded their mandate to include a host of groups and practices that were undoubtedly not anticipated in the original research ethics formulations (Wolzt et al. 2009). In other words, this proceduralized model has been largely successful in promoting a workable, consistent, and procedural approach to dealing with EU-wide ethics review of research funding, in the absence of a formalized treaty mandate in the field. Finding common ground between values across different cultures in the frame of the discussions and opinions of the ethics review panels may pave the way for the framing of a common research ethics narrative, given the EU-wide and innovative character of the research protocols under funding. Therefore, the ethics review procedure is indeed promoting the Europeanization of research ethics, albeit in a restrictive manner in practice. This is mostly because it is limited to examining the ethical soundness of particular EU-funded research protocols, but also because the EC defers decision-making on quite a number of issues to the competent national authorities, as explained above. The question of the added value of the ethics review procedure is closely related to the complementary character of the Framework Programmes of Research in general. That is in fact one of the main items of discussion when negotiating the general and
specific research actions at the EU level, as it is necessary to define which activities would be better performed at EU level in order to strengthen the European research ecosystem. Moreover, in contrast to the possible harmonization effects of proceduralized models, the rotating composition of the panels, their fluctuating group dynamics, and the diverse range of projects under review may limit the potential for standardizing their operation. On occasion, where an individual research project raises a number of problematic ethical issues, the panel may offer a variety of readings on particular issues, as it is almost unavoidable for ethical viewpoints to be diverse. The outcomes of panel deliberations are usually reached on a case-by-case basis without any attempt at harmonization of opinions or verdicts. Additionally, the panel reports usually contain requirements that become part of the grant agreement or recommendations to researchers or the EC, the latter being non-contractual. These depend on the nature of the project and the specific ethics issues that arise. Moreover, the determination of the concept of ethics by transferring competences to committees of a very small number of experts can be problematic in terms of their (political) accountability, their representativeness of various ethical discourses, and the terms of their appointment. Although these panels function as a space for deliberative engagement with ethical questions on new and emerging technologies in Europe and for the framing of an EU-wide moral discourse on research ethics, their reliance on professional ethicists and the absence of moral judgments by non-expert citizens raise concerns about whether these entities effectively engage with broader lay values and/or adequately represent the public interest.
As a result, this particular decision-making paradigm, which conveys a “technocratic style” of analysis that derives legitimacy from the expertise of these ethics panels and the institutionalization of expert “ethics,” runs the risk of leading to a reductionist approach to research ethics bound by professional ethicists, namely, one that is limited by a set of predefined regulatory modalities and organizational procedures and does not provide space for debates on more conceptual and structural issues such as the interrelationship between science, technology, and society at large. The latter have been considered beyond the scope of ethical deliberation, at the expense of broader content (Zwart 2001). Therefore, this proceduralized model has resulted in a rather drastic delisting of the most controversial issues from the agenda of EU research funding discourse. The containment of ethical tensions related to the variety of sociotechnical imaginaries (e.g., see Jasanoff and Kim 2009) restricts the procedure’s moral imagination to the expected and desired outcomes of scientific and technological developments. At the same time, this approach exerts a normalizing role that leads to the entrapment of ethical reflexivity and to reductionist outcomes. Given the need to cope with the growing demand for a simplified selection procedure, and even for circumventing local barriers, the quest for further procedural centralization of the EU-wide ethics review procedure and its epistemic authority has become stronger. There is a question about how feasible this might be and whether there is space for developing standard operating procedures for the EU-level research ethics panels and ensuring a higher level of regulatory certainty and efficiency for the research community. It has been argued that an EU-wide
harmonized framework for research ethics is unlikely to be legally feasible, given that such a transfer of powers from the Member State level would require unanimous political agreement in the Council of Ministers, as well as at an inter-institutional level. Legal feasibility concerns aside, an exhaustive harmonization of research ethics standards at EU level might not be desirable, as ethical concerns regarding the conduct of research may be locally or nationally specific and linked to particular sociocultural beliefs or historical narratives. That is why, in the context of Horizon 2020, ethics is attributed to the national level, when participants have to “comply with national legislation, regulations and ethical rules in the countries where the action will be carried out”15 but also “[w]here appropriate, [to] seek the approval of the relevant national or local ethics committees prior to the start of the action.”16 Reflecting the need to respect the predominant role of Member States on issues of morality and ethics, the European Court of Justice, in an exercise of judicial self-restraint, has accepted the Member States’ competence in determining their own understanding of morality.17 There are also scientific domains and policy areas, such as the use of human stem cells, where Member States either do not allow such research endeavors or permit them under very stringent conditions.18 Therefore, a one-size-fits-all approach would not fit well with such moral pluralism and ethics diversity, given that this particular research ethics governance scheme has been assigned particular normative meanings and functions. It also runs the risk of privileging particular views over others, with the potential to generate further conflict between Member States, as well as between Member States and EU institutions, and of violating the “principle of noninterference” that applies in the field of ethics.
Within this frame, there is a need for a debate on whether increased harmonization of the opinions/verdicts of ethics review panels is necessary, and whether it is compatible with the ad hoc nature of these panels required by the relevant EU rules.
15 Art. 23, para. 9 Regulation 1290/2013/EU (note 9). See also Declaration No 34 on Art. 163 TFEU, O.J. 2007, C 306/261: "The Conference agrees that the Union's action in the area of research and technological development will pay due respect to the fundamental orientations and choices of the research policies of the Member States."
16 Art. 23, para. 9 Regulation 1290/2013/EU.
17 ECJ Case 121/85 EU:C:1986:114 margin number 2 – Conegate v. HM Customs & Excise; ECJ Case 34/79 EU:C:1979:295 margin numbers 15, 22 – Henn and Darby; Advocate General Bot Case C-34/10 EU:C:2011:669 margin number 45 – Brüstle; ECJ Case C-506/06 EU:C:2008:119 margin number 38 – Mayr; ECJ Case C-159/90 EU:C:1991:378 – Society for the Protection of Unborn Children Ireland v. Grogan and Others [SPUC].
18 31st recital (see also Art. 19, para. 4) Regulation 1291/2013/EU.

Concluding Remarks

The operation of the EC ethics review procedure has contributed to the shaping of an EU "responsible innovation" narrative in which ethics is to be seen as an inherent component in the design process of technologies, rather than as a constraint on scientific advances. It has been instrumental in promoting good governance in research within the EU, where research ethics is not only seen as a major source of assessment and justification but has also evolved into a normative instrument for policy and legislation. In addition, because ethics review is legally mandated in a variety of cases within the framework funding programs, it has become an authoritative point of reference for resolving ethical disputes that may arise with respect to the conduct and funding of research under such programs. The analysis of the procedural modalities and governance structures in the field of EU research ethics indicates the growing capacity of ethics review to shape and embed ethical rules and practices in EU research and innovation. Given the existing moral pluralism on the ethical permissibility of certain research practices, the proceduralized approach of the ethics review panels of the European Commission provides the necessary space for accommodating this evident ethical pluralism. The proceduralized design and modus operandi of this evaluation framework seem, in principle, to match well the particular features of research and innovation at the EU level. This is particularly important where EU Member States cannot agree ex ante on the substantive content of aspects of research ethics and/or of fundamental ethical principles. Given that the concept of ethics remains equally diverse, and that there is a plurality of national/regional approaches and inevitable ethical diversity on certain controversial issues, the Commission's approach offers a wide range of potential solutions and nonbinding viewpoints on how problematic ethical issues should be interpreted and assessed in the context of an independent review procedure.
Our initial findings indicate that this proceduralized form of organizing ethics appraisal at the EU level, as a regulatory technique for creating inclusive forms of moral debate, has in fact provided sufficient space for the reconsideration of standards in a challenging area of research ethics in which those standards are highly contested (Hermerén 2009). The advantage of such an organizational model is that it helps embed ethics in the design of research protocols in fast-moving and innovative fields of research, rather than treating ethics as an add-on or, even worse, as a red-tape mechanism. It has also helped to shape a "responsible innovation" narrative in which ethics is to be seen as an inherent component in the design process of technologies, rather than as a constraint on advances in technology. In sum, it has been instrumental in promoting good governance and responsible innovation (von Schomberg 2011) in research and innovation within the EU. The integration of ethics can also be seen in the calls for sociotechnical integration and for greater public engagement with science and technology. The procedure's cross-program operation could also contribute to the shaping of a European understanding of ethics, as well as of a more coherent concept of ethics in the EU under the umbrella of EU values and fundamental rights. The panels' operation signifies a shift in the regulatory governance of research funding from a purely technocratic preserve to one in which sociocultural considerations are also seen as important. The institutionalized modes of ethics engagement that take place in the frame of this
procedure may become a political technology that facilitates the identification of practical ethical solutions to cultural conflicts. The ability of this proceduralized form of organization, as a regulatory technique, to create inclusive forms of moral debate could pave the way for the reconsideration of ethical pluralism in the domain of research ethics in the EU. At the same time, it raises questions about the normative influence of the work of ethics review panels on the way ethics is defined at the EU level and on the way European values are gradually being shaped in technological domains such as energy, security, and surveillance technologies. Research ethics experts are emerging as a new epistemic power group capable of brokering difficult cultural deals and defusing conflicts via a process-based framework of ethical analysis. The institutionalization of ethics rests on the so-called ethical expertise paradigm, which has in turn been questioned as a scientific domain on the grounds of whether it possesses a distinct methodology and special reservoirs of knowledge. At the same time, determining the concept of ethics by transferring competences to committees of a very small number of people can be problematic in terms of democratic legitimacy. It also raises the questions of which understanding of ethics should be applied within the EU, how possible gaps are to be filled, and whether ethics experts could become the honest brokers of research funding structures worldwide. As a result, this particular decision-making paradigm, which conveys a "technocratic style" of analysis, runs the risk of leading to a reductionist, institutionalized approach to research ethics, namely, one that is limited to the assessment of the ethical soundness of a research protocol against a set of predefined questions, the EU acquis on research ethics, and the various guidance documents produced at the EU level.
It should also be noted that the application of this proceduralized model has resulted in a rather drastic delisting of the more controversial issues from the agenda of EU research funding discourse, namely those relating to more conceptual and structural issues or to the interrelationship between science, technology, and society at large, which have been considered beyond the scope of ethical deliberation. As a result, ethics review increasingly retreats to procedural aspects of ethical deliberation and to the process of weighing "self-determination" against "harm to others," at the expense of broader content (Zwart 2001). The tick-box approach followed seems to rely on what have been called sociotechnical imaginaries (e.g., see Jasanoff and Kim 2009) and thereby restricts the procedure's moral imagination to the expected and desired outcomes of scientific and technological developments. This mechanistic approach to research ethics exerts a normalizing role that leads to the entrapment of ethical reflexivity and to reductionist outcomes. On the basis of the abovementioned remarks, there may be a need for multistakeholder participation and public debate to facilitate a move away from an expert-driven approach that serves technological innovation within the limits set by the competences of the EU in the field of scientific research. These panels are part of an institutional experiment that constantly reenacts and explicitly reshapes what European values are and what role ethical principles and safeguards play in the frame of technological innovation. The operation of the ethics review procedure also indicates a process of "incidental proceduralization," the legitimacy of which flows from the
need to ensure the effectiveness of EU law. While procedure is sometimes presented as a response to the institutional crisis in the EU, a check-the-box compliance approach may not be able to shape a common EU ethics narrative if its operation is not enriched with public engagement initiatives and an EU accreditation scheme for all local and national research ethics committees that would render the ecosystem more inclusive.
References

Dingwall R (2008) The ethical case against ethical regulation in humanities and social science research. Contemp Soc Sci J Acad Soc Sci 3:1–12
Haggerty KD (2004) Ethics creep: governing social science research in the name of ethics. Qual Sociol 27(4):391–414
Hellstrom T (2003) Systemic innovation and risk: technology assessment and the challenge of responsible innovation. Technol Soc 25:369–384
Hermerén G (2009) Accountability, democracy and ethics committees. Law Innov Technol 1(2):153–170
Hoeyer K, Dahlager L, Lynöe N (2005) Conflicting notions of research ethics: the mutually challenging traditions of social scientists and medical researchers. Soc Sci Med 61(8):1741–1749
Holland K (2007) The epistemological bias of ethics review. Qual Inq 13:895–913
Jasanoff S (2005) Designs on nature: science and democracy in Europe and the United States. Princeton University Press, Princeton
Jasanoff S, Kim S-H (2009) Containing the atom: sociotechnical imaginaries and nuclear power in the United States and South Korea. Minerva 47:119–146
Sass H-M (2001) Introduction: European bioethics on a rocky road. J Med Philos 26:215–224
Spike J (2005) Putting the "ethics" into "research ethics". Am J Bioeth 5:51–53
Stirling A (2005) Opening up or closing down? Analysis, participation and power in the social appraisal of technology. In: Leach M, Scoones I, Wynne B (eds) Science, citizenship and globalisation. Zed Books, London
von Schomberg R (2011) The quest for the "right" impacts of science and technology
Wolzt M, Druml C, Leitner D, Singer EA (2009) Protocols in expedited review: tackling the workload of ethics committees. Intensive Care Med 35:613–615
Zwart H et al (2012) Ethical expertise in policy. In: Chadwick R (ed) Encyclopaedia of applied ethics, 2nd edn. Elsevier, Oxford, pp 157–164. https://doi.org/10.1016/B978-0-12-373932-2.00006-5
4 Organizing and Contesting Research Ethics: The Global Position

Mark Israel
Contents
Introduction
Background
Key Issues and Current Debate
  Interdisciplinary Transfer
  Transnational Policy Migration
  Indigenous Research Ethics
Current Debate, Future Issues, and Proposed Solutions
References
Abstract
The machinery of research ethics oversight has grown in size, disciplinary ambit, and geographical reach over the last 50 years, generating overlapping patterns of regulation, statements, and guidelines that operate at supranational, national, local, community, discipline, topic, and institutional levels. These documents generate intersections as well as interstitial spaces as governments, research agencies, institutions, associations, and supranational bodies attempt to assert, extend, and sometimes deny their authority over particular practices. While commentators have noted the widening control and intensification of the gaze that has occurred, the nature of, the philosophical and actuarial support for, and the effectiveness of this oversight have all been contested by researchers, research institutions, and communities of participants.
M. Israel (*) Australasian Human Research Ethics Consultancy Services, Perth, WA, Australia Murdoch University, Perth, WA, Australia University of Western Australia, Perth, WA, Australia e-mail: [email protected] © Springer Nature Switzerland AG 2020 R. Iphofen (ed.), Handbook of Research Ethics and Scientific Integrity, https://doi.org/10.1007/978-3-030-16759-2_63
Keywords
Research ethics review · Principlism · Policy transfer · Policy migration · Indigenous research ethics
Introduction

The machinery of research ethics oversight has grown in size, disciplinary ambit, and geographical reach over the last 50 years, producing overlapping patterns of regulation, statements, and guidelines that operate at supranational, national, local, community, discipline, topic, and institutional levels. Much of this machinery has drawn on principles and organizational structures first developed for biomedical research in the United States, principles and structures that have subsequently transferred across countries and disciplines. However, these documents and structures also leave interstitial and contested spaces where governments, research agencies, institutions, associations, and supranational bodies attempt to assert, extend, and sometimes deny their authority over particular research, audit, and evaluation practices. While commentators have noted the widening control and intensification of the gaze that has occurred, the nature of, philosophical and evidential support for, and the effectiveness of this oversight have been challenged by researchers, research institutions, and communities of participants, and the resultant patterns differ between countries and sometimes also markedly between institutions and communities even within the same country.
Background

A conventional story is told of how research ethics regulation started in the United States following World War II, drawing on lessons learned in the Nuremberg Doctors' Trial (▶ Chap. 62, "Holocaust as an Inflection Point in the Development of Bioethics and Research Ethics" by Gallin and Bedzow). It is a comforting tale of progress: a world, shocked by the horrors of Nazi experimentation, devised codes of ethics aimed at ending the abuse of human subjects. However, reality generally refuses to fit into such a tidy linear narrative, and the born-out-of-Nuremberg narrative has become something of a foundation myth (Dingwall and Rozelle 2011). Codes of ethics existed in the medical sciences in some countries (including Germany) long before World War II. In addition, the Nuremberg Code was largely ignored by the United States, some of whose researchers intensified exploitative research on vulnerable populations both inside and outside the country. For example, a Presidential Commission reporting to President Obama documented how, soon after the war, researchers funded by the United States Public Health Service deliberately infected human subjects in Guatemala with sexually transmitted diseases (Presidential Commission for the Study of Bioethical Issues 2011). Some of the cases of abuse of vulnerable patients within medical experimentation were
exposed in the 1960s in the United States by Henry Beecher (1966) and in the United Kingdom by Maurice Pappworth (1967). However, ideas that underpinned the Nuremberg Code did play a role in the development through the 1950s of the World Medical Association’s Declaration of Helsinki (1964) which through its various subsequent revisions became the fundamental declaration of biomedical research ethics across the world. Subsequently, other international biomedical research ethics documents – including the International Ethical Guidelines for Biomedical Research Involving Human Subjects (Council for International Organizations of Medical Sciences (CIOMS), in 2002 and then 2016) and the United Nations Educational, Scientific and Cultural Organization’s (UNESCO) Universal Declaration on Bioethics and Human Rights (2005) – extended the geographical and disciplinary reach of biomedical research ethics declarations by claiming as universal particular conceptions of research ethics. The dominant contemporary approach to research ethics has its roots in the United States. In 1979, in response to a series of scandals in the United States, the Belmont Report was published (NCPHSBBR 1979). The Belmont Report contained ethical principles and guidelines to protect subjects in human research and argued for the need to shift research ethics guidelines from a set of rules to a series of broad principles – respect for persons, beneficence, and justice – on which subsequent rules might be based. These principles could be used by researchers and research ethics reviewers to identify ethical practices associated with matters such as informed consent, assessment of benefits and risks, and recruitment of research subjects. 
One of the people who worked on the Belmont Report, Tom Beauchamp, became a leading proponent of the approach adopted by the report, an approach that came to be known as "principlism." Beauchamp argued that the principles constituted universal norms that any moral person would share irrespective of his or her own philosophical inclinations. Over the 1960s and 1970s, health research in the United States was subjected to research ethics review by institutional review boards. This meant that external experts, constituted as committees, were to deliberate and decide whether research conducted in a particular context was sufficiently ethical to proceed, or whether changes had to be made to its design before it could continue. Rather than aiming to protect patients, these requirements were introduced to protect institutions from legal liability (Stark 2012). Their ambit was extended in two ways. First, on the basis of the Belmont Report, the Federal Department of Health, Education, and Welfare required review for all research conducted in institutions that received funding from that department, irrespective of discipline. In 1991, these structures, mechanisms, and policies were entrenched in Federal regulations known as the "Common Rule." Second, recipients of United States health research grants around the world were required to set up their own review processes in accordance with United States specifications. The development of research ethics regulation and review structures in several other countries was stimulated by a mixture of medical scandals and the need to comply with developments in the United States. In Canada, the three key research agencies released the Tri-Council Policy Statement (TCPS) in 1998 following a
series of scandals. In its second edition, the 2010 TCPS established national standards and procedures for human research ethics and its review by research ethics boards, and tied institutional and individual eligibility for research council funding to compliance. In New Zealand, the Cartwright Inquiry into cervical cancer experiments on women in Auckland resulted in legislation that mandated ethics review for health and disability research by a national network of committees, while researchers in other disciplines were subject to university ethics review committees that implemented institution-specific guidelines. In both cases, the national guidelines were based on principlism, though they also reflected differences in national legislation in relation to privacy, human rights, and Indigenous peoples. In the United Kingdom, review for medical research was introduced in 1968 in order to gain access to funding from the United States (Hedgecoe 2009), entrenched by the health service, and extended to social care in 2009. In 2005, partly to preempt further extension of biomedical regulation, the Economic and Social Research Council published a research ethics framework for the disciplines that it covered, and by 2010 this framework mandated review for researchers seeking council funding (last updated as Economic and Social Research Council 2015). Unlike in North America, the United Kingdom, and New Zealand, the development of research ethics guidelines in Australia was not a response to scandal, though there had been evidence of unconscionable medical research. Instead, the development of review was driven largely by the funding agency responsible for biomedical research and occurred in response to the need to secure funding from the United States as well as a broader drive toward public sector accountability.
From 1966, the intensity and ambit of regulation grew from the medical sciences; by the mid-1980s, regulation covered social and behavioral research and then, later, the humanities and creative arts. In 1999 and again in 2007, the two key research councils in Australia produced a National Statement covering human research ethics that entrenched the Belmont principles and specified the processes and composition required for human research ethics committees. So, even among the English-speaking countries of the Global North, each jurisdiction has its own history and pattern of regulation and its own reasons for codifying, intensifying, or reconstructing arrangements. The resulting pattern can vary markedly even among geographically close countries with similar social and political systems and cultural values and a long history of close collaboration. For example, the Nordic countries have each produced a different system. Following legislative amendment in 2008, Sweden required researchers to seek review for all projects, irrespective of discipline, that involved sensitive data, criminal activity, or invasive physical intervention, or that carried a risk of physical or psychological harm. Norway has three separate national committees of research ethics; each has provided advice for its own set of disciplines, though only researchers working in the areas of health and experimental psychology have been required to seek review from the system of regional research ethics committees. Denmark also has regional and national committees for biomedical research but explicitly rejected the need for review processes to be extended to the social sciences and humanities. Finland
has published different voluntary ethical guidelines for different disciplinary clusters and left it to individual universities to decide whether and how to comply. It is difficult to do justice to the different patterns of regulation that exist across the Global South. More detail can be found for each jurisdiction in the database compiled annually by the United States Office for Human Research Protections (2019) of over 1,000 laws, regulations, and guidelines on human subjects' protections in 131 countries. Almost all the data relates to health research, but the listing does now contain a little material on the social sciences. A global review of regulatory patterns for the social sciences can also be found in Israel (2015). Most Latin American countries have national regulations covering clinical research. However, many do not have a comprehensive system of research ethics committees, and those that do may not have guidelines for overseeing and regulating research, relying instead on committees with overlapping jurisdictions and inconsistent approaches. The obvious exception is Brazil. Brazil first issued guidelines for medical experimentation on human subjects in 1988. In 1996 and 2012, the National Health Council adopted guidelines that extended their ambit to all research involving human participants; confirmed the importance of the ethical principles of autonomy, beneficence, non-maleficence, justice, and equity; and led to a system that included a set of research ethics committees under a National Commission for Ethics in Research (CONEP) with consultative, deliberative, normative, and education roles (Israel et al. 2016). Again, in sub-Saharan Africa, research ethics has largely been driven by bioethics. Most countries do not have national research ethics guidelines, though increasing numbers have research ethics committees for health research, particularly in English-speaking countries.
The constitutions of some countries protect the right of participants in scientific research to grant consent, and most of the guidelines that do exist are very close in character and form to international biomedical research ethics documents. The most comprehensive system has been created in South Africa. In 1977, the South African Medical Research Council produced its Guidelines on Ethics for Medical Research. Following a scandal involving breast cancer research, nationally binding ethical guidelines for health research were published by the Department of Health in 2004 and again in 2015 based on principlism (Department of Health 2015). The 2004 Health Act also established the National Health Research Ethics Council (NHREC) with responsibility for the oversight of local research ethics committees and researchers. While initially contested, the guidelines were interpreted as covering all research, not just health. In Asia and Oceania, there are clear divisions between better-funded research communities, particularly those around the Arabian Gulf and Eastern Asia, that have national research ethics guidelines for biomedical research and the rest of the region that do not. For example, the Indian Council of Medical Research first published a policy on biomedical research ethics in 1980 and followed with guidelines in 2000, 2006 and 2017 (Indian Council of Medical Research 2017). Qatar’s Guidelines, Regulations and Policies for Research Involving Human Subjects were published by its Supreme Council of Health (Qatar Supreme Council of Health 2009). The Qatari policies established a set of rules and a review mechanism to cover all research but
offered exemptions to some low-risk social science research. Following 2013 legislation, the Philippine government required that all health and health-related research involving human participants be reviewed by a research ethics committee accredited by the Philippine Health Research Ethics Board (PHREB). The PHREB published new national guidelines in 2017 (Philippine Health Research Ethics Board 2017). Singapore enacted its Human Biomedical Research Act in 2015 and accompanying regulations in 2017. There are some commonalities in regulatory patterns: in most countries, the need to establish review processes followed requirements set by research councils, either in the United States or in their own nations, or by those international publishers who require all human research to have undergone ethics review prior to publication. In almost all cases, guidelines and review processes were first established to cover the health sciences; in many cases, other disciplines were drawn in either through incremental revision of existing documents and structures or through the creation of more-or-less parallel requirements. The impact beyond health has been uneven: the United States has national legislation that covers all disciplines; some countries now have national guidelines consciously generated for all disciplines (Canada and Australia); many have biomedical guidelines that have been generalized to some degree across all disciplines (South Africa, Taiwan, Sweden, the Netherlands, Mozambique, Senegal, Saudi Arabia, Qatar, Thailand); some have parallel guidelines for different sets of disciplines (Norway, Brazil, the UK, Finland); some leave it to individual institutions to develop guidelines outside health (New Zealand, Hong Kong); and many have either decided not to regulate beyond health (Denmark) or at least have not yet done so (most European countries, including France, Germany, and Italy; India; and almost all other low- and middle-income countries).
Key Issues and Current Debate

Defenders of principlism argue that through its involvement in the prospective institutional review of the ethics of research, principlism structures a peer review process that should and to some degree does improve the accountability of researchers, facilitate higher-quality research, protect core academic values, and defend the rights of research participants (Jennings 2012; Hunter 2018). Principlism has also been successful in introducing particular approaches to ethics – consequentialism and deontology – to a wider group of researchers. There is empirical evidence that some committees do work toward these goals (Hedgecoe 2008) and, to the extent that they do not, this need not necessarily be attributed to principlism. The architects of principlism also suggest it approaches problem-solving in a way that is "neutral between competing religious, cultural, and philosophical theories" (Gillon 1994: 188). This apparent professional neutrality might be a significant advantage in contexts where divisions between contending religious or political ideologies are deemed unbridgeable and have given rise to violence. However, other analysts have adopted a more critical stance arguing that principlism privileges a Eurocentric model of individual autonomy, ignores and
marginalizes other significant ethical traditions, and imposes a biomedical model of research on all other disciplines despite having no real understanding of the breadth of research practice outside the biomedical sciences (Schrag 2010). In addition, many international declarations have been criticized for their failure to consider how such principles might be applied in specific social and economic contexts. Research ethics regulation and governance are shaped by government policies, economic indicators, social trends, institutional politics, and resources. Some stakeholders have extended the remit of ethics review in ways familiar to those acquainted with the literature on social control, both intensifying its gaze and expanding the areas for which it claims oversight. In many cases, this may have been motivated by a desire to foster ethical research or stop the exploitation of participants. However, many social analysts of research ethics regulation have not been so kind in their assessment. Some have termed this growth "ethics creep" (Haggerty 2004) or "ethical imperialism" (Schrag 2010). Guta and his colleagues argued that, in Canada at least, the pattern is more complex – a "simultaneous growing and retreating of ethics review as it expands into new terrain while losing control of its traditional domain" (2013: 307). Schrag's use of imperialism referred specifically to the extension of a biomedical model to other disciplines. However, there are two other features of research ethics imperialism that are worth considering: transnational policy migration and continuing interference in Indigenous peoples' self-determination.
Interdisciplinary Transfer

There are other ways of approaching ethics, and it might be argued that the growth of principlism displaced earlier classical and religious traditions of virtue ethics and foreclosed the possibilities that might have been presented by feminist and critical philosophies. Before they were overtaken by national codes, some discipline-based associations in North America, Australasia, and the United Kingdom had created their own guidelines. This was more likely to happen in countries with larger concentrations of researchers and research infrastructures and in larger disciplines, often those with a close connection to a profession. In the social sciences, the early professional codes included those of the Society for Applied Anthropology in the United States (1948), the American Psychological Association (1963), the British Sociological Association (1968), the American Sociological Association (1970), and the American Anthropological Association (1971). As might be expected from the social sciences, the guidelines and codes authorized by these associations tended to avoid excessive claims to universality, but their content and nature were contested by association members, and they tended to be shorter and less comprehensive in their coverage of research ethics than those generated by government bodies and research councils.

While each country has a unique history of research ethics regulation and review, some broad generalizations may be worth making. In most cases, the first regulations were aimed at biomedical researchers. In some cases, the ambit of regulations slowly expanded with each iteration; in other countries, the scope of biomedical regulations was simply reinterpreted to cover all research. In many countries, this expansion has not happened, and there are no national guidelines covering non-biomedical research, whether by design or by omission. In only a few countries have discipline-specific research guidelines been created outside the biomedical sciences, though some commentators in the United Kingdom have argued that the social science guidelines there are too heavily influenced by health models and have been easily displaced by health regulations when social science research is undertaken in a health setting.

58  M. Israel
Transnational Policy Migration

The first part of this chapter noted that codes of research ethics based on principlism and its associated institutional review boards started in the United States but were exported globally. This global transfer of policy formed part of larger international flows of knowledge, ideology, capital, students, and academics. Initially, several countries adopted research ethics review in order to gain or maintain access to United States health research funding. Regulators in the United States required that other countries adopt, uncritically and formulaically, the regulations created in the United States for the United States.

There have been other drivers of policy transfer. In the last few decades, large pharmaceutical companies have sought new venues for multicenter drug trials, driven by the search for lower risks of litigation, low labor costs, pharmacologically "naive" participants, and weaker ethics review and other regulatory processes. The new hosts for drug trials are largely low- and middle-income countries. Seeking a regulatory framework in order to protect their citizens or to remain competitive in the market for international research, these countries often imported existing frameworks from elsewhere. These frameworks may already have been familiar to postgraduate students and academics returning from the Global North and were supported by training programs funded by European, North American, and international institutions such as the World Health Organization and the Fogarty International Center of the United States National Institutes of Health.

While building the capacity for research ethics governance in low- and middle-income countries is important, there are dangers that these programs rely too heavily on professionals from the Global North, support inappropriate regulation, repeat mistakes already made in the Global North, and result in a one-way flow of ideas and influence, thereby undercutting competing Southern-based claims to expertise (Israel 2019).
Indigenous Research Ethics

The universalizing nature of international and national research regulations initially paid little attention to other ways of attending to ethics (Tauri 2018). More recently, however, research ethics has become a site of resistance for Indigenous peoples. As a result, several countries, including the United States, Canada, Australia, New Zealand, Taiwan, and the Philippines, have generated codes and guidelines related to research on Indigenous peoples. In some cases, these are located in legislation or form part of larger national guidelines; in other cases, they have been created by Indigenous peoples, either to structure negotiation with outsiders or to establish the relationship between Indigenous concepts of knowledge, research, and ethics. Only in a few cases do these guidelines refer to, let alone reflect, Indigenous epistemologies.

In the United States, some American Indian and Alaskan Native communities and some agencies have created guidelines or formal requirements that govern research with particular Indigenous communities. For example, the National Science Foundation published Principles for the Conduct of Research in the Arctic in 1990 "to promote mutual respect and communication between scientists and northern residents" (Social Science Task Force 1990). We might have expected future revisions of the Common Rule to take account of Indigenous communities, given the US Federal government's commitment under President Obama (2009) to undertake "regular and meaningful consultation and collaboration" with tribal officials where policy initiatives might have an impact on their communities, but that may now be less likely given political changes in the United States (Israel and Fozdar 2019).

The Canadian Tri-Council Policy Statement identifies standards that must be met at each stage of the research process with First Nations, Inuit, and Métis peoples. These include community engagement, study design and methods, data governance, recruitment, capacity building, interpretation of findings, use of cultural knowledge, respect for community protocols and institutional requirements, and knowledge translation (Tri-Council 2010).
TCPS2 builds on the Ownership, Control, Access, and Possession (OCAP) Framework devised by the National Aboriginal Health Organization to enable First Nations communities to negotiate productive and equitable research relationships (First Nations Information Governance Centre 2007).

In Australia, the Australian Institute of Aboriginal and Torres Strait Islander Studies published its Guidelines for Ethical Research in Australian Indigenous Studies (GERAIS) in 2002, updating them most recently in 2012 (Australian Institute of Aboriginal and Torres Strait Islander Studies 2012). The guidelines outline principles to inform research involving Aboriginal and Torres Strait Islander peoples and espouse the values of Indigenous rights, respect, and recognition; negotiation, consultation, agreement, and mutual understanding; participation, collaboration, and partnership; and benefits, outcomes, and giving back – marking a significant move away from the concerns of traditional medical ethics. Further guidelines published in Australia by the National Health and Medical Research Council (2018), Ethical conduct in research with Aboriginal and Torres Strait Islander Peoples and communities, define six core values – spirit and integrity, cultural continuity, equity, reciprocity, respect, and responsibility – to govern research with or for Indigenous peoples. These guidelines go further than previous NHMRC statements in pointing to the need for research to be led by Indigenous researchers, guided by Indigenous communities' priorities, and/or conducted according to Indigenous standpoints and research methodologies.
In New Zealand, researchers have to pay attention to obligations associated with the 1840 Treaty of Waitangi between the British Crown and the Māori people. In 2010, Māori members of research ethics committees drafted Te Ara Tika, guidelines for Māori research ethics, for the Health Research Council of New Zealand (Hudson et al. 2010). Te Ara Tika called for research informed by ideas of respectful conduct, in which tangible outcomes are achieved for Māori communities, and those communities are able to assume power in the research relationship and responsibility for the outcomes of a project. Initially, the document was marginalized, and ethics committees reduced Māori political and economic interests to a matter of culture. More recently, however, Te Ara Tika has become a key resource in New Zealand and is frequently cited.
Current Debate, Future Issues, and Proposed Solutions

It is easy to imagine the continuing and unimpeded rollout of principlism. Other disciplines not traditionally linked to human research seem likely to be affected by further disciplinary transfer as they extend their engagement in empirical research with people or become involved in interdisciplinary or multidisciplinary work. For example, biophysical and marine scientists are linking their work to human data, scientists active in information and communication technology are increasingly interested in user-experience data, and data scientists are increasingly aware of the difficulty of maintaining the anonymity of data sets.

And yet, the stories that we tell of research ethics governance could be more complex, more aware of contradictions, and, indeed, more respectful of difference. When providing an overview of so many jurisdictions, it is hard not to succumb to overgeneralization. Hedgecoe (2012) quite rightly calls out those analysts of research ethics who selectively draw on data to claim that the nature of research ethics review processes, structures, and approaches, and the problems confronted by researchers in the face of such systems, are universal. Hedgecoe calls this "presumed isomorphism" and is particularly critical of those who assume that what happened in their own country, particularly when this is the United States, explains or predicts what unfolds elsewhere. Gan and Israel (2019) have written similarly:

Many commentators on research ethics have been based in the Global North and, when we find research ethics regulations that look very much like our own, have tended to make assumptions about the ways in which these patterns of regulation have unfolded. Apart from being disrespectful to local histories, insensitive to difference and intellectually lazy, failure to engage with the rich history of regulatory practices in different jurisdictions makes it hard for research ethicists to learn from others. That is hardly a position with which most people working in the field of research ethics would want to be associated.
Instead, Hedgecoe calls for empirically based accounts that might generate a sociology of research ethics attentive to the structural and contextual features that shape particular national approaches. Hedgecoe, however, does not explore the value of a comparative method that allows the exchange of possibilities and strategies between countries. And there are some interesting stories to tell that point to the value of political mobilization by disciplines and communities and the role that might be played by funding bodies in refashioning research ethics legislation, policy statements, and review processes. In the last part of this chapter, I offer one account from Brazil, a second from Taiwan, and a third from a European-funded international project working with a community in South Africa, each of which challenges the supposedly inevitable rollout of universalist and principlist codes.

In Brazil in 2012, the National Health Council adopted Resolution 466/12, which provided new guidelines and rules for all research involving humans, identifying the rights and responsibilities of the state, researchers, and research participants. Resolution 466/12 was rejected by associations representing Brazilian anthropologists, sociologists, and political scientists. However, the 2012 Resolution allowed for a special resolution for the social sciences and humanities. Initial attempts to curtail the impact of the resolution were not successful. In 2013, a working group began work on the special resolution but encountered difficulties when it sought to challenge the biomedical "colonizing posture" of the central research ethics agency (Guerriero and Bosi 2015: 2622). However, in 2016, the council approved a new resolution for the social sciences and humanities and those disciplines that draw on methodologies from those areas. Among other matters, Resolution 510/16 required equitable representation for all disciplines engaged in human research on the national and local bodies responsible for making decisions about research ethics. The new resolution went further than many other national statements of research ethics in recognizing scientific and academic freedom, human rights, and the role of research in expanding and consolidating democracy.
It remains to be seen what impact Brazil's 2018 turn toward authoritarian government will have on the implementation of such sentiments.

For Linda Tuhiwai Smith (2012), a Māori academic, the struggle for decolonization requires, among other things, a critical consciousness, a way of reimagining the world based on epistemologies that differ from those of the dominant paradigm, and the deployment of these ideas at a particular moment to disturb the structures that underpin the status quo. Decolonization of research methodologies is a long-standing international project for Indigenous peoples; however, less attention has been paid to decolonizing research ethics. And yet, research ethics has proved to be one of the tools for disciplining the production of knowledge – some Indigenous groups in North America, Australasia, and Southern Africa have adopted the practices of research ethics review as one way of shaping access to Indigenous communities.

For example, expansion of the universalist model of research ethics in Taiwan was disrupted when relations between the state and Taiwan's Indigenous peoples changed. This moment reflected larger-scale processes of democratization and Taiwanization, processes that were sometimes antagonistic toward decolonization. It was also made possible by a period when Indigenous legislators held the balance of power in the national legislature and used it to formalize communal rights, rights that might be asserted when negotiating with external researchers. In 2011, the Human Subjects Research Act mandated that researchers conducting biomedical and health-care research involving Indigenous peoples not only had to seek individual informed consent but also had to seek consent from Indigenous communities in relation to their participation, the publication of research results, and commercial benefits. This requirement was partly a reaction to a series of biomedical research scandals involving Taiwan's Indigenous peoples. In late 2015, the consultation and consent processes required by the 2011 Act were stipulated by the Council of Indigenous Peoples (Gan and Israel 2019). The 2015 regulation also required financial benefits agreed with Indigenous communities to be distributed through the Aboriginal Development Fund.

There is also no reason why international instruments need to ignore local cultural norms. Funded by the European Research Council, the TRUST Project (2018) developed a Global Code of Conduct for Research in resource-poor settings, together with a range of accompanying resources. These included case studies of "ethics dumping," in which lower standards for ethics regulation in the Global South have been exploited by organizations from the Global North; an exemplar code for Indigenous peoples (a code of ethics for the Indigenous San people of Southern Africa) (South African San Institute 2017); and a guide to implementation. The Global Code was created to provide accessible guidance for all research disciplines and focused on asymmetrical research collaborations in which partners might face considerable imbalances of power, resources, and knowledge. The 23 Articles of the Code were organized around four values – fairness, respect, care, and honesty – and drew on existing good practices developed by communities and researchers in the Global South. These included collaboration in setting a research agenda; planning and conducting research; negotiating free, prior, and informed consent; building local capacity to undertake research; sharing benefits; mitigating risks to participants; avoiding unfair exploitation of local human or natural resources; and not engaging in corruption.
The TRUST project was aware of the differences that may exist between formal ethics requirements and the ways researchers actually behave in situ. It recognized that we need to support ethical reflexivity and integrity and respond to the challenges posed by multidisciplinary and transnational research. However, the project may find it difficult to have an impact on actual research practice in many disciplines. Despite pointing to research from other disciplines, the documentation is strongly based on developments in global health research. This is partly – but only partly – justifiable because the issues have been most obviously confronted there. Some of the concepts intended for use across all disciplines are not yet familiar outside health research. While the code is expressed at a high level of abstraction, there is a danger that, if interpreted without sensitivity, its articles might disrupt existing ethical practices better suited to particular methodologies and contexts.

It is tempting to view as inexorable the expansion of a particular way of conceiving and responding to issues of research ethics. Starting from the international foundation myth associated with Nuremberg or the more recent Belmont Report, we appear to have witnessed an expansion of research ethics governance and regulation across space and research disciplines. We have also seen an intensification of the surveillance of researchers as the research ethics machinery has been backed up by the audit, compliance, and disciplinary mechanisms associated with codes of research integrity and misconduct. Telling the story in this way helps create a shared sense of purpose, either in support of or in opposition to the research ethics rollout. However, such an approach misses the value of a comparative methodology. If we attend more closely to local, regional, and national histories, we should gain a better understanding of how policy and practices emerge and how principlism has been incorporated, accommodated, or resisted. We are likely to find that in some jurisdictions principlism has become largely accepted, while in others it has not been applied (or at least not to some kinds of research). Finally, in some contexts, we might learn (and learn from) how it has been challenged or even displaced by other cultural values, discourses of human rights or self-determination, and an awareness of the politics of global ethics.

Acknowledgments

This chapter draws on and expands material from Israel (2015, 2019), originally published in Research Ethics and Integrity for Social Scientists: Beyond Regulatory Compliance and The SAGE Handbook of Qualitative Research Ethics. Any material originally from these works has been modified and reproduced with permission of SAGE Publications Ltd. It also draws on material drafted for and to be published in Gan and Israel (2019) and Israel and Fozdar (2019).
References

Australian Institute of Aboriginal and Torres Strait Islander Studies (AIATSIS) (2012) Guidelines for Ethical Research in Australian Indigenous Studies (GERAIS). Available at: http://www.aiatsis.gov.au/research/docs/GERAIS.pdf. Accessed 23 Dec 2013
Beecher HK (1966) Ethics and clinical research. N Engl J Med 274(24):1354–1360
Council for International Organizations of Medical Sciences (CIOMS) (2016) International ethical guidelines for health-related research involving human subjects. Available at: https://cioms.ch/wp-content/uploads/2017/01/WEB-CIOMS-EthicalGuidelines.pdf. Accessed 6 Nov 2018
Department of Health (2015) Ethics in health research: principles, processes and structures, 2nd edn. Department of Health, Pretoria
Dingwall R, Rozelle V (2011) The ethical governance of German physicians, 1890–1939: are there lessons from history? J Policy Hist 23(1):29–52
Economic and Social Research Council (ESRC) (United Kingdom) (2015) Framework for research ethics. Economic and Social Research Council, Swindon. Available at: https://esrc.ukri.org/files/funding/guidance-for-applicants/esrc-framework-for-research-ethics-2015/. Accessed 13 May 2018
First Nations Information Governance Centre (2007) OCAP: ownership, control, access and possession. National Aboriginal Health Organization, Ottawa
Gan Z-R, Israel M (2019) Transnational policy migration, interdisciplinary policy transfer and decolonization: tracing the patterns of research ethics regulation in Taiwan. Dev World Bioeth. https://doi.org/10.1111/dewb.12224
Gillon R (1994) Medical ethics: four principles plus attention to scope. BMJ Br Med J 309(6948):184–188
Guerriero ICZ, Bosi MLM (2015) Research ethics in the dynamic of scientific field: challenges in the building of guidelines for social sciences and humanities. Cien Saude Colet 20(9):2615–2624
Guta A, Nixon SA, Wilson MG (2013) Resisting the seduction of 'ethics creep': using Foucault to surface complexity and contradiction in research ethics review. Soc Sci Med 98:301–310
Haggerty K (2004) Ethics creep: governing social science research in the name of ethics. Qual Sociol 27(4):391–414
Hedgecoe A (2008) Research ethics review and the sociological research relationship. Sociology 42(5):873–886
Hedgecoe A (2009) 'A Form of Practical Machinery': the origins of research ethics committees in the UK, 1967–1972. Med Hist 53:331–350
Hedgecoe A (2012) The problems of presumed isomorphism and the ethics review of social science: a response to Schrag. Res Ethics 8(2):79–86
Hudson M, Milne M, Reynolds P, Russell K, Smith B (2010) Te Ara Tika. Guidelines for Māori research ethics: a framework for researchers and ethics committee members. Final draft. Available at: http://www.hrc.govt.nz/sites/default/files/Te%20Ara%20Tika%20Guidelines%20for%20Maori%20Research%20Ethics.pdf. Accessed 23 Dec 2013
Hunter D (2018) Research ethics committees: what are they good for? In: Iphofen R, Tolich M (eds) The SAGE handbook of qualitative research ethics. Sage, London, pp 289–300
Indian Council of Medical Research (2017) National ethical guidelines for biomedical and health research involving human participants. ICMR, New Delhi. Available at: http://icmr.nic.in/sites/default/files/guidelines/ICMR_Ethical_Guidelines_2017.pdf. Accessed 22 May 2019
Israel M (2015) Research ethics and integrity for social scientists: beyond regulatory compliance. Sage, London
Israel M (2019) Ethical imperialism? Exporting research ethics to the global south. In: Iphofen R, Tolich M (eds) The SAGE handbook of qualitative research ethics. Sage, London, pp 89–102
Israel M, Fozdar F (2019) The ethics of the study of social problems. In: Marvasti A, Treviño J (eds) Researching social problems. Routledge, New York, pp 188–204
Israel M, Allen G, Thomson C (2016) Australian research ethics governance: plotting the demise of the adversarial culture. In: van den Hoonaard W, Hamilton A (eds) The ethics rupture: exploring alternatives to formal research-ethics review. University of Toronto Press, Toronto, pp 285–316
Jennings S (2012) Response to Schrag: what are ethics committees for anyway? A defence of social science research ethics review. Res Ethics 8(2):87–96
National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research (NCPHSBBR) (1979) Belmont report: ethical principles and guidelines for the protection of human subjects of research. Report, Department of Health, Education and Welfare, Office of the Secretary, Protection of Human Subjects, Michigan. Available at: https://www.hhs.gov/ohrp/regulations-and-policy/belmont-report/read-the-belmont-report/index.html. Accessed 13 May 2018
National Health and Medical Research Council (2018) Ethical conduct in research with Aboriginal and Torres Strait Islander Peoples and communities: guidelines for researchers and stakeholders. NHMRC, Canberra. Available at: https://nhmrc.gov.au/about-us/publications/ethical-conduct-research-aboriginal-and-torres-strait-islander-peoples-and-communities. Accessed 5 Nov 2018
Office for Human Research Protections (2019) International compilation of human research standards. Office for Human Research Protections, Washington, DC. https://www.hhs.gov/ohrp/international/compilation-human-research-standards/index.html. Accessed 22 May 2019
Pappworth MH (1967) Human guinea pigs: experimentation on man. Routledge, London
Philippine Health Research Ethics Board (2017) National ethical guidelines for health and health related research. Department of Science and Technology – Philippine Council for Health Research and Development, Manila. http://www.ethics.healthresearch.ph/index.php/phocadownloads/category/4-neg. Accessed 7 Nov 2018
Presidential Commission for the Study of Bioethical Issues (2011) 'Ethically Impossible': STD research in Guatemala from 1946 to 1948. PCSBI, Washington, DC. Available at: https://law.stanford.edu/wp-content/uploads/2011/09/EthicallyImpossible_PCSBI_110913.pdf. Accessed 13 May 2018
Qatar Supreme Council of Health (2009) Policies, regulations and guidelines for research involving human subjects. Available at: http://www.sch.gov.qa/sch/UserFiles/File/Research%20Department/PoliciesandRegulations.pdf. Accessed 23 Dec 2013
Schrag ZM (2010) Ethical imperialism: institutional review boards and the social sciences, 1965–2009. Johns Hopkins University Press, Baltimore
Smith LT (2012) Decolonising methodologies: research and indigenous peoples, 2nd edn. Zed, London
Social Science Task Force (1990) Principles for the conduct of Arctic research. National Science Foundation. Available at: https://www.nsf.gov/geo/opp/arctic/conduct.jsp#implementation. Accessed 4 Jul 2018
South African San Institute (2017) San code of ethics. http://trust-project.eu/wp-content/uploads/2017/03/San-Code-of-RESEARCH-Ethics-Booklet-final.pdf. Accessed 13 May 2018
Stark L (2012) Behind closed doors: IRBs and the making of ethical research. University of Chicago Press, Chicago
Tauri JM (2018) Research ethics, informed consent and the disempowerment of First Nation peoples. Res Ethics 14(3):1–14
Tri-Council (Canadian Institutes of Health Research, National Science and Engineering Research Council of Canada, Social Sciences and Humanities Research Council of Canada) (2010) Tri-council policy statement: ethical conduct for research involving humans. Public Works and Government Services, Ottawa. Available at: http://www.pre.ethics.gc.ca/pdf/eng/tcps2/TCPS_2_FINAL_Web.pdf. Accessed 23 Dec 2013
TRUST Project (2018) Global code of conduct for research in resource-poor settings. Available at: http://www.globalcodeofconduct.org/wp-content/uploads/2018/05/Global-Code-of-Conduct-Brochure.pdf. Accessed 5 Nov 2018
UNESCO (2005) Universal declaration on bioethics and human rights. UNESCO, Paris. Available at: http://unesdoc.unesco.org/images/0014/001461/146180E.pdf. Accessed 6 Nov 2018
World Medical Association (WMA) (1964) Declaration of Helsinki. Adopted by the 18th WMA General Assembly, Helsinki, Finland
5  Research Ethics Codes and Guidelines

Margit Sutrop, Mari-Liisa Parder, and Marten Juurik
Contents

Introduction ..... 68
History ..... 68
What Functions Do Codes of Ethics Fulfill? ..... 69
What Are the Reasons for Creating Codes of Ethics for Scientists? ..... 70
History of Research Ethics and Codes for Scientists ..... 71
Categorizations of Codes of Ethics ..... 74
The Concepts of Research Integrity and Research Ethics ..... 75
Are So Many Codes of Ethics Necessary? ..... 77
Universal Codes, Values, and Norms in Research ..... 77
What Is Harmonization Good For? ..... 79
Quantity Does Not Matter, Quality Does ..... 80
How Can One Make Codes of Ethics Work Better? ..... 80
Enhancing the Function of Codes of Ethics ..... 80
References ..... 85
Abstract

Although the origin of professional codes of ethics can be traced back to ancient Greece, their peak came in the late twentieth century, with more than 70% of codes of ethics created after 1990. Today professional ethical standards are formulated as codes of ethics, sets of principles or guidelines, declarations, conventions, charters, or laws, and they differ in scope, form, and content. As there is no consensus on what is meant by "research ethics" and "research integrity," both concepts are clarified here.

M. Sutrop (*)
Department of Philosophy, University of Tartu, Tartu, Estonia
e-mail: [email protected]

M.-L. Parder · M. Juurik
Centre for Ethics, University of Tartu, Tartu, Estonia
e-mail: [email protected]; [email protected]

© Springer Nature Switzerland AG 2020
R. Iphofen (ed.), Handbook of Research Ethics and Scientific Integrity, https://doi.org/10.1007/978-3-030-16759-2_2
Codes of ethics for scientists are often written in reaction to misconduct cases. However, the sudden boom in codes of ethics is also related to growing pressures upon scientists and the conflicting duties they face. The two proposed solutions to the vast number of codes and guidelines – creating a few universal general codes for research, or harmonizing existing documents – are both problematic: a universal code sacrifices content in order to gain international acceptance, and differences in values will continue to pose ethical dilemmas and conflict. The main obstacles to making codes of ethics work better, and possible solutions, are highlighted. It is argued that the process of drafting codes of ethics should be inclusive. To engage people, real-life cases should be discussed in order to clarify implicit values. Implementation requires skills of moral discussion and the substantiation of positions. Through codes of ethics, a shared understanding of values should be sought within professions. Declared and actual values should cohere in both the leadership of the organization and its organizational culture.

Keywords
Codes of conduct · Codes of ethics · Research ethics · Research integrity · Ethics guidelines
Introduction

History

The beginning of professional codes of ethics can be traced back to ancient Greece, where in the fifth century B.C. Hippocrates of the island of Kos asked everyone practicing medical science to bind themselves by an oath. The famous Hippocratic Oath requires all new physicians to swear by a number of healing gods to uphold professional ethical standards. Although the oath is still held sacred by doctors today, it has been criticized, and its content has been modified to correspond to the realities of today's medical world, contemporary science, and society (Orr et al. 1997).

The word "code" stems from the Latin "codex," which can mean tree trunk or book. Originally, a codex was a book made of wooden tablets covered with wax (Evers 2003). Until relatively recently, "code" referred to a collection of laws or regulations. In modern usage "code" still refers to the written form – moral standards, rules, guidelines, or principles that are systematically ordered and presented as a written text. Today professional ethical standards are no longer formulated as oaths but as codes of ethics, sets of principles or guidelines, declarations, conventions, charters, or laws. Ethics codes can also appear in different forms: as a code of ethics, a code of conduct, or a code of practice. In this chapter, we will use the general term "code of ethics."

In this chapter, we will first give an overview of the reasons why different professions have developed their codes of ethics and whether these are thought to be useful. After that, we will discuss the main drivers behind the process of
creating codes of ethics for scientists. As the history of codes of ethics for scientists is closely related to the history of research ethics, we will also give an overview of the historical discussion of developing ethical standards for good research. We will distinguish between the history of "research integrity" and "research ethics," which has led to the development of two types of codes for scientists. We will also ask whether there are too many codes and whether harmonization of codes of ethics is desirable and achievable. The chapter ends with suggestions on how to make codes of ethics work better.
What Functions Do Codes of Ethics Fulfill?

Codes of ethics are agreements among the representatives of an occupation concerning ethical standards: an obligation to adhere to certain standards, principles, and rules that guide the profession, which also makes these explicit to outsiders. Michael Davis (1991) sees the emergence of codes of ethics as related to the professionalization of an occupation. In his view, the purpose of a code is to protect each representative of a given occupation from certain pressures (e.g., from temptations arising from their own interests). The code gives a relatively secure guarantee that if one representative of an occupation decides to restrain his/her own interests and follow common interests, others cannot profit from his/her ethical conduct but must likewise limit their own interests. Although codes of ethics are also used for sanctioning members' conduct, they have no force of law. A professional code of ethics is rather an explicit expression of the will of the members of the occupation, or a "social agreement," which declares that the principles, rules, and instructions expressed in the code are binding for the representatives of that occupation (Pritchard 2006, p. 87).

There is thus a variety of aims that codes of ethics fulfill (Carlson et al. 2004; Davis 1991; Komić et al. 2015; Painter-Morland 2010; Pritchard 2006; Starr 1983; Unger 1991). First, a code of ethics expresses a professional community's shared understanding of its obligation to adhere to certain ethical standards and principles. Second, it gives members a "pretext" for opposing potential pressure from others to act unethically, e.g., in situations where ethical conduct is difficult and there is a temptation to throw ethics overboard for the sake of profit. Third, it helps to create an environment where ethical conduct is the norm. Fourth, it functions as an instruction or reminder about how to act in certain situations.
Fifth, writing and supplementing a code of ethics gives the representatives of the occupation a good opportunity to practice ethical reflection. Sixth, the code of ethics can function as an educational tool that can be used at seminars, workshops, courses, or professional meetings. Finally, the code of ethics signals to the public that the occupation is seriously dedicated to responsible activity.

There are, however, also critical voices who doubt the usefulness of codes. Some critics argue that codes tend to be instrumental in their aim. Dobson suggests that they can be "legal window dressings designed purely to minimize litigation exposure" (2005, p. 59), since they make individuals aware of litigation risks but do
not affect motivations. A second group of criticisms concerns the implementation of codes, pointing out that codes do not work because people do not adhere to them or are insufficiently informed about them (Dienhart 1995; Morin 2005; Schwitzgebel 2009). A third group of criticisms concerns the effects of codes of ethics, noting that codes do not guarantee ethical leadership (Schaubroeck et al. 2012; Hassan et al. 2014), that codes have few positive impacts (Brief et al. 1996), and that a review of codes shows mixed results (Kaptein and Schwartz 2008). A fourth group of criticisms focuses on moral reasoning, pointing out that professional codes oversimplify moral thinking (Beauchamp and Childress 2009) and that it is problematic to draw boundaries between professional and personal behavior. This in turn raises the question of why codes do not regulate personal behavior (Pipes et al. 2005). Codes have also been said to inhibit moral reasoning (Ladd 1991) and have even been related to moral violence (Painter-Morland 2010). Kjonstad and Willmott (1995) distinguish between two kinds of ethics: restrictive and empowering. The former defines what must be done, while the latter supports moral development and learning; they argue that codes of ethics prescribe instructions for conduct but do not help much with what they describe as the "practical understanding of the normative organisation of human interaction" (Kjonstad and Willmott 1995, p. 448). Codes may also create more problems and gray areas than they help to resolve; moreover, people do not seek help from codes, because codes tend to be controversial and thus do not provide guidelines for concrete situations (Luegenbiehl 1991). Professionals who are members of several associations might also be expected to follow multiple, perhaps even contradictory, codes (e.g., Barker 1988). In spite of all this criticism, more and more codes are being written and adopted.
The wider use and development of codes of ethics began in the late twentieth century in the United States, with more than 70% of codes of ethics created after 1990 (Guillén et al. 2002). The Illinois Institute of Technology's Codes of Ethics Collection encompasses 2500 individual codes from around 1500 different organizations (see: http://ethics.iit.edu/ecodes/about). The majority of research ethics codes have likewise been written in the last 30 years, with a growing number now bearing the title "code of conduct for research integrity" and describing standards of good research practice.
What Are the Reasons for Creating Codes of Ethics for Scientists?

The sudden boom in research ethics codes raises the question of whether people have become more unethical. Although it is true that codes of ethics are very often written in reaction to major misconduct scandals, it may be that, because of the Internet and other media developments, it is simply easier to detect misconduct and to spread information about it. On the other hand, because of globalization, different cultures increasingly come into contact, and cultural and societal differences may prompt people to ask what counts as right and wrong behavior and whether values and norms are understood in the same way.
The increased interest in writing codes of ethics for scientists may also be related to various growing pressures upon scientists: the pressure to publish ("publish or perish"), tight competition for research funds, anxiety about short-term contracts, and so on. Many scientists also have to fill several roles (researcher, supervisor, teacher, research manager, journal editor, or reviewer), and the pressure of coping with conflicting duties makes one ask how to divide one's time, attention, and responsibility so that no commitment is neglected. A classic example in academia is finding time for both teaching and individual research. There may also be conflicting duties within the same role. For example, a researcher may face a conflict between the duty to share data and the duty to protect confidentiality (Shamoo and Resnik 2015). As scientists are now encouraged to take on roles outside the profession of science in order to transfer their knowledge to society and to commercialize their products, potential conflicts of interest may arise, for example, between professional, personal, or financial interests (Werhane and Doering 2007). Such conflicts pose an external threat to research integrity and therefore call for reflection. Braxton (2010) also highlights that codes are needed for overcoming both role ambiguity and role autonomy; they provide guidelines for academic roles, including those of the academic president, dean, administrator, and faculty member. One must also take into account that universities have changed a great deal during the last 20 years. Today research-oriented universities are supposed to combine high-level research, teaching, entrepreneurship, and knowledge transfer to industry and society. Scientists therefore have to rethink their professional duties and find a good balance among them.
This may not be an easy task: scientists continuously face the need to prioritize and sometimes find themselves in difficult situations, confronted with unresolvable moral dilemmas. This calls for the capacity to balance competing commitments and values, which many theorists hold to be an important part of integrity (e.g., Cox et al. 2003; Banks 2015). As more and more researchers are involved in interdisciplinary projects and international research teams and publish in international journals, differences in standards (among disciplines, countries, and institutions) become evident, creating the need to negotiate and develop a common understanding of the ethical standards of good research, covering the planning, design, and conduct of research as well as the dissemination of results. David Resnik (2009, p. 221) has articulated four arguments for why international standards on research integrity are needed: (1) since science is international, standards crossing national boundaries are needed for the disagreements that may arise when scientists come from different cultures; (2) where there are no local standards, scientists can rely on international ones; (3) good international standards help to encourage the development of local standards; and (4) international standards help to foster trust between scientists working in different countries.
History of Research Ethics and Codes for Scientists

The history of research ethics codes can be divided into two separate topics: first, the history of codes of ethics as formal documents, and second, the history of research
ethics, which includes the historical development of the underlying principles and ideas. The principles and ideas included in a code may be much older than the code itself. For instance, the history of research ethics codes is often exemplified by certain historic landmarks following the Second World War, whereas some of its principles can be dated back to the nineteenth century. The Nuremberg Code of 1947 was part of the verdict of the Nuremberg Military Tribunal, which considered the case of 23 doctors who conducted medical research in the Nazi concentration camps. The Code lists 10 basic principles for medical research, the first of which is the need for voluntary consent. While the Nuremberg Code is often credited as the source of this principle, Ruyter (2003) argues that it was already a central issue in the 1880 court case against the Norwegian doctor and researcher Gerhard Armauer Hansen. The clinical experiments of Albert L. Neisser, the results of which he published in 1898, were publicly criticized for not soliciting the consent of the patients (Benedek 2014). In 1900 the Royal Prussian Disciplinary Court imposed a fine on Neisser's assistants for not obtaining informed consent prior to the experiments. Neisser's case was both preceded and followed by academic debate concerning the responsibility of physicians and researchers. The case was also followed by a decree of the Prussian Minister of Culture in 1900, which forbade all medical interventions other than diagnosis, therapy, and immunization if the person had not been previously informed of the possible risks and had not given unequivocal consent (Benedek 2014, p. 261). The decree was an administrative directive, not a law, but it appears to be the first governmental statement on informed consent in research (Ruyter 2003; Benedek 2014).
The fact that many of the basic principles embedded in codes are older than the codes themselves does not diminish the historical importance of the codes as the first international-scale professional agreements on these principles. The Declaration of Helsinki, proposing basic principles for biomedical research, was adopted in 1964 by the World Medical Association. The declaration was first revised in 1975, one of the added principles being the requirement for independent ethics review committees, and it has gone through several revisions since (World Medical Association 2013). The Belmont Report, which set forth the basic principles for research using human subjects, was published in 1979. It was issued by the United States' National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, which first met in 1976 and held monthly deliberations until the report was issued. Its three basic principles were (1) respect for persons, which requires voluntary informed consent to be obtained; (2) beneficence, which requires assessment of risks and benefits; and (3) justice, which requires fair selection of research subjects (Office of the Secretary 1979).

The emergence of codes of ethics for research is often seen as a reaction to public scandals (Emanuel et al. 2008; Israel and Hay 2006; Kitchener and Kitchener 2009; Levine and Skedsvold 2008). Thus the Nuremberg Code was a direct reaction to the experiments of physicians in the concentration camps of Nazi Germany (Emanuel et al. 2008, p. 123; Ruyter 2003). The adoption of the Declaration of Helsinki, in turn, was influenced by the Nuremberg Code (Israel and Hay 2006, p. 30). The Belmont Report was preceded by controversial biomedical research cases such as the thalidomide drug tragedy in 1961,
the Jewish Chronic Disease Hospital cancer study in 1965, the Willowbrook hepatitis study in 1966, and the Tuskegee syphilis study in 1972 (Kitchener and Kitchener 2009, p. 7). The 1960s and 1970s also saw the emergence of ethical considerations in social and behavioral science research in the United States (Levine and Skedsvold 2008). Three studies were especially high-profile: Stanley Milgram's experiments on obedience to authority in the 1960s, Philip Zimbardo's prison simulation study of 1971, and Laud Humphreys' studies of 1965–1968 (Levine and Skedsvold 2008). All of these studies raised questions about respect for, the autonomy of, and harm done to research participants. During the same period, several prominent professional associations in the social sciences established their own codes of ethics. The American Psychological Association (APA) published its first research-related code in 1966; the American Sociological Association (ASA) approved its code in 1969; the American Anthropological Association (AAA) issued a statement on ethical issues in 1967 and adopted an ethics code in 1971; and the American Political Science Association (APSA) issued a report on ethical issues in 1968 but did not adopt a code until 1989 (Levine and Skedsvold 2008, p. 338). During the same time, US federal policy began to cover behavioral and social science research in addition to medical research; the earliest inclusion of the social sciences in a policy requiring institutional review of research dates back to 1966 (Levine and Skedsvold 2008, p. 339).

The history of research ethics in the natural sciences has been somewhat different. Though researchers have repeatedly warned the public of possible dangers, this has not led to comparable regulation of natural science research. In 1955, the Russell-Einstein Manifesto was issued as a warning against nuclear weapons. In 1957, the Pugwash movement was founded with the aim of eliminating all weapons of mass destruction.
In 1997, the physicist Joseph Rotblat – a researcher on the Manhattan Project during the Second World War, a signatory of the Russell-Einstein Manifesto, and one of the founders of the Pugwash movement – called for an international ethics committee for natural science research (Ruyter 2003). However, no such committee has yet been established.

As can be seen, most early ethical concerns were raised in relation to the treatment of human subjects, but there have also been other areas where standards for the ethical aspects of research have been discussed and agreed on. For instance, guidelines for the use of animals in research have developed out of animal ethics and animal welfare, resulting in the three R's principles – replacement, reduction, and refinement – first published by William M. S. Russell and Rex L. Burch in 1959. In the field of publication ethics, the International Committee of Medical Journal Editors first proposed criteria for authorship in 1988, in the third edition of the "Uniform requirements for manuscripts submitted to biomedical journals." The development of the different subfields of research ethics and their adoption into formal codes, guidelines, and other documents is a lengthy and complex topic that deserves attention but cannot be fully covered here.

Following the emergence of ethical concerns over research in the 1960s, there have been numerous initiatives trying to agree on proper research standards at institutional, national, or international levels. This has contributed to the growing number of codes and guidelines, for instance, the publication of On Being a Scientist
(first edition 1989, second edition 1995) (American National Academy of Science 2009), followed by codes and guidelines from the European Science Foundation (ESF); All European Academies (ALLEA); the International Council for Science (ICSU); the United Nations Educational, Scientific and Cultural Organization (UNESCO); the Global Science Forum of the Organisation for Economic Co-operation and Development (OECD); the Committee on Publication Ethics (COPE); and the European Commission (Drenth 2010).
Categorizations of Codes of Ethics

Codes of ethics for science can be categorized in various ways. One possibility is to look at scope – whether a code is meant for a single discipline, a country, or all research. Codes of ethics can thus be categorized as international and general (e.g., Singapore Statement on Research Integrity 2010; All European Academies (ALLEA) 2011; UNESCO 2017); national and general (e.g., Danish Ministry of Higher Education and Science 2014); national and discipline-based (e.g., APA 2017; British Educational Research Association (BERA) 2018); or institutional, e.g., adopted by a single university.

Another possibility is to categorize codes according to form. Codes of ethics can appear under different names: code of ethics, e.g., the IVSA Code of Research Ethics and Guidelines (Papademas 2009); code of conduct, e.g., The European Code of Conduct for Research Integrity (ALLEA 2011); good practice guidelines, e.g., the OeAWI Guidelines for Good Scientific Practice (OeAWI 2016); and statements, e.g., the Singapore Statement on Research Integrity (2010), or declarations, e.g., the Brussels Declaration (American Council on Science and Health 2017). Form obviously depends on the aim of the document and its main target. If the aim of the code is to create trust toward science and to let wider society know what kinds of values are shared by all researchers, the code may be a general value declaration with a description of the related commitments (UNESCO Declaration on Bioethics and Human Rights 2015). In order to ensure that all scientists know and follow the ethical standards of good research, one needs to describe in detail the kind of action expected from all researchers, and also from research organizations.
Besides describing researchers' commitments, some codes set out both the individual and the institutional duties for securing research integrity (All European Academies (ALLEA) 2011; Danish Ministry of Higher Education and Science 2014; Centre for Ethics, University of Tartu 2017). And if the code is meant to guide novices in the profession or to distinguish between good and bad behavior, it needs to provide detailed guidance on how scientists should act in specific situations. More specifically, an important aspect of implementing codes of ethics is deciding who the main target group should be – students as research trainees or senior scientists. The current literature shows an emphasis on research trainees; for example, Sarauw et al. (2019) state that PhD training has become a key arena in which research integrity as a field emerges. Similarly, Mahmud and Bretag (2014)
analyzed the consistency of the integrity policies of the Australian Code for the Responsible Conduct of Research as aimed at students as research trainees, drawing on five key elements of academic integrity (access, approach, responsibility, detail, and support) (Bretag et al. 2011). They found that in activities aimed at young researchers there are problems with consistency in defining research misconduct, and that not enough support is offered to students (Mahmud and Bretag 2014).

One can also distinguish codes according to whether their directives are negative or positive. Some codes mainly use the form of negation – "one should not," "it is not allowed," etc. – whereas others describe in a positive way how a representative of the profession should behave. More aspirational documents describe an ideal: the virtues of the professional, or the values that will be upheld and acted upon by all researchers. There may also be documents that inform outsiders of the social responsibility the representatives of a profession agree to take on and of their commitment to fulfilling their role in the best way possible.

A further alternative is to categorize by content. Ethical standards for research can be formulated as codes of ethics for research integrity or for research ethics. It is, however, difficult to draw a strict line between these two types of codes. Parry (2017) notes that in practice two institutions may have codes that fulfil the same purpose, but one calls its document a code of research integrity and the other a code of research ethics.
The Concepts of Research Integrity and Research Ethics

There is no universal consensus on the meaning of these two terms: some authors use "research integrity" as the overarching term, and others do the reverse. "Research ethics" tends to be used as the comprehensive term when one means research ethics as a field of study, i.e., as a subfield of practical ethics, similar to bioethics or environmental ethics. When "research integrity" is used as the overarching term, one is usually referring to the ethos or ideal that researchers as professionals should serve. Part of this ethos is to be a certain kind of person, one who has "integrity."

Currently there is no common definition of "integrity" shared by all writers. The word "integrity" is derived from the Latin "integritās," meaning totality, completeness, wholeness, and entirety. Integrity refers to a "defensive virtue," as Maria do Céu Patrão Neves describes it; research integrity is "not oriented towards the development of a precise concrete action but instead towards defense or protection against potential external damaging acts" (Neves 2018, p. 182). In this sense, research integrity embodies everything that is under attack when the above-mentioned pressures on scientists grow. However, integrity should be defined more specifically and precisely, including what exactly integrity is a property of. An analysis of the scholarly literature on scientific integrity (Meriste et al. 2016) shows that integrity can be attributed to four things: (1) research findings, (2) individual researchers, (3) research institutions, and (4) science as a social system. In relation to research findings, integrity refers to correct and reliable research results, which are not corrupted by fabrication, falsification, or other similar forms of misconduct (e.g., Singapore Statement on Research Integrity 2010; Anderson et al.
2013). Integrity related to the individual researcher can also be called researcher integrity, referring to the individual's commitment to a certain set of values and norms (Meriste et al. 2016). Integrity related to a research institution is considered "a matter of creating an environment that promotes responsible conduct" (National Academy of Sciences 2002, p. 34; see further Jordan 2013, p. 248). In terms of science as a social system, research integrity includes all of the previously mentioned aspects. Matthias Kaiser attributes integrity to science as "a social system, which displays soundness in its functions, and much like an individual is judged in relation to its ethics, i.e. its practitioners behave in accordance with the accepted rules of good conduct within that system" (Kaiser 2014, p. 341). When one speaks about research integrity as a property of a scientist, one can mean either the coherence of the researcher's set of values or the coherence between the researcher's values and actions (McFall 1987; Fjellstrom 2005; Meriste et al. 2016). On this analysis, there can also be two sorts of breaches of integrity: first, a failure to adopt a particular set of professional values and, second, a failure to conduct oneself in accordance with those values. Thus integrity is a virtue term focusing on a person's character and referring to the motivations guiding a person's actions, while misconduct refers to actions and is thus to be contrasted with good conduct or good research practice (Meriste et al. 2016).

Davies (2018) gives an overview of how research integrity rose to prominence. It became a public and policy issue (Marres 2007) following media coverage of high-profile frauds (Franzen et al. 2007) and policy debates on how to ensure research integrity (in the United States, the activities of the Office of Research Integrity (ORI) and, in Europe, those of the European Federation of Academies of Sciences and Humanities (ALLEA)).
Davies (2018) maintains that the problem of research integrity should also be solved through funding patterns, such as the European Commission's program Horizon 2020 calling for projects to investigate and support research integrity.

A quick look at codes of ethics shows that whereas historically the first normative documents of research ethics (oaths, codes, guidelines, declarations, laws) focused on the actions a good scientist should perform, today's codes of ethics for research integrity speak to both virtues and actions. This tendency to ignore the difference between virtues and actions leads to the conflation of research integrity and research ethics. As Neves points out, originally moral integrity referred to a virtue – a character trait or a predisposition to act in a certain way considered necessary for the excellence of the profession; today, however, the virtues are converted into duties – obligations associated with the performance of specific tasks (Neves 2018, p. 183). As a result, in relation to "researcher integrity," one no longer speaks about the researcher's virtues, whose development can be stimulated, but about duties which all researchers must obey. Neves also points out that during the last two decades, "scientific integrity" has been defined, first, via negativa (by negation), "through the denunciation of what it is thought to be infringement of the duty of integrity," and, second, by its incidence essentially upon procedure, "on the telos or finality that originates and justifies the profession" (Neves 2018, p. 185). Thus, instead of trying
to elaborate on the question of what it means to be a good scientist and what kind of dispositions this presumes, various organizations attempt to formulate scientists' duties. Surprisingly, there does not seem to be much agreement on what these duties are. Moreover, research ethics codes reflect challenges concerning both values and issues emerging from the local context. An analysis of 69 research ethics codes for the EU project PRO-RES (see: http://prores-project.eu/) showed that either different concepts are used to describe similar ideas (e.g., intellectual freedom, freedom of inquiry, and academic freedom all address some aspect of the unrestricted pursuit of truth, and the concepts of justice, equality, and equity address some aspect of treating others fairly) or similar concepts are used to describe different ideas (e.g., the variety of meanings behind the concepts of responsibility, respect, and justice). With respect to issues, there are substantial differences among documents. The issues include sharing responsibility for creating a supportive research environment (should the emphasis be on the role of the institution or on the role of the researcher?) and applying the principle of confidentiality (first, confidentiality of what – identity, information, data, or findings; second, can the identity of grantors be kept confidential, or should all grantors be made public?) (Parder and Juurik 2019).
Are So Many Codes of Ethics Necessary?

The vast number of codes is problematic in various ways. First, no one has the time to read all the codes which apply to science. Second, the researcher needs to know how to distinguish between all the seemingly relevant codes of ethics, choosing the right one for the right occasion. Third, the differences and conflicts among the numerous codes may create additional confusion and frustration. Fourth, it becomes very difficult to keep up with everything going on in the field of research ethics, as different documents are updated at different intervals and new documents seem to be adopted every year. There are two suggestions for solving these issues: first, the creation of a few universal, general codes for all of research, and second, the harmonization of existing codes.
Universal Codes, Values, and Norms in Research

In 2007, the World Conference on Research Integrity articulated the need to "clarify, harmonize, and publicize standards for best practice and procedures for reporting improper conduct in research" (Mayer and Steneck 2007, p. 1). The final report of the conference suggests that a general international code of conduct would be a possible way to achieve these aims (Mayer and Steneck 2007, p. 28). Resnik (2009) has also argued for international standards for research integrity. If such a document had legitimacy and authority, it could easily become the single reference document for all research at the international,
national, or local levels. It is easy to understand the appeal of such an idea: all the complexities of ethical research could be reduced to a single set of rules, and once there was an ideal set of international universal standards, there would be no further need for national or institutional standards, which would merely duplicate or reword the international ones. This approach leans toward universal values and principles which could be adopted for all of science in one single document. However, as previously argued, research ethics codes have two different foundations: the epistemic aims of research and the moral norms of society. Even if the epistemic values are common to all researchers, the social norms entailed in various cultural contexts may still differ. Resnik (2009) also suggests that there may be some topics where international agreement is difficult to achieve, namely, the social responsibility of scientists, conflicts of interest, and definitions of misconduct. The debate concerning the plurality and universality of values is not specific to science. The outlook that there are numerous important values, all of which are interpreted differently in different social contexts, makes it unlikely that there will ever be one universal code of ethics for science. However, even if an international declaration of universal values were adopted, guidelines would still be needed for coping with social and cultural differences. Kathinka Evers (2003) observes a similar problem from the perspective of analyticity and logic, referring to a principle according to which the content of a norm needs to be balanced against its extension. If a norm is to have a greater extension, e.g., if it applies to all scientists in every field, it will have less substance from a logical point of view.
If a norm is to have more substance, for instance, if it offers a detailed description of how children should be involved in research, it will logically have a narrower extension in terms of the types of research activities to which it applies. A specific type of dilemma ensues which Evers (2003) calls "the trap of analyticity": it is easier to reach consensus on general norms with greater extension and less substance. Exceptions to general principles may lead to debates and conflicts within the profession, so it is appealing to formulate more general principles that gain acceptance easily. However, these general formulations, such as "a scientist should be honest" or "we respect everyone," may lack conviction and therefore seem fake or pretentious. Balancing content and extension is especially problematic in the case of universal codes of ethics. Any international and universal code aims to reach the maximum extension, that is, to cover all research in every country. The potential content for such a code is vast, since research ethics can be applied by different scientific disciplines in various cultural and social contexts, with numerous exceptions to any given principle. If codes of ethics were to address all this potential content, either one long and detailed guideline or numerous shorter ones would be required. All in all, if the aim were to reduce the number or length of codes, this could only take place at the expense of substance. Therefore, from the argument of analyticity that Evers proposes, it can be concluded that numerous codes of ethics cannot be avoided if they are to be helpful and relevant to researchers.
5
Research Ethics Codes and Guidelines
79
What Is Harmonization Good For?

Harmonization refers to the desire or need to make different regulations compatible with each other. In this context, regulation is understood as a collection of codified legal, ethical, and social norms, including laws, standards and technical requirements, procedures, methods, and guidelines, which prescribe how the regulated activities should be conducted. The main aim of harmonization in the research context has been to do away with differences among institutional, national, and regional policies and to make regulative systems more similar and compatible. There have been calls for harmonization in pharmaceutical research (Kidd 1996; Lee 2005; Molzon et al. 2011), genomic research (Knoppers 2014; McGonigle and Shomron 2016; Townend 2018), observational medical research (Urushihara et al. 2017; de Lange et al. 2019), nanomedicine (Marchant et al. 2009), and various other medical research fields. In addition, there have been discussions about harmonization in relation to the processing of personal data (Hakulinen et al. 2011), research ethics committees (Doppelfeld 2007), and institutional guidelines (Bonn et al. 2017). Several comparisons have been made among regulative systems and policy documents to identify their main differences. For instance, Blake et al. (2011) found several substantial philosophical differences when comparing international, US, and EU regulations concerning pediatric research. Resnik et al. (2015) compared national policies and found "considerable variation in the definition of research misconduct." A study by Urushihara et al. (2017) compared regulative differences among the United States, the European Union, and Japan concerning post-marketing observational studies of drugs and found that different regional definitions and procedures apply to similar kinds of studies. Fieldsend (2011) concluded that there is no consensus among European states on bioethical issues.
He added that “The contradictory position of national legislation on key bioethical controversies; and the failure of many, especially large, Member States to ratify the Oviedo Convention are amongst the symptoms of a distinct lack of current EU consensus on bioethics” (Fieldsend 2011, p. 231). The analysis within the PRINTEGER project (Fuster and Gutwirth 2016) found that documents and legislations within the European Union have different definitions, scope, and levels of specificity regarding the concepts of research integrity and misconduct. Even if some differences may be identified, does this mean that differences are bad? The presumption here is that disharmony has a negative impact on research. According to Urushihara et al. (2017, p. 5), regulatory disharmony increases the costs of data assembly in global cooperative research. According to Resnik et al., “Lack of agreement on the definition of misconduct can lead to problems for promoting integrity in international research, since a type of behavior may be categorized as misconduct by one country but not by another” (2015, p. 253). Li et al. (2015) find that the sheer number of different rules and procedures is one of the reasons why there is a lack of consideration of ethical issues in the design of global clinical trials. Regulative differences may also have a negative impact on the procedural fairness of investigations into allegations of misconduct (Fuster and Gutwirth 2016).
80
M. Sutrop et al.
However, not everyone sees differences as something to be avoided. For instance, Fieldsend (2011, p. 232) argues that “there will continue to be a tension between those who see difference as an inconvenience at best, a threat at worst and those who see the ability to live harmoniously together despite our deepest differences as a crowning achievement.” Thus, harmonization could be seen as reconciling differences rather than eliminating them.
Quantity Does Not Matter, Quality Does

Three general conclusions can be drawn from the topics of harmonization and universality. First, the total number of codes says nothing about their use. The existence of hundreds of other codes by no means prevents researchers from following and using their own national or institutional code. On the one hand, the growing number of codes could indicate that the importance of research ethics has been widely recognized. On the other hand, it could indicate that too much emphasis is being placed on formalities, the administration of research, and compliance with rules, whereas research ethics is also about deliberation, reflection, and moral character. To truly assess the usefulness of codes, one needs to look at how the codes were adopted and how they are being implemented. Even then, not much can be known about what researchers truly believe and how they behave. Second, following the argument of analyticity (Evers 2003), one should not aim at adopting a universal code of ethics for all researchers, as a code with such broad extension would have to sacrifice much of its content in order to gain acceptance from the whole international scientific community. International declarations have their own value, but they can never cover all topics in a detailed way, and so they leave plenty of room for discipline-specific or institutional codes. Third, harmonization cannot remove all differences. Many ethical dilemmas and conflicts arise from differences of values, whether within academia or society at large. While harmonization can achieve similarity in the conduct of research across different disciplines and cultures, it can never achieve a unity of values. As long as multiple and various values exist within societies, or are ranked differently, it is likely that there will continue to be differences, and possibly conflicts, between different codes and their underlying values and principles.
However, since written codes of ethics bring to the fore differences of opinion about which values and principles one should follow in concrete situations, they may promote discussion, which may in turn help bring the parties closer to agreement.
How Can One Make Codes of Ethics Work Better?

Enhancing the Function of Codes of Ethics

Although there is much serious criticism of codes of ethics that needs to be taken into account, there are also many arguments supporting their usefulness. Previous research has shown that, in order to ensure that the influence of a code of ethics is
not smaller than expected, six main obstacles should be overcome (Sutrop 2016). First, people often do not know or remember what is agreed upon in the code. Second, people might know what is in the code but lack the motivation to follow its values, principles, or rules. Third, people might not recognize that a clause of the code applies in a given situation. Fourth, the values expressed in the code may conflict, making it impossible to follow all of them. Fifth, the professional values expressed in the code may contradict the values of the person, the organization, or the society, making it less desirable or even impossible to follow the code. Finally, the values declared by the organization and the values applied in everyday work may differ, and leaders may have indifferent or arrogant attitudes toward the code of ethics, thus setting a bad example to their staff. To overcome the obstacle of people not knowing what is written in the code, the process of drafting a code should be as inclusive as possible, and enough time should be dedicated to discussing it with the members of the organization (Sutrop 2016). This means that as many people as possible should be recruited already in the drafting phase of the process and careful consideration given to their ideas and opinions. The worst outcome is a code written by one person or a small group of organization members and adopted without discussion; this all but guarantees that people take no interest in the code and form no relationship with it. This idea is also expressed in the works of Michael S. Pritchard, who has emphasized that "a good code is not a philosopher's invention. It is carried by a practitioner's moral agency and their own understanding of their speciality" (Pritchard 2006).
The best way to engage people is through the discussion of real-life cases, identifying "good" or "the best" conduct in given situations and the explicit or implicit values in various solutions. One possibility is to use misconduct cases that have already occurred in the field. Turning to the second obstacle, motivation can arise in two ways – either from an internal wish to follow the ethos of the profession or from fear of sanctions. Georg Spielthenner (2015) reports that fear is the most important reason for following a code of ethics, as people are afraid of losing their colleagues' trust and their own reputation. However, Spielthenner also acknowledges that people may follow internalist considerations if they consider the prescriptions of the code to be right in themselves (Spielthenner 2015, p. 200). Drenth (2010) similarly differentiates between a values-based approach (focusing on training, the creation of role models, and self-regulation) and a conformity-based approach (focusing on rules, orders, and sanctions). In a values-based approach, the emphasis is on noticing where values are expressed and understanding how they influence our everyday conduct. Conformity-based approaches center on rules understood identically by everyone and on the proportionality of sanctions; in relation to codes of ethics, their focus is on external sanctions (although internal sanctions, such as shame, are also possible). A great impact is achieved when people see that breaches are taken seriously and agreed-upon sanctions are enforced. Although imposing sanctions may sometimes be inevitable, it is essential that people are internally motivated to adhere to the values agreed upon by representatives of their occupation. It should be clear to everyone why these particular values have been selected for a code and how following them helps to achieve the aims of the
occupation or the organization. Professions establish their ethical standards depending on their aims, i.e., depending on which benefits or services their representatives must provide to people. These standards function as quality standards and guarantee society's trust in the profession (Bayles 1988). The ethical standards for research are based on the aims of research, which David Resnik (1998) divides into epistemic and practical. Epistemic aims are the pursuit of knowledge and the eradication of ignorance. They are expressed through the creation of explanatory theories and the posing of hypotheses, the making of precise predictions, and the teaching of the basics of research to the new generation. Practical aims, in turn, are finding and implementing solutions to problems related to health, environment, security, resource and energy use, people's well-being, and social cohesion (Resnik 1998, pp. 34–35). In addition to those resulting from the aims of research, the ethical standards for research also have a moral foundation. For example, the forgery of data is forbidden because it is a form of dishonesty. On the one hand, forgery is condemned because general human morality considers honesty essential; on the other hand, research cannot fulfil its tasks if data are not reliable. Thus, the norms of conduct expressed in researchers' codes of ethics have two sources: morality and research practice. Ethical conduct in research means that one does not violate generally agreed moral norms and contributes to achieving the aims of research (Resnik 1998, p. 48). With respect to the third obstacle – that people do not recognize that in a given situation one or another clause of the code applies, or that even when they do know, they are unable to decide how to behave – the emphasis is on developing skills of moral discussion and justification.
Moral discussion needs practice; knowledge of different ethical theories and modes of justification also contributes to the quality of discussion. Thus, when applying codes of ethics, attention should be paid to educating people and organizing training courses on ethics. Two kinds of approaches can be differentiated in ethics – those based on principles and those based on virtues. Supporters of virtue ethics think that ethical conduct presumes more than avoiding unethical conduct or adhering to the rules and principles written in a code of ethics. As ethical principles can be contradictory, rules and regulations do not always work, and one can only rely on people's own ethical sense – their virtues or habitual inclinations of conduct. Thus the main question is not how to prevent unethical conduct but how to encourage people to act ethically. Representatives of the virtue ethics approach are often very critical of codes of ethics. For example, Bruce Macfarlane (2009) thinks that codes of ethics are of little use to researchers who seek answers to questions arising in their everyday practice: to whom does authorship belong; how widely should results be shared with others; how much should subjects be informed of the aims of the research; how far does the requirement for confidentiality go; and so on? In Macfarlane's opinion, narratives describing the real-life situations that researchers face are much more helpful. Discussion of such scenarios helps identify the virtues that should characterize a good researcher and the vices of which a good researcher should disapprove. Narratives help people reach an understanding of what kind of people they would like to be or which actions are good or
appropriate. Following Aristotle's understanding, a virtue is something intermediate between two vices: for example, courage lies between cowardice and rashness, and generosity between stinginess and wastefulness. In a virtue ethics approach, professional ethics does not mean following prescriptions but attempting to act in a morally sound way. The fourth obstacle concerns the possibility that values expressed in a code may conflict. Codes of ethics may contain a shared understanding of the values that the members of an occupation or organization consider right to follow, of what they should be like, and of the values to which they should adhere in their activities. However, it is important to keep in mind that ideal people do not exist and that the values recorded in a code cannot always be followed simultaneously. This means that in a concrete situation, the representative of an occupation needs to decide which value is more important in that particular context (Iphofen 2009, p. 7). A moral dilemma is a situation in which an agent S is morally obliged to do both A and B but cannot, either because doing B simply means not doing A or because some circumstance in the world makes it impossible to do A and B simultaneously (Gowans 1987). Dilemmas include situations where decision-making cannot be avoided, as not deciding is also an act with a moral meaning. The decision whether to become a whistleblower can present itself as such a dilemma: the person has to choose between honesty and loyalty. At the practical level, moral dilemmas demand a solution even when the available reasonable solutions seem to have been exhausted (Mason 1996). In such situations it is often very difficult to decide what to do, and even the opinions of rationally reasoning people can differ. Still, the obligation to make the best choice remains, and the burden of responsibility cannot be avoided.
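The structure of a moral dilemma described above can be summarized schematically. In the notation of standard deontic logic (a sketch supplied here for illustration; the operators O for "it is obligatory that" and the diamond for "it is possible that" are our gloss, not a formalism taken from Gowans), a dilemma for an agent obtains when:

```latex
O(A) \;\land\; O(B) \;\land\; \neg\,\Diamond\,(A \land B)
```

In the whistleblowing example, A might stand for the act that honesty requires (reporting the misconduct) and B for the act that loyalty requires (protecting one's colleagues); the third conjunct records that the circumstances rule out doing both.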
As suggested by Dale Beyerstein (1993), there are two possibilities for resolving moral dilemmas: finding additional morally relevant facts about the situation or refining our moral theories so that they yield a clear priority between the contested goods. Thus, if codes of ethics make moral dilemmas explicit (when there is a clash between two values, e.g., honesty and loyalty), reasoning about such cases requires paying attention to a wider moral theory. The fifth obstacle concerns conflicts between the values expressed in the code and the values of the person, the organization, or the society. In addition to an occupational code expressing professional values, all people have their own set of personal values, and the surrounding society supports a set of values of its own. It is highly likely that there are situations in which these sets of values contradict each other, and the person has to choose to which set of values she will remain true in the given situation. The reverse may also be true – the person wants to adhere to professional ethics, but the organizational culture makes following the code of ethics impossible. To overcome this obstacle, it is necessary that the leaders of the organization act as role models and adhere to the values and principles written in the code, taking care that an organizational culture is created that supports living according to the values agreed in the code. The final obstacle centers on the fact that the declared values of the organization and the actual values applied in everyday work may differ and that leaders may be indifferent or arrogant toward the code. Katharina Wulf (2011) points out that if organizations
want to reduce the threat of unethical and illegal conduct, they must concentrate on ethical organizational culture, define its main features, and practice constant monitoring and assessment of practices. Creation of ethical organizational culture begins with formulating the organization’s values and the expected conduct of its members. When the values and standards of conduct have been written down, it is essential to analyze whether people really act accordingly, whether or not they are guided by values in their everyday activities. Organizations should review their codes and think through whether these guide people to act ethically. Many researchers have shown that responsibility does not rest on individuals only, but it is also expressed at the organizational level (Forsberg et al. 2018). The organizational context may increase the probability that individuals or groups of individuals prefer misconduct to ethically correct conduct. Organizational culture can normatively support misconduct in three ways (Greve et al. 2010). First, organizations can support misconduct by valuing certain unethical activities or by requiring achievement of goals without paying attention to how they are reached. Second, organizations can cultivate techniques that diminish the feeling of guilt for harming others (e.g., the clients are stupid – therefore, deceiving them is justified). Third, an organizational culture can create conditions that encourage misconduct. For example, constant assessment and comparison of organization members among themselves can bring about a wish to get ahead of one’s colleagues by any means. How is organizational culture shaped? It is known that leaders play a central role in the formation of organizational culture, namely, that their conduct influences the individuals’ readiness for misconduct most (Schein 2010). 
Even if the leaders' own conduct is not itself corrupt, by ignoring manifestations of corruption or even by favoring them, leaders can contribute to the spread of misconduct. Drawing on Schein's treatment of the formation of organizational culture, Sims and Brinkmann (2003) identify five dimensions of leaders' conduct that can contribute to the spread of misconduct: the things to which leaders pay attention; how they handle crises; the ways of conduct that they model; the kinds of action they reward or sanction; and, finally, the kind of workers they employ. Still, there are also organization theories (e.g., the ecological approach to organizations) which hold that organizational cultures reflect their environment, no matter how dedicatedly the leaders of an organization try to lead them in some other direction (Greve et al. 2010). We agree that environmental influence should not be underestimated, but the environment has been shaped by people, so responsibility will always lie with people. To be sure that the words and deeds of the representatives of an occupation or of organization members are in harmony, it is necessary to analyze practices, to see to it that values are not merely declarations and – what is particularly essential – that declared and actual values do not contradict each other. Depending on the occupation or organization, this oversight can be performed by the occupation's representational organization, but the media can also function as a watchdog. If society expects ethical conduct from significant professions, organizations, and enterprises (e.g., constitutional institutions, banks, businesses oriented to customer service), says so, and reacts strongly to breaches, then their leaders become interested in
shaping their organizations accordingly and in requiring such conduct within them, as the organization's prestige and success depend on it.
References

All European Academies (ALLEA) (2011) The European code of conduct for research integrity. European Science Foundation, Strasbourg, pp 1–20. https://doi.org/10.1037/e648332011-002
American Council on Science and Health (2017) The Brussels declaration – ethics and principles for science and society. http://www.euroscientist.com/wpcontent/uploads/2017/02/BrusselsDeclaration.pdf
American Psychological Association (APA) (2017) Ethical principles of psychologists and code of conduct. American Psychological Association, Washington, DC
Anderson MS, Shaw MA, Steneck NH et al (2013) Research integrity and misconduct in the academic profession. In: Higher education: handbook of theory and research. Springer, Dordrecht, pp 217–261. https://doi.org/10.1007/978-94-007-5836-0_5
Banks SJ (2015) From research integrity to researcher integrity: issues of conduct, competence and commitment. Acad Soc Sci Br Sociol Assoc Event Virtue Ethics Pract Rev Soc Sci Res:1–12. https://doi.org/10.13140/RG.2.1.4497.9040
Barker RL (1988) Just whose code of ethics should the independent practitioner follow? J Indep Soc Work 2:1–5. https://doi.org/10.1300/J283v02n04_01
Bayles MD (1988) The professional-client relationship. In: Callahan JC (ed) Ethical issues in professional life. Oxford University Press, Oxford, pp 113–120
Beauchamp TL, Childress JF (2009) Principles of biomedical ethics. Oxford University Press, New York. https://doi.org/10.3138/physio.61.3.154
Benedek TG (2014) "Case Neisser": experimental design, the beginnings of immunology, and informed consent. Perspect Biol Med 57:249–267. https://doi.org/10.1353/pbm.2014.0018
Beyerstein D (1993) The functions and limitations of professional codes of ethics. In: Winkler ER, Coombs JR (eds) Applied ethics: a reader. Blackwell, Oxford, pp 416–421
Blake V, Joffe S, Kodish E (2011) Harmonization of ethics policies in pediatric research. J Law Med Ethics 39:70–78. https://doi.org/10.1111/j.1748-720X.2011.00551.x
Bonn NA, Godecharle S, Dierickx K (2017) European universities' guidance on research integrity and misconduct: accessibility, approaches, and content. Res Integr Res Misconduct 12:33–44. https://doi.org/10.1177/1556264616688980
Braxton JM (2010) Norms and the work of colleges and universities: introduction to the special issue – norms in academia. J High Educ 81:243–250
Bretag T, Mahmud S, Wallace M et al (2011) Core elements of exemplary academic integrity policy in Australian higher education. Int J Educ Integr 7:3–12. https://doi.org/10.21913/IJEI.v7i2.759
Brief AP, Dukerich JM, Brown PR, Brett JF (1996) What's wrong with the Treadway Commission report? Experimental analyses of the effects of personal values and codes of conduct on fraudulent financial reporting. J Bus Ethics 15:183–198. https://doi.org/10.1007/BF00705586
British Educational Research Association (BERA) (2018) Ethical guidelines for educational research, 4th edn. British Educational Research Association, London
Acknowledgments The ideas expressed in this chapter have been supported by the Centre of Excellence in Estonian Studies (European Union, European Regional Development Fund) and are related to research projects IUT20-5 (Estonian Ministry of Education and Research) and support for sectoral R&D – RITA, action 4 – study "Developing Estonian National System for Monitoring and Supporting Ethics in Scientific Research." This work has also profited from research done in the European Commission-financed H2020 projects PRINTEGER and PRO-RES. Ilmar Anvelt and Tiina Kirss helped with English expression.
Carlson RV, Boyd KM, Webb DJ (2004) The revision of the declaration of Helsinki: past, present and future. Br J Clin Pharmacol 57:695–713. https://doi.org/10.1111/j.1365-2125.2004.02103.x
Centre for Ethics, University of Tartu (2017) Estonian code of conduct for research integrity. https://www.eetika.ee/en/estonian-code-conduct-research-integrity
Cox D, La Caze M, Levine MP (2003) Integrity and the fragile self. Ashgate, Aldershot
Danish Ministry of Higher Education and Science (2014) Danish code of conduct for research integrity. https://ufm.dk/en/publications/2014/the-danish-code-ofconduct-for-research-integrity
Davies SR (2018) An ethics of the system: talking to scientists about research integrity. Sci Eng Ethics 25:1235. https://doi.org/10.1007/s11948-018-0064-y
Davis M (1991) Thinking like an engineer: the place of a code of ethics in the practice of a profession. Philos Public Aff 20:150–167
de Lange DW, Guidet B, Andersen FH et al (2019) Huge variation in obtaining ethical permission for a non-interventional observational study in Europe. BMC Med Ethics 20:1–7
Dienhart J (1995) Rationality, ethical codes, and an egalitarian justification of ethical expertise: implications for professions and organizations. Bus Ethics Q 5:419–450. https://doi.org/10.2307/3857392
Dobson J (2005) Monkey business: a neo-Darwinist approach to ethics codes. Financ Anal J 61(3):59–64. https://doi.org/10.2469/faj.v61.n3.2728
Doppelfeld E (2007) Harmonization of research ethics committees – are there limits? Japan Med Assoc J 50:493–494
Drenth PJD (2010) Research integrity: protecting science, society and individuals. Eur Rev 18:417–426. https://doi.org/10.1017/S1062798710000104
Emanuel EJ, Wendler D, Grady C (2008) An ethical framework for biomedical research. In: Emanuel EJ, Grady C, Crouch RA, Lie RK, Miller FG, Wendler D (eds) The Oxford textbook of clinical research ethics. Oxford University Press, Oxford, pp 123–135
Evers K (2003) Codes of conduct: standards for ethics in research. Office for Official Publications of the European Communities, Luxembourg. https://doi.org/10.7589/0090-3558-39.4.904
Fieldsend D (2011) Unity in diversity: can there ever be a true European consensus in bioethics? Hum Reprod Genet Ethics 17:222–234. https://doi.org/10.1558/hrge.v17i2.222
Fjellstrom R (2005) Respect for persons, respect for integrity. Med Health Care Philos 8:231–242. https://doi.org/10.1007/s11019-004-7694-3
Forsberg E-M, Anthun FO, Bailey S et al (2018) Working with research integrity – guidance for research performing organisations: the Bonn PRINTEGER statement. Sci Eng Ethics 24:1023–1034. https://doi.org/10.1007/s11948-018-0034-4
Franzen M, Rödder S, Weingart P (2007) Fraud: causes and culprits as perceived by science and the media. EMBO Rep 8:3. https://doi.org/10.1038/sj.embor.7400884
Fuster GG, Gutwirth S (2016) Promoting integrity as an integral dimension of excellence in research. D II.4 Legal analysis. https://printeger.eu/wp-content/uploads/2017/02/D2.4.pdf
Gowans CW (1987) Moral dilemmas. Oxford University Press, Oxford
Greve HR, Palmer D, Pozner JE (2010) Organizations gone wild: the causes, processes and consequences of organizational misconduct. Acad Manag Ann 4:53–107. https://doi.org/10.1080/19416521003654186
Guillén M, Melé D, Murphy P (2002) European vs. American approaches to institutionalisation of business ethics: the Spanish case. Bus Ethics A Eur Rev 11:167–178. https://doi.org/10.1111/1467-8608.00273
Hakulinen T, Arbyn M, Brewster DH et al (2011) Harmonization may be counterproductive – at least for parts of Europe where public health research operates effectively. Eur J Pub Health 21:686–687. https://doi.org/10.1093/eurpub/ckr149
Hassan S, Wright BE, Yukl G (2014) Does ethical leadership matter in government? Effects on organizational commitment, absenteeism, and willingness to report ethical problems. Public Adm Rev 74:333–343. https://doi.org/10.1111/puar.12216
5
Research Ethics Codes and Guidelines
87
Iphofen R (2009) Ethical decision making in social research: a practical guide. Palgrave Macmillan, Basingstoke Israel M, Hay I (2006) Research ethics for social scientists. SAGE, London Jordan SR (2013) Conceptual clarification and the task of improving research on academic ethics. J Acad Ethics 11:243–256. https://doi.org/10.1007/s10805-013-9190-y Kaiser M (2014) The integrity of science – lost in translation? Best Pract Res Clin Gastroenterol 28:339–347. https://doi.org/10.1016/j.bpg.2014.03.003 Kaptein M, Schwartz MS (2008) The effectiveness of business codes: a critical examination of existing studies and the development of an integrated research model. J Bus Ethics 77:111–127. https://doi.org/10.1007/s10551-006-9305-0 Kidd D (1996) The international conference on harmonization of pharmaceutical regulations the European medicines evaluation agency, and the FDA: who’s zooming who? Indiana J Glob Leg Stud 4:183–206 Kitchener KS, Kitchener RF (2009) Social science research ethics. In: Mertens DM, Ginsberg PE (eds) The handbook of social research ethics. SAGE, Los Angeles, pp 5–23 Kjonstad B, Willmott H (1995) Business ethics: restrictive or empowering? J Bus Ethics 14:445–464. https://doi.org/10.1007/BF00872086 Knoppers BM (2014) International ethics harmonization and the global alliance for genomics and health. Genome Med 6:13. https://doi.org/10.1186/gm530 Komić D, Marušić SL, Marušić A (2015) Research integrity and research ethics in professional codes of ethics: survey of terminology used by professional organizations across research disciplines. PLoS One 10:e0133662. https://doi.org/10.1371/journal.pone.0133662 Ladd J (1991) The quest for a code of professional ethics: an intellectual and moral confusion. In: Deborah GJ (ed) Ethical issues in engineering. Prentice-Hall, Englewood Cliffs, pp 130–136 Lee JJ (2005) What is past is prologue: the international conference on harmonization and lessons learned from European drug regulations harmonization. 
Univ Pennsylvania J Int Econ Law 26:151–191 Levine FJ, Skedsvold PR (2008) Behavioral and social science research. In: Emanuel EJ, Grady C, Crouch RA, Lie RK, Miller FG, Wendler D (eds) The Oxford textbook of clinical research ethics. Oxford University Press, Oxford, pp 336–355. https://doi.org/10.2500/aap.2008.29.3122 Li R, Barnes M, Aldinger CE, Bierer BE (2015) Global clinical trials: ethics, harmonization and commitments to transparency. Harv Public Heal Rev 5:1–7 Luegenbiehl HC (1991) Codes of ethics and the moral education of engineers. In: Deborah GJ (ed) Ethical issues in engineering. Prentice-Hall, Englewood Cliffs Macfarlane B (2009) Researching with integrity: the ethics of academic inquiry. Routledge, New York/London Mahmud S, Bretag T (2014) Fostering integrity in postgraduate research: an evidence-based policy and support framework. Account Res 21:122–137. https://doi.org/10.1080/08989621. 2014.847668 Marchant GE, Sylvester DJ, Abbott KW, Danforth TL (2009) International harmonization of regulation of nanomedicine. Stud Ethics Law Technol 3:Article 6 Marres N (2007) The issues deserve more credit: pragmatist contributions to the study of public involvement in controversy. Soc Stud Sci 37:759–780. https://doi.org/10.1177/ 0306312706077367 Mason HE (1996) Moral dilemmas and moral theory. Oxford University Press, Oxford Mayer T, Steneck N (2007) Final report to ESF and ORI. First world conference on research integrity: fostering responsible research, pp 1–50. https://doi.org/10.1111/j.1747-4949.2007. 00144.x Mcfall L (1987) Integrity. Ethics 98:5–20. https://doi.org/10.1086/292912 McGonigle I, Shomron N (2016) Privacy, anonymity and subjectivity in genomic research. Genet Res (Camb) 98:10–12. https://doi.org/10.1017/S0016672315000221 Meriste H, Parder M-L, Lõuk K et al (2016) Normative analysis of research integrity and misconduct. https://printeger.eu/wp-content/uploads/2016/10/D2.3.pdf
88
M. Sutrop et al.
Molzon JA, Giaquinto A, Lindstrom L et al (2011) The value and benefits of the international conference on harmonisation to drug regulatory authorities: advancing harmonization for better public health. Clin Pharmacol Ther 89:503–512. https://doi.org/10.1038/clpt.2011.10 Morin K (2005) Code of ethics for bioethicists: medicine’s lessons worth heeding. Am J Bioeth 5:59–60. https://doi.org/10.1080/15265160500245501 National Academy of Sciences (2002) Integrity in scientific research. National Academies Press, Washington, DC National Academy of Sciences, National Academy of Engineering, and Institute of Medicine (2009) On being a scientist: a guide to responsible conduct in research, 3rd edn. The National Academies Press, Washington, DC. https://doi.org/10.17226/12192 Neves M d CP (2018) On (scientific) integrity: conceptual clarification. Med Health Care Philos 21:181–187. https://doi.org/10.1007/s11019-017-9796-8 OeAWI (2016) OeAWI guidelines for good scientific practice. https://www.cdg.ac.at/fileadmin/ main/documents/Sonstige_Dokumente/160418_OeAWI_Richtlinien_Broschuere_DE_EN.pdf Office of the Secretary, United States Department of Health, Education, and Welfare (1979) The Belmont report: ethical principles and guidelines for the protection of human subjects of research. Office of the Secretary, United States Department of Health, Education, and Welfare. The National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research Orr RD, Pang N, Pellegrino ED, Siegler M (1997) Use of the Hippocratic oath: a review of twentieth century practice and a content analysis of oaths administered in medical schools in the US and Canada in 1993. J Clin Ethics 8:377–388 Painter-Morland M (2010) Questioning corporate codes of ethics. Bus Ethics 19:265–279. https:// doi.org/10.1111/j.1467-8608.2010.01591.x Papademas D (2009) IVSA code of research ethics and guidelines. Vis Stud 24:250–257. 
https:// doi.org/10.1080/14725860903309187 Parder M-L, Juurik M (2019) Reporting on existing codes and guidelines. Pro-Res D1.1, Tartu, European Commission: PRO-RES - PROmoting integrity in the use of RESearch results. https:// doi.org/10.5281/zenodo.3560777 Parry J (2017) Developing standards for research practice: some issues for consideration. In: Iphofen R (ed) Finding common ground: consensus in research ethics across the social sciences. Emerald Publishing, Bingley, pp 77–101 Pipes RB, Holstein JE, Aguirre MG (2005) Examining the personal-professional distinction 8 ethics codes and the difficulty of drawing a boundary. Am Psychol 60:325–334. https://doi.org/ 10.1037/0003-066X.60.4.325 Pritchard MS (2006) Professional integrity: thinking ethically. University Press of Kansas, Lawrence Resnik DB (1998) The ethics of science: an introduction. Routledge, London. https://doi.org/ 10.1111/1467-8519.00123 Resnik DB (2009) International standards for research integrity: an idea whose time has come? Account Res 16:218–228. https://doi.org/10.1080/08989620903065350 Resnik DB, Rasmussen LM, Kissling GE (2015) An international study of research misconduct policies. Account Res 22:249–266. https://doi.org/10.1080/08989621.2014.958218 Russel WMS, Burch RL (1959) The principles of humane experimental technique. Methuen, London Ruyter KW (2003) Forskningsetikk: Beskyttelse av enkeltpersoner og samfunn. Gyldendal akademisk, Oslo Sarauw LL, Degn L, Ørberg JW (2019) Researcher development through doctoral training in research integrity. Int J Acad Dev 24:178–191. https://doi.org/10.1080/1360144X. 2019.1595626 Schaubroeck JM, Hannah ST, Avolio BJ et al (2012) Embedding ethical leadership within and across organization levels. Acad Manag J 55:1053–1078. https://doi.org/10.5465/amj. 2011.0064
5
Research Ethics Codes and Guidelines
89
Schein EH (2010) Organizational culture and leadership. Jossey-Bass, San Francisco. https://doi. org/10.1080/10400435.2010.518579 Schwitzgebel E (2009) Do ethicists steal more books? Philos Psychol 22:711–725. https://doi.org/ 10.1080/09515080903409952 Shamoo AE, Resnik DB (2015) Responsible conduct of research, 3rd edn. Oxford University Press, New York Sims RR, Brinkmann J (2003) Enron ethics (or: culture matters more than codes). J Bus 45:243–256 Singapore Statement on Research Integrity (2010) Singapore Statement on Research Integrity. https://doi.org/10.3768/rtipress.2018.pb.0018.1806 Spielthenner G (2015) Why comply with a code of ethics? Med Health Care Philos 18:195–202. https://doi.org/10.1007/s11019-014-9594-5 Starr WC (1983) Codes of ethics towards a rule-utilitarian justification. J Bus Ethics 2:99–106. https://doi.org/10.1007/BF00381700 Sutrop M (2016) Kuidas panna eetikakoodeksid paremini toimima. In: Sutrop M (ed) Eetikakoodeksid. Väärtused, normid ja eetilised dilemmad. Eesti Keele Sihtasutus, Tartu, pp 85–103 Townend D (2018) Conclusion: harmonisation in genomic and health data sharing for research : an impossible dream? Hum Genet 137:657–664. https://doi.org/10.1007/s00439-018-1924-x UNESCO (2015) Universal declaration on bioethics and human rights. UNESCO, Paris UNESCO (2017) Recommendation on science and scientific researchers; 2018. UNESCO, Paris, pp 1–2 Unger S (1991) Codes of engineering ethics. In: Johnson DG (ed) Ethical issues in engineering. Prentice-Hall, Englewood Cliffs. https://doi.org/10.1007/bf02536578 Urushihara H, Parmenter L, Tashiro S et al (2017) Bridge the gap: the need for harmonized regulatory and ethical standards for postmarketing observational studies. Pharmacoepidemiol Drug Saf 26:1299–1306. https://doi.org/10.1002/pds.4269 Werhane P, Doering J (2007) Conflicts of interest and conflicts of commitment. In: Elliot D, Stern JD (eds) Researcher ethics: a reader. 
University Press of New England, London, pp 165–189 World Medical Association (2013) World Medical Association Declaration of Helsinki: ethical principles for medical research involving human subjects. JAMA 310:2191–2194. https://doi. org/10.1001/jama.2013.281053 Wulf K (2011) From codes of conduct to ethics and compliance programs. Recent developments in the United States. Logos Verlag, Berlin
6
Protecting Participants in Clinical Trials Through Research Ethics Review Richard Carpentier and Barbara McGillivray
Contents

Introduction . . . . . . 92
About Research Ethics Committees . . . . . . 93
Establishing Adequate Ethics Review Systems . . . . . . 93
The Quality Improvement Value of Failures of Ethics Systems . . . . . . 95
The Tuskegee Experiment . . . . . . 95
Exporting Research to Allow the Otherwise Unallowable to Happen . . . . . . 96
The Case of the HPV Trial in India . . . . . . 96
Phase I Trials . . . . . . 98
Challenges of Establishing Research Ethics Committees . . . . . . 100
The Ethics Review . . . . . . 103
Summary . . . . . . 104
References . . . . . . 104
Abstract
International standards require research involving humans to undergo ethics review by a research ethics committee. Such committees apply local, national, and international policies and regulations in order to ensure the protection of human participants and to ensure that research projects have undergone independent scientific review. The committees themselves exist within universities and hospitals or at a national level. However, delicate issues emerge whenever governance arrangements do not ensure the adequate independence of the process and the avoidance of conflicts of interest, when research participants are used primarily for the advancement of science, or when commercial benefits are privileged over the betterment of public health. All the elements of the decisional process of an ethics review system must be independent of the organizational structures that have an interest in the outcome of the ethics review. International research may raise the issues of shopping for lower national standards and taking advantage of vulnerable populations and less adequate national systems of protection. The exportation of research activities that would not be compliant with the regulations of a developed country, which is often the main beneficiary of the research activity, to a developing country where the regulatory requirements are less stringent is called "ethics dumping" and is morally objectionable. We give examples of inadequacies of review and propose safeguards to ensure the adequate balance of interests, including education, improvement of health literacy, and the development of appropriate, transparent, accountable, and robust oversight mechanisms.

Keywords

Conflicts of interest · Governance · Stewardship · International research · Ethics review · Regulations · Policies · Institutional governance · Ethics dumping · Oversight · Accreditation

R. Carpentier (*)
Research Ethics Board, Children Hospital of Eastern Ontario and Hôpital Montfort, Ottawa, ON, Canada
Département de médecine, Faculté de médecine et des sciences de la santé, Université de Sherbrooke, Sherbrooke, QC, Canada
e-mail: [email protected]; [email protected]

B. McGillivray
Department of Medical Genetics, University of British Columbia, Vancouver, BC, Canada
e-mail: [email protected]

© Springer Nature Switzerland AG 2020
R. Iphofen (ed.), Handbook of Research Ethics and Scientific Integrity, https://doi.org/10.1007/978-3-030-16759-2_3
Introduction

Several instances of abuse of research participants have led national and international authorities and policy-makers to establish the bases for a global system of research ethics review that relies on the work of research ethics committees (RECs). (Research ethics committees are also known by other names in various countries: Institutional Review Boards (IRBs), Research Ethics Boards (REBs), etc. Remarks about RECs apply to all equivalent committees.) The work of RECs is shaped in part by international requirements embodied in declarations, guidelines, extraterritorial laws, funding agreements, and the like. These requirements, in turn, rely on national and local implementation that guarantees independent reviews, the avoidance of conflicts of interest at all levels, and the protection of research participants (and their communities), including the documentation of free and informed consent. The implementation of the system depends in large part on a strong national legal infrastructure that can make RECs' decisions effective. However, no system is perfect. In developed countries, we often see unrecognized conflicts of interest and failures to observe the most rigorous requirements, leading to abuses and harm. In low- and middle-income countries, the governance infrastructure necessary to support a system of protection for research participants, including appropriate laws, is often lacking.
About Research Ethics Committees

Research ethics review may be mandated by states, funders, or international bodies. Such reviews are performed by research ethics committees whose mandate is to ensure the protection of research participants. They do so by first obtaining an independent assurance that the proposed research is scientifically sound and has social value. RECs are small committees, usually comprising a minimum of five people from scientific and nonscientific backgrounds, who contribute different perspectives to the ethical review of protocols. Many normative instruments (e.g., laws, regulations, and policies) rely on RECs to ensure that other stakeholders in the research enterprise, such as sponsors and researchers, abide by their obligations: adhering to approved protocols, obtaining informed consent through means that correspond to the participants' literacy level, and notifying protocol deviations and adverse events, as well as ensuring that researchers and their staff have completed the education required to conduct research adequately. Historically, RECs were introduced internationally through the first revision of the Declaration of Helsinki in 1975 (World Medical Association 1975), although clinical research committees existed as early as the 1950s in the National Institutes of Health (NIH) Clinical Center to advise on legal issues and "as a method for making moral decisions" (Stark 2012). They are now increasingly mentioned in new contexts, such as privacy legislation that relies on RECs to implement privacy protections in research. RECs take part in the social apparatus for risk management, with an emphasis on protecting individuals and communities engaged in research.
Establishing Adequate Ethics Review Systems

It is incumbent upon each national authority, as part of a duty of care, to take the appropriate measures to ensure that a robust legal framework exists to provide the basis for a system of ethics review that functions with independence, competence, adequate resources, and appropriate accountability (World Health Organization 2011). Whenever there is international collaboration, whether through public or private sponsorship, the sponsor relies on a well-established system of ethics review in the country where the research activities will occur. It is the moral responsibility of the sponsors of research, as well as of the recipient country, to ensure that a good system of protection is in place. In itself, a robust ethics review system does not constitute an absolute guarantee of protection against abuses of the rights of research participants. However, rigorous ethics review greatly reduces the risk that people will be abused or harmed because of their participation in research. In that regard, clinical trials may carry a higher risk of harm than other types of research and should be subjected to more stringent oversight. In developed countries, systems of ethics review evolved gradually after the Second World War and the Nuremberg trials to offer protection to research
participants in the context of a rapidly expanding research enterprise. The way in which they developed varied from one country to another (Hedgecoe 2010), but they eventually became an important element of the institutional, national, and international landscape. The governance of ethics review systems is complex and multilayered, as it involves a web of local, regional, national, and international laws, regulations, and policies, as well as codes of ethics. Insofar as it requires independent scientific reviews in order to establish the validity of the research questions and methods, the ethics review process may help improve the quality of scientific research, but it is especially tasked with protecting research participants as well as society as a whole. It is very important that ethics review processes be able to examine research before it is initiated and that they be independent and competent, so that they can perform the different evaluations that are part of their important mandate, along with other actors in the research enterprise. The success of RECs in protecting research participants relies on the following conditions:

• First, a relevant ethics committee must exist with the capability, resources, and independence to evaluate ethics applications. It is important that RECs be appropriately constituted, well resourced, and offered opportunities to develop and maintain their expertise. Although national cultural and economic contexts must permeate the considerations of RECs in their decision-making process, it is essential that some internationally recognized standards be incorporated in their review. This is unavoidable, as international clinical trials contribute to regulatory approval in a number of countries. It is also of paramount importance that RECs be established with independence, at arm's length from government, sponsors, and research organizations, in order to allow them to discharge their mandate.
• Second, such committees must be able to recognize culturally sensitive ethical issues in complex settings. With the globalization of research, and especially of clinical trials, RECs need to be sensitive to cultural differences that may influence the safety of research participants.
• Third, a compliance mechanism must be in place (Schroeder et al. 2018).

In all major incidents where many research participants were injured or otherwise wronged, at least one of these conditions, and generally more than one, was not observed. With clinical trials becoming increasingly global and including more low- and middle-income countries (LMICs), a greater danger has emerged. Some actors in the research enterprise have ventured into exporting doubtful practices to LMICs where the national ethics review system has not yet developed enough to offer adequate protections. The expression "ethics dumping" was introduced by the European Commission to refer to ethically sensitive research activities undertaken outside the European Union (EU) in a fashion that would not have been acceptable by EU ethical standards (Schroeder et al. 2018). One could argue that a similar phenomenon can occur whenever subsets of a population who are
6
Protecting Participants in Clinical Trials Through Research Ethics Review
95
economically disadvantaged or have low literacy levels are targeted as easy-to-enroll research participants because of their particular vulnerabilities. Investigations into clinical trials in LMICs have led to the identification of five principal ethical violations:

1. Exploitation of people's vulnerability
2. Absence of free and informed consent
3. Improper use of placebos
4. Absence of compensation norms in case of serious adverse events
5. No access to treatment at the end of the trial (Anonymous 2014)
The Quality Improvement Value of Failures of Ethics Systems

The Tuskegee Experiment

Improvements in systems of ethics review often come after dramatic failures have occurred. History abounds with examples of system failures followed by improvements. The experiment conducted by the US Public Health Service between 1932 and 1972 in Tuskegee, labelled the Tuskegee Study of Untreated Syphilis in the Negro Male, is an early example. The study was initiated at a time when documenting the natural history of syphilis seemed a reasonable approach, as the treatments available were not effective and did not perform significantly better than non-treatment. However, the study continued long after penicillin had been shown to be effective against the disease. The Tuskegee study is generally portrayed as an ethically flawed study in which participants were not given penicillin as it became available. In a nutshell, the Tuskegee experiment was a prospective study of American Blacks with untreated syphilis. The study took place in Macon County, an economically depressed community of sharecroppers, whose situation was aggravated in the early 1930s by the Great Depression, with low levels of education and health literacy. The Public Health Service had noted a high incidence of syphilis in that population. There was no consenting process, the participants were never told they had syphilis, and they were deceived into submitting to invasive procedures, being told they were treated for "bad blood." Some participants received the then-current standard treatment (arsenic, for example) or penicillin as it became available (Shweder 2004); these participants were replaced in the original pool by new participants who were not treated. Even after effective treatment became available in 1947, natural-history observations continued until 1972. Over 400 positively tested men and 200 healthy controls were enrolled to observe the course of the disease in infected men until death.
The experiment was stopped only after a whistle-blower from within the Public Health Service went to the press and an article was published. The public outcry that followed the publicizing of the study set in motion a rapid government response. In 1974, after Congressional hearings, the National Research
Act established the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. Regulations approved in 1974 imposed voluntary informed consent and review by an Institutional Review Board for Health Department-funded research. In 1978, the Commission published the Belmont Report (National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research 1978) with its set of ethical principles: respect for persons, beneficence, and justice. In 1991, federal departments and agencies joined in approving the Federal Policy for the Protection of Human Subjects, otherwise known as the Common Rule. Thus, a deeply flawed experiment became the cornerstone on which the US system of human subject protection was built.
Exporting Research to Allow the Otherwise Unallowable to Happen

Undertaking research in a country where regulations are underdeveloped and ethics review systems are not rigorously implemented may end up compromising research participants, research sponsors, and the host government, which hopes to generate economic benefits while ensuring expanded access to affordable medications. When support from government and research stakeholders is lacking, RECs may not have the capacity to develop or maintain their expertise, their independence may be compromised, and the outcome of their review process may be overlooked or overridden by authorities. It may also happen that sponsors bypass competent RECs and shop for a less experienced or more lenient REC that will accept their project rapidly and without change. Governments should not accept that research review be done against lower standards (Nuffield Council on Bioethics 2002).
The Case of the HPV Trial in India

In 2005, India removed restrictions on global clinical trials. The aims were to attract more research and development, with increased investments in foreign currencies, and to gain advanced scientific knowledge through access to the testing of innovative molecules. In exchange, India offered lower costs, a large pool of mostly naïve research subjects, access to financial support and infrastructure, and a mostly deregulated environment (Sariola et al. 2018):

    Drug companies are drawn to India for several reasons, including a technically competent workforce, patient availability, low costs and a friendly drug-control system. While good news for India's economy, the booming clinical trial industry has raised concerns because of a lack of regulation of private trials and the uneven application of requirements for informed consent and proper ethics review. (Anonymous 2008)
As the number of international trials grew, voices from civil society started to challenge the claims of overall benefits, highlighting harms to participants and human rights abuses, especially among the illiterate poor. A famous case is that of the human papillomavirus (HPV) vaccine trial. Starting in July 2009, a project described as a demonstration involving the HPV vaccine took place in the Indian states of Andhra Pradesh and Gujarat, involving 25,000 girls aged 10 to 14. The goal of the project was to "evaluate the feasibility and acceptability of using the vaccine in an immunisation programme" (Srinivasan 2011). The program was carried out by an American agency, the Program for Appropriate Technology in Health (PATH) (Parliament of India 2013). The political/regulatory authority did not consider the project a clinical trial falling under the rigor of the regulatory framework. Seven girls died in the experiment. According to a 2013 report of the Indian Department-Related Parliamentary Standing Committee on Health and Family Welfare looking specifically at that trial, the failures of the system were numerous:

• Commercial interests apparently overrode the whole enterprise.
• The whole project was mislabelled an "observational" or "demonstration" project in order to obtain approval more easily.
• The trial was conducted in contravention of the national drug approval authority's guidelines stipulating that a trial should have occurred in adults first.
• A specially appointed committee concluded, without inquiry, that the seven deaths were not related to the trial.
• In its report, the Parliamentary Committee noted the inappropriateness of the Indian Council of Medical Research (ICMR) both being mandated to enunciate the national ethical guidelines and being part of a public-private partnership.
• The Health Department and the drug regulator had different views as to whether this was a clinical trial or not.
Ultimately, the political views of the Health Department prevailed, and the project was not considered a clinical trial, exempting it from much of the normal ethical and regulatory requirements.
• From the start of the trial, the known serious adverse events (SAEs) were not well documented for the Indian partners and the regulatory authority (and no safety monitoring committee had been established).
• The Parliamentary Committee observed that the informed consent process had serious flaws, especially as most parents were illiterate.

Because the political/regulatory authority decided not to consider this project a clinical trial, it escaped many of the normal requirements for this kind of research, including the establishment of a Data Safety Monitoring Board (DSMB) and the filing of SAEs. Furthermore, the REC that was involved in the review of this "demonstration project" was a state-level committee with little experience and expertise. It did not request the creation of a DSMB, nor did it request that the consent process be adapted to the low literacy level of the research participants.
The Parliamentary Committee noted that the ethics committees did not play their role and that the functioning of ethics committees should be regularly monitored. As a consequence of the noted failures in the conduct of clinical trials, in October 2013 the Supreme Court of India put a halt to 157 newly approved clinical trials until they were re-approved under a new system meeting the court's prior requirements. It also mandated the audiovisual recording of the informed consent process (Bagla 2013). The government had already mandated the registration of RECs (Chatterjee 2013); furthermore, following the Report of the Prof. Ranjit Roy Chaudhury Expert Committee (2013), there is now a program of accreditation in India for research ethics committees, investigators, and clinical trial sites. (Standards for accreditation can be found at http://www.cdsco.nic.in/writereaddata/finalAccreditation%20Standards.pdf.)
Phase I Trials

Phase I clinical trials, especially first-in-human trials, are designed to examine pharmacology, tolerability, and safety in human participants, who are usually healthy. In 2006, a phase I study of TGN1412, a CD28 superagonist antibody, was conducted in Great Britain with six human volunteers (the TeGenero study). After receiving a very small dose of the drug, all six experienced a catastrophic response and required intensive care. An expert group tasked with examining the issues raised by this trial noted the importance of improving early communication where a trial combines a high degree of complexity with a higher-risk agent:

Increased opportunity for communication between the regulator and research ethics committees on applications for trials of higher risk medicines would also contribute to the overall safety environment. (Expert Scientific Group on Phase One Clinical Trials 2006)
The challenges raised for RECs by novel therapeutic agents require that they have access to proper independent expert advice, for example, from the regulatory authority or a specially constituted independent expert committee. The report of the Expert Scientific Group, established after the TeGenero trial in London, observed that:

In the United Kingdom, the standard operating procedures for Research Ethics Committees currently require them to identify phase I clinical trials involving high-risk pharmaceuticals as defined by the Expert Scientific Group and have their approval contingent on a favorable review by the Medicines and Healthcare Products Regulatory Agency, which is expected to include an expert opinion. (Nada and Somberg 2007)
In addition, other recommendations focused on safety issues, particularly as studies move from animals to humans; these are meant to guide future first-in-human trials. Increased safety of phase I trials requires that they be run according to accepted dosing regimens. A lesson learned from this particular trial is that researchers,
6
Protecting Participants in Clinical Trials Through Research Ethics Review
research organizations, RECs, and regulators must stay alert to avoid risky dosing strategies (Hedgecoe 2014).

A phase I trial testing BIA 10-2474, a fatty acid amide hydrolase (FAAH) inhibitor, was conducted by a contract research organization in Rennes, France. It began with single ascending doses (SAD) of the drug without serious incident; cohorts then received multiple ascending doses over a 10-day period. In January 2016, a 47-year-old man in the trial was hospitalized with symptoms suggestive of a stroke. Imaging showed abnormal brain lesions, and he became comatose and died. Five more men were hospitalized, three of whom developed similar lesions. The events occurred after the fifth and sixth days of the 50 mg daily dose level. At the time of the adverse events, 90 men and women had received the investigational drug.

In the months following this tragedy, the trial was examined carefully at national and international levels, and a number of recommendations were made, emphasizing throughout the paramount importance of participant safety. In particular, a group of scientists tasked with reviewing the incidents around the BIA trial concluded:

1. The investigator brochure did not include enough preclinical data to determine an effective dose range for the trial, or even an indication of therapeutic potential.
2. When drugs may affect the central nervous system, a neuropsychological assessment should be conducted to monitor any alteration of psychological state during exposure to the product.
3. All phase I trials should adjust doses according to the data collected from the previous volunteers of the trial, not from preset dose escalation estimates.
4. The safety of participants should be the first consideration. Dose administration should be spaced out to allow any negative effect to manifest itself.
5. There should be more transparency regarding ongoing and past phase I trials to improve protection of research participants.

Participants' safety issues included inadequate exclusion criteria, inadequate medical expertise to ensure exclusions were recognized, inaccurate risk descriptions, poor translation of participants' information material and informed consent, potentially unsafe cohort dosing practices (dosing all participants at once was a particular issue with TGN1412), delayed recognition, treatment, and reporting of adverse events (which increased risks to those having a reaction and made reconsent less likely for those still to receive the medication), and delays in making details of the trial available to allow broad evaluation of each incident for future safety. In the TGN1412 trial, REC review was also faulted. Each trial "disaster," when fully analyzed, prompted changes to guidelines for future early-phase trials. There should be a way for RECs to learn from clinical trial tragedies, with detailed reports made public, as in the case of transportation tragedies, and mechanisms to implement remediation rapidly to avoid repetition.
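The dosing lessons above (dose a sentinel participant first, space administrations out, and escalate only on observed data) can be expressed as a toy decision rule. This is purely an illustrative sketch under invented assumptions, not any regulator's or protocol's actual algorithm; the function name and return strings are hypothetical:

```python
# Toy sketch of staggered ("sentinel") dosing with data-driven escalation.
# Purely illustrative; real trial designs are fixed by protocol and regulators.

def next_action(cohort_dosed: int, cohort_size: int, saes_observed: int) -> str:
    """Decide the next step at one dose level.
    - Halt immediately on any serious adverse event (SAE).
    - Dose one sentinel participant before the rest of the cohort.
    - Escalate only after the whole cohort clears the observation window.
    """
    if saes_observed > 0:
        return "halt trial and review"
    if cohort_dosed == 0:
        return "dose sentinel participant, then observe"
    if cohort_dosed < cohort_size:
        return "dose next participant after observation window"
    return "review cohort data before escalating dose"

print(next_action(cohort_dosed=0, cohort_size=8, saes_observed=0))
# dose sentinel participant, then observe
```

The point of the sketch is that each administration is gated on the data gathered so far, rather than following a preset escalation schedule regardless of events.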
Challenges of Establishing Research Ethics Committees

Many countries still face the challenges associated with establishing their own systems of ethics review. Some considerations are discussed below.

Countries, institutions, and communities should strive to develop RECs and ethical review systems that ensure the broadest possible coverage of protection for potential research participants and contribute to the highest attainable quality in the science and ethics of biomedical research. States should promote, as appropriate, the establishment of RECs at the national, institutional, and local levels that are independent, multidisciplinary, multisectoral, and pluralistic in nature. RECs should report to the highest possible level in the organization, to shield them from inappropriate pressure from sponsors or researchers to approve research proposals. Conflicts of interest take many forms and threaten to compromise the mission of RECs. RECs require administrative and financial support (World Health Organization 2000). Similarly, existing international guidelines are useful as an initial guide while expertise is gained locally. Organizational support for education, and independence from external pressures, are of paramount importance.

There is increasing recognition that countries involved in biomedical research, particularly clinical trials, should develop robust local research ethics committees. Particularly in countries with limited resources, establishing a national committee (rather than multiple local committees) may allow the development of expertise in the review of clinical trials. In 2011, the World Health Organization (WHO), in its Standards and Operational Guidance for Ethics Review of Health-Related Research with Human Participants (2011), presented a framework of ten standards, with suggested strategies to achieve each.
Importantly, Standard 1 addressed the need for "relevant authorities" to ensure support with an adequate legal framework. Global clinical trials are usually conducted with the sponsors and researchers pledging observance of international instruments such as the Declaration of Helsinki and the Good Clinical Practice guidelines. These are very important documents that provide essential guidance on how trials should be conducted. However, guidelines alone lack the implementation force that an adequate legal framework provides. To ensure success and continuity, a research ethics review system must emanate from, and be maintained at, the highest level possible. To quote the UNESCO document on establishing bioethics committees, codes of ethics require "advocates among those who formulate, implement and monitor public policies."

Particularly since the Second World War, most European and North American countries have established research ethics committees. However, the models vary considerably, perhaps related to the authority of the central government over healthcare decisions and perhaps to the will to make ethical deliberation (including healthcare and human research) part of governmental decision-making. A centralized high-level ethics committee may exist within government or as a free-standing but supported entity. Each model may have advantages, but a free-standing entity may suffer less from conflicts of interest and political whim. Such
committees may function as think tanks on ethical issues (either permanently constituted or drawn together for specific issues), establish research ethics regulations or guidelines, and receive reports from more regional committees. As countries take on their own review of clinical trials, the national-level committee can be charged with setting standards and monitoring.

Individual research ethics review committees were historically set up at universities or hospitals, with a mandate to review projects undertaken by local researchers. Ethical guidelines could come from a variety of sources, some from within the country and others more international in scope. For best practice, such regional or institution-specific RECs established standard operating procedures (SOPs) and ran a program of education for committee members and for researchers.

As clinical trials have flourished, most are now multisite or even multinational. Multisite review has become the norm and has led to a variety of solutions. Private for-profit review, which can centralize review through a single entity, has been successful where permitted. Publicly funded, more centralized review models (one overall committee in a country, one committee in a state or province, one review body in a particular area of medicine) are also workable. In developing countries, multisite studies have led to an enormous problem of multiple reviews, perhaps of varying standards and expertise on the part of some committees. A variety of solutions has been suggested, each with advantages and disadvantages. Having ethics review only in a developed country (often the site of the pharmaceutical firm) may offer the expertise of a well-established and experienced committee, but it loses the crucial local input regarding cultural norms and knowledge of the participants, and it distances the review from good monitoring. However, partnering with local committees may come with a power differential, such that local concerns are overridden.
A proposed solution would have the local country not only enact legislation regarding research (including a mechanism for review) but also establish a clinical trials committee. A number of published international documents give advice for such an endeavor (UNESCO 2005; World Health Organization 2011; International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use (ICH) 2016). Importantly, all underscore the need for an overall program with research ethics policies, SOPs, adequate support, ongoing education, transparency, management of conflicts of interest, and monitoring. Pairing with an established committee is crucial until the local group has attained the experience and confidence to proceed. Pairing also provides an alternate committee, should an appeal be brought forward.

In considering whether proposed research is ethically sound, the REC must weigh a number of issues: the soundness of the scientific methodology, possible benefits against possible risks, the adequacy of the consenting process, justice and fairness in the selection of participants, and compliance with local policy and legal requirements. All of these have been at issue in reviews of clinical trials. In addition, clinical trial regulations and international guidelines (International Council for Harmonisation of Technical Requirements
for Pharmaceuticals for Human Use (ICH) 2016) require that RECs assess the scientific merits of a protocol. The Canadian Tri-Council Policy Statement: Ethical Conduct for Research Involving Humans (Canadian Institutes of Health Research, Natural Sciences and Engineering Research Council of Canada, and Social Sciences and Humanities Research Council of Canada 2014) suggests how such a review could be accomplished, while stating that RECs should avoid duplicating previous arm's-length peer reviews (e.g., as part of a grant competition). RECs that review a great number of clinical trials may well have the expertise within their membership to accomplish such reviews. However, as few RECs in developing countries will have members with expertise in all methodologies, it may be helpful to request arm's-length peer reviews. These can give an objective evaluation to aid the REC in judging whether the proposed methodologies are capable of answering the research question, especially if the researchers are asked to include responses to the review. A second benefit of such outside review is the management of conflicts of interest among REC members, as the local clinical trial community may be relatively small. Phase I clinical trials will require special expertise obtained either from the regulator or from a specially constituted committee. Such peer reviews could be conducted by a formally constituted committee or on an ad hoc basis.

The selection of participants and provision of an understandable consenting process are crucial, particularly when one considers the phenomenon of ethics dumping. Potential participants may be seen as commodities. They may be quite naïve to the research process (i.e., not realizing that research differs from standard care), may not have access to any level of medical care, and may be from the poorest segments of the local society.
Much care must therefore be taken to make the study understandable: presented in the local language, in a format accessible to those unable to read, explained by a local person, and so on. All of this is even more important in a trial involving vulnerable individuals such as children or mentally incapacitated or ill persons.

Many issues in clinical trials revolve around adverse events. The REC must be satisfied that a process (such as a safety monitoring committee) is in place between the sponsor and investigators to rapidly assess and report adverse events, and that the REC itself has a mechanism to assess the reports such that studies can be suspended if necessary. Outside the purview of the REC itself, but part of the country's oversight responsibility, is the central registration of such trials. The REC can demand evidence of such registration.

In terms of risks and benefits, the REC must ensure that the consenting process fairly describes risks and compares them to the risks of that population's daily life. Benefits to be considered (which must not amount to coercion) may include payments, other healthcare provisions, and access to treatments following the trial. Other benefits may include the provision of employment for the local population, training in research methodologies, and community resources.
The Ethics Review

In terms of practicalities, several suggestions can be made regarding the review itself. Proposals coming to the REC may be of several types: a clinical trial (often multicenter or multinational), a researcher-designed proposal involving a local community, a chart review, or a community-based research protocol, to name a few. As the REC is unlikely always to have members with expertise in every protocol, a peer review procedure evaluating the science, whether the proposed methodology might achieve results, and whether biases or limitations have been considered can be helpful. In point form, we offer advice for committee members:

• Read the entire protocol first, considering it from the perspective of the participant, but also of the family and the community.
• Have an ethical framework and your local review criteria in mind.
• Acknowledge and respect traditional knowledge in research involving local communities.
• Areas of focus in the review:
  – Methodology – poor methodology may mislead or put participants at risk.
  – Selection and recruitment – special concerns may ensue from use of "captive" populations such as students, employees, colleagues, neighbors, or those in situations making them particularly vulnerable; consider the justification for those excluded in the selection.
  – Recruitment strategy – how is information given, and could there be coercion or undue inducement?
  – Informed consent process – generally, participants must be competent, or additional procedures should be in place, such as obtaining the assent of minors. There must be full disclosure of the research purpose, what will happen, and the risks and benefits. Ensuring time and private space for the process aids voluntary choice.
  – Withdrawal details – without consequence, clearly stated, with details of what will happen to information or samples already collected.
  – Privacy and confidentiality – ensure there is transparency about the degree of confidentiality possible (consider focus groups, small communities, and knowledge within families).
  – Risks and benefits – ensure the risks are reasonable in relation to the anticipated benefits and the importance of the knowledge that may result. Foreseeable harms should not outweigh anticipated benefits. Consider physical, psychological, and social risks. Look for benefits such as mentoring, jobs for the community, improved health, or even giving a voice to a community.
  – Data ownership – who owns the data or the sources of the data? Review agreements for use of traditional knowledge, blood samples, and medical records.
Similarly, evidence of community support should be available in the form of letters, memoranda of understanding (MOUs), or other documentation. True community-based research begins with a research topic or question important to the community, with the goal of combining knowledge with social change, and the research process then involves community members. Such research may never be published. More often, communities are involved with outside researchers, making it the responsibility of those conducting the review to ensure community knowledge and support. A recent summary of setting up a clinical trial for nodding syndrome in Africa (Anguzu et al. 2018) noted major hurdles of systemic community issues, the perceptions of community leaders and other stakeholders, and negative community attitudes toward scientists and research on nodding syndrome, all of which needed to be addressed before initiating the research itself. Community input could be sought via community experts, by setting up an advisory group, or by arranging input from the whole community (e.g., town hall meetings).

Areas of focus in the REC review should include, at a minimum, assurance that the methodology is appropriate; how participants will be recruited and then selected; an evaluation of the risks and benefits of the research (considering benefits at the level of the individual participant as well as the community); the consenting process; the right to withdraw; and issues of confidentiality. Special concerns in selection and recruitment are captive or vulnerable individuals, those excluded for nonscientific reasons, and coercive recruitment methods.
Summary

Robust systems of ethics review must rely on the competence of trained REC members and respect for the particularities and literacy levels of participant communities. Such a system helps advance the development of healthcare solutions, supports the contribution of clinical trials to the local economy, and aids the implementation of local governments' research-related laws and regulations. The ethics review system is often neglected: it is a small and fragile entity within a large institution, working in the shadows even while its role demands independence. However, failure to support a strong system threatens the safety of research participants and the soundness and integrity of science.
References

Anguzu R, Akun PR, Ogwan R et al (2018) Setting up a clinical trial for a novel disease: a case study of the doxycycline for the treatment of nodding syndrome trial – challenges, enablers and lessons learned. Glob Health Action 11(1):1431362. https://doi.org/10.1080/16549716.2018.1431362
Anonymous (2008) Clinical trials in India: ethical concerns. Bull World Health Organ 86(8):581–582

Anonymous (2014) The ethical cost of offshoring clinical trials. Global Health Watch 4: an alternative world health report. People's Health Movement, Cape Town, pp 319–329

Bagla P (2013) India's Supreme Court mandates videotaped consent in clinical trials. Science 363(6423). https://www.sciencemag.org/news/2013/10/indias-supreme-court-mandates-videotaped-consent-clinical-trials. Accessed 10 Mar 2019

Canadian Institutes of Health Research, Natural Sciences and Engineering Research Council of Canada, and Social Sciences and Humanities Research Council of Canada (2014) Tri-council policy statement: ethical conduct for research involving humans. Ottawa. http://www.pre.ethics.gc.ca/pdf/eng/tcps2-2014/TCPS_2_FINAL_Web.pdf. Accessed 10 Mar 2019

Chatterjee P (2013) India tightens regulation of clinical trials to safeguard participants. Br Med J 346:f1275

Collective (2013) Report of the Prof. Ranjit Roy Chaudhury Expert Committee to formulate policy and guidelines for approval of new drugs, clinical trials and banning of drugs. Ministry of Health and Family Welfare, New Delhi. http://admin.indiaenvironmentportal.org.in/files/file/clinical%20trials1.pdf

Expert Scientific Group on Phase One Clinical Trials (2006) Final report. The Stationery Office, London

Hedgecoe A (2010) "A form of practical machinery": the origins of research ethics committees in the UK, 1967–1972. Med Hist 53:331–350

Hedgecoe A (2014) A deviation from standard design? Clinical trials, research ethics committees, and the regulatory co-construction of organizational deviance. Soc Stud Sci 44(1):59–81

International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use (ICH) (2016) Integrated addendum to ICH E6(R2): guideline for good clinical practice E6(R2). ICH, Geneva. https://www.ich.org/fileadmin/Public_Web_Site/ICH_Products/Guidelines/Efficacy/E6/E6_R2__Step_4_2016_1109.pdf. Accessed 10 Mar 2019

Nada A, Somberg J (2007) First-in-man clinical trials post-TeGenero: a review of the impact of the TeGenero trial on the design, conduct and ethics of FIM trials. Am J Ther 14(6):594–604

National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research (1978) The Belmont report: ethical principles and guidelines for the protection of human subjects of research. Department of Health, Education and Welfare, Bethesda

Nuffield Council on Bioethics (2002) The ethics of research related to healthcare in developing countries. Nuffield Council on Bioethics, London

Parliament of India/Rajya Sabha, Department-Related Parliamentary Standing Committee on Health and Family Welfare (2013) Seventy-second report: alleged irregularities in the conduct of studies using human papilloma virus (HPV) vaccine by PATH in India (Department of Health Research, Ministry of Health and Family Welfare). Rajya Sabha Secretariat, New Delhi

Sariola S, Jeffery R, Jesani A et al (2018) How civil society organisations changed regulation of clinical trials in India. Sci Cult. https://tandfonline.com/doi/pdf/10.1080/09505431.2018.1493449?needAccess=true. Accessed 10 Mar 2019

Schroeder D, Cook J, Hirsch F et al (2018) Ethics dumping: introduction. In: Schroeder D, Cook J, Hirsch F et al (eds) Ethics dumping: case studies from north-south research collaborations. SpringerBriefs in Research and Innovation Governance. Springer Open, Cham

Shweder RA (2004) Tuskegee re-examined. https://www.spiked-online.com/2004/01/08/tuskegeere-examined/. Accessed 10 Mar 2019

Srinivasan S (2011) HPV vaccine trials and sleeping watchdogs. Indian J Med Ethics 8(2):73–74. https://doi.org/10.20529/IJME.2011.031

Stark L (2012) Behind closed doors. The University of Chicago Press, Chicago

UNESCO (2005) Universal declaration on bioethics and human rights. Paris. https://unesdoc.unesco.org/ark:/48223/pf0000146180. Accessed 10 Mar 2019
World Health Organization (2000) Operational guidelines for ethics committees that review biomedical research. WHO, Geneva. https://www.who.int/tdr/publications/documents/ethics.pdf. Accessed 10 Mar 2019

World Health Organization (2011) Standards and operational guidance for ethics review of health-related research with human participants. http://apps.who.int/iris/bitstream/handle/10665/44783/9789241502948_eng.pdf;jsessionid=1955B0172CF16E5CFEC9C66CCE63B8F9?sequence=1. Accessed 10 Mar 2019

World Medical Association (1975) Declaration of Helsinki. https://www.wma.net/wp-content/uploads/2018/07/DoH-Oct1975.pdf. Accessed 10 Mar 2019
7  Publication Ethics

Deborah C. Poff and David S. Ginley
Contents

Introduction 108
Evidence of the Growth of Incidence of Violations of Ethics in Publications 109
Scope 110
Definition of Publications 110
Definition of Authorship 112
Other Appropriate Forms of Recognition 113
Other Ethical Issues with Respect to Authorship 113
Other Kinds of Ethical Violations in Publication 113
Text Recycling or Self-Plagiarism 113
Fake Review by Authors 115
Methodological Issues 115
Predatory Publishing 116
The Editor/Publisher Perspective 117
Stakeholders in Research 117
The Editor 118
Education, Collaboration, and Transparency 119
The Individual Researcher/Author 120
The University (Hospital or Research Institution or Organization) 120
Research Centers/International Collaborations 121
Education and Ethics Resources: Creating and Maintaining a Culture of Ethics 122
Approaches to Teaching Ethics 123
The Funder 123
D. C. Poff (*) Leading with Integrity, Ottawa, ON, Canada e-mail: [email protected]; [email protected] D. S. Ginley Materials and Chemistry Science and Technology, National Renewable Energy Laboratory, Golden, CO, USA e-mail: [email protected] © Springer Nature Switzerland AG 2020 R. Iphofen (ed.), Handbook of Research Ethics and Scientific Integrity, https://doi.org/10.1007/978-3-030-16759-2_61
D. C. Poff and D. S. Ginley
The Tensions Among the Stakeholders and the Role of Publication Ethics, Editors, and Publishers 124
Publication Ethics from the Editor's Perspective: What Do Editors Want and Need? 124
Editors' Awareness and Use of Available Resources 124
The Future and Working Together 124
Cross-References 126
References 126
Abstract
This chapter provides an overview of the key issues involved in publication ethics, with a particular focus on journal publishing. Publication ethics and its place on the continuum with research ethics and research integrity are explored. Particular examples of violations of publication ethics are identified and examined, and the various stakeholders who contribute to the research project and to the dissemination of its end product are analyzed. There is discussion of the various types of responsible conduct and accountability. Educational needs are explored, and the chapter concludes with a discussion of the importance of mutuality, collaboration, and joint effort in addressing the complex issues of research and publication ethics.

Keywords

Journal ethics · Research ethics · Responsible conduct in research and publication
Introduction

Publication ethics describes a set of moral issues concerning appropriate and inappropriate behavior in the dissemination of research. The landscape of publishing is constantly changing, and the rate of change has recently accelerated. The growth of digital publishing, the rapid creation of new journals, and an increasingly international landscape are significantly changing the nature of publication. Associated with these factors are increasing problems in the ethics of scholarly publishing. In particular, the proliferation of journals has put considerable stress on the reviewing system, including peer review and the roles of editorial board members, editors, and publishers. Further, given the growth of predatory journals, there is the added issue of publicly assuring the legitimacy of genuinely peer-reviewed journals.

There is also increasing pressure on researchers to publish in high-impact journals, and pressure on journals to have high impact factors; this is often a key requirement for promotion and tenure for university faculty. A journal impact factor (JIF) is intended to measure the importance of a journal: in its standard form, it is the number of citations received in a given year by articles the journal published in the previous 2 years, divided by the number of citable items published in those 2 years. The higher the impact factor, the more highly ranked the journal. The JIF thereby serves as one axis of comparison among journals in a particular subject area, which acts as a strong factor in soliciting new
7
Publication Ethics
109
articles. Having said this, a number of journals and publishers have long recognized the inappropriate use of the JIF. Callaway (2016), for example, notes that, with the exception of a very few highly cited papers, most papers are cited far less often than the journal's impact factor suggests. Consequently, many CVs and applications for tenure and promotion cite the JIF of the journals in which the candidate's articles were published rather than the citation record of the articles themselves. Despite years of critique of the limitations of impact factors, some universities and national laboratories still emphasize them in tenure and promotion deliberations. The desire for a high impact factor can lead journals to a number of practices to recruit articles, including personal contact by editors and even payment for articles. For authors, such increased competition and the perceived requirement to publish as much as possible may lead some to engage in unethical behavior to inflate their publication records. This, in turn, has led to increased attention to the various means through which authors may violate ethical publication norms. While no discipline is immune to unethical behavior in research, publication ethics, like research ethics, was initially developed within the biomedical and natural sciences. This is not surprising, given that research in those subject areas frequently poses higher risks to human beings and animals.
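The journal-level average that the JIF represents can be contrasted with article-level citation rates, which is the crux of Callaway's critique. A toy illustration in Python, with invented citation counts for a hypothetical journal (none of these numbers come from the chapter's sources):

```python
# Hypothetical citation counts for the 10 citable items a journal
# published in the two preceding years: one blockbuster paper and
# many rarely cited ones (all numbers invented for illustration).
citations = [120, 3, 2, 2, 1, 1, 1, 0, 0, 0]

# Two-year impact factor: citations received this year to those items,
# divided by the number of citable items published in the window.
jif = sum(citations) / len(citations)

# The "typical" article tells a different story: the median article
# here was cited only once, far below the journal-level average of 13.0.
median_cites = sorted(citations)[len(citations) // 2]
```

This is why citing the JIF of the journal on a CV says little about the citation record of any individual article published in it.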
Evidence of the Growth of Violations of Ethics in Publications
As noted above, there is growing evidence of an increase in violations of publication ethics. Figure 1 shows the number of documents referencing plagiarism from 1970 to 2017 (Gasparyan et al. 2017), illustrating the significant increase in the detection of plagiarism in published journals over that period. It should be noted that there is a continuing debate about whether detection always reflects an increase in the number of plagiarism cases or whether the data are confounded by the increased ability to detect plagiarism through advances in technology. A 2009 study examined a broad set of ethical areas as a function of their severity (potential consequence of the problem), frequency (rate of occurrence), and editors' confidence (in handling the issue), illustrating the ethical publication violations with which editors must deal. While the data are not entirely statistically robust, they indicate that, in the editors' perception on the whole, most of the ethical issues occur with nearly the same incidence and none stands out as statistically anomalous. Combined with Fig. 1, this indicates that the number and diversity of ethical issues are either increasing substantially over time or increasingly being detected. This may seem surprising, as the increased digital analysis of publications and digital accessibility might be thought to limit ethical issues, when in fact the inverse seems to be the case. As detection becomes more vigilant, the evidence may reveal changes over time in frequency and type (Table 1).
110
D. C. Poff and D. S. Ginley
Fig. 1 Number of Scopus-indexed items tagged with the term "plagiarism" in 1970–2017 (as of March 31, 2017) (Callaway 2016)
Scope
This chapter will discuss this changing landscape of ethics in scholarly publication and includes the acknowledgment of responsibility and actions undertaken by publishers, editors, authors, and referees. In describing violations in publication ethics, the chapter draws upon the useful resources of many professional organizations and editor/publisher organizations. One such organization is the Committee on Publication Ethics (COPE) (https://publicationethics.org), which was founded in 1997 as a voluntary membership organization with the mandate to educate and facilitate the promotion of publication ethics and which has developed numerous resources and services since that time. Others include the Council of Science Editors (CSE), which was originally founded in 1957 as the Conference of Biology Editors and changed to its current title with expanded membership in 2000, the International Committee of Medical Journal Editors (which calls itself a small working group of general medical journals), the World Association of Medical Editors (WAME) founded in 1995, the International Society of Managing and Technical Editors (ISMTE) founded in 2007, and the Open Access Scholarly Publishers Association (OASPA) founded in 2008.
Definition of Publications
Publications, broadly construed across academic and scholarly disciplines, include:
• Peer-reviewed monographs or book-length manuscripts of original research
• Peer-reviewed scholarly journal articles
• Peer-reviewed conference proceedings
• Non-peer-reviewed conference proceedings
• Peer-reviewed anthologies of academic research
• Working papers
• Preprint papers
• Textbooks
• Music compositions and scores (including juried compositions)
• Novels, poetry, and short stories (peer reviewed and not)

Table 1 Mean ratings of editors' perceptions of the severity and frequency of various ethical issues at their journals and their confidence in handling these issues (Gasparyan et al. 2017)

Issue                                        Severity^a  Confidence^b  Frequency^c  Trend^d
Redundant publication                        1.19        0.70          1.39         3.43
Plagiarism                                   0.86        0.70          0.96         3.46
Duplicate submission                         0.79        0.79          1.01         3.28
Undisclosed author conflicts of interest     0.73        0.73          0.90         3.28
Undisclosed reviewer conflicts of interest   0.69        0.71          0.94         3.08^f
Gift authorship                              0.67        0.51          1.08^e       3.17
Disputed authorship                          0.58^e      0.90^e        0.81         3.00^f
Falsified or fabricated data                 0.56        0.62          0.58         3.12^f
Reviewer misconduct                          0.56^e      0.62          0.80         3.00^f
Unethical research design or conduct         0.55        0.83^e        0.70         2.98
Undisclosed commercial involvement           0.52        0.66          0.62         3.24
Ghost authorship                             0.37        0.61          0.48         3.32
Image manipulation                           0.30        0.69          0.47         3.34
Concerns over supplements                    0.24^f      1.08^e        0.30^f       2.97^f
Concerns over advertising                    0.13^f      1.01^e        0.20^f       3.00^e
Editorial interference by journal owner      0.05        1.51^e        0.09         2.95^f

Respondents who stated that they did not know were excluded from calculations of the mean. Editors were asked to base their replies on their perceptions of the situation at their own journal rather than their perceptions of problems in the literature in general.
a Severity was graded on a 4-point scale ranging from 0, "not a problem," to 3, "a very serious problem"
b Editors' confidence in handling issues was graded on a 4-point scale ranging from 0, "not at all confident," to 3, "highly confident"
c Frequency was graded on a 4-point scale ranging from 0, "never," to 3, "very often (at least once a month)"
d Editors who stated that an issue occurred "sometimes (more than once a year)" or "very often (at least once a month)" were asked whether they thought the problem was decreasing a lot (1), decreasing slightly (2), occurring to the same extent as before (3), increasing slightly (4), or increasing a lot (5)
e Kurtosis less than −1.0, indicating a wide spread of opinions (curve flatter than normal distribution)
f Kurtosis more than +3.0, indicating a high degree of consensus (curve sharper than normal distribution). Kurtosis of 0 indicates that responses followed a normal distribution
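The kurtosis flags in the notes to Table 1 (e and f) distinguish flat, widely spread response distributions from sharply peaked, consensual ones. A minimal sketch of excess kurtosis using only the Python standard library; this uses simple population moments and is illustrative, not the exact estimator used in the underlying study:

```python
def excess_kurtosis(xs):
    """Excess kurtosis m4/m2**2 - 3, using population moments.

    0 for a normal distribution; clearly negative indicates a flatter,
    wide-spread curve; clearly positive indicates a sharper, peaked curve.
    """
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n  # variance
    m4 = sum((x - mean) ** 4 for x in xs) / n  # fourth central moment
    return m4 / m2 ** 2 - 3

# Ratings spread evenly across a 0-3 scale: flat, wide spread of opinion.
flat = excess_kurtosis([0, 1, 2, 3])            # about -1.36

# Ratings piled on one value with rare outliers: sharp consensus.
peaked = excess_kurtosis([0] + [2] * 13 + [4])  # 4.5
```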
While there is no clear consensus concerning their standing as publications, some scholars are beginning to make the case for considering blogs and other e-media as legitimate publications. The following discussion identifies the standard categories and classification of ethical wrongdoing in the publication of scholarly products. Some but not all occur in the research process leading up to the preparation of a manuscript.
Definition of Authorship
First, however, it is important to define what authorship entails. Recently, many journals have become more rigorous in requiring authors to establish their contributions to the work and knowledge of the overall publication. Authorship requires a substantive contribution to the article or monograph under consideration. The International Committee of Medical Journal Editors (ICMJE) recommends that authorship be based on the following four criteria, all of which must be met in order to legitimately claim authorship (http://www.icmje.org/recommendations/browse/roles-and-responsibilities/defining-the-role-of-authors-and-contributors.html):
• Substantial contribution to the conception or design of the work; or the acquisition, analysis, or interpretation of data for the work; AND
• Drafting the work or revising it critically for important intellectual content; AND
• Final approval of the version to be published; AND
• Agreement to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.
False claims of authorship by individuals who do not meet these standards are unethical and unwarranted. In medical fields, the ICMJE definition has been adopted. However, as COPE notes, there is no universally agreed upon set of criteria that define authorship (see COPE core principle 2 on Authorship and contributorship, https://publicationethics.org/news/core-practices). An increasing number of issues with respect to authorship is slowly resulting in a convergence of definitions and leading toward a more universal code.
From the 2018 Australian Code (https://www.nhmrc.gov.au/about-us/publications/australian-code-responsible-conduct-research-2018), the key principles are as follows: An author is an individual who:
• has made a significant intellectual or scholarly contribution to research and its output, and
• agrees to be listed as an author.
The China Association for Science and Technology (CAST) recently published guidelines for improving the self-discipline of scientists and curbing academic fraud in scientific papers (http://english.cast.org.cn). The 2017 European Code of Conduct for Research Integrity (https://ec.europa.eu/research/participants/data/ref/h2020/other/hi/h2020-ethics_code-of-conduct_en.pdf) states:
• All authors are fully responsible for the content of a publication, unless otherwise specified.
• All authors agree on the sequence of authorship, acknowledging that authorship itself is based on a significant contribution to the design of the research, relevant data collection, or the analysis or interpretation of the results.
As these codes become more homogeneous, they will greatly aid the development of international collaborations and limit problems arising from cultural and funding-agency differences.
Other Appropriate Forms of Recognition
ICMJE also notes that where persons have been involved in research but do not satisfy the four criteria, other forms of recognition may be appropriate. Those whose contributions do not justify authorship may be acknowledged individually or together as a group under a single heading (e.g., "Clinical Investigators" or "Participating Investigators"), and their contributions should be specified (e.g., "served as scientific advisors," "critically reviewed the study proposal," "collected data," "provided and cared for study patients," "participated in writing or technical editing of the manuscript").
Other Ethical Issues with Respect to Authorship
Ghost Authorship occurs when articles are attributed to someone who has not written or contributed to the authorship of an article. Such authorship is sometimes offered by funders who volunteer to ghostwrite an article for research they have funded (e.g., research on drug trials funded by a pharmaceutical company which contracts for tests of a particular experimental drug with an expert in the field).
Gift Authorship occurs where authorship is offered to persons who have not contributed to the article. These may be, for example, famous scholars (or editors of the journal where the manuscript is submitted) whose names are added in the hope that this will curry favor and increase the likelihood of acceptance for publication.
Listing of Authors Without Their Agreement occurs when individuals who have not agreed, and are often unaware, are listed as authors of a publication.
Other Kinds of Ethical Violations in Publication
Text Recycling or Self-Plagiarism
The Committee on Publication Ethics defines text recycling (https://publicationethics.org/files/WebA2928) as when:
• Sections of the text, generally excluding methods, are identical or near identical to a previous publication by the same author(s).
• The original publication is not referenced in the subsequent publication.
• There is still sufficient new material in the article to justify its publication.
COPE defines more serious forms of text recycling as when:
• There is significant overlap in the text, generally excluding methods, with sections that are identical or near identical to a previous publication by the same author(s).
• The recycled text reports previously published data, and there is insufficient new material in the article to justify its publication in light of the previous publication(s).
• The recycled text forms the major part of the discussion or conclusion in the article.
• The overlap breaches copyright.
Historically, text recycling was difficult to identify; the advent of electronic publishing has made the task easier. Clearly, publishing a paper twice is unethical. While some authors have argued that words published in an online report could be used verbatim in a subsequent paper in a reviewed journal, journals generally consider an online report to be a publication, and the duplicated content is consequently considered unacceptable. This can be a significant issue in developing countries, or indeed in any country where there is a great deal of publication pressure. Yuehong (Helen) Zhang (2008) became the first journal editor in China to introduce CrossCheck, a tool that compares text against published articles to flag plagiarism; two years later, her study showed that 31% of the 2,233 submissions checked contained significant non-original content. This study, while controversial, helped China begin to develop ethics guidelines and educational programs (Zhang 2010).
Citation Manipulation can be done both by authors and by reviewers and editors.
Citation manipulation by editors or reviewers occurs when reviewers or editors require that their own research be cited as a condition of acceptance and publication. This also includes editors requiring that their journal be cited as a condition for publication. When this is a mandatory requirement, it is generally assumed to be pure self-interest aimed at increasing impact factors.
Citation manipulation by authors occurs when authors (1) self-cite in an attempt to increase their own citation scores or (2) cite journals, editors, or famous scholars solely to increase the likelihood of having their manuscripts accepted for publication.
Fabrication and Falsification of Data occurs when a researcher either creates data or manipulates data to support the hypothesis or hypotheses of the research being conducted. This may involve findings and the analysis of findings. It may also involve the manipulation of images, such as the misrepresentation of gels and blots.
Plagiarism occurs when researchers (or students) appropriate others' published work and ideas without acknowledgment through references. This includes both direct quotations which are not indicated or credited and paraphrasing without appropriate references.
Conflict of Interest is "generally defined as a situation in which a person is in a position of trust (e.g., a principal investigator of a research grant) that requires the person to exercise judgment [when she or he] also has personal or professional interests and obligations of the sort that might interfere with the exercise of judgment" (Poff 2012). A conflict of interest may also be institutional in nature. This frequently happens when senior representatives advance the reputation of the organization for the good of the organization even when this involves a lack of integrity or suppression of unpleasant truths. This is generally identified by the term "loyal agency," since the behavior is done for the good of the institution by a senior agent of the organization. Conflicts of interest may be financial, where either the funding of a research project or direct compensation may put the principal investigator in a conflict of interest (particularly when the funder wants a particular outcome from the research project). Alternatively, the conflict may involve a conflict of commitment (including forms of institutional conflict of interest as identified above), where there is more than one source of obligation (e.g., a loyalty to an employer and a loyalty to a funder when fulfilling both loyalties conflict with one another).
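Screening tools such as CrossCheck, mentioned above, work by matching overlapping text spans against a corpus of published work. The underlying idea can be sketched with word n-gram Jaccard similarity; this toy version is not the actual CrossCheck/iThenticate algorithm, only an illustration of overlap scoring:

```python
def ngrams(text, n=3):
    """Set of word n-grams (case-folded) occurring in the text."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(a, b, n=3):
    """Jaccard similarity of n-gram sets: 0.0 = no shared phrasing,
    1.0 = identical text. A high score flags possible recycling for
    human review; it is not an automatic verdict of plagiarism."""
    ga, gb = ngrams(a, n), ngrams(b, n)
    if not ga or not gb:
        return 0.0
    return len(ga & gb) / len(ga | gb)
```

For example, `overlap_score("the quick brown fox jumps", "the quick brown cat sleeps")` returns 0.2, since the two sentences share one of five distinct trigrams.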
Fake Review by Authors
Recent discoveries of misconduct by authors who fake their own peer review have led to public embarrassment for journals and for the countries and universities in which the authors live, work, and publish. One well-known example occurred in 2017, when Springer Nature retracted 107 articles published by Chinese academics in the journal Tumor Biology. Many journals previously allowed authors to recommend reviewers in the author's specialization. The authors who participated in this misconduct supplied either nonexistent faculty names with an email address or the real name of an actual academic with a false email address. In these cases, the authors themselves wrote glowing and laudatory reviews of their own work and recommended acceptance of the manuscripts (Brainard and You 2018). Among the consequences of this scandal, many publishers and editors stopped inviting authors to recommend reviewers for their manuscripts.
Methodological Issues
Methodology, strictly speaking, may not seem directly a subject for publication ethics. However, serious errors in methodology and scientific process may result in issues of ethics in publication. For example, the claim that a researcher used too small a sample or an unrepresentative sample in a survey with human participants
may seem like a methodological criticism. However, if the results are overgeneralized in such a way that they are skewed and may be inaccurate for some constituencies within the population as a whole, this may have moral implications, particularly for policy development which draws on the findings of the study.
Predatory Publishing
In a recent COPE presentation at the World Congress on Research Integrity (June 5, 2019), predatory publishing was defined as the systematic for-profit promise and/or publication of a supposed academic product which presents itself as scholarly, legitimate, meritorious content (including in journals, monographs, books, or conference proceedings) in a deceptive or fraudulent way, without any regard to quality assurance. This definition also appears on COPE's webpage (https://publicationethics.org/files/DiscussionTopic_Predatory%20publishing_FINAL.pdf). An article in The Guardian from August 10, 2018, states that "more than 175,000 scientific articles have been provided by five of the largest 'predatory open-access publishers', including India-based Omics Publishing group and the Turkish World Academy of Science, Engineering and Technology, or Waset." In 2017, the US Federal Trade Commission won an injunction against Omics, "alleging that they published articles without standard peer review, misrepresented numerous scientists as editors, and made multiple deceptive claims towards researchers" (Case 2:16-cv-02022-GMN-VCF, Document 46, Filed 09/29/17). The case resulted in a $50.1 million judgment against Omics. The term "predatory publishers" was coined by Jeffrey Beall, who for a number of years published a list of journals which he identified as predatory (List of Predatory Publishers, https://beallist.com). There has been a fair bit of critical commentary about Beall's list, such as the lack of peer review of the list and the mistaken inclusion of non-predatory journals. But the label has generally stuck.
However, while the label has become common parlance, many have criticized the term for suggesting that the academics and researchers who publish in these journals are innocent dupes, victimized and tricked into believing that the journals are legitimate scholarly publications. This may be true of some individuals, but predatory publishers have features that should alert all potential contributors to the journals' lack of legitimacy. These include:
• A promise of a very short turnaround time for review and acceptance (sometimes within 48 hours or less than a week, a time frame not consistent with inviting and receiving peer reviews).
• A charge for publishing. While some legitimate journals charge as well, legitimate journals and publishers offer a number of services for the fee, including peer review, copy editing, printing and distribution, or online publication.
• Mass recruiting by email, frequently targeting inappropriate scholarly fields (e.g., dentistry journals inviting philosophers to submit articles, etc.).
• Titles that are very similar to long-standing, prestigious, and recognized peer-reviewed journals.
• Invitations that frequently come without a named, legitimate editor (i.e., invitations from "the editorial office").
• Editorial boards that list the names of credible academics who were never invited to serve and who are unaware that their names appear on the board list.
Individuals who choose to publish in predatory journals are sometimes from industrialized countries. Frequently, however, they are from developing countries and/or countries where English is a second language for the authors. There is some speculation that the difficulty of getting published in English-language journals because of language-skill limitations is part of the motivation for publishing in predatory journals.
The Editor/Publisher Perspective
Having introduced and reviewed the key contemporary issues in publication ethics, it is important to consider how they affect the editorial and publishing perspective. The type and prevalence of rejections of manuscripts by editors for violations of publication ethics is indicated in Table 2, which reports reasons for rejection of manuscripts submitted to the Journal of Pharmacology and Pharmacotherapeutics (JPP), the Journal of Young Pharmacist (JYP), and Free Radicals and Antioxidants (FRA) between January 2010 and December 2014. The total number of submissions analyzed was 2,575; a total of 301 (16.96%) manuscripts contained some degree of plagiarism.
So, what do editors do when faced with suspected or alleged violations of publication ethics norms? One approach to answering the question is first to individuate the roles and responsibilities of the different stakeholders in a case of publication wrongdoing. For simplicity's sake, assume that the research is the product of a national or federal peer-reviewed research grant awarded to a professor at an accredited university. In many countries, such funding requires that the researcher and the institution which administers the grant follow the ethical guidelines, frameworks, laws, and regulations that govern the granting of such awards.
Stakeholders in Research The stakeholders in research may include the principal investigator of the research (i.e., the PI); other co-researchers; students – both graduate and undergraduate who may be working on the research project for credit or money; and other research staff, including postdoctoral fellows or staff/administrative support positions.
Table 2 Manuscripts rejected by cause and percentages (Zhang 2010)

Reasons for rejection                             JPP    JYP   FRA^a   Total   Percentage
Multiple submission                                28      5     2       35       1.97
Duplicate submission (resubmission)                27     10     –       37       2.08
Ethics committee permission not produced            5      –     –        5       0.28
Incomplete submission                               1      –     –        1       0.06
Not in scope                                      423     24    39      486      27.38
Not prepared according to journal instructions     72     16     8       96       5.41
Revised article not submitted                      84     80    18      182      10.25
Plagiarism                                        177     84    40      301      16.96
Rejected as recommended by referees/editors       241    210     –      451      25.41
Suggested to submit to other journals               4     20    13       37       2.08
Withdrawn by authors                              108     25    11      144       8.11
Total                                           1,170    474   131    1,775     100

a January 2011 to December 2013. JPP = Journal of Pharmacology and Pharmacotherapeutics; JYP = Journal of Young Pharmacist; FRA = Free Radicals and Antioxidants
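The percentage column in Table 2 is each cause's share of the 1,775 rejected manuscripts (rather than of the 2,575 submissions analyzed). A quick arithmetic check of three rows:

```python
# Row totals for three rejection causes, taken from Table 2.
totals = {
    "Plagiarism": 301,
    "Not in scope": 486,
    "Rejected as recommended by referees/editors": 451,
}
rejected = 1775  # grand total of rejected manuscripts

# Share of all rejections, as a percentage rounded to two decimals.
shares = {cause: round(100 * n / rejected, 2) for cause, n in totals.items()}
# {'Plagiarism': 16.96, 'Not in scope': 27.38,
#  'Rejected as recommended by referees/editors': 25.41}
```

These values match the Percentage column, confirming that the 16.96% plagiarism figure is relative to rejected manuscripts.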
As well, the university is involved through the administration and financial oversight of the grant and through governing rules and regulations concerning the responsible conduct of research. This includes established protocols, policies, and procedures which must be followed in all investigations of allegations of wrongdoing in the conduct of research or in the publication of that research. The funder is also a stakeholder who must ensure the responsible conduct of research for all grants that are peer reviewed and awarded. This includes established procedures for dealing with universities and researchers when there are allegations of misconduct in the research project. Finally, there is the editor of learned journals and the publisher of such journals. Since publication ethics is the focus of this chapter, the following SWOT-type consideration of the strengths and limitations of the editor's role is explored here.
The Editor
In this SWOT analysis of the editor's role in fostering responsible publishing of research, the strengths, weaknesses, opportunities, and threats of the role of editing in the scholarly publication realm are identified. In articulating these elements, it should be noted that roles and resources vary markedly across disciplines.
Strengths
• Editorial independence
• Credibility in the research community
• Expertise in research/publication integrity issues
• Ability to implement editorial policies to strengthen publication integrity
• Responsibility for the integrity of the published record of research in their journals
Weaknesses
• Limited mandate or resources for legal action
• No authority to investigate or discipline misconduct
• No formal relationship with the university employer or funder
• Possible damage to reputation
• Lack of knowledge of available resources to help
Opportunities
• Editors are well positioned to detect misconduct
• Well positioned to explore collaborative relationships with universities
• Increasing access to various forms of detection technology
• More models, through publishers and organizations, for developing editorial integrity policies and practices
• More educational resources available through professional bodies and organizations, and greater transparency of policies and practices on the web
Threats
• Lack of interest by universities in collaborating with editors/publishers
• Lack of harmonization of standards across countries
• Limited training opportunities for new editors
With all of these strengths and limitations, it is important to remember that editors sit, in general, at the end of the production line of research: their task is the process and production of an end product. When that product is excellent and the research has been conducted with methodological rigor, responsible conduct, and ethical integrity, peer review and good editorial practice result in the viable and critical dissemination of new knowledge. When errors happen, whether through scientific incompetence or ethical neglect, the editor soon realizes the limitations of the role in the mechanisms of knowledge production. Among the weaknesses and threats identified in the SWOT analysis were the lack of authority to investigate or discipline and the lack of a formal relationship with the funders of research.
The editor is not the employer of the researcher and not the agent of responsibility for the grant holder. Given this, a number of considerations follow. All of them first and foremost speak to education and governance from accountable entities. The last considerations speak to transparency and collaboration between funders, universities, and editors.
Education, Collaboration, and Transparency
First, a note about collaboration. This chapter has identified a number of stakeholders, including those in key domains with distinct responsibilities and
accountabilities for research and publication ethics. Because of this, the best solution requires a commitment to mutual respect and collaboration among all of the sectors impacted by the research process and product. This final section of the chapter will identify each potential partner in the collaboration.
The Individual Researcher/Author
Individual scientists, social scientists, and humanists bear the primary responsibility for the ethical and responsible conduct of their research and for the ethical preparation and submission of the products of research for consideration for publication. In universities and research hospitals in particular, researchers are frequently not only individually accountable for the responsible conduct of research and contractually bound by the terms and conditions of their respective grants; they are also responsible for educating and training the next generation of researchers through teaching, mentorship, and oversight. While there may be a current generation of researchers who have not formally been instructed in research ethics and the responsible conduct of research, in some countries, such as Canada and the United States, this is certainly not the case for upcoming scientists and social scientists. Resources to support training and education in research ethics are increasingly available both from funding agencies and universities, including online tutorials. Many methodology courses for undergraduates and graduate students include modules on research ethics.
The University (Hospital or Research Institution or Organization)
In many countries (e.g., the United States and Canada), federal governments legally vest faculty research grants in the university rather than in the individual researcher. This does not absolve faculty members from following the terms of their grants, but it imposes further conditions on institutions for the responsible conduct of research. Failure in oversight can result in various disciplinary measures by government, including but not limited to the cancellation or rescinding of the individual research grant. In rare cases, where institutional oversight is deemed by government to be seriously wanting, governments can impose external audits of all research grants, and in the United States and Canada there are cases where federal funding has been withdrawn. Universities are also responsible for maintaining standards of integrity in student performance as well as in faculty performance. While some of the relevant factors are outside the scope of this chapter (e.g., nondiscrimination, a respectful workforce, etc.), all of the violations of research ethics and publication ethics are germane to this discussion. When allegations of wrongdoing occur, universities are required to investigate using established rules and procedures. Such investigations generally embrace a number of values and principles, including principles of natural justice, procedural
7
Publication Ethics
121
justice, and confidentiality for the respondent to such allegations. Built into this notion of confidentiality are a number of additional values, such as innocent until proven guilty and an understanding of the reputational risk of allegations to the reputation of the individual and perhaps the institution as well. The strengthening of the responsibility framework by the research community – i.e., the community stepping up to be responsible and demonstrating they have the structure and framework to do so – may obviate the need for additional governmental controls and is consistent with the university’s historical claim to autonomous governance and academic freedom. Underlying this definition of standards is the need for the development of educational programs to help foster those standards. Herkert has assessed the state of engineering ethics education in the US circa 2000 where 80% of student were not required to have any ethics education.(Herkert 2000) Since then, the clear evolution of such programs provides the foundation for a strong definition and application of responsible publishing at all levels. An example of this approach is, for example, the recent report from the National Academies of the United States on “Steps to Encourage Responsible Research Practices”(National Academies of Sciences, Engineering, and Medicine 2017). Some of the key topic areas in the report are listed below. An older report was the foundation for the current report (National Academy of Sciences et al. 1992): • • • •
Acknowledging responsibility and taking action Integrating ethics into the education of scientists Considering guidelines for responsible research practices A framework of subjects to consider in encouraging responsible research practices • Discouraging questionable research practices Even though there is an increasing emphasis on the expansion of the research institution’s role in fostering responsible research practices, there is considerable argument for the position that this responsibility should be left to the individual faculty member or employee. This argument is predicated on concern with the traditional university values of autonomous governance and academic freedom.
Research Centers/International Collaborations

Increasingly there are incentives to move researchers from a model of single-investigator grants to multidisciplinary and multinational research collaborations, such as the EU Horizon 2020 projects (https://ec.europa.eu/info/business-economy-euro/economic-and-fiscal-policy-coordination/eu-economic-governance-monitoring-prevention-correction/european-semester/framework/europe-2020-strategy_en) and the US Department of Energy Energy Frontier Research Centers (https://science.energy.gov/bes/efrc/). Many of the same issues listed above occur in these consortia as well. Issues of authorship, research validation, and data storage and manipulation
122
D. C. Poff and D. S. Ginley
are key in multi-institutional centers across a number of diversity axes, including educational level (undergraduate to full professor), type of institution (academic, national laboratory, industry), and even international diversity. These elements of diversity often come with different policies with respect to data, publications, and even cultural norms, which must be reconciled to achieve fully collaborative efforts. Diversity issues include differences in disciplinary and interdisciplinary fields and the norms within those fields. As discussed above, this diversity of policy can be minimized by adopting a clear standard. Organizations such as COPE do not set standards, although all of their resources support the establishment of such standards. Research centers should also develop robust management strategies for the responsible conduct and dissemination of research. While there is a growing degree of coherence in definitions of authorship, substantial differences remain, especially across disciplines, and these differences create impediments in diverse collaborations, particularly international ones.
Education and Ethics Resources: Creating and Maintaining a Culture of Ethics

Externally, many countries have imposed ethics training as a condition of receiving federal grant monies. In the United States, for example, the National Science Foundation has made research ethics training mandatory for all funded researchers, including principal investigators and students. Recently, the overall report of the National Academies of the United States included a chapter specifically on education for the responsible conduct of research, covering both ethics and research integrity (National Academies of Sciences, Engineering, and Medicine 2017). They identify the overall objectives of an educational program as follows:

• Ensuring and improving the integrity of research; promoting good behavior and quality research conduct
• Preventing bad behavior; decreasing research misconduct
• Making trainees aware of the expectations about research conduct within the research enterprise and as articulated in the various federal, state, institutional, and professional laws, policies, and practices that exist
• Making practitioners and trainees aware of the uncertainty of some norms and standards in research practices due to such factors as changes in the technology used in research and the globalization of research
• Promoting and achieving public trust in science and engineering
• Managing the impact of research on the world beyond the lab, including society and the environment (Poff 2012)
As well as these initiatives, many universities and disciplines in both the sciences and the social sciences have introduced research ethics as modules within methodology curricula (e.g., the University of Sheffield, the University of Edinburgh, etc.). Along with the growth of topics like business and professional ethics within certain disciplines, a growing number of institutions have identified ethics as a learning outcome for all students. There is in fact a global realization of the importance of ethics education, and China has begun to train teachers specifically to teach ethics courses in Chinese universities (Murphy 2016).
Approaches to Teaching Ethics

Various approaches can be adopted in teaching research ethics. Frequently, there is no single correct answer to an ethical dilemma, so a variety of approaches may be appropriate in teaching ethics. Increasingly, online resources can provide a more unified approach, essentially leveling the ethics playing field. Scenario-based training with relevance to the student's discipline and background appears to be one of the most effective training approaches (Barrista 2017; Mietzner and Reger 2005; Ametrano 2010; Sternberg), and it can be supplemented by other resources, such as the materials available on the website of the Committee on Publication Ethics. Education should cover many aspects of both research and publication in the context of standards of good practice, types of misconduct, questionable research practices, and broader types of ethical violations that can affect scientific ethics, such as workplace ethics. Some of the specific topics that should be addressed, as derived from a recent study (National Academies of Sciences, Engineering, and Medicine 2017), include:

• Application of the principles of honesty, professional skepticism, valid error correction, and approaches to verification of results
• Principles of data handling, selection, archiving, and analysis, increasingly with respect to open data sharing
• Understanding of the principles of justice and informed consent for human participant research
• Understanding of animal rights and the protection of animals in research
• Publication practices, including the prepublication of results, premature conclusions, fragmentary publications, etc.
• Well-defined authorship practices and responsibilities for clear attribution of credit
• Well-defined training and mentoring practices, including the responsibilities of supervisors
The Funder Particularly with respect to government funding of university research, funders have an accountability reporting relationship directly to the government and indirectly to taxpaying citizens for the responsible management of research grants. This includes mechanisms to guarantee responsible fiscal and administrative management of grants as well as mechanisms to ensure the responsible conduct of research.
The Tensions Among the Stakeholders and the Role of Publication Ethics, Editors, and Publishers

As with stakeholder theory generally, stakeholders represent different constituencies and, in general, bring distinct and different issues to the situation. In the case of research ethics, it is fairly evident that university values concerning autonomy in governance and academic freedom may create tensions with government granting agencies, which wish to ensure government standards of accountability in the conduct and management of university research. With the addition of the roles and responsibilities of editors and publishers, the picture becomes more complex. Since publication ethics is the topic of this chapter, the remainder will focus on it.
Publication Ethics from the Editor's Perspective: What Do Editors Want and Need?

Education for new editors, as well as ongoing professional development, is at best an ad hoc business. Journals range in size, financial resources, and educational opportunities. As such, many editors receive little educational assistance in the role of editorship when they become new editors. In consequence, many publishing organizations and volunteer organizations, like COPE, have filled the gap, offering annual workshops, seminars, guidelines, and other resources to assist their members. This was the motivation for the founding of COPE, which continues to develop, revise, and extend its learning services and opportunities for professional development for editors and publishers. Given that education is ad hoc for editors, it is important to reflect on the level of awareness of available resources and the extent to which such resources are utilized.
Editors' Awareness and Use of Available Resources

Table 3 offers some information on editors' awareness of publication resources and their use of the same. While some editors clearly know and use some of the available aids to inform and assist in dealing with ethical issues in publishing, there is clearly room for improvement.
Table 3  Editors' awareness and use of resources (Gasparyan et al. 2017)

Resource(a)                        | No. of respondents(b) | Not aware | Aware but not used | Used
Other journals' instructions       | 167 (73)              | 18 (12)   | 38 (36)            | 44 (52)
Blackwell best practice guidelines | 212 (98)              | 36 (31)   | 32 (32)            | 32 (37)
Blackwell helpdesk                 | 212 (98)              | 59 (54)   | 35 (39)            | 6 (7)
COPE                               | 216 (102)             | 64 (51)   | 19 (20)            | 17 (29)
ICMJE uniform requirements         | 183 (75)              | 72 (55)   | 15 (21)            | 13 (24)
GPP                                | 211 (98)              | 79 (67)   | 16 (24)            | 5 (9)
EMWA guidelines                    | 209 (97)              | 83 (71)   | 14 (22)            | 3 (7)
CSE                                | 207 (95)              | 83 (80)   | 13 (13)            | 4 (7)
WAME                               | 212 (97)              | 85 (79)   | 10 (11)            | 5 (10)
EASE                               | 207 (96)              | 87 (84)   | 10 (14)            | 3 (2)

"Not aware," "Aware but not used," and "Used" are percentages of respondents. COPE = Committee on Publication Ethics; CSE = Council of Science Editors; EASE = European Association of Science Editors; EMWA = European Medical Writers Association; GPP = Good Publication Practice for pharmaceutical companies; ICMJE = International Committee of Medical Journal Editors; WAME = World Association of Medical Editors
(a) Abbreviations and acronyms were used on the questionnaires, since we considered that if editors were genuinely aware of an organization or document then they would be familiar with these
(b) Figures in brackets indicate responses from healthcare editors

The Future and Working Together

The publication and dissemination of research through various forms of publication is critical to the successful careers of scholars, to the reputation of universities, to the credibility of governments in creating advances in the sciences and social sciences, and to the promotion of the common good through the
advancement of knowledge. Despite this, editors and publishers are outside the accountability frameworks of both universities and funding agencies. Editors are the curators of the scholarly record, and they do have the authority to retract articles and to publish expressions of concern when they believe publication ethics may have been violated. Having said this, when editors suspect a serious violation of publication ethics, or when external allegations of such violations are made by others, editors generally contact the author(s) and, depending on the seriousness of the matter and the response from the author(s), also contact the institution and request an investigation. As editors frequently note, what often follows is silence. The institution follows its procedures, which may necessarily be protracted and which protect the confidentiality of the respondent(s) to the allegation. Additionally, the funder may launch its own investigation, which is also conducted in isolation from the editor. This is understandable but very unhelpful to the editor, who may be waiting for a recommendation in order to determine the journal's course of action. Clearly, stakeholders have different constituencies, obligations, and responsibilities, but the best resolution to the challenging and contested issues surrounding violations of publication ethics must be cross-sector and collaborative in nature. The future success of publication ethics lies in joint responsibility and accountability.
Cross-References ▶ Protecting Participants in Clinical Trials Through Research Ethics Review
References

Ametrano IM (2010) Chapter 1 – learning ethical decision making: reflections on the process. In: The Scholarship of Teaching and Learning at EMU, vol 3, Article 5. http://commons.emich.edu/sotl/vol3/iss1/5
Barrista A (2017) An activity theory perspective on how scenario-based simulations support learning: a descriptive analysis. Adv Simul 2(3):1–14
Brainard J, You J (2018) What a massive database of retracted papers reveals about science publishing's death penalty. Science. https://doi.org/10.1126/science.aav8384
Callaway E (2016) Beat it, impact factor! Publishing elite turns against impact factors. Nature 535(7611):210
Gasparyan AY et al (2017) Plagiarism in the context of education and evolving detection strategies. J Korean Med Sci 32(8):1220–1227
Herkert JR (2000) Engineering ethics education in the USA: content, pedagogy and curriculum. Eur J Eng Educ 25(4):303–313
Mietzner D, Reger G (2005) Advantages and disadvantages of scenario approaches for strategic foresight. Int J Technol Intell Plan 1(2):220
Murphy MJ (2016) Ethics education in China. Teach Ethics 16(2):233–241
National Academies of Sciences, Engineering, and Medicine (2017) Fostering integrity in research. The National Academies Press, Washington, DC
National Academy of Sciences, National Academy of Engineering, Institute of Medicine (eds) (1992) Responsible science, volume I: ensuring the integrity of the research process. The National Academies Press, Washington, DC
Parasuraman S, Raveendran R, Mueen Ahmed KK (2015) Violation of publication ethics in manuscripts: analysis and perspectives. J Pharmacol Pharmacother 6(2):94–97
Poff D (2012) Research funding and academic freedom. In: Chadwick R (ed) Encyclopedia of applied ethics, vol 3. Elsevier, Amsterdam, pp 797–804
Sternberg RJ. Developing ethical reasoning and/or ethical decision making. Cornell University. https://www.ideaedu.org/Resources-Events/Teaching-Learning-Resources/Developing-ethical-reasoning-and-or-ethical-decision-making
Wager E et al (2009) Science journal editors' views on publication ethics: results of an international survey. J Med Ethics 35(6):348–353
Zhang Y (2010) Chinese journal finds 31% of submissions plagiarized. Nature 467:153
8
Peer Review in Scholarly Journal Publishing Jason Roberts, Kristen Overstreet, Rachel Hendrick, and Jennifer Mahar
Contents
Introduction ............ 128
Background ............ 131
  Types and Stages of Peer Review ............ 131
  Importance of Peer Review ............ 131
Key Issues and Current Debate ............ 133
  Mistakes and Misconduct in Peer Review ............ 133
  Conflicts of Interest and Bias ............ 143
Anticipated Outcomes ............ 151
Proposed Solutions ............ 153
  Education ............ 153
  Transparency ............ 153
Conclusion ............ 154
References ............ 154
J. Roberts (*) Ottawa, ON, Canada e-mail: [email protected] K. Overstreet Arvada, CO, USA e-mail: [email protected] R. Hendrick Glasgow, UK e-mail: [email protected] J. Mahar Pembroke, MA, USA e-mail: [email protected] © Springer Nature Switzerland AG 2020 R. Iphofen (ed.), Handbook of Research Ethics and Scientific Integrity, https://doi.org/10.1007/978-3-030-16759-2_5
127
128
J. Roberts et al.
Abstract
To achieve the desired outcome of quality and rigor, the process and management of peer review must be conducted ethically, requiring each stakeholder to perform their actions according to established best practices. This chapter discusses the importance of peer review in scientific and academic journal publishing and examines the ethical issues inherent in key areas of the process. It also offers a discourse on the implications of unethical practice for both the reputation of peer review processes themselves and the inviolability of the corpus of literature they serve to guard. The chapter begins by briefly introducing readers to the purpose of peer review, outlining its different stages, identifying the key groups involved in the process, and explaining why it is essential to the integrity of all journals that the process of peer review is performed ethically. It continues by looking specifically at various ethical issues that can arise during peer review, as opposed to issues associated with publication ethics. While issues in publication ethics have received considerable attention, our understanding of good (and bad) ethical practices during the undertaking of peer review is less comprehensive and can be dependent on culture. These issues raise questions about the wider implications of unethical peer review. To fail at any point in the process, by introducing bias, competing interests, misconceptions, negligence, or ignorance, could harm future research outcomes. Various case studies involving failures in the peer review process that led to questionable data being published will be drawn upon. These illustrate the importance of peer review being conducted ethically and the necessity of strong management and leadership from journals and their editorial offices in order to protect the integrity of the journal, the academic literature, and, ultimately, in the case of biomedical publishing, public health.
The chapter concludes with a broader discussion of ways in which peer review in journal publishing can be conducted in an ethical fashion.

Keywords
Academic literature · Ethics · Integrity · Journal publishing · Peer review stakeholders · Transparency
Introduction

Considerable effort has been expended contemplating how peer review can be systematically improved (Rennie and Flanagin 2018). Critically, however, to achieve the desired outcome of quality and rigor, both the process and management of peer review must be conducted ethically, requiring each stakeholder (e.g., editor, reviewer, specialist reviewers such as statistical consultants) to perform their actions according to time-worn conventions. Unfortunately, these conventions are not as clearly established as once thought and are certainly not understood by all stakeholders. Sometimes they are comprehended but clearly ignored when in certain situations it is convenient to do so (e.g., commercial interests, society politics, expediency in a race to post a decision quickly), and ethical issues result.
8
Peer Review in Scholarly Journal Publishing
129
Peer review needs to be both fair and full. Fair means that reviewers and editors need to consider their own biases and be assured that they will not affect the process or outcome. If they cannot objectively consider their biases, or feel their biases will affect the peer review process, then they must recuse themselves. Frequently they do not (Bero 2017). Fair also means that editors and reviewers evaluate papers on their quality and methodology, not just the results or whether or not they "liked" what the researchers said (Button et al. 2016). Ideally, a journal will have helped them in this task by defining what "quality" means to that publication. If there are commercial interests involved, such as a randomized controlled trial reported by researchers receiving funding from a pharmaceutical company, everyone needs to be clear on what these potential conflicts are and what they mean if the paper is published. Full means that the peer review process is conducted thoroughly. Uniform application of high standards should be expected, even demanded, though as humans conduct peer review (at the time of writing, at least!), the definition of high standards is not universally agreed upon, and a similar and consistent level of effort across and within journals and disciplines is simply unattainable. Nevertheless, editors and reviewers should at least strive for the application of high standards. The peer review process should also be transparent. Transparency practices vary across disciplines (Lyon 2016), but one result of transparent peer review processes is a lessening of opportunities for unethical practices, both intended and unintended, to emerge. To fail at any point in the process, whether by introducing bias or competing interests, by barely evaluating a paper and writing a skimpy review, or through outright management negligence within the editorial office, means future research outcomes could be harmed if that research is published with all its flaws unchecked. These all represent ethical issues.
Simply put, the intent of peer review is to offer validation so readers can trust that any manuscript they read in a peer-reviewed journal has been appropriately vetted. This is, of course, why so-called predatory journals are so concerning – because of the apparent absence of any obvious scrutinizing of manuscripts (Shamseer et al. 2017). However, defining precisely what peer review is can be a challenge. We can broadly imagine what it entails – a formal mechanism for researchers to have their work evaluated by their peers ahead of publication – but its specific conduct, from who gets to review and who selects those peer reviewers, to how transparent that process is to the end user (the reader), to the standards that are to be maintained, is harder to determine. Significant variance in the application of peer review exists across journals in particular, with the quality of the publication, the ability of the editors inviting reviewers to secure talented evaluators, and the degree of collective effort invested by all stakeholders (editors, reviewers, editorial office staff, maybe the publisher) all shaping the depth and breadth of the manuscript evaluation process. As any editorial office in receipt of a threadbare review will attest, there is also a prevalent lack of understanding among peer reviewers themselves (likely derived from the absence of definitive modes of conduct) of how to assess manuscripts. Papers that successfully complete peer review are accepted for publication and become part of the published record for a field of research, to be called upon for future reference. It is a stamp of approval to pass peer review. However, not only does peer review act as a gatekeeper to the published scientific and academic record,
it also serves to burnish manuscripts and help authors ensure their papers realize their full potential (Etkin et al. 2017). Those threadbare reviews, incidentally, tend to focus only on the gatekeeper function. Journals differ in what they request reviewers to evaluate or, as part of that polishing process, to suggest as areas for improvement. That said, the essential appraisal criteria in STEM journals are usually items such as the veracity of the results, the rigor of the application of the scientific method, the ability of the authors to contextualize their research, the quality of reporting standards, and the degree to which the study advances understanding of the topic. Similarly, in the humanities, reviewers will typically be asked to report upon how original the research is, the persuasiveness of the argument, and its contribution to the field (Mudditt and Wulf 2016). However, and this is what makes defining the nature and purpose of peer review so difficult, there are no definitive standards for what should act as the baseline of a complete assessment of a manuscript. Furthermore, there are no universally applied core competencies for reviewers or editors, so there are no obvious standards that peer reviewers must meet (Moher et al. 2017). This is potentially troublesome: although it is fairly obvious what a poor review looks like, it is not so clear what elements constitute a strong one. In qualitative research, peer reviewers are looking for credibility, transferability, dependability, and confirmability to establish the rigor of a study (Thomas and Magilvy 2011), and editors are looking for reviewers who can comment on those criteria. For quantitative research, at least, there is a degree to which you can apply the common (or even essential) criteria of validity and reliability dispassionately. For instance, was the correct statistical technique applied in the analysis? Does the sample size support the authors' conclusions?
But even then, the application of peer review diverges wildly in the weighting of importance given to those criteria by individual stakeholders and the journals themselves (Chauvin et al. 2015). Some clinical medical journals, for example, are sticklers for comprehensive reporting, believing that not only does it enhance reproducibility, it also facilitates detection of spin and bias in data. Other journals, alternatively, and frankly unfathomably, simply pay scant attention to those issues for whatever reason. Where peer review truly becomes jumbled is in the interpretation of where the line is drawn between originality and confirmation studies, particularly if the results presented are subtle variations – perhaps by geography or study population – of previously published research. Authors, as a consequence, may find themselves shut out of publication opportunities in certain journals, often based on the arbitrary application of unsupported standards (which itself may be an ethical issue, one of exclusion constructed upon degrees of privilege). Perhaps as a result of the indeterminate nature of journal peer review, the issue of ethics within peer review also is frequently unclear. As already mentioned, there are no consistent and clear standards accepted across the board by journals and/or disciplines. Furthermore, there is no qualification to be obtained or test to pass to become a peer reviewer, though efforts are certainly moving in that direction, such as the Publons Academy and ACS Reviewer Lab from the American Chemical Society. Though carried out by subject experts, essentially "amateurs" typically conduct peer review: reviewers and editorial board members are normally full-time researchers, often volunteering their time to evaluate manuscripts, having received little to no training on how to rigorously deconstruct a paper (Preston 2017; Shattell et al. 2010). There are
implicit assumptions that because peer reviewers typically also write up research for publication, they therefore know how to critically appraise papers. Buried in that assumption is an expectation that editors and reviewers will know not only how to conduct themselves ethically but will also have a secure enough grounding in publication ethics to identify when there are ethical issues in a paper or in how a study was conducted. Add to this the notion that peer review is predicated on trust, only for that trust repeatedly to be exposed as misplaced (i.e., stories of fraudulent research results, plagiarism/self-plagiarism, author-generated fake peer reviewers), and the assumption that science is able to consistently self-regulate scrupulously and ethically through peer review does seem somewhat insecure (Thomas 2018). This chapter examines several ethical issues inherent in key areas of the peer review process. Although it does not dwell upon, or define, publication ethics per se, it does explore the ethical aspects of peer review as it applies to several key stakeholders in the publication process. It also offers a discourse on the implications of unethical practice for both the reputation of peer review processes themselves and the inviolability of the corpus of literature they serve to guard.
Background

Types and Stages of Peer Review

Multiple types of peer review processes are currently employed in scholarly journal publishing, from the traditional blinded processes to transparent preprints and postpublication collaborative review (see Table 1). Each process includes several stages, traditionally starting with author submission and ending with publication of the accepted article (see Fig. 1). Each process, stage, and role (i.e., authors, reviewers, and editors) serves to ensure the quality of the published work and thus protect the scientific record. There is a current initiative toward greater transparency in peer review, making the process less opaque and providing all stakeholders with access to the status of a manuscript, as it moves through the process, and to the reviewers' and editors' comments by publishing them with the accepted article. Transparency also includes making the underlying data accessible to all by archiving data sets in public repositories to allow for reproduction and verification, registering clinical trials before enrolling the first participant (http://www.icmje.org/recommendations/browse/publishing-and-editorial-issues/clinical-trial-registration.html), and preregistering research plans before collecting data (https://cos.io/prereg/).
Importance of Peer Review

When done properly, peer review enables authors to improve their manuscripts so that they adequately communicate their findings and, with the publication of their work, advance scientific understanding (Sense About Science 2012). Without peer review, there is no validation or refinement process. Until artificial intelligence catches up with, and then exceeds, human competency, there will be a need for an author's peers to vet manuscripts. Such is the centrality of peer review to the research publication process that its current state is continually under attack for its failings, namely, poor application, inconsistency, and, crucially for this chapter, a lack of freedom from bias. Calls are growing for greater transparency and for a democratization of the process, to avoid vetting by a closed group of "privileged" elites: tried, tested, and no doubt worn-out "old hands," learned society insiders, and communities that reflect the geographic biases of a journal's editorial board.

Underpinning every aspect of peer review is accountability. Each stakeholder involved in the process must be held accountable for his or her actions. The implications are no less than the complete undermining of the faith that research published in journals is valid, honest, and contributing, however incrementally, to our understanding of any given subject matter. To usefully summarize this, what follows are the key ethical issues and current debates in peer review, the roles and expected responsibilities of each stakeholder in the process, and a small selection of illustrative cases.

Table 1 Types of peer review processes

Traditional:
- Single-blind (Snodgrass 2006): Reviewers are provided with the authors' identities, but the reviewers' identities are kept confidential. Editors see all information; readers are not aware of the reviewers' identities for published articles.
- Double-blind (Snodgrass 2006): The authors' and reviewers' identities are kept confidential from each other. Editors see all information; readers are not aware of the reviewers' identities for published articles.
- Triple-blind (Pinholder 2016): The authors' and reviewers' identities are blinded from each other and from the editors. Editorial office staff facilitate the peer review process and are aware of all identities.

Open/transparent (Ross-Hellauer 2017): Authors and reviewers are aware of each other's identities. Editors see all information; the reviewers' names and/or comments may or may not be published with the article.

Presubmission/preprints (ArXiv 1991): Preprint sites are vehicles for the fast dissemination of information. Posted content is published immediately, prior to peer review, and is open to comments from readers; authors may then revise and repost. If subsequently submitting to a journal for peer review and publication, authors must be mindful of whether the journal accepts submissions previously posted on a preprint server. References to preprints are commonly accepted on NIH and other grant applications. Ethical issues such as patient confidentiality, patent issues, and conflicts of interest need to be carefully analyzed by the reader.

Post-publication (Enago 2018; F1000 2000; PubPeer 2012): Content is published online as submitted. Reviewers may be invited by the journal to comment openly, and/or readers provide comments; the manuscript may be removed for the revision period and then reposted/republished. Often seen as an extension of traditional peer review: F1000 (Faculty of 1000) and PubPeer are examples where manuscripts that have undergone traditional peer review are posted and made available for comment, extending the conversation about the manuscript indefinitely.

Fig. 1 The most commonly deployed peer review workflow models. (The diagram shows the author submitting a manuscript, optionally via a preprint server; the journal performing open, single-blind, double-blind, or triple-blind peer review, with the editor, or an editorial board member picked by the editor, assigning reviewers; the decision being sent to the author, who amends the paper; the article being accepted and published; and, finally, post-publication review.)
Key Issues and Current Debate

Ethical performance of the various roles involved in the peer review process is critical to the integrity of the scholarly record. Expertise, ethical behavior, and the ability to identify what is unethical are core components of each role. In this section, several common ethical issues affecting each of the peer review stakeholders are explored, together with the practical steps each stakeholder needs to follow, thus establishing conventions by role.
Mistakes and Misconduct in Peer Review

Ethical peer review and publication require each stakeholder to be aware of the potential ethical issues, both in order to avoid committing them and to identify those committed by others so that they can be investigated and rectified. Best practice requires diligence in creating ethical policies that raise awareness of the issues and set out a procedure for managing breaches. Such policies address the intentionally committed issues of fabrication and falsification of data, manipulation of images, plagiarism, and salami slicing, as well as the unintentional issues that arise from a lack of transparency or rigor in the peer review process.
Stakeholders and Their Roles in Ethical Peer Review

The key stakeholders in the peer review process are the authors, the reviewers (including specialists such as statistical and methodological reviewers), and the editors, each of whom has an important role to play in ensuring the quality and rigor of the peer review process and of the final published product. To play that role well, each stakeholder must strictly adhere to established ethical principles and practices and promptly report any mistakes or misconduct committed by others. Ethical peer review, therefore, relies on a conscious interchange among the various stakeholders, each properly playing their role in the process.

Reviewers

Reviewers are tasked with checking papers for evidence of rigor and validity and with detecting mistakes or malpractice that may have been committed during the research study or the writing of the paper. In relation to the research, they may identify problems with the use of human subjects (confidentiality, consent, potential harm) or with the methodologies used and results presented. Regarding malpractice, reviewers may, for example, be able to determine whether a paper has been plagiarized, whether salami slicing has occurred, whether there has been duplicate submission, or whether there are issues with authorship. Peer reviewers may also be aware of undisclosed conflicts of interest (even in blinded journals, reviewers can tell who wrote a paper almost half the time (van Rooyen et al. 1998)). Reviewers may also be adept at detecting bias in the writing, especially if they have received training (Schroter 2018).

Reviewers must also be ethically self-aware and understand how their performance has ethical implications. When a reviewer agrees to review a manuscript, they must have sufficient time available, and the necessary expertise, to submit a useful review by the requested deadline (COPE Council 2017b). Reviewers must commit the time needed to perform a thorough review.
When reviewers "phone it in" – providing two-sentence reviews such as "This is a good manuscript. It should be published" or "This manuscript is awful. It should be rejected" – the editor does not have the necessary information to justify a decision on the manuscript. Additionally, the authors do not receive constructive comments that would help them improve the manuscript during the revision process. The reviewer who submits this type of review has wasted the editor's and the authors' time as they waited for a review that proves to be useless. If the editor is pressured to make a decision because of the elapsed time, or fails to find another equally knowledgeable expert to perform a review, the perfunctory review could be the sole consideration behind the editor's decision and the only gatekeeping applied to whether this work is, or is not, added to the literature. On the other hand, if the editor is able to secure another reviewer following receipt of a useless review but has to wait for those additional comments, the delay may hold up the author's further research, perhaps allowing other researchers to publish rival research first or to apply for a patent ahead of the paper's author, perhaps delaying a lifesaving treatment from reaching clinicians, or perhaps resulting in a lost opportunity to have a significant effect on other research grants or manuscripts being submitted for consideration.

Opinions vary on what constitutes the necessary expertise for peer reviewers, but generally they require, at a minimum, education and experience working in the discipline, with a successful publication record and experience critiquing the work of colleagues an important addition. Expertise and ethical behavior as a researcher and/or clinician, and as a reviewer, establish the platform that will aid the reviewer in identifying ethical issues in the papers she/he reviews. COPE's ethical guidelines for peer reviewers provide a section on preparing review comments (COPE Council 2017b), covering both the quality of a paper's content and any suspected ethical violations. These can be used to help reviewers behave ethically and to prepare and elevate the quality of their review comments. A few examples of ethical behavior are provided here (COPE Council 2017b):

• Do not rewrite, or ask the authors to rewrite, the paper in your preferred style – this is an example of bias and can unnecessarily delay the authors in publishing the paper.
• Do not ask the authors to extend the manuscript beyond its intended scope – this also delays the authors in publishing their work, possibly requiring additional data collection, and is only appropriate if needed to substantiate claims made in the paper under consideration. A reviewer can include a suggestion about extending the scope to strengthen the paper in the comments to the editor.
• Do not submit comments someone else prepared (e.g., a student) under your own name – as discussed elsewhere in this chapter, it is unethical to take credit for a review written by someone else.
• Do not recommend that the authors cite your published work without clearly valid reasons for doing so – authors feel compelled to follow reviewers' suggestions so they can get their work published; they should never be asked to add a citation to a reviewer's (or editor's) own work unless it is clearly needed to support the argument (and other references are not available).
• Do not delay the review process by submitting your comments after the deadline (unless you have requested and received an extension; COPE Council 2017b) – this, too, can delay the authors in publishing their work.

Editors

Editors are the decision-makers. They determine whether a submission fits the aims and scope of their journal and whether papers are actually ready for peer review (the so-called triage process), determine who the reviewers will be, and, finally, act as the ultimate arbiter on whether a paper is accepted or rejected. Reviewers are often asked for an opinion on whether to accept, reject, or request revisions to a manuscript, but this is only a recommendation. It is the role of the editor to make the actual decision.
Therefore, as the ultimate overseers of who gets to perform peer review, and as the people who ensure decisions are clear and consistent, editors have an ethical responsibility to do their jobs to a consistently high standard. Failing to maintain standards at any point in the process may allow flawed research to pass relatively untouched into the peer-reviewed literature. Some journals are blessed with a deep support network of highly skilled editorial office staff, ancillary support such as paid statistical consultants, a consultative publications committee, or a supportive publisher. Regardless of whether or not they are well resourced, editors are often compelled to act as the ethical ombudsman, on a day-to-day basis, for every single action. For example, editors decide whether authors, reviewers, or other (handling) editors (within workflows where the editor in chief assigns members of the editorial board to handle the peer review for a manuscript) have potential conflicts of interest, including determining whether an author has recommended a reviewer with a conflict of interest or one who does not actually exist (Stigbrand 2017).

Editors, especially editors in chief, are responsible for ensuring that policies are developed and procedures are followed so that ethical peer review processes are expected and maintained for the journal and that all other editors are behaving ethically. This is typically done up front by setting (or signing off on) policies on what constitutes a conflict. It may require making a judgment when another stakeholder is uncertain whether a conflict exists. Editors may also have to sit in judgment on whether authors and (sometimes) reviewers are possibly guilty of misconduct and then determine what to do in response.
For example, Springer's 2017 discovery of failed peer review led to the retraction of 107 articles in the journal Tumor Biology after a widespread investigation uncovered that the peer review process had been compromised (Retraction Watch 2010). The specifics of the retractions indicated that the peer reviewers were imposters: while the names of the reviewers were real, their email addresses were not, thereby compromising each of the papers that were retracted (Stigbrand 2017). Journals should be scrutinizing peer review reports and utilizing available technology, such as ORCID, to identify reviewers. Editors may conduct their own investigations, provide evidence to and summarize the case for the publisher to handle, and/or alert an author's institutional office of research integrity. Choosing not to involve the authors' institutions can prevent them from collecting evidence from computers and/or labs before it is destroyed (Generales 2017).

Regardless of the level of resources, editors who choose to turn a blind eye to obviously unethical issues, either because they are unwilling to put in the considerable effort involved in investigating cases or because they are trying to avoid uncomfortable (often political) situations, are, simply put, acting unethically. Similarly, journals face considerable outside pressure to publish, suppress, delay, or expedite content. The reasons may be commercial or political, or the aim may be to generate publicity. Regardless of the source of the pressure, by failing to take an ethical stand and allowing themselves to be coerced, editors are, once again, failing to maintain basic ethical standards.

As this point illustrates, editors are clearly not immune to committing misconduct themselves. This can happen when the peer review process is not rigorous and transparent. Editors can have conflicts of interest, have biases, and/or make bad or rushed decisions on a given day. Rigor and transparency in the peer review process, in particular, protect against possible peer review malpractice. Editors should be aware of any bias or conflict of interest made apparent in the review comments, as well as any of their own. Misconduct can be as minor as suggesting a number of edits to a paper so that it is written in the reviewer's (or editor's) preferred style rather than the journal's (Pierson n.d.) or as major as requesting unnecessarily extensive revisions to slow the authors down and allow competing work to reach publication first (Hames 2007). Many other issues exist on a spectrum of ethicalness; it is fair to say many even represent common practice, but just how unethical they can be is likely not considered by many stakeholders on a daily basis. These include:

• Suggesting the authors cite the editor's own work to increase the editor's h-index – citations are only to be used to award credit for previously published work, to ascribe value, and to support claims (Wren et al. 2019; Penders 2018).
• Providing a paper with a "light" peer review process – the consequences range from falsely conveying weight to thoughts/claims/evidence that may otherwise be flawed to, in the most extreme cases, patient harm (Marcus 2018).
• Willfully ignoring reviewers' comments in order to accept a paper for publication – most likely this is undertaken to advance the work of a friend, a colleague, or an individual whose work benefits the editor in some material way.
• Selectively editing peer reviewer comments – this is problematic if the editor amends reviewer comments to justify his or her decision, most notably if the reviewer's comments run counter to the editor's point of view; potentially the editor is injecting bias here. Instead, preferred practice is to leave the comments intact but explain to the author why the reviewer's comments are being overruled (Hames 2007).
That said, removing or editing comments that could be considered derogatory, or that are unclear or imply something the reviewer clearly did not intend, is acceptable (Hames 2007, p. 77).
• Failing to manage the pool of reviewers – most obviously, by repeatedly calling upon poor or biased reviewers who are failing to do their duty well, which results in the editor failing to do her duty well.
• Using the reviewers' comments alone to make decisions, without reading the submission – if the reviewers have performed their role poorly, there is no failsafe mechanism to prevent flawed research from being accepted for publication. The editor may be further deficient if they failed to communicate how they wished the paper to be evaluated, so the reviewers may not be entirely blameless if the editor then did nothing when confronted with weak reviews.

Another ethical mistake editors can commit is failing to devote sufficient time to appropriately managing the journal. This may not seem like a consciously egregious act, but the implications are significant. The editor is integral to the process at several key stages; when the editor falls behind, sometimes grossly so, the process grinds to a halt, and authors suffer long wait times for decisions and sometimes inferior results, such as receiving only one set of review comments when the journal's minimum requirement is usually two or three, because the editor did not invite reviewers early enough and rushed the process to catch up and make a decision. Editors may even reject manuscripts simply because they do not have the time to dedicate to them. When a person accepts the role of editor for a peer-reviewed journal, they must consider the time required and consistently dedicate that time, every day and week, throughout the term.

A critically overlooked element of peer review is its actual management by editorial offices. Retractions often arise from misconduct involving the data, figures, and results of a manuscript, such as fabrication, but some cases result from the actual application of the peer review process. A study in BMJ Open reported troubling compliance with the ethical treatment of human subjects, specifically the consent process, in Chinese transplantation research (Rogers et al. 2019). Seventy-three percent of the papers reviewed stated that approval from an ethics committee and/or a Human Institutional Review Board (HIRB) had been obtained, but not all of the papers included specific information such as the HIRB number or date of approval (Rogers et al. 2019). Further, only 14% of the papers reported the source of the donated organs and provided any information regarding consent (Rogers et al. 2019). When reporting experiments on human subjects, it must be indicated that the procedures were in accordance with the ethical standards of the responsible committee on human experimentation (institutional or regional) or with the Helsinki Declaration of 1975 (as revised in 1983). Journals should require the inclusion of Institutional Review Board approvals within the manuscript.

Lastly, editors must behave ethically in their roles as authors and reviewers as well.
If an editor submits a manuscript to her own journal, there should be policies in place to ensure that the editor has no access to, nor influence upon, the peer review process for that paper and that a manuscript submitted by an editor receives the same rigorous peer review as any other submitted paper. Transparency is especially important here, and these policies should be published in the journal's instructions for authors. Additionally, when acting as a reviewer for a manuscript submitted to his journal, an editor should not act both as an anonymous reviewer and as the editor making the decision on the manuscript; if an editor performs the anonymous reviewer role, then the paper must be assigned to another editor for decision-making (COPE Council 2017b).
Authors

The last stakeholder group we discuss in this section are the authors, who must perform their research according to the ethical guidelines avowed by their fields and institutions and follow these ethical guidelines when writing their manuscripts for publication:

• Report data and results ethically; do not fabricate or falsify data – these actions are intentional, result in the distortion of scientific knowledge (Fanelli 2009), and are therefore egregious.
• Report and share data; make data available to other researchers and to those contributing to the peer review process – this allows future studies to replicate the findings and advance the science (Parr and Cummings 2005).
• Do not submit a manuscript to, nor publish a manuscript in, more than one journal at a time; a manuscript must be rejected or withdrawn before being submitted to a new journal – human (e.g., editor and reviewer) and other resources are wasted by multiple peer review and production processes.
• Do not split findings from a study into multiple manuscripts to increase the number of publications/citations – this increases the burden on reviewers, editors, and readers and may also increase costs for journals; it could also prevent other work from being published and, most importantly, could distort the literature by overrepresenting data from a single study (Hoit 2007).
• Do not omit findings or other information from your manuscript in an effort to support a desired conclusion – this is misleading and compromises the scientific record.
• Properly cite previously published data, words, tables, figures, etc., including your own and others' ideas – presenting someone else's work as your own is dishonest, and not citing your own previously published work suggests it is new scholarship (APA 2010) and may violate copyright laws.
• Protect the privacy and welfare of research subjects – publishing personal information in print or in photographs violates a study participant's privacy. Privacy must be protected unless disclosure is essential for scientific purposes, in which case the person must be shown the manuscript before publication (CSE 2012). Informed consent is required in many cases when working with human subjects to protect their welfare.
• Declare competing interests/conflicts of interest – such interests provide an opportunity for professional judgment to be biased (Lo and Field 2009), leading to unethical actions.
• Comply with the ethical standards of the institutions and governments where the research is conducted – failure to do so is itself unethical and can result in the loss of funding, publications, and professional prestige.

Authors should treat the peer review process and the comments they receive with respect. Editors are typically not well paid, if at all, and the overwhelming majority of reviewers volunteer their time; thus, many informal hours (American Journal Experts n.d.; National Academy of Sciences et al. 1993) are put into the assessment of manuscripts. If the peer review process is working as intended, the peer review comments are provided by experts in the field of study. As difficult as it is to receive criticism, this information is invaluable for improving the manuscript (National Academy of Sciences et al. 1993; Ware 2016) prior to publication.

After receiving constructive comments from the peer review process, authors must set aside their egos and feelings and set to work revising their manuscripts accordingly. Passive, and certainly not-so-passive, resistance to peer review comments, most notably by willfully ignoring requests for change or by addressing the reviewer comments comprehensively in the Author Response to Reviewer Comments but then failing to adequately amend the paper, is quite possibly unethical, depending on the severity of the problems identified. Experienced editorial office staff will testify that many rejected papers are turned around and submitted elsewhere without the paper being amended as directed by the first journal's reviewers. Yet there has been an almost complete absence of discussion about whether authors' intentional disregard of reviewers' advice and direction is unethical in some capacity.
Ethics in Practice

Awareness of ethical issues and policies is the first step in providing ethical peer review. The second step is knowing how to use that awareness in practice. In this section, we provide examples of how reviewers, editors, and authors can practice their peer review roles ethically.

Reviewers

A reviewer may be the only person able to spot an ethical issue, such as plagiarism, in a submitted paper, and it is imperative that they bring that issue to the attention of the editor, by immediately emailing the editor or by raising the concern in the confidential comments-to-the-editor section of the review form. Software such as iThenticate can identify plagiarism of words and documents, but only a subject matter expert can identify the plagiarism of someone else's ideas. The reviewer should indicate the concern, for example, plagiarism, and then provide specific information on why they suspect this issue in the paper. In the plagiarism example, the reviewer should provide the citations for the works (and page numbers, if possible) with which the submitted manuscript overlaps. The more specific the information the reviewer can provide, the better able the editor will be to promptly investigate the issue and make a decision on the manuscript. Though the issue may have been unintentional, or may be justifiable and correctable with revisions to the manuscript, it is never ethical to ignore or downplay a suspected ethical issue.

Reviewers should treat the manuscripts they are reviewing as confidential, along with any other information provided as part of the review process (COPE 2017b; ICMJE 2019; Rockwell n.d.), unless specifically informed otherwise in the instructions received from the journal or in the journal's author guidelines. This includes not circumventing a journal's peer review structure and contacting authors directly.
It is wrong to personally contact the authors to discuss how they can improve the study, and worse still for a reviewer to arrange authorship on the paper. Permission must be obtained from the editor to contact the authors in cases where this is deemed appropriate. A reviewer should not involve a student or junior researcher in the review of a manuscript without first gaining permission from the editor (COPE Council 2017b). If a reviewer plans to include someone they are mentoring in the review process for a manuscript, then, besides requesting permission from the journal, the name of the student or junior researcher should also be submitted to the journal, and this person should receive proper credit for their contributions to the review comments (COPE Council 2017b). Failure to do so takes advantage of someone on the weaker end of a power dynamic and, in addition, could result in the mentor having to take responsibility for something she did not write if a problem is identified after the review is submitted. Especially if time has passed, the mentor may no longer be in contact with the junior researcher, and the journal was never aware that someone else was involved in the review. It is important for senior researchers to provide proper supervision and an opportunity for junior researchers to learn and improve their peer-reviewing skills; junior researchers should not be used to relieve senior researchers' own workloads. At all times, it must be remembered that such actions take place in a "live" and not a "test" environment and that an actual author's hard work is under consideration.

Some journals actually provide a controlled process for this through mentoring opportunities such as "journal clubs." Many others encourage familiarity with reviewing through resources such as the American Chemical Society's (https://axial.acs.org/2017/05/18/acs-reviewer-lab/) and Publons' (https://publons.com/community/academy/) peer review training programs. Similarly, the American Psychological Association (APA 2019) provides guidelines for a mentoring program (https://www.apa.org/pubs/journals/cpp/reviewer-mentoring-program). These resources may be helpful to both parties in the mentoring relationship. Critically, an important part of a mentoring program is instruction in publication ethics, for the roles of both reviewer and author. Certainly, evidence suggests that instruction in publication ethics raises attendee awareness of such issues (Schroter et al. 2018).

Editors

Editors rely on reviewers, the experts in the field of study, to know the literature and the field well enough that they can contribute to identifying ethical issues (e.g., bias, falsification, manipulation) in a paper under review.
It is not the reviewers’ responsibility to prove there is, in fact, an ethical issue, only to bring the suspected issue to the attention of the editor. It is then the responsibility of the editor to determine how to investigate the issue to determine if the author may have violated publication ethics, knowingly or unknowingly. Using information from ethical governing bodies (Table 2), editors may investigate ethical issues on their own, refer them to society publications directors and committees, the publisher, and/or the Offices of Research and Integrity (ORI) at the authors’ institutions or the national ORI. Best practices include creation and enforcement of journal policy, a sound workflow, and investigative follow-through for ethical breeches. Editors should review each allegation without bias and follow a flowchart (COPE) in order to consistently and properly manage each case. An important aspect of best practice is prevention through education. This can be in the form of instructions to authors, instructions to reviewers, and editor intervention. Editors should be aware of all COPE guidelines and receive formal training in ethics. ORI has established a sound workflow for case management within their organization. Each case is listed on the ORI website with a description of the ethical concern and the outcome. In a highprofile case of data manipulation from 2015 (ORI 2015), a finding of research misconduct was concluded, and the case presented detail regarding false data as
142
J. Roberts et al.
Table 2 List of ethical governing bodies

• Office of Research Integrity (ORI): A US government office under the Assistant Secretary for Health that is responsible for overseeing research misconduct within Public Health Service research activity, which includes the National Institutes of Health (ORI 1999)
• Declaration of Helsinki, in conjunction with the World Health Organization: Developed to protect humans involved in medical research. Each Institutional Review Board (IRB) at any organization, university, or college should follow the principles of the Declaration of Helsinki (WHO 2001)
• Institutional Review Board (IRB) or Human Institutional Review Board (HIRB): A governing body that protects the rights of human subjects participating in a research study. An IRB statement should be present in each manuscript if humans participated in the study; if such a statement is not included, the manuscript should not be considered for publication (Oregon State University 2019)
• Animal Care and Use Committee: Approval should be present in each manuscript if animals were used in the study (National Research Council 2011)
• Committee on Publication Ethics (COPE): COPE does not have any true governance over publications, but as a resource it has proven to be a gold standard for ethical guidelines and should be used as a first resource when handling ethical concerns (COPE 1997)
• Retraction Watch: A heavily read and used resource. While it does not have any governance over publications, the blog is setting a precedent with its postings and offers a searchable index of retractions (Retraction Watch 2010)
reported by the authors’ institution. The published article was retracted, and the author was sanctioned for a 5-year period with regard to Public Health Service-supported funds.

Authors

Authors must also make themselves aware of publication ethics, at minimum familiarizing themselves with the author instructions for the journals to which they wish to submit manuscripts. Authors must not undermine the peer review process by suggesting biased potential reviewers or by submitting papers without their co-authors’ review and approval of the submitted manuscript. Corresponding authors may not realize the knock-on effects of failing to keep their co-authors well informed throughout the writing, peer review, and publication processes. The results can include having to
8
Peer Review in Scholarly Journal Publishing
143
publish a correction with the published article because a co-author saw the paper in print and didn’t agree with the published order of authors, or worse, a publisher retracting a paper because a co-author contacts the journal to say they were not involved in the process, didn’t agree to have the paper submitted, and/or don’t agree with the findings as written. Editorial policies are often written as a result of negative experiences and in an effort to prevent them from happening again, both for the journal and for the authors. Some other important policies for authors to be aware of and adhere to are:

• Do not submit a paper to one journal but withdraw the paper after peer review to submit the revised paper to a “better” journal – This wastes the time of the first journal’s editor and reviewers, both of which are scarce. When submitting a paper to a journal, the authors are ethically bound to see the process through to acceptance and publication or rejection with that journal.
• Do not submit a paper with known errors or sloppiness (e.g., incorrect citations) – As above, this wastes reviewers’, editors’, and the publisher’s time and other resources, such as paying for extra copyediting. Respect the other stakeholders in the process and submit only your best work.
• Do not ignore publication ethics or intentionally commit misconduct during the research study or the writing and peer review of the manuscript – Being accused and convicted of violating research and publication ethics can damage authors’ reputations and destroy their careers.
• Check the author instructions for the journal to which you plan to submit your manuscript, and determine its policy before posting your manuscript on a preprint server.
Some journals will not consider your submission if it has already been posted as a preprint; others will not be able to consider it if you have signed certain types of licenses in order to post the preprint. Preprint servers are a great way to get your scholarly work in front of your colleagues quickly, but if you are not proactive, you may preclude yourself from publishing that work in the journal of your choice. Individual researchers will likely play the roles of author and reviewer multiple times in their careers. Some will also play an editor role, whether as editor in chief, associate editor, or member of an editorial board. As discussed above, each role comes with important responsibilities in relation to publication ethics. Table 3 provides a useful chart showing ethical responsibilities by stakeholder and topic.
Conflicts of Interest and Bias

A widely cited description of conflicts of interest (COI) in the context of medical research is “a set of conditions in which professional judgment concerning a primary interest . . . tends to be unduly influenced by a secondary influence” (1993, p. 573). Lo and Field offer a similar definition in the context of COIs in
144
J. Roberts et al.
Table 3 Ethical responsibilities for stakeholders in the peer review process, mapping each topic below to the responsible stakeholders (editors, reviewers, and/or authors):

• Declare/identify conflicts of interest
• Keep manuscript and reviews confidential (if appropriate, dependent on peer review process type)
• Dedicate appropriate time to role
• Ensure human subject consent and welfare
• Ensure data retention/availability
• Identify fabrication/falsification of data
• Identify duplicate submission/publication
• Identify plagiarism
• Identify piecemeal submission
• Identify image manipulation
• Adhere to deadlines
• Ensure citation of previously published work
• Ensure ethical reporting of data and results
• Submit a manuscript to only one journal at a time and only the journal the authors plan to publish in
• Properly cite previously published content and others’ ideas
• Comply with institutional/governmental ethical standards for research
• Fully report the findings from a research study – do not omit information or split findings into multiple papers inappropriately
• Ensure a submitted manuscript is free from errors and follows the journal’s author guidelines
• Ensure an accepted manuscript is free from errors and follows the journal’s author guidelines

Note: Some content from COPE 2017a
medical research: “A conflict of interest is a set of circumstances that creates a risk that professional judgment or actions regarding a primary interest will be unduly influenced by a secondary interest” (2009, p. 46). Such secondary influences may include financial relationships to a product being studied in a paper under review. Alternatively, conflicts may be based on relationships, such as a connection through a family member, a close or previous working relationship with the authors of the paper under review, or being colleagues of the authors. A conflict may even be situational: scientific rivalry, political stances within a field of study, geopolitics, or even beliefs and values (ICMJE 2019). Bias is the most obvious manifestation of a conflict of interest, especially when financial rewards are at stake (such as a reviewer holding shares in the drug company vying to release the blockbuster drug being studied in a paper under review), which could lead to favorable critiques. However, bias may also have nothing to do with an explicit conflict of interest, especially if it is unconscious. It may be related to issues such as gender and race. It could be simply that reviewers are biased by the brilliance of the authors and the institutions that
support them. Reviewers may simply have a reflexive response to the results that erodes their initial intent to be scientifically objective – that too is a form of bias. These are personal biases, but there are also structural biases in the process itself. As shall be explored shortly in this section, the design and execution of the peer review process at a journal, most obviously whether or not a blinding process is in place, may unwittingly produce biases. Even the questions posed to reviewers on their score sheets may subtly direct their thinking. As discussed elsewhere in this chapter, regardless of the stakeholder, the related issues of conflict and bias revolve around awareness of their presence (e.g., editors knowing they cannot invite a potential reviewer because of a conflict) and self-awareness of the conflicts and biases one might carry as a potential participant in the peer review process (e.g., a reviewer declining to review a paper because the reviewer is currently collaborating with the author – a fact the editor may not be aware of). Trust in both the medical research process and its published outcomes is essential (Brody 2007). COIs in medical research, including during peer review, jeopardize the credibility of evidence and threaten that trust (Lo and Field 2009), which can have serious ethical implications. Undisclosed conflicts, when revealed, can contribute to a reduction of trust in the overall system of peer review and in journal publishing, and thus bring the validity of other published studies into question. The literature can then become biased by studies that should not have been published, or not in their final form. This can ultimately have a detrimental effect on decisions made by policy-makers and doctors, which could become costly for health services and could potentially harm patients (Turner et al. 2008).
The tendency of studies funded by, or supplied directly by, a private interest to feature favorable results is well known (Ioannidis 2005). It is the responsibility of proper peer review to detect such bias; that such studies permeate the published literature so prevalently is a sign that peer review, sadly, is not always as thorough as intended. Currently, the handling of stakeholder COIs is managed primarily through disclosure: trust in the honesty of participants is relied upon heavily in journal publishing. Guidance from professional organizations such as COPE, ICMJE, CSE, and WAME all emphasizes this method. The ubiquity of disclosure as a way of regulating COIs in medical and health journal publishing is not surprising; it plays a key role in the management of COIs across medicine, as well as in other professions such as business and law (Cain et al. 2005).
Stakeholders and Their Roles in Ethical Peer Review

Reviewers

The process of peer reviewing papers provides an opportunity for undisclosed potential conflicts of interest to be identified. For example, peer reviewers may be aware of conflicts the authors have (in non-blinded journals, where authorship is disclosed) or of bias in the writing, such as the presentation
of only a small subsection of a larger data set – usually one with favorable results – and they should feed this information back to the journals in their reports. However, reviewers – who find themselves in a position of power, as gatekeepers of knowledge (Roth 2002) – can themselves have COIs, which naturally can impact their ability to provide unbiased, objective commentary. These conflicts can be financial (such as shares in a company whose drug is under discussion) or “other”: for example, academic competition, personal friendships with the authors of the article, religious beliefs, or personal beliefs in the strengths or weaknesses of particular treatments (Young 2009; Marcovitch et al. 2010). Potential conflicts may be obvious enough that a reader will infer potential bias whether or not it actually exists. However, reviewers in particular, rushing to get the job done while volunteering their time, may not detect instances of subtle bias. Gender bias is one such example and may or may not be unconscious. For instance, Knobloch-Westerwick and colleagues conducted a study which found that papers with male first authors were perceived to hold higher scientific quality – a variant of the “Matilda effect” (Knobloch-Westerwick et al. 2013). In situations where peer reviewers are aware of the identity of authors, there may be unconscious favorable bias toward papers received from world-renowned institutions, from thought leaders in a given field (the “Matthew effect”), and from those with a strong publication and grant-winning track record. Equally, negative perceptions may materialize when the country of origin of the authors is apparent. A study by Mahoney (1977) found that reviewers experienced confirmatory bias, also known as “expert bias” (Phillips 2011), whereby they were strongly biased against manuscripts that conflicted with their own theoretical perspectives (and vice versa).
As a JAMA editorial identified in 1987:

Referees occupy a privileged position in the publication process in that they are privy to confidential information—thoughts, ideas, data, methods—supplied by colleagues who, at the same time, are not aware that they have access to it . . . the term conflict of interest should apply not only to the possibility of financial gain for referees, but also to other, though less easily measurable, interests beyond the financial, such as the possibility of otherwise unmerited gains in priority of publication, personal recognition, career advancement, increased power, or enhanced prestige. (Southgate 1987)
Indeed, in some fields (especially smaller ones), it may be difficult to avoid COIs in peer review, as there may be only a limited number of people with the expertise to comment on a paper (Resnik and Elmore 2018). If reviewers identify an author or a piece of research as a competitor to their own, their objectivity is reduced, and they may be inclined, consciously or not, to give a less favorable review or to delay the process by delaying submission of their comments. Conversely, if the reviewer and author work together, the reviewer may be inclined to be more supportive in their feedback. Because of the often hard task of finding people willing to review, many journals allow authors to suggest reviewers. Unfortunately, this may simply exacerbate the problem, as authors are likely to choose individuals they anticipate will respond positively to their work (e.g., Schroter et al. 2006; Wager et al. 2006).
Editors

The issues editors face regarding conflicts of interest are essentially similar to those confronting reviewers. Editors should know when to recuse themselves from control over the outcome of a submission with which they have a conflict and, in such instances, defer the handling of the manuscript to another member of the editorial board. Editors, even more so than reviewers, need to be self-aware of their own biases. Unlike reviewers, whose identities are typically hidden from authors and readers alike, the editor (be it an editor in chief or an editorial board member) is known. Such a public profile means that this particular stakeholder should be acutely sensitive to how she/he behaves, with the reputation of the journal potentially at stake. Considering that public profile, and that editors hold the reins of power, the issue of conflict and bias becomes one of accountability. Editors need to understand their biases and moderate their behavior accordingly. They need to understand their relationships with colleagues in the field and ensure that those privileged enough to know the editor, or who work in fields of direct interest to the editor, do not receive favorable treatment. Editors have a duty to know what their editorial board (and ideally their reviewer pool) is up to, to avoid placing the journal in compromising situations. Finally, editors need to take their role seriously and educate themselves on issues of conflict and bias. They need to discuss these issues and return to review them continually. They need to understand the consequences. They need to know when to ask for help or an outside opinion. And they need to take responsibility and not outsource the task of bias detection to reviewers, especially when those reviewers may be unknown to them (such as author-suggested reviewers or reviewers suggested by an algorithm within the editorial management system).
Authors

Authors are expected to abide by the standards set by the journal to which they submit their work. Once again, this taps into two leitmotifs of this chapter: transparency and self-awareness of one’s own biases and ethical standing. Not that ignorance should lead to absolution, but many authors are inexperienced, unskilled, or simply operating within different cultural norms; what seems like an obvious conflict to disclose to some is not to others. Authors should learn what a journal demands as its basic ethical expectations. Equally, journals need to make those expectations obvious and to outline the consequences of an ethical breach. One question typically posed to journals is how far back an author needs to reach before a conflict is no longer relevant. Another concerns the relevance of seemingly unrelated conflicts: should authors disclose only a direct conflict (e.g., funding received for the study of drug x from company y) or disclose everything (e.g., all the speakers’ bureaus and lab funding they have received from a multitude of companies)? No definitive answers or universal rules exist; thus, it is the responsibility of authors to familiarize themselves with specific journal requirements.
Ethics in Practice

Reviewers

The selection of an individual to function as a peer reviewer forces two sets of decisions. First, editors must be sure they have picked the most appropriate person to evaluate the paper, with that person’s expertise and potential for bias and conflict understood. Reviewers, once in receipt of the invitation, are then compelled to consider what their conflicts might be and what the implications would be if they accepted it. Ideally, editors should try to avoid placing such volunteer reviewers in a predicament and should first carefully check, where possible, reviewers’ institutions and past publishing records. The process for handling reviewers’ (and other stakeholders’) conflicts generally relies on the person’s own disclosure. Reviewers are relied upon to declare whether they have interests that would interfere with their ability to provide unbiased feedback (e.g., having recently worked on a project or published with the authors, being employed in the same department, or having an interest in the topic being studied) and, if they do, to recuse themselves accordingly. Some, though not all, journals actively ask reviewers to disclose. As with other stakeholders, the hope is that by directly asking reviewers to disclose honestly, they will be reminded of potential conflicts and be more willing to declare their interests rather than hide them and risk their discovery, which would damage their reputations (Sah and Loewenstein 2014). Yet reviewers may not always declare conflicts (or indeed be aware of what constitutes one), and editors should therefore check carefully where possible. However, editors may be under pressure to find people willing to review and therefore be less vigilant in checking for potential conflicts.
Yet some journals do not actively ask reviewers to disclose and instead simply expect them to be aware that they should do so. A cross-sectional survey sent to editors of 91 journals across a range of medical fields (those with the top impact factor in their field), designed to quantitatively assess journal COI policies for authors, editors, and peer reviewers, found that while 77% of journals had COI policies for authors, only 46% had them for peer reviewers (and 40% for editors) (Cooper et al. 2006). More recently, Resnik et al. (2017) analyzed COI policies in 224 journals on environmental, occupational, or public health research and found that while 96% required authors to disclose, only 33.9% required reviewers and editors to do so. Frankly, those statistics are almost unconscionable and represent an important issue that journals must address. For their own protection, reviewers should demand to know what the rules are. As Smith (2002) states: knowingly not following best practices, when one knows how to achieve them, is unethical.
To assist in the management of COIs, journals should ensure that they have effective policies for all stakeholders, including peer reviewers, that clearly define COIs, provide examples, and offer clear processes of disclosure/recusal:
A policy should define COIs and provide individuals with examples of the types of situations covered by the policy and include a process for reviewers and editors to report the COI and recuse themselves from the review process, if appropriate. If a reviewer discloses a conflict that requires recusal, an editor could invite a different scientist to review the manuscript. If an editor discloses a conflict, a non-conflicted editor could handle the review process, assuming that the journal has more than one editor. (Resnik and Elmore 2018)
Therefore, in terms of practical action, it is recommended that reviewers always ask editors or the journal office for clarification on conflicts. If the situation proves intractable, the reviewer should simply decline the invitation to review. Reviewers may also wish to consult authoritative bodies beyond the journals. The Committee on Publication Ethics (COPE) has assembled guidance for peer reviewers that touches upon the COIs they might experience, although it does not explicitly cite COIs as an example of the ethical violations reviewers should look out for when conducting their reviews (COPE 2017a).
Editors

It is crucial to have effective policies and processes in place to manage the potential or existing COIs of all stakeholders. Furthermore, editors need to ensure the peer review process their journals deploy is able to detect potential bias. Finally, journals need to ensure they have a structure in place that minimizes the risk of stakeholder biases creeping into the decision-making process. As just discussed, policies should exist to define conflicts. What applies to reviewers must also apply to editors and their editorial board. Why? Editorial board members may have ties to industry, and in smaller fields these ties may be inescapable. In a sample of 713 editors in chief of medical journals, Liu and colleagues found that 50.6% received some form of payment from pharmaceutical companies for research conducted beyond their journal work (Liu et al. 2017). The authors suggested, as a result, that journals should consider whether the prevalence of such relationships has the potential to erode public trust. Such relationships, it should be noted, are not necessarily an indication of wrongdoing, but they are most definitely something journals should acknowledge, and groups such as WAME recommend that journals take a transparent approach. One route is to publish, annually, a listing of editors and their conflicts (ICMJE 2018; Marušić 2009). Editors should have a recusal process, visible to all, for handling papers with which they come into direct conflict (Gottlieb and Bressler 2017). Most obviously this takes the form of assigning decision-making power to others. Helpfully, in instances where the editor is also an author, most editorial management systems automatically prevent editors from being able to see their own papers. However, co-authors can still see the paper’s status.
To ensure full transparency, it is recommended that journals publish a brief note explaining the blinding and decision-making process in such instances, so the reader is aware that full peer review was conducted on the editor’s paper (ICMJE 2018).
Similarly, editors need to make sure they and their teams understand the rules on conflicts and bias and discuss how these should be accounted for in the assessment of every paper. Few journals likely keep a record of the COIs of their reviewer pool, but a quick look at reviewers’ previously published work might reveal potential conflicts. A simple Internet search may also find records of other disclosures, for example, those associated with a speaking engagement at a meeting. Equally, editors could simply point out the rules or ask potential reviewers to consider their conflicts when inviting them to review a manuscript. This is the due diligence we should expect from journals; whether it truly happens extensively is very much up for debate. Due diligence does not stop there. Editors and/or their editorial offices absolutely must test the accuracy of author disclosures by doing the detective work. They may also compel authors to be accurate and to take declaration seriously by using strongly worded demands for signed affirmations, publicizing the consequences of a failure to properly disclose, and following through with disciplinary measures if warranted. When it comes to eliminating bias, a foolproof system for its removal is unrealistic and simply unobtainable, at least as long as humans conduct peer review. However, there are steps journals can take to help detect bias. Perhaps the most well-known, though still often misunderstood by large swathes of authors (Blanco et al. 2018), are reporting guidelines such as CONSORT and PRISMA (Schulz et al. 2010; Moher et al. 2009). These guidelines are designed to help authors improve the quality of the reporting of their studies while also helping journals to assess that quality after submission. By compelling authors to tell us more, we might be able to detect hidden biases, undisclosed outcomes, and the full extent of the data studied.
For instance, are authors guilty of selective outcome reporting bias; namely, did they cherry-pick the most positive results from a much broader data set (Kirkham et al. 2010)? How did the authors randomize their study participants to account for bias? Editors and reviewers need to know this to determine whether bias remains. For systematic review articles, did the authors consciously exclude select literature and overemphasize the importance of other articles to fit their narrative (Montori et al. 2000)? Simply imposing these reporting guidelines on all stakeholders, without careful consideration of their implementation and without education in their proper use, will likely pay few dividends. Again, all stakeholders need to understand their engagement with reporting guidelines, and journals in turn need to be transparent with everyone about who uses them and how, along with information on what is done when problems emerge. Finally, editors should consider ways to control for bias by imposing review structures that either force reviewers to confront their biases or, perhaps, protect them from themselves (namely, through blinding). Opinions vary on whether one method of peer review blinding is more successful than others, but one study of papers submitted to a conference found that reviewers who had author information were significantly more likely to recommend acceptance of papers from famous authors and top institutions (Tomkins et al. 2017).
In such circumstances, blinding the review process (so that reviewers don’t know who the authors are and vice versa) can potentially help reduce bias. However, it is not an infallible method: a 2008 study on reviewer blinding in nursing journals found that 38% of reviewers could still identify authors (Baggs et al. 2008). Authors may be identified through their reference list (particularly if they self-reference) and through the referee’s own background knowledge (Hill and Provost 2003). Hill and Provost found that, based on discriminative self-citations, reviewers were able to correctly identify authors 40–45% of the time. Identification accuracy rose to 60% for the top 10% most prolific authors and to 85% for authors with 100 or more prior papers. The arguments against blinding are less obvious, but certain protections afforded to authors, especially in small fields, can be lost without some measure of blinding. With the authors’ identities revealed, privilege-associated biases and issues of race, gender, geography, and politics emerge. Alternatively, if the authors are well-known, favorable opinions may be formed before the paper is even read. The issue may well be accentuated if a journal has adopted a policy of subsequently disclosing the identity of the reviewers. Again, in small fields, a sympathetic review may result if reviewers know their names will be appended to the paper, especially if the authors hold power or prestige within a particular field.

Authors

As with reviewers, authors are expected to disclose fully and honestly all potential conflicts of interest. Authors can help themselves by reading the instructions for authors for a given journal and may also consult the journal for advice. Equally, disclosing “everything” is a sensible approach.
If a journal does not want all the information disclosed, it will simply edit down to what it requires. A rule of thumb for all authors is to disclose everything whose omission could later be used against them to question their integrity or undermine the validity of the research. If there is a potential for embarrassment or worse, simply disclose (Roberts 2009). Furthermore, in addition to disclosing the nature of any potentially conflicting relationships, authors should take it upon themselves to fully disclose their level of access to data. Did they get to review an entire data set or only a partial selection? Who provided that data? Did they see summarized or raw data? Data access issues are almost certainly a cause for concern and a potential source of bias.
Anticipated Outcomes

Two of the most common discussions involving peer review and publication relate to (a) fraud and other ethical transgressions by authors, and conflicts, bias, and other ethical transgressions by reviewers and editors, and (b) the failure of peer review to detect obvious bias and potential misconduct, along with an associated weak effort to raise the quality and validity of published results. Efforts to make
authors work harder at reporting precisely what they did – the reporting guidelines – have met, if we are honest, with collective indifference beyond a minority of major titles, and with genuine bafflement from many authors who simply see the task as administrative paperwork rather than an effort to clean up the published literature. Meanwhile, reports of peer review “fails” continue to stack up. Sites like Retraction Watch provide a litany of cases where peer review has failed, and yet little changes. Perhaps the root of this failure lies in the fact that peer review hinges on disclosure, honesty, and a grasp of how the process should work (namely, what your responsibility is when called upon as a particular stakeholder). Sadly, a basic understanding of how peer review functions eludes many (Schroter et al. 2018). Compounding this is the general acceptance of peer review as an amateur endeavor. It is almost always a volunteer effort for reviewers as well as for a large number of editors and editorial board members, and that seems to contribute to the mixed results we see from peer review and the less than impeccable observance of ethical standards. Anyone expecting a different outcome will almost certainly be waiting longer than anticipated. Peer review needs to be taken seriously, and the considerable effort involved, if done properly (a median of approximately 5 h per review, according to Ware), should be valued accordingly (Ware 2011). Institutions and science generally need to recognize and reward peer review in a manner somewhat akin to how the volume of published papers is valued. Institutions and journals themselves need to offer training in how to evaluate manuscripts as well as provide a grounding in publication ethics from the perspectives of all three stakeholders: authors, reviewers, and editors. Even then, peer review carries considerable vulnerability, exposed by its reliance on disclosure.
The surge in manuscripts submitted for consideration is leaving journal peer review creaking under the volume of material. Suitable reviewers, inundated with requests for their expertise, are burning out, rushing, not paying attention, or simply declining to help. All of this further reinforces concerns that peer review and peer reviewers may not be set up to catch ethical issues or to behave unimpeachably. It also leads to greater contemplation of the idea that artificial intelligence and machine learning may represent the only way consistent and truly impartial peer review can be applied. That peer review is flawed is a narrative that has been explored extensively, and solutions and saviors of peer review are regularly floated. In the meantime, and accepting that whole stakeholder groups are not going to remodel their behavior overnight, one positive potential outcome is that the growing clamor for extended transparency in the peer review process may bear fruit. Though some are responding, many journals, for various reasons, may not yet be ready to reveal to their readers who evaluated a manuscript. However, there is a growing recognition that transparency (as will be discussed in the next section of this chapter) at least enhances accountability and may force journals to get their houses in order by developing ethical policies and then actually enforcing them. Greater clarity should, in theory, help reduce cases of ethical transgressions arising from ignorance rather than outright malfeasance.
8
Peer Review in Scholarly Journal Publishing
153
Proposed Solutions

Education

The process of peer review can always be improved, but journals must first carefully ponder the implications of change. Ideally, standards – universally agreed upon standards at that – should be commonplace across all disciplines for every journal. Such standardization might take the form of clearly stated basic ethical policies for all stakeholders and clear, transparent workflows. These would allow for broad-based education to ensure global awareness and enhance the integrity of ethical peer review processes. As a bare minimum, each journal should have:

• An ethics policy for handling issues relating to authors, reviewers, and editors when there is an allegation of misconduct, using the Committee on Publication Ethics (COPE) resources (COPE 1997) as a reference
• A COI policy for each party involved in the process – editorial office professionals, editors, reviewers, and authors – covering how and whether this information is published, what information will be collected (e.g., which form will be used), which COIs are unacceptable and for whom, and how these will be handled
• Requirements for using applicable reporting guidelines to detect poor methodology, bias, and inaccuracies and to improve transparency (equator-network.org)
• Workflows to support the above policies, probably using the ICMJE recommendations as a guide (ICMJE 2001)
• A policy and workflow related to authorship for each submitted manuscript, again, where appropriate, using the ICMJE as a guide (ICMJE 2001)
• A policy and workflow for data management – availability and reproducibility – ranging from the registration of clinical trials to data deposition
• Written expectations for reviewers – included in journals' instructions for authors and in the emails sent to reviewers once they have agreed to review – explaining the role, the expected work product, and the level of confidentiality
Transparency

Transparency in peer review is a multifaceted responsibility that encompasses authors, reviewers, and editors and their respective partners (publishers, societies, institutions). While the specific level of transparency, and what will be made transparent, is ultimately up to each organization, journals should be documenting and posting/publishing their policies and workflows, rather than only providing typesetting and copyediting details to their authors via their instructions to authors. If authors were more aware of the processes, they would be able to make more informed decisions about where and how to publish their work. Journals should have a transparency policy (Samulak and Roberts 2018), with some basic information about the journal and how manuscripts are handled; for example, for a single-blind journal, the editor in chief will make triage decisions, associate editors
154
J. Roberts et al.
will make decisions on reviewed submissions, and all submissions that pass triage will receive three peer reviews. What often gets lost in instructions to authors is the rationale behind the requests. If a journal asks an author to provide specific information with a submission, it should be prepared to explain why and to educate all parties throughout the process. If journals are transparent about why they are asking for reporting guidelines and conflict of interest information, they are likely to educate other stakeholders, receive the information required to process the submission, and inform the readers if the submission is published. Transparency can be paramount to trust and is a key component of ethical journal operations.
Conclusion

Peer review is often criticized. It is studied with ever-greater scrutiny. It is bombarded, endlessly so, with suggestions to fix it, improve its quality, and deliver consistency. Despite this, peer review seems rather immutable. Perhaps the tipping point for real change is coming. Technology, unsurprisingly, may well be the disruptor, with the potential for artificial intelligence to perform the role of the reviewer and leave the gatekeeper role solely to the editor. That day of reckoning may still be some way off, but more immediate efforts may yield results and enhance the prospect that peer review can be conducted to a higher ethical standard. Transparency seems like an obvious solution. It may take the form of simply publishing the reviewer comments and the author response to those comments. It may equally mean compelling journals to delineate precisely how they conduct peer review and to reveal their criteria for assessing papers. The mere threat of increased transparency may compel many journals to take these matters more seriously, formulate policies for conducting ethical peer review, and hold lax editors and reviewers to account. It may provoke reviewers into taking seriously the task they volunteered to do. Appropriate reviewer recognition options are also growing. Services are now available for reviewers to record their work as peer reviewers more visibly (and future transparency may allow others to see the quality of that work); the onus is now on institutions and funders to legitimately reward the considerable amount of time researchers spend assessing the work of their peers.
With greater rewards (such as consideration of peer review work in tenure and promotion assessments), researchers may finally be pushed to get the training they need, learn about their responsibilities as authors, reviewers, and editors, and understand the ethical implications involved in achieving fair and full peer review.
References

American Journal Experts (n.d.) Peer review: how we found 15 million hours of lost time. https://www.aje.com/en/arc/peer-review-process-15-million-hours-lost-time/. Accessed 5 Feb 2019
American Psychological Association (2010) Publication manual of the American Psychological Association. American Psychological Association, Washington, DC
American Psychological Association (2019) Mentoring program for junior researchers. https://www.apa.org/pubs/journals/cpp/reviewer-mentoring-program. Accessed 5 Feb 2019
arXiv (1991) arXiv preprint server. Cornell University. https://arxiv.org/. Accessed 27 Mar 2019
Baggs JG, Broome ME, Dougherty MC, Freda MC, Kearney MH (2008) Blinding in peer review: the preferences of reviewers for nursing journals. J Adv Nurs 64(2):131–138
Bero L (2017) Addressing bias and conflict of interest among biomedical researchers. J Am Med Assoc 317(17):1723–1724
Blanco D, Biggane AM, Cobo E, MiRoR Network (2018) Are CONSORT checklists submitted by authors adequately reflecting what information is actually reported in published papers? Trials 19(1):80
Brody H (2007) Hooked: ethics, the medical profession, and the pharmaceutical industry. Rowman & Littlefield Publishers, Lanham
Button KS, Bal L, Clark A, Shipley T (2016) Preventing the ends from justifying the means: withholding results to address publication bias in peer-review. BMC Psychol 4(1):59. https://doi.org/10.1186/s40359-016-0167-7
Cain DM, Loewenstein G, Moore DA (2005) Coming clean but playing dirtier: the shortcomings of disclosure as a solution to conflicts of interest. In: Moore DA, Cain DM, Loewenstein G, Bazerman MH (eds) Conflicts of interest: challenges and solutions in business, law, medicine, and public policy. Cambridge University Press, New York
Chauvin A, Ravaud P, Baron G, Barnes C, Boutron I (2015) The most important tasks for peer reviewers evaluating a randomized controlled trial are not congruent with the tasks most often requested by journal editors. BMC Med 13:158
Cooper RJ, Gupta M, Wilkes MS, Hoffman JR (2006) Conflict of interest disclosure policies and practices in peer-reviewed biomedical journals. J Gen Intern Med 21(12):1248–1252
COPE (2017a, September) COPE discussion document: who "owns" peer reviews? https://publicationethics.org/files/Who_owns_peer_reviews_discussion_document.pdf. Accessed 4 Feb 2019
COPE Council (2017b, September) COPE ethical guidelines for peer reviewers. Version 2. https://publicationethics.org/resources/guidelines-new/cope-ethical-guidelines-peer-reviewers. Accessed 6 Dec 2018
Council of Science Editors Editorial Policy Committee (2012) 3.1 Description of research misconduct. In: White paper on publication ethics. https://www.councilscienceeditors.org/resource-library/editorial-policies/white-paper-on-publication-ethics/3-1-description-of-research-misconduct/#311. Accessed 14 Oct 2019
Committee on Publication Ethics (COPE) (1997). https://publicationethics.org. Accessed 27 Mar 2019
Enago Academy (2018) Post-publication peer review of scientific manuscripts: boom or bust? https://www.enago.com/academy/post-publication-peer-review-of-scientific-manuscripts-boom-or-bust/. Accessed 27 Mar 2019
Etkin A, Gaston T, Roberts J (2017) Peer review: reform and renewal in scientific publishing. Against The Grain Press, Charleston
F1000 (2000) About F1000: who we are. https://f1000.com/about. Accessed 27 Mar 2019
Fanelli D (2009) How many scientists fabricate and falsify research? A systematic review and meta-analysis of survey data. PLoS One 4(5):e5738. https://doi.org/10.1371/journal.pone.0005738
Generales M (2017) Research misconduct investigations: behind the scenes at a major research university. Presentation at the Council of Science Editors annual meeting, San Diego, California, 22 May 2017
Gottlieb JD, Bressler NM (2017) How should journals handle the conflict of interest of their editors? Who watches the "watchers"? JAMA 317(17):1757–1758
Hames I (2007) Peer review and manuscript management in scientific journals: guidelines for good practice. Blackwell Publishing, Malden
Hill S, Provost F (2003) The myth of the double-blind review? Author identification using only citations. SIGKDD Explor 5(2):179
Hoit JD (2007) Salami science. Am J Speech Lang Pathol 16:94. https://doi.org/10.1044/1058-0360(2007/013)
ICMJE (2018) Recommendations for the conduct, reporting, editing, and publication of scholarly work in medical journals. http://www.icmje.org/icmje-recommendations.pdf. Accessed 23 Sept 2019
ICMJE (2019) Responsibilities in the submission and peer-review process. http://www.icmje.org/recommendations/browse/roles-and-responsibilities/responsibilities-in-the-submission-and-peer-review-process.html. Accessed 5 Feb 2019
ICMJE (International Committee of Medical Journal Editors) (2019) Conflicts of interest. Accessed 22 Oct 2019
ICMJE (International Committee of Medical Journal Editors) (2001). http://www.icmje.org/. Accessed 27 Mar 2019
Ioannidis JPA (2005) Why most published research findings are false. PLoS Med 2:e124
Kirkham JJ, Dwan KM, Altman DG, Gamble C, Dodd S, Smyth R, Williamson PR (2010) The impact of outcome reporting bias in randomised controlled trials on a cohort of systematic reviews. BMJ 340:c365
Knobloch-Westerwick S, Glynn CJ, Huge M (2013) The Matilda effect in science communication: an experiment on gender bias in publication quality perceptions and collaboration interest. Sci Commun 35(5):603–625
Liu JJ, Bell CM, Matelski JJ, Detsky AS, Cram P (2017) Payments by US pharmaceutical and medical device manufacturers to US medical journal editors: retrospective observational study. BMJ 359:j4619
Lo B, Field MJ (2009) Conflict of interest in medical research, education, and practice. National Academic Press, Washington
Lyon L (2016) Transparency: the emerging third dimension of open science and open data. LIBER Q 25(4):153–171. https://doi.org/10.18352/lq.10113
Mahoney MJ (1977) Publication prejudices: an experimental study of confirmatory bias in the peer review system. Cogn Ther Res 1(2):161–175
Marcovitch H, Barbour V, Borrell C, Bosch F, Fernández E, Macdonald H, Marusić A, Nylenna M; Esteve Foundation Discussion Group (2010) Conflict of interest in science communication: more than a financial issue. Report from Esteve Foundation Discussion Group, April 2009. Croat Med J 51(1):7–15
Marcus A (2018) A scientist's fraudulent studies put patients at risk. Science 362(6413):394
Marušić A (2009) Editorial interest in conflict of interest. Croat Med J 50(4):339–341
Moher D, Liberati A, Tetzlaff J, Altman DG, PRISMA Group (2009) Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLoS Med 6(7):e1000097
Moher D, Galipeau J, Alam S, Barbour V, Bartolomeos K, Baskin P, Bell-Syer S, Cobey KD, Chan L, Clark J, Deeks J, Flanagin A, Garner P, Glenny A-M, Groves T, Gurusamy K, Habibzadeh F, Jewell-Thomas S, Kelsall D, Florencio Lapeña J Jr, MacLehose H, Marusic A, JE MK, Shah J, Shamseer L, Straus S, Tugwell P, Wager E, Winker M, Zhaori G (2017) Core competencies for scientific editors of biomedical journals: consensus statement. BMC Med 15(1):167
Montori VM, Smieja M, Guyatt GH (2000) Publication bias: a brief review for clinicians. Mayo Clin Proc 75(12):1284–1288
Mudditt A, Wulf K (2016) Peer review in the humanities and social sciences: if it ain't broke, don't fix it? https://scholarlykitchen.sspnet.org/2016/09/21/peer-review-in-the-humanities-and-social-sciences-if-it-aint-broke-dont-fix-it/. Accessed 24 Oct 2019
National Academy of Sciences (US), National Academy of Engineering (US), Institute of Medicine (US) Panel on Scientific Responsibility and the Conduct of Research (1993) Responsible science: ensuring the integrity of the research process: volume II. National Academies Press (US), Washington, DC. 10, Guidelines for the responsible conduct of research. https://www.ncbi.nlm.nih.gov/books/NBK236192/. Accessed 5 Feb 2019
National Research Council (US) (2011) Committee for the update of the guide for the care and use of laboratory animals. National Academies Press (US), Washington, DC
Office of Research Integrity (1999) https://ori.hhs.gov/. Accessed 27 Mar 2019
Office of Research Integrity (2015) Case summary: Potti, Anil. https://ori.hhs.gov/case-summary-potti-anil. Accessed 24 Oct 2019
Oregon State University (2019) What is the Institutional Review Board (IRB)? https://research.oregonstate.edu/irb/frequently-asked-questions/what-institutional-review-board-irb. Accessed 27 Mar 2019
Parr CS, Cummings MP (2005) Data sharing in ecology and evolution. Trends Ecol Evol 20(7):362–363. https://doi.org/10.1016/j.tree.2005.04.023
Penders B (2018) Ten simple rules for responsible referencing. PLoS Comput Biol 14(4):e1006036
Phillips JS (2011) Expert bias in peer review. Curr Med Res Opin 27(12):2229–2233
Pierson CA (n.d.) Reviewing journal manuscripts. http://naepub.com/wp-content/uploads/2015/08/24329-Nursing-ReviewingMSS12ppselfcover_8.5x11_for_web.pdf. Accessed 28 Jan 2019
Pinholder G (2016) Journals and funders confront implicit bias in peer review. Science. https://doi.org/10.1126/science.352.6289.1067
Preston A (2017, August 9) The future of peer review. Scientific American. https://blogs.scientificamerican.com/observations/the-future-of-peer-review/. Accessed 23 Oct 2019
PubPeer (2012) About PubPeer. https://pubpeer.com/static/about. Accessed 27 Mar 2019
Rennie D, Flanagin A (2018) Three decades of peer review congresses. JAMA 319(4):350–353
Resnik DB, Elmore SA (2018) Conflict of interest in journal peer review. Toxicol Pathol 46(2):112–114
Resnik DB, Konecny B, Kissling GE (2017) Conflict of interest and funding disclosure policies of environmental, occupational, and public health journals. J Occup Environ Med 59(1):28–33
Retraction Watch (2010) When a journal retracts 107 papers for fake reviews, it pays a price. https://retractionwatch.com/category/by-journal/tumor-biology/. Accessed 27 Mar 2019
Roberts J (2009) An author's guide to publication ethics: a review of emerging standards in biomedical journals. Headache 49(4):578–589
Rockwell S (n.d.) Ethics of peer review: a guide for manuscript reviewers. https://ori.hhs.gov/sites/default/files/prethics.pdf. Accessed 5 Feb 2019
Rogers W, Robertson MP, Ballantyne A, Blakely B, Catsanos R, Clay-Williams R, Fiatarone Singh M (2019) Compliance with ethical standards in the reporting of donor sources and ethics review in peer-reviewed publications involving organ transplantation in China: a scoping review. BMJ Open 9:e024473
Ross-Hellauer T (2017) What is open peer review? A systematic review [version 2; peer review: 4 approved]. F1000Res 6:588. https://doi.org/10.12688/f1000research.11369.2
Roth W-M (2002) Editorial power/authorial suffering. Res Sci Educ 32:215–240
Sah S, Loewenstein G (2014) Nothing to declare: mandatory and voluntary disclosure leads advisors to avoid conflicts of interest. Psychol Sci 25(2):575–584
Samulak D, Roberts J (2018) Transparency – this is what we do, and this is what we expect. https://scholarlykitchen.sspnet.org/2018/09/11/guest-post-transparency-this-is-what-we-do-and-this-is-what-we-expect/. Accessed 14 Oct 2019
Schroter S, Tite L, Hutchings A, Black N (2006) Differences in review quality and recommendations for publication between peer reviewers suggested by authors or by editors. JAMA 295(3):314–317
Schroter S, Roberts J, Loder E, Penzien DB, Mahadeo S, Houle TT (2018) Biomedical authors' awareness of publication ethics: an international survey. BMJ Open 8(11):e021282
Schulz KF, Altman DG, Moher D, CONSORT Group (2010) CONSORT 2010 statement: updated guidelines for reporting parallel group randomised trials. J Clin Epidemiol 63(8):834–840
Sense About Science (2012) Standing up for science 3. Peer review: the nuts and bolts. A guide for early career researchers. http://senseaboutscience.org/activities/peer-review-the-nuts-and-bolts/. Accessed 24 Jan 2019
Shamseer L, Moher D, Maduekwe O, Turner L, Barbour V, Burch R, Clark J, Galipeau J, Roberts J, Shea BJ (2017) Potential predatory and legitimate biomedical journals: can you tell the difference? A cross-sectional comparison. BMC Med 15:28
Shattell MM, Chinn P, Thomas SP, Cowling WR (2010) Authors' and editors' perspectives on peer review quality in three scholarly nursing journals. J Nurs Scholarsh 42:58–65
Smith NL (2002) An analysis of ethical challenges in evaluation. Am J Eval 23(2):199–206
Snodgrass R (2006) Single- versus double-blind reviewing: an analysis of the literature. SIGMOD Record 35(3), Sep 2006. https://tods.acm.org/pdf/p8-snodgrass(1).pdf. Accessed 27 Mar 2019
Southgate MT (1987) Conflict of interest and the peer review process. JAMA 258(10):1375
Stigbrand T (2017) Tumor Biol. https://doi.org/10.1007/s13277-017-5487-6
Thomas SP (2018) Current controversies regarding peer review in scholarly journals. Issues Ment Health Nurs 39(2):99–101
Thomas E, Magilvy JK (2011) Qualitative rigor or research validity in qualitative research. J Spec Pediatr Nurs 16:151–155
Thompson DF (1993) Understanding financial conflicts of interest. N Engl J Med 329(8):573
Tomkins A, Zhang M, Heavlin WD (2017) Reviewer bias in single- versus double-blind peer review. PNAS 114(48):12708
Turner EH, Matthews AM, Linardatos E, Tell RA, Rosenthal R (2008) Selective publication of antidepressant trials and its influence on apparent efficacy. N Engl J Med 358(3):252–260
van Rooyen S, Godlee F, Evans S, Smith R, Black N (1998) Effect of blinding and unmasking on the quality of peer review. JAMA 280(3):234–237
Wager E, Parkin EC, Tamber PS (2006) Are reviewers suggested by authors as good as those chosen by editors? Results of a rater-blinded, retrospective study. BMC Med 4:13
Ware M (2011) Peer review: recent experience and future directions. New Rev Inf Netw 16(1):23–53
Ware M (2016, May) PRC peer review survey 2015. http://publishingresearchconsortium.com/index.php/134-news-main-menu/prc-peer-review-survey-2015-key-findings/172-peer-review-survey-2015-key-findings. Accessed 14 Feb 2019
World Health Organization (2001) World Medical Association Declaration of Helsinki: ethical principles for medical research involving human subjects. Bull World Health Organ 79(4). https://www.who.int/bulletin/archives/79(4)373.pdf. Accessed 27 Mar 2019
Wren JD, Valencia A, Kelso J (2019) Reviewer-coerced citation: case report, update on journal policy and suggestions for future prevention. Bioinformatics 35(18):3217–3218
Young S (2009) Bias in the research literature and conflict of interest: an issue for publishers, editors, reviewers and authors, and it is not just about the money. J Psychiatry Neurosci 34(6):412–417
9
Research Misconduct

Ian Freckelton Q. C.
Contents
Introduction 160
Investigative Issues 161
Research Misconduct 163
Profiles of Those Found to Have Engaged in Research Misconduct 166
The Criminalization of Research Misconduct 167
Destruction of Research Data 169
Disciplinary Determinations 170
False Claims Act Litigation 171
The Lot of the Whistleblower 173
Current Debates 174
Proposed "Solutions" 176
References 178
Abstract
When misconduct is alleged in relation to research, it requires employing, auspicing, and funding bodies to undertake an investigation. However, a number of legal steps can be taken to obstruct or delay the progress of an investigation including defamation proceedings and applications for injunctions alleging deficits in investigative fairness and methodology. Depending upon the outcome of the investigation, there is the potential for a range of serious legal
I. Freckelton Q. C. (*) Law Faculty, University of Melbourne, Melbourne, Australia Queen’s Counsel, Melbourne, VIC, Australia Law and Psychiatry, University of Melbourne, Melbourne, VIC, Australia Forensic Medicine, Monash University, Melbourne, VIC, Australia e-mail: [email protected] © Crown 2020 R. Iphofen (ed.), Handbook of Research Ethics and Scientific Integrity, https://doi.org/10.1007/978-3-030-16759-2_6
outcomes for the individual concerned. These include disciplinary action for practitioners who are registered/licensed or who are members of a professional association, criminal prosecution, civil actions for compensation, and dismissal from or demotion in employment. The stakes are high for all concerned – the person accused of research misconduct, the employing organization, the funding body, the whistleblower, the area of research, and those who may have participated in or stood to benefit from the research. It is common for legal proceedings that arise from research misconduct to be lengthy and assertively contested. This chapter reviews international legal responses to accusations by whistleblowers that research misconduct has taken place, identifying the importance of prompt, fair, and independent investigative procedures and concentrating upon the role of disciplinary and criminal law in taking stern measures to deter fraudulent conduct on the part of persons who are funded either by government or by private agencies to undertake research projects.

Keywords
Research misconduct · Fraud · Fabrication · Data fraud · Investigation of allegations · Legal processes · Prosecution · Qui tam actions · Whistleblowers
Introduction

When an allegation of research misconduct is made by a whistleblower or other person, it poses a number of dilemmas and prompts a variety of institutional responses. Protection needs to be provided to the person communicating the information so that they do not become the victim of reprisals or harassment. In addition, a thorough and fair investigation needs to be undertaken to ascertain whether the allegations are well-founded and, if they are, what consequences should be imposed. These can vary from chastisement by the employer, to discharge of the person from their employment, to a ban from eligibility for funding for a stipulated period of time, and even criminal prosecution. This means that the stakes are high for all concerned – the whistleblower who has probably taken significant risks to inform against the alleged perpetrator, the funders of the research, the co-researchers, the supervisors of the research, the employing entity, the journals that have published the research, the status of the field of research, and, of course, the individual who has been alleged to have engaged in the misconduct. Inevitably, then, the law and lawyers will play a significant role in the aftermath of the lodging of complaints about a researcher's misconduct. This chapter reviews disciplinary, regulatory, criminal, and civil measures taken in response to allegations of research misconduct. It does so on an international basis but, of necessity, provides a disproportionate number of examples of such conduct and such responses from the medical area because misconduct in pharmacotherapy research has had a particularly high profile in terms of investigations and sanctions. However, where possible, other examples are highlighted.
Investigative Issues

Whereas historically institutions have attempted to deal with accusations of misconduct in research by conducting confidential internal inquiries, this has ceased to be feasible in more recent times in developed countries. Nonetheless, the way in which an investigation is commissioned, its terms of reference, the powers of the investigators, and the persons selected to investigate all have repercussions for the rigor of what is done and thereby for the confidence that the investigative process commands. Similarly, the commissioning of a proliferation of inquiries can generate mistrust of the investigative processes and their authority. Another issue is what happens with the report generated by the investigation. Often it is in the institution's interest that this not be released because the findings may reflect adversely on senior personnel, such as supervisors, as well as on the policies and practices of the institution itself. Release of a report may also publicly reveal the identity and conduct of a whistleblower, which may be disadvantageous for them. This latter consideration is the justification most often advanced for a report not being made public, on the basis that persons who disclose information are in a vulnerable position and need to be protected. When a whistleblower is not prepared to reveal their own identity, though, there can be real limits to the investigation that can be initiated into the anonymous expression of concerns. In a number of countries, external investigations can be conducted into allegations of research misconduct. In the United States, for example, the Office of Research Integrity (ORI) plays an important role in encouraging research integrity. Its role is to:

• Develop policies, procedures, and regulations related to the detection, investigation, and prevention of research misconduct and the responsible conduct of research.
• Review and monitor research misconduct investigations conducted by applicant and awardee institutions, intramural research programs, and the Office of Inspector General in the Department of Health and Human Services (HHS). • Recommend research misconduct findings and administrative actions to the Assistant Secretary for Health for decision, subject to appeal. • Assist the Office of the General Counsel (OGC) to present cases before the HHS Departmental Appeals Board. • Provide technical assistance to institutions that respond to allegations of research misconduct. • Implement activities and programs to teach the responsible conduct of research, promote research integrity, prevent research misconduct, and improve the handling of allegations of research misconduct. • Conduct policy analyses, evaluations, and research to build the knowledge base in research misconduct, research integrity, and prevention and to improve HHS research integrity policies and procedures. • Administer programs for maintaining institutional assurances, responding to allegations of retaliation against whistleblowers, approving intramural and extramural policies and procedures, and responding to Freedom of Information Act and Privacy Act requests (Office of Research Integrity 2019).
In recent years, however, the ORI has operated with diminishing resources, requiring it to be highly selective about the investigations it initiates – it made only 15 findings of research misconduct in 2016 and 2017 (Trager 2018). This has led to a call by the National Academies of Sciences, Engineering, and Medicine (2017) for universities and scientific societies to create, operate, and fund a new, independent, non-governmental Research Integrity Advisory Board to serve as a clearinghouse raising awareness of issues of research integrity, as an "honest broker" to mediate disagreements, and as a beacon to help institutions that lack the knowledge or resources to root out bad behavior and foster ethical research conduct. In countries such as Australia, there is no independent agency that conducts investigations into allegations of research fraud. The Australian Research Integrity Committee (ARIC 2018) provides a review system for institutional processes in responding to allegations of research misconduct, and one of its purposes is to ensure that institutions investigate such allegations and observe proper process in doing so. However, it remains the responsibility of institutions such as universities, research establishments, and funding bodies to conduct the investigations. At a broader level, the ARIC contributes to quality assurance and public confidence in the integrity of Australia's research effort. Such confidence should be limited, however, as ARIC does not itself undertake substantive investigations; its role is to review the methodology and independence of the investigations done by others, and little of its work is public. Australia is by no means alone in this regard, and internationally there have been many instances of unsatisfactory investigations left to employing institutions that have then led to a proliferation of further inquiries (see Freckelton 2016).
An example is the series of investigations into concerns raised by a postdoctoral fellow about the research undertaken by Dr Thereza Imanishi-Kari (see Kevles 1998) into genes that regulate immune responses. Investigations by the United States institutions employing Dr Imanishi-Kari found that her research methodology was defective but did not identify research misconduct. The National Institutes of Health (NIH) also reached this conclusion, but then Congressman Dingell, chairing the United States House of Representatives Energy and Commerce Committee, formed the view that the investigations had been inadequate and concluded that Dr Imanishi-Kari had tampered with data that she had submitted in her defense. This prompted involvement by the Office of Scientific Integrity (OSI) and ultimately the Office of Research Integrity (ORI), which by then had been constituted. Both made adverse findings against Dr Imanishi-Kari, but finally an appeals board (including two lawyers) criticized the reasoning of both bodies, finding that much of the case against Dr Imanishi-Kari was "internally inconsistent, lacked reliability or foundation, was not credible or corroborated, or was based on unwarranted assumptions" (Crotty 2001). The conduct in which Dr Imanishi-Kari engaged remains controversial. A further example is the multilayered investigations into what was asserted to be the scientific fraud of Professor Bruce Hall, a nephrologist in Australia who
9
Research Misconduct
163
researched suppressor cells in the immunology of rejection and tolerance in transplanted organs (see Freckelton 2016: 109ff). In 2001 whistleblowers within his laboratory raised more than 450 separate allegations against Hall. Parallel inquiries were instituted by the University of New South Wales, but their results were inconclusive. This resulted in the institution of a further inquiry chaired by a former Chief Justice of the Australian High Court. Its adequacy was impugned by Professor Hall, and the University Council determined not to release the report but later altered its position and, in spite of an unsuccessful application for an injunction by Professor Hall, released a redacted version. On the basis of the report, the vice-chancellor determined that Professor Hall had not engaged in scientific misconduct but had committed errors of judgment that were deserving of censure but not dismissal. After media publicity, Professor Hall and his wife instituted actions for defamation, which included appellate action in the New South Wales Court of Appeal. A critique by the editor of the Medical Journal of Australia (Van der Weyden 2006: 430) argued that it was necessary to learn lessons from the Hall saga:

• Allegations of serious scientific misconduct should be dealt with from the start by an external and independent inquiry.
• The inquiry should have statutory power to investigate and inquire.
• The inquiry should have sufficient scientific expertise to ensure credibility.
• To preserve public confidence, the inquiry should aim for the highest degree of transparency and accessibility of the final product.
• There is a need for uniform processes and procedures for dealing with and adjudicating upon fraud in scientific research.
• There is a need to shift the emphasis from managing misconduct and fraud to preventing them.
Research Misconduct

There are diverse definitions of "research misconduct." Such definitions have the potential to be significant, given the legal ramifications of adverse findings. However, in general terms, the main definitions that have been employed internationally, while they do not command unanimity, all address core components of conduct that most in the research community would regard as unethical, and they have generated relatively little difficulty when operationalized. The US Office of Research Integrity provides the following straightforward definition of research misconduct, which focuses upon the "top end" of misconduct: fabrication, falsification, or plagiarism in proposing, performing, or reviewing research, or in reporting research results.

(a) Fabrication is making up data or results and recording or reporting them.
(b) Falsification is manipulating research materials, equipment, or processes or changing or omitting data or results such that the research is not accurately represented in the research record.
(c) Plagiarism is the appropriation of another person's ideas, processes, results, or words without giving appropriate credit.
164
I. Freckelton Q. C.
(d) Research misconduct does not include honest error or differences of opinion (Office of Research Integrity 2015).

Along similar lines, the Australian Code for the Responsible Conduct of Research (2007) stipulated that:

A complaint or allegation relates to research misconduct if it involves all of the following:
• An alleged breach of the Code
• Intent and deliberation, recklessness, or gross and persistent negligence
• Serious consequences, such as false information on the public record, or adverse effects on research participants, animals, or the environment

Research misconduct includes fabrication, falsification, plagiarism, or deception in proposing, carrying out, or reporting the results of research, and failure to declare or manage a serious conflict of interest. It also includes avoidable failure to follow research proposals as approved by a research ethics committee, particularly where this failure may result in unreasonable risk or harm to humans, animals, or the environment. It also includes the wilful concealment or facilitation of research misconduct by others. Repeated or continuing breaches of the Code may also constitute research misconduct, and do so where these have been the subject of previous counselling or specific direction. Research misconduct does not include honest differences in judgment in management of the research project and may not include honest errors that are minor or unintentional. However, breaches of the Code will require specific action by supervisors and responsible officers of the institution.
In 2018 this was replaced by the minimalist formulation that defined "research misconduct" as "a serious breach of the Code which is also intentional or reckless or negligent."

The European Code of Conduct for Research Integrity (2017) states that:

Research misconduct is traditionally defined as fabrication, falsification, or plagiarism (the so-called FFP categorization) in proposing, performing, or reviewing research, or in reporting research results:
• Fabrication is making up results and recording them as if they were real.
• Falsification is manipulating research materials, equipment, or processes or changing, omitting, or suppressing data or results without justification.
• Plagiarism is using other people's work and ideas without giving proper credit to the original source, thus violating the rights of the original author(s) to their intellectual outputs.

These three forms of violation are considered particularly serious since they distort the research record. There are further violations of good research practice that damage the integrity of the research process or of researchers. In addition to direct violations of the good research practices set out in this Code of Conduct, examples of other unacceptable practices include, but are not confined to:
• Manipulating authorship or denigrating the role of other researchers in publications
• Republishing substantive parts of one's own earlier publications, including translations, without duly acknowledging or citing the original ("self-plagiarism")
• Citing selectively to enhance own findings or to please editors, reviewers, or colleagues
• Withholding research results
• Allowing funders/sponsors to jeopardize independence in the research process or reporting of results so as to introduce or promulgate bias
• Expanding unnecessarily the bibliography of a study
• Accusing a researcher of misconduct or other violations in a malicious way
• Misrepresenting research achievements
• Exaggerating the importance and practical applicability of findings
• Delaying or inappropriately hampering the work of other researchers
• Misusing seniority to encourage violations of research integrity
• Ignoring putative violations of research integrity by others or covering up inappropriate responses to misconduct or other violations by institutions
• Establishing or supporting journals that undermine the quality control of research ("predatory journals")

In their most serious forms, unacceptable practices are sanctionable, but at the very least, every effort must be made to prevent, discourage, and stop them through training, supervision, and mentoring and through the development of a positive and supportive research environment.
The Council of Canadian Academies in 2010 provided a more broad-based definition:

Research misconduct is the failure to apply, in a coherent and consistent manner, the values and principles essential to encouraging and achieving excellence in the search for knowledge. These values include honesty, fairness, trust, accountability and openness. (Council of Canadian Academies 2010: 5)
The diverse definitions referred to above highlight the fact that honest errors or mistakes are generally not regarded as constituting research misconduct but that there is no consensus on when the mental state of the researcher (e.g., reckless or negligent) is sufficient to fulfill the criteria for research misconduct.

Research misconduct has been described as "the dark side of the hypercompetitive environment of contemporary science," with its emphasis on funding, numbers of publications, and impact factors (Stern 2012). It was considerations such as these that Diederik Stapel (see Zwart 2017) highlighted in his rationalization for a sequence of frauds in high-profile psychology journals in which he engaged over a period of 15 years: his fraud was found to be "very substantial" and to include at least 55 publications in his name between 1996 and 2011 (Levelt Committee, Noort Committee and Drenth Committee 2012). In his autobiography The Derailment, he made the point that today's enhanced access to information has brought its own dynamics, with the internet seeming "to increase the demand for simple, elegant analyses that depict social reality with a couple of swift brush strokes" (Stapel 2012). However, he characterized today's scientists as not just "treasure hunters looking for hidden jewels; they're also sales managers with objective publication targets to meet" and as marketers:

As a scientist, if you can't get enough people interested in your ideas and discoveries, the value of your career stock drops substantially. In contemporary science the message is not just 'publish or perish' but also increasingly 'sell or sink'. Science is a business . . . the constant shortage of resources means that scientists spend a great deal of their time chasing after subsidies, grants and lucrative gigs in the private sector. They have to bring both their work and themselves into the spotlight. (Stapel 2012: 97, 92–93)
The important point emerging from Stapel’s tendentious analysis (rationalizations) is that a wide variety of contextual, institutional, cultural, and
personal factors can play a role in engagement in research misconduct. In addition, there is always a cultural context which has enabled the conduct. Research misconduct takes a variety of forms, varying from the elaborate, carefully executed to avoid detection, to the primitive, which was always prone to be discovered. Examples of the former category are the use of the blood of an owl monkey by Dr Long in his Hodgkin's disease research (Joyce 1981; Altman and Hermon 1997) and the spiking of human blood with rabbit blood by Dr Dong Pyou-Han in his AIDS research. Examples of the latter (see Freckelton 2016) include the painting of spots on butterflies by William Charlton (see Cooper 2014), the marking of toads by Paul Kammerer in "the Case of the Midwife Toad" (see Koestler 1971), the painting of mice by William Summerlin in "the Patchwork Mouse Affair" (see Hixson 1971), and the planting of lichen on the island of Rhum by Professor Harrison in "the Unlikely Lichen Fraud" (see Sabbagh 2000).
Profiles of Those Found to Have Engaged in Research Misconduct

There is a substantial history of persons proved to have engaged in serious research misconduct, allowing some observations that are empirically based (albeit somewhat impressionistic) to be advanced in relation to those who behave unethically in their research and in relation to how they respond when accused of such conduct (Freckelton 2016). A disproportionate number of these are in the diverse fields of medical research. The first issue is that those engaging in research misconduct are not a homogeneous group. Wright, a former Director of the ORI, has argued that there are four basic reasons for research misconduct:

• Some form of mental disorder
• Foreign nationals who learned somewhat different scientific standards
• Inadequate mentoring
• Increasing professional pressure to publish studies (see Mendoza 2015)
Another perspective was provided by Lawson, who has argued in relation to medical practitioners who engage in fraudulent research practice that:

The usual motivation appears to be a mixture of intense career and peer pressure to produce significant results and publications, financial incentives to obtain funding grants, and personality disorders and weaknesses, especially vanity and arrogance – the messiah complex. There appear to be two main groups of fraudsters. The first group is the overly ambitious young researcher determined to climb rapidly up the career ladder. The second group is more perplexing – senior doctors, often at the height of their careers, and often occupying prestigious positions. (Lawson 2012: 48)
Davis et al. (2007) analyzed 92 cases from the ORI in which researchers had been found to have committed scientific fraud. Starting with 44 possible factors
implicated in scientific misconduct, the authors used multidimensional scaling and cluster analysis to define clusters of similar factors labelled as personal or professional stressors (e.g., pressure to produce), organizational climates (e.g., insufficient supervision/mentoring), job insecurities (e.g., competition for position), rationalizations (e.g., lack of control), and personality factors (e.g., laziness). They identified a complex interplay among the personal and contextual factors.

Certain personalities have figured prominently among those who have been discovered to have engaged in the intellectual dishonesty that underlies research misconduct: those with a pathological need for affirmation, the grandiose, the narcissistic, those with an overly prominent sense of entitlement, and, at times, those with a loosened grip on reality and ethical propriety. Thus there can be elements of both personality disorder and pathology among perpetrators of research fraud. Often the persons involved are charismatic, powerful figures (such as Professor Harrison, Sir Cyril Burt, Professor Stapel, Professor Boldt, Dr Wakefield, and Dr McBride) whose industry and ambition lead them to assume an influential role in their discipline, often at an unusually early age, and drive them (sometimes manically) to single-minded efforts to secure and maintain prominence and leadership in their field. Psychologically, while their external presentation can be characterized by narcissism, there is often a deep wellspring of inadequacy and low self-esteem in such persons and a craving for attention and ego gratification through continuing high-profile publications in their name.
With the potential for celebrity status for researchers (see, e.g., Dr Haruko Obokata in Japan, Dr William McBride in Australia, and Dr Paolo Macchiarini in Sweden (see Elliott 2018)), there can be both an elevated sense of narcissism (see Fahy 2015) and the temptation to obtain and maintain a high public profile.

Another characteristic of those who engage in research misconduct is that they tend to respond combatively to allegations about their conduct. They often deny the accusations aggressively and make counter-allegations in a way calculated to intimidate their accusers. They can be obsessively persistent in their defense, availing themselves at considerable expense of a variety of tactics, including aggressive forensic strategies. They may sue their accusers for defamation or write self-serving accounts of how they have been victimized and misunderstood. They may attribute their errors to carelessness, misguided enthusiasm, inadequate supervision, workplace pressures, and ill health. Even after allegations are found proved, a surprising percentage of perpetrators will continue to deny or minimize their conduct and write their own accounts, casting themselves in the role of scapegoats.
The Criminalization of Research Misconduct

A contested issue is whether the criminal law has a constructive role to play in responding to fraud in research. On the one hand, it represents the intrusion of the blunt instrument of the criminal law and law enforcement into laboratories; on the other, it constitutes recognition of the harm that can be caused by those who are trusted and among the community's intellectual elite (see Freckelton and McMahon 2017).
In 1991, in the aftermath of the extensive frauds by the psychologist Stephen Breuning, who fabricated data in relation to his studies on mental retardation, Susan Kuzma of the US Department of Justice argued that "unpleasant and unsettling as it is to contemplate, criminal prosecution can serve a valuable role in deterring and condemning some forms of scientific misconduct and therefore should not be rejected in favour of internal controls" (Kuzma 1991: 357). On a modest number of occasions, researchers have been criminally charged and found guilty of research fraud and associated criminality (Freckelton 2014, 2016; Freckelton and McMahon 2017). The most prominent cases have involved:

• Robert Fiddes, a drug researcher at the Southern California Research Institute, who was sentenced in 1997 by the US Federal Court to 15 months' imprisonment and ordered to repay $800,000 to pharmaceutical companies he had deceived (Fisher 2008).
• Stephen Breuning, a psychologist employed by the University of Pittsburgh, who was sentenced in 1988 to 60 days in a halfway house, 5 years of probation, and 20 h of community service (Lewis 1997; Kimmel 2007).
• Harry W Snyder Jr, an oncologist at BioCryst Pharmaceuticals who researched the development of synthetic drugs, who was sentenced to 3 years' imprisonment, a sentence that was substantially upheld on appeal (United States v Harry W Snyder Jr, 291 F 3d 1291 (11th Cir 2002)).
• Eric Poehlman, a physiologist from the University of Vermont, who was sentenced in 2005 to imprisonment for 1 year and a day and 2 years of probation (Dahlberg and Mahler 2006).
• Luk Van Parijs, a neuroimmunologist from the Massachusetts Institute of Technology, who was sentenced in 2011 to 6 months of home detention with electronic monitoring and 400 h of community service (Doherty 2015: 95).
• Woo Suk Hwang, an embryologist from South Korea, who was sentenced by the Supreme Court to an 18-month jail term suspended for 2 years (Saunders and Savulescu 2008).
• Scott Reuben, an anesthesiologist from Baystate Medical Center in Springfield, Massachusetts, who was sentenced in 2010 to 6 months' imprisonment (Borrell 2009; Marcus 2010).
• Craig Grimes, an engineer at Pennsylvania State University, who was sentenced in 2012 to 41 months' imprisonment (Reich 2012).
• Steven Eaton, a Scottish drug researcher, who was sentenced in 2013 by the Edinburgh Sheriff Court to 3 months' imprisonment (Cossins 2013).
• Erin Potts-Kant, a biologist at Duke University, who was convicted of embezzlement in 2013 and sentenced to a fine and probation and ordered to perform community service (McCook 2016).
• Milena Penkowa, a Danish neuroscientist, who was convicted of fraud and embezzlement in 2015 and given a suspended sentence of 9 months; she later appealed successfully on a technicality (Severinsen 2014; Freckelton and McMahon 2017; Zieler 2019).
• Dong Pyou-Han, a biomedical scientist at Iowa State University, who was sentenced in 2015 to 57 months' imprisonment and 3 years' supervision (Reardon 2015: 871).
• Caroline Barwood and Bruce Murdoch, neuroscientists at the University of Queensland, Australia, who were sentenced in 2016 by the Brisbane District Court to 2 years' imprisonment, wholly suspended (Freckelton and McMahon 2017).

Others, such as Diederik Stapel, have maneuvered their way out of the imposition of significant criminal penalties by negotiating arrangements which have seen them escape sentencing by the criminal justice system (see Freckelton 2016). The involvement of the police and the courts, though, is inevitably unrepresentative of the phenomenon of research misconduct. Many researchers, such as Joachim Boldt (see Wise 2013; Freckelton 2016), Anil Potti (see Pelley 2012; Gunsalus 2015), and Yoshitaka Fujii (see Miller 2012; Freckelton 2016), have generated significant numbers of high-profile publications based upon falsified data but were never criminally charged. Nonetheless, it cannot be denied that the imposition of punitive consequences upon researchers who have fabricated data carries a strong denunciatory message and has the potential to exercise a significant deterrent effect, on the basis that other professionals are likely to learn of the criminal consequences for disgraced high-profile researchers and are in a position to adjust their behavior accordingly.
Destruction of Research Data

When researchers are called upon to respond to allegations that they have engaged in research misconduct, they are generally asked to provide their data so that the correlation between their data, their interpretations of the data, and their ultimate conclusions can be scrutinized. However, there is the potential for this process to be subverted if the researcher is obstructive about provision of their data or destroys their records.

This difficulty formed the basis for a lengthy saga involving Christopher Gillberg, a professor of child and adolescent psychiatry at Gothenburg University in Sweden. The issues arose from a dispute between another scholar and Professor Gillberg, a researcher on autism spectrum disorders and attention deficit hyperactivity disorder. Gillberg was inclined to diagnose up to 10% of Sweden's children with a variety of psychiatric disorders and to treat them with pharmacotherapies. As controversies about his work grew, Gillberg was accused of research fraud, and attempts were made to obtain his research data through Sweden's Administrative Court of Appeal. Judgments were obtained ordering him to provide his data, but Gillberg's response was to deny that his accusers were bona fide and to shred the data. In 2005 Gillberg was charged with and convicted of misuse of office for his destruction of the data. He appealed unsuccessfully to the Swedish Court of Appeal and then to the European Court of Human Rights (Gillberg v Sweden [2010] ECHR 1676), arguing that his right to freedom of expression had been breached.
Professor Gillberg then took his grievance to the Grand Chamber of the European Court of Human Rights, contending that he had a right under Article 8 of the European Convention on Human Rights not to communicate confidential information and that his moral integrity, his reputation, and his honor had been unwarrantedly affected by his criminal conviction. Again Professor Gillberg lost, this time unanimously, the Grand Chamber observing that he had properly been convicted of misuse of office in his capacity as a public official: "his conviction was not the result of an unforeseeable application of that provision and the offence in question has no obvious bearing on the right to respect for 'private life'. On the contrary, it concerns professional acts and omissions by public officials in the exercise of their duties."

The Gillberg decisions constitute a strong statement that destruction of research information under the shadow of a misconduct investigation is likely to attract stern censure and that administrative law processes and human rights arguments are not likely to afford protection to the researcher.
Disciplinary Determinations

For practitioners, such as doctors, who must be registered as a precondition of their entitlement to practice, the threat of disciplinary proceedings brought by their registration body is a serious matter and therefore a potentially significant deterrent. A number of high-profile cancellations of registration have ensued from proven research fraud. Examples in the United Kingdom, Canada, and Australia are summarized hereunder.

In Britain, the regulator of medical practitioners brought disciplinary proceedings in 2010 against three doctors: John Walker-Smith, a professor of paediatric gastroenterology; Simon Murch, a senior lecturer in gastroenterology; and Andrew Wakefield, a senior lecturer in histopathology and a reader in experimental gastroenterology. It proved to be England's longest ever disciplinary case (see Rao and Andrade 2011). It arose from research into a claimed correlation between administration of the measles, mumps, and rubella (MMR) vaccine and autism (see Freckelton 2016). The Fitness to Practise Panel of the General Medical Council found Dr Wakefield and Professor Walker-Smith to have engaged in professional misconduct. Walker-Smith appealed (successfully, on a technicality); Wakefield did not but has continued to proclaim his innocence and has published his own account of how he regards himself as persecuted by the medical establishment (Wakefield 2017). Wakefield's registration was erased in light of his intransigent and obdurate attitude toward the conduct in which he had been found to have engaged.

Australia's longest-running disciplinary case (consuming 180 days) concerned Dr William McBride, a medical practitioner who had achieved international fame and plaudits after discovering the teratogenic qualities of thalidomide (see Magazanik 2015; Humphrey 1992).
However, the New South Wales Medical Tribunal in 1993 removed his name from the medical register after finding that he invented data to support his hypothesis that another medication (hyoscine, also known as scopolamine) was also teratogenic. It concluded that his conduct was a
deliberate distortion, that he lacked insight, and that his conduct toward another researcher who questioned his conduct was indicative of a flaw of character incompatible with his retaining his registered status. McBride published a book proclaiming his innocence (McBride 1994) and was not re-registered until many years later (Freckelton 2016).

In 2017 a Canadian pediatrician, Ranjit Kumar Chandra, a former professor of nutrition and immunology at Memorial University in Newfoundland and Labrador, Canada, was found guilty of professional misconduct by the Ontario College of Physicians and Surgeons for defrauding the Ontario Health Insurance Plan. He was also the subject of criminal charges of fraud but left Canada, precluding his case going to trial. He sued CBC over a documentary it broadcast, "The Secret Life of Dr Chandra," which reported that he had published fraudulent research in medical journals. He lost his defamation action, was ordered to pay CBC $1.6 million in legal costs, and was stripped of his Order of Canada.

It is apparent that the consequence of losing registration is, for health practitioners such as medical practitioners, a potent weapon in the armamentarium of deterrent remedies. However, such a regulatory response is only available for those whose vocations carry professional registration/licensure.
False Claims Act Litigation

A distinctive aspect of findings that a researcher has engaged in research fraud in the United States is that actions may be brought under the False Claims Act, which imposes civil liability on any person who either "knowingly presents, or causes to be presented, a false or fraudulent claim for payment or approval" (31 USC § 3729(a)(1)(A)) or who "knowingly makes, uses or causes to be made or used a false record or statement material to a false or fraudulent claim" (31 USC § 3729(a)(1)(B)). The legislation is unique internationally and is mirrored in 29 states of the United States and the District of Columbia. It can be brought into operation by whistleblowers (known as "relators") through qui tam proceedings on behalf of the government. It enables a whistleblower to receive a share of the recovery obtained by the government against the fraudulent researcher or the researcher's institution.

The scienter element of knowing conduct is defined to mean that a person, with respect to information:
(i) Has actual knowledge of the information
(ii) Acts in deliberate ignorance of the truth or falsity of the information
(iii) Acts in reckless disregard of the truth or falsity of the information (31 USC § 3729(b)(1); United States ex rel Ahumada v NISH, 756 F 3d 268, 280 (4th Cir 2014))

No proof of a specific intent is required. It has been held that an applicant under the False Claims Act "need not prove the defendant had a financial incentive to make
a false statement relating to a claim seeking government funds" (United States ex rel Harrison v Westinghouse Savannah River Co, 352 F 3d 908, 921 (4th Cir 2003)).

The best-known example is the settlement reached in March 2019 whereby Duke University agreed to pay the US government $112.5 million to settle accusations that it had submitted fake data to win federal grants through the fraudulent work of Erin Potts-Kant (see Freckelton 2019). The settlement resulted in a $33.75 million payment to Joseph Thomas, a whistleblower who drew attention to the fraud while he worked for the University. Potts-Kant worked for the University for 8 years, between 2005 and 2013, and fabricated or falsified nearly all of her grant-funded flexiVent and multiplex experiments, making the results "better" in order to help Duke and its principal investigators obtain and maintain grants and to publish scientific articles. As a result of her course of conduct, false data were incorporated into grant applications, grant progress reports, and publications, a series of which in the name of Potts-Kant were retracted.

The Potts-Kant settlement drew upon earlier precedents, an example of which, in 2006, involved Kenneth Jones, a former statistician at the Massachusetts General Hospital. He brought an action alleging that Marilyn Albert, a former Professor of Psychiatry at Harvard Medical School and at the time Director of the Division of Cognitive Neuroscience at Johns Hopkins University School of Medicine and Brigham and Women's Hospital, had submitted an application to the National Institutes of Health that was based on manipulated data. Jones identified what he regarded as anomalies in the research produced by one of the researchers, Ronald Killiany, an Associate Professor at the Boston University School of Medicine. He told Albert that the alterations made by Killiany were substantial.
However, Albert declined to appoint an independent or external investigator and instead appointed Killiany's superior, who found no fraud. Jones was dissatisfied with this and alleged that Albert and others had violated the False Claims Act ("FCA") by:

1. Knowingly submitting an application for a grant to the NIH that was based on falsified and fraudulently manipulated study data and false statements of blinded, reliable methodologies
2. Receiving NIH funds while knowingly in violation of regulations that require applicant institutions to investigate and report allegations of scientific misconduct (United States ex rel Jones v Brigham and Women's Hospital, 678 F 3d 72, 76 (1st Cir 2012))

At first instance the case was dismissed before it went to trial, but on appeal the United States Court of Appeals for the First Circuit reversed the decision, concluding that the first instance court had erred in failing to receive expert evidence. It held that:

The essential dispute is about whether Killiany falsified scientific data by intentionally exaggerating the re-measurements of the EC to cause proof of a particular scientific hypothesis to emerge from the data, and whether statements made in the Application
about having used blinded, reliable methods to produce those results were true. If the jury should find that statements in the Application are false, they must also determine whether those statements were material and whether the Defendants acted knowingly in violating the FCA. Because we conclude that genuine issues of material fact remain on these central issues, we vacate the district court's order and remand for further proceedings consistent with this opinion. (United States ex rel Jones v Brigham and Women's Hospital, 678 F 3d 72 (1st Cir 2012))
Ultimately, the case was settled, but it established a precedent for the processes required to be followed in qui tam claims in relation to research misconduct.

The qui tam option is not without controversy (Freckelton 2019). It provides a major financial, and potentially distorting, incentive for whistleblowers to make accusations. The positive aspect of this is that both reprobate researchers and the institutions which harbor or enable them are more likely to be held accountable, and an incentive is created for employing entities to take suitable steps to make such conduct less likely. However, whether it is the best way to encourage persons who have observed research anomalies to ventilate their concerns is debatable. Susanne Hadley, a former Deputy Director of the Office of Scientific Integrity, for instance, has commented in relation to a case in which a former Cornell University epidemiologist won a False Claims Act case that "the plaintiff may do just fine, but that might just exacerbate the perception that these kinds of questions have gone beyond the ability of the scientific community to deal with" (see Taubes 1995). In addition, there is reason to be concerned about the fostering of a research environment in which there are bounties for accusations of misconduct against colleagues.
The Lot of the Whistleblower

Those who provide information to more senior persons, regulators, or the media about a colleague they believe has engaged in misconduct have been described as “bell-ringers,” “lighthouse keepers,” and, most commonly, “whistleblowers.” Such persons can have a variety of motives, ranging from altruism and a desire to protect their discipline, patients, or others who may be adversely affected by the misconduct, to a desire to secure retribution against the perpetrator. The preparedness of whistleblowers to expose the misconduct of their colleagues, who are usually in positions senior to them, is vital to detection and also to the feasibility of effective investigations. This means that protection needs to be extended to whistleblowers. Such protection exists in principle in many jurisdictions, for instance under the Public Interest Disclosure Act 1998 (UK), the Whistleblower Protection Act 1989 (US), the Protected Disclosures Act 2000 (NZ), and the Public Interest Disclosure Act 2013 (Aus: Cth), which protect whistleblowers from workplace discrimination and retribution arising from their provision of information about a colleague. However, the reality for most whistleblowers is professionally and personally bleak.
174
I. Freckelton Q. C.
This has generally been the case for whistleblowers in relation to research misconduct. Occasionally such persons are senior in the profession, empowered, and thereby protected by virtue of that status, such as Professor Sprague in relation to the fraud by Stephen Breuning (see Sprague 1993), but more commonly even persons who assume a “disclosure hero” status, such as Stephen Bolsin who made disclosures about the death rates at the Bristol Royal Infirmary, suffer significantly, leave their employment, and have to take extraordinary measures to secure other work (see Bolsin 1998). Another example is Nancy Olivieri, a hematologist who gave evidence about iron-chelating drug trials at the University of Toronto (see Shuchman 2005) and, while she was subsequently vindicated, had to endure the stigma of dismissal and herself being referred for disciplinary investigation of her professional conduct. The whistleblower in the Potts-Kant case maintained he was vilified and suffered a variety of personal hardships as he found himself out of work for over a year after making his disclosures (see Chappell 2019). In Sweden the whistleblowers who exposed the research misconduct by the charismatic stem cell researcher and surgeon, Dr Paolo Macchiarini (see Herold 2018), reported it to the Karolinska Institute and requested an official investigation. They received a less than welcoming response, and Macchiarini retaliated by filing a complaint against the principal whistleblower, claiming that he had stolen data from Macchiarini. The Karolinska Institute then accused the whistleblower of carelessness in his usage of Macchiarini’s data. Ultimately, the whistleblower was wholly exonerated, and the whole board at Karolinska was replaced. However, the process took a number of years, and the whistleblowers suffered multiple adverse consequences for the stand that they took (see Herold 2018). 
While whistleblower legislation affords in principle protection against reprisals and discrimination based on the making of disclosures, the experience of most whistleblowers in the research misconduct area is that their disclosures tend to attract stigma and make their continued progression in the workplace extremely fraught.
Current Debates

The incidence of research fraud remains unknown, although it is likely that it is rising, especially in the fields of medicine and pharmacotherapy (Steen 2011; Garner et al. 2013). Less is known of the incidence in fields such as science, engineering, and the social sciences, although, from time to time, high-profile scandals in each have come to public attention. What can be said with confidence is that the incidence of retractions has risen significantly in recent years (Royston 2015), and it is unlikely that this is wholly attributable to greater awareness of the phenomenon of research fraud by journal editors and reviewers. It has been conceptualized as a virus in scholarship (Montgomery and Oliver 2017). Experience has shown that review processes provide only a modest level of protection against those prepared to falsify, reconstruct, or manipulate their data (Belluz and Hoffman 2015; Gerber 2006). The reason for this is that reviewers
rarely see the full data set and tend to focus on the internal congruence of data, the conceptual plausibility of interpretations, and the quality of the research and literature review. If a researcher is prepared to falsify their data, and does so premeditatedly and advisedly so as to reach a predetermined result, this is not easily identified by reviewers. In addition, research misconduct tends to be serial; those who engage in fraud of any kind usually do so more than once, especially when they are successful in their first dishonest endeavors. It becomes part of a scholarly lifestyle (see Stapel 2012), and those who generate fake data often engage in other forms of intellectual dishonesty and unethical behavior. They also tend to keep doing so until they are caught, sometimes pushing the limits further and further until exposure and disgrace finally ensue.

Our responses to regulation of research misconduct have largely been driven by scandals that have created a scenario in which a response has to be forthcoming and reforms have to be pursued (see Smith 2006). Research frauds have periodically distressed and embarrassed the scholarly community, but their pathogenesis and causation have been little studied in a generic way (though see Freckelton 2016; Royal College of Physicians of London 1991; Petersdorf 1986). With increasing awareness of the vulnerability of journals to being deceived, and of the temptations to which those who have engaged in unethical conduct have succumbed, we can start to formulate strategies to address the phenomenon. The reality is that the incentives to engage in research fraud are substantial, and many succeed in their dishonest conduct for a considerable time. The harm caused by research fraud is not just to colleagues, funders, and the institution.
The suicide of Haruko Obokata’s supervisor, the retirement of colleagues, and the immense reputational and commercial damage suffered by Riken from her fraudulent work on the development of STAP cells (see Freckelton 2016: 98–104) give an insight into the harm that can be done by intellectual dishonesty in the research context.

Two other consequences are worthy of reference as well. There is a risk in some situations that the consequences of fraud will be to direct medical treatment in a direction that otherwise would be unjustifiable and which will endanger patients. An example in this regard is the fabricated research of Scott Reuben in relation to postoperative pain management and the confusion that he caused to the practice of multimodal analgesia (see Lipman 2009; Freckelton 2016). Another is the fraudulent studies by Joachim Boldt which supported the effectiveness of intravenous solutions containing hydroxyethyl starch, or hetastarch, to stabilize the blood pressure of patients during and after surgery or trauma and which fed directly into the content of international guidelines for patient treatment (see Marcus 2018).

Finally, the credibility of legitimate trajectories of research is imperiled by the backlash that can follow from revelations of unethical research. An example in this regard is the fraud engaged in between 1977 and 1990 by Roger Poisson, of St Luc Hospital in Montréal, Canada, in his breast cancer trials. Audits established that Poisson included ineligible patients in his trials and also altered hormone-receptor values. While his data fraud did not compromise patient safety or
affect the legitimacy of the overall conclusions of the studies, due to the importance of the results of the studies for women with breast cancer, the extensive press coverage of his misconduct, the subsequent congressional investigations, and the reactions of the sponsor, the perceived importance of the fraud was considerable indeed and affected the reputation of the field of research and the preparedness of funders to sponsor ongoing work in the area (see George and Buyse 2015). A further example is the extensive fraud by the Norwegian oral cancer researcher, Jon Sudbø, who fabricated data in relation to 900 patients in a study published by The Lancet, faked photographs for an article published in the New England Journal of Medicine, and fundamentally mishandled data in other research trials. An initial response to revelations of his fraud was the cancellation of a trial to study chemopreventive agents for oral cancer and a review of grants in all areas on which Sudbø had worked (Vastag 2006). However, the ongoing fallout for his focus of research continued for a significant period of time.
Proposed “Solutions”

There are no simple answers to the phenomenon of research fraud; nor is its full extent known (see Fanelli 2009). A complex response is required, incorporating changes to ethical cultures in medicine and science, openness of data, meaningful scrutiny by way of collegiate responsibility and supervision, identified authorial responsibilities in scholarly publications, better recognition of the value of peer reviewing, improved responsiveness by journals, investigative efficacy, protection of bona fide whistleblowers, and the taking of robust punitive and deterrent measures (Freckelton 2016). If we can delineate clear rules of proper research conduct, that is a good start to creating a research environment characterized by expectations of integrity, transparency, and openness of data. Supervision and accountability are also vital checks and balances.

Avoiding the cult of the charismatic researcher, in which protections that would otherwise constrain misconduct are suspended, is also important. No one should be above the ordinary rules of good and proper research practice. Among other things, this entails explicit identification of the responsibility assumed by authors for the constituent parts of scholarly articles and studies, an issue that emerged from the large number of coauthored publications containing fraudulent data generated by the physicist Jan Hendrik Schön (see Reich 2009). It is vital too that we foster an environment in which co-researchers are able to question constructively how data are being generated and interpreted. This involves a measure of collaborative breakdown of scholarly hierarchies. It also requires real extension of protection to those who raise concerns about breaches of research proprieties, and the taking of firm and robust action when those accused of misconduct behave retributively toward whistleblowers.
It also requires funding, publication, and promotion structures that value replication studies.
Taking away some of the incentives for fabrication and distortion of data, by encouraging journals to publish negative studies and recognizing the important contribution made by reviewers of scholarly articles, is an important way of reconfiguring the culture of research publication. Supporting journals’ preparedness to retract manuscripts and issue “expressions of concern” is also a valuable step. An important limitation, however, is that retractions often contain little information about the bases for retraction. This can be for fear of defamation proceedings. However, such an informational deficit diminishes the deterrent and denunciatory effect of retractions and also deprives other researchers of clear information about the bases for the corrective action that has been taken.

Investigation of research fraud needs to be done, as often as is feasible, by external bodies that are resourced and empowered to do the job well, fairly, and promptly. Leaving the task to the bodies where the alleged perpetrator is employed creates perceptions, if not realities, of conflict of interest, which in turn militates against frank and fearless decision-making and, at least, confidence in the robustness of such decision-making.

Finally, when serious research misconduct is identified, it needs to be the subject of meaningful sanctions. Such behavior needs to be named and shamed, with allocation of responsibility to the individual involved and to any systemic or institutional failings that enabled or facilitated the conduct. Research fraud is a fundamental abrogation of scholarly and professional values because of the intellectual dishonesty that lies at its heart, the trust that is breached, and the harm that it can engender.
This entails the removal of the perpetrator from funding entitlements for a significant period; a default position of sacking the person from their place of employment; the taking of disciplinary sanctions where the person is registered or a member of a professional association; and, in some circumstances, the deployment of criminal charges. Each of these individually, and sometimes in combination, is a measure that is appropriate both as a punitive response to the intellectual dishonesty of research fraud and as a means of deterring the individual, and others minded to emulate them, from the temptations of research misconduct. Whether the existence of qui tam options for whistleblowers is a helpful adjunct to the accountability process requires further analysis and debate.

Conclusions

This chapter has identified that research misconduct has a variety of deleterious consequences for the individuals, entities, and disciplines affected by it. This makes the stakes high when allegations about such conduct are made, in turn requiring a prompt, fair, and focused investigative response. Given the multiplicity of motives both for engaging in research misconduct and for blowing the whistle on what is perceived to be such conduct, it is important that an investigation be as independent and rigorous as possible so that it achieves its objectives and is regarded as credible. In and around such investigations and consequential action are a variety of potential outcomes, which may be affected by legal responses and constraints. These include criminal prosecution, disciplinary action, and, in the United States, qui tam actions. In addition, actions can be taken for defamation, harassment, discrimination, and unfair dismissal. All of these realities highlight both the need to address more effectively cultures in which research misconduct is encouraged, tolerated, and enabled and the importance of the adoption of effective deterrent action.
References

Altman E, Hernon P (1997) Research misconduct: issues, implications and strategies. Greenwood Publishing, New York
Australian Research Integrity Committee (2018) Establishment and purpose of ARIC. https://www.arc.gov.au/policies-strategies/strategy/australian-research-integrity-committee-aric
Belluz J, Hoffman S (7 December 2015) Let’s stop pretending peer review works. Vox: https://www.vox.com/2015/12/7/9865086/peer-review-science-problems
Bolsin SN (1998) The Wisheart affair: responses to Dunn – the Bristol cardiac disaster. Br Med J 317(7172):1579–1582
Borrell B (10 March 2009) A medical Madoff: anesthesiologist faked data in 21 studies. Scientific American: https://www.scientificamerican.com/article/a-medical-madoff-anesthestesiologist-faked-data/
Chappell B (25 March 2019) Duke whistleblower gets more than $33 million in research fraud settlement. NPR: https://www.npr.org/2019/03/25/706604033/duke-whistleblower-gets-more-than-33-million-in-research-fraud-settlement
Cooper DKC (ed) (2014) Doctors of another calling: physicians who are known best in fields other than medicine. Rowman & Littlefield, Maryland
Cossins D (18 April 2013) Jailed for faking data. The Scientist: https://www.the-scientist.com/the-nutshell/jailed-for-faking-data-39449
Council of Canadian Academies, Expert Panel on Research Integrity (2010) Honesty, accountability and trust: fostering research integrity in Canada. Council of Canadian Academies. https://www.scienceadvice.ca/wp-content/uploads/2018/10/ri_report.pdf
Crotty S (2001) Ahead of the curve: David Baltimore’s life in science. University of California Press, Berkeley
Dahlberg JE, Mahler CC (2006) The Poehlman case: running away from the truth. Sci Eng Ethics 12(1):157–173
Davis MS, Riske-Morris M, Diaz SR (2007) Causal factors implicated in research misconduct: evidence from ORI case files. Sci Eng Ethics 13(4):395–424
Doherty P (2015) The knowledge wars. Melbourne University Press, Melbourne
Elliott C (5 April 2018) Knifed with a smile. The New York Review of Books: https://www.nybooks.com/articles/2018/04/05/experiments-knifed-with-smile/
Fahy D (2015) The new celebrity scientists: out of the lab and into the limelight. Rowman & Littlefield, Lanham
Fanelli D (2009) How many scientists fabricate and falsify research? A systematic review and meta-analysis of survey data. PLoS One. https://doi.org/10.1371/journal.pone.0005738
Fisher JA (2008) Medical research for hire: the political economy of pharmaceutical clinical trials. Rutgers University Press, New Brunswick
Freckelton I (2014) Criminalising research fraud. J Law Med 22(2):241–254
Freckelton I (2016) Scholarly misconduct: law, regulation and practice. Oxford University Press, Oxford
Freckelton I (2019) Encouraging and rewarding the whistleblower in research misconduct cases. J Law Med 26:719–731
Freckelton I, McMahon M (2017) Research fraud by health practitioners and the criminal law. In: Freckelton I, Petersen K (eds) Tensions and traumas in health law. The Federation Press, Sydney, pp 433–451
Garner HR, McIver LJ, Waitzkin MB (2013) Research funding: same work, twice the money? Nature 493:599–601
George SL, Buyse M (2015) Data fraud in clinical trials. Clin Investig (Lond) 5(2):161–173
Gerber P (2006) What can we learn from the Hwang and Sudbø affairs? Med J Aust 184(12):632–635
Gunsalus CK (23 January 2015) Misconduct expert dissects Duke scandal. The Cancer Letter: https://cancerletter.com/articles/20150123_3/
Herold E (8 October 2018) A star surgeon left a trail of dead patients – and his whistleblowers were punished. Leapsmag: https://leapsmag.com/a-star-surgeon-left-a-trail-of-dead-patients-and-his-whistleblowers-were-punished/
Hixson JR (1971) The patchwork mouse. Anchor Press, Garden City
Humphrey GF (1992) Scientific fraud: the McBride case. Med Sci Law 32(3):199–203
Joyce C (1981) When the cheating has to stop. New Scientist 68
Kevles DJ (1998) The Baltimore case: a trial of politics, science and character. W. W. Norton, New York
Kimmel AJ (2007) Ethical issues in behavioural research: basic and applied perspectives, 2nd edn. Blackwell Publishing, Oxford
Koestler A (1971) The case of the midwife toad. Random House, New York
Kuzma SM (1991) Criminal liability for misconduct in scientific research. Univ Mich J Law Reform 25(2):357–421
Lawson G (2012) Painting the mice. O&G Magazine 14(2):45. https://www.ogmagazine.org.au/14/2-14/research-fraud-painting-mice/
Levelt Committee (Tilburg University), Noort Committee (Groningen University) and Drenth Committee (University of Amsterdam) (2012) Flawed science: the fraudulent research practices of social psychologist Diederik Stapel. https://www.tilburguniversity.edu/upload/3ff904d7-547b-40ae-85fe-bea38e05a34a_Final%20report%20Flawed%20Science.pdf
Lewis M (1997) Poisoning the ivy: the seven deadly sins and other vices of higher education in America. ME Sharpe, New York
Lipman AG (2009) The pain drug fraud scandal: implications for clinicians, investigators, and journals. J Pain Palliat Care Pharmacother 23(3):216–218
Magazanik M (2015) Silent shock. Text Publishing, Melbourne
Marcus A (25 June 2010) Reuben sentenced in fraud case. Anesthesiology News
Marcus A (2018) A scientist’s fraudulent studies put patients at risk. Science 362(6413):394
McBride W (1994) Killing the messenger. Eldorado Publishing, Sydney
McCook A (2 September 2016) Research fraud at Duke piles up. Real Clear Science: https://www.realclearscience.com/2016/09/02/research_fraud_at_duke_piles_up_272187.html
Mendoza M (12 July 2015) Allegations of fake medical research hit new high. NBC News: http://www.nbcnews.com/id/8474936/ns/health-health_care/t/charges-fake-research-hit-new-high/#.XKrhny-B1UM
Miller DR (2012) Retraction of articles written by Dr Yoshitaka Fujii. Can J Anesth 59(12):1081–1084
Montgomery K, Oliver AL (2017) Conceptualizing fraudulent studies as viruses: new models for handling retractions. Minerva 55(1):49
National Academies of Sciences, Engineering and Medicine (2017) Fostering integrity in research. https://doi.org/10.17226/21896
National Institutes of Health (October 2018) Grants policy statement. https://grants.nih.gov/grants/policy/nihgps/nihgps.pdf
Office of Research Integrity (2015) Definition of research misconduct. https://ori.hhs.gov/definition-misconduct
Office of Research Integrity (2019) About ORI. https://ori.hhs.gov/about-ori
Pelley S (5 March 2012) Deception at Duke: fraud in cancer care? CBS News. https://www.cbsnews.com/news/deception-at-duke-fraud-in-cancer-care/
Petersdorf RG (1986) The pathogenesis of fraud in medical science. Ann Intern Med 104(2):252–254
Rao TSS, Andrade C (2011) The MMR vaccine and autism: sensation, refutation, retraction and fraud. Indian J Psychiatry 53(2):95–96
Reardon S (2015) US vaccine researcher sentenced to prison for fraud. Nature 523:7559. https://www.nature.com/news/us-vaccine-researcher-sentenced-to-prison-for-fraud-1.17660
Reich ES (2009) Plastic fantastic: how the biggest fraud in physics shook the scientific world. Palgrave Macmillan, New York
Reich ES (7 February 2012) Duplicate-grant case puts funders under pressure. Nature 482(7384). https://www.nature.com/news/duplicate-grant-case-puts-funders-under-pressure-1.9984
Royal College of Physicians of London (1991) Fraud and misconduct in medical research: causes, investigation and prevention. J R Coll Physicians Lond 25(2):89–94
Royston M (2015) Retracted scientific studies: a growing list. The New York Times: https://www.nytimes.com/interactive/2015/05/28/science/retractions-scientific-studies.html
Sabbagh K (2000) A rum affair: a true story of botanical fraud. Farrar, Straus and Giroux, New York
Saunders R, Savulescu J (2008) Research ethics and lessons from Hwanggate: what can we learn from the Korean cloning fraud? J Med Ethics 34(3):214–221
Severinsen J (2014) Milena Penkowa – from famous to infamous. Science Nordic. http://sciencenordic.com/milena-penkowa-–-famous-infamous
Shuchman M (2005) The drug trial: Nancy Olivieri and the scandal that rocked the hospital for sick children. Random House, Toronto
Smith R (2006) Research misconduct: the poisoning of the well. J R Soc Med 99(5):232–237
Sprague RL (1993) Whistleblowing: a very unpleasant avocation. Ethics Behav 3(1):103–133
Stapel D (2012) Ontsporing (The derailment) (trans: Brown N). Prometheus, Amsterdam. http://nick.brown.free.fr/stapel/FakingScience-20161115.pdf
Steen RG (2011) Retractions in the scientific literature: is the incidence of research fraud increasing? J Med Ethics 37(4):249–253
Stern V (2012) Research misconduct on the rise. Clin Oncol News 7(12):24–26
Taubes G (1995) Plagiarism suit wins; experts hope it won’t set a trend. Science 268(5214):1125
Trager R (2018) Chaos at US government research integrity office. Chemistry World: https://www.chemistryworld.com/news/chaos-at-us-governments-research-integrity-office/3008837.article
Van Der Weyden M (2006) Preventing and processing research misconduct: a new Australian code for responsible research. Med J Aust 184(9):430–431
Vastag B (2006) Cancer fraud case stuns research community, prompts reflection on peer review process. J Natl Cancer Inst 98(6):374–376
Wakefield A (2017) Callous disregard: autism and vaccines – the truth behind a tragedy. Simon & Schuster, New York
Wise J (2013) Boldt: the great pretender. Br Med J 346:f1738
Zieler C (2019) Court win for University of Copenhagen in Penkowa case. Uniavisen: https://uniavisen.dk/en/court-win-for-university-of-copenhagen-in-penkowa-case/
Zwart H (2017) The catwalk and the mousetrap: reading Diederik Stapel’s Derailment as a misconduct novel. In: Zwart H (ed) Tales of research misconduct. Library of ethics and applied philosophy, vol 36. Springer, Cham, pp 211–244. https://doi.org/10.1007/978-3-319-65554-3_11
Dual Use in Modern Research: Taming the Janus of Technological Advance
10
Panagiotis Kavouras and Costas A. Charitidis
Contents
Introduction ..... 182
Background ..... 183
Publications on DU: The Implications ..... 186
Key Issues ..... 186
  The DU Dilemma ..... 186
  Publication of DU-Related Research ..... 189
Current Debate ..... 190
  Bridging the Civil-Military Research Divide ..... 191
  Quantifying DU Synergies ..... 193
Anticipated Outcomes ..... 195
  Dissemination and Networking ..... 195
  Open Access on Data ..... 196
  Synergies Between Military and Civilian Sector ..... 196
  Legal Framework ..... 197
  Set of Guidelines ..... 197
Conclusions ..... 198
References ..... 199
Abstract
Despite the fact that Dual Use (DU) research and related ethical dilemmas are almost as old as modern science, the debate concerning Dual Use issues has been going on with increasing intensity during the first two decades of the new millennium. The anthrax terrorist attacks in 2001 and the experimental derivation of mammalian-transmissible H5N1 influenza in 2012 stood as ominous milestones, reflecting the possibility of misuse of technological advances in biotechnology. New discussions among scientists, policy makers, and society are underway, in order to find new solutions to long-standing, acute concerns. In this chapter, we discuss the latest developments in the Dual Use debate, focusing on policy issues concerning the governance of publicly funded Dual Use research. Potential Dual Use concerns arise when a number of critical decisions must be made, for example, about controlling the balance between openness and confidentiality of research data, about regulating synergies between research for civilian and military applications, and about resolving the conundrum of who is to be held responsible: the individual scientist or the overarching legal framework. This chapter is based on an extensive survey of the relevant literature, including EU and US legal frameworks controlling Dual Use research, and the latest findings of an EU-funded study that has delivered a set of best practices for identifying and assessing Dual Use issues in technological research.

P. Kavouras · C. A. Charitidis (*)
School of Chemical Engineering, R-Nanolab, National Technical University of Athens, Athens, Greece
e-mail: [email protected]; [email protected]

© Springer Nature Switzerland AG 2020
R. Iphofen (ed.), Handbook of Research Ethics and Scientific Integrity, https://doi.org/10.1007/978-3-030-16759-2_7

Keywords
Dual Use · Dual Use synergies · Research and development · Risk · Threat · Export control · Open data · Research governance
Introduction

While not named as such until recently (Resnik 2009), the Dual Use (DU) dilemma, and indeed DU research itself, is not new. The term “ethical dilemma” of DU research refers to the fact that a critical decision must be made by the research enterprise in order to strike the right balance between the benefits of a new technological advance and the dangers it might bring if used to create harm, intentionally or unintentionally. The possibility of DU can, in principle, be found in almost every scientific and technological leap forward; however, DU is not necessarily ethically problematic, e.g., when both uses are for benevolent purposes. This chapter deals with DU in cases where both benevolent and malevolent applications emerge from a single technology.

The DU phenomenon has been variously defined. However, there is agreement on a core idea: the very same scientific knowledge and/or technology and/or items can be used for good and bad purposes. Traditionally, research targeting civilian applications was considered the socially acceptable purpose. However, military use for purely defensive reasons can also be considered a good purpose, at least according to the European Commission’s (EC) wording. By contrast, the term bad purpose refers to research that can, potentially and without significant effort, be used for terrorist or criminal action, offensive military applications, or proliferation of weapons of mass destruction. While DU research is defined as research yielding new technologies with the potential for both benevolent and malevolent applications, life sciences research specifically uses the term DU Research of Concern (DURC).
DURC is a subset of DU research defined as life sciences research that, based on current understanding, can be reasonably anticipated to provide knowledge, information, products, or technologies that could be directly misapplied to pose a significant threat with broad potential consequences to public health and safety, agricultural crops and other plants, animals, the environment, material, or national security (Casadevall et al. 2015).
Some recent publications propose alternative approaches or more implicit definitions of DU. For example, Imperiale and Casadevall (2015) have set out a framework within which DU issues can be managed more effectively, since it provides a synthesis of the progress that has been made during the past decade. These authors consider that the National Science Advisory Board for Biosecurity (NSABB) definition offers a major step forward, i.e., that research related to the development of new technologies and the generation of information with the potential for benevolent and malevolent purposes is referred to as DU research. However, this definition requires an assessment that cannot be effectively made by journal editors and Biosafety Committees. Imperiale and Casadevall (2015) have set out five pillars for evaluating research that better define the issues at hand and promote safer research on infectious diseases:

1. Definition of the medical and scientific problems that need to be solved to protect humanity from pandemic threats
2. Acknowledgment that research has inherent risks that can be minimized but never fully abolished
3. Acknowledgment that, although risks and benefits posed by certain experiments are difficult to quantify, efforts must be made to assess the risks and benefits
4. Development of new biosafety approaches, including safer laboratory strains, careful attention to protocol, and constant improvement of infrastructure
5. Creation of a national board to vet issues related to research with dangerous pathogens

Additions to the present definition of DU have been made by Van der Bruggen (2012), who argues that, in addition to more technical aspects, a definition of DU should include the aspect of threats and intentions. This is important since, according to the author, this approach will give guidance to policy makers on how to strike the right balance for ensuring security while avoiding undesirable governmental interventions in the conduct of science.
Kuhlau et al. (2011) study whether current definitions of the precautionary principle are applicable and suitable to the field of DU life sciences research. A reverse DU dilemma has also been described: "the disruptive impact of spill over into the civilian sector of technology developed by the military for legitimate national security purposes" (Marchant and Gulley 2010). DU research has also been investigated across other scientific disciplines, where 21 different definitions of DU research can be found; the "discipline-sensitive" variations among these DU concepts suggest that broader conceptualizations of DU research are needed (Oltmann 2015). In this chapter, we use the term DU to refer to research and technologies that, without much effort, can be used for both beneficial and harmful purposes.
Background

DU research has been connected with landmark scientific and technological advances in the twentieth century, especially after World War II. However, there are cases even before World War I. For example, the well-known Haber-Bosch
P. Kavouras and C. A. Charitidis
process, invented in 1910, led to the large-scale synthesis of ammonia. This process was the basis for the industrial-scale production of fertilizers and explosives. It is worth mentioning that nitrogen fertilizers still underpin food production for half of the world's current population. An emblematic case, spanning the era between the two world wars, is nuclear science and technology, which produced the nuclear reactor, with obvious peaceful applications, and the nuclear bomb, which jeopardized the very existence of human civilization. Another scientific field with obvious DU implications is genetics, in the twentieth century, and genomics, in the twenty-first. Rocket science and jet propulsion are technological fields that matured during, and because of, the Cold War. Specific technological breakthroughs with DU implications, with the underlying scientific field given in parentheses, include the laser (quantum mechanics), microwaves (electromagnetism), and glass-ceramic materials (materials science). Figure 1 depicts a timeline of DU scientific fields, technologies, and specific breakthroughs during the twentieth century. The world had to experience cataclysmic events, like World War II, or traumatic events, like the Cuban missile crisis of 1962 with its threat of imminent thermonuclear war, before the governments of the most powerful states started taking specific governance measures. Figure 2 presents an overview of export control regimes and treaties mainly concerned with the nonproliferation of weapons of mass destruction. Sources of DU concern, like nuclear technology and genetics, gained prominence in public awareness during the twentieth century but have nowadays been overshadowed by other, more acute, concerns: the anthrax terrorist attacks in 2001;
Fig. 1 Timeline of DU technologies. [Figure: a 1910–2020 timeline charting DU scientific fields (Industrial Chemistry, Nuclear Physics, Quantum Mechanics, Aerospace Engineering, Computer Networking, Genetics/Genomics, Synthetic Biology), technologies (explosives, jet propulsion, rocket science, quantum computing, unmanned vehicles, Artificial Intelligence), and specific breakthroughs (ammonia fertilizers, atomic bomb, nuclear reactor, LASER, recombinant DNA, Internet, Human Genome Project, first synthetic virus, first synthetic bacterial genome).]
10 Dual Use in Modern Research
Fig. 2 Timeline of DU governance. Export control regimes are shown above the timeline (Nuclear Suppliers Group 1974, Australia Group 1984, Missile Technology Control Regime 1987, Wassenaar Arrangement 1995), whereas treaties and UN Security Council resolutions are shown below it (The Geneva Protocol 1925, Nuclear Non-Proliferation Treaty 1970, Biological and Toxin Weapons Convention 1972, Chemical Weapons Convention 1997, United Nations Security Council Resolution on nonproliferation of nuclear, chemical and biological weapons 2004). (Based on Cirigliano et al. 2017)
the production of a superstrain of mousepox, also in 2001; the artificial synthesis of the polio virus, in 2002; the reconstruction of the Spanish flu virus, in 2005 (Selgelid 2007); and the experimental derivation of mammalian-transmissible H5N1 influenza, in 2012. All of these highlighted, in the most emphatic way, the potential dangers within the life sciences and adjacent disciplines, like biotechnology and synthetic biology. History repeats itself, as in the case of industrial chemistry at the dawn of the twentieth century. Then, the controversy was between manufacturing fertilizers or explosives; now it is between creating new medicines and lethal biological weapons. The stakes are high, since producing biological weapons is relatively easy and inexpensive compared to nuclear weapons. Building a medium-yield nuclear device requires a tremendous quantity of fissionable isotopes of uranium or plutonium, produced by enormous chemical plants that consume gigantic amounts of electrical energy. Additionally, the technology behind each step of the production of a nuclear device was, and remains, highly classified. On the contrary, the details of how to produce lethal biological agents are readily available in the published scientific literature; all one needs is an easy-to-build biological laboratory. As a result, the DU dilemma has gone beyond the civilian-military dipole; it now has to include misuse and abuse of scientific research by terrorist groups and criminal organizations. The certainties, together with the dangers, of the Cold War/Balance of Superpowers era have dissolved into a world of uncertainties and asymmetric threats.
Publications on DU: The Implications

In scientific journals, publications on DU issues began to appear relatively late. The authors surveyed the DU-related literature in the Scopus database, using the keyword "Dual Use" in the Article Title search field and in other bibliographic fields, namely, Abstract and Keywords. DU publications were categorized into the following main categories:

• General
• DU Ethics and Philosophy
• Policy/Governance
• Specific DU Technologies
• Education on DU
This is not, by any means, the only possible categorization of DU literature. Moreover, in some cases, a specific publication can well appear in more than one category. The General category contains articles helpful for understanding the evolution of the DU concept, while the DU Ethics and Philosophy category contains publications that shed light on the deeper and subtler concepts of DU, their philosophical implications, and their ethical dimensions, the latter generally connected to research ethics concerns. The Policy/Governance category contains articles that map and study the regulation and governance of the relevant research, export regulations, and guidelines for publishing and patenting research, items, and intellectual property. The Specific DU Technologies category contains articles that give a valuable overview of the specific DU concerns raised by different technologies, e.g., synthetic biology, immunology, biosecurity, radar and telecommunications, nanotechnology, and laser technology. Finally, the Education category contains articles targeted at university students and researchers that focus on the need for short curricula on DU issues. Figure 3 shows a diagram of the relative number of publications in each category.
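A categorization of this kind can be sketched programmatically. The keyword lists below are hypothetical illustrations (the chapter does not specify the authors' actual coding rules), and a real survey would of course involve manual judgment as well:

```python
# Illustrative sketch: assigning DU publications to the five categories
# above by keyword matching on titles/abstracts. The keyword lists are
# hypothetical; the chapter's actual coding scheme is not specified.

CATEGORY_KEYWORDS = {
    "DU Ethics and Philosophy": ["ethic", "philosoph", "moral"],
    "Policy/Governance": ["policy", "governance", "export control", "regulation"],
    "Specific DU Technologies": ["synthetic biology", "nanotechnology", "laser", "immunology"],
    "Education on DU": ["education", "curricul", "training"],
}

def categorize(text: str) -> list[str]:
    """Return all matching categories; fall back to 'General'.

    A publication may appear in more than one category, as noted above.
    """
    text = text.lower()
    hits = [cat for cat, words in CATEGORY_KEYWORDS.items()
            if any(w in text for w in words)]
    return hits or ["General"]

print(categorize("Export control policy for dual-use research"))
print(categorize("The evolution of the dual-use concept"))
```

The fallback to "General" mirrors the role that category plays above: articles about the DU concept itself rather than a specific technology, policy, or educational angle.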
Key Issues

The DU Dilemma

The key dilemma in DU research arises from the potential conflict between the freedom of research, pursued without necessarily giving due attention to incipient dangers, and the duty to avoid causing harm, which requires a careful assessment of the risks posed by research and, as a result, leads to imposing a restrictive framework upon scientists (Miller and Selgelid 2007). Together with this cardinal issue comes another, equally important one: Who is to resolve the above dilemma, the research enterprise or policy makers? The complexity of the research enterprise, which involves different stakeholders and sensitive relations with external decision-making mechanisms, like governments, makes the question difficult to answer. To be clear, the research enterprise is constituted by all stakeholders related to scientific research, i.e., its scientific
Fig. 3 Thematic composition of the DU-related literature
workforce, funding and administration mechanisms, and communication. An indicative, yet not necessarily exhaustive, list of stakeholders includes professional researchers, universities (higher education entities), research institutes, funding bodies, research appraisal committees, academies of science, professional associations, learned societies, transnational science/research foundations, research integrity committees (at the institutional, regional, national, and transnational levels), and publishing houses. In such a multi-actor enterprise, with a complex structure and interdependent relations between all actors, it is challenging to draft a common set of guidelines that would, among other things, define responsibilities. Giving the research enterprise the responsibility to self-regulate requires that all related stakeholders, all over the world and throughout all scientific disciplines, agree on a common set of principles and guidelines related to research ethics and integrity. This is inherently difficult, considering that even in a region with relatively homogeneous scientific traditions, such as Europe, existing codes of conduct tend to set only a generic framework, e.g., the European Code of Conduct for Research Integrity (ALLEA 2017), not to mention that issues of research ethics and integrity are currently a matter of intense research and debate. In Europe, the European Commission H2020 framework program, through the Science with and for Society call (H2020-SwafS-2018–2020), is currently funding projects that study a wide array of related issues, e.g., clinical ethics (i-CONSENT), research ethics guidelines for genomics and human enhancement (SIENNA), and research ethics and integrity guidelines for non-medical sciences (PRO-RES). Such projects are bound to involve a wide array of stakeholders from the research enterprise, although consensus is not necessarily guaranteed.
Moreover, before passing the responsibility for regulating research solely to policy makers, we must consider that they cannot have deep insights into the potential DU, or other ethical, implications of cutting-edge research, for lack of the necessary anticipatory knowledge or foresight. For some experts the two questions address the same issue from different perspectives (Dubov 2014): deciding to safeguard the freedom of research is the same as giving the responsibility of decision to the research enterprise; deciding to avoid the risks of causing harm is the same as giving the responsibility of decision to policy makers (Salloch 2018). Other experts urge policy makers to empower and engage the
public, so that citizens can challenge decisions regarding the development and application of DU technologies and help construct a scientific road map (Mark 2010). From the researchers' perspective, individual scientists should be able to decide what types of research not to undertake and, for ongoing research, which results to avoid publicizing. Research institutions, like universities, should develop systems for monitoring potentially dangerous research activities by their members/employees. Professional societies must influence the development and implementation of ethical codes of conduct and play an important role in the education of their members. Publishers should decide whether scientific results are to be communicated to the scientific community, basing their decisions not solely on scientific quality but also on the incipient threats the results create through their DU implications (Salloch 2018). Responsible Research and Innovation (RRI) offers a guiding set of principles that can help navigate the policy and regulatory dilemmas arising in fields of technology and science whose impacts are poorly characterized or highly uncertain. Moreover, RRI also asks and seeks to answer the question: "What sort of future do we collectively want to create for Europe?" (Owen et al. 2012). The RRI framework, which appeared in Europe and is currently being studied and applied mainly there, provides significant guidance that should be taken into account by European research projects. From the policy-making perspective, national governments should decide on research funding and legal controls regarding potentially dangerous materials and technologies. Despite the important "harmonizing" role of the EC, national governments in Europe still play a decisive role in decision-making in matters of research governance and regulation.
National regulatory frameworks on research integrity are still highly inhomogeneous throughout Europe (Godecharle et al. 2013), reflecting the different approaches of national governments. Alternatively, international and transnational bodies can build global policies with respect to the threat of bioterrorism and bio-warfare. An approach analogous to that taken with nuclear technology could be implemented, with international treaties setting the framework within which DU research of concern (DURC) should be conducted. Self-regulation and statutory legal control should not be considered mutually exclusive. Additionally, the representatives of free research and the representatives of security do not belong to two opposing approaches to research governance; for example, experts from science can well be placed in policy-making roles. In fact, DU dilemmas are often the result of the merging of the concerns of researchers and policy makers; in this sense, knowledge must flow freely between scientists and policy makers, since policy makers are not aware of all aspects of a new technology and scientists are not necessarily aware of the security issues raised by a scientific breakthrough. The case of the mousepox study offers a useful illustration. Australian scientists genetically engineered the mousepox virus to develop an immunocontraceptive. To their surprise, they accidentally produced a superstrain of mousepox that killed both naturally resistant and vaccinated mice. The scientists who conducted this research lacked security clearance and were systematically denied access to information
essential to assessing the security risks of the relevant publication. Since academic researchers do not have deep knowledge of security studies, assessing the dangers of publishing the mousepox study may have been beyond their expertise. As a result, they could not assess the risks produced by their research (Selgelid 2009). On the other hand, policy-making bodies with access to sensitive information about security risks cannot single-handedly determine the DU potential in advanced areas of research, where recognizing incipient risks depends on a deep understanding of the science behind a novel technology. The balance between freedom of research and security must be reflected in a sharing of responsibilities jointly assumed by the research enterprise and policy makers and guided by the principles of beneficence and nonmaleficence. Analogously, the responsibility to keep up with DU implications must be shared by the individual scientist and the overarching legal framework, under the following conditions: scientists must be properly trained, so that they are able to discern DU implications, and legal frameworks must be drafted in cooperation between actors of the research enterprise and policy making.
Publication of DU-Related Research

Within the scientific community, the main channel for communicating novel research results and technologies is publication in scientific journals. This means that the issues raised by publication of DU-related research are very important. Editors contribute to the creation of journal policies and make publication decisions. As a result, they have a central role in managing the potential dangers of publishing DU-related scientific research that can be misused, for example, by terrorist groups or criminal organizations. Although DU concerns have been most prevalent within the life sciences over the last two decades, journal editors report that almost all other scientific disciplines raise DU concerns as well (Oltmann 2015). A relatively recent survey on editors' attitudes and experiences in monitoring the review process of DU-related research was conducted among 127 chief editors of life sciences journals from 27 countries (Patrone et al. 2012). The survey reported that fewer than one in ten editors had any experience with biosecurity review and that no manuscript had been rejected on biosecurity grounds. These results stood in striking contrast to the fact that 75% of the editors agreed that consideration of biosecurity risks during the review process is imperative. Moreover, the survey showed a lack of consensus among editors on how to handle specific issues in the review and publication of research with potential DU implications. Publication of underlying research data, as supportive material, is gaining ground as part of the data sharing procedures promoted by the EC under the Open Research Data Pilot (Article 29.3). Specifically, participants in projects funded by Horizon 2020, the EU Framework Programme for Research and Innovation, are encouraged to upload their publications to electronic repositories, such as Zenodo, accessible through the OpenAIRE portal. This portal is an entry point for
linking publications to underlying research data, which must be findable, accessible, interoperable, and reusable. The concept of DU is valuable when reflecting on what procedures must be implemented when deciding whether underlying data are to be openly communicated, analogously with the discussion about procedures for publishing DU-related papers. Research data should be treated as carefully and cautiously as a scientific publication or an artifact with DU potential. When developing future DU/data sharing controls and educational initiatives, it is important to address this issue in a manner that supports the openness and freedom of research without overlooking the responsibility to consider potential risks or the legal obligation to share data. Further debate will be useful to examine how DU and data sharing initiatives can be reconciled, so as to give scientists a frame of ethical norms with regard to their data (Bezuidenhout 2013). An example concerns the 2011–2012 controversy involving the H5N1 papers. The American Society for Microbiology (ASM) instituted an ad hoc process for reviewing manuscripts with potential DU content (Casadevall et al. 2015). Since then, the process of reviewing such manuscripts has become more formal, involving three distinct phases: screening, discussion, and decision. The screening phase is designed to rapidly and unobtrusively identify manuscripts that require discussion. It takes as a starting point the author-declared information in the submission cover letter, which can alert journal editors and reviewers to the need to consider biosafety and biosecurity issues in evaluating the work. At the discussion phase, members of the ASM Responsible Publication Committee (ARPC) read the manuscript and interact via e-mail, which can lead to a teleconference if issues requiring more in-depth discussion have been recognized. Finally, at the decision phase, the ARPC decides to accept, reject, redact, or publish.
This decision must be accompanied by a rationale describing the evaluation process, elaborating the biosafety and biosecurity risk mitigation measures in place, and highlighting the benefits of the research.
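The three phases described above can be summarized as a simple decision flow. This is an illustrative sketch only; the trigger conditions and return labels are our assumptions, not ASM policy:

```python
# Illustrative sketch of the ASM three-phase review flow described above
# (screening -> discussion -> decision). The conditions and outcome
# labels are assumptions made for illustration.

def review(author_declared_du: bool, committee_flags_risk: bool) -> str:
    # Screening: author-declared information in the cover letter alerts
    # editors to potential biosafety/biosecurity issues.
    if not author_declared_du:
        return "standard peer review"
    # Discussion: ARPC members read the manuscript and confer by e-mail,
    # escalating to a teleconference for issues needing in-depth debate.
    if committee_flags_risk:
        # Decision: accept, reject, redact, or publish, accompanied by a
        # rationale documenting the risk/benefit evaluation.
        return "decision with documented rationale"
    return "accept"

print(review(author_declared_du=False, committee_flags_risk=False))
print(review(author_declared_du=True, committee_flags_risk=True))
```

The design point is that screening is cheap and applies to every submission, while the costly discussion phase is reserved for the small fraction of manuscripts that trigger it.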
Current Debate

Regarding DU research governance, the basic concern of the EC nowadays is the regulation of synergies between research for civilian and research for military applications. By promoting DU synergies, the EC seeks the potential application of technologies beyond their originally intended use, i.e., transforming a military technology into a civilian application or vice versa. This issue cannot be approached solely through arguments based on ethical reflection (Ehni 2008; Pustovit and Williams 2010), professional codes (Salloch 2018), or security guidelines. The "flow of information" connected with DU synergies has changed significantly since the beginning of the twentieth century. Until the 1960s, knowledge and technology flowed mainly from the military to the civilian research sector. The pillars of innovative research at that time were nuclear technology, jet propulsion, and rocket science, all principally designated for military uses. During the 1970s the flow of knowledge and technology was balanced, while during the 1990s this trend was
completely reversed, i.e., technology for civilian purposes was used for military applications (Williams-Jones et al. 2014). As described above, this is important, since research for civilian applications is, by default, more openly disseminated than research for military applications. This renders the former more susceptible to misuse, something that had not happened with, e.g., nuclear technology, which was considered, by default, top secret. This global trend of fertilizing military research with research for civilian applications is not being closely followed in Europe; H2020 legislation raises barriers to the use of scientific results and technology for military applications. Because of this, and also because of the significant reduction in defense expenditures in Europe caused by the economic crisis, there are concerns that Europe's competitors are gaining ground in their relative military strength and high-technology army capabilities. According to a recently published discussion paper from the "Friends of Europe" think tank (www.friendsofeurope.org/), Europe's leading place in innovation at all levels is jeopardized by the segmentation of civil and military research (Cami 2015). Considering that "Friends of Europe" strives to contribute toward a better understanding of the challenges facing Europe and its citizens, such concerns are understandable. From a historical point of view, a country that experiences a cutback in research expenditure will suffer a decline in its military might relative to rival countries (Kennedy 1989; deGrasse Tyson and Lang 2018). The need for a "renovated" regulatory framework for DU research permeates a recent report on responsible DU published by the Human Brain Project's Ethics and Society division (Aicardi et al. 2018). There, it is pointed out that a clear ethical distinction between civilian and military uses based on existing arguments is neither appropriate nor helpful.
On the basis of this and other considerations, the authors of the report have drafted a number of recommendations for the EU, in an effort to update the whole regulatory framework for DU research.
Bridging the Civil-Military Research Divide

The segmentation of civil and military research reflects not only the intrinsic tension between the openness of scientific research for civil applications and the secrecy of classified military research. It also reflects existing differences in the scope of end products, which lead to differences in product lifetime, development specificity, and motivations for technical change. For example, products for civil applications usually have a more limited lifetime; a computer screen is obsolete after a few months in use, while the head-up display of a fifth-generation fighter aircraft remains fully operational for years. The relatively high specificity of military products is imposed by the fact that in-house production is preferred, for purely strategic reasons (Droff 2014). For such reasons, the civil and military research sectors, and even more so the industrial sectors, function at different paces, with civilian-oriented research taking the lead after the 1970s (Williams-Jones et al. 2014). Seen from a historical perspective, roughly in the 1980s, DU synergies were consciously promoted by countries representing completely different political and
economic systems. China's and the Soviet Union's industrial structures, overwhelmingly military in character, started moving toward more balanced technology expenditures. Japan followed an analogous trend from the opposite side, i.e., departing from an industrial structure focused on the civilian market. The USA lay somewhere in between, with DU synergies more developed. The space exploration programs provide a well-known example (deGrasse Tyson and Lang 2018; ISECG 2013). Information on new NASA technology that may be useful to industry is available in periodical and website form in "NASA Tech Briefs" (NASA 2007), while successful examples of commercialization are reported annually in the NASA publication "Spinoffs" (NASA 1976). Taking into account the everyday civil applications that came out of NASA's space programs, it can be assumed that the abandonment of the Strategic Defense Initiative (SDI) program did not affect only defense-/military-related industries. It is likely that a whole array of DU technologies, like missile, laser, sensor, and computing technology, would have developed at an unprecedented pace had SDI come into being. In Europe, the former European Community (later the European Union), although a purely civilian organization by charter, launched research programs such as the European Strategic Program on Research and Information Technology to support technologies of a DU nature. The global trend during the 1990s was the forging of the industrial technology base of the time to support both economic growth and military security. The need to boost DU synergies is a policy reaction to two major market movements: the globalization of markets and manufacturing systems, and the leadership of civilian technology developments in many military industrial sectors (Watkins 1990).
Research and Innovation (R&I) demand ever-increasing funding, while product lifetimes become shorter, e.g., in computer and consumer electronics (smartphones, wearables, etc.). To successfully amortize ever-increasing product costs, manufacturers struggle to expand markets and achieve economies of scale, i.e., to increase profit by reducing the price of a unit of product through increasing the size of the market they reach. Export controls, which until the 1980s were often used by powerful countries like the USA as a pretext to avoid sharing state-of-the-art technology, cannot keep up with the rapid pace of technological advances and their almost immediate application to commercial products. Currently, the security concerns raised by existing or incipient DU synergies are overshadowed by the economic benefit the military stands to gain from an increasing reliance on advances in civilian technologies. A crucial factor for bridging the gap between the civil and military research and industrial sectors is the diffusion and sharing of technology. It has been proposed that technology sharing and diffusion would be more effectively supported by a policy shift from fostering DU synergies for specific technologies to harmonizing the structures and cultures of the civilian and military systems of research and industry (Watkins 1990). More specifically, minimizing the differences in funding procedures and bridging the different approaches to secrecy in military and civil research would render technologies "DU by design," since both sides of the DU controversy would be made mutually compatible.
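The economies-of-scale reasoning above can be made concrete with a toy calculation, using invented figures: with a fixed R&I outlay, the cost carried by each unit falls as the market grows.

```python
# Toy illustration of amortizing a fixed R&I cost over market size.
# All figures are invented; the point is only that per-unit cost falls
# as the addressable market grows.

def unit_cost(ri_cost: float, marginal_cost: float, units_sold: int) -> float:
    """Amortized R&I cost per unit plus per-unit manufacturing cost."""
    return ri_cost / units_sold + marginal_cost

RI_OUTLAY = 1_000_000_000  # hypothetical development cost of a product line
MARGINAL = 200.0           # hypothetical manufacturing cost per unit

for units in (1_000_000, 10_000_000, 100_000_000):
    print(f"{units:>11,} units -> {unit_cost(RI_OUTLAY, MARGINAL, units):8.2f} per unit")
```

With these numbers, a tenfold larger market cuts the amortized share of the R&I outlay from 1000 to 100 per unit, which is why manufacturers of short-lived products push so hard to expand the markets they reach, including the military one.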
There have been suggestions, from a policy recommendations point of view, that DU synergies be based on measures that transcend the research enterprise or industry (Cami 2015). DU synergies would then involve creating an environment designed to satisfy industry's needs, without necessarily distinguishing between the civilian and military sectors. This is clearly a far-reaching, if achievable, goal: it requires coordinated changes to education systems, defense procurement, and export control policies. Watkins (1990: 398) claims that this kind of synergetic development of the civil and military research and industrial sectors will be facilitated by "increasing the general level of technical, managerial, and organizational skills, and by increasing the opportunities for the free exchange of ideas among firms and between industrial sectors by minimizing the institutional barriers between them."
Quantifying DU Synergies

A number of studies concerned with patents of DU-related research help to quantify the benefits of DU synergies. Lee and Sohn (2017) explore whether military technology with a higher level of duality has been more valuable than that with a lower level of duality. For this study, military patents registered with the United States Patent and Trademark Office during 1976–2014 were used as a sample. It was found that military technologies are more valuable when the technology itself can be used in various sectors, including the civilian sector, and can be converged with technologies in different fields. However, it was also found that the relation between patent value and diffusion effects on subsequent inventions is not confined to the civilian sector. The findings provide evidence of the impact of DU policies on military R&I. Although the USA is the undisputed leader in military R&I patents, the close connection between DU policies and patent production also concerns Europe; Europe's combined output of military patents reaches almost one third of that of the USA (numbers shown in Fig. 4). It must be stressed that more than half of registered
Fig. 4 Nationality of applicants of military R&D patents from 1976 to 2014. (Based on Lee and Sohn 2017)
patents have both a military and a civil classification, especially in the case of the USA (Acosta et al. 2017). The "composition" of the technological knowledge produced by leading companies in the defense industry has also been examined (Acosta et al. 2017). Specifically, the question was whether large defense companies producing civilian, military, and mixed patents are generating DU technologies. The results showed that while the production of civilian patents is related to the size of the company, this relationship does not hold for the production of military patents. An examination of the production of technological knowledge by leading companies in the defense industry shows that firms engaged in DU research have higher military sales, a greater number of employees, and a larger number of patents than those that are not. Firms engaged in DU are firms that take advantage of research initially carried out for civilian purposes and transfer it to military purposes, and vice versa. European firms were also found to be more involved in DU per employee than US firms, indicating a greater technological productivity of European firms engaged in DU research. These findings help to identify which firms should be targeted by government policies if increasing DU technologies becomes a political objective. Politically, governments determine the direction of defense and civilian technology through public expenditure, and governmental support is a key factor in taking concrete actions to exploit the DU potential of research, e.g., by developing innovative projects based on maximizing DU synergies in both directions between civil and defense research. In this respect, better knowledge of the characteristics of firms engaged in the production of different types of technologies and DU products may help identify which companies should be targeted.
Defense firms may be interested in communicating not only their figures as defense suppliers but also their role as contributors of technologies that might be used by civilians as well. Including information about the DU of defense technology or the technological knowledge generated by a firm in its corporate social responsibility (CSR) report could be useful in stressing their contribution to public ends. This issue is particularly relevant because defense firms have typically been excluded from CSR. The potential civil use of military-oriented research was explored by a quantitative analysis based on the number of citations a military patent received in subsequent patents, out of a pool of 582 military patents (Acosta et al. 2010). Attention was paid to the type of the citing patents, i.e., civilian or military use. According to this study:

• The most patented military technology corresponds to applications with DU potential.
• The most cited technology is of the mixed type (both civil and military codes).
• Concerning the technological uses of military patents, 25% are mixed (civil and military) and 38.7% exclusively civil.
• The original patents of the mixed type (civil and military) are those that receive the largest number of civil technology citations.
• The country that makes the greatest use of military technology for civil purposes is the USA, followed by Germany.
10 Dual Use in Modern Research
• The nationality of the military patent and that of the citing patent is a determining factor for its civil use; British, French, and US military patents are the most cited for civil uses, while Japanese patents are those that make the greatest civil use of all the military patents.

Besides the quantitative results found in published studies, the JANUS project (Charitidis 2018) – the latest EC-funded project to draft a set of best practices for boosting DU synergies – delivered a set of best practices for identifying and assessing the DU issues in enabling technologies research. JANUS presented an overview of the basic arguments in favor of DU research, as they appear in the literature. It is evident that the arguments relate directly to financial issues or to issues connected with expanding technological applications, which in turn raise financial issues indirectly.
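The citation-based classification behind these findings can be sketched in a few lines of code. The use of IPC class prefixes F41 (weapons) and F42 (ammunition/blasting) to mark "military" codes, and the sample patents below, are illustrative assumptions for this sketch, not data or code from Acosta et al. (2010):

```python
from collections import Counter

# Illustrative assumption: IPC classes F41 and F42 are treated as "military"
# codes; any other classification code counts as "civil".
MILITARY_CLASSES = ("F41", "F42")

def patent_type(ipc_codes):
    """Return 'military', 'civil', or 'mixed' for one patent's set of IPC codes."""
    has_mil = any(code.startswith(MILITARY_CLASSES) for code in ipc_codes)
    has_civ = any(not code.startswith(MILITARY_CLASSES) for code in ipc_codes)
    if has_mil and has_civ:
        return "mixed"
    return "military" if has_mil else "civil"

def citation_shares(citing_patents):
    """Shares of civil/military/mixed patents among those citing a military patent."""
    counts = Counter(patent_type(codes) for codes in citing_patents)
    total = sum(counts.values()) or 1  # avoid division by zero on empty input
    return {kind: counts[kind] / total for kind in ("civil", "military", "mixed")}

# Hypothetical patents citing one original military patent:
sample = [
    {"F41A 3/00"},                 # purely military
    {"A61B 5/00"},                 # purely civil (medical)
    {"F42B 12/02", "G01S 13/00"},  # mixed: ammunition plus radar codes
]
shares = citation_shares(sample)
```

Applied to a real pool of citing patents, shares computed this way are what produce figures such as the 25% mixed and 38.7% exclusively civil reported above.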
Anticipated Outcomes

DU synergies are not an abstract concept; to boost them, mainly in Europe, the barriers that keep civil and military research apart should be lowered or rendered permeable. The barriers have been raised from both sides, as described above, while it is generally recognized that there is a lack of awareness of, and information on, the advantages of DU synergies. The legal framework regulating the interaction should be tuned with a view to the anticipated results. According to recent literature and studies financed by the European Commission, several other aspects must be addressed as well (Charitidis 2018). There is a persistent need to boost dissemination and networking between the civil and military research establishments, while the need for open access to research data is pressing, mainly on the side of researchers who find themselves on both sides of the DU controversy. Effective dissemination and networking must take into account existing difficulties, among them psychological barriers, diverse mind-sets, and differences between civil and military research funding structures, produced by the scale and pace of the civil and military markets. In the following paragraphs, we provide an overview of measures that can be materialized into best practices aiming to boost DU synergies.
Dissemination and Networking

Networking of SMEs oriented to DU research with academic groups oriented to civilian research would boost the exchange of knowledge. For example, results not included in the scientific literature will most likely remain below the radar of a wide scientific audience. For such networking to become feasible, the organization of conferences dedicated to DU research and the establishment of a DU-oriented cluster/council would provide a stable basis for DU-related knowledge brokering. Those kinds of events would facilitate the exposure of researchers to military decision-makers. Currently, one of the barriers to such exposure is the exceptionally high cost of displaying DU results at military trade fairs.
Open Access to Data

The scientific community is reluctant to share data, even in projects undertaking research for civilian purposes. Freely accessible database(s) of project results, following the FAIR principles, must be created for DU-related projects. Moreover, confidentiality restricts the publication of results and data, since the military establishment is extremely restrictive insofar as its research is usually considered classified. This can lead to duplication of work, which must be resolved, though not necessarily by following the Open Access initiative of Horizon 2020 (i.e., the practice of providing online access to scientific information that is free of charge to the end user and reusable) (H2020 2017). "Scientific information" refers to either peer-reviewed scientific research articles or research data (data underlying publications, curated data, and/or raw data). Novel means to share DU-related data must urgently be found and tested, since the open dissemination of scientific information, as described above, could raise serious threats if not combined with the appropriate safeguards.
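As a minimal illustration of how a FAIR-style record could combine open access with a DU safeguard, consider the sketch below. The field names, values, and release rule are hypothetical, invented for this example; they are not drawn from any actual repository schema or from H2020 guidance:

```python
# Hypothetical FAIR-style metadata record for a DU-related dataset.
record = {
    "identifier": "doi:10.0000/example-du-dataset",  # Findable: persistent identifier
    "title": "Nanomaterial stress-test measurements",
    "access": "restricted",          # Accessible: "open", "restricted", or "embargoed"
    "access_conditions": "security review required before release",
    "format": "text/csv",            # Interoperable: standard, machine-readable format
    "license": "CC-BY-4.0",          # Reusable: explicit license
    "dual_use_flag": True,           # safeguard: dataset still needs DU screening
}

def may_release(rec):
    """Release a dataset openly only if access is open and no DU screening is pending."""
    return rec["access"] == "open" and not rec["dual_use_flag"]
```

The point of the sketch is that openness and safeguards are orthogonal fields: a record can be fully FAIR-described yet withheld until the dual-use flag is cleared.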
Synergies Between the Military and Civilian Sectors

Existing differences in mind-set create communication difficulties. Research for military purposes takes risks with a try-and-fail mentality, whereas research for civilian purposes has access to fewer resources and, consequently, cannot take such risks. Scientists who conduct research for civilian purposes find it difficult to work directly with military contractors. The military establishment is usually reluctant to accept technologies not specifically developed for military use, because technologies for civil applications either have been openly disseminated through the channels of scientific literature or, in the case of patented research, have been replicated relatively quickly. This threatens sensitive defense applications:

• In cases of DU research, it must be clear what the final military end use might be. Furthermore, a restrictive line should be drawn for certain applications. For example, when a civil-oriented research result could be used, without significant additional research effort and resources, as a weapon of mass destruction, the specific research should not be openly disseminated before the necessary safeguards have been put into place by security experts and policy makers.
• European policies for the engagement of DU-related SMEs are not as inclusive as those in countries like the USA, according to the findings of the JANUS project. The researchers interviewed believe that a change in defense policies is needed.
• The defense practice, i.e., the road map for defense-oriented technological needs, must be disseminated to DU-oriented SMEs so that they will clearly discern in which kind of research they are involved.
• Investment in DU resources and skills for the public security and defense sectors is necessary.
• The creation of training programs on DU for researchers in the civilian and defense sectors and, as a result, the creation of a common scientific and research culture between these two sectors are recommended.

Such developments are bound to produce a mind shift in the whole research community that will base the acknowledgment of DU issues on well-informed researchers.
Legal Framework

A legal framework composed of "static" regulations about DU will soon become impractical, if not obsolete, as new technologies arise. Provision must be made for constant monitoring of how effective the regulations are with regard to emerging cases, such as progress in nanotechnology, robotics, artificial intelligence, and crisis/disaster incidents. Emerging cases give rise to new challenges, such as higher uncertainty in risk assessment and poorly assessed impacts, that must be taken into account by the relevant regulations. Additionally, there is a need for an active group to monitor the projects that might raise these concerns: the group's opinions must be reflected in the amended regulations and guidance notes. Regulations and guidelines need to be simple and transparent. Many experts have detected a lack of detail and of clear definition of the different terms used. Moreover, while some of the calls on security are clearly targeted at improving methods for the prevention of terrorism, suicide bombing, explosive production, etc., the current framework appears to discourage interaction between researchers working on civil applications and researchers working on defense/military applications. The development of effective means of prevention should be welcomed and encouraged, especially since the military sector is interested in H2020 research that involves bomb factory detection, etc. SME ethical self-monitoring must be supervised by a higher authority with the mandate to impose sanctions.
Set of Guidelines

The following set of guidelines has been composed by the JANUS project. The guidelines are oriented to three groups of stakeholders: policy makers; project beneficiaries and project officers; and expert evaluators, monitors, and ethical screening committees. An overview of the best practices, targeted to the different actors, follows.
Policy Makers

• A new description of the DU aspect is needed. DU should not be considered a "by default" negative concept; DU should stand as a neutral descriptor of a specific piece of research, simply describing a potential added value, since it can have a double impact. Regulations and guidelines need to be simple and transparent.
• A training workshop on DU issues needs to be included during the kickoff meeting of projects with DU potential, in order for all partners to become aware of possible ethical implications within a project, with special emphasis on DU concepts.
• The European Commission should establish clearly DU-oriented calls, i.e., containing topics relevant to DU technologies, in order to encourage DU synergies. A special monitoring committee would supervise the implementation of the research in order to avoid potential risks.
• The involvement of entities from the security, defense, and civil sectors and academia should be encouraged or even imposed in consortia undertaking DU-oriented research.

Project Beneficiaries and Project Officers

• DU projects should boost the visibility of their results so as to increase synergies between DU-oriented entities.
• The applicants of DU-related research should clearly describe in the proposal the impact of their work on both the civilian and military sectors.
• Project officers must encourage matchmaking activities between DU-oriented entities.
• At least one dissemination activity in a defense conference/exhibition should be foreseen in the submitted proposals.
• All DU-related projects should have a dedicated website, so as to raise public awareness of the benefits of the ongoing research.

Expert Evaluators and Monitors and Ethical Screening Committees

• The potential military end use must be clearly described.
• If the produced knowledge is expected to be easily implemented, a dedicated board of experts must monitor, on a constant basis, the progress of the research and decide whether special measures should be taken in order to avoid potential misuse.
Conclusions

DU research, technologies, and artifacts can be regarded as inherent to the human endeavor of scientific and technological advance. DU concerns are not a special issue that needs attention only in some cases but rather an omnipresent necessity to safeguard the beneficial role of science toward society. DU concerns are not static; they evolve together with science and technology, but they are also heavily affected by economic and political changes. The rise of genomics and
synthetic biology, the widening division of labor, and the wave of political changes that swept Eastern Europe, all taking place during the last quarter of the twentieth century, have left a heavy mark on today's modus operandi of the research enterprise. DU concerns can neither be focused only on the regulation of export controls of specific items/artifacts, as during the Cold War era, nor be limited to the dichotomy of civil versus military technology, another side effect of the era of superpower balance. DU is perhaps the most complex, controversial, and difficult issue among the multitude of research ethics concerns. It entails deep knowledge of cutting-edge science, technology assessment, risk analysis, economic trends, and geopolitical shifts. Resolving it requires the concerted effort of the research enterprise and governance stakeholders: to strike the right balance between the freedom of research and security, without withholding the cross-fertilization of technology between the civil and military research and industrial sectors. Currently, it is evident that we need more transparent research on both sides of the DU dilemma. Succeeding in this means rendering legal frameworks more competent against the misuse or abuse of research results and, at the same time, allowing more efficient scientific and technological advance. It will surely be a difficult process, but, eventually, it is our safest resort to tame the Janus of technological advance.
References

Acosta M, Coronado D, Marín R (2010) Potential DU of military technology: citing patents shed light on this process. Def Peace Econ 22:335–349
Acosta M, Coronado D, Ferrandiz E et al (2017) Patents and dual-use technology: an empirical study of the world's largest defense companies. Def Peace Econ. https://doi.org/10.1080/10242694.2017.1303239
Aicardi C, Bitsch L, Bang Bådum N et al (2018) Opinion on "responsible dual use", ethics and society division. Human Brain Project. https://sos-ch-dk-2.exo.io/public-website-production/filer_public/f8/f0/f8f09276-d370-4758-ad03-679fa1c57e95/hbp-ethics-society-2018-opinion-on-dual-use.pdf. Accessed 6 May 2019
ALLEA (2017) The European code of conduct for research integrity. https://ec.europa.eu/research/participants/data/ref/h2020/other/hi/h2020-ethics_code-of-conduct_en.pdf. Accessed 23 Mar 2019
Bezuidenhout L (2013) Data sharing and dual-use issues. Sci Eng Ethics 19:83–92
Cami G (ed) (2015) Dual use technologies in the European Union – prospects to the future. Friends of Europe, Brussels
Casadevall A, Dermody TS, Imperiale MJ et al (2015) Dual-use research of concern (DURC) review at American Society for Microbiology journals. mBio 6:e01236-15
Charitidis C (2018) Best practice for identifying and assessing the dual-use issues in enabling technologies research, JANUS. Directorate-General for Research and Innovation, Key Enabling Technologies. Available via EUROPA, EU law and publications. https://publications.europa.eu/en/publication-detail/-/publication/3e312d8e-8a35-11e8-ac6a-01aa75ed71a1/language-en. Accessed 5 Oct 2018
Cirigliano A, Cenciarelli O, Malizia A et al (2017) Biological dual-use research and synthetic biology of yeast. Sci Eng Ethics 23:365–374
deGrasse Tyson N, Lang A (2018) Accessory to war. W.W. Norton, New York
Droff J (2014) The economic and spatial sides of defence support. Evol Bound Def 23:51–73
Dubov A (2014) The concept of governance in dual-use research. Med Health Care Philos 17:447–457
Ehni HJ (2008) Dual use and the ethical responsibility of scientists. VARIA-Ethics Sci 56:147–152
Godecharle S, Nemery B, Dierickx K (2013) Guidance on research integrity: no union in Europe. Lancet 381:1097–1098
H2020 (2017) Guidelines to the rules on open access to scientific publications and open access to research data in Horizon 2020. European Commission, Directorate-General for Research & Innovation. http://ec.europa.eu/research/participants/data/ref/h2020/grants_manual/hi/oa_pilot/h2020-hi-oa-pilot-guide_en.pdf. Accessed 23 Mar 2019
HBP (2018) Human Brain Project. https://sos-ch-dk-2.exo.io/public-website-production/filer_public/f8/f0/f8f09276-d370-4758-ad03-679fa1c57e95/hbp-ethics-society-2018-opinion-on-dual-use.pdf. Accessed 23 Mar 2019
Imperiale MJ, Casadevall A (2015) A new synthesis for dual use research of concern. PLoS Med 12:e1001813
ISECG International Space Exploration Coordination Group (2013) Benefits stemming from space exploration. https://www.nasa.gov/sites/default/files/files/Benefits-Stemming-from-Space-Exploration-2013-TAGGED.pdf. Accessed 23 Mar 2019
Kennedy P (1989) The rise and fall of the great powers. Fontana Press, Hammersmith, London
Kuhlau F, Hoglund AT, Evers K et al (2011) A precautionary principle for DU research in the life sciences. Bioethics 25:1–8
Lee BK, Sohn SY (2017) Exploring the effect of dual use on the value of military technology patents based on the renewal decision. Scientometrics 112:1203–1227
Marchant G, Gulley L (2010) National security neuroscience and the reverse dual-use dilemma. AJOB Neurosci 1:20–22
Marks JH (2010) A neuroskeptic's guide to neuroethics and national security. AJOB Neurosci 1:4–12
Miller S, Selgelid MJ (2007) Ethical and philosophical consideration of the dual-use dilemma in the biological sciences. Sci Eng Ethics 13:523–580
NASA (1976) Spinoff. https://spinoff.nasa.gov. Accessed 23 Mar 2019
NASA (2007). https://web.archive.org/web/20141004003618/http://www.techbriefs.com/legal. Accessed 23 Mar 2019
Oltmann SM (2015) Dual use beyond the life sciences: an LIS perspective. Libr Inf Sci Res 37:176–188
Owen R, Macnaghten PM, Stilgoe J (2012) Responsible research and innovation: from science in society to science for society, with society. Sci Public Policy 39:751–760
Patrone D, Resnik D, Chin L (2012) Biosecur Bioter 10:290–298
Pustovit SV, Williams ED (2010) Philosophical aspects of dual use technologies. Sci Eng Ethics 16:17–31
Resnik DB (2009) What is dual use research: a response to Miller and Selgelid. Sci Eng Ethics 15:3–5
Salloch S (2018) The dual use of research ethics committees: why professional self-governance falls short in preserving biosecurity. BMC Med Ethics 19:53
Selgelid MJ (2007) Bioterrorism, society and health care ethics. In: Ashcroft RE, Dawson A, Draper H, McMillan JR (eds) Principles of health care ethics. Wiley, Hammersmith, London, pp 631–637
Selgelid MJ (2009) Dual-use research codes of conduct: lessons from the life sciences. NanoEthics 3:175–183
Van der Bruggen K (2012) Possibilities, intentions and threats: dual use in the life sciences reconsidered. Sci Eng Ethics 18:741–756
Watkins TA (1990) Beyond guns and butter: managing dual-use technologies. Technovation 6:389–406
Williams-Jones B, Olivier C, Smith E (2014) Governing DU research in Canada: a policy review. Sci Public Policy 41:76–93
Part III Key Topics in Research Ethics
11 Key Topics in Research Ethics: Introduction
Ron Iphofen
Contents
Introduction . . . . 204
The Consenting Process . . . . 204
Vulnerability, Validity, and Capacity . . . . 206
Covert Research and Surveillance: Balancing Privacy and Security . . . . 208
Conclusion . . . . 211
References . . . . 212
Abstract
There are several issues that arise both in the research ethics review process and in the research field itself whose coverage is usually regarded as obligatory. No major reference work could be considered complete without some direct discussion of them. How consent is obtained, and how subjects/participants are informed of a research purpose, is seen as central to ensuring valid participation. Privacy has grown in importance, but as information technology has changed it has become harder to maintain. In any case, it may run counter to concerns for the safety and security of subjects and participants. The availability of and access to new data forms and technological developments expands the research field immeasurably, further complicating overlaps between private and public spaces. Confidentiality, anonymity, and the potential for deception in research all show how difficult it is to maintain these issues as conceptually distinct. This chapter raises some of the conceptual difficulties in the overlapping and, at times, conflicting application of key principles and values which are discussed at length in Part II of this handbook.
R. Iphofen (*) Chatelaillon Plage, France e-mail: [email protected] © Springer Nature Switzerland AG 2020 R. Iphofen (ed.), Handbook of Research Ethics and Scientific Integrity, https://doi.org/10.1007/978-3-030-16759-2_53
Keywords
Consent · Privacy · Vulnerability · Anonymity · Deception · Confidentiality · Surveillance
Introduction

No major reference work on research ethics and integrity could be complete without a discussion of the central topics being addressed in this section of the handbook. Unfortunately, they can rarely be acted upon in a discrete way since they overlap considerably. This is because the terms are a mixture of principles (respecting privacy, protecting the vulnerable) and ways of realizing those principles (securing informed consent, anonymization, minimizing deception, maintaining confidentiality, and so on). Thus, although the principles might be considered conceptually distinct, in practice they interrelate in various ways, sometimes generating conflicts and dilemmas. For example, addressing the concept of "privacy" requires the assessment of what constitutes public space (literally and virtually). How can confidentiality be promised and maintained if a public interest research outcome is to be delivered when the discloser of a problem is also its source? If confidentiality is promised to participants, how can any significant results be delivered if they remain dependent on the personal features (social or geographical context and identity) of those respondents? Strict confidentiality in such cases would leave researchers devoid of results. If respondents are to be credited (a practice, if not a principle, that has attracted growing concern in recent years), to participate effectively, and to be treated as colleagues in any way, then it is hard to envisage how their anonymity can be secured, and so on.

The rest of this chapter highlights some of these central issues and some potential responses to help manage the balance required for the ethical conduct of research. It also builds upon the recommendation in the introduction to Part II that the field of research should be broadened to all those collecting data intended to inform policy and/or influence human behavior in some way.
Hence, there is coverage of surveillance research, which may or may not be covert, but is intended, one hopes, to maximize public benefits and minimize harms.
The Consenting Process

Gaining consent from potential research participants is precisely one of those areas where the complex overlapping of ethical challenges is well illustrated. (Many of the views that follow were aired and tested at the 24th Meeting of the National Ethics Councils (NEC) Forum, Iasi, Romania, 4–5 April 2019, at the invitation of the European Commission, DG Research and Innovation.) There is now general acceptance that consent for participation in research is no longer to be seen as a static, one-off declaration but rather as a dynamic, ongoing process that may commence in the earliest stages of research, continue throughout, and even include
involvement in the dissemination of research findings (Iphofen 2011: 67; Rooney 2013). There is also much more awareness of the limitations to exactly how valid, fully informed, and free any consent process can be, given the imbalance in power and technical knowledge between researcher and researched. The more technical the issues involved, the less capable "lay" participants in research are of fully understanding what they may be letting themselves in for. In any case, even researchers themselves cannot always anticipate the outcomes of their research. Hence, the notion of "fully informed" consenting is an aspirational fiction. And there is the further difficulty of interpreting the consequences for consent in research of new data protection regulations, for example the General Data Protection Regulation (GDPR), which has been implemented in the national law of most European Union member states. This is a global concern, since wherever the processing of personal data is regulated by law the same considerations will apply: when and how are participants to be informed, how much information can be given, and what form should it take?

The primary purpose of seeking informed consent in research is to respect the ethical principle of subject autonomy: that the subject/participant remains as much in control of their own life as possible. ". . . gaining consent cannot be separated from the giving of information. More often the phrase 'voluntary informed consent' is used which implies the subject was able to choose freely to give consent and to subsequently participate in the research" (Iphofen 2011: 66). The overlapping combination of these principles, which could even be seen as varying in degree, involved in gaining a form of consent that can be treated as valid includes at the very least:

Gaining consent – securing the subject's or respondent's permission, confirming their willingness to participate.
Giving information – ensuring the adequacy of the information given so that the subject's decision to participate is a "fully informed" one. (Innovative methods are constantly applied to discover how best to accomplish this – see Rothwell et al. 2014.)

Capacity – ensuring the research participant has the cognitive ability to understand and process the information given and what participation actually entails (regarded as a "mental" ability that can only truly be assessed "clinically" by an appropriately qualified health professional – determined in some countries by law, such as the UK Mental Capacity Act 2005).

Competency – a legal status linked to authorized responsibility (e.g., a manager giving permission for a participant observation study within an organization).

The technical problems involved in putting these four elements together include: how is the consent gained (written, oral, or "inferred"/implied/tacit – witnessed/signed)? How clear and unambiguous is the consenting – on a written form or through a conversation? How is the consent "agreement" recorded? How is the information given (written, oral, or "assumed") – all at once or in stages according to "sensitivity"? How much, and in what format/style, is the information given? How
clear is it – some information sheets use confusing language or layout? How is capacity assessed – formally, intuitively, or assumed? It is when research ethics review committees become "wrapped up" in these complexities that they might be inclined to make unreasonable demands of researchers about how to gain consent, more so in qualitative research (Wynn and Israel 2018).

The issues involved here further complicate the process. It would be rash to assume the researcher is in full possession of "the facts" and is therefore able to ensure the subject is fully informed. Researchers may not be as informed as they think or should be and, more importantly, they cannot anticipate in advance everything that might happen during the research engagement – even more so in qualitative research. Even something as simple as an online questionnaire – "this will only take you 15 min." – what if it takes longer? How will the subject feel when exposed to their own "incompetence" in not being able to do it in 15 min? Or they may feel misled by the researcher, and so trust in science and research is diminished. Ironically, the consenting process can appear paternalistic in trying to ensure the subject is adequately cared for and represented. It may be that the subject is more willing to take risks than the researcher can legally or ethically allow them to be. Consider the assumed vulnerability of children and older people. In any case, autonomy might be a difficult principle to sustain in an interdependent world. We are all dependent upon others as social beings – research is just another form of interdependence.
Vulnerability, Validity, and Capacity

It is often assumed that valid consenting is in turn related to a participant's capacity to consent, and the latter is assessed in terms of the participant's vulnerability. Vulnerability can have different meanings in this context: people being open to persuasion by the researcher to consent when it is not in their interests, and vulnerability in the sense of being particularly exposed to potential effects of the research process, for instance being labeled as someone with a disability who might suffer "more" as a consequence. So care must be taken with the generic "categorization" of a population as vulnerable – such as children, those with a disability, and older people. Such groups are not necessarily homogeneous and therefore also not necessarily all equally vulnerable – either from their own perspective or by the attribution of outside observers, who may engage with them as researchers, caregivers, and/or regulators. Individual members of such groups (to which the attribution of "community" may or may not apply) are differentially capable of articulating their concerns and needs, able to be more or less active in research engagements, and vary considerably in the management of their identities as vulnerable or not. The UK National Children's Bureau, for example, has promoted the "rights" of children to have a strong say in research conducted with, by, and/or for them (Shaw et al. 2011), which challenges the assumption of children's necessary vulnerability. One way of gaining insights about how to achieve free, informed, and valid consent with a vulnerable group is to allow them some form of representation in the
whole research process. There may be a variety of means for achieving this, from participation in the research design or the research ethics appraisal, to co-researching and co-authoring of outputs – or any combination of these. Problems remain in terms of how to accomplish this, who from the group/category can be considered "representative," and when (in the research process) any representation could best occur. Some members of vulnerable populations may "over-represent" themselves, being articulate, forceful, and active. Others may be denied such representation, lacking knowledge, communicative skills, or technical terminology – in clinical trials, for example, lacking "medical literacy." The form and effectiveness of representation may depend in turn on the degree to which the vulnerable population is organized or holds some form of communal understanding – as with patient groups or disability rights activists. Ineffective or inaccurate representation may exacerbate stereotyping of the vulnerable. In any case, it might still be difficult to address the asymmetry in power between researcher and researched once the research is in progress, and, in some circumstances, we need reminding that researchers themselves can be or become vulnerable (Downey et al. 2007).

Reflecting on the concept of vulnerability leads on to thinking about the management of the inclusion and/or exclusion of vulnerable subjects in research, to whom access might be limited or even obstructed. To take some examples: the legal position on consent for nonconscious patients, who are unable to consent to participate in a research project, varies between countries. This means that there is inequity in terms of the potential benefits from, say, participation in clinical trials that might enhance the treatment of a vulnerable category of the population. In some cases, the barrier to informed consent arises out of insurance companies' and/or the government's aversion to taking any risks.
This even precludes the possible assessment of any risk-based approaches, such as the detailed observation of the vulnerable that might allow for alternative forms of consent. In many cases, access to another potentially vulnerable category – ethnic minorities – is restricted by the lack of adequate funding or other resources to support forms of cultural engagement that could enable their inclusion, such as language translators or community members acting as mediators. As a result, members of such communities are either neglected as research subjects or become reluctant themselves to participate. Similarly, prisoners or inmates confined for any reason are also generally categorized as vulnerable and may be difficult to access for research purposes. Once again, their lack of inclusion can further the inequity of their treatment and suggest an injustice if evidence of their status and needs is absent from research. In all such cases, there appears to be almost a conflict between researchers and their subjects (patients, ethnic minorities, and inmates) on one hand and regulatory authorities and indemnity systems (insurance companies) on the other. This suggests a need for some overarching set of global principles or standards that could be promoted as “good practice.” Clearly some justification for the need to include and not exclude such potentially vulnerable population categories would have to be established, perhaps in the case of each research proposal. Several categories of populations may be considered subject to enhanced vulnerability if their consent to participate is influenced by compensatory measures
R. Iphofen
such as direct payment, other financial rewards (such as vouchers for purchases), or other material rewards in return for their engagement in research. The decision to consent may be unduly influenced by such compensation in the case of populations for whom rewards may override their doubts. Poorer groups of people are evidently open to such persuasion, as may be maternal surrogates, drug users, alcoholics, and, say, patients in oncology whose family or associates may have been economically disadvantaged by the need to deal with the patient’s condition. The size and nature of proposed rewards would be a factor to consider, as would the source of the rewards – say, whether from pharmaceutical companies in drug trials or commercial companies in market research. Rewards of this nature raise methodological issues as well as the ethical concern of gaining truly voluntary consent from vulnerable individuals. (See The Research Ethics Guidebook for a summary of the issues: http://www.ethicsguidebook.ac.uk/Compensation-rewards-or-incentives-89.)
Covert Research and Surveillance: Balancing Privacy and Security

Even more overlapping ethical issues are raised for research into public safety and security, when human rights have to be balanced with public safety, and privacy with security. These issues also overlap the key concepts of anonymity and confidentiality. To illustrate, those conducting surveillance research that may be “covert” in order to assess security threats need to answer the following questions:
• What can legitimately be regarded as “suspicious behavior” and therefore justifiably studied covertly?
• How can individuals engaging in such behavior be categorized and identified?
• Who makes this judgment call? (What “qualifies” them to do so?)
• How far is it possible to anticipate (predict) levels of threat or risk?
• If privacy is not to be preserved at all costs, then under what conditions can it be compromised?
Clearly there is a need to find ways to protect the public from the range of risks and threats that are part and parcel of being “in public.” The terrorist attacks that have occurred in public places in recent years evidence the need. Many of the underlying problems with surveillance/security are paralleled in a range of other complex public settings. Here we face sets of problems that are political, economic, cultural, and psychological. A fundamental problem of civil society is how to allow individuals freely to pursue satisfaction of their own needs and desires while at the same time maintaining public order. In this case, to protect the public it may be necessary to intrude upon the privacy of individuals who threaten other people in public places and/or to intrude upon the privacy of individuals who may be threatened, for their own protection. But society is also built upon “trust” – whatever social order exists depends upon the assumption of mutual trust. This is especially important in complex public settings.
11 Key Topics in Research Ethics
(We “trust” others to behave in a reasonable way. Indeed we trust “the authorities” to take reasonable care of us.) Trust, once broken, is not easily restored. Both trust and freedom present us with dilemmas; ethics is the study of how one addresses and, hopefully, resolves such dilemmas. Privacy is recognized in Europe as a human right under Article 8 of the European Convention on Human Rights (ECHR). At the same time, the European Commission has funded research and development in a range of surveillance security technologies. Surveillance involves paying close and sustained attention (visually, aurally, etc.) to a particular, identifiable person or group for a specific reason. The assumption is that the person’s or group’s “suspicious behavior” could be inferred to be of a threatening (terrorist and/or criminal) nature. It is “identifiability” – the linking of data, of whatever form and however gathered, to a specific person or group – that raises the challenge to privacy. Surveillance is not only used for public security purposes. We often trade privacy in return for credit points, special offers, or expanded information about goods and/or services, or to gain from the efficiencies of traffic or crowd control, etc. Thus, consented surveillance is seen as less problematic as long as the limits of surveillance are clarified. As discussed in the opening chapter to this Handbook, it was via consented surveillance that Cambridge Analytica, through Facebook, was able to covertly research mass population behavior and attitudes in order, in turn, to influence them in a particular direction. The risks of consented surveillance are heightened when it becomes more of a burden for users of mass social media sites to check the limits of the surveillance they are allowing the site owners to take advantage of – i.e., the lengthy terms and conditions which users rarely read in detail.
While privacy is to be valued, like all ethical principles it is not an “unqualified good,” since its absolute application could conceal criminality or harm perpetrated on others in secret (Sidgwick 1907). It entails the ability to engage in personal actions without the unwarranted and/or unwanted intrusions or interferences of others and to control the release of information about those actions. There are some international agreements on how it is to be applied, but most States either have their own specific laws on it or link it to other human rights legislation. All issues concerning ethical research can be framed by asking: who is doing what to whom, why, and how? Even virtue ethicists are interested in the “who” and “why” questions: what sort of person is conducting the research and with what intent? In the answers to each of these questions, we would be seeking a sense of legitimacy – in other words, whether it is acceptable, to a specified degree, to engage in the actions in question, or whether there is an authoritative rationale for the answers given. Thus, for example, as ECHR Article 8 makes clear, the right to privacy holds “unless...” – a criminal act is involved; a right to safety and security holds “unless...” – one individual’s security has to be sacrificed to protect the majority. Rights cannot be absolute: they are delimited by contextual factors. For now, just take the “how” question. How do those conducting surveillance go about collecting data? What methods are adopted for the collection and retention of data? Consented surveillance may act in the interests of a member of the public when travelling or consuming products and services – enhancing or facilitating their experiences. It is becoming increasingly feasible to accurately identify and track
individuals in both the physical and digital world, without their knowledge. It is in this identification of a specific person that we are no longer dealing with impersonal movements or actions of units (crowds, groups of anonymous passengers or consumers) but with enhanced knowledge about a specified person. Thus, the balance between surveillance, privacy, and security becomes even more context-specific. The only way to fully examine the ways in which privacy and security can be balanced against each other is to apply the test of justifiability to “who is doing what, how and why” in each case. Most public spaces are inherently insecure in terms of privacy: they are often characterized as being “crowded” and “open.” While most people might be aware that they could be being observed/studied in a public space, there is no automatic expectation that this is the case. And it may be surprising that some people are unhappy about the potential for such observations to be taking place – assuming their own behavior and interactions in such spaces are not (or even should not be) of interest to others. Controlling information about ourselves has always been harder to accomplish in a public domain than in our own private space. This is necessarily even more the case in large transport interchanges, where large numbers of people are moving between different forms of transportation in a shared, crowded space. The terrorist risk is obvious, and historical experience clearly demonstrates its extent. But, increasingly, our assumed private spaces have become more public as a consequence of digital technology, social media, and our perceived essential access to the Internet. The nature of data is changing rapidly. What constitutes data, and how we access it, adds to the challenges both researchers and public authorities must face (Heeney 2012).
The surveillance of such public spaces constitutes normal procedure for, for example, transport authorities, and passengers have some expectation of (even “trust” in) being observed in most public transport settings. Evidently any surveillance that targets an identified individual over time would require prior regulatory approval. If both the means and methods for certain kinds of surveillance require prior approval in specific circumstances, that would be of little use in a crisis or in emergency situations. Thus, in emergency situations “targeted” surveillance might turn out to be impractical. In the field of security, the ethical principle of “voluntary informed consent,” as a protector of privacy, becomes nonsensical if applied to the prospective terrorist or criminal. In that case nonconsented, possibly covert surveillance could become vital to the protection of the public. Even the security services are “researchers” in the broad definition of the term! Another conceptually difficult but vital “test” of the ethical/legal risk is “proportionality.” So while there is an acceptance that privacy might (have to) be sacrificed if there is a security risk, the degree to which privacy can be sacrificed has to be proportionate. But who makes the decision? In whose interests do they make it? Even then, would everyone consider it proportionate? Some commentators suggest a “threshold” model: surveillance of an area for hazards might be justified, as would surveillance of an area for violent behavior. However, the threshold for acceptable intrusion will be lower when pursuing a suspected shoplifter than a suspected terrorist. Thus, the use of most tracking technologies for low level illegal activity could be regarded as excessive.
It might be difficult to sustain this kind of “threshold” judgment on anything other than “resource” grounds, where the costs of time, energy, distractions from other more important/serious contraventions, and/or the likelihood of conviction have higher priorities. In fact there is a problem of where and how tests of proportionality should be applied, given that criminals are often experts at distraction techniques such as simulating “normal behavior” to disguise their illicit intentions or actions. The maintenance of normal appearance is a classic criminal distraction technique, and it would be impossible to judge from apparently normal behavior whether there was criminal intent. The expert criminal is one who actually does nothing to suggest that he/she is a criminal. Inappropriate surveillance can lead to the “social sorting” or categorizing of people based on social stereotypes, with the danger of institutionalizing prejudices. Thus, in the absence of evident suspicious behavior, CCTV operators might disproportionately monitor the young, the male, and members of ethnic minorities out of an expectation of untoward behavior. If these groups are watched more frequently than others, they are more likely to be seen as doing something suspicious, thus reinforcing the stereotype and the prejudice. Inappropriate or unjustified use of surveillance can lead to a diminution of trust in general and trust in authority in particular. Suspicion as to a State’s motives for conducting surveillance may lead to cynicism as to how the State will employ its surveillance technology in self-protection. Even if there is no evidence of wrongdoing, the State may nonetheless choose to keep records on those whom it believes to pose a future threat. Surveillance transfers power from the surveilled to the surveiller, with all the inherent potential for loss of dignity, informational control, and, ultimately, responsibility for one’s own life.
This loss of privacy is a fundamental challenge to the democratic foundations of a civil society: freedom of thought, speech, and action. It is essentially unethical, unless deemed necessary and justified for some higher moral purpose. The only people who can really know when ethical transgressions are occurring are those employing the surveillance: the authorities and the operators. If a culture of “acceptable practice” can be cultivated and sustained via their training and shared awareness, there may be some degree of public reassurance. Ultimately the most ethical way of handling these potentially conflicting pressures is transparency in motive, action, and outcome. Just as compromising the privacy of individuals might be necessary to enhance their protection, so too the promise of confidentiality or anonymity to public servants might have to be challenged in the larger public interest (Spicker 2007).
Conclusion

The purpose of this chapter was to offer an overview of the ways in which the main ethical concerns in research are often in tension with one another. It is sometimes the case that one cannot pursue one major principle, value, or virtue in research ethics without challenging another. The focus on the topics of consent, vulnerability, and surveillance points up the challenges to privacy, confidentiality, and anonymization. It is evident that the management of a dynamic consenting process for vulnerable populations is vital to avoid exacerbating their vulnerability. Maintaining the balance
between formal regulatory requirements for gaining consent and its practical realization “in the field,” or in the lab, whatever the research setting or site, depends upon the heightened awareness of the relevant research professionals (Gelinas et al. 2016). Those professionals include researchers, research ethics reviewers, regulatory authorities (including judges), and commissioners. Such practices can only come from a level of shared cultural understanding achieved via peer socialization, education, and training. Clearly an understanding of how truly valid, free, and informed consent can be achieved requires knowledge of the formal regulatory requirements together with the means for gaining genuine consent in practice. Bringing together, say, judges, funders, reviewers, and researchers could offer insights as to the practical accomplishment of valid consent for the vulnerable. In all of the examples discussed above, it is necessary to remember that vulnerability is a perception, not a rule. No one individual, whatever their status, is necessarily vulnerable. And, in some respects, we all have the potential to be vulnerable. The same applies to populations or groups. An assessment of the risk of vulnerability is the best way to approach the problem: “How able is this research population to consent to their inclusion in a research project without exacerbating their vulnerability?” Such an assessment could be included both in any research proposal and in its protocol. A depiction of the “vulnerabilities landscape” expressed within a research proposal would demonstrate the researchers’ awareness of the risks and how they planned to address them. The part that follows explores in even greater depth the conceptual and practical problems associated with the maintenance of the values that lie behind principled research.
References

Downey H, Hamilton K, Catterall M (2007) Researching vulnerability: what about the researcher? Eur J Mark 41(7/8):734–739. https://doi.org/10.1108/0309056071075237
Gelinas L, Wertheimer A, Miller FG (2016) When and why is research without consent permissible? Hastings Cent Rep 46:35–43. https://doi.org/10.1002/hast.548
Heeney C (2012) Breaching the contract? Privacy and the UK census. Inf Soc 28(5):316–328. https://doi.org/10.1080/01972243.2012.709479
Iphofen R (2011) Ethical decision making in social research: a practical guide. Palgrave Macmillan, London
Rooney VM (2013) Consent in longitudinal intimacy research: adjusting formal procedure as a means of enhancing reflexivity in ethically important decisions. Qual Res 15(1). https://doi.org/10.1177/1468794113501686
Rothwell E, Wong B, Rose NC, Anderson R, Fedor B, Stark LA, Botkin JR (2014) A randomized controlled trial of an electronic informed consent process. J Empir Res Hum Res Ethics 9(5):1–7
Shaw C, Brady L-M, Davey C (2011) Guidelines for research with children and young people. NCB Research Centre, National Children’s Bureau, London
Sidgwick H (1907, first published 1874) The methods of ethics, 7th edn. Macmillan, London
Spicker P (2007) The ethics of policy research. Evid Policy 3(1):99–118
Wynn LL, Israel M (2018) The fetishes of consent: signatures, paper, and writing in research ethics review. Am Anthropol 120(4):795–806. https://doi.org/10.1111/aman.13148
12 Informed Consent and Ethical Research

Margit Sutrop and Kristi Lõuk
Contents
Introduction
Origin of the Concept of Informed Consent
Elements of Informed Consent
Who Is the Human Subject Capable of Consent?
When Must the Subject Be Asked for Informed Consent?
What Should the Subject Be Informed About and How?
Types of Informed Consent
New Research Possibilities due to Advancements in Science and Technology
Changing Ethical Frameworks
The Contested Concept of Autonomy
Practical Problems with Informed Consent
Conclusions
References
Abstract
Although today valid informed consent is considered key to ethical research, there is no agreement on what constitutes adequate informed consent. Problems ensue firstly from the circumstance that a principle adopted in one area of inquiry (biomedicine) cannot be extended literally to other areas of science. In addition, new areas of scholarly inquiry and changing research contexts contribute to the emergence of new forms of consent (open, broad, dynamic, meta-consent). A second difficulty derives from an overly narrow understanding of the concept of autonomy, resulting in an absolutizing of individual freedom and choices and relegating more collective values such as reciprocity, responsibility, and solidarity to the background. In this chapter, we explain the origin of the concept of informed consent, what it consists of, and what forms it can take. We will then analyze what has caused shifts in the understanding of the informed consent principle: how much is due to advancements in science and technology and how much to changing ethical frameworks. Finally, we will show why it is important to develop a contextual approach by taking into account differences in research fields as well as types of research.

M. Sutrop (*) Department of Philosophy, University of Tartu, Tartu, Estonia. e-mail: [email protected]
K. Lõuk Centre for Ethics, University of Tartu, Tartu, Estonia. e-mail: [email protected]
© Springer Nature Switzerland AG 2020. R. Iphofen (ed.), Handbook of Research Ethics and Scientific Integrity, https://doi.org/10.1007/978-3-030-16759-2_8

Keywords
Informed consent · Autonomy · Ethical framework · Genomic databases · Biobank · Open consent · Broad consent · Dynamic consent · Meta-consent
Introduction

Asking for informed consent from a potential subject before the beginning of a study is a generally recognized requirement in human research ethics today. Consent is a kind of mutual agreement between investigators and subjects in which both parties affirm their intentions to behave in particular ways (Faden et al. 1986). Informed consent implies two related activities: participants first need to comprehend that they are authorizing someone else to involve them in research, as well as what they are authorizing; second, they agree voluntarily with the nature of the research and their role within it (Israel 2015, pp. 79–80). Informed consent involves the full disclosure of the nature of the research and the participant’s involvement, adequate comprehension on the part of the potential participant, and the participant’s voluntary choice to participate (Dankar et al. 2019, p. 464). Informed consent is not an ancient concept with a rich history. The term “informed consent” first appeared in 1957, and serious discussions of its meaning and applicability began around 1972, moving from a narrow focus on the researcher’s obligation to disclose information to an emphasis on the subject’s understanding of information and his/her right to authorize or refuse biomedical interventions (Beauchamp and Faden 1995/2004, p. 1271). The principle of informed consent was adopted to make sure that the subject is aware of the circumstances related to a study and has an opportunity to decide whether or not she/he wishes to participate in the study. The aim of informed consent is to protect certain values – non-maleficence, the subject’s individual liberty, personal autonomy, and human dignity, as well as trust between subjects and investigators – thereby giving subjects confidence that they will be treated with respect and that harming them will be avoided (Resnik 2018, p. 113).
Consent also presumes another important value – namely, honesty on the part of researchers. In order to maintain trust, investigators should communicate honestly and openly with subjects and inform them about research goals, methods, and procedures, as well as
disclose who will benefit from the study, who is funding the research and for what purpose, and what are the potential benefits and risks. Although today valid informed consent is considered key to ethical research, there is no unanimous agreement on what constitutes adequate informed consent. We have to admit that there are several unresolved problems about whether, when, and how to obtain informed consent in different fields of research as well as in different research settings (e.g., Capron 1982; Haggerty 2004; Hansson 2010; Iphofen 2009; Ploug and Holm 2016, 2017). Informed consent is a principle born in a specific context, that is, in biomedical research, where the need arose to protect research subjects from potential harm (Moreno et al. 1998; Beauchamp and Faden 1995/2004). No one doubts the principle of protecting research subjects. What is questionable, however, is whether consent requested and obtained before the beginning of research is necessarily the best way of protecting the subject and whether asking for consent makes certain kinds of research impossible. A second difficulty derives from the observation that the principle of informed consent protects diverse values which can come into conflict. Establishing a hierarchy of values depends on what is considered more important, individual rights or public interest. Ethical frameworks differ as to whether they consider individual or collective values to be more important. Thirdly, problems arise from the fact that a principle initially adopted by one area – biomedical research – has begun to be applied to all other research fields. However, both biomedicine and other research fields are in a process of continual change, due to various factors, including the expansion of technical possibilities, as new foci of research evolve and new ways to collect and analyze data become possible. In the following section, we will address the abovementioned three issues in more detail. 
We start by explaining the origin of the concept of informed consent, what it consists of, and what forms it can take. We will then analyze what has caused shifts in the understanding of the informed consent principle: how much this depends on advancements in science and communication technology and how much on changing ethical frameworks. Last, but not least, we will show why it is important to develop a contextual approach by taking into account differences in research fields as well as types of research.
Origin of the Concept of Informed Consent

The history of medical and research ethics demonstrates how different values have been upheld at different times and how ethical frameworks have developed (Sutrop 2011a). The central values of classical Hippocratic ethics were non-maleficence, beneficence, trust, and confidentiality. Throughout the centuries, the central concern of medical ethics was how to make disclosures without harming patients by revealing their condition too abruptly and starkly; this also warranted withholding information and outright deception (Beauchamp and Faden 1995/2004, p. 1271). Tom L. Beauchamp has effectively summarized how the shift took place: “Physician
ethics was traditionally a nondisclosure ethics with virtually no appreciation of a patient’s right to consent. The doctrine of informed consent was imposed on medicine through nonmedical forms of authority such as judges in courts and government officials in regulatory agencies” (Beauchamp 2011, p. 515). He adds that the reason why informed consent became so important in the 1970s is that at that time issues of individual liberty, civil rights, and social equality were commanding a great deal of attention (Beauchamp 2011, p. 516). Similarly, respect for the subject’s autonomy through asking for informed consent has not always been one of the underlying principles of human research. For a long time, scientific knowledge was valued more highly than the subject’s well-being and health. Tom Beauchamp and Ruth Faden (1995/2004, p. 1273) have pointed out that in the nineteenth century, research was conducted on slaves and servants without the consent of the subject. The significance of respect for autonomy grew considerably as biomedical ethics was caught up in rights movements. However, the shift toward an autonomy model can also be explained as a reaction to the oppressive history of eugenics and coercion of human subjects in the name of so-called public interest. The requirement for informed consent began to spread internationally after World War II, when the Nuremberg Code was created. During the Nuremberg trials, the so-called scientific human experiments of physicians in Nazi Germany became publicly known, and this gave a strong impetus to creating the Code. It has often been said that the ethics of human research has taken shape “thanks” to scandals and cases of maltreatment. The first point of the Nuremberg Code states that voluntary consent by the subject is absolutely essential (Nuremberg Code 1947). To give consent, the subject should have legal capacity, and no form of coercion, fraud, or deceit may be used.
In order to make a decision whether or not to participate, the possible subject should have sufficient knowledge and understanding of the nature, duration, and purpose of the experiment, as well as which methods are to be used and what inconveniences may be entailed. The Code also stresses that experiments must not cause suffering or injuries, that experiments should be conducted by qualified persons, and that during the experiment the subject shall have the right to request stopping the experiment (Nuremberg Code 1947). Although the Nuremberg Code is usually considered to be the source of the principle of consent, when precisely the obtaining of consent for biomedical investigations became standard practice seems to be a matter of historical controversy (Beauchamp and Faden 1995/2004, p. 1273). New research (Benedek 2014; Ruyter 2003; Vollmann and Winau 1996) indicates that the requirement for informed consent was already recognized in the nineteenth century. Knut Ruyter (2003) demonstrates that obtaining consent was already a central issue in the 1880 court case against the Norwegian doctor and researcher Gerhard Armauer Hansen. In an insightful article, “Informed consent in human experimentation before the Nuremberg code,” Jochen Vollmann and Rolf Winau (1996) explain how the Albert L. Neisser case (conducting experiments on syphilis patients without consent in 1898) led to the adoption in 1900 of the first directive on informed consent in Prussia. The authors also point out that in the
context of political reform of criminal law in Germany, the Reich government issued detailed guidelines for new therapy and human experimentation in 1931 and introduced the legal doctrine of informed consent, based on patient autonomy (Vollmann and Winau 1996, p. 1446). Their article shows that the legal doctrine of informed consent long preceded the 1947 Nuremberg Code, a fact that continues to call for reflection when writing the history of informed consent. Although the legal doctrine of informed consent is much older, the term “informed consent” was used for the first time in a US court decision (Salgo v. Leland Stanford Jr. University Board of Trustees) in 1957 (Beauchamp and Faden 1995/2004). Salgo emphasized the need for providing adequate information to the subject. Through this case of medical ethics, the term “informed consent” also found its way into research ethics. Meanwhile, a broader discussion of the term began, particularly in the USA. On the one hand, the discussion was triggered by court decisions and studies that had already been conducted (Hyman v. Jewish Chronic Disease Hospital, the Willowbrook study, and the Tuskegee syphilis study); on the other hand, the theme became a topical issue in research literature, the press, legislation, and court practice, leading to the formation of a national commission in the USA (Beauchamp and Faden 1995/2004). These cases led to the adoption of the Belmont Report (1979), which specifies three main principles for biomedicine and the behavioral sciences: respect for persons, beneficence, and justice. These principles are very similar to the so-called Georgetown mantra – autonomy, beneficence, nonmaleficence, and justice. The principles were introduced in the seminal book by Beauchamp and Childress, Principles of Biomedical Ethics (Beauchamp and Childress 1979/2009; 8th edn, 2019), which has shaped bioethicists’ thinking for decades.
Although these four principles originated in the biomedical field, the same central principles apply to all research on human subjects. At present, most good practices and codes regulating research ethics presume asking for informed consent, from the Declaration of Helsinki on biomedicine (1964, latest version 2013) and the Belmont Report on biomedicine and the behavioral sciences (1979), to the European Commission’s guidelines for researchers applying for grants in the social sciences and humanities (Sutrop and Florea 2010; The European Commission 2018). However, the understanding of informed consent has shifted tremendously during the last two decades. The main difference is that consent was originally sought for a single study with a pre-specified timespan and specific purpose. It has been pointed out that “with the rise of big data, and the enormous biomedical data warehouses being built, it becomes more and more difficult to foresee the uses and applications of subject data, therefore compounding the difficulty in obtaining informed consent, according to the original definition” (Dankar et al. 2019, p. 464). As a result, the concept of informed consent has been subjected to several revisions: it has either been modified (e.g., open, broad, dynamic consent or meta-consent in biobank research), or, in some areas of application (e.g., in e-health projects), it has been abandoned altogether. Although most scientists accept that the process of informed consent should be an integral part of research, its applicability to a range of research types has been a
218
M. Sutrop and K. Lõuk
matter of discussion. Sometimes it is considered acceptable to conduct research on human subjects without their consent (Gelinas et al. 2016). The ethics committee can waive consent requirements for emergency research or minimal risk research (e.g., quality assurance or quality improvement studies involving the analysis of the medical records of patients at a hospital, or research on de-identified human biological samples left over from medical procedures or tests) that could not be conducted without a waiver (Resnik 2018, p. 124). However, in all these cases, the anticipated social benefits of the research must be significant enough to outweigh participants’ forfeiture of their consent. Also, there may be cases in the social sciences where obtaining written informed consent may endanger subjects rather than protect them (e.g., research on vulnerable groups such as migrants or prostitutes). Therefore, it is essential to preserve a certain flexibility and consider the research context so as to best protect subjects from the risks involved. If obtaining consent seems culturally or contextually inappropriate, other ways to document participants’ agreement should be considered (The European Commission 2018).
Elements of Informed Consent

Informed consent includes the following elements: disclosure, comprehension, voluntariness, competence, and consent (Faden et al. 1986, p. 274). Thus, informed consent presumes that the subject receives enough information on the study, understands the information presented, is capable of making a decision to participate in the study voluntarily (without manipulation or pressure), and is competent and agrees to participate in the study. This classification of elements has later been broadened in the specialist literature (Beauchamp and Childress 1979/2009, pp. 120–121):

Threshold elements (preconditions)
• Competence (to understand and decide)
• Voluntariness (in deciding)

Information elements
• Disclosure (of material information)
• Recommendation (of a plan)
• Understanding (of the last two elements)

Consent elements
• Decision (in favor of a plan)
• Authorization (of the chosen plan)

It is stressed that in disclosure and understanding, the central role is attributed to the language used to inform the subject.
Based on the aforementioned elements, it is possible to specify the following requirements for informed consent: disclosure, understanding, refraining from influencing the person, competence, and authorization of the decision by means of a signature. Informed consent is a process which may have to be treated as ongoing throughout the research engagement. It is not an event, a once-and-for-all act, nor merely the signing of papers (the information sheet and the consent form). It is bad practice if the subject is simply told to read through the form and sign it. The process aims, on the one hand, to provide information so that the subject understands what is being proposed (including the fact that this is scientific research); on the other hand, the researcher has the obligation to check that the information has been understood. Subjects are informed by the researcher in charge or by some other member of the research team. Below, we explain how, and about what, subjects should be informed.
Who Is the Human Subject Capable of Consent?

Persons involved in a study can be classified variously. One distinctive feature is whether or not the person is capable of consent. To classify a subject as having the capacity for consent, an assessment should be made of their cognitive and communicative competence. In order to assess the competence of a potential subject, the researcher has to establish that, assuming full and clear information has been given, the subject has the ability to understand, retain, and analyze that information, come to an independent decision, and express that decision clearly and effectively. However, judging the capacity for consent can be a matter of contention, as it is a difficult call to make at a given moment in time; it may also imply taking a patronizing attitude toward those with “less capacity” (Iphofen 2009, p. 72). A healthy adult person is considered capable of consent, that is, she/he is capable of making decisions about what happens to his/her body and data. Nevertheless, particularly in the context of biomedical studies, it should be considered that, depending on the disease, a patient’s/subject’s usual power of decision may be impaired. The situation is more complicated when persons not capable of consent for one reason or another are needed for the study. Such people might be unconscious patients, patients with mental disorders, or elderly people suffering, e.g., from dementia. If the research question is justified and the involvement of subjects is ethical, informed consent can be given by a subject’s legal representative. Persons whose capability to make autonomous decisions is limited are referred to as vulnerable. A distinction is made between developmental, medical, and social vulnerability.
While medical and developmental vulnerability manifest themselves primarily as impairment of the power to make decisions, social, institutional, or hierarchical vulnerability means that voluntariness may be threatened due to the subject’s dependent relationship with the researcher or the research team. For example, students can be easily influenced because of their situation if research is
conducted by their own teacher or supervisor. It can also be difficult for military personnel or prisoners to exercise their free will for participation in a study. Ethnic minorities and refugees are also seen as vulnerable groups (e.g., see The European Commission 2019). Other groups that may be considered vulnerable include sex workers, dissidents, and traumatized people at risk of re-traumatization. Children form a separate subgroup among the vulnerable. Due justification must be provided for involving children in research. In the context of informed consent, it is important to obtain their assent and the informed consent of their parents or legal guardians. It is essential to inform the child in language appropriate to his/her age and his/her ability to understand. Depending on national regulations, in some countries, one parent’s signature will suffice, while in others both parents’ signatures are needed. Thus, if subjects have been recruited from abroad for international cooperation projects, it is necessary to be aware of specific requirements for those countries. It should also be noted that if the parent agrees but the child does not, the child must not be forced, and her/his will must prevail. In the case of very young children, it is essential to inform them in an age-appropriate manner, but they are not asked for written assent. Legally, all persons under the age of 18 are minors, thus requiring the parent’s or guardian’s consent. The cultural context must also be taken into account when conducting research. For example, it may be local practice that consent for conducting research is given by the elder of the community.
When Must the Subject Be Asked for Informed Consent?

The general rule is that a person should be informed about the circumstances related to the study before it begins so that she/he can decide whether or not she/he wishes to participate. Asking for consent also depends on the study. For example, in the case of an anonymous web questionnaire (if it is really anonymous, and the respondents cannot be identified, even indirectly), it is not necessary to collect personal data in the form of a request for consent. In such a case, the header of the questionnaire must include all the necessary information, and if the person replies to the questionnaire, this means that she/he has consented to participate in the study. There may be some exceptional situations where the subjects cannot be informed about everything before the study is undertaken. In such a case, partial, several-stage informing is used; general information is given at the beginning and more specific information afterward. After receiving additional information about the study, the person must be given an opportunity to decide again whether or not, in the light of these additional circumstances, she/he allows the use of the data collected for research. This process is called debriefing, and it is primarily used in the social sciences (behavioral sciences, psychology). In some rare cases retrospective consent may be sought, informing the subjects after the end of the study that they had been subjects of research. However, obtaining retrospective consent may undermine public trust in science.
Another exceptional case is emergency research where immediate action is necessary, and the patients/subjects are unconscious. In such a case, different practices are used for receiving consent. We recommend determining the local practice – whether it is sufficient to get consent from an independent council of physicians or whether the legal representative of the subject must also be informed and his/her consent received. In addition, for such studies, the principle applies that when a person becomes capable of consent, his/her own consent must be asked. As there are always exceptions to the general rules, situations occur where research is conducted without the subject’s (written) consent. Generally, in such cases, the consent of the ethics committee is necessary. An example is when the study is based on data collected from patients’ case histories and it is impossible to ask for everyone’s consent. In general, where the processing of personal data without the person’s consent is possible, this is specified by law; therefore, it is necessary to be aware of the legislation on the protection of personal data in one’s own country and, for example, in the European Union. Another case is the use of oral consent (with the approval of an ethics committee), as in research on migrants, whose written consent could jeopardize their anonymity, valued in such populations due to fear of persecution and law enforcement. Exceptions to fully informed consent are most likely to occur with covert observational studies, where participants are not aware that they are being observed, or ethnographic field research (which usually covers fringe areas of society – criminality, social deviance, the sex industry, terrorist groups, and religious cults).
In such studies there may either be methodological justifications (participants should not be told too much, or anything at all, in order to accomplish research goals) or strategic reasons which have to do with the safety of the researcher and/or the research subjects (Iphofen 2009, p. 77). In some exceptional cases, withholding information from participants or even the use of deception may be justified; this mainly applies to cases where the study is expected to reveal something of social significance which cannot be discovered in any other way (Bok 1978, pp. 182–202). However, research without obtaining informed consent before the commencement of the study can only be considered if it entails minimal risk to the subjects and if some way of debriefing is foreseen. In the case of procedures that can cause physical or mental harm, information must not be withheld, and no deception may be used. Both deception and covert research should be exceptions rather than the rule, and both require strong justification and a demonstration of clear benefits of the chosen method over any other approach (The European Commission 2018). It is noteworthy that ethical issues, including issues of obtaining informed consent, have not received as much attention in the social sciences as they have in biomedical research. However, there is now growing awareness that there is a variety of specific ethical problems in the social sciences (e.g., exceptions to fully (written) informed consent, covert research and deception, etc.) that require special attention (Beauchamp et al. 1982; Israel 2015; Israel and Hay 2006; Iphofen 2009; Ransome 2013; Social Research Association 2003; Sutrop and Florea 2010; The European Commission 2018).
What Should the Subject Be Informed About and How?

The most essential principle is that information should be transmitted in a way that is understandable to the subject and appropriate for his/her age. For example, if a group of minors is recruited for a study, different information sheets should be compiled for 10- and 16-year-olds. For very young children, pictorial material has been recommended that shows in simple visual language what participation in the study will entail. In addition to attending to the informing process, it is essential to make sure that the presented information is understood correctly. Particularly in biomedical human research, it may happen that subjects attribute therapeutic value to the study. This is referred to as the therapeutic misconception. To avoid this, it is recommended that the physician treating the patient and the leader or member of the research team not be the same person. To clarify these role differences, it is recommended that the patient, i.e., the potential subject of the study, be informed by some other member of the research team. How is it possible to check that the subject has understood the presented information correctly? Circumstances favoring direct contact, that is, face-to-face informing, simplify recognition of how the subject receives the information and reacts to what she/he has heard. In practice, a potential subject can be invited to ask questions or to answer questions about how she/he understood different aspects (such as whether the treatment offered is specific to the person or follows the protocol, how the data are analyzed, etc.). A complicating factor is that there is probably little time for informing everyone about everything, and consent forms are often dozens of pages long. In conversation, one should concentrate on what the subject might consider most essential.
Following various guidelines, it can be generalized that subjects should be informed about the following (World Medical Association 2013; The European Commission 2018): • Participation is voluntary; it is possible to quit at any time; quitting does not entail any consequences in treatment; the subject need not substantiate his/her decision to end participation in the study. • Who is conducting the study? • Who is financing the study? • Potential conflicts of interest. • The aim of the study. • Why the subject should belong to the study group. • Risks and benefits of the study for the subject. • Procedures and the time the subject needs for participation in the study. • Whether or not the subject is compensated for participation in the study (e.g., time, transport, costs). • How incidental or unexpected findings are handled. • Contact information about researchers. • Processing of personal data (which data are collected; whether or not any special categories of personal data are included; in what form they are processed; who has
access to the data; how long and in what form the data are stored (including the code key); whether or not the data are forwarded to other researchers, transferred to foreign countries, etc.). The above list characterizes so-called specific informed consent, where consent is given for a concrete research project. Another very important aspect is whether consent is explicit, implicit, or presumed; whether or not it is documented; and whether or not this is done in written form. Manson and O’Neill have stated that explicit consent is a two-way process. On the one hand, explicit information should be given (see the abovementioned categories) to those whose consent is sought. On the other, potential subjects’ understanding of relevant information should be made explicit (Manson and O’Neill 2007, pp. 10–11).
Types of Informed Consent

With the establishment of biobanks at the beginning of the present century, more and more research has come to be based on information stored in databases, and a discussion has arisen as to what subjects should actually be informed about. Because of the characteristics of such research, the requirements of specific consent did not seem suitable for research based on biobanks and databases. The new types of consent have been referred to variously as open, broad, dynamic, or meta-consent. What is meant by open consent is unrestricted redisclosure of data from health records and genetic research. As pointed out by Ants Nõmper (2005), “open consent is not research-project specific but rather specific to ‘conditions of open consent’.” The type of information involved cannot be fully predicted. Therefore, no promises of anonymity, privacy, and confidentiality can be made (Lunshof et al. 2008). The main problem with open consent is that it does not fulfill the necessary conditions for consent: issues related to disclosure, understanding, and voluntariness. As no specific information is disclosed, no understanding is possible except by actually participating in research. What type of research takes place and who participates are often decided in accordance with (national) research regulations, where a great deal is left to the decisions of research ethics committees (Kristinsson and Árnason 2007). Broad consent means that the subject agrees that research on his/her data and biological material can be conducted both now and in the future (Kaye 2004). Here, “broad” means that, at the moment of giving consent, it is not known what kind of research will be carried out, what is going to be studied, etc. Broad consent is, so to speak, an all-or-nothing solution – it determines whether or not a person joins a databank or biobank.
However, specifications can be made, for example, that a subject allows his/her data and biological material to be used for research on cardiac diseases and nothing else. The introduction of broad consent has led to many discussions. For example, Steinsbekk et al. argue that the model of broad consent is consistent with the values of autonomy and informed consent (Steinsbekk et al.
2013). However, the availability of broad consent has also entailed changes in rules regarding research conducted on databanks, with respect to the earlier narrow/specific principle. In turn, this can result in other changes of rules (the so-called slippery slope argument), culminating in an undesirable effect (e.g., Takala 2017). Manson (2007, 2019) stresses that broad consent approaches do not offer the same kind of control over samples and data as specific consent approaches do. Lunshof et al. (2008) have emphasized that overly broad consent could become meaningless. They are of the opinion that the most likely pragmatic solution is to maximize data protection while informing subjects about its limits. Possibilities of applying IT solutions to informed consent have triggered discussions on dynamic consent and meta-consent. Dynamic consent enables subjects to agree to participate in new research projects in real time or to change their preferences concerning consent in a simple manner should circumstances change. The subject can specify what kind of information she/he wants, how often, and in what form (e.g., by ordinary postal letter, e-mail, or text message). Such an approach should give the participant more say in decision-making than before. In addition, it should increase transparency and public trust. Some authors have emphasized the following benefits of dynamic consent: streamlining of recruitment, enabling efficient re-contact, conformity to the highest legal standards, fine-grained withdrawal, enablement of better communication, improvement of scientific literacy, transparency, and risk management (Kaye et al. 2015). This type of solution also presumes that there are functioning systems for ethical review and opt-out possibilities. However, some critics are not convinced that dynamic consent is a better solution than broad consent.
Weaknesses of the dynamic concept may include that the ethical review of research projects becomes highly individualized and that subjects may face an ensuing risk of therapeutic misconception. Also, if there is a need for recurrent consent for every new project, the research might be viewed as trivial (Steinsbekk et al. 2013). On this view, then, dynamic consent does not seem to have more ethical strengths than broad consent. Meta-consent goes one step further than dynamic consent, making it possible for people to decide according to their own preferences how and when they would like to receive a request for consent and how they want their health data and biological materials to be used in the future. People will be offered the possibility to decide differently about different types of data and material and the purposes these will be used for; the types of data may include electronic patient records or data regarding samples. People will also be offered the possibility to decide whether their data and samples may be used for commercial research (Ploug and Holm 2016, 2017). In the meta-consent process, people can choose among four different types of consent: firstly, consent to specific research projects (specific consent); secondly, consent to broad categories of research (broad consent); thirdly, consent to all research (blanket consent); and finally, refusal of all research use (blanket refusal) (Ploug and Holm 2016, 2017). The subjects will be offered the possibility to change their preferences. They will also be asked to revisit their preferences and choices at regular intervals (Ploug and Holm 2016, 2017).
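The meta-consent scheme described above can be pictured as a simple decision structure. The following sketch is purely illustrative, not an implementation from Ploug and Holm: the class names, data-type labels, and routing strings are assumptions, but the four consent types and the per-data-type, per-purpose preferences mirror the description in the text.

```python
from dataclasses import dataclass, field
from enum import Enum

class ConsentType(Enum):
    SPECIFIC = "specific"  # ask anew for each research project
    BROAD = "broad"        # consent to a broad category of research
    BLANKET = "blanket"    # consent to all research use
    REFUSAL = "refusal"    # refuse all research use

@dataclass
class MetaConsentProfile:
    # Preference per (data type, research category) pair,
    # e.g. ("patient_record", "epidemiology"); labels are hypothetical.
    preferences: dict = field(default_factory=dict)
    default: ConsentType = ConsentType.SPECIFIC

    def decide(self, data_type: str, category: str) -> str:
        """Route a concrete research request: 'use', 'refuse', or 'ask'."""
        pref = self.preferences.get((data_type, category), self.default)
        if pref == ConsentType.REFUSAL:
            return "refuse"
        if pref in (ConsentType.BROAD, ConsentType.BLANKET):
            return "use"
        return "ask"  # specific consent: forward the request to the participant

profile = MetaConsentProfile()
profile.preferences[("sample_data", "commercial")] = ConsentType.REFUSAL
profile.preferences[("patient_record", "epidemiology")] = ConsentType.BROAD
print(profile.decide("sample_data", "commercial"))       # refuse
print(profile.decide("patient_record", "epidemiology"))  # use
print(profile.decide("patient_record", "commercial"))    # ask
```

The point of the sketch is that, unlike broad consent's all-or-nothing choice, meta-consent stores a separate preference for each combination of data type and purpose, with unanswered combinations falling back to a participant-chosen default.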
As we have seen, the problem with open and broad consent is that subjects have to consent without having full information about future research of which they may become a part. Dynamic and meta-consent address this issue by asking for participants’ preferences. The problem with both dynamic and meta-consent is that, by repeatedly asking subjects to think about their preferences and make decisions, the whole process of consent may become routinized. As a result, decisions will become even less reflective. It has been pointed out that if consent is not based on subjects’ reflective decisions, it will no longer offer protection for their autonomy, and continuous asking about the subjects’ preferences may become an expression of paternalism (Helgesson and Eriksson 2011). Questions remain concerning how to deliver information and how to check whether subjects have understood it. Additionally, the need for efficiency has led to the creation of electronic informed consent. Here the most essential question is how to check that the subject has understood the information. Different solutions have been proposed, such as asking the subject to take a test before giving her/his signature. Multimedia solutions for consent have also been implemented, such as presenting the information in video form. This solution can be combined with signing a paper consent form and/or with direct answers from the research team to the subject’s questions. Understanding can be checked by asking the subject to rephrase the content in their own words and comparing this paraphrase with the information initially given in the consent form. Several interventions have been tested, including multimedia or computer-based consent, enhanced consent forms, extended discussion, and test/feedback (Flory et al. 2007). Agre and Rapkin have highlighted that a test might indicate areas where the subject’s knowledge is lacking.
It might very well be that more active learning processes are needed, with the caution that the use of media solutions does not necessarily result in subjects who are more informed (Agre and Rapkin 2003, p. 6).
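The test-before-signature idea mentioned above can be sketched as a simple comprehension gate in an electronic consent flow. This is a hypothetical illustration, not a validated instrument: the question identifiers, answer key, and pass threshold are all assumptions, and any real deployment would need ethics-committee approval of the actual items.

```python
def comprehension_check(answers: dict, key: dict, threshold: float = 1.0) -> bool:
    """Return True if the subject answered enough key items correctly."""
    correct = sum(1 for question, answer in answers.items()
                  if key.get(question) == answer)
    return correct / len(key) >= threshold

# Hypothetical answer key covering points the subject must have grasped.
key = {
    "is_this_research": True,         # this is research, not individual treatment
    "can_withdraw_anytime": True,     # withdrawal carries no consequences
    "data_shared_with_others": False, # per this illustrative protocol
}

answers = {"is_this_research": True, "can_withdraw_anytime": True,
           "data_shared_with_others": False}
if comprehension_check(answers, key):
    print("proceed to signature")
else:
    print("repeat information and discussion")
```

A design choice worth noting: a failed check should route back to further discussion with the research team rather than simply block participation, in line with the chapter's point that consent is a process, not a one-off event.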
New Research Possibilities due to Advancements in Science and Technology

Obviously, the understanding of informed consent has changed with time. The principle of informed consent was first introduced in medical ethics mainly to protect individuals against possible harm and to promote the research subject’s autonomy. Over time, the underpinnings of the idea of informed consent have shifted, both due to advances in science and technology (especially information and communication technologies) and due to changes in the type of data being gathered (big data). While originally consent was sought for a single study with a pre-specified timespan and specific purpose, with the rise of big data, growing possibilities of the secondary use of data, and the sharing of data with other researchers, it is becoming more and more difficult to foresee all the future uses of subject data. Genetic databases, especially those based on large populations, pose new challenges to the traditional research ethics framework centered on the individual and his/her informed consent. Thus
there is growing pressure to relinquish the traditional concept of informed consent and replace it with open or broad consent. Two main arguments have been used to justify the use of open/broad consent instead of traditional informed consent. The instrumental argument is that it is impossible to obtain fully informed consent, because at the time of collection, one does not know for what kind of research the samples will be used. The substantial argument is that there is a public interest in keeping samples for an unlimited time and using them for different research projects (Sutrop 2011b, p. 376). These are valid arguments; however, in giving consent, research subjects should be assured that sensitive personal information and data will be protected in a way that will not cause stigma or harm to their dignity, will not violate privacy and confidentiality, and will not lead to discriminatory treatment (Beauchamp 2011, pp. 519–520). In the case of biobanks, one presumes a certain amount of trust from the subjects. Since, at the stage of joining the biobank, one does not know who the researchers will be and what kind of research will be done on one’s data and samples, one has to trust the institutions (biobank, ethics committee, etc.) that will secure the fulfillment of these conditions. As open/broad consent clearly undermines individual autonomy (the only thing for which the subject can give consent is joining the database), there is an ongoing search for new forms of consent. But each of these forms of consent – open, broad, dynamic, and meta-consent – is problematic in its own way. Similarly, the issue of whether informed consent should be mandatory for the release of data to population-based databases has arisen in the context of the creation of electronic health records, intended to be useful for both clinical and research purposes.
In discussions on whether one should prefer an opt-in or opt-out model, the public interest argument (health data can be used for scientific purposes, statistics, and the planning and management of healthcare services) has come to prevail over the database participant’s individual interests in liberty, privacy, and self-determination; most countries have indicated that they favor opt-out policies (Sutrop 2011a).
Changing Ethical Frameworks

The mapping of the human genome has paved the way for many new developments in biomedicine and related fields. In research ethics, this milestone was accompanied by calls for changes to the prevailing ethical frameworks. Since the turn of the millennium there has been growing dissatisfaction with ethical frameworks centered on individual rights. Several bioethicists have expressed the need for a “communitarian turn” in bioethics as well as a need to develop new ethical frameworks focusing on more collective values, such as reciprocity, mutuality, solidarity, citizenship, and universality (Chadwick and Berg 2001; Knoppers and Chadwick 2005; Chadwick 2011). It has been envisioned that concepts of solidarity, community, and public interest should play a more prominent role alongside traditional concepts of biomedical ethics such as autonomy, privacy, and informed consent. Some critics have even claimed that research ethics made a big mistake by making individual autonomy a basic ethical principle.
It has been pointed out that an emphasis on individual autonomy, privacy, and informed consent can seriously hamper research that aims to further the common good (Academy of Medical Sciences 2006). While technologies for the gathering and analysis of information have evolved rapidly, strict regulations on data protection have obstructed the use of that information. Although there has been a growing dissatisfaction with the individual rights-centered ethical framework which emphasizes autonomy, in reality the main target of criticism has been the concept of informed consent, which has been faulted for being individual-centered to the detriment of collective or community interests, as well as being overly formal and unnecessarily cumbersome to obtain (Sutrop 2011b). In relation to biobanks, it is widely believed that restrictions requiring new informed consent for the re-use of biological samples and data severely limit research for the common good or the public interest (Beskow et al. 2001). It has also been pointed out that informed consent requirements for epidemiological, observational, or interventional studies are interpreted too narrowly, and this limits the ability of studies to provide new medical knowledge that would be beneficial to patients. It has been argued that epidemiological research has been impeded by the impossibility of collecting statistical data without a subject’s informed consent (Hansson 2010). Hansson has convincingly argued against too strict an interpretation of consent in studies entailing a low risk of harm to the individual. He suggests that research participants should have access to indirect means of exercising autonomy through institutions (e.g., ethical review boards) that then must find a proper balance between the potentially divergent interests of the individual and the public. In Hansson’s view, respect for autonomy does not imply deciding for oneself in isolation from others.
The Contested Concept of Autonomy

Although traditional informed consent has been criticized as overly individual-centered (to the detriment of collective and community interests), too formal, and unnecessarily cumbersome to obtain, there are plenty of bioethicists who continue to support rights-based ethics as most appropriate in protecting research subjects from potential harm. These defenders agree that there is a problem with too narrow an understanding of autonomy and too formalistic an application of informed consent, and they point to the need to reconceptualize informed consent altogether and reinterpret its underlying principle of respect for autonomy (Sutrop and Simm 2011). As an example, Neil C. Manson and Onora O'Neill (2007) have pointed out that in current bioethical discussions autonomy is understood too narrowly, mainly in the sense of autonomous individual-centered decisions. On the basis of this understanding, informed consent has been conceptualized as disclosure of information by those who seek consent and as decision-making by those whose consent is sought. These scholars argue that such a narrow focus ignores or underplays what is actually needed for effective communication and commitments between the parties (ibid.).
228
M. Sutrop and K. Lõuk
Sigurdur Kristinsson (2007) agrees with Manson and O’Neill that there is no such general duty to show mutual respect toward the formation of considered judgments. Kristinsson stresses that “the primary role of informed consent seems better understood as a way of respecting each person as a rational agent who enters into agreements as a moral equal based on honest information. In its secondary role, informed consent protects the subject’s well-being, because (1) judgments of what is burdensome or beneficial are often relative to the individual’s conception of the good, and (2) the experience of being coerced, deceived, or manipulated is generally a strike against one’s well-being” (Kristinsson 2007, p. 262). Kristinsson claims that although the Belmont Report has been very influential politically, its philosophical grounding is liable to critique. He emphasizes that one should instead have recourse to a Kantian approach, which argues for moral, rather than personal, autonomy, as a capacity which people possess to a greater or lesser degree. To have a capacity to act means to be able to act independently, correctly, and appropriately. However, there are good reasons to believe that autonomy is relational: one person can be free and independent in one context, but not in another situation or in other circumstances. Autonomy is also gradated; some people have greater, some lesser independence (O’Neill 2002, p. 23). According to Mackenzie et al. (2014) all people as humans – not only as possible research subjects – are vulnerable because they are influenced by the acts of others and also because they might need the help and care of others during their lifetime, at different moments in time, and to varying degrees: “Relational theorists regard agency and some degree of autonomy as important for a flourishing human life. 
For this reason, a relational approach is committed to the view that the obligations arising from vulnerability extend beyond protection from harm to the provision of the social support necessary to promote the autonomy of persons who are ‘more than ordinarily vulnerable’” (Mackenzie et al. 2014, p. 17). Theda Rehbock (2011) has also argued that the principle of respect for autonomy should not be seen as respect for autonomous decisions, leading to a narrow focus on the competency of those who are asked for consent. Rehbock (2011, p. 526) is of the opinion that respect for autonomy should be seen more broadly, as “respect of the will of the person.” Her view is that what we need is a broader, more differentiated understanding of the concept of autonomy which maintains its universality while admitting that such a principle can only be applied if other moral principles are also respected and applied (Rehbock 2011, p. 524). In addition to the common understanding, in which autonomy is regarded in close connection with principles such as competency, capacity for self-control, and rational decision-making, there is another condition which should not be overlooked: authenticity. The condition of authenticity is seen as identification with the motivating desire and action thereupon: “An authenticity condition would require actions to be consistent with a person’s reflectively accepted values and behaviour in order to be autonomous. Authenticity in this usage requires that actions faithfully represent the values, attitudes, motivations, and life plans that the individual personally accepts upon due consideration of the way he or she wishes to live” (Faden et al. 1986, p. 263). Faden and colleagues argue that if authenticity is seen as a reflective identification with a motivating desire, which is the precondition for autonomous action, then this is too
demanding. Many actions deemed intentional, carried out on the basis of understanding and not controlled or coerced, may nevertheless not be autonomous. However, an autonomous agent is one who is self-directed, rather than obeying the commands of others. "These descriptions of autonomy all presuppose the existence of an authentic self, a self that can be distinguished from the reigning influences of other persons or alien motives" (Macklin, cited in Faden et al. 1986, p. 263).
Practical Problems with Informed Consent

Besides the abovementioned substantive problems with the conceptual understanding of autonomy as the central concept of informed consent, there are also practical problems with applying the principle. From the field of biomedical research, the concept of informed consent has slowly and unevenly moved to other fields of research. However, this has not happened without resistance, and its appropriateness to other research contexts is still being debated. For example, in some research it is not possible to inform participants beforehand that they are indeed research subjects, as this would change their behavior and slant the results. In other contexts research subjects may be endangered if their written informed consent for participation in research leads to the revelation of something about their status, putting them at risk of discrimination or stigmatization (e.g., research on prostitutes or HIV-positive individuals). There may also be cases where asking for written consent can create suspicion among participants and destroy trust between researchers and research subjects (e.g., in some anthropological research). Several social scientists have expressed the concern that the principle of informed consent has been adopted mechanically by research ethics governance structures (Haggerty 2004; Israel 2015; Schrag 2010) and that it is a pure formality entailing an unnecessary bureaucratic burden. Indeed, empirical studies in the field of biomedical research have shown that people see the informed consent procedure as a pure formality; they do not bother to read information sheets and simply tick the right box. With increasing complexities in research and data management, this tendency is likely to increase.
Since it will be practically impossible for subjects to comprehend all the details of research and data management plans, let alone evaluate the potential risks, we should give up the idea that informed consent provides protection from potential harm. It should be the role of ethics governance to ensure that subjects are not exposed to unreasonable risks or treated unjustly. As pointed out by Sigurdur Kristinsson, "to put the main burden of assessing the risks and benefits of participation on the individual subject through informed consent would indeed be unfair" (Kristinsson 2009). Let us sum up: all the concerns and reservations addressed above point to the need to align the principle of informed consent with diverse and changing research contexts and new ethical frameworks. What we need is a more contextualized approach to the principle of informed consent. Obviously, it is impossible to apply this ethical principle in the same way to all forms of research. The so-called rules of informed consent were formulated in the context of invasive biomedical research involving humans and are not necessarily transferable to other contexts,
either in biomedical research (where new forms of research, such as population-based genetic databases, have emerged) or in other fields of research such as the social sciences. More attention should be directed to differences among research contexts.
Conclusions

Informed consent is a principle which was born in a specific context, namely, biomedicine, as a reaction to the instrumentalization of people for the benefit of science. It was adopted in order to protect research subjects from the possibility of harm. However, in addition to values such as autonomy and human dignity, informed consent must also protect other values (e.g., trust, cooperation), and one value cannot be preferred to another. In addition, the foregrounding of a one-sidedly understood autonomy as predominant in the hierarchy of values has led to a situation in which individual freedom and autonomy have been declared the supreme value, relegating collective values, such as solidarity and reciprocity, to the background. Problems also ensue from the circumstance that a principle adopted in one area of inquiry (biomedicine) cannot be extended on a literal basis to other areas of science. In addition, new areas of scholarly inquiry are emerging, and research contexts are changing; these may vary both within and across different scholarly fields. Therefore it is hardly surprising that applying this principle across the board and in the same way to all circumstances will lead to questions, and that new forms of informed consent have been proposed. The only solution is a contextual approach that takes into account differences among research fields and the settings in which research is carried out. A contextual approach need not mean that the application of the principle of informed consent has been relativized. Since informed consent defends a variety of values (autonomy, honesty toward research subjects, refusal to harm, human dignity, trust), in its application one should strive for balance among a range of values, including the realm of individual rights and the public interest. In addition, in view of such a context, one should take a flexible approach, avoiding unnecessary formalism and bureaucracy.
Acknowledgments

This work is based on research undertaken for the research project IUT20-5, funded by the Estonian Ministry of Education and Research and supported by the Centre of Excellence in Estonian Studies (European Union, European Regional Development Fund). We also profited from our research done for the PRO-RES project, funded by the European Commission Horizon 2020 Program. We are grateful to Tiina Kirss for her editing and help with English expression.
References

Academy of Medical Sciences (2006) Personal data for public good: using health information in medical research. A report from the Academy of Medical Sciences. Academy of Medical Sciences, London
Agre P, Rapkin B (2003) Improving informed consent: a comparison of four consent tools. IRB Eth Hum Res 25(6):1–7
Beauchamp TL (2011) Informed consent: its history, meaning, and present challenges. Camb Q Healthc Ethics 20(4):515–523
Beauchamp TL, Childress JF (1979/2009) Principles of biomedical ethics, 6th edn. Oxford University Press, Oxford
Beauchamp TL, Faden RR (1995/2004) Informed consent. In: Post SG (ed) Encyclopedia of bioethics, 3rd edn. Thomson, New York, pp 1271–1277
Beauchamp TL, Faden RR et al (1982) Ethical issues in social science research. Johns Hopkins University Press, Baltimore/London
Benedek TG (2014) "Case Neisser": experimental design, the beginnings of immunology, and informed consent. Perspect Biol Med 57:249–267
Beskow L, Burke W, Merz J, Barr PA, Terry S, Penchaszadeh VB et al (2001) Informed consent for population-based research involving genetics. JAMA 286(18):2315–2321
Bok S (1978) Lying: moral choice in public and private life. Vintage Books, New York
Capron AM (1982) Is consent always necessary in social science research? In: Beauchamp TL et al (eds) Ethical issues in social science research. Johns Hopkins University Press, Baltimore
Chadwick R (2011) The communitarian turn: myth or reality? Camb Q Healthc Ethics 20(4):546–553
Chadwick R, Berg K (2001) Solidarity and equity: new ethical frameworks for genetic databases. Nat Rev Genet 2:318–321
Dankar FK, Gergely M, Dankar SK (2019) Informed consent in biomedical research. Comput Struct Biotechnol J 17:463–474
Faden RR, Beauchamp TL, King NMP (1986) A history and theory of informed consent. Oxford University Press, New York
Flory J et al (2007) Informed consent for research. In: Ashcroft RE et al (eds) Principles of health care ethics, 2nd edn. Wiley, Chichester, pp 703–710
Gelinas L, Wertheimer A, Miller FG (2016) When and why is research without consent permissible? Hast Cent Rep 46(1):1–19
Haggerty KD (2004) Ethics creep: governing social science research in the name of ethics. Qual Sociol 27(4):391–414
Hansson M (2010) Do we need a wider view of autonomy in epidemiological research? BMJ 340:1172–1174
Helgesson G, Eriksson S (2011) Does informed consent have an expiry date? A critical reappraisal of informed consent as a process. Camb Q Healthc Ethics 20(1):85–92
Iphofen R (2009) Ethical decision making in social research: a practical guide. Palgrave Macmillan, Basingstoke
Israel M (2015) Research ethics and integrity for social scientists, 2nd edn. Sage, London
Israel M, Hay I (2006) Research ethics for social scientists. Sage, London
Kaye J (2004) Broad consent – the only option for population genetic databases? In: Árnason G, Nordal S, Árnason V (eds) Blood and data: ethical, legal, and social aspects of human genetic databases. University of Iceland Press, Reykjavik, pp 103–109
Kaye J et al (2015) Dynamic consent: a patient interface for twenty-first century research networks. Eur J Hum Genet 23(2):141–146
Knoppers BM, Chadwick R (2005) Human genetic research: emerging trends in ethics. Nat Rev Genet 6:75–79
Kristinsson S (2007) Autonomy and informed consent: a mistaken association? Med Health Care Phil 10(3):253–264
Kristinsson S (2009) The Belmont report's misleading conception of autonomy. AMA J Ethics 11(8):611–616
Kristinsson S, Árnason V (2007) Informed consent and human genetic research. In: Häyry M et al (eds) The ethics and governance of human genetic databases. Cambridge University Press, Cambridge, pp 199–216
Lunshof JE et al (2008) From genetic privacy to open consent. Nat Rev Genet 9(5):406–411
Mackenzie C et al (2014) Vulnerability: new essays in ethics and feminist philosophy. Oxford University Press, Oxford
Manson NC (2007) Consent and informed consent. In: Ashcroft RE et al (eds) Principles of health care ethics, 2nd edn. Wiley, Chichester, pp 297–303
Manson NC (2019) The ethics of biobanking: assessing the right to control problem for broad consent. Bioethics 33(5):540–549
Manson NC, O'Neill O (2007) Rethinking informed consent in bioethics. Cambridge University Press, Cambridge
Moreno JD, Caplan AL et al (1998) Informed consent. In: Chadwick R (ed) Encyclopedia of applied ethics. Academic, San Diego
Nõmper A (2005) Open consent – a new form of informed consent for population genetic databases. Tartu University Press, Tartu
Nuremberg Code (1947). https://history.nih.gov/research/downloads/nuremberg.pdf. Accessed 20 Aug 2019
O'Neill O (2002) Autonomy and trust in bioethics. Cambridge University Press, Cambridge
Ploug T, Holm S (2016) Meta consent – a flexible solution to the problem of secondary use of health data. Bioethics 30(9):721–732
Ploug T, Holm S (2017) Eliciting meta consent for future secondary research use of health data using a smartphone application – a proof of concept study in the Danish population. BMC Med Ethics 18:51. https://doi.org/10.1186/s12910-017-0209-6
Ransome P (2013) Ethics & values in social research. Palgrave Macmillan, Hampshire
Rehbock T (2011) Limits of autonomy in biomedical ethics? Conceptual clarifications. Camb Q Healthc Ethics 20(4):524–532
Resnik D (2018) The ethics of research with human subjects: protecting people, advancing science, promoting trust. Springer, Cham
Ruyter KW (2003) Forskningsetikk: beskyttelse av enkeltpersoner og samfunn. Gyldendal akademisk, Oslo
Schrag ZM (2010) Ethical imperialism: institutional review boards and the social sciences, 1965–2009. Johns Hopkins University Press, Baltimore
Social Research Association (2003) Ethical guidelines. https://sp.mahidol.ac.th/pdf/ref/10_Social%20Research%20Association.pdf. Accessed 24 Sept 2019
Steinsbekk KS et al (2013) Broad consent versus dynamic consent in biobank research: is passive participation an ethical problem? Eur J Hum Genet 21(9):897–902
Sutrop M (2011a) Changing ethical frameworks: from individual rights to the common good? Camb Q Healthc Ethics 20(4):533–545
Sutrop M (2011b) How to avoid a dichotomy between autonomy and beneficence: from liberalism to communitarianism and beyond. J Intern Med 269(4):375–379
Sutrop M, Florea C (2010) The guidance note for researchers and evaluators of social sciences and humanities. http://ec.europa.eu/research/participants/data/ref/fp7/89867/social-sciences-humanities_en.pdf. Accessed 24 Sept 2019
Sutrop M, Simm K (2011) Guest editorial: a call for contextualized bioethics: health, biomedical research, and security. Camb Q Healthc Ethics 20(4):511–513
Takala T (2017) Setting a dangerous precedent? Ethical issues in human genetic database research. Med Law Int 8(2):105–137
The Belmont Report (1979) Ethical principles and guidelines for the protection of human subjects of research. https://www.hhs.gov/ohrp/regulations-and-policy/belmont-report/read-the-belmont-report/index.html. Accessed 20 Aug 2019
The European Commission (2018) Ethics in social sciences and humanities. https://ec.europa.eu/research/participants/data/ref/h2020/other/hi/h2020_ethics-soc-science-humanities_en.pdf. Accessed 24 Sept 2019
The European Commission (2019) Guidance note – research on refugees, asylum seekers & migrants. https://ec.europa.eu/research/participants/data/ref/h2020/other/hi/guide_research-refugees-migrants_en.pdf. Accessed 24 Sept 2019
Vollmann J, Winau R (1996) Informed consent in human experimentation before the Nuremberg code. BMJ 313:1445–1447
World Medical Association (2013) Declaration of Helsinki – ethical principles for medical research involving human subjects. https://www.wma.net/policies-post/wma-declaration-of-helsinki-ethical-principles-for-medical-research-involving-human-subjects/. Accessed 15 Mar 2019
Privacy in Research Ethics
13
Kevin Macnish
Contents

Introduction 234
How Does Research Impact Privacy? 235
Background 236
Key Issues 238
What Is Privacy? 238
Why Is Privacy Valuable? 239
When Is Privacy Not Valuable? 240
The Current Debate 240
National Security 240
Social Media 241
Data Analytics 242
Horizon Scanning 243
Internet of Things 243
Facial Recognition 243
Ubiquitous Surveillance 244
Managing Privacy 244
Consent 244
Secure Storage 245
Anonymization and Pseudonymization 246
Group Work 246
Conclusion 247
References 247
Abstract
This chapter considers the importance of privacy in contemporary research and how best to deal with some of the challenges raised around privacy. It opens with a number of questions about privacy which will be considered in the chapter, including a consideration as to the impact that privacy has on research.

K. Macnish (*)
Department of Philosophy, University of Twente, Enschede, The Netherlands
e-mail: [email protected]

© Springer Nature Switzerland AG 2020
R. Iphofen (ed.), Handbook of Research Ethics and Scientific Integrity, https://doi.org/10.1007/978-3-030-16759-2_9
Consideration is given to the history of how privacy has been treated (in the West) before looking at key issues of what privacy is and why it is valuable. The current debate focuses on concerns relating to national security, social media, and data analytics, while, looking ahead, the Internet of Things, facial recognition, and the potential for ubiquitous surveillance are raised. Finally, the chapter considers a number of means for managing privacy within the research context: consent, secure storage, anonymization and pseudonymization, and the difficulties that arise when working with groups of people.

Keywords

Confidentiality · Consent · Internet of things · Social media · Facial recognition · Surveillance · Anonymization · National security · Data analytics
Introduction

What exactly is privacy and why do we care about it? Some people see privacy as a right which guards their information from intrusion by others (Allen 1988; Macnish 2016; Gavison 1984; Tavani and Moor 2001; Thomson 1975). Others take privacy to be a means of controlling their information (Inness 1996; Moore 2008; Parent 1983; Scanlon 1975; Westin 2003). Still others see privacy in terms of protecting the freedom to make a decision (Etzioni 1999; Squires 1994).

Why we should value privacy is equally disputed. It may be that privacy protects our autonomy, our freedom to make decisions as to how we wish to live our lives (Nathan 1990; Benn 1971). Privacy can also allow us the ability to preserve our reputation (Posner 1984) and the freedom to experiment, either physically in the privacy of our own rooms or mentally in the space of our own minds (Gavison 1984; DeCew 2000). There are also social benefits to privacy, both in terms of allowing for healthy questioning and dissent within a liberal democracy, and in permitting genuine freedom at the ballot box (Regan 1995; Solove 2002; Roessler and Mokrosinska 2015).

Questions about privacy are further complicated by the fact that what we deem to be private (or, rather, what we deem ought to be private) changes from generation to generation and across the globe. Victorians would have been outraged at the amount of flesh seen on an average European or American beach today. While this may be a mark that we are less inhibited than the Victorians, their predecessors enjoyed the beach naked, and while this option still exists, for many it may be relaxing our inhibitions a little too far. Likewise, in many so-called developed countries we exercise our privacy through having private rooms, hallways to separate those rooms, and lockable doors on shared rooms used for personal ablutions.
Each of these is a relatively new development in architectural design in these countries and does not exist for most people in other countries around the world. Nonetheless, just because some families live ten people to a room, it does not follow that they have no understanding of or appreciation for privacy. Anthropological studies have shown that privacy is afforded to other people in such
circumstances by turning one's back, or pretending that one is not aware of what the other is doing. In this way, lovemaking and defecation (to take two examples) remain private activities, even when there are no walls to separate us (Locke 2010, 79–89). This, taken together with the fact that privacy, in whatever form, seems to have been a value for the vast majority of people throughout history, would seem to suggest that privacy is in some ways fundamental to what it means to be human. While it would not be ethical to raise a person to adulthood under conditions of total surveillance, fictional suggestions of what such a world might look like are both plausible and chilling (Orwell 2004; Bradbury 2008; Zamyatin 1993; Eggers 2014).
How Does Research Impact Privacy?

Research that involves people typically involves gaining information from or about those people. This information is then stored, processed, and published in some form or another. Whichever definition of privacy is preferred, the act of carrying out research on people will diminish their privacy to some extent. Either information about them is known which might not otherwise have been known, or the control of that information is passed from the research subject to the researcher. If and when this information is then published in the findings of the research, the information becomes less private still. Now others can access that information, and so the research subject loses still more control over her information while others gain a greater degree of access to it. She hence experiences a further loss of privacy, although this can be mitigated as we shall see.

A relatively recent move in research has been the cross-pollination of data repositories. This involves the transfer of information from one database to another. In the process, information about a person becomes available to still more people, and, as we shall see, through such cross-pollination more information can be determined about the originating source despite attempts to protect that source.

Very few researchers today would be so cavalier as to publish private information about research subjects, at least not without their prior permission. We know to respect the privacy of research subjects, and laws such as the General Data Protection Regulation ((EU) 2016/679 2016) in Europe, coupled with institutional checks and balances, ensure that researchers act accordingly. This does not seem strange today, and the professional practice of confidentiality is widely recognized as central to a researcher's behavior.
In promising confidentiality to a research subject, the researcher gives assurances that their information will be treated with respect and their privacy maintained such that anyone not indicated in a consent form should not be able to access the information. Professional confidentiality can, though, run into problems. What should a researcher do when a research subject reveals details of criminal activity (past or future)? In many countries, there are laws in place which would hold the researcher criminally responsible if she withheld information on a future crime from the police. Nonetheless, such research might be deemed important. Likewise, in the case of retrospective crimes, the police might demand information from the researcher. In
either case, the researcher must make a decision as to the lengths to which he will go in protecting the information he has derived from his subjects. Furthermore, these lengths should be made abundantly clear to the research subject as part of the consent process in order that the subject is able to make an informed decision as to what information to share and what to withhold. This scenario played out in 2011, when the Police Service of Northern Ireland requested information held in the Boston College archive regarding interviews two researchers had carried out in relation to the activities of the Provisional Irish Republican Army in the 1970s. Both researchers were known to have refused to give similar information to the police in the past, which had given them credibility with their research subjects. In this particular case, Boston College passed the material to the police, thereby undermining the assurances of the researchers (Sampson 2016; McDonald 2016; Erwin 2018).

Helen Nissenbaum addresses confidentiality by suggesting that there are societal norms regarding how private data should be treated (Nissenbaum 2009). When those norms are violated, a wrong has occurred; this provides a descriptive account of harms to privacy. Significantly, the norms differ between relationships, and so information pertinent to a conversation with a medical practitioner will be different from information pertinent to a conversation with a bank manager.
Background

The history of privacy is arguably as old as humanity itself. In terms of recorded history, we may see early indications of privacy, for the elite at least, in the Old Testament. The Israelite Ehud kills the Moabite King Eglon while the latter is relieving himself on his toilet. Ehud leaves and closes the door behind him, so Eglon's courtiers are unwilling to enter the room where the dead king now lies for fear of intruding (Judges n.d., 3:12–30). In the New Testament, Jesus instructs his followers to go into a private space to pray, suggesting the common availability of such spaces (Matthew n.d., 6:6). Generally, privacy in terms of space and of much personal information was limited historically by living conditions which did not allow individuals to escape the scrutiny of others. While people may have been able to maintain a degree of privacy in the moment thanks to the tolerance and back-turning of others, the fact remains that everyone in such situations knows what the other is doing.

Laws regarding privacy first came into effect in the late fourteenth century in England, when regulations regarding eavesdropping were introduced. These made it punishable to approach a house from the outside and position oneself in order to hear or see what was happening inside (Locke 2010, 128–129). Interestingly, though, there seem to have been no such laws condemning those who positioned themselves against an adjoining wall for the same purpose.

Domestic conditions meant that, up until the mid-twentieth century, few outside the moneyed elite could enjoy what we would today consider true privacy, in Europe and the USA at least. Throughout the industrial revolution, as people moved from
13
Privacy in Research Ethics
237
rural to urban areas and with associated poor living conditions, they often found themselves living with multiple people, if not multiple families, in a single room. It was in this period that institutions started to be designed that would fundamentally restrict people's privacy in order to affect their behavior. Jeremy Bentham wrote about his Panopticon design in 1787, a prison (but also potentially a hospital, school, or factory) in which every inmate could be monitored by an unseen guard (Bentham 1995). Bentham suggested that not knowing if they were being watched or not at any given moment would alter the inmate's behavior such that they would become more compliant. He also suggested that allowing members of the general public to view such institutions, and the sense of horror they experienced, would be sufficient to deter most from committing crimes themselves. Michel Foucault points out that it was at the same time that much of society was starting to be organized along similar lines of monitoring, rendering those affected more efficient cogs in an industrial machine (Foucault 1991, 210–216). In the twentieth century, Bentham's Panopticon virtually became a model for totalitarian societies. Such societies employed advanced technological and psychological surveillance techniques and, as a result of citizens not knowing precisely who was watching or listening, they not only maintained a watchful eye and a listening ear on all citizens but also created a constant and pervasive sense that one might be surveilled at any moment (Funder 2004). The resulting paranoia was well explored in twentieth-century fiction such as Zamyatin's We, Boye's Kallocain, and Orwell's 1984 (Zamyatin 1993; Boye and Vowles 2002; Orwell 2004). In each case, life without privacy is envisaged, explored, and, ultimately, rejected as inhuman.
The development of radio, telegraphy, and the telephone at the turn of the twentieth century meant that people were able to communicate at greater distances, while others, and particularly governments, were able to listen in. Following the Second World War technological advances meant that privacy became threatened still further. Satellite technology enabled truly global communications, but also global surveillance by those who had the money, power, and interest so to do. The move to fiber optic cable and the development of the World Wide Web have only served to increase both communications and the technical capacity for intercepting those communications. Despite this history, it would be wrong to conclude, as some seem to have, that privacy is dead. If privacy were truly a thing of the past then it would be strange that we should continue to mourn it. Instead, privacy is something that we continue to value and arguably do so more today than at any time in the past. Indeed, many democratic societies have developed laws and regulations to ensure that this remains the case. We are shocked, albeit perhaps not surprised, when we hear of yet another leak of personal information over the Internet. Our response is not to shrug and say “that’s the death of privacy for you,” but rather to seek to punish those who did not protect our information sufficiently well. Throughout this history, there has been relatively little reflection on the meaning of privacy. While some have considered the value or implications of a loss of privacy, it is generally acknowledged that the first written attempt to define privacy
238
K. Macnish
was published in the late nineteenth century by two American lawyers, Samuel Warren and Louis Brandeis (1890). Their paper was written in response to what Warren and Brandeis saw as an invasion of their privacy when journalists used newly developed telephoto lenses to take pictures of a private, high society wedding. In this paper, the authors argue that privacy is one example of the right to be let alone, a definition which still carries traction today. Following this, there were few written reflections on the nature of privacy until the early 1970s, when privacy became a key issue in the Supreme Court's ruling regarding abortion. Several papers attended to precisely what privacy, or a right to privacy, might involve. Judith Jarvis Thomson argued that there was no right to privacy as such and that privacy consisted of a collection of other rights, most notably property rights (Thomson 1975). In response, Thomas Scanlon argued that there were zones of privacy surrounding us into which people could intrude (Scanlon 1975). Taking a different tack, James Rachels argued that the reason we value privacy is that it affords us control over our relationships (Rachels 1975). As a relationship becomes more intimate, so less privacy is experienced between those people involved in the relationship. While the aforementioned authors focus on privacy from an analytic perspective, philosophers in the continental tradition have taken a different approach. Foucault referenced Bentham to demonstrate how a loss of privacy could be a means of state control (Foucault 1991, 215–216). Gilles Deleuze responded 13 years later, arguing that society was more fluid than Foucault thought and that the assault on privacy was no longer restricted to state institutions. Instead, he argued, we never really leave the practices embraced by those institutions: we are always learning (not just when we are at school) and we are always working (not just when we are at the workplace).
This led him to refer to "societies of control" (Deleuze 1990). More recently still, David Lyon and Zygmunt Bauman have taken Bauman's theories on "liquid modernity" and applied these to surveillance (Bauman and Lyon 2012). As such, the assault on privacy is now seen less as a matter of state intrusion (as envisaged by Orwell) than as a matter of everyone potentially monitoring everyone else. Furthermore, perhaps unwittingly or, at least, without proper forethought, this is something that we have embraced as we upload photographs of our breakfast to social media and answer our emails while on holiday.
Key Issues We have already touched on a number of key issues regarding privacy in the above section on history. These can be summarized as: what is privacy? why is privacy valuable? and when is privacy not valuable?
What Is Privacy? As noted above, a number of different definitions of privacy have been suggested. Thomas Scanlon has argued that privacy is a right which we enjoy in concentric
circles or zones whereby zones immediately surrounding individuals contain more intimate information while those further out contain less intimate information (Scanlon 1975). Judith Thomson, by contrast, has argued that there is no such right to privacy, but rather a bundle of other rights (Thomson 1975). Another consideration regarding the definition of privacy is whether it is a matter of control or access. Take the example of an email: imagine that Human Resources erroneously sends an email containing personal information about you to one of your colleagues. Is your privacy infringed in the sending or in the reading of the email? The control account would hold to the former. The access account would claim that while the sending of the personal information is wrong, your privacy is only compromised when the email is read. Many writers hold to the control account, arguing that this is what gives privacy its value: the ability to control information about us. By contrast, others hold that control of information may be important, but that is not what is meant by privacy. They take a narrower view that privacy is purely a matter of access (Macnish 2016). Hence, they would hold that privacy is only infringed when an email, letter, or diary is actually read ("accessed"). Prior to this there may be other concerns, but these are not privacy concerns. A third debate considers precisely what it is that privacy concerns. Some have held a purely informational account of privacy, whereby privacy involves information about a person (Westin 2003). This has been disputed by others who hold that privacy may relate to more than just information (Doyle 2009). If so, then the question is raised as to what it is precisely that privacy does relate to. While all agree on information, many would agree that space is also a relevant issue for privacy (Allen 1999). Likewise, privacy seems likely to relate to image, smell, sound, touch, and even taste.
For example, what a person looks like naked, how that person smells and tastes, and how the skin of that person feels are all extremely intimate and private details. And while we may not keep our voices private, we may well keep our singing private to the shower or the bedroom. Finally, privacy has also been related to decision-making (Squires 1994). If I should have privacy to dress and undress, then I also should have privacy to determine how I wish to live my life, who I would vote for, and so on.
Why Is Privacy Valuable? At face value, privacy appears to be valuable because it protects us from harm. It allows us space to think and to experiment without fear of reprisal from society, if society should happen to reject what it is that we think about or experiment with. Likewise, privacy is dignity-preserving. I can try things out and act in a way that would "make a fool of myself" in the privacy of my own home without incurring much embarrassment. With these as background, we can see that privacy is also protective of autonomy (Nathan 1990; Benn 1971). When I have privacy, I have the freedom to make decisions about my life and the freedom to think through those decisions and their implications before I choose to act on them. However, the aforementioned values are largely focused on the individual. Privacy also involves others: consider, for example, a love letter sent between two people or a private telephone conversation. In such cases, James Rachels has put forward
the argument that privacy is valuable because it helps us to define the nature of those relationships (Rachels 1975). We tend to reduce our privacy as a means of becoming intimate, suggesting that more intimate relationships can be defined as involving less privacy. At a broader societal level, Priscilla Regan, Daniel Solove, Dorota Mokrosinska, and Beate Roessler have all argued that privacy serves an important social function insofar as it is a requirement for a functioning democracy and, following Rachels, helps to provide definition to the interplay of relationships within that society (Regan 1995; Solove 2002; Roessler and Mokrosinska 2015).
When Is Privacy Not Valuable? Privacy is generally seen as beneficial in society. However, this value has been challenged. Feminist thinkers such as Anita Allen, Ruth Gavison, and Annabelle Lever have pointed out that respect for privacy has generally acted as a barrier to the state operating within the domestic sphere (Allen 1988; Gavison 1992; Lever 2005). It is within this bounded sphere that men have been able to abuse women and children. Hence, an excessive emphasis on privacy may prove not to be in the interests of those who would otherwise be protected by the state.
The Current Debate There have been a number of developments over the last 20 years that have led to new areas of debate and discussion regarding privacy. I will focus on three areas in particular. The first concerns questions of national security, particularly in the wake of 9/11, where new questions as to where the legitimate boundaries of the state should lie when it comes to invading the privacy of its citizens have been raised (Macnish 2016). The second concerns the rise of social media which has led some to question whether privacy is still of value in contemporary society, given how little care many seem to place on its preservation (Preston 2014; Weinstein 2013). Third, I will discuss the area of data analytics (so-called “big data”) which has raised a number of similarly pertinent questions (boyd and Crawford 2012; Polonetsky and Tene 2012).
National Security Following the attacks of 9/11, there was, in the USA, widespread support for government agencies doing whatever they needed to catch the terrorists behind the atrocities. A new law, the Patriot Act (2001), was passed which allowed for the government to access vast quantities of US citizens’ data. Public opinion shifted, though, when Edward Snowden leaked a large number of documents from the National Security Agency (NSA) which appeared to demonstrate just how much information the US government can now access on its own citizens (MacAskill et al. 2013). Also implicated in these leaks were British, Canadian, and Australian intelligence organizations.
The public debate following the Snowden leaks centered on the apparent zero-sum game of privacy versus security. If we want security, it was argued, then we must sacrifice some of our interests in privacy. Alternatively, if we want privacy, then we must sacrifice some of our national security interests. However, this is an oversimplification. One of the reasons we value privacy is that it affords us some form of security; a loss of privacy is, then, tantamount to a loss of security. Furthermore, one of the reasons we value security is that it enables us to live in the sort of society that we prefer, namely, one that endorses and protects private lives. Further, this argument fails to set boundaries on precisely how much security and how much privacy are at stake. It is unlikely that those calling for more security would be willing to lose virtually all of their privacy to see this happen. At the same time, privacy advocates would be unlikely to accept a state of lawless anarchy, if that were the only means of securing our privacy (Macnish 2017). Finally, key questions need to be asked as to who is conducting the surveillance, of what, why, and how, with an eye kept clearly on the political issue of who in practice is setting the boundaries as to what is "acceptable" (Woodfield and Iphofen 2017).
Social Media In the period since the beginning of the twenty-first century, social media has proliferated. Platforms such as Facebook, Twitter, and Instagram allow people to connect across the world and share information about their lives. The nature of this information, and its quantity, implies to many onlookers that those sharing the information do not care about their privacy (Turkle 2017). However, drawing such a hasty conclusion is a mistake. Studies have shown that the information shared on social media is often highly curated in order to present a particular image of one's life (boyd 2014). Such curation can only occur in a world where privacy is respected. Furthermore, in response to suggestions that the young in particular do not understand the value of privacy, it is notable that many teenagers maintain several different social media accounts: one for their parents and another for their friends. The availability of social media data, either as an open resource or accessed through payment to the social media platform, is a potential goldmine for many researchers (Jernigan and Mistree 2009; Wang and Kosinski 2018; Arthur 2014; Tobitt 2018). The volume and the nature of the data here allow the researcher to gain a vast amount of information about the research subject without having to trouble that subject, or indeed engage with them in any way. Problems arise here, though, as the subject in question almost certainly did not post their information to social media for the purpose of research (Woodfield and Iphofen 2017, 4–5; see also in the same volume Williams et al. 2017; Salmons 2017; Townsend and Wallace 2017). As such, it is important to ask whether publishing information on social media is like sending a postcard to a friend (which may be read by others) or publishing a letter in the newspaper (which will be read by others). In the former case, a loss of privacy through other people reading the material may be anticipated.
In the latter, the information is specifically targeted at those other people.
Data Analytics Data analytics involves the processing of large amounts of data to look for patterns which may not be uncovered by the analysis of smaller data sets or those driven by specific hypotheses. This does not have to involve data about people. It may be weather data, or data about soil quality. However, where such data do involve information about people there are clear privacy implications. Two particular areas of concern are whether the anonymization of data can adequately protect privacy and the predictions that can be made on the basis of the data. One way in which the privacy of personal data can be preserved is for those data to be anonymized. That is, details contained in the data which may be used to identify a specific individual should be stripped out, leaving only anonymous but nonetheless relevant material behind. However, problems can arise when information is not as anonymous as was first thought. For instance, there may be very few people occupying the extreme ends of a dataset, such that in a dataset of heights in a particular country, there may only be a handful of people at a particular height, rendering such people easily identifiable from an apparently anonymous dataset. Even when data in a particular dataset are genuinely anonymous, there may still be issues when that dataset is combined with other datasets. What was once genuinely anonymous may no longer be so following such a combination. For example, in 1996, in order to help people choose between different medical providers, the state of Massachusetts, under Governor William Weld, published medical information on the Internet in an anonymized format. Latanya Sweeney took this anonymous information and cross-referenced it with other publicly available data, including voter records, to identify the Governor's own medical records. She then printed these out and sent them to his office (Ohm 2009). One of the values of data analytics is that it can give us strong indications of likely patterns of behavior.
At the same time, it may also reveal patterns of behavior or similarly private information about an individual. For example, in 2007 researchers at Harvard established that a person who was homosexual was likely to have a higher number of openly homosexual friends on Facebook than a person who was heterosexual (Jernigan and Mistree 2009). This meant that by analyzing a particular person's friends on Facebook, one could determine with a relatively high degree of accuracy whether that person was themselves homosexual. More recently, research that purports to determine an individual's sexuality using facial recognition has been published (Wang and Kosinski 2018). While we may want to question the social implications of conducting such research, the fact that this research does reveal highly private information is significant. The Cambridge Analytica scandal is similarly instructive here. This involved accessing data of nonconsenting people on Facebook and subsequently using this to direct targeted political advertising during the 2016 UK referendum and the US Presidential election the same year (Cadwalladr and Graham-Harrison 2018; Ienca and Vayena 2018).
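The re-identification risk described above, in which an apparently anonymous dataset is combined with an auxiliary public dataset, can be sketched in a few lines of code. The following Python illustration is hypothetical: the records, names, and choice of quasi-identifiers are invented for the example and only loosely modeled on the case Ohm discusses.

```python
# Hypothetical linkage-attack sketch. All records below are invented.

# An "anonymized" medical dataset: direct identifiers (names) removed, but
# quasi-identifiers (ZIP code, birth date, sex) retained.
medical = [
    {"zip": "02138", "birth": "1945-07-31", "sex": "M", "diagnosis": "hypertension"},
    {"zip": "02139", "birth": "1962-01-15", "sex": "F", "diagnosis": "asthma"},
]

# A public auxiliary dataset (e.g., a voter roll) linking the same
# quasi-identifiers back to names.
voters = [
    {"name": "W. Weld", "zip": "02138", "birth": "1945-07-31", "sex": "M"},
    {"name": "J. Doe", "zip": "02139", "birth": "1980-03-02", "sex": "F"},
]

def reidentify(medical, voters):
    """Join the two datasets on their shared quasi-identifiers."""
    matches = []
    for m in medical:
        for v in voters:
            if (m["zip"], m["birth"], m["sex"]) == (v["zip"], v["birth"], v["sex"]):
                matches.append({"name": v["name"], "diagnosis": m["diagnosis"]})
    return matches

# A unique match on ZIP + birth date + sex re-identifies an "anonymous" record.
print(reidentify(medical, voters))
```

The point of the sketch is that neither dataset is identifying on its own; it is the join on shared quasi-identifiers that undoes the anonymization, which is why combinations of datasets matter as much as any single release.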
Horizon Scanning Challenges to privacy are likely to increase as the technology for discovering more information about people becomes more prevalent and more accurate. At the same time, there is reason to believe that people will continue to provide considerable amounts of personal information to the Internet, presenting a pool for potential research which would be accessible without consent. Likely areas of concern in the near future include the Internet of Things, facial recognition, and ubiquitous surveillance.
Internet of Things Increasingly, technological artifacts are being connected to the Internet. This has moved beyond home computers and mobile phones to include kettles, fridges, baby monitoring devices, and cars. In each case, the connectivity allows the manufacturer to monitor and maintain the device and provide updates when needed. It also means that the owner of the device can benefit. For example, by connecting a mattress to a coffee machine, it is possible for the coffee machine to brew a fresh cup automatically as the owner begins to wake (Hill and Mattu 2018). Likewise, by maintaining an awareness of the contents of a fridge, that fridge can remotely order staples such as milk when they run low. In each case, information about individuals and their private habits is made available over the Internet and as such is potentially available for research purposes.
Facial Recognition The quality of facial recognition has now reached the stage where automated passport barriers are in operation in many airports around the world. At the time of writing, however, there are certain constraints: the face must be presented directly to a camera, and the speed of recognition remains relatively slow. Nevertheless, it will not be long before real-time analysis of crowds is feasible, whereby every individual in a crowd is automatically recognized, labeled, and possibly connected to a social media profile. Once more, this provides a vast amount of information of potential interest to the researcher as to the sort of people (as identified by their social media profiles) who attend certain public events, be those museum openings or antigovernment protests. For many people, however, the fact that one has left one's house does not mean that it is legitimate for someone to track one's activities; to do so would be tantamount to stalking. Likewise, just because someone is in a public place, it does not follow that they should have no reasonable expectation of privacy. For example, a restaurant is a public space, but were we to discover that a restaurateur had placed microphones in the flowers on every table, we would feel that the privacy of those in the restaurant had been violated, even if the only use of the data was to improve the service in the restaurant.
Ubiquitous Surveillance As can be seen from both the Internet of Things and facial recognition, current developments enable connections to be formed between different areas of our lives. While this means that we may expect a high level of convenience, it also entails a high degree of surveillance and a reduced expectation of privacy in our lives. Questions remain as to who is watching, who has the right to watch, and what legitimately may be done with that information. As noted, it is tempting for the researcher to think that the information is now publicly available and, therefore, that it is legitimate to make use of it for research purposes. However, this is not the case. The individual research subject still has reasonable expectations as to what will happen to their information, expectations that should be honored if their autonomy is to be respected and if the researcher is serious about preventing harm occurring to the research subject.
Managing Privacy We have seen in this chapter that privacy is, in most cases, a valuable consideration for individuals and society. Despite proclamations that privacy is dead, most people still have a strong expectation of privacy in different areas and contexts. The question then arises as to how privacy should be managed effectively in research scenarios. In this section, we will consider four means of managing privacy: consent, secure storage, anonymization and pseudonymization, and group work.
Consent Consent is often treated as an ethical panacea, a license for researchers to pursue their research. If a person has given their consent, then it is legitimate for the researcher to carry out research upon them, at least within reason. It is important that this caveat is properly recognized and understood. "Within reason" means that the subject should not be exposed to harm as a result of the research (Manson and O'Neill 2007). This is similarly true in the case of privacy. It is generally felt that information about me is mine to give away and to dispose of as I choose. Indeed, many people believe that it is reasonable to speak about personal data as if they were owned and hence could be treated like property (Obama 2016). Irrespective of the ownership question, it remains the case that a research subject's consent should be sought prior to collecting and using their information. While the primacy of consent may appear straightforward, there are increasing instances in which such consent is harder to obtain. Take, for instance, information published to social media under a pseudonym. In such a case, can the researcher use the information if they are unable to contact the originator in order to gain consent? Second, what counts as consent for information published on the Internet? We saw above that a person posting information to a social media platform may not have
research ends in mind when they publish that information. As such, is there any way in which they may have given prior consent to that information being used for research? Some might argue that the terms and conditions of the social media platform will typically include the possibility of research uses of data as a condition of access to the platform. However, it is also widely known that few people read the terms and conditions. As such, can it reasonably be said that a user of the platform has given meaningful consent to the data being used? If not, then even approaching a person after their data have been posted (in order to gain consent for the use of those data) cannot constitute prior informed consent. One way to address at least some of these issues is to gain prior consent to data sharing, or to making the final data set openly accessible (possibly in an anonymized form). The risk here is that some subjects may refuse to provide any data if such sharing is suggested. The preferred response in these cases is therefore to offer data sharing as an option on an informed consent form to which subjects may opt in if they are willing for their data to be shared in this manner (see Bishop and Gray 2017).
Secure Storage Once data about a person have been collected, they need to be kept in a secure manner such that they cannot be accessed by anyone who has not received permission from the research subject (via the informed consent form). This means that hard copies of the data should be kept in locked cabinets which can only be accessed by those with specific permission to access the data. Soft copies are more vulnerable, either to being accessed by someone inside the institution where they are held or by an external hacker. In both cases, security is therefore highly important in order to protect the interests of the research subject and maintain trust between the researcher (and researchers in general) and the general public. In such cases (or when hard copies are scanned and then destroyed), it is crucial that sensitive or personal data be stored securely on password-protected, encrypted, or air-gapped servers. This includes both raw data (such as sound files and data sets) and data derived from raw data which are similarly revealing (such as transcripts and NVivo files). If research institutions become known for allowing personal data to leak, the public will cease to trust those institutions to look after their data competently. It is therefore in the interests of the research community as a whole to take this security extremely seriously. There are a number of ways in which data may leak from an organization. We have already seen that this may come from a hacker or internal abuse. In addition to those, data may be leaked or lost. A leak might occur if a person with access to the data has reason to object to an activity of the research institution which holds those data. The individual may then choose to leak the data, as Edward Snowden did with information from the National Security Agency, in order to make a political point.
By contrast, a loss occurs when a person with legitimate access to the data stores that data on improperly secured media (such as a USB drive or an unencrypted laptop) which is then misplaced. This has happened numerous times as CDs, USB drives, and hardcopy papers have been left by mistake in public places.
Finally, we should note a trade-off between security and convenience. In some cases, access to properly secure, nonmobile media is impossible (e.g., in cases of research carried out in some rural parts of Africa). In others, though, where secure, encrypted, and password-protected servers are available, the difficulty in using these may mean that some researchers prefer to use more convenient, less secure alternatives. The unnecessary dangers of this approach are clear, and the risks taken with subjects' data unjustified.
Anonymization and Pseudonymization One way to cope with the security risks associated with private information is to anonymize the data. For example, rather than saying "William Smith, 43, eats breakfast every day" we might record "X eats breakfast every day." In this case, if someone were to get hold of the dataset, they would not be able to determine the name or age of the individual who ate breakfast every day. An anonymized dataset is one from which the individual subjects providing the data cannot be determined. A pseudonymized dataset is one from which the individual subjects providing the data can be determined by reference to a separate list. In the above example, if there were a separate list which stated that "X is William Smith (born 1975)," then the dataset would be pseudonymized rather than anonymized. Paul Ohm (2009) has raised an interesting and important challenge to anonymization. He argues that there is an inverse relationship between the informational richness of data and their anonymity: the more anonymous a dataset is, the less informationally rich it is. Continuing with the William Smith example, we can see that "X eats breakfast every day" is less informationally rich than "William Smith, 43, eats breakfast every day." As well as providing the name and age of the research subject, the latter suggests information about his gender and may also provide information about his ethnicity. (A full discussion of the issues concerning anonymity is to be found in ▶ Chap. 18, "A Best Practice Approach to Anonymization," of the current volume.) Once data have been anonymized in this manner, it becomes easier to share those data or to place them in an openly accessible repository. However, given the challenges of fully anonymizing data, such moves towards data sharing should only be undertaken with the full informed consent of the research participant.
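The distinction drawn above between an anonymized and a pseudonymized dataset can be made concrete with a brief sketch. This is an illustrative toy example rather than a production anonymization technique; the record and the pseudonym "X" follow the hypothetical William Smith case in the text.

```python
# Toy illustration of anonymization vs pseudonymization (invented record).

record = {"name": "William Smith", "age": 43, "fact": "eats breakfast every day"}

def anonymize(record):
    """Strip identifying fields entirely; the subject cannot be recovered."""
    return {"fact": record["fact"]}

def pseudonymize(record, key_store, pseudonym):
    """Replace identifiers with a pseudonym; a separate key list maps back.
    The key list must be stored apart from, and more securely than, the data."""
    key_store[pseudonym] = {"name": record["name"], "age": record["age"]}
    return {"subject": pseudonym, "fact": record["fact"]}

key_store = {}  # in practice: held separately and access-controlled
anon = anonymize(record)      # no route back to the subject
pseud = pseudonymize(record, key_store, "X")  # re-identifiable via key_store
print(anon)
print(pseud)
```

The sketch also shows Ohm's trade-off in miniature: the anonymized record retains only the bare fact, while the pseudonymized record remains informationally richer precisely because the separate key list preserves a route back to the subject.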
Group Work It is impossible to guarantee the privacy of individuals when they are engaged in research which involves working face-to-face in groups. This follows as one cannot guarantee that every member of the group will respect the privacy of every other member. At the very least, each member of the group will know what the other members look like (except in very exceptional circumstances). As such, a guarantee of privacy should not be made to a research subject if the research involves them engaging with a group.
Conclusion In this chapter, we have looked at the role of privacy in research. In so doing, we have considered the history of privacy and contemporary debates regarding its meaning and value. We have looked at emerging areas of privacy concern related to developments in technology. We have also seen how the researcher can manage concerns regarding privacy, while noting that each of these approaches comes with its own associated problems.
References

Allen AL (1988) Uneasy access: privacy for women in a free society. Rowman & Littlefield, Totowa
Allen AL (1999) Privacy-as-data control: conceptual, practical, and moral limits of the paradigm. Conn Law Rev 32:861
Arthur C (2014) Facebook emotion study breached ethical guidelines, researchers say. The Guardian, 30 June 2014. http://www.theguardian.com/technology/2014/jun/30/facebook-emotion-study-breached-ethical-guidelines-researchers-say
Bauman Z, Lyon D (2012) Liquid surveillance: a conversation, 1st edn. Polity Press, Cambridge, UK/Malden
Benn S (1971) Privacy, freedom, and respect for persons. In: Pennock J, Chapman R (eds) Nomos XIII: privacy. Atherton Press, New York
Bentham J (1995) The Panopticon writings. Verso Books, London
Bishop L, Gray D (2017) Ethical challenges of publishing and sharing social media research data. In: Woodfield K (ed) The ethics of online research, vol 2. Emerald Publishing Limited, Bingley, pp 161–188
boyd d (2014) It's complicated: the social lives of networked teens, 1st edn. Yale University Press, New Haven
boyd d, Crawford K (2012) Critical questions for big data. Inf Commun Soc 15(5):662–679. https://doi.org/10.1080/1369118X.2012.678878
Boye K, Vowles RB (2002) Kallocain (trans: Lannestock G), New edn. The University of Wisconsin Press, Madison/London
Bradbury R (2008) Fahrenheit 451, 4th edn. Harper Voyager, London
Cadwalladr C, Graham-Harrison E (2018) Revealed: 50 million Facebook profiles harvested for Cambridge Analytica in major data breach. The Guardian, 17 March 2018. https://www.theguardian.com/news/2018/mar/17/cambridge-analytica-facebook-influence-us-election
DeCew JW (2000) The priority of privacy for medical information. Soc Philos Policy 17(2):213
Deleuze G (1990) Postscript on the societies of control. L'Autre Journal 1 (May)
Doyle T (2009) Privacy and perfect voyeurism. Ethics Inf Technol 11:181–189
Eggers D (2014) The circle. Penguin, London
Erwin A (2018) Attempt to access former IRA man's Boston College tapes "replete with errors" court told. The Irish Times, 16 January 2018. https://www.irishtimes.com/news/crime-and-law/attempt-to-access-former-ira-man-s-boston-college-tapes-replete-with-errors-court-told-1.3357750
Etzioni A (1999) The limits of privacy. Basic Books, New York
EU Parliament (2016) Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation)
Foucault M (1991) Discipline and punish: the birth of the prison, New edn. Penguin
248
K. Macnish
Funder A (2004) Stasiland: stories from behind the Berlin Wall, New edn. Granta Books, London
Gavison R (1984) Privacy and the limits of the law. In: Schoeman FD (ed) Philosophical dimensions of privacy. Cambridge University Press, Cambridge, pp 346–402
Gavison R (1992) Feminism and the public/private distinction. Stanford Law Rev 45(1):1–45
Hill K, Mattu S (2018) The house that spied on me. Gizmodo, 7 February 2018. https://gizmodo.com/the-house-that-spied-on-me-1822429852
Ienca M, Vayena E (2018) Cambridge Analytica and online manipulation. Scientific American Blog Network, 30 March 2018. https://blogs.scientificamerican.com/observations/cambridge-analytica-and-online-manipulation/
Inness JC (1996) Privacy, intimacy, and isolation, New edn. Oxford University Press, New York
Jernigan C, Mistree BFT (2009) Gaydar: Facebook friendships expose sexual orientation. First Monday 14(10). http://firstmonday.org/htbin/cgiwrap/bin/ojs/index.php/fm/article/view/2611/2302
Judges (n.d.) Holy Bible: New International Version, 3:12–30
Lever A (2005) Feminism, democracy and the right to privacy. SSRN Scholarly Paper ID 2559971. Social Science Research Network, Rochester. http://papers.ssrn.com/abstract=2559971
Locke JL (2010) Eavesdropping: an intimate history. Oxford University Press, Oxford
MacAskill E, Dance G, Cage F, Chen G, Popovich N (2013) NSA files decoded: Edward Snowden's surveillance revelations explained. The Guardian, 1 November 2013. http://www.theguardian.com/world/interactive/2013/nov/01/snowden-nsa-files-surveillance-revelations-decoded
Macnish K (2016) Government surveillance and why defining privacy matters in a post-Snowden world. J Appl Philos, May. https://doi.org/10.1111/japp.12219
Macnish K (2017) The ethics of surveillance: an introduction, 1st edn. Routledge, London/New York
Manson N, O'Neill O (2007) Rethinking informed consent in bioethics. Cambridge University Press, Cambridge
Matthew (n.d.) Holy Bible: New International Version, 6:6
McDonald H (2016) Boston College ordered by US court to hand over IRA tapes. The Guardian, 25 April 2016. http://www.theguardian.com/uk-news/2016/apr/25/boston-college-ordered-by-us-court-to-hand-over-ira-tapes
Moore A (2008) Defining privacy. J Soc Philos 39(3):411–428
Nathan DO (1990) Just looking: voyeurism and the grounds of privacy. Public Aff Q 4(4):365–386
Nissenbaum HF (2009) Privacy in context: technology, policy, and the integrity of social life. Stanford University Press, Stanford
Obama B (2016) Remarks by the president in precision medicine panel discussion. whitehouse.gov, 25 February 2016. https://obamawhitehouse.archives.gov/the-press-office/2016/02/25/remarks-president-precision-medicine-panel-discussion
Ohm P (2009) Broken promises of privacy: responding to the surprising failure of anonymization. UCLA Law Rev 57(6):1701–1777
Orwell G (2004) Nineteen eighty-four, New edn. Penguin Classics, London
Parent WA (1983) Privacy, morality and the law. Philos Public Aff 12(4):269–288
Polonetsky J, Tene O (2012) Privacy in the age of big data: a time for big decisions. Stanford Law Rev Online 64(February):63
Posner R (1984) An economic theory of privacy. In: Schoeman F (ed) Philosophical dimensions of privacy. Cambridge University Press, Cambridge, pp 333–345
Preston A (2014) The death of privacy. The Guardian, 3 August 2014. http://www.theguardian.com/world/2014/aug/03/internet-death-privacy-google-facebook-alex-preston
Rachels J (1975) Why privacy is important. Philos Public Aff 4(4):323–333
Regan PM (1995) Legislating privacy: technology, social values, and public policy. University of North Carolina Press, Chapel Hill
Roessler B, Mokrosinska D (eds) (2015) Social dimensions of privacy: interdisciplinary perspectives. Cambridge University Press, New York
Salmons J (2017) Getting to yes: informed consent in qualitative social media research. In: Woodfield K (ed) The ethics of online research, vol 2. Emerald Publishing Limited, Bingley, pp 111–136
Sampson F (2016) "Whatever you say...": the case of the Boston College tapes and how confidentiality agreements cannot put relevant data beyond the reach of criminal investigation. Policing J Policy Pract 10(3):222–231. https://doi.org/10.1093/police/pav034
Scanlon TM (1975) Thomson on privacy. Philos Public Aff 4(4):315–322
Solove DJ (2002) Conceptualizing privacy. Calif Law Rev 90(4):1087–1155
Squires J (1994) Private lives, secluded places: privacy as political possibility. Environ Plann D Soc Space 12(4):387–401. https://doi.org/10.1068/d120387
Tavani HT, Moor JH (2001) Privacy protection, control of information, and privacy-enhancing technologies. Comput Soc 31(1):6–11
Thomson JJ (1975) The right to privacy. Philos Public Aff 4(4):295–314
Tobitt C (2018) Observer's Carole Cadwalladr: I became a "news slave" in pursuing Cambridge Analytica data harvesting scoop. Press Gazette, 22 March 2018. http://www.pressgazette.co.uk/observers-carole-cadwalladr-i-became-a-news-slave-in-pursuing-cambridge-analytica-data-harvesting-scoop/
Townsend L, Wallace C (2017) The ethics of using social media data in research: a new framework. In: Woodfield K (ed) The ethics of online research, vol 2. Emerald Publishing Limited, Bingley, pp 189–206
Turkle S (2017) Alone together: why we expect more from technology and less from each other, 3rd edn. Basic Books
Wang Y, Kosinski M (2018) Deep neural networks are more accurate than humans at detecting sexual orientation from facial images. J Pers Soc Psychol 114(2):246–257
Warren SD, Brandeis LD (1890) The right to privacy. Harv Law Rev 4(5):193–220
Weinstein M (2013) Is privacy dead? Huffington Post (blog), 24 April 2013. https://www.huffingtonpost.com/mark-weinstein/internet-privacy_b_3140457.html
Westin AF (2003) Social and political dimensions of privacy. J Soc Issues 59(2):431–453. https://doi.org/10.1111/1540-4560.00072
Williams ML, Burnap P, Sloan L, Jessop C, Lepps H (2017) Users' views of ethics in social media research: informed consent, anonymity and harm. In: Woodfield K (ed) The ethics of online research, vol 2. Emerald Publishing Limited, Bingley, pp 27–54
Woodfield K, Iphofen R (2017) Introduction to volume 2: the ethics of online research. In: Woodfield K (ed) The ethics of online research, vol 2. Emerald Publishing Limited, Bingley, pp 1–12
Zamyatin Y (1993) We (trans: Brown C), New edn. Penguin Classics, New York
Biosecurity Risk Management in Research
14
Johannes Rath and Monique Ischi
Contents
Introduction ........................................................... 252
Current Status of Risk Management in Biosecurity Sensitive Research ..... 252
Toward a Comprehensive Biosecurity Risk Management Framework in Biosecurity Sensitive Research: Principles and Processes ..... 255
  Principles ........................................................... 256
  Processes ............................................................ 258
Conclusion ............................................................. 260
References ............................................................. 261
Abstract
Despite substantial attention over the last decade, risk management in biosecurity remains fragmented and non-standardized at the operational level. Fragmentation is often a result of the selective implementation of the various building blocks that together would constitute a comprehensive biosecurity risk management framework. For example, while most countries have adopted export control measures on biosecurity sensitive materials, additional key elements of such a comprehensive framework, like personnel security and information security, are often not addressed. Furthermore, risk perception varies among stakeholders, and international agreement on the adequate level of risk management (and sometimes even on the need for it) is missing, contributing to the heterogeneity of standards currently applied to biosecurity sensitive research. For example, some countries, like the USA, have opted for stringent stand-alone biosecurity legislation that also covers research, while others, like Germany, operationalize biosecurity primarily through integration into biosafety risk management frameworks. Furthermore, in light of inconsistent, incomplete, and/or missing legal guidance, individual and collective responsibility-based risk management frameworks have been proposed by the scientific community. These self-governance attempts have resulted in a plethora of approaches, ranging from simple awareness-raising concepts to individual self-censorship of research publications. This chapter highlights some of the challenges in governing biosecurity sensitive research. Key principles and processes constituting a comprehensive biosecurity risk management framework in line with international risk management standards are outlined and discussed.

J. Rath (*) · M. Ischi
Department Integrative Zoology, University of Vienna, Vienna, Austria
e-mail: [email protected]

© Springer Nature Switzerland AG 2020
R. Iphofen (ed.), Handbook of Research Ethics and Scientific Integrity, https://doi.org/10.1007/978-3-030-16759-2_10

Keywords
Risk management · Biosecurity · Dual use · ISO 31000
Introduction

Warren Buffett is credited with once having said that "Risk comes from not knowing what you're doing." Today, mitigating uncertainty has become the key element in risk management, with the updated risk management standard ISO 31000:2018 (International Organisation for Standardization 2018) defining risk as the "effect of uncertainty on objectives." The evolution of risk management as a stand-alone discipline over recent decades has led to a maturation and consolidation of terms, general concepts, and principles, most recently summarized in the "Risk Management – Guidelines" issued by the International Organisation for Standardization (ISO 31000:2018). ISO 31000 outlines a comprehensive risk management framework building on a structured approach to risk assessment, risk treatment, risk monitoring/review, risk recording/reporting, and risk communication/consultation.
Current Status of Risk Management in Biosecurity Sensitive Research

Although risk management is critical to establishing and maintaining security, biosecurity risk management in research is still in its infancy. The recent failure to proactively address the H5N1 gain-of-function biosecurity controversy was a consequence of inadequate risk management frameworks (Becker 2012). The attempt by the Dutch government to invoke export control legislation as a means to address existing information security deficiencies in research highlighted the dilemma many countries are in (Enserink 2012). Systematic personnel security and information security measures for biosecurity sensitive agents and information are rarely implemented today. Inconsistent national (Arnason 2017), institutional (Patrone et al. 2012), and individual attitudes toward managing biosecurity risks have led to an enigmatic array of approaches, and these inconsistencies create highly problematic loopholes.
14
Biosecurity Risk Management in Research
253
For example, export control legislation is applied to restrict access, outside the EU, to technologies, materials, and knowledge that can be used for both civil and military purposes (Aubin and Idiart 2011). Uncovering and analyzing the A. Q. Khan network (Corera 2009) has shown that a risk management approach focussing solely on export controls is ineffective without additional controls that restrict access to dual-use technologies inside countries as well. Critical limitations in the risk management of biosecurity sensitive research are:

(a) Lack of Common Terminology

A common terminology is a critical prerequisite for any meaningful risk management approach. Currently, standard definitions are missing. Different terms are used to describe similar and often overlapping risks (e.g., dual use, dual-use research of concern (DURC), biorisk), while the same term is used to describe unrelated concepts (e.g., the term biosecurity describes control measures against the development and use of bioweapons, but is also used for infectious animal disease control measures). ISO Guide 73: Risk Management Vocabulary provides a set of generic risk management terms that is a first step toward a standard vocabulary for managing risks in biosecurity sensitive research.

(b) Governance Structures, International Coordination, Responsibilities, and Accountabilities

Numerous non-binding codes and guidelines have been developed, or are under development, by and for a variety of stakeholders (Table 1), with varying mandates, objectives, and scope, ranging from generic awareness-raising codes of ethics to more detailed practical guidelines. Depending on the individual code or guideline, the role given to individual or collective self-governance of the scientific community varies.
One source of this variation relates to different cultures of governing scientific research. Self-governance of research activities by researchers has a long tradition in certain disciplines, like medicine, and in certain geographical locations (e.g., the USA), whereas in central Europe, for example, governance by law is more prevalent, leaving less space for self-governance. Governing biosecurity risks in research through ethics has also been suggested, by including biosecurity in concepts of responsibility and accountability (World Health Organisation 2010). A relevant approach in this context is the systematic inclusion of biosecurity concerns in the Ethics Appraisal Framework of Horizon 2020 (http://ec.europa.eu/research/participants/data/ref/h2020/other/hi/guide_research-misuse_en.pdf). In addition, various countries have opted for legal instruments to govern biosecurity concerns (e.g., the US Select Agent Regulations, the EU dual-use export control legislation). The area of biosecurity lacks international coordination comparable to that in the chemical and nuclear areas. In contrast to the Chemical Weapons
Table 1 Examples of codes of conduct in biosecurity and dual use in life sciences

Title | Sponsor | Year | Internet link
Declaration of Washington on biological weapons | World Medical Association, WMA | 2003 | http://www.wma.net/en/30publications/10policies/b1/index.htm
EuropaBio's Core Ethical Values | The European Association for Bioindustries | 2016 | https://www.europabio.org/sites/default/files/Final%20EuropaBio%20Core%20Ethical%20Values%20-%202016%20version.pdf
IUMS Code of Ethics | International Union of Microbiological Societies | 2008 | https://www.iums.org/index.php/code-of-ethics
Guidelines for researchers on dual use and misuse of research | Working Group Dual Use of the Flemish Interuniversity Council | 2017 | https://www.uhasselt.be/documents/DOC/2017VLIR003_FolderOnderzoek_EN_DEF_20180212.pdf
IAP Statement on Biosecurity | InterAcademy Panel on International Issues | 2005 | http://www.interacademies.net/File.aspx?id=5401
Biotechnology, Weapons and Humanity: ICRC outreach to the life science community on preventing hostile use of the life sciences | International Committee of the Red Cross | 2004 | https://www.icrc.org/eng/assets/files/other/icrc_002_0833.pdf
Tools for the Identification, Assessment, Management, and Responsible Communication of Dual Use Research of Concern | National Institutes of Health | 2014 | https://www.phe.gov/s3/dualuse/Documents/durc-companion-guide.pdf
A code of conduct for biosecurity | Royal Netherlands Academy of Arts and Sciences | 2009 | file:///C:/Users/HP/Downloads/20071092.pdf
Biosafety and biosecurity: Standards for Managing Biological Risks in the Veterinary Laboratory | OIE | 2015 | http://www.oie.int/fileadmin/Home/eng/Health_standards/tahm/1.01.04_BIOSAFETY_BIOSECURITY.pdf
OECD Best Practice Guidelines for Biological Resource Centres | OECD | 2007 | http://www.oecd.org/sti/emerging-tech/38777417.pdf
Statement on dual-use research of concern and research misuse | BBSRC, MRC and Wellcome Trust position | 2014 | https://wellcome.ac.uk/sites/default/files/wtp059491.pdf
Convention and the Non-Proliferation Treaty, the Biological Weapons Convention (https://www.un.org/disarmament/wmd/bio/), the key international treaty to address biosecurity concerns, is left without a technical organization that carries out international verification and monitoring activities or assists and coordinates international guideline development in biosecurity.
The only widely used document available at the international level is the "Biorisk Management: Laboratory Biosecurity" guidance issued by the World Health Organisation (2006). Although a respectable attempt at the time, the document is often generic and offers limited conceptual clarity, which limits its practical relevance. For example, in defining its scope, the guidance introduces and defines the term "valuable biological material." This definition, however, is far too broad and is not coordinated with other guidelines (e.g., in the dual-use context), making it challenging to define perimeters for biosecurity risk management. Furthermore, no comprehensive and structured risk management framework is provided, and individual risk mitigation measures such as information security, although helpful, are insufficiently detailed to enable operational application. Since then, the WHO has shifted its focus to an integrated safety-security risk management approach; however, it has provided no further guidance on how such an integrated framework could be operationalized (World Health Organisation 2010). Furthermore, an initiative to transform two CWA standards into an ISO standard on biorisk management is still ongoing (International Standardization Organisation 2018). Of note, this lack of international coordination also hampers agreement on a common terminology, as discussed above.

(c) Lack of Conceptual Clarity: Biosecurity versus Biorisk Management

Countries implementing biosecurity legislation and guidance have adopted two different approaches. Some, like the USA (National Research Council 2010), have issued stand-alone biosecurity legislation, while others, like Germany, have aimed at integrating biosecurity and biosafety (Bielecka and Mohammadi 2014). Both approaches have pros and cons.
However, these inconsistencies make it difficult to develop consistent risk treatment outcomes, as the scope and objective of the whole risk management process differ between the two approaches.
Toward a Comprehensive Biosecurity Risk Management Framework in Biosecurity Sensitive Research: Principles and Processes

A variety of risk management frameworks exist across disciplines. ISO 31000:2018 is unique in being generic, comprehensive, and principle-based. As such, it is suitable not only for integrating biosecurity risk management into the overall risk management framework of an organization but also for integrating related objectives, like biosafety and public health, into one risk management process. Critical pillars in the implementation of ISO 31000:2018-compliant risk management frameworks are observation of its principles and adherence to a structured risk management process.
Principles

Risk management in ISO 31000:2018 is guided by principles. The following points highlight some of these principles and how they relate to the risk management of biosecurity sensitive research.
Value Creation

Security in general is seen as a public good, so enhancing security through the risk management of biosecurity sensitive research creates value. However, security is framed in different ways, and whether it is addressed as national, military, civil, or human security has a significant impact on the scope of the risk management (Rath et al. 2014), on who the stakeholders are, and on what roles they play in the risk management process. Depending on the framing of security, individuals (e.g., researchers), private organizations (e.g., universities, funding institutions, publishers), and public institutions (e.g., export control agencies, police, military, public health institutions) will take on different roles and responsibilities in risk assessment, treatment, monitoring, and communication. Problems arise because these stakeholders often do not share the same value system. For example, medical researchers might be much more willing to define security within the framework of health security, whereas law enforcement might be more familiar with the concept of civil security. This generates challenges in developing a common understanding among stakeholders of how, when, and where risk management of biosecurity sensitive research is a value-creating process.

Integral Part of Organizational Processes

In contrast to established risk management frameworks in biological and medical research (e.g., biosafety, ethics), biosecurity is often not well integrated into organizational processes. Critical external stakeholders (e.g., export control authorities, national advisory boards like the NSABB, law enforcement agencies, the military) act outside the established organizational processes (e.g., proposal writing, funding application, conduct of research, publishing, patent application) and structures (e.g., research institutes, funding agencies, publishers, patent offices) of research.
Decisions by such outside bodies may be inconsistent with internal policies and organizational structures, as no common, structured, and consistent risk management framework is shared between internal and external stakeholders. Enhancing risk communication and consultation between internal and external stakeholders (e.g., through expert advisory groups) and establishing internal organizational structures that facilitate communication and consultation (e.g., an institutional biosecurity officer or board, ethics and scientific review committees) would improve the integration of biosecurity into the organizational processes of research institutions.

Part of Decision-Making, Timeliness

Organizational processes are often not established that would ensure the availability of practical biosecurity risk management expertise (e.g., access to a biosecurity expert)
throughout the whole research cycle. The beginning of a research activity is an especially critical moment for risk management, as at this early stage a large repertoire of risk treatment options, including the option not to do the research at all, still exists. Due to a lack of awareness and expertise at research and funding institutions, biosecurity considerations usually do not become part of decision-making. Later recognition of biosecurity concerns, e.g., at the publication stage, reduces the number of available options (e.g., censorship or publication restriction) to mitigate risks. Making biosecurity risk management part of the decision-making process early in the research life cycle would allow for less intrusive and better tailored risk management, because a wider range of risk treatment options remains available. On a practical level, the H5N1 gain-of-function controversy highlighted the need for early engagement in biosecurity, preferably at the research conception and funding evaluation stage. It also highlighted the importance of funding institutions as stakeholders and the possibility of introducing legally enforceable safeguards through the funding contract.
Based on Best Information Available

Access to security information is restricted, and without such access the level of uncertainty increases substantially, making risk management highly challenging. The threat element in biosecurity risk management is far more difficult to assess for individuals (e.g., researchers) outside the security community, and even within the security community substantial uncertainty often exists about the plausibility of a threat scenario. Developing plausible threat scenarios is challenging given the dynamic environment in which, for example, terrorism unfolds. Risk management therefore often focusses on assessing vulnerabilities (e.g., known weaknesses in public health toward certain agents) and mitigating consequences (e.g., vaccinations). Improving access to information (without at the same time compromising security) would allow researchers to make more realistic threat assessments and would improve the acceptance and relevance of risk management in biosecurity.

Customized to the Specific Environment

Research is a very particular environment for risk management, and risk treatment needs to take into account the large uncertainties inherent in scientific experiments. Managing such uncertainties through an iterative approach, gradually moving from low-risk experiments to higher-risk levels, is often recommended. In addition, customization often becomes challenging, as critical risk mitigation measures such as information or personnel security are not established at research institutions.

Takes Human and Cultural Factors into Account

Established human and cultural factors in research are challenging when it comes to biosecurity risk management. The openness with which universities handle information access, material transfers, and the (international) mobility of personnel is challenging, and often prohibitive, for any attempt to establish information or
personnel security measures. Furthermore, legal frameworks like export control regimes have generic exemption clauses for fundamental or basic research to account for the specific cultural factors in research. From a risk management perspective, however, such exemptions limit the available risk treatment options, and it is not clear why security risks arising from fundamental research should be handled differently from those arising from applied research or innovation activities.
Dynamic and Responsive to Change

Biosecurity risk management in research needs to be highly dynamic and responsive to change. Two drivers necessitate such a dynamic approach. The first is the specific nature of the threat, especially with regard to non-state actors: threat scenarios involving terrorist and criminal organizations engaging with biological weapons are constantly changing. Second, research itself constantly modifies the risk environment through the creation of new vulnerabilities (e.g., the creation of novel pathogenic agents) or the development of new risk treatment approaches (e.g., new prophylactic and treatment options). To account for such innovation, a dynamic and iterative approach to risk management is needed.

Systematic

Biosecurity risk management in research has been driven by reactions to crises, whether the Amerithrax case (McQueen 2014) or the politicizing of the dual-use dilemma during the gain-of-function discussion (Hunter 2012; Koblentz and Klotz 2018). Reactive risk management measures are hardly ever comprehensive or developed systematically; they focus on addressing case-specific shortcomings. Comprehensive risk management frameworks for biosecurity in research that would allow for a systematic approach are still missing.

Structured

To ensure consistency and comprehensiveness, biosecurity risk management needs to follow a structured approach with a preset logic. ISO 31000:2018 structures the process of risk management into six elements: scope, risk assessment, risk treatment, risk recording and reporting, risk communication and consultation, and risk monitoring and review (Fig. 1). All of these elements need to be implemented in a structured biosecurity risk management framework.
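The structured, iterative character of such a process can be sketched as a simple control loop. The sketch below is our own schematic, not an ISO 31000 artifact; every function name and the toy inputs are invented placeholders, and in a real framework communication and consultation would accompany each step rather than sit outside the loop.

```python
def risk_management_cycle(scope, identify, treat, monitor):
    """One iteration of an ISO 31000-style process: establish scope/context,
    assess risks, treat them, and monitor/review, recording the whole pass
    so it can be reported and fed into the next iteration."""
    record = {"scope": scope}                                   # context
    record["risks"] = identify(scope)                           # risk assessment
    record["treatments"] = [treat(r) for r in record["risks"]]  # risk treatment
    record["review"] = monitor(record["treatments"])            # monitoring/review
    return record                                               # recording/reporting

# Toy inputs: a biosecurity sensitive project with two identified risks.
report = risk_management_cycle(
    scope="gain-of-function study",
    identify=lambda s: ["novel pathogenic agent", "sensitive methods section"],
    treat=lambda risk: (risk, "apply information/personnel security controls"),
    monitor=lambda treatments: f"{len(treatments)} treatments scheduled for review",
)
```

The point of the loop structure is the chapter's "dynamic and iterative" requirement: the review output of one cycle becomes input to the next identification pass as research modifies the risk environment.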
Processes

Establishing the Context and Defining Objectives

Context and perimeter definition in biosecurity risk management has been inconsistent and can lead to confusion (Rath et al. 2014). Risk perimeters are often defined very narrowly. For example, framing the H5N1 gain-of-function risks solely as an information security risk, with publication restrictions as the critical risk mitigation measure, misses other risk mitigation options. In this case, risk
[Fig. 1 depicts the framework as follows: the scope/context (civil, human, and military security) feeds a risk assessment based on a threat-vulnerability-consequence model (risk identification, risk analysis, risk estimation), followed by risk treatment (physical security, information security, personnel security, material controls), all embedded in risk monitoring and review, risk communication and consultation, and risk recording and reporting.]

Fig. 1 Comprehensive biosecurity risk management framework based on ISO 31000
management should also have taken into account physical and personnel security measures as well as experimental changes, such as the use of molecular containment systems. Focussing on narrowly defined risk perimeters also poses challenges for developing proportionate risk mitigation measures, because it fails to acknowledge the interconnectivity of risk management measures taken in biosecurity with areas like public health and biosafety (e.g., information restrictions on valuable disease information; see Rath 2014).
Risk Assessment

Within risk assessment, the first step, risk identification, is critical. Biosecurity risks, whether located at the threat, vulnerability, or consequence level, can only be identified if the relevant knowledge is included in the process. Key stakeholders in biosecurity risk identification are usually the researchers, the research institutions, and the research funding institutions. These actors will have a detailed understanding of the proposed research activity. If the knowledge needed to identify risks from biosecurity-sensitive research is missing at the level of these stakeholders, timely biosecurity risk management will not take place. Once risks are identified, risk analysis and risk evaluation should take place to define the level of risk. Both are challenging in the context of biosecurity, due to the already mentioned high levels of uncertainty.
J. Rath and M. Ischi
Risk Treatment

Treatment of biosecurity-sensitive research risks builds on a variety of treatment options and can focus on personnel security (e.g., security clearance levels), information security (e.g., classification of information), physical security (e.g., effective perimeter control and locked storage), transfer security (e.g., providing access restrictions and controls during material transfers), and material controls (e.g., keeping detailed inventories). Risk avoidance, by not starting, continuing, or funding a certain research activity, is also an option and needs to be evaluated against the loss of potential benefits. Such benefits from biosecurity-sensitive research can be significant, for example, in the areas of public health and biodefence (Selgelid 2016).

Risk Recording and Reporting

Recording and responsible reporting of biosecurity risks in research is challenging. Research is often built on the concept of free knowledge communication, and initiatives to foster the free flow of information (e.g., open access) are actively promoted. Unrestricted reporting of risks and vulnerabilities in security, however, might further increase the risks. Therefore, alternative ways of reporting risks of biosecurity-sensitive research should be evaluated (e.g., temporary classification, or access to information only for individuals holding relevant personnel clearances).

Risk Communication and Consultation

Researchers often tend to exaggerate risks, as doing so may support their research agenda, and in the past biosecurity risks have often been communicated through worst-case scenarios. As a consequence, responsible risk communication to the non-expert community (e.g., media, politicians, and lay people) has become a challenge. Risk consultation, for example, the inclusion of biosecurity experts in the risk management process to improve decision-making, is not common and should be further increased.
This can be done through the nomination of expert advisors but also by establishing advisory panels and boards.

Risk Monitoring and Review

Finally, continuous monitoring and review of biosecurity risks is usually not foreseen, due to the lack of competent monitoring units. Exceptions exist in areas where biosecurity is integrated into biosafety and adequate biosafety oversight structures have been established, or in certain legal contexts. Nonetheless, since biosecurity risks are highly dynamic, routine monitoring to support an iterative approach to risk management is important.
Conclusion

The use of biological agents for malevolent purposes and as a weapon of mass destruction is a serious and real threat in current times. In no other area of weapons of mass destruction does research play such a dominant role in creating
and managing these risks. Current risk management principles and processes applied to biosecurity-sensitive research are inconsistent, unstructured, and non-comprehensive. The aim of this chapter was not to establish a new methodology from scratch but rather to build on an existing state-of-the-art risk management standard. Introducing ISO 31000:2018 to the management of risks from biosecurity-sensitive research would ensure a consistent, structured, and comprehensive approach to the management of biosecurity risks in research.
References

Arnason G (2017) Synthetic biology between self-regulation and public discourse: ethical issues and the many roles of the ethicist. Camb Q Health Ethics 26(2):246–256. https://doi.org/10.1017/S0963180116000840
Aubin Y, Idiart A (2011) Export control law and regulations handbook: a practical guide to military and dual-use goods trade restrictions and compliance. Kluwer Law International, Netherlands
Becker G (2012) The "H5N1 publication case" and its conclusions. Acta Biochim Pol 59(3):441–443
Bielecka A, Mohammadi AA (2014) State-of-the-art in biosafety and biosecurity in European countries. Arch Immunol Ther Exp 62(3):169–178. https://doi.org/10.1007/s00005-014-0290-1
Corera G (2009) Shopping for bombs: nuclear proliferation, global insecurity, and the rise and fall of the A.Q. Khan network. Oxford University Press, Oxford
Enserink M (2012) Will Dutch allow 'Export' of controversial flu study? Science 336(6079):285. https://doi.org/10.1126/science.336.6079.285
Hunter P (2012) H5N1 infects the biosecurity debate: governments and life scientists are waking up to the problem of dual-use research. EMBO Rep 13(7):604–607
International Organisation for Standardization (2018) ISO 31000:2018 risk management – guidelines. Geneva. https://www.iso.org/standard/65694.html
International Organisation for Standardization (2018) ISO 35001: biorisk management for laboratories and other related organizations. https://www.internationalbiosafety.org/index.php/news-events/news-menu/news-items/571-iso-35001-biorisk-management-for-laboratories-and-other-related-organizations
Koblentz GD, Klotz LC (2018) New pathogen research rules: gain of function, loss of clarity. Bull At Sci. https://thebulletin.org/2018/02/new-pathogen-research-rules-gain-of-function-loss-of-clarity/
McQueen G (2014) Terror and the patriot act of 2001, implemented in the immediate wake of 9/11. Global Research. https://www.globalresearch.ca/terror-and-the-patriot-act-of-2001-implemented-in-the-immediate-wake-of-911/5400910
National Research Council (2010) Responsible research with biological select agents and toxins. National Academies Press, Washington, DC
Patrone D, Resnik D, Chin L (2012) Biosecurity and the review and publication of dual-use research of concern. Biosecur Bioterror 10(3):290–298. https://doi.org/10.1089/bsp.2012.0011
Rath J (2014) Rules of engagement. EMBO Rep 15(11):1119–1122
Rath J, Ischi M, Perkins D (2014) Evolution of different dual-use concepts in international and national law and its implications on research ethics and governance. Sci Eng Ethics 20(3):769–790
Selgelid MJ (2016) Gain-of-function research: ethical analysis. Sci Eng Ethics 22(4):923–964
World Health Organization (2006) Biorisk management: laboratory biosecurity guidance. WHO/CDS/EPR/2006.6
World Health Organization (2010) Responsible life sciences research for global health security: a guidance. WHO/HSE/GAR/BDP/2010.2. http://apps.who.int/iris/bitstream/handle/10665/70507/WHO_HSE_GAR_BDP_2010.2_eng.pdf?sequence=1
15 Benefit Sharing: Looking for Global Justice
Doris Schroeder
Contents

Introduction and Background
Benefit Sharing as Fairness-in-Exchange
  The Convention on Biological Diversity
  The Nagoya Protocol
  The WHO Pandemic Influenza Preparedness (PIP) Framework
  Declaration of Helsinki: Post-Trial Obligations
Benefit Sharing as Distributive Fairness
  The Human Genome Project's Ethics Committee Statement on Benefit Sharing
  The UNESCO Universal Declaration on Bioethics and Human Rights
Combining Fairness-in-Exchange and Distributive Fairness?
  The Global Code of Conduct for Research in Resource-Poor Settings
Benefit Sharing: Open Questions and Solutions?
  Three Elephants in the Fairness-in-Exchange Room
  A Partner for the Distributive Fairness Approach
References
Abstract
Research cannot be done by researchers alone. In most cases, additional resources are required, including human research participants, access to biodiversity for biological and genetic resources, or traditional knowledge. Benefit sharing has been part of global conventions and international ethics guidelines for over 25 years, predicated on the understanding that those who contribute to the research process and its outcomes should share in the benefits as a matter of fairness. This chapter explores the different understandings of benefit sharing in a historical context, from the "Grand Bargain" of the Convention on Biological Diversity in 1992 to the Global Code of Conduct for Research in Resource-Poor Settings in 2018, and examines the contemporary potential for the UN Sustainable Development Goals (Agenda 2030) to facilitate benefit sharing. The discussion provides guidance to researchers, through examples and short case studies, on how to discharge the obligations of benefit sharing effectively and fairly, in pursuit of research integrity.

D. Schroeder (*)
School of Health Sciences, University of Central Lancashire, Preston, UK
e-mail: [email protected]

© Springer Nature Switzerland AG 2020
R. Iphofen (ed.), Handbook of Research Ethics and Scientific Integrity, https://doi.org/10.1007/978-3-030-16759-2_11

Keywords
Benefit sharing · Global justice · Genetic resources · Research ethics · Research integrity · Convention on Biological Diversity · Vulnerable populations · Fairness · Global Code of Conduct for Research in Resource-Poor Settings · PIP Framework
Introduction and Background

Benefit sharing can be understood in two ways. First, it can be related to fairness-in-exchange, when several parties are involved in an interaction and the benefits derived from the interaction are distributed fairly. Second, it can be related to distributive fairness, where, for instance, the benefits of scientific progress are shared fairly among potential recipients, whether or not they were involved in the research process. Aristotle gives examples of fairness-in-exchange which are as valid today as they were almost 2,500 years ago, namely, the acquisition and sale of goods, loans, bonds, and rents (Aristotle 2004: 1130b–1131a). The development and acquisition of goods and services is the main field where benefit sharing is relevant to research. Instances of fairness-in-exchange usually involve individual agents, such as researchers (from universities or private companies) or entrepreneurs, and those who contribute to their activities. States are only involved indirectly, through their legislative powers to influence theoretical understandings and practical realizations of fairness. By contrast, distributive fairness is usually understood as a question of the relationship between the state and its citizens. A typical distributive fairness question would be: "How should benefits derived from taxation be distributed between the privileged and the underprivileged?" John Rawls (1999: 65) gave one of the most famous twentieth-century answers to this question:

The higher expectations of those better situated are just if and only if they work as part of a scheme which improves the expectations of the least advantaged members of society.
Benefit sharing to achieve fairness-in-exchange will be explained with reference to the Convention on Biological Diversity (1992), the Nagoya Protocol on Access and Benefit Sharing (2010), the World Health Organization's PIP Framework (2011a), and the Declaration of Helsinki (2013). Benefit sharing to achieve distributive fairness will be explained with reference to the Human Genome Project's Ethics Committee Statement on Benefit Sharing (2000) and the UNESCO Universal Declaration on Bioethics and Human Rights (2005). Where the two converge will be illustrated with the Global Code of Conduct for Research in Resource-Poor Settings (2018). The following table gives an overview of the relevant legal and ethical instruments, grouped according to the type of benefit sharing covered (Table 1).
Table 1 Overview of main legal and ethical instruments for benefit sharing

Benefit sharing as fairness-in-exchange:
• Convention on Biological Diversity, 1992
• CBD-linked legislation: Nagoya Protocol, 2010
• National laws: Biological Diversity Act (India), 2002; Biodiversity Act (South Africa), 2004
• WHO Pandemic Influenza Preparedness (PIP) Framework, WHO 2011a
• Declaration of Helsinki, 2013

Benefit sharing as distributive fairness:
• Council of Europe's Convention on Human Rights and Biomedicine, 1997 ("Affirming that progress in biology and medicine should be used for the benefit of present and future generations")
• Human Genome Project's Ethics Committee Statement on Benefit Sharing, 2000
• UNESCO Universal Declaration on Bioethics and Human Rights, 2005
• CIOMS International Ethical Guidelines for Health-Related Research Involving Humans, 2016
• Global Code of Conduct for Research in Resource-Poor Settings, 2018
Benefit Sharing as Fairness-in-Exchange

To explain the main features of benefit sharing as fairness-in-exchange, three binding international legal instruments (the CBD, its Nagoya Protocol, and the PIP Framework) and one non-binding international ethics guideline (the Declaration of Helsinki) will be introduced.
The Convention on Biological Diversity

Human destruction of nature is rapidly eroding the world's capacity to provide food, water and security to billions of people. . . Such is the rate of decline that the risks posed by biodiversity loss should be considered on the same scale as those of climate change. (Watts 2018)
Twenty-six years before this assessment, a United Nations conference of unparalleled size and scope was held in Rio de Janeiro in 1992. What became known as the "Earth Summit" provided a platform for launching the Convention on Biological Diversity (CBD). The CBD recognized that the conservation of biodiversity is a common concern of humankind. The legally binding convention has 196 parties (all countries in the world are signatories except for the USA and the Vatican) and three major objectives:

• The conservation of biological diversity
• The sustainable use of its components
• The fair and equitable sharing of benefits from the use of genetic resources
The first objective relates to the common interest of humankind, namely, to deal with the serious loss of biodiversity and its potential implications for global ecological functions as well as future uses. According to scientists, the extinction of biodiversity currently proceeds at a rate not experienced since the loss of the dinosaurs over 65 million years ago. It is estimated that 1,000–10,000 times more species are lost today through human action than would be lost naturally (Chivian and Bernstein 2008). Such biodiversity loss risks "burning the library of life," as biodiversity can be seen as the knowledge acquired by species over many millions of years to survive in vastly different terrains (Carrington 2018). Hence, the main aim of the CBD is also its most pressing.

The sustainable use of biodiversity components (the second objective of the CBD) becomes increasingly restricted as a result of biodiversity loss, including restrictions on scientific or commercial endeavors or on supporting human livelihoods. Hence, if the first objective were achieved, the second objective would only add the request to use biodiversity sustainably from now on.

The third objective explains why the CBD has also been called the "Grand Bargain" (ten Kate and Laird 2000) between low- and middle-income countries (LMICs) on the one hand and high-income countries (HICs) on the other. Until the CBD was adopted, access to resources had mostly been on a first-come-first-served basis. Another formulation for this was the concept of the common heritage of humankind. Vandana Shiva described this approach critically as follows:

The North has always used Third World germplasm as a freely available resource and treated it as valueless. The advanced capitalist nations wish to retain free access to the developing world's storehouse of genetic diversity, while the South would like to have the proprietary varieties of the North's industry declared a similarly 'public' good. (Shiva 1991)
If LMICs were meant to protect their biodiversity (objective 1) to enable scientific and commercial use (objective 2), the first-come-first-served approach for access to resources had to stop. The CBD therefore required that plants, animals, microorganisms, and related traditional knowledge fall under the sovereignty of nation states and that access to and use of them be governed by specific rules, in particular prior informed consent for access and “fair and equitable sharing of benefits” related to their use (objective 3). How fair and equitable sharing of benefits for nonhuman genetic resources could be realized was such a challenging question that an associated protocol was painstakingly negotiated: the Nagoya Protocol.
The Nagoya Protocol

The Nagoya Protocol on Access to Genetic Resources and the Fair and Equitable Sharing of Benefits Arising from their Utilization to the Convention on Biological Diversity, also known as the Nagoya Protocol on Access and Benefit Sharing, is a supplementary agreement to the CBD, adopted in 2010. One of the main achievements of the Nagoya Protocol is its detailed list of possible benefit sharing measures. The list includes monetary and nonmonetary benefits; the following table summarizes the main ones (Table 2).
Table 2 Overview of benefit sharing examples from the Nagoya Protocol*

Monetary benefits:
• Access fees per sample collected
• Payment of royalties or license fees
• Research funding
• Joint ventures
• Joint ownership of intellectual property rights

Nonmonetary benefits:
• Collaboration in scientific research
• Technology transfer under fair and most favorable terms
• Institutional capacity building
• Research directed toward priority needs
• Food and livelihood security benefits

*For the full list, see the Nagoya Protocol (2010) pp. 24–25
One hundred and ninety-three countries agreed the Protocol after 6 years of challenging negotiations. These "years of intense, complex and fractious talks . . . frequently pitted developed countries against developing countries, and providers of genetic resources against users of those resources" (Andanda et al. 2013). One disagreement between LMICs and HICs concerned the original exclusion of human genetic resources from the CBD (Chaturvedi et al. 2013); early on in the CBD negotiations, it had been decided that human biological resources were to be excluded from its scope. While the Nagoya Protocol was being negotiated, a major crisis about access to human materials emerged between Indonesia and the World Health Organization (WHO). In the end, the disagreement led to the PIP Framework (2011) (see below). But prior to the PIP Framework's adoption, some senior LMIC policymakers and advisors argued instead for an expansion of the CBD to include human genetic resources (Chaturvedi et al. 2013).

When one looks at the arguments brought forward by the Indonesian government in the dispute with the WHO about human genetic samples, one can see why Indonesia was concerned about justice issues:

Disease affected countries, which are usually developing countries, provide information and share biological specimens/virus with the WHO system; then pharmaceutical industries of developed countries obtain free access to this information and specimens, produce and patent the products (diagnostics, vaccines, therapeutics or other technologies), and sell them back to the developing countries at unaffordable prices. Although it is general knowledge that this practice has been going on for a long time for other major communicable diseases – not just for avian influenza – the fear of potential pandemic influenza has magnified this gap. (Sedyaningsih et al. 2008)
The exploitative spirit of the first-come-first-served approach to accessing genetic resources makes no distinction between types of resources and their origins. The drafters of the Nagoya Protocol therefore decided to add a nod toward human resources in the introduction that precedes the main text:

Mindful of the International Health Regulations (2005) of the World Health Organization (WHO) and the importance of ensuring access to human pathogens for public health preparedness and response purposes.
As a result, the Nagoya Protocol includes a reference to regulations which govern human genetic resources, something the CBD does not.
The WHO Pandemic Influenza Preparedness (PIP) Framework

In the mid- to late 2000s, the dispute between the WHO and Indonesia was reminiscent of the "Grand Bargain" debates of the CBD, focusing on the first-come-first-served-style exploitation of LMICs (Box 1). Indonesia argued that it provided avian flu samples to the WHO for vaccine development, but the benefits and the resulting vaccines stayed in high-income countries.

Box 1 Exploitation
Robert Mayer (2007) developed a formulation of exploitation that is highly appropriate for discussions of benefit sharing:

Exploitation is the failure to benefit another as fairness requires, thereby obtaining wrongful gain.
In this dispute, the WHO stressed countries’ responsibilities to share their specimens or viruses without imposing “agreements or administrative procedures that may inhibit the proper functioning of the . . . [system], including in particular the timely sharing of material and information” (Sedyaningsih et al. 2008: 486; WHO 2007). By contrast, Caplan and Curry (2007) were sympathetic to Indonesian efforts to prevent exploitation and noted: Indonesia is basically correct: pandemic vaccines that are in development and early testing are largely already obligated by contract to a limited group of national governments. That list does not include Indonesia or developing nations in general.
Indonesia’s refusal to cooperate with a system it deemed unfair catalyzed an effort to develop a new framework for global virus sharing. In 2011, after 4 years of negotiations, the WHO’s Open-Ended Working Group of Member States on Pandemic Influenza Preparedness (PIP) reached agreement on the new framework for influenza virus sharing. The PIP Framework was ratified by the World Health Assembly in May 2011, and it recognizes the “sovereign right of States over their biological resources” (WHO 2011b). The formulation was chosen in line with the CBD, even though human resources are excluded from the latter. The PIP Framework has two aims (WHO 2017): • Sharing of influenza viruses that could cause a pandemic • Access to capacity development and products such as vaccines One can see here how the “Grand Bargain” of PIP operates. Aim 1 promotes access to resources, in line with objective 2 of the CBD (the sustainable use of its [biodiversity’s] components). Aim 2 is the result of bargaining between HICs and LMICs, prompted by legitimate concerns over exploitation. This mirrors objective
[Fig. 1 Structure of fairness-in-exchange models of benefit sharing]
3 of the CBD (the fair and equitable sharing of benefits from the use of genetic resources). Interestingly, the PIP Framework does not, as the CBD does, frame its overall aim (securing global public health through an effective fight against viruses) as part of wider objectives. But if it did, the structure of the fairness-in-exchange efforts would be identical (see Fig. 1).

How does the PIP Framework ensure its aims are achieved? In line with the CBD approach to accessing biological resources, the PIP Framework requires binding standard material transfer agreements (SMTAs). The main difference from the CBD approach is that the institutions accessing the samples sign the SMTA with the WHO, rather than with the countries of origin of the samples. In turn, it is the WHO's responsibility to ensure that the second aim of the PIP Framework, access to capacity development and products such as vaccines, is achieved.

Capacity development and affordable access to vaccines are costly. The standard SMTA drawn up for the PIP Framework therefore includes benefit sharing options, one of which is for users of resources to pay an annual Partnership Contribution (WHO 2017). Funds from the Partnership Contribution can then be used to help countries respond to pandemics, both in terms of prevention and in actual cases.

The PIP Framework has been praised as "an innovative way to make global solidarity a reality and to protect the world against devastating pandemics" (Briand 2016: 180). No withholding of virus samples in protest over benefit sharing, as in the Indonesian case, has been reported in the literature since the framework came into effect.
Declaration of Helsinki: Post-Trial Obligations

"I have been used like a guinea pig, so how does he just leave me without compensation?" (Shaffer et al. 2006)

The sentiment expressed by this clinical trial participant in Kenya is what a particular form of benefit sharing, post-study obligations, tries to avoid. In 2000, at a meeting in Edinburgh, the World Medical Association (WMA) General Assembly added post-trial obligations to the Declaration of Helsinki, the main source of ethical guidance for medical researchers since 1964. The new article read as follows (Carlson et al. 2004):

At the conclusion of the study, every patient entered into the study should be assured of access to the best proven prophylactic, diagnostic and therapeutic methods identified by the study.
Who was going to monitor whether posttrial obligations had been discharged was not clear. As a result, in 2004, at the next General Assembly in Tokyo, the WMA added a note requesting that post-study access to drugs, medical procedures, or care be discussed during the planning of trials and documented in the study protocol (Schroeder 2008). Very few, if any, success stories were openly reported. Yet, in 2008, expectations for benefit sharing with clinical trial participants became more ambitious. Article 33 of the 2008 Declaration of Helsinki (adopted in Seoul) noted (Schroeder and Gefenas 2012): At the conclusion of the study, patients . . . are entitled . . . to share any benefits that result from it, for example, access to interventions identified as beneficial in the study or to other appropriate care or benefits.
This was the strongest benefit sharing article of the Declaration of Helsinki to date. Still, very few if any success stories were reported, and significant challenges were summarized in the literature (Schroeder and Gefenas 2012). By 2013, the term benefit sharing (“share any benefits”) had been removed from the Declaration of Helsinki, as well as the reference to “other appropriate care or benefits.” The current version of the benefit sharing article, Art 34, reads as follows (Declaration of Helsinki 2013): In advance of a clinical trial, sponsors, researchers and host country governments should make provisions for post-trial access for all participants who still need an intervention identified as beneficial in the trial. This information must also be disclosed to participants during the informed consent process.
Box 2 describes the efforts of one pharmaceutical company to discharge post-trial obligations.

Box 2 Roche's Approach to Posttrial Access (Kelman et al. 2018)
In 2013, Roche – a Swiss multinational healthcare company – publicly posted its Global Policy on Continued Access to Investigational Medicinal Products. The policy had been co-developed by Roche clinical trial professionals with
Box 2 Roche’s Approach to Posttrial Access (Kelman et al. 2018) (continued)
experts from the fields of genetics, bioethics, law, science policy, and patient advocacy. In the policy’s Executive Summary section, it reads: Roche offers patients who participate in Roche-sponsored clinical trials continued access to the investigational medicinal product that they received after trial completion, when appropriate.
Four exemptions are made. The medicinal product will not be provided:

1. When it is reasonably available to the clinical trial participant (e.g., covered by his or her health insurance)
2. When Roche has discontinued its development
3. When safety concerns exist about the product
4. When provision of the product would violate local laws

An example of the provision of posttrial access is Roche's etrolizumab clinical trial program, which studies a potential treatment for ulcerative colitis. For up to 7 years after the conclusion of the initial trial program, patients can receive the intervention as part of an open-label extension program. (In an open-label clinical trial, both the researcher and the research participant know which medicine is being administered, in contrast to a traditional double-blind clinical trial, where neither researcher nor participant knows whether they are receiving the intervention or a placebo.) After the conclusion of the open-label extension, research participants who continue to require etrolizumab can receive it on an individual basis.
The above three legal instruments and the Declaration of Helsinki rely on the concept of fairness-in-exchange. Those who do not contribute to the scientific process, for instance, by sharing samples or acting as participants, have no right to benefit sharing claims on the outcomes, according to the instruments just introduced. However, the UNESCO Universal Declaration on Bioethics and Human Rights and the Human Genome Project’s Ethics Committee Statement on Benefit Sharing see this differently.
Benefit Sharing as Distributive Fairness

To explain the main features of benefit sharing as distributive fairness, two ethical instruments will be introduced: the Human Genome Project's Ethics Committee Statement on Benefit Sharing (HUGO 2000) and the UNESCO Universal Declaration on Bioethics and Human Rights (2005).
272
D. Schroeder
The Human Genome Project's Ethics Committee Statement on Benefit Sharing

The Human Genome Project's Ethics Committee was the first major ethics group to produce a specific statement on benefit sharing. The committee was prompted by the fact that by 2000 commercial expenditure on genetic research exceeded the contributions from governments (HUGO 2000). And as noted earlier, the CBD had excluded human genetic resources, leaving a legal and ethical vacuum regarding access to human genetic resources. The HUGO (2000) Ethics Committee recommends:

1. that all humanity share in, and have access to, the benefits of genetic research.
2. that benefits not be limited to those individuals who participated in such research.
3. that there be prior discussion with groups or communities on the issue of benefit-sharing.
4. that even in the absence of profits, immediate health benefits as determined by community needs could be provided.
5. that at a minimum, all research participants should receive information about general research outcomes and an indication of appreciation.
6. that profit-making entities dedicate a percentage (e.g., 1–3%) of their annual net profit to healthcare infrastructure and/or to humanitarian efforts.
The above recommendations of the HUGO Ethics Committee make strong and clear claims on distributive fairness. Benefits are not to be limited only to those who contributed to the research. The genetic heritage of humankind warrants the demand that all humans should benefit. While these recommendations focus specifically on the Human Genome Project (“the Human Genome Project should benefit all humanity,” HUGO 2000), the Committee’s decision to take an alternative approach to the CBD was later followed by UNESCO.
The UNESCO Universal Declaration on Bioethics and Human Rights

In 2005, UNESCO published its Universal Declaration on Bioethics and Human Rights. The idea for the Declaration was launched by French President Jacques Chirac in 2001, upon which the International Bioethics Committee (IBC) of UNESCO started a three-level consultation (Bergel 2015) (Fig. 2).

Fig. 2 Stages in the development of the UNESCO declaration. Phase 1: Consultation with Member States of UNESCO as well as governmental and nongovernmental organizations on the objectives and content of the declaration. Phase 2: Drafting of the main text of the declaration by the IBC in consultation with international specialists. Phase 3: Completion of the text in two meetings of experts and unanimous ratification by the Member States of UNESCO.
Of the 28 articles of the Declaration, one of the longest is dedicated to benefit sharing (see Box 3).

Box 3 UNESCO Universal Declaration on Bioethics and Human Rights, Article 15: Sharing of Benefits

1. Benefits resulting from any scientific research and its applications should be shared with society as a whole and within the international community, in particular with developing countries. In giving effect to this principle, benefits may take any of the following forms:
(a) Special and sustainable assistance to, and acknowledgment of, the persons and groups that have taken part in the research
(b) Access to quality healthcare
(c) Provision of new diagnostic and therapeutic modalities or products stemming from research
(d) Support for health services
(e) Access to scientific and technological knowledge
(f) Capacity-building facilities for research purposes
(g) Other forms of benefit consistent with the principles set out in this Declaration
2. Benefits should not constitute improper inducements to participate in research.
As in the HUGO Statement on Benefit Sharing, UNESCO subscribes to a distributive fairness interpretation of benefit sharing. While the Declaration does not explicitly state that benefits should also go to those who do not take part in scientific research, this is clear from the initial statement of the article. Benefits resulting from any scientific research and its applications should be shared with society as a whole and within the international community, in particular with developing countries. [emphasis added]
The three instruments introduced earlier regarding fairness-in-exchange are legally binding. This gives them considerable legal power. It is difficult to say how effective a non-binding declaration such as UNESCO's can be in practice. But it is important to note that the Declaration has the support of states rather than "only" of professional organizations, as in the Declaration of Helsinki's link to the World Medical Association. This has been emphasized as a considerable advantage (Langlois 2008).

Aristotle famously promoted the golden mean, that is, the space between two polar opposites (Aristotle 2004: 1104a25). Is there a middle way between the fairness-in-exchange and distributive fairness models of benefit sharing?
Combining Fairness-in-Exchange and Distributive Fairness?

In 2018, the European Commission added a mandatory reference document to its Horizon 2020 research framework: the Global Code of Conduct for Research in Resource-Poor Settings (GCC) (Nordling 2018). The GCC was developed by a
European Commission-funded project, TRUST. (TRUST is a pluralistic project which aims to foster adherence to high ethical standards in research globally and to counteract the practice of "ethics dumping," or the application of double standards in research, by co-developing with vulnerable populations tools and mechanisms for the improvement of research governance structures; see http://trust-project.eu/.) The project involved global stakeholders as partners, including universities, multilevel ethics bodies, policy advisors, civil society organizations, funding organizations, industry, and representatives from vulnerable research populations in LMICs. The GCC combines elements of distributive fairness with elements of fairness-in-exchange. Does this attempt to combine the best of both worlds reach a golden mean?
The Global Code of Conduct for Research in Resource-Poor Settings

The GCC (2018) provides guidance to researchers of all disciplines and focuses especially on research in resource-poor settings. It was developed to address ethics dumping (Schroeder et al. 2018), the practice of exporting unethical research from HICs to regions with more malleable regulatory frameworks. While not legally binding, the GCC has bite through its adoption by both the European Commission and the European and Developing Countries Clinical Trials Partnership (EDCTP). Those in receipt of research funds have to demonstrate that they abide by the GCC, and ethics reviewers working on behalf of the two funders assess ethics protocols against the GCC's articles. As Ron Iphofen noted (Nordling 2018):

I could envisage reviewers now looking suspiciously at any application for funds that entailed research by wealthy nations on the less wealthy that did not mention the code.
While the GCC was published by a group funded through the European Commission (see http://www.globalcodeofconduct.org/), its authors include high-profile individuals who are part of other major global efforts, in particular: • The Chief of Bioethics of UNESCO, who had a major role in drafting the UNESCO Universal Declaration on Bioethics and Human Rights • The Director of the EDCTP • The Director of the Council on Health Research for Development (COHRED) who advised the Council for International Organizations of Medical Sciences (CIOMS) on their 2017 International Ethical Guidelines for Health-Related Research Involving Humans • The lead UN advisor on the UN Global Compact, a major UN initiative on corporate responsibility • Two previous Heads of the Ethics Unit of the European Commission • The main drafter of all Indian ethics guidelines on involving human participants in research What is widely regarded as a major achievement of the GCC is that vulnerable research populations in LMICs, in particular indigenous peoples from the Kalahari and
sex workers from Nairobi, were represented throughout the process of drafting; this is the GCC's co-design, or bottom-up, approach (Burtscher 2018). The spirit of benefit sharing is included in many of the GCC's 23 articles, rather than added as a specific article. Using a four-value framework (fairness, respect, care, and honesty), the GCC takes the following stance on benefit sharing (all articles are summarized rather than written out in full; see the full code to understand subtleties):

I. The seven articles under the value of fairness all address benefit sharing.
Art 1 – Locally relevant research
Art 2 and Art 4 – Research co-created with LMIC stakeholders
Art 3 – Feedback provided
Art 5 and Art 6 – Access and benefit sharing as per CBD or PIP
Art 7 – Fair remuneration of local research support systems

II. The four articles under the value of respect promote good communication across nations and cultures, as a basis for avoiding subsequent exploitation claims.
Art 8 – Due diligence regarding cultural sensitivities
Art 9 – Community assent
Art 10 – Local ethics review
Art 11 – Respectful collaboration with local ethics committees

III. The eight articles under the value of care aim to mitigate the effects of substantial power differentials, one of the legitimate bases for benefit sharing demands.
Art 12 – Adapted informed consent
Art 13 – Locally suitable complaints mechanism
Art 14 – No double standards
Art 15 – Measures against specific risks for research participants
Art 16 – Due diligence on depletion of local resources
Art 17 – High animal welfare standards, if necessary matched to HIC standards
Art 18 – High environmental standards, if necessary matched to HIC standards
Art 19 – Managing health and safety risks for researchers

IV. The four articles under the value of honesty promote equitable partnerships in practical ways.
Art 20 – Honesty in distribution of research labor
Art 21 – Plain language and non-patronizing communication
Art 22 – No corruption or bribery
Art 23 – Special efforts on privacy preservation

As can be seen from the above summary, the GCC does not present a golden mean between the two benefit sharing types but leans toward fairness-in-exchange (Fig. 3). Distributive fairness requires that all of humanity benefit from science and innovation. Fairness-in-exchange is restricted to those who contribute directly to research. The GCC deals mostly with research participants and other direct stakeholders (e.g., researchers in LMICs), but it does make reference to community needs, for instance, in the context of necessary community assent to studies. Hence, it is a step to the right (in terms of the figure below), toward distributive fairness.
Fig. 3 Position of instruments on Aristotle’s golden mean
The reason for this is that the GCC governs research. It is beyond the power of researchers and their funders to ensure that “society as a whole” and “in particular . . . developing countries” benefit “from any scientific research and its applications” (UNESCO 2005, emphasis added). However, through the building of equitable research partnerships, the previously unbalanced power distributions in global research and innovation can begin to be addressed. For instance, if “Local researchers . . . [from resource-poor settings are] included, wherever possible, throughout the research process, including in study design, study implementation, data ownership, intellectual property and authorship of publications” (Art 4, GCC), it is more likely that research would be tailored toward the needs of LMICs and not be exploitative. Is this stepped approach to benefit sharing satisfactory, or could benefit sharing become more ambitious, moving more toward distributive fairness and into the golden mean?
Benefit Sharing: Open Questions and Solutions?

While benefit sharing is a relatively new concept, which only came to the fore internationally with the CBD in 1992, its spirit is both ancient and global due to its direct link to justice demands. To examine whether there are open questions in benefit sharing is therefore like asking: "Is the world just yet?" The answer has to be "no," and the news is not that good on the benefit sharing front. The two types of benefit sharing face very different problems and open questions.
The fairness-in-exchange approach links efforts for benefit sharing with specific actions in the real world, e.g., accessing microorganisms with prior informed consent
and the signing of SMTAs. The distributive fairness approach, on the other hand, is inspirational and idealistic. It is, in fact, unclear how it could be translated into practical action or operationalized. The following sections outline the problems and open questions that each approach faces.
Three Elephants in the Fairness-in-Exchange Room

Trying to achieve fairness-in-exchange on a global scale through legal instruments has led to highly technical requirements, which have delivered few success stories and a lot of criticism. (An elephant in the room is an English metaphor for a very large problem that nobody discusses openly, or at all.) Box 4 describes one success story of the application of the South African Biodiversity Act. But it is one of few.
Box 4 Successful Benefit Sharing Under the South African Biodiversity Act
Zembrin® is a standardized, patent-protected botanical extract developed by HG&H Pharmaceuticals Pty Ltd. of South Africa. The extract is used primarily for anxiety, stress, and depression and is made from a cultivated selection of Sceletium tortuosum which was brought to market under the leadership of South African doctor and ethnobotanist Nigel Gericke after research in the Kamiesberg Mountains of South Africa. Gericke had first read about indigenous uses of the Sceletium plant in a library in Sydney, Australia.

In 1995 Gericke engaged the services of a leading addictionologist, Dr. Greg McCarthy, to accompany him on a field visit to two communities in Namaqualand where the plant was still in common use. Based on structured interviews with rural people, it was provisionally concluded that the plant was likely to be safe and nonaddictive and may have potential mental health benefits. A year later, in 1996, active components were isolated from the plant, leading to a patent filed shortly afterward. It took almost a decade before a specific selection of the plant could be cultivated on a commercial scale and a plant extract be produced that was standardized in terms of both the content and relative composition of the key active compounds. At this point, a venture capital company, Halls Investments, invested in further research, development, and commercialization.

In recognition of the ethnobotanical basis for the project, Gericke initiated benefit sharing negotiations with the South African San Council. Parties to the benefit sharing agreement, which was concluded in 2008, were the San peoples of South Africa, through the South African San Council, and HG&H Pharmaceuticals, the newly founded company intending to market Zembrin®. The agreement contained three central points:
1. Should commercialization be successful, 5% of all sales of the extract would be paid into a trust fund for the San peoples.
2. An additional 1% of all sales would be paid into the trust fund as a license for the use of a San logo by HG&H Pharmaceuticals.
3. In recognition of the foundational ethnobotanical research conducted in Namaqualand, the San Council agreed to pay 50% of the income received from HG&H to two committees representing the communities in Namaqualand.

Zembrin® was first launched in South Africa in 2012 and has since been launched in the USA, Canada, Brazil, Malaysia, and Japan. The terms of the benefit sharing agreement have been respected through yearly payments to the trust fund and a mutually supportive relationship between HG&H Pharmaceuticals and the San Council.
Criticism of the fairness-in-exchange approach to benefit sharing ranges from serious rejection of the effort itself through to highly specific objections, which are in principle resolvable. For instance, objections which are in principle resolvable (drawn from Wynberg et al. 2009) are:

• The CBD commodifies knowledge by requiring benefit sharing for traditional knowledge holders who agree to share their knowledge. When knowledge holders live in remote, rural areas, adequate community consent is a major challenge.
• Benefit sharing with traditional communities requires stable, robust, and representative institutions. Sufficient time, financial support, and advice, for instance legal advice, are essential elements in the process. However, these might not be available to the people who need them.
• Bioprospecting activities, which lead to benefit sharing, often raise unreasonable expectations in communities.
• When indigenous traditional knowledge holders reside in several countries and biological resources are shared across national borders, they are unlikely to have the government support they would need to obtain justice.

As an example of a serious rejection of the fairness-in-exchange effort, 172 scientists from 35 countries published "When the cure kills – CBD limits biodiversity research" in Science (Prathapan et al. 2018). They argued that the CBD produces overly high hurdles for biodiversity research and thereby prevents international collaborations which aim to preserve biodiversity. Paradoxically, the tool developed to secure the preservation of biodiversity is seen in practice to undermine it.

The vast administrative and legal burden on countries to operationalize CBD benefit sharing arrangements, and the burden on innovators to comply, is not often
mentioned. It is the first big elephant in the fairness-in-exchange room. The second elephant in the room is the fact that the USA, the country with the highest absolute research and innovation budget (OECD 2018), has ratified neither the CBD nor its Nagoya Protocol. Hence, the largest research and innovation funder in the world sidesteps all benefit sharing requirements.

At the same time, supporters of the fairness-in-exchange approach to benefit sharing are also not satisfied. (Due to space constraints, and as the criticisms are many in number, only the three "elephants" are described. However, for the interested reader, it is worth seeing the reply to the criticism that benefit sharing creates undue inducements, written by R. Chennells (2016).) For instance, even the PIP Framework, which relates only to very specific specimens in a well-established global network, faces criticism. Gostin et al. (2014) question whether the PIP Framework can handle the "growing likelihood that genetic sequence data might be shared instead of physical virus samples." This problem has now also been examined by a CBD (2018) fact-finding mission: Fact-Finding and Scoping Study on Digital Sequence Information on Genetic Resources in the Context of the CBD and the Nagoya Protocol. The Executive Summary of the mission report is ten pages long, an indicator of the level of technicality and challenge, and ends with the following statement:

This study has revealed a number of important areas which we have only touched upon, and which warrant further and deeper investigation.
These include: determining/estimating the value of digital sequence information; exploring the approaches of public and private databases; investigating new and traditional forms of benefit sharing in the context of digital sequence information; reviewing user notices, MTAs, agreements and other benefit-sharing tools; reviewing national ABS measures and how they regulate sequence information; exploring the interface between scientific and technological developments and ABS; reviewing the relationship between sequence information, biodiversity conservation, and sustainable use; and investigating ways in which intellectual property rights are asserted for sequence information, and ABS implications.
The third elephant in the fairness-in-exchange room is related to, and also hinted at by, the CBD fact-finding report (2018): "Paralleling dramatic changes in science and technology are developments in the institutional, legal and social context of research." Can a global legal framework, such as the CBD, stay ahead of changes in science, technology, and society to ensure fairness-in-exchange? The answer to this question is unclear.
A Partner for the Distributive Fairness Approach

If the more concrete fairness-in-exchange approach to benefit sharing faces such severe challenges, how does the distributive fairness approach fare, which makes an even higher demand: sharing the benefits of research and innovation with all of humanity? Are there perhaps partners for the UNESCO Declaration, and if so, what would a successful partnership look like? The partner would certainly have to be pro-research and innovation and simultaneously committed to uplifting LMICs to achieve global justice.
Luckily for the distributive fairness approach, there is such a partner. The biggest justice effort of our generation, the UN Sustainable Development Goals, also called Agenda 2030, was released by the UN General Assembly in 2015. Goal 9 of Agenda 2030 makes it clear that the goals are pro-research and innovation (UN 2015):

Without technology and innovation, industrialization will not happen, and without industrialization, development will not happen. There needs to be more investments in high-tech products that dominate the manufacturing productions to increase efficiency and a focus on mobile cellular services that increase connections between people.
At the same time, Agenda 2030 pursues the underlying spirit of distributive fairness efforts, namely, to leave no one behind. The key to leaving no one behind “is the prioritisation and fast-tracking of actions for the poorest and most marginalised people” (Stuart and Samman 2017). This interpretation of the UN Sustainable Development Goals and their focus is reminiscent of John Rawls’ (1999: 65) approach to justice, quoted earlier: The higher expectations of those better situated are just if and only if they work as part of a scheme which improves the expectations of the least advantaged members of society.
What does this all mean for the prospect of benefit sharing? Wouldn't one have to be very optimistic to assume that we could persuade the three elephants in the fairness-in-exchange room to leave? Or that Agenda 2030 can erase the need for a distributive justice platform for benefit sharing? Yes, one would have to be very optimistic. But there is also a much simpler solution.

The need for benefit sharing has only arisen because LMICs have been and continue to be exploited (Rabitz 2015). This is what is meant by ethics dumping. Yet, not to exploit LMICs is within the power of every single researcher and innovator. Individuals can take the initiative on the benefit sharing agenda and achieve compliance with the spirit of all of the above instruments through sheer decency. Highly complex legal instruments are only necessary where researchers do not have the willingness or compassion not to exploit the vulnerable, i.e., where researchers do not show research integrity. When they do, highly technical benefit sharing discussions could become obsolete.

Acknowledgments This work is based on research undertaken for the TRUST project, funded by the European Commission Horizon 2020 Program, agreement number 664771. Thanks to Julie Cook for comments on an earlier draft and editorial support.
References

Andanda P, Schroeder D, Chaturvedi S, Mengesha E, Hodges T (2013) Legal frameworks for benefit sharing: from biodiversity to human genomics. In: Schroeder D, Cook Lucas J (eds) Benefit sharing – from biodiversity to human genetics. Springer, Berlin, pp 33–64
Aristotle (2004) In: Tredennick H, Barnes J (eds) Nicomachean ethics, 2nd edn. Penguin Classics, London
Bergel SD (2015) Ten years of the Universal Declaration on Bioethics and Human Rights 23(3):446–455
Briand SC (2016) Into the future: are we ready to face modern outbreaks? Wkly Epidemiol Rec 91(13):178–180
Burtscher W (2018) TRUST Global Code of Conduct to be a reference document applied by all research projects applying for H2020 funding. Editorial, TRUST Newsletter Issue 5. http://trust-project.eu/wp-content/uploads/2018/10/Newsletter-5-TRUST-final-final.pdf
Caplan AL, Curry DR (2007) Leveraging genetic resources or moral blackmail? Indonesia and avian flu virus sample sharing. Am J Bioeth 7(11):1–2
Carlson RV, Boyd KM, Webb DJ (2004) The revision of the Declaration of Helsinki: past, present and future. Br J Clin Pharmacol 57(6):695–713
Carrington D (2018) What is biodiversity and why does it matter to us? Guardian, 12 Mar 2018. https://www.theguardian.com/news/2018/mar/12/what-is-biodiversity-and-why-does-it-matter-to-us
CBD (2018) Fact-finding and scoping study on digital sequence information on genetic resources in the context of the CBD and the Nagoya Protocol, CBD/DSI/AHTEG/2018/1/3. https://www.cbd.int/doc/c/b39f/4faf/7668900e8539215e7c7710fe/dsi-ahteg-2018-01-03-en.pdf
Chaturvedi S, Crager S, Ladikas M, Muthuswamy V, Su Y, Yang H (2013) Promoting an inclusive approach to benefit sharing: expanding the scope of the CBD? In: Schroeder D, Cook Lucas J (eds) Benefit sharing – from biodiversity to human genetics. Springer, Berlin, pp 153–177
Chennells R (2016) Equitable access to human biological resources in developing countries: benefit sharing without undue inducement. Springer, Berlin
Chivian E, Bernstein A (eds) (2008) Sustaining life: how human health depends on biodiversity. Center for Health and the Global Environment. Oxford University Press, New York
Convention on Biological Diversity (1992) https://www.cbd.int/doc/legal/cbd-en.pdf
Declaration of Helsinki (2013) https://www.wma.net/policies-post/wma-declaration-of-helsinki-ethical-principles-for-medical-research-involving-human-subjects/
Global Code of Conduct for Research in Resource-Poor Settings (2018) http://www.globalcodeofconduct.org/
Gostin LO, Phelan A, Stoto MA, Kraemer JD, Reddy KS (2014) Virus sharing, genetic sequencing, and global health security. Science 345(6202):1295–1296
HUGO (2000) Human Genome Project's Ethics Committee statement on benefit sharing. http://www.hugo-international.org/Resources/Documents/CELS_Statement-BenefitSharing_2000.pdf
Kelman A, Kang A, Crawford B (2018) Continued access to investigational medicinal
Langlois A (2008) The UNESCO Universal Declaration on Bioethics and Human Rights: perspectives from Kenya and South Africa. Health Care Anal: J Health Philos Policy 16(1):39–51
Mayer R (2007) What's wrong with exploitation. J Appl Philos 24(2):137–150
Nagoya Protocol on Access to Genetic Resources and the Fair and Equitable Sharing of Benefits Arising from their Utilization to the Convention on Biological Diversity, short: Nagoya Protocol on Access and Benefit Sharing (2010) https://www.cbd.int/abs/doc/protocol/nagoya-protocol-en.pdf
Nordling L (2018) Europe's biggest research fund cracks down on 'ethics dumping'. Nature 559(7712):17–18
OECD (2018) Main science and technology indicators database, July 2018. http://www.oecd.org/innovation/inno/researchanddevelopmentstatisticsrds.htm
Prathapan KD, Pethiyagoda R, Bawa KS, Raven PH, Rajan PD (2018) When the cure kills – CBD limits biodiversity research. Science 360(6396):1405–1406
Rabitz F (2015) Biopiracy after the Nagoya Protocol: problem structure, regime design and implementation challenges. Braz Polit Sci Rev 9(2):30–53
Rawls J (1999) A theory of justice, revised edn. Oxford University Press, Oxford
Schroeder D (2008) Post-trial obligations. Reciis 2(Sup1):63–73
Schroeder D, Gefenas E (2012) Realising benefit sharing – the case of post-study obligations. Bioethics 26(6):305–314
Schroeder D, Cook J, Hirsch F, Fenet S, Muthuswamy V (eds) (2018) Ethics dumping – case studies from North-South research collaborations. Springer, Berlin
Sedyaningsih ER, Isfandari S, Soendoro T, Supari SF (2008) Towards mutual trust, transparency and equity in virus sharing mechanism: the avian influenza case of Indonesia. Ann Acad Med 37(6):482–488
Shaffer DN et al (2006) Equitable treatment for HIV/AIDS clinical trial participants: a focus group study of patients, clinical researchers, and administrators in western Kenya. J Med Ethics 32:55–60
Shiva V (1991) The violence of the green revolution: third world agriculture, ecology, and politics. Zed Books, London
Stuart E, Samman E (2017) Defining 'leave no one behind'. Briefing paper. Overseas Development Institute. https://www.odi.org/publications/10956-defining-leave-no-one-behind
ten Kate K, Laird SA (2000) Biodiversity and business: coming to terms with the 'grand bargain'. Int Aff 76(2):241–264
UNESCO (2005) Universal Declaration on Bioethics and Human Rights. http://portal.unesco.org/en/ev.php-URL_ID=31058&URL_DO=DO_TOPIC&URL_SECTION=201.html
United Nations (2015) Sustainable development goals. https://www.un.org/sustainabledevelopment/sustainable-development-goals/
Watts J (2018) Destruction of nature as dangerous as climate change, scientists warn. Guardian, 23 Mar 2018. https://www.theguardian.com/environment/2018/mar/23/destruction-of-nature-as-dangerous-as-climate-change-scientists-warn
WHO (2007) Avian and pandemic influenza. Provisional agenda item 12.1, Sixtieth World Health Assembly, A60/INF.DOC./2, 22 March 2007. World Health Organization
WHO (2011a) Pandemic influenza preparedness framework for the sharing of influenza viruses and access to vaccines and other benefits (PIP Framework). http://apps.who.int/iris/bitstream/handle/10665/44796/9789241503082_eng.pdf?sequence=1
WHO (2011b) Pandemic influenza preparedness framework for the sharing of influenza viruses and access to vaccines and other benefits. PIP/OEWG/3/4. http://apps.who.int/gb/pip/pdf_files/OEWG3/A_PIP_OEWG_3_2-en.pdf
WHO (2017) Pandemic influenza preparedness framework, online Q&A. http://www.who.int/features/qa/pandemic-influenza-preparedness/en/
Wynberg R, Schroeder D, Chennells R (eds) (2009) Indigenous peoples, consent and benefit sharing – learning lessons from the San-Hoodia case. Springer, Berlin
Internet Research Ethics and Social Media
16
Charles Melvin Ess
Contents

Introduction ... 284
Foundational Norms and Principles ... 285
Relational Selfhood and Research Ethics ... 287
Cross-Cultural Awareness and Ethical Pluralism ... 289
Ethics and Judgment (Phronēsis) ... 290
Asking the Right Questions ... 292
Primary Elements of IRE ... 295
Ethics Is Method: Method Is Ethics (Markham 2006) ... 295
Stages of Research and Informed Consent ... 295
Current Debate ... 298
Anticipated Outcomes/Horizon Scanning for Future Issues ... 299
Cross-References ... 300
References ... 300
Abstract
Internet research ethics (IRE) is introduced via a historical overview of its development by the Association of Internet Researchers' ethics committees (2002, 2012, 2020) and the Norwegian Research Ethics Committees (2003, 2006, [2018] 2019). These overlapping but importantly distinctive guidelines foreground key norms and principles (starting with human autonomy and dignity), ethical frameworks (utilitarianism, deontology, virtue ethics, feminist ethics, care ethics), and prevailing, especially question-oriented, approaches to identifying and resolving representative ethical challenges in internet research. Comparing and contrasting these (and other relevant) guidelines further introduces additional central elements of IRE: assumptions regarding personhood and moral
C. M. Ess (*) Department of Media and Communication, University of Oslo, Oslo, Norway e-mail: [email protected] © Springer Nature Switzerland AG 2020 R. Iphofen (ed.), Handbook of Research Ethics and Scientific Integrity, https://doi.org/10.1007/978-3-030-16759-2_12
agency (individual vis-à-vis relational); respecting and incorporating diverse national/cultural ethical traditions and norms by way of an ethical pluralism; the role of dialogical, process-oriented approaches and reflective ethical judgment (phronēsis); interweaving ethics and methods; and considering ethical challenges characteristic of distinct stages of research. Two challenges evoked by Big Data research techniques are examined, beginning with the possibilities and limitations of informed consent and researchers' possible use of "gray data" (personal information that is hacked and thus made public and available to researchers). Current and future challenges cluster around protecting both researchers' and subjects' privacy – specifically, privacy as now reconceptualized in terms of contextual integrity, as appropriate to the more relational selves facilitated especially by social media – in an emerging Internet of Things (IoT).

Keywords
Internet research ethics · Virtue ethics · Care ethics · Feminist ethics · Deontology · Utilitarianism · Human Subjects Protections · Informed consent · Ethical pluralism · Reflective judgment (phronēsis)
Introduction

Internet research ethics (IRE) is rooted in both communication technologies and research ethics traditions that precede the rise of the Internet per se. The historical and cultural conditions surrounding the rise of computer-mediated communication (CMC) and what became the Internet are initially US-centric. First efforts to develop ethical guidelines for Internet research (e.g., King 1996; Frankel and Siang 1999) emerged within the context and traditions of the US-based Human Subjects Protections protocols and university-based institutional review boards (IRBs; for a more complete history, see Buchanan and Zimmer 2018, pp. 13–19). But the early Internet crossed borders quickly: a key emphasis in the first guidelines from the Association of Internet Researchers Ethics Working Committee was precisely the importance of taking on board often significant differences between diverse national and cultural ethical traditions (Ess and The Association of Internet Researchers Ethics Working Committee 2002; referred to as AoIR 2002 hereafter). The next year, the Norwegian Research Ethics Committees published the first national guidelines for Internet research (NESH [2018] 2019). While there are central agreements with the normative principles undergirding the AoIR 2002 guidelines, the NESH guidelines are importantly distinctive: first, they are shaped by uniquely Norwegian presumptions regarding what it means to be a person (namely, both relational and individual, in contrast with the sharply individualistic emphases in US and Anglophone approaches more generally). At the same time, they are clearly more deontological (to mention a first ethical framework) in their ethical groundings, in contrast with a more utilitarian approach apparent in the US-based groundings of the initial AoIR document. (These ethical frameworks will be explored in the course of the chapter as well.)
16
Internet Research Ethics and Social Media
Since these earliest beginnings, of course, the technologies, scope, affordances, uses, and impacts of Internet-facilitated communication – and so of Internet research and IRE – have dramatically expanded. But these foundational documents also introduce what remain primary elements of IRE – including the ethical frameworks primarily brought into play, beginning with utilitarianism and deontology, as well as the critical importance of cultural differences, including our foundational assumptions regarding personhood. These serve as our starting points for exploring these documents and their subsequent developments – i.e., NESH ([2018] 2019) and AoIR 2020 (franzke et al. 2020; AoIR 2020 hereafter) – in order to develop a primary overview of IRE. This overview includes IRE's foundational normative commitments; its primary ethical style (process- and dialogically oriented); and thereby the primary questions both sets of documents have used to foster the ethical reflection and judgment central to coming to grips with the ethical challenges evoked by Internet research, including research on social media. Along the way, we will take up additional ethical frameworks that have become increasingly central for IRE – namely, virtue ethics and feminist ethics. This first section concludes with two primary elements of IRE – namely, the emphasis on "ethics is method, method is ethics" (Markham 2006) and a concrete expression of this emphasis in an initial taxonomy for evaluating the ethical elements of research and research design. This taxonomy leads to a first concrete example of a common ethical challenge in the era of Big Data research, namely, informed consent. "Gray data" is then explored, followed by privacy as a central ethical requirement and conundrum, not only within Big Data research but ever more so as we move into the era of artificial intelligence and the Internet of Things.
Foundational Norms and Principles

The AoIR guidelines are initially rooted in the Human Subjects Protections traditions of the United States – most importantly, the Belmont Report (National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research 1979). The Report foregrounds three primary norms – respect for persons (as free or autonomous beings), beneficence, and justice (1979, pp. 4–6). These norms in turn ground the primary ethical requirements for Human Subjects research, including minimizing risk; ensuring protection of subjects' anonymity, confidentiality, and privacy; acquiring informed consent; and so on. At the same time, however, not all Internet research (including social media research) is Human Subjects research per se: more humanities-based research, in particular, will approach participants as "activists, authors, and/or amateur authors" (AoIR 2002, p. 31). As a start, the interactions and productions of the latter are frequently intended to be public and so fall more immediately within legal requirements for copyright protection, for example. By way of contrast, the NESH guidelines are shaped by the Norwegian context: as a member state of the European Economic Area (EEA), Norway is thereby beholden to EU rules and regulations. Central here are the EU data privacy
regulations inaugurated in 1995 and recently revised – and importantly strengthened – in the General Data Protection Regulation (GDPR 2018). To be sure, there is strong overlap with the AoIR 2002 guidelines – beginning with shared emphases on autonomy (cf. Principle 1, "Respect for the Autonomy, Privacy and Dignity of Individuals and Communities," British Psychological Society 2017, p. 6). At the same time, however, there are critical differences. To begin with, the centrally authoritative US Office for Human Research Protections (OHRP) documents begin with a utilitarian consideration: "Risks to subjects" are allowed, if they "are reasonable in relation to anticipated benefits, if any, to subjects, and the importance of the knowledge that may reasonably be expected to result" (OHRP 2018, p. 11; cf. Principle 4, "Maximising Benefits and Minimising Harm," British Psychological Society 2017, 18f.). Certainly, risks should be kept to a minimum – but such risks and harms may be justified in a utilitarian cost-benefit analysis that aims toward "the greatest good for the greatest number." Such a calculus explicitly allows for costs to a few, as long as these costs are outweighed by greater benefits for the many – in this case, benefits to subjects and accrued knowledge. (For more extensive discussion of utilitarianism and research ethics, see Ess 2015, p. 51.) In sharp contrast, the NESH guidelines emphasize that "Research ethics is based on respect for human dignity and builds on general ethics and fundamental human rights" ([2018] 2019, p. 3). This announces a clearly deontological approach, i.e., one that regards human dignity and human rights as near absolutes that cannot be traded off, no matter the potential benefits. So NESH insists that "Researchers must protect personal integrity, preserve individual freedom and self-determination, respect privacy and family, and safeguard against harm and unreasonable strain" (ibid.).
More broadly, NESH ([2018] 2019) further asserts that Internet research ethics must respect the central (deontological) norms of "dignity, freedom, autonomy, solidarity, equality, democracy and trust" – norms that are at the same time central to the EU GDPR (NESH [2018] 2019, p. 3; European Data Protection Supervisor 2018, pp. 16–21). (For more discussion of deontology and research ethics, see Ess 2015, 51f.) Lastly, the phrase "respect privacy and family" points to the distinctive Norwegian understanding of the human being. At the risk of oversimplification, the focus in US culture and traditions (and, perhaps, the Anglophone world more generally) is on "the individual" – what the philosopher Charles Taylor characterizes as an atomistic conception of selfhood, one that emphasizes its freedom from "established custom and locally dominant authority" (1989, p. 167). This conception of selfhood undergirds both utilitarian and deontological ethics: in particular, Kant's classic account of the human being as a rational autonomy, a free being capable of determining his/her own law or rule (auto-nomos), thereby grounds central modern norms of respect, equality (including gender equality), and basic rights – including, as we will see, the right to privacy – as well as the role of rational debate and deliberation in democratic societies (Ess 2015, 53f.). To be sure, to be human in Norway includes a central emphasis on being an individual: at the same time, however, Norwegian philosophies and practices conjoin this individuality with a clearly relational sense of selfhood. For example, paragraph 100 of the Norwegian constitution guarantees freedom of expression: its revised form, presented in 1999,
justified these freedoms on the presumption of "the mature person," understood as an individual inextricably bound up with others:

. . . we as individuals achieve autonomy and competence through meeting others, listening to their arguments and testing their alternative perspectives. Gradually a reflective identity is formed. Only in this way can the individual achieve free formation of opinion. (Skarstein n.d., p. 4)
This conception of the mature person thus anticipates more recent feminist developments of relational autonomy, which likewise emphasize autonomy as requiring "an irreducibly dialogical form of reflectiveness and responsiveness to others" (Westlund 2009, p. 26). Similarly, within the context of Norwegian research ethics, discussions of "privacy" consistently include privatliv, "private life," understood as the lives we share together within close-tie relationships. In the 2006 NESH guidelines, for example, § 13 announces "The obligation to respect individual's privacy [privatliv] and close relationships" (17; Ess and Fossheim 2013, p. 51). The 2019 NESH guidelines explicitly "combine individualist and relational perspectives on human life, which is especially relevant for distinguishing between private and public matters on the internet" (2019, p. 6). This emphasis on more relational senses of selfhood highlights, first of all, deep cultural differences between US-based and Norwegian (if not broadly Scandinavian) assumptions regarding personhood and privacy, and hence serves as our introit into the next section on cross-cultural awareness and ethical pluralism. At the same time, the rising recognition of more relational conceptions of selfhood in IRE (and contemporary ethics more broadly) is especially central for social media research and is further connected with the increasing importance of both virtue ethics and feminist ethics. Finally, as we will explore below, these conceptions of selfhood and privacy remain central to contemporary debate in IRE.
Relational Selfhood and Research Ethics

Briefly, both Internet studies and especially social media research are deeply shaped by a number of theories that emphasize the prime importance of relationships – whether familial ties, close friendships, and/or those entangling us with our larger communities – in defining our sense of self. One of the most prominent of these is Erving Goffman's "performative self," i.e., a self that tunes its self-presentation toward specific relationships and their correlative expectations and norms (1959). To be sure, relationality was already central to the earliest foci of Internet research, e.g., blogs, e-mail and listservs, text- and video-based chatrooms, MUDs (multi-user dungeons) and MOOs (MUDs, object-oriented) for gameplay and various forms of sociality, and Computer-Supported Cooperative Work Systems (AoIR 2002, p. 4). But especially social media or social networking sites (SNS) can be characterized as perfect Goffmanian machines: the use and success of these sites depend entirely
upon our presenting a self (profile) that is carefully curated and cultivated toward specific relationships (Ess and Fossheim 2013, p. 43). For example, when older people – including parents and grandparents – began to move onto Facebook, younger people began to move off to different social media channels such as Instagram and Snapchat, whose messages could be more ephemeral and which allowed for more precise control over one's followers, i.e., those who could have access to one's communication. Obviously, such close-tie relationships among friends are different from familial relationships – as well as from more sexually and/or romantically oriented relationships, for which we have, e.g., Tinder, Grindr, and any number of other relationship and hookup apps. In this light, it is perhaps not surprising that in parallel with the rise of social media from ca. 2005 forward, two ethical frameworks that likewise emphasize relationality – namely, virtue ethics and feminist ethics – have become increasingly central to IRE. Very broadly, virtue ethics asks the question, what must I do to be content? – where "content" translates the Greek eudaimonia, a sense of contentment or well-being, in contrast with more short-term pleasure or happiness. The brief response is that we must acquire and cultivate specific habits, capacities, or abilities ("virtues") – such as patience, perseverance, and empathy: as Shannon Vallor points out, these capacities are required for the most primary human goods of friendship, long-term intimate relationships, and even communication itself. Such goods – such cultivated relationships – are part and parcel of "good lives" of "flourishing" (2010). The focus here on relationality is hardly accidental: the historical roots of virtue ethics – whether Aristotelian, Confucian, or Buddhist – entail precisely a relational sense of human being (cf. Vallor 2016, pp. 61–84).
By the same token, feminist ethics, beginning with the work of Carol Gilligan (1982), likewise emphasizes the central importance of sustaining our "web of relationships" as a primary consideration in our ethical judgments and decision-making – in contrast with more rule-oriented frameworks such as utilitarianism and deontology. Briefly, Gilligan found that women confronting the most challenging ethical dilemmas, such as those surrounding abortion, tended to move beyond purely rational, norm-oriented considerations as guided by such frameworks. In addition, women as a group tended to be more concerned that all within a group (friends, family, etc.) felt that they were included, treated fairly, and so on, even if that sometimes meant following "the rules" less than perfectly (Gilligan 1982, pp. 32–38). These considerations further entailed attention to members of a group as thereby constituting a web of interpersonal relationships – i.e., as relational beings whose ethical judgments took on board the emotive aspects of an ethical dilemma. Care emerges as a particular focus here – namely, in the care ethics of Nel Noddings and Sara Ruddick, among others (see Tong and Williams 2018 for more detail; A. K. Kingston, Feminist Research Ethics: From Theory to Practice, this volume). Not surprisingly, feminist ethics and ethics of care emerged early on as especially appropriate frameworks, e.g., in participant-observation methodologies in which researchers often felt close bonds with and thereby heightened moral obligations to their research subjects (e.g., Hall et al. 2003; Walstrom 2004; Buchanan and Ess 2008, 276f.). McKee and Porter (2010) included care coupled with five other
qualities defining the ethos of feminist research methodology – namely, commitments to social justice and improvement of circumstances for participants, critical reflexivity, flexibility, dialogical engagement with participants, and transparency (155f.). (These directly overlap and resonate with the feminist principles of empowerment, reflexivity, and reciprocity identified by Kingston, Feminist Research Ethics: From Theory to Practice, this volume.) Both care ethics and feminist approaches more broadly are increasingly applied to Big Data research (e.g., Leurs 2017; Fotopoulou forthcoming; Lupton 2018; Luka and Milette 2018; Rambukkana 2019) and to other more quantitatively oriented research as well (e.g., Tiidenberg 2018; Suomela et al. 2019). Lastly, we will see below how these more relational conceptions of selfhood have direct implications for our conceptions of privacy – most centrally in the development of privacy as contextual integrity (Nissenbaum 2010) as well as in notions of "group privacy" (Taylor et al. 2017). It should be noted that they likewise entail correlative shifts in our understandings of ethical responsibility toward "distributed morality" (Floridi 2013) and "distributed responsibility" (Simon 2015) – that is, responsibility as now "stretched" across relationally intertwined persons and the larger technological infrastructures that tie us together. While these emerging conceptions of privacy are increasingly invoked in IRE guidelines and discussion, so far, to my knowledge, the implications of distributed morality and responsibility for IRE have yet to be worked out.
Cross-Cultural Awareness and Ethical Pluralism

The sharp contrasts between (a) more utilitarian (OHRP) vis-à-vis more deontological approaches (EU, NESH) and (b) more individual vis-à-vis more relational understandings (NESH) thereby mark out critical differences between the US-based approaches that long dominated IRE and EU and NESH approaches. The global reach of the Internet requires attention to these and even greater differences in the ethical norms, traditions, and practices of diverse cultures, as noted in AoIR 2002. These differences confront researchers working across national and cultural borders with a central problem – namely, in cases of conflict, how to make the ethical judgments and decisions appropriate to a given project. For example, even within EU-funded projects, researchers frequently find that the privacy protections and regulations of the national locations of diverse team members and research informants/subjects may differ. As an instance, the VOX-Pol project (https://www.voxpol.eu) was a collaboration between researchers and affiliated participants from several different EU member states, including Ireland and the Netherlands. The national regulations for privacy protection were weakest in Ireland and strongest in the Netherlands. In this instance, it was a straightforward call: the privacy protections and data storage requirements of the project followed the Dutch guidelines and requirements, thereby more than meeting the other national requirements.
Other contrasts – especially across the deeper divides between North/South and East/West – present sharper challenges. On both a general and a practical level, the AoIR 2002 guidelines endorsed an approach based on ethical pluralism. Ethical pluralism argues, on a general level, that the often striking differences between cultural beliefs, norms, and practices – e.g., regarding the nature of the individual and privacy – can often be harmonized by discerning how these differences may reflect different interpretations, understandings, and/or applications of shared norms and beliefs. The Thai Buddhist philosopher Soraj Hongladarom has been a prominent exponent of this view, both as a contributor to the AoIR 2002 guidelines and in subsequent contributions. For example, Hongladarom argues that irreducible ethical differences between Confucians, who believe in the reality of the self, and Buddhists, who regard the self as a pernicious illusion, nonetheless allow for a primary agreement on a basic ethical norm, namely, that ". . . an individual person is respected and protected when she enters the online environment" (2017, p. 155). Hongladarom further argues that such a pluralistic approach will also work vis-à-vis sometimes even greater differences between Eastern and Western traditions. Again, in terms of IRE, there can be agreement on basic Human Subjects Protections such as protecting privacy and anonymity and, when this cannot be done, acquiring informed consent. Hongladarom uses an example of Thai research on mothers' blogs. Because Thai culture is more collective, researchers confronted with the originally Western requirement for individual informed consent may not fully understand its significance and/or how to ensure anonymity when needed – but some version of these requirements is understood and applied nevertheless.
In the long run, Hongladarom envisions the emergence of a global IRE made up of such shared norms, understood in a pluralistic fashion – i.e., allowing for differing interpretations and applications as these are diffracted through specific local traditions. To do so, however, will require a continued emphasis on the pluralism enunciated in AoIR 2002, including respecting and recognizing that a given national or cultural IRE is “. . . grown from the local source, meaning that the [ethical] vocabulary comes from the traditional and intellectual source of the culture in which a particular researcher is working” (2017, p. 161). We can be cautiously optimistic, as the Thai example suggests, that as more and more non-Western countries expand their Internet and social media research, they will by the same token contribute to the development of a more genuinely global and pluralistic IRE.
Ethics and Judgment (Phronēsis)

IRE is further characterized by a distinctive style of ethical reflection and resolution-seeking. Briefly, a prevailing conception (and, to some degree, practice) of "ethics" gives the impression of a "rule-book" approach. The frameworks of utilitarianism and deontology, especially, tend to approach a given ethical challenge in a strongly deductive, "top-down" manner: the aim is to apply general principles to a specific issue so as to conclude with a single, final, and (relatively) certain resolution. To be fair, these ethical approaches are more sophisticated and
nuanced – but they nonetheless give rise to what many researchers experience as "tick-box" ethics – e.g., the general requirements of the US Office for Human Research Protections' "Common Rule" (OHRP 2018), such as minimizing risks to subjects; ensuring subjects' safety; protecting subjects' privacy and the confidentiality of data; acquiring informed consent when necessary; and taking particular care "to protect the rights and welfare" of vulnerable subjects (children, prisoners, etc.: 2018, 11f.; see also the related Human Subject Regulations Decision Charts, https://www.hhs.gov/ohrp/regulations-and-policy/decision-charts/index.html). These approaches clearly have their role and place, but they suggest that ethics and ethical reflection are a one-off matter, a set of requirements to be met at the outset of a research project and then largely forgotten about. By contrast, the approach articulated in the 2002 AoIR guidelines emphasizes a dialogical, process-oriented, "bottom-up" approach – one that begins precisely with the ethical intuitions and experiences of the researchers themselves. This approach is enjoined by the ancient Aristotelian understanding of human beings – namely, that as social beings we are enculturated with ethical sensibilities from our inception and that we acquire further ethical insight via experience. Most importantly, we learn to exercise a particular virtue (a capacity or habit that must be acquired and practiced in order to be applied well) – namely, the central capacity of phronēsis. While often translated as "practical wisdom," phronēsis is more precisely the capacity for reflective judgment. Phronetic judgments, in contrast with more deductive approaches, are by definition pluralistic, uncertain, and open to further revision – especially if our initial judgment proves to be in error in some way, forcing us to correct that judgment.
One reason these judgments are multiple – i.e., two different persons, each meaning well, can make two different judgment calls when confronted with exactly the same ethical circumstance and fine-grained constraints – is that each of us brings to bear in these judgments a lifetime of experience that entails tacit and embodied knowledge. Our judgments are thus refracted through different experiences and insights, and so the phrase judgment call recognizes precisely that different but roughly equally legitimate judgments can be made. (For more on phronēsis, including its connections with virtue ethics, care ethics, and "cybernetics" as the science of self-correction, see Ess 2018, pp. 243–246.) A dialogical, process-oriented approach to ethics thus explicitly begins with and builds from the individual and collective wisdom of the communities of researchers (among others) confronting a given issue. More formal philosophical elements – beginning with broad ethical frameworks (deontology, utilitarianism, virtue ethics, feminist ethics, and so on) – can then be brought to bear for clarifying initial arguments and judgments, oftentimes helping to articulate these more explicitly and coherently. Both the AoIR guidelines – i.e., from 1.0 (2002) through 2.0 (Markham and Buchanan 2012) and 3.0 (2020) – and the Norwegian guidelines (NESH [2018] 2019) have pursued these approaches. A particular version of this sort of approach is the art of casuistics – the use of "clear-cut matters (paradigmatic cases)" as guides for applying general principles to the "sticky problems" of a novel case: a casuistic analysis thus again calls phronēsis into play (McKee and Porter 2009, 12f.). These process- and dialogically oriented
approaches – especially as they rest on phronēsis as a form of judgment always open to self-correction – thereby remain open to further development. Indeed, the NESH guidelines make explicit that the function of these documents is ". . . to aid in the development of sound judgement and reflection on issues pertaining to research ethics, resolutions of ethical dilemmas, and promotion of good research practices" (2019, p. 4: emphasis added, CME). Such an approach is manifestly demanded in Internet and social media research, as these domains are driven by constant, sometimes explosive technological change. Hence, especially following the rise of Big Data collection and research techniques over the past few years, NESH has explicitly identified its most recent guidelines as "a living document," i.e., one that must remain fully open to further developments through frequent updating and revision (NESH [2018] 2019, p. 2; cf. AoIR 2020, p. 2).
Asking the Right Questions

This further means that while these guides offer foundational orientation, examples, and resources drawn from both the philosophical and research literatures, both emphasize the importance of asking the right questions as a way of sparking and developing the requisite ethical reflection. As a start, the 2002 AoIR document offered the following questions as primary:

A. Venue/environment – expectations – authors/subjects – informed consent
Where does the inter/action, communication, etc. under study take place?
What ethical expectations are established by the venue?
Who are the subjects? Posters/authors/creators of the material and/or inter/actions under study?
Informed consent: specific considerations

B. Initial ethical and legal considerations
How far do extant legal requirements and ethical guidelines in your discipline "cover" the research?
How far do extant legal requirements and ethical guidelines in the countries implicated in the research apply?
What are the initial ethical expectations/assumptions of the authors/subjects being studied?
What ethically significant risks does the research entail for the subject(s)?
What benefits might be gained from the research?
What are the ethical traditions of researchers and subjects' culture and country? (AoIR 2002, p. 1)
This list of questions was dramatically expanded in AoIR 2012 (Markham and Buchanan 2012) to over 50 questions. These expansions sought to address both variations of issues familiar from AoIR 2002 and new issues that emerged from the rise of Web 2.0 (e.g., social networking sites (SNSs) and user-generated content sites such as YouTube), the mobility revolution (i.e., as Internet access shifted more and more to mobile devices), and early Big Data developments (see Markham and Buchanan 2012, pp. 8–11). First of all, these developments greatly complicated
fundamental matters of privacy – e.g., as the rise of SNSs was marked by more and more people freely sharing, in such (quasi-)public contexts, information that was traditionally considered more personal and thus private (cf. Lange 2007; Lomborg 2012). This in turn heightened the ethical importance of attending to the expectations of participants. In some SNSs, despite their (quasi-)public context, participants expected that others – including researchers – would respect their contributions and exchanges as private: such expectations then trigger the ethical demands of protecting these materials through anonymization and/or requiring informed consent for their use in research and publication (AoIR 2012, pp. 6f., 8; ftn. 12, p. 14). Attending to participants' expectations in these ways was underscored and amplified by the categorical question, "How are we recognizing the autonomy of others and acknowledging that they are of equal worth to ourselves and should be treated so?" (AoIR 2012, p. 11). The eight questions under this category aim to help researchers carefully consider a number of complexities surrounding informed consent as the primary way of meeting the (deontological) norms of autonomy and equality. These include such basic questions as how to "ensure that participants are truly informed?" as well as questions addressing the complexities of online venues as intrinsically relational: is informed consent to be sought "just from individuals or from communities and online system administrators?" (ibid.). A last example illustrates the increasing attention to Big Data: the categorical question "How are data being managed, stored, and represented?" is elaborated through six questions, beginning with the importance of securing, storing, and managing "potentially sensitive data" (AoIR 2012, p. 9). Additional questions address both the benefits and the risks of attempting to de-identify data in the name of protecting anonymity, privacy, and confidentiality.
Perhaps most presciently, the final question asks researchers to consider how future technological developments – specifically, “automated textual analysis or facial recognition software” – might compromise such protections (ibid., 9f.). This question thus requires researchers to take on board likely future technological developments in meeting basic ethical norms and requirements – technologies that, as subsequent developments only reiterate, make privacy and related protections increasingly difficult. This question-oriented approach is further instantiated in the Data Ethics Decision Aid (DEDA) developed at the University of Utrecht Data School by aline franzke and Mirko S. Schaefer (for the interactive version, see https://survey2.hum.uu.nl/index.php/778777). Focusing especially on the ethical and legal dimensions of Big Data analysis, this questionnaire presents over 230 questions to the interested researcher. As the expanded attention to privacy, informed consent, and data security in AoIR 2012 illustrates, the growing number of questions reflects an evolving sophistication and development in IRE. But at the same time, these become more and more daunting, especially for researchers, ethical review board members, and others, as they must come to grips with these primary issues, along with others addressed in the remaining questions. Accordingly, the most recent iterations of AoIR and NESH guidelines introduce these considerations with a smaller set of categories. AoIR
C. M. Ess
2020 recommends a “General Structure for Analysis” that begins with reviewing earlier guidelines as well as relevant legal aspects, e.g., the OHRP for US-based researchers (2018), GDPR issues for EU- and EEC-based researchers, as well as the Terms of Service defining a specific online platform (AoIR 2020, 13ff.). (Whether or not ToS are in fact legally binding – e.g., so that it would be both a violation of a given ToS and thereby illegal to create fake profiles for the sake of research – is a particularly difficult and currently gray area.) Cultural dimensions must also be considered, especially as our research frequently implicates more than one national/cultural group of persons and thereby diverse traditions and expectations. The good news here is that there is a growing literature on research traditions and case studies beyond the borders of the otherwise prevailing US-European/Anglophone literatures (e.g., Mukherjee 2017). In particular, we have seen how Soraj Hongladarom addresses contrasts between foundational Confucian and Buddhist assumptions regarding the self (as either real or an “ego delusion,” respectively) and ethical frameworks, in order to argue for a pluralism that nonetheless conjoins these two worldviews through a shared agreement that individual persons deserve basic human respect (also a foundational Kantian/deontological norm) and protection in online environments (2017, p. 155). Similarly, Hongladarom further argues for a pluralism between Western and Eastern traditions regarding the primary requirement of informed consent as applied within the Thai context, as more relational and collective, vs. Western IRE emphases on the individual (with the exception of Norway and AoIR 2012, as seen above). Again, a more group-oriented approach to informed consent, as well as the basic norm of ensuring anonymity when needed, is hence required in the Thai context. 
These differences between Western and Eastern approaches thereby work as diverse applications and interpretations of shared basic requirements and norms – i.e., a pluralism (2017, p. 161). Next, considering who the involved subjects are is central to addressing primary duties to avoid harm (AoIR 2020, p. 12). The ethical frameworks and concepts to be brought into play need to be made explicit and precise – e.g., basic matters of anonymity, accountability, confidentiality, and informed consent and how these may be approached by way of deontology, utilitarian ethics, as well as feminist approaches to Big Data research ethics (see franzke et al. 2020). In addition, the safety of researchers themselves has become an increasingly important area of ethical consideration, as exemplified in the “#Gamergate” phenomenon. Starting in 2014, primarily women and minority “game developers, journalists, and critics” of what they identified as a toxic masculinity pervading much of gamer culture were aggressively attacked and “doxed”: personal information, including home addresses and phone numbers, was published online, accompanied by campaigns of rape and death threats (Massanari 2017, 333f.; cf. Rambukkana 2019). Accordingly, AoIR 2020 includes a number of resources and guidelines for how researchers may protect their own identities and broader safety when engaging in controversial research. Lastly, we divide questions concerning data acquisition, analysis, storage, and dissemination into two categories, “definitions, understandings of what data is/represents,” and “issues and procedures in managing, storing, and destroying data”
16
Internet Research Ethics and Social Media
(AoIR 2020, 17ff.; cf. NESH [2018] 2019, 3f.; Macnish, Privacy in Research Ethics, this volume).
Primary Elements of IRE

Ethics Is Method: Method Is Ethics (Markham 2006)

Many social science methodologies are rooted in nineteenth-century positivist assumptions that maintain a clear divide between the “objective” – namely, natural and social sciences as taking especially quantitative approaches aimed at discerning universal scientific laws (ideally expressed in mathematical form) – and the “subjective,” i.e., claims to knowledge that, as rooted more in the qualitative and affective, appear to be more individual, arbitrary, and thus relative to the individual subject. These assumptions entail beliefs in natural and social science as “value-neutral” vs. ostensibly subjective domains, including ethics. These assumptions further undergird the starting-point belief that research methods (as objective, scientific, and so value-neutral or value-free) are clearly distinct from research ethics (as more qualitative, value-oriented, etc.). Such assumptions are simply no longer plausible, however useful they may be in some sort of heuristic fashion (Hjarvard 2017). Indeed, post-positivist approaches (most notably, Barad 2007) coupled with changing conceptions of selfhood and identity are resulting in profound shifts in the ethical frameworks and assumptions most directly relevant to Internet and social media research. Such post-positivist approaches are specifically instantiated in Annette Markham’s argument that methods and ethics are inextricably interwoven with each other (2006). Specifically, Markham shows that good methodological practice at once entails ongoing ethical reflection and resolution of dilemmas and challenges that will all but inevitably arise through the many stages and steps of a project. By the same token, such ethical reflection may well result in better research design, whether in a given stage of research and/or in the project overall.
Researchers are thus better researchers through the work of ethical reflection – in contrast with the positivist view that would keep ethics and research divorced.
Stages of Research and Informed Consent

A concrete way of applying Markham’s injunction is to more carefully delineate the stages of a research project and take up the ethical challenges and possible resolutions more specifically affiliated with a given stage. The AoIR 2020 document offers the following taxonomy:

Initial research design – including first considerations of potential ethical issues, e.g., in seeking grant funding. For example, research funding from public sources (national research councils, the EU, etc.) is frequently granted on the condition
that all data resulting from the project be made publicly available. It is not always clear how researchers can meet these conditions while also respecting potential ethical demands for protecting anonymity, confidentiality, and privacy that may also arise with an initial research design. By the same token, research funded by corporations, private foundations, and so on may come with strings attached that directly conflict with researchers’ default obligations to disseminate their results openly and transparently (Locatelli 2020).

First research processes – including acquiring data; these stages typically entail specific requirements for de-identifying data, securely storing data, and so on.

Dissemination – i.e., various ways of publicizing research findings and data; this typically includes conference presentations (including injunctions not to tweet or otherwise share sensitive information presented within relatively closed contexts) and publications. As noted above in conjunction with initial research design considerations, an increasingly pressing set of issues is further generated by requirements by national and international funding bodies to make research data openly available.

Close of the project – including destruction of research data and related materials (AoIR 2020, p. 9).

This taxonomy specifically helps us address a prominent ethical challenge in the era of Big Data research – namely, informed consent. Broadly, carefully informing research subjects of possible risks of their involvement with a given research project, coupled with the steps taken within the project to ensure their privacy, anonymity, and confidentiality, is a default requirement in Human Subjects research.
We have also seen how informed consent is a primary theme and issue both in the development of Western-based IRE, including IRE 1.0 and IRE 2.0, and in emerging non-Western IRE literatures (Hongladarom 2017). At the same time, there are important and notable exceptions to this requirement, including research which requires deception of the subjects in order to test a hypothesis that requires the subjects to be unaware of the genuine research aims. For example, in medical research exploring whether or not a nonmedical substance may nonetheless result in demonstrable benefits because subjects (patients) believe it will do so, the placebo effect can only be tested by not telling the subjects (patients) that they are being treated with a neutral substance such as a sugar pill. Similar exceptions can be argued, for example, when researching “hostile authors/subjects,” e.g., participants in “alt-right” message exchanges – i.e., subjects who, should they discover that such research were being undertaken on them, might very well retaliate against not only the researchers themselves but also their families, close friends, and so on – as the example of #Gamergate illustrates (Rambukkana 2019, 316f.; cf. Massanari 2017). Moreover, the scope of “Big Data” research projects – projects that entail “scraping” or collecting, e.g., social media profile data of tens, if not hundreds of thousands, of users – is often argued to exempt them from informed consent requirements, because collecting consent would be manifestly impractical, if not impossible (e.g., Rensfeldt et al. 2019, 201f.). In utilitarian terms, collecting informed consent would exact an unacceptable cost on the researchers. And, arguably, insofar as such
data is “scrubbed” (as best as possible) of any personally identifiable information in projects that seek to discern more quantitative patterns of groups, in contrast with more qualitative elements of more individual persons, then it would seem that an exception could be made here to the requirement for informed consent as well. On the other hand, a central reality of Big Data is that, despite our best efforts to scrub data, subsequent reidentification of individuals becomes easier and easier – thanks to both evolving technological capabilities (ever faster computational devices and techniques, AI-driven analyses, etc.) and the exponential growth of available raw data, including hacked and stolen data (Buchanan and Zimmer 2018, 38f.). As we will see shortly in the discussion of hacked data and in the final section on Current Debate, such reidentification and repurposing of data thus pose direct threats to individual privacy, anonymity, confidentiality, and so on – i.e., precisely the basic ethical norms informed consent is designed to protect. Such dangers were dramatically demonstrated in the so-called “creepy Facebook study” in 2014, which made clear that even within such Big Data research there remain critical (deontological) requirements for respecting individuals’ autonomy – beginning with protections against manipulation of their behavior without their consent. In this study, the researchers manipulated the emotional content of nearly 700,000 user feeds in slightly more positive or slightly more negative directions. These emotions were “contagious”: their analysis showed that readers of a given feed would likely begin to shift in more positive or more negative directions, respectively (Kramer et al. 2014). However interesting the results, the study evoked an immediate and extensive outcry as it violated a range of research ethics norms and requirements, beginning with informed consent to such manipulation (Hoffman and Jonas 2017, 8f.).
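The mechanics of such reidentification can be illustrated with a deliberately simplified sketch. All records, names, and fields below are invented for illustration and are not drawn from any of the studies cited: a “scrubbed” dataset is linked back to named individuals simply by joining on the quasi-identifiers that remain.

```python
# Toy linkage attack: all records, names, and fields are hypothetical.
# Direct identifiers have been removed from the "scrubbed" research data,
# but quasi-identifiers (zip code, birth year) remain.
scrubbed_posts = [
    {"zip": "53211", "birth_year": 1987, "post": "quoted text A"},
    {"zip": "10001", "birth_year": 1990, "post": "quoted text B"},
]

# A hypothetical auxiliary source, e.g., a public profile listing.
auxiliary = [
    {"name": "A. Example", "zip": "53211", "birth_year": 1987},
    {"name": "B. Example", "zip": "90210", "birth_year": 1975},
]

def reidentify(scrubbed, aux):
    """Link scrubbed records to names by matching every shared quasi-identifier."""
    matches = []
    for record in scrubbed:
        for person in aux:
            shared = (set(record) & set(person)) - {"name"}
            if shared and all(record[k] == person[k] for k in shared):
                matches.append((person["name"], record["post"]))
    return matches

print(reidentify(scrubbed_posts, auxiliary))
# → [('A. Example', 'quoted text A')]
```

The first “anonymous” record links straight back to a named person, which is exactly the dynamic Buchanan and Zimmer document with zip codes, search queries, and movie ratings; larger auxiliary datasets and faster computation only make such joins easier.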
And the more we draw on deontological and/or care ethics frameworks, the more important it becomes to respect the requirements for informed consent and the rights to privacy, etc. that these are designed to protect. As well, if our research methods include, e.g., some form of content analysis that requires exact quotes from individual subjects as data and/or illustration, the requirement for informed consent remains unavoidable. Most simply, in an era of seemingly eternal data storage and increasingly powerful search engines, a simple string search of such a quote would make finding its author a quick and trivial matter. The same is increasingly true for searches of images and photos, e.g., as used in a social media profile, a tweet, etc. But again, acquiring informed consent in the early stages of a research project – specifically, the first research processes – remains manifestly all but intractable. At the same time, this research taxonomy points to a relatively simple resolution to the problem, namely, informed consent can be reasonably acquired during the dissemination phase of the project. If direct quotes are needed, for instance, for the sake of illustrating and documenting a particular point in a publication, the number of such quotes needed is typically on the order of a few tens, and acquiring consent is hence reasonably tractable (van Schie et al. 2016; Ess and Hård af Segerstad 2019). To be sure, there remain difficult challenges in the era of Big Data and social media research. To begin with, any number of prominent hacks, such as the Patreon hack in 2015, thereby make available to researchers “private messages, e-mail addresses, passwords,” and so on (Poor 2017, 278). It is tempting to argue that the hackers
have already made the data public, and so there is no further burden on the researcher to protect individual subjects’ privacy and so on: they are merely repurposing data – gathered under admittedly different conditions (beginning with expectations of privacy, etc.), to be sure, but now made public and so available for research purposes. This was the case with two Danish MA researchers who posted the personal data of ca. 70,000 users stolen from the OkCupid dating site. Most ethicists, however, recognize that for researchers to make use of this data, at least if it includes republication of specific user information, only increases the real and potential harms to the individuals whose data was hacked (Zimmer 2016). And it appears that at least some data scientists increasingly agree. Bendert Zevenbergen and his colleagues, for example, highlight how unforeseen but “potentially disastrous consequences” can often follow from research projects with inadequate attention to the fact that we are dealing not just with data but with real people (Zevenbergen et al. 2015, 20f.; cf. Poor 2017). Indeed, it is sometimes the case that data scientists, network engineers, and their colleagues in related technical fields may be more acutely aware of such consequences and thereby call for greater ethical attention, including, e.g., the use of virtue ethics (Jackson et al. 2015) and care ethics (Rambukkana 2019).
Current Debate

One of the most central issues in contemporary discussion of research ethics, most especially in the context of social media, regards privacy and privacy protections. As we have seen, protecting privacy – along with confidentiality and anonymity – is core to Human Subjects Protections anchored in (deontological) understandings of the human being as a freedom or autonomy. At the same time, however, protecting privacy online, especially in the contexts of social media and Big Data, presents increasingly complex ethical challenges, and this on two levels. First, we have seen that the emergence of increasingly extensive databases makes identification of individuals increasingly easy – even when efforts are made to hide their identity via anonymization techniques. Buchanan and Zimmer review examples from 2002 forward of researchers being able to reidentify individuals using such innocuous data fields as zip codes, web search inquiries, and movie ratings (2018, p. 21). The emergence of Big Data approaches, including algorithmic processing of very large data sets, from ca. 2012 forward (Buchanan and Zimmer 2018, 38f.) only amplifies the possibilities of such reidentification. Indeed, it has become a commonplace among computer scientists to insist that genuine anonymity in such databases is no longer technically possible – only “de-identification,” i.e., best practices and techniques for making such reidentification as difficult as possible (cf. Buchanan and Zimmer 2018, p. 10). At the same time, as we have seen, at least the EU data privacy protections have become increasingly robust (GDPR 2018). Second, we have seen that within Anglophone countries such as the United States, conceptions of privacy have predominantly presumed and emphasized the individual and thereby individual privacy protections. By contrast, the NESH guidelines explicitly recognize the importance of our relationality – our sense of identity as defined by
the multiple relationships we intertwine with – and thereby the requirement to protect not just the privacy of individuals but also of those close to them. Finally, the rise of social media and networked communication technologies has more broadly emphasized just such a relational self. In these contexts, Helen Nissenbaum’s relatively recent theory of privacy as contextual integrity offers a more accurate and relevant way of understanding privacy and thereby what researchers in these domains are ethically obliged to protect. Briefly, Nissenbaum (2010) foregrounds the role of relational contexts in defining what bits of information are appropriate to share or not to share. As a prime example, what is appropriately shared between patient and physician is often not appropriate to share further, as between physician and pharmaceutical representative (cf. Macnish, Privacy in Research Ethics, this volume, 4). This means that “traditional” foci on privacy protections in terms of specific sorts of information about individuals (such as “personally identifiable information” (PII) including, e.g., financial history, education, health information, employment or criminal history, Social Security Number, name, date and place of birth, and so on, as stressed in US policies – Buchanan and Zimmer 2018, 20f.) are, at best, incomplete. Rather than focusing on static bits of information, contextual integrity foregrounds the role of information within the context of specific relationships and, thereby, the importance of protecting information, access to which might be harmful not only to the individual but also to those implicated within a given relational context. What will be considered “private” may hence be both individually private and shared among a specific group – and this in turn can vary greatly, depending on the specific group.
For example, what may be considered appropriate for sharing within a group context constituted by a prominent blogger and her audience (Lomborg 2012) will be considerably less intimate than what participants in a closed Facebook group for grieving parents will share with one another (Hård af Segerstad et al. 2017). More broadly, these are but two examples of emerging conceptions of “group privacy” that stand as middle grounds between purely individual privacy and complete publicity (cf. Taylor et al. 2017). We can recognize – especially from perspectives of deontology and care – that researchers need to be respectful and protective of information that could be damaging or harmful to both individuals and their relationships. We can further attempt to interrogate and reflect upon these obligations by way of the sorts of questions offered up in AoIR 2012 and NESH ([2018] 2019). But the shift from static bits of information to relational selves engaged in fluid and dynamically changing relationships means all the more that researchers will be forced to rely on both individual and shared judgments – ideally as sharpened through critical reflection and extensive dialogue – in order to discern the best ethical responses to a given research context.
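The core shift contextual integrity introduces can be loosely paraphrased in code. The following sketch is our own toy formalization, not Nissenbaum’s: the contexts, roles, and information types are hypothetical, and real contextual norms are of course far richer than a lookup table. What it illustrates is that appropriateness attaches to the flow – who shares what with whom, in which context – rather than to the data field itself.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    """One information flow: who sends what to whom, within which context."""
    sender: str
    recipient: str
    info_type: str
    context: str

# Hypothetical contextual norms: the (sender, recipient, info-type, context)
# flows deemed appropriate. A fixed list of "private" fields appears nowhere.
NORMS = {
    ("patient", "physician", "symptoms", "healthcare"),
    ("blogger", "audience", "opinion", "public_blog"),
    ("parent", "group_member", "grief", "closed_support_group"),
}

def respects_contextual_integrity(flow: Flow) -> bool:
    """A flow is appropriate only if the governing context's norms permit it."""
    return (flow.sender, flow.recipient, flow.info_type, flow.context) in NORMS

# Sharing symptoms with one's physician fits the healthcare context...
ok = Flow("patient", "physician", "symptoms", "healthcare")
# ...but the same information flowing to a pharmaceutical rep violates it.
leak = Flow("patient", "pharma_rep", "symptoms", "healthcare")

print(respects_contextual_integrity(ok))    # True
print(respects_contextual_integrity(leak))  # False
```

Note that "symptoms" is neither private nor public per se: the identical information is appropriate in one flow and a violation in another, which is why static PII checklists are, at best, incomplete.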
Anticipated Outcomes/Horizon Scanning for Future Issues

The ongoing expansions and transformations of new technologies, new modes of communication, increasingly global interactions, and our very conceptions of selfhood mean that novel ethical challenges in Internet and social media research
arise on a more or less daily basis. Specifically, in the rapidly unfolding Internet of Things (IoT) and increasing diffusion of AI, entirely new constellations of increasingly fine-grained information about us will become available and thereby open up new ethical challenges – first of all, with regard to protecting privacy (cf. Macnish, Privacy in Research Ethics, this volume). The good news is that the track record of some 20+ years of ethics work in these domains largely demonstrates significant success in addressing and resolving these challenges, through some combination of more established sets of norms, guidelines, and best practices, coupled with the (increasing) fostering of judgment and often creative approaches to what otherwise may initially appear to be deadlocks (e.g., informed consent in an era of Big Data). Indeed, many researchers who have participated in the dialogues and developments of the AoIR guidelines increasingly endorse their use, not only because they help researchers gain requisite approval from ethics oversight boards but also because the emphasis on ethics as interwoven with method and all stages of the research process frequently leads to better research. At the same time, however, this ongoing proliferation of new technologies and communication possibilities means that researchers will increasingly face “ultimately highly singular and emergent ethical problematics” (Rambukkana 2019, p. 312). While diverse research communities and authorities such as AoIR and NESH attempt to keep up with these developments by way of an ongoing “living documents” approach, researchers will frequently find themselves confronted by novel ethical difficulties that extant guidelines and earlier ethical resolutions may not obviously address. It is precisely in these instances that reflective ethical judgment – phronēsis – must come into play.
Fortunately, phronēsis is not entirely individual: rather, it is deeply relational, as we experience when we talk through our specific ethical concerns with others and find ourselves, both individually and collectively, coming to new insights and ways of resolving these often novel problems. This means that researchers confronting ethical challenges not immediately addressed in extant guidelines and/or similar research examples that might provide guidance through a casuistic approach (cf. McKee and Porter 2009) need to be increasingly engaged in ethical dialogues with colleagues and the larger research communities, e.g., through relevant listservs such as the AoIR mailing list (aoir.org).
Cross-References

▶ Feminist Research Ethics
▶ Privacy in Research Ethics
References

Barad K (2007) Meeting the universe halfway: quantum physics and the entanglement of matter and meaning. Duke University Press, Durham
British Psychological Society (2017) Ethics guidelines for internet-mediated research. INF206/04.2017. Leicester. www.bps.org.uk/publications/policy-and-guidelines/research-guidelinespolicy-documents/researchguidelines-poli
Buchanan E, Ess C (2008) Internet research ethics: the field and its critical issues. In: Himma K, Tavani H (eds) The handbook of information and computer ethics. Wiley, New York, pp 273–292
Buchanan EA, Zimmer M (2018) Internet research ethics. In: Zalta EN (ed) The Stanford encyclopedia of philosophy. https://plato.stanford.edu/archives/win2018/entries/ethics-internet-research/
Ess C (2015) New selves, new research ethics? In: Ingierd H, Fossheim H (eds) Internet research ethics. Cappelen Damm, Oslo, pp 48–76
Ess C (2018) Ethics in HMC: recent developments and case studies. In: Guzman A (ed) Human-machine communication: rethinking communication, technology, and ourselves. Peter Lang, Berlin, pp 237–257
Ess C, Fossheim H (2013) Personal data: changing selves, changing privacy expectations. In: Hildebrandt M, O’Hara K, Waidner M (eds) Digital enlightenment forum yearbook 2013: the value of personal data. IOS Press, Amsterdam, pp 40–55
Ess C, Hård af Segerstad Y (2019) Everything old is new again: the ethics of digital inquiry and its design. In: Mäkitalo Å, Nicewonger TE, Elam M (eds) Designs for experimentation and inquiry: approaching learning and knowing in digital transformation. Routledge, London, pp 179–196
Ess C, The Association of Internet Researchers Ethics Working Committee (2002) Ethical decision-making and internet research: recommendations from the AoIR ethics working committee. https://aoir.org/reports/ethics.pdf
European Data Protection Supervisor (2018) Towards a digital ethics. Ethics Advisory Group. https://edps.europa.eu/sites/edp/files/publication/18-01-25_eag_report_en.pdf
Floridi L (2013) Distributed morality in an information society. Sci Eng Ethics 19:727. https://doi.org/10.1007/s11948-012-9413-4
Fotopoulou A (forthcoming) Understanding citizen data practices from a feminist perspective: embodiment and the ethics of care. In: Stephansen H, Trere E (eds) Citizen media and practice.
Taylor & Francis/Routledge, Oxford
Frankel MS, Siang S (1999) Ethical and legal aspects of human subjects research in cyberspace. A report of a workshop, June 10–11, 1999. American Association for the Advancement of Science, Washington, DC
Franzke A (2019) Feminist research ethics. In: franzke et al, Internet research: ethical guidelines 3.0, pp 28–37
franzke a, Bechmann A, Ess C, Zimmer M, The AoIR Ethics Working Group (2020) Internet research: ethical guidelines 3.0
GDPR (General Data Protection Regulation) Regulation (EU) 2016/679. Approved 27 April 2016, implemented 25 May 2018. http://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32016R0679
Gilligan C (1982) In a different voice: psychological theory and women’s development. Harvard University Press, Cambridge, MA
Goffman E (1959) The presentation of self in everyday life. Penguin Books, London
Hall GJ, Frederick D, Johns MD (2003) “NEED HELP ASAP!!!” A feminist communitarian approach to online research ethics. In: Johns M, Chen SL, Hall J (eds) Online social research: methods, issues, and ethics. Peter Lang, New York, pp 239–252
Hård af Segerstad Y, Kullenberg C, Kasperowski D, Howes C (2017) Studying closed communities on-line: digital methods and ethical considerations beyond informed consent and anonymity. In: Zimmer M, Kinder-Kurlanda K (eds) Internet research ethics for the social age: new challenges, cases, and contexts. Peter Lang, Berlin, pp 213–225
Hjarvard S (2017) Mediatization (critical theory approaches to media effects). In: International encyclopedia of media effects. Wiley. https://doi.org/10.1002/9781118783764.wbieme0107
Hoffman AL, Jonas A (2017) Recasting justice for internet and online industry research ethics. In: Zimmer M, Kinder-Kurlanda K (eds) Internet research ethics for the social age. Peter Lang, Berlin, pp 3–18
Hongladarom S (2017) Internet research ethics in a non-Western context.
In: Zimmer M, Kinder-Kurlanda K (eds) Internet research ethics for the social age: new challenges, cases, and contexts. Peter Lang, Berlin, pp 151–163
Jackson D, Aldrovandi C, Hayes P (2015) Ethical framework for a disaster management decision support system which harvests social media data on a large scale. In: Bellamine Ben Saoud N et al (eds) ISCRAM-med 2015, LNBIP 233, pp 167–180. https://doi.org/10.1007/978-3-319-24399-3_15
King S (1996) Researching internet communities: proposed ethical guidelines for the reporting of results. Inf Soc 12(2):119–128. https://doi.org/10.1080/713856145
Kramer A, Guillory J, Hancock J (2014) Experimental evidence of massive scale emotional contagion through social networks. Proc Natl Acad Sci U S A 111(24):8788–8790; first published June 2, 2014. https://doi.org/10.1073/pnas.1320040111
Lange P (2007) Publicly private and privately public: social networking on YouTube. J Comput-Mediat Commun 13(1), article 18. https://doi.org/10.1111/j.1083-6101.2007.00400.x
Leurs K (2017) Feminist data studies: using digital methods for ethical, reflexive and situated socio-cultural research. Fem Rev 115(1):130–154. https://doi.org/10.1057/s41305-017-0043-1
Locatelli E (2020) Corporate data: ethical considerations. In: franzke et al, Internet research: ethical guidelines 3.0, pp 45–54
Lomborg S (2012) Negotiating privacy through phatic communication: a case study of the blogging self. Philos Technol 25:415–434. https://doi.org/10.1007/s13347-011-0018-7
Luka ME, Milette M (2018) (Re)framing big data: activating situated knowledges and a feminist ethics of care in social media research. Soc Media Soc 4(2):1–10. https://doi.org/10.1177/2056305118768297
Lupton D (2018) How do data come to matter? Living and becoming with personal data. Big Data & Society (July–December 2018), pp 1–11. https://doi.org/10.1177/2053951718786314
Markham A (2006) Method as ethic, ethic as method. J Inf Ethics 15(2):37–54.
https://aoir.org/aoir_ethics_graphic_2016/
Markham A, Buchanan E (2012) Ethical decision-making and internet research: recommendations from the AoIR ethics working committee (Version 2.0). http://www.aoir.org/reports/ethics2.pdf
Massanari A (2017) #Gamergate and the fappening: how Reddit’s algorithm, governance, and culture support toxic technocultures. New Media Soc 19(3):329–346. https://doi.org/10.1177/1461444815608807
McKee H, Porter JE (2009) The ethics of internet research: a rhetorical, case-based process. Peter Lang, Berlin
McKee H, Porter J (2010) Rhetorica online: feminist research practices in cyberspace. In: Schell EE, Rawson KJ (eds) Rhetorica in motion: feminist rhetorical methods & methodologies. University of Pittsburgh Press, Pittsburgh, pp 152–170
Mukherjee I (2017) Case study of ethical and privacy concerns in a digital ethnography of South Asian blogs against intimate partner violence. In: Zimmer M, Kinder-Kurlanda K (eds) Internet research ethics for the social age. Peter Lang, Berlin, pp 203–212
NESH (The [Norwegian] National Committee for Research Ethics in the Social Sciences and the Humanities) ([2018] 2019) A guide to internet research ethics. NESH, Oslo. https://www.etikkom.no/en/ethical-guidelines-for-research/ethical-guidelines-for-internet-research/
Nissenbaum H (2010) Privacy in context: technology, policy, and the integrity of social life. Stanford University Press, Palo Alto
OHRP (Office for Human Research Protections) (2018) Subpart A of 45 CFR Part 46: basic HHS policy for protection of human subjects. https://www.hhs.gov/ohrp/sites/default/files/revised-common-rule-reg-text-unofficial-2018-requirements.pdf
Poor N (2017) The ethics of using hacked data: Patreon’s data hack and academic data standards. In: Zimmer M, Kinder-Kurlanda K (eds) Internet research ethics for the social age.
Peter Lang, Berlin, pp 278–280 Rambukkana N (2019) The politics of gray data: digital methods, intimate proximity, and research ethics for work on the “Alt-Right”. Qual Inq 25(3):312–323 Rensfeldt AB, Hillman T, Lantz-Andersson A, Lundin M, Peterson L (2019) A “Situated ethics” for researching teacher professionals’ emerging facebook group discussions. In: Mäkitalo A,
16
Internet Research Ethics and Social Media
303
Nicewonger TE, Elam M (eds) designs for experimentation and inquiry: approaching learning and knowing in digital transformation. Routledge, London, pp 197–213 Simon J (2015) Distributed epistemic responsibility in a hyperconnected era. In: Floridi L (ed) The onlife manifesto: being human in a hyperconnected era. Springer Open, London, pp 145–159 Skarstein V (n.d.) There shall be freedom of expression. https://www.bibalex.org/WSISALEX/3. There%20shall%20be%20freedom%20of%20expression%20by%20Vigdis%20Skarstein.doc Suomela T, Chee F, Berendt B, Rockwell G (2019) Applying an ethics of care to internet research: Gamergate and digital humanities. Digital Studies/Le champ numérique 9(1). https://www. digitalstudies.org/articles/10.16995/dscn.302/ Taylor C (1989) Sources of the self: the making of the modern identity. Harvard University Press, Cambridge Taylor L, Floridi L, van der Sloot B (eds) (2017) Group privacy: new challenges of data technologies. Springer, Dordrecht The National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research (1979) The Belmont Report: Ethical Principles and Guidelines for the Protection of Human Subjects of Research. https://www.hhs.gov/ohrp/regulations-and-policy/belmontreport/read-the-belmont-report/index.html The [Norwegian] National Committee for Research Ethics in the Social Sciences and the Humanities (NESH) (2006) Forskningsetiske retningslinjer for samfunnsvitenskap, humaniora, juss og teologi [Research ethics guidelines for social sciences, the humanities, law and theology] Tiidenberg K (2018) Research ethics, vulnerability, and trust on the Internet. In: Hunsinger J, Klastrup L, Allen M (eds) Second international handbook of Internet research. Springer, Dordrecht Tong R, Williams N (2018) Feminist ethics. In: Zalta EN (ed) The Stanford encyclopedia of philosophy. https://plato.stanford.edu/archives/win2018/entries/feminism-ethics/ Vallor S (2010) Social networking technology and the virtues. 
Ethics Inf Technol 12:157–170. https://doi.org/10.1007/s10676-009-9202-1 Vallor S (2016) Technology and the virtues: a philosophical guide to a future worth wanting. MIT Press, Cambridge, MA van Schie G, Westra I, Schäfer MT (2016) Get your hands dirty: emerging data practices as challenge for research integrity. In: Schäfer MT, van Ess K (eds) The datafied society: studying culture through data. Amsterdam University Press, Amsterdam, pp 183–200 Walstrom M (2004) Ethics and engagement in communication scholarship: analyzing public, online support groups as researcher/participant-experiencer. In: Buchanan E (ed) Readings in virtual research ethics: issues and controversies. Information Science, Hershey, pp 174–202 Westlund A (2009) Rethinking relational autonomy. Hypatia 24(4: Fall):26–49 Zevenbergen B, Mittelstadt B, Véliz C, Detweiler C, Cath C, Savulescu J, Whittaker M (2015) Philosophy meets internet engineering: Ethics in networked systems research. (GTC workshop outcomes paper). Oxford Internet Institute, University of Oxford. http://ensr.oii.ox.ac.uk/wpcontent/uploads/sites/41/2015/09/ENSR-Oxford-Workshop-report.pdf Zimmer M (2016) OKCupid study reveals the Perils of Big-Data science. Wired Opinion (May 14). https://www.wired.com/2016/05/okcupid-study-reveals-perils-big-data-science/
Research Ethics in Data: New Technologies, New Challenges
17
Caroline Gans Combe
Contents

A Preamble
Understanding the Current Data Ecosystem
  Data Quality
  Data Collection Processes
  Using Data
Current Ethical Challenges when Using Data in Research
  Challenges Posed by Data Ethics
  To New Situations: Renewed Imperatives
  Revising Data-Based Tools for Assessing the Validity of the Research
Conclusion
References
Abstract
Ninety percent of the data created in the entire history of humanity has been created in the last 2 years (2.5 quintillion bytes of data are generated every day, Marr 2018). In this context, the use of data in the field of research is currently undergoing a revolution. Indeed, while data has traditionally served as a marker of excellence (it is the data that makes it possible to reproduce and validate an experiment), its profusion now makes it highly volatile, which impacts its use by the research community. However, what is data? How are data points created, and moreover, what are the ethical issues involved in collecting, building (as in feature engineering), and using data in and for research? (In data science, feature engineering refers to all interventions (reformatting, processing, cleaning, enrichment, calibration) performed on raw data before they are taken into account by a learning algorithm.) There is abundant literature regarding the importance of data, collection methodologies, and the need to make collected datasets verifiable. However, the feasibility of such constraints in the current context of data affluence is rarely considered. Data per se is content but also a set of information about the data (e.g., when we talk about metadata). From a unique, circumscribed object, data has become multiple, abounding, and "Big." The accuracy of the data and the speed at which data are transmitted are equally important. Today, data is transported digitally, in dematerialized form, and is less and less tangible, now stored in the "Cloud." Overabundant content and fast-moving carriers have redefined the current understanding of data and its uses and created new ethical challenges, particularly as to how the structure and speed of the data profoundly determine its quality and reliability (Krippendorff 2008). This chapter addresses the ethical issues posed to research, on the one hand, by new data architectures (Ross 2003) and, on the other hand, by data as a tool for measuring the validity and integrity of a research process.

C. Gans Combe (*)
INSEEC U. Institut National d'Etudes Economiques et Commerciales, Paris, France
e-mail: [email protected]

© Springer Nature Switzerland AG 2020
R. Iphofen (ed.), Handbook of Research Ethics and Scientific Integrity, https://doi.org/10.1007/978-3-030-16759-2_13

Keywords
Big data · Cloud · Urbanism · Data architecture · Data ethics · Data protection · Metadata · Data safety · Velocity · Trust in data · Net neutrality
A Preamble

While 90% of the data created in the entire history of humanity has been created in the last 2 years (with 2.5 quintillion bytes of data generated every day, Marr 2018), the use of data in the field of research is currently undergoing a revolution. Indeed, traditionally used as a marker of excellence (it is the data that makes it possible to reproduce an experiment and therefore validate it), its overabundance makes it polymorphic, eccentric in the mathematical sense of the term, and above all highly fluctuating, which impacts its use by the research community. Prior to addressing the impact of data overabundance on the research community's use of data, we will focus on understanding the current data ecosystem and why it has changed fundamentally since the seventeenth century, when the prerequisites for good research design practices were established. Indeed, from the establishment of what is still known today as the "scientific method" until its theorization by Popper in the 1930s, data has always been considered the Gordian knot of research. The literature abounds on the importance and meaning of data for the research world (Onwuegbuzie and Leech 2005), but no one has defined the importance of this tool better than W. Edwards Deming in his famous exclamation, "In God we trust, all others... bring data" (quoted by Friedman et al. 2001).
Understanding the Current Data Ecosystem

To define the notion of data would take up an entire chapter in itself, as approaches to the concept are many and varied and raise questions about the difference between data and knowledge, data and cognizance, and therefore,
incidentally, what data we are talking about when we use this term in the field of research (Zins 2007). While we will keep these questions in mind throughout this analysis, it is not our objective to examine them here. In this respect, we will consider "data" as a set of facts of differing quality that are collected and identified according to different methodologies for different purposes (adapted from Berti 1997). As such, to ensure the proper maintenance of data, it is necessary to be able to evaluate its quality, the methodologies of collection and identification, and, finally, its purposes or uses. (There are always two aspects to data quality improvement. Data cleansing is the one-off process of tackling the errors within the database, ensuring retrospective anomalies are automatically located and removed. Another term, data maintenance, describes ongoing correction and verification: the process of continual improvement and regular checks.) All this takes place in a context where data, if not virtually then at least technically, becomes increasingly voluminous, because the creation of digitally based information is nowadays measured by weight, in bytes (i.e., a set of binary digits, or bits, usually eight, operated on as a unit). There is no consensus unit of measurement or classification for data based on its value or validity, only its storage weight in a given information system (Fisher et al. 2007).
Data Quality

Data quality implies that a given set of variables is adequate for exploitation. As such, the concept refers to the overall utility of a dataset (or sets) as a function of its ability to be processed and analyzed. Data quality is assessed against different categories of criteria, whose evaluation enables the detection of errors.
Two Visions

The quality of the data is approached from two significant angles: a descriptive vision and a technical approach. Proponents of the first point out that the validity of data is linked to the quality of what accompanies and describes it, its metadata (Pipino et al. 2002), while the second tends to insist on the need for exemplary technological processing (Maydanchik 2007), focusing on data processing protocols rather than on the information the data provides.

A Third Approach

These two approaches may appear Manichean in the sense that they are exclusive and do not consider, each along its respective path, the multiplication of sources and tools. Berti's more operational vision (1999) integrates multiple information sources and the notion of relative data quality when several data describe the same entity but have contradictory values (known as "homologous data"). The author's proposal is not to be exclusive in data collection but to integrate contradictory data so as to have a broad vision of the situation that the data is supposed to illuminate or represent. However, the use of conflicting data, though it is beginning to gain acceptance in research (Woodside 2016), may prejudice a study, mainly because of the impact of this type of use on the readability of a result. Indeed, given their low recurrence, highlighting divergent contents impacts the possibility of
publicizing works (Homewood 2004). Incidentally, it is therefore not surprising to see, in the case of data reporting dissonant information, some researchers preferring to reduce the scope of their research in the hope that these conflicting echoes will disappear. But does doing so really constitute fraud? These issues will be returned to later. Berti's recommendation makes it possible to consider heterogeneous systems, which is precisely what the data ecosystem is nowadays (Hernández and Stolfo 1998). In addition, the risks of using data of insufficient quality can be mitigated by the implementation of valuation algorithms that take into consideration the different types of information, homologous data and their featured metadata, to dynamically recommend results with the best quality and appropriateness.
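Such a valuation can be illustrated with a minimal Python sketch. The records, field names, and equal weighting below are invented for illustration (they are not taken from Berti's work): each homologous record is scored by the completeness of its metadata and the recurrence of its value among the contradictory records, and all records are ranked rather than discarded, so divergent values stay visible.

```python
from collections import Counter

def score(record, all_values):
    """Score a homologous record by metadata completeness and value recurrence."""
    meta = record["metadata"]
    completeness = sum(1 for v in meta.values() if v) / max(len(meta), 1)
    recurrence = Counter(all_values)[record["value"]] / len(all_values)
    return 0.5 * completeness + 0.5 * recurrence  # illustrative equal weights

def recommend(records):
    """Rank contradictory records describing the same entity instead of
    discarding any, so divergent values remain visible to the researcher."""
    values = [r["value"] for r in records]
    return sorted(records, key=lambda r: score(r, values), reverse=True)

# Three sources report the population of the same city; two agree.
records = [
    {"source": "census",   "value": 215_000, "metadata": {"date": "2019", "method": "survey"}},
    {"source": "registry", "value": 215_000, "metadata": {"date": "2020", "method": ""}},
    {"source": "blog",     "value": 180_000, "metadata": {"date": "", "method": ""}},
]
ranked = recommend(records)
print([r["source"] for r in ranked])  # census ranks first: complete metadata, recurring value
```

The point of the design is Berti's: the dissonant "blog" value is ranked last, not deleted, so the broad picture is preserved.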
The Issue of Data Appropriateness and Research Objectivity

Data appropriateness is another area where research ethics and objectivity conflict. Indeed, one of the criteria recognized by data actors as fundamental in determining the quality of data is the fact that it is "appropriate" for a specific user or use (Govindarajan 1984). However, what does it mean for data to be "appropriate" to an expected result? This could be seen as an attack on the objectivity of the results, on the neutrality that the researcher must maintain toward observations, and on non-inference or non-interference in the representation of the results. Could this also call into question the appropriateness of a dataset for conducting research? Once again, good practices in the sector are silent on these subjects, although it seems essential, for the observed datasets to be considered technically of high quality, that what is "appropriate" should be defined. In this context, which raises more questions than it answers, we can conclude from the literature analysis above that just because data contradict the mainstream does not mean that they are false and, above all, that measuring the quality of the data implies considering the data, its context, its description, and the process of which it is part. It also means that the question of appropriateness must be resolved, and potential conflicts of interest related to the interpretation of the data addressed. In any case, quality data is data of which the user knows all the facets, but proof of such in-depth understanding is rare because it consumes resources (Pedrycz and Chen 2014). A demonstration of this type would involve going much further than the usual sourcing: documenting the metadata and the cleaning algorithms that make the data usable, information that is not necessarily accessible at first sight.
Data Collection Processes

One of the typical elements of data knowledge is called "source" or "origin." It should also be remembered, for the sake of form, that just because a dataset is accessible online does not mean that it is freely usable, legitimate, or scientifically accurate. As simple as it is to describe a collection process involving one-to-one relationships (surveys, etc.), the dematerialization of sources and the multiplication of points of access to data make this process unpredictable.
A Detour

Here, a small statistical detour is required. In each minute of every day, the following happens on the Internet (IBM Marketing Cloud Study 2018):

– Social media gain 840 new users.
– 455,000 Tweets are produced.
– Users upload 400 hours of new YouTube content and watch 4,146,600 videos.
– Instagram users upload 46,740 posts.
– 3 million Facebook posts are produced, while 510,000 comments are posted, 293,000 statuses are updated, 136,000 photos are uploaded, and 4 million "Likes" are logged.
– 3,607,080 Google searches are conducted worldwide.
– 15,220,700 text-based messages are sent.

This is without counting the data collected by various governments as part of their open data policies, such as those promoted by the European Union in particular (Open Government Data & the PSI Directive). In short, the volume of data available is massive, to say the least, but not much is really known about collection methodologies. Although the various institutional statistical services document these extensively, the same does not always apply to automated and opaque collections. Considered trade secrets, the algorithms of GAFTA (Google, Amazon, Facebook, Twitter, and Apple) and of the other tech giants are not fully public, even if their updates (and their impacts) are. Thus, on August 1, 2018, a massive upgrade of Google's algorithm had contrasting effects on search results in the medical sector. This update was intended to increase the sensitivity of the algorithm to E-A-T (expertise, authoritativeness, and trust) signals, but it ultimately reduced the visibility of much health-related content and related research at the time, Google being the "most used [search engine] for problem-specific information seeking" (Jamali and Asadi 2010). Although both researchers expressed caution about their results due to the perimeter of their study field (physics, mathematics), a lack of transparency about data collection methodologies persists.
The Impact of Net Neutrality on Research

The portals and other sources/tools used by researchers to obtain the data necessary for their work take only limited account of these updates and often do not inform researchers of their occurrence (Lazer et al. 2014), or, for that matter, of technical information: it took severe errors related to the use of the Excel spreadsheet before a serious slowdown in its use as a database was seen (Zeeberg et al. 2004). In this context, the recent repeal by the USA of net neutrality has only affected, if not moved, professionals in the field. On December 14, 2017, the US Federal Communications Commission voted to repeal the net neutrality regulations. (There is indeed a motion underway in both Houses to reverse this FCC ruling, but in the meantime, net neutrality is still
suspended in the USA. https://techcrunch.com/2018/05/16/senate-disapproves-fccs-net-neutrality-rollback-under-congressional-review-act/?guccounter=1) What was repealed concerns the prohibition of (1) blocking, that is, discrimination by ISPs against lawful content by blocking websites or apps (e.g., barring a nonpaying website); (2) throttling, which refers to ISPs slowing down the transmission of data based on the nature of the (lawful) content; and (3) paid prioritization, which refers to speed discrimination based on premium payments by companies or individuals. This affects "live" research because (1) due to major search engines' use of the BBR (bottleneck bandwidth and round-trip propagation time) algorithm (BBR is a model-based congestion control algorithm designed to avoid data being blocked at certain network nodes due to a sudden influx of data; https://tools.ietf.org/id/draft-cardwell-iccrg-bbr-congestion-control-00.html), speed plays an essential role in search result disclosure, and (2) paid prioritization leads to irrelevant results being brought to the top of a search result list. In the end, this raises serious integrity and ethical issues and questions what researchers, in good faith and based on the ranking of results, will consider convincing data, whereas its position is linked not to the quality of its content but to the fact that it has been purchased. The implications of poorly structured or lobby-influenced research being brought to the forefront and considered the most adequate by search engines are numerous, as the Cambridge Analytica scandal evidenced (Lapaire 2018). The use of search clusters, which pool different sources and thus do not rely on a single algorithm to deliver a result (such as Carrot2), or the use of the European Qwant engine, which is subject to net neutrality rules, may avoid this potential bias.
Indeed, even if net neutrality has been suspended in the USA, it is the cornerstone of the European network and data legal framework. Failing to address these exogenous influences could have significant ethical impacts on the results of data collection and repercussions on the proper conduct of research, especially when social media and digital tools are used as a vital source of planned observations and analysis.
Being Aware of the Data Ecosystem

When using data, it is therefore necessary to describe how it was collected and from which ecosystem. Also, sources should be varied so as to cross-reference the data, recurrence being the best proof of the validity of a dataset. Finally, it seems critical to focus on the legitimacy of the data collected. Legitimacy is understood here in the sense of the explorability of the data, that is, its degree of accessibility (Bruening et al. 2013). This questioning of the data is typical and will be revisited later in this chapter in the context of data usage. The data-access typology can be broken down into four distinct categories:

(1) Legitimate data (lawfully and adequately collected) whose use is legitimate, i.e., data that does not suffer from conceptual defects as to its origin (source: everyone knows where it comes from; structure: everyone knows its architecture; etc.) and the use of which is justified (this is the case for data used in most research projects).
(2) Legitimate data which may suffer from misuse or illegitimate use: this is the case, for example, when sets are aggregated without the prior authorization of data subjects, or when the use made of the data goes beyond what the data subject has consented to. This is called "mission creep" and is, unfortunately, a more common practice than is believed (Mariner 2007).

(3) Illegitimate data: this is data obtained by indirect or nontransparent means, but whose use may prove legitimate. A prime example is that of the "Panama Papers," an investigation that resulted from data theft but served to denounce the extent of tax fraud worldwide. This poses a real ethical dilemma for the protection and use of personal data. In the Panama Papers case, the data was not obtained directly by the International Consortium of Investigative Journalists (ICIJ) themselves but was provided by whistle-blowers whose identity has been protected for obvious security reasons. Nonetheless, the publication and use of the data complied with international law and rules with a view to reporting unethical or illegal behavior, and were therefore entirely legitimate (Sartin 2006). In addition, some data must be collected illegitimately because they are at the heart of the research (Holt and Smirnova 2014). This puts researchers at risk, however, and care must be taken when using this type of information, the verification of which can be hazardous. Moreover, if the contextual information has necessarily been eliminated, the qualification of the data can present real challenges.

(4) Data collected illegitimately/unlawfully and used illegitimately/illegally: for instance, data compiled by robots through seemingly legitimate processes (e.g., collection of the complete environment of the data, IP address, etc., to secure and qualify the data) but which, if processed without proper care, can result in serious misconduct, including election rigging (Arbes 2018).
Identifying and Cleaning the Data: The Metadata and More

Data, when collected, can be homonymous, duplicated, etc. As such, and to avoid confusion and errors, it is essential to consider the data in its information environment. Qualifying the data is done through metadata, which includes information about the data such as the nature of the set, its author, the dates it was created and modified, its size, etc., making it easier to locate a specific dataset. Metadata is used for all sorts of data formats, including documents, images, videos, spreadsheets, and web pages. It is generated either manually or by automated information processing. This approach offers standardized information on each dataset, documenting the context of the data's creation and thus allowing for the management of homonymies, the identification of duplicates, etc.; in short, for cleaning the data source as much as possible to make it reliable (Rahm and Do 2000). The need for a standard underlying the data, designating the data environment in addition to the data itself, is all the more fundamental as it provides an objective reading of the data, if not a filter. In this context, the metadata is just as relevant as the fact itself, as it qualifies it. It is therefore essential, in a data collection process, to collect the data as well as the associated metadata, as these are critical to formal
authentication (Mihaila et al. 2000), which is especially crucial in the context of Big Data (De Mauro et al. 2015).
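As a minimal illustration of why metadata matters during cleaning (the field names and records below are hypothetical), provenance metadata can be used to separate exact duplicates, which should be dropped, from homonymous values with different origins, which should be kept apart:

```python
def clean(records):
    """Deduplicate records using (value, provenance metadata) as identity,
    so that homonymous values from different sources are NOT merged."""
    seen = set()
    cleaned = []
    for rec in records:
        key = (rec["value"], rec["metadata"]["author"], rec["metadata"]["created"])
        if key not in seen:  # only an exact duplicate (same value, same provenance) is dropped
            seen.add(key)
            cleaned.append(rec)
    return cleaned

records = [
    {"value": "Paris", "metadata": {"author": "geo-db", "created": "2018"}},
    {"value": "Paris", "metadata": {"author": "geo-db", "created": "2018"}},  # exact duplicate
    {"value": "Paris", "metadata": {"author": "census", "created": "2015"}},  # homonym, kept
]
print(len(clean(records)))  # 2: the duplicate is removed, the homonym survives
```

Without the metadata in the key, the third record would wrongly be treated as a duplicate of the first, which is precisely the confusion the chapter warns against.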
Data Architectures: Why Do Data Architectures Matter?

Whether or not the data is structured or defined, it must be stored in order to be accessed. Accessing the data implies that its "address" is known. To do this, one uses either search engines (and we have seen above that these are not free of questions) or search agents (small programs dedicated to the sole task of finding a specific type of data). This raises the question of conservation methodologies, which also have an impact on the data and its use. Today, we see the emergence of the notion of a "data lake," a concept borrowed from the world of electricity production. (A data lake is a storage repository that holds a vast amount of raw data in its native format, including structured, semi-structured, and unstructured data. The data structure and requirements are not defined until the data is needed.) Like the artificial lakes of hydrological dams, data are stored in a native way, i.e., in their original format, without having been structured. (Data are collected using the extract, load, and transform pattern, which implies that the sets are not structured while being collected.) As a result, all kinds of data can be found in a data lake in a raw and unstructured way. These solutions are very agile in collecting and processing data and are particularly appreciated by data scientists. However, since their maturity is limited ("data lakes" are a recent phenomenon), they raise prevalent questions about safety, quality (Fang 2015), and ethics. Indeed, data pools, because of their absolute need to be as broad and as unstructured as possible, are subject to security attacks which may consist of filling them with data that is false or falsified. For the researcher, this calls into question the confidence that can be placed in the data on which the research relies. In addition to this, structured registers called data warehouses (which only store data that has been modeled/structured) exist.
Each data warehouse includes data with an identical architecture, to the exclusion of any other. To combine data of a different nature, it is necessary to query several data warehouses, which requires significant resources to carry out the necessary calculations. Hence, although this methodology clarifies the nature of the data accessed (in a sense, everyone knows what they are accessing and how), it limits creativity and multicentric analysis. For the researcher, the emergence of these different structures further reinforces the question of the relevance of the collected data.
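The contrast between the two architectures can be sketched as schema-on-read (lake) versus schema-on-write (warehouse). This is a simplified illustration; the schema and records are invented:

```python
# Data lake: raw records are stored as-is, in their native format;
# structure is imposed only when the data is read (schema-on-read).
lake = []

def lake_ingest(raw):
    lake.append(raw)  # no validation: any shape, any format is accepted

# Data warehouse: records must match one fixed schema before storage
# (schema-on-write); anything else is rejected at ingestion time.
SCHEMA = {"id", "temperature", "timestamp"}
warehouse = []

def warehouse_ingest(record):
    if set(record) != SCHEMA:
        raise ValueError("record does not match warehouse schema")
    warehouse.append(record)

lake_ingest('{"id": 1, "temperature": 21.5}')  # partial JSON string: accepted
lake_ingest("free-text field note")            # unstructured text: also accepted
warehouse_ingest({"id": 1, "temperature": 21.5, "timestamp": "2020-01-01"})
# warehouse_ingest({"id": 2}) would raise ValueError: incomplete record rejected
```

The sketch shows the trade-off described above: the lake accepts anything, including potentially false or falsified content, while the warehouse guarantees a known structure at the price of flexibility.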
Using Data

Finally, the end use made of the data is the ultimate determinant of its quality. Indeed, it is the needs assessment that is at the core of the pressure exerted on data in the field of research, and this often determines the good or bad use of it, and therefore the occurrence or not of misconduct at different stages of the research prism. In this way, legitimate data (properly and lawfully collected) can be misappropriated, while data collected illegitimately, or unlawfully in some cases, can,
on the contrary, serve criminal justice purposes (Johannesen and Stolper 2017). As explained above, the use of data can be classified from the most legitimate to the most illegitimate (Fig. 1). To ensure the quality of what is collected, it is crucial for the researcher to be able to position the information that is to be used by relating collection to use. As such, even if the data source is uncertain (e.g., responses in fora that can be faked using the Mechanical Turk (https://www.mturk.com/) (Bai 2018; Paolacci and Chandler 2014)), there is then no risk that the research methodology will be called into question. However, justifying the data one wants to use can be problematic, because what one considers a legitimate use of data may not be a view shared by others (Ateniese et al. 2007). We can imagine that those denounced in the Panama Papers did not consider that the exposure of their identities and activities served a "legitimate" purpose. Therefore, when we speak of data collection and access, the notion of "fair," rather than "legal" or "legitimate," is now gaining importance (Arbes 2018), as these processes can be certified or accredited (Berg et al. 2004). However, until these issues are resolved, the researcher must be able to conduct his or her research in a context where claiming to be bona fide is not sufficient to ensure (or document) best practices. It is from this perspective, and not to impose an additional administrative burden on the researcher, that the notion of a "data management plan" arises.
Fig. 1 The data legitimacy circle:

1. Double legitimacy: data is "explorable" and exploitable, obtained from legitimate sources and usable for legitimate uses.
2. Legitimate data/illegitimate use: data is explorable, obtained from legitimate sources, but wrongly used (e.g., mission creep).
3. Illegitimate data/legitimate use: data is wrongly collected, but for legitimate reasons (whistle-blowing, e.g., Panama Papers).
4. Double illegitimacy: data is not explorable and should not be used, though it is (data collected on false pretenses and misused, e.g., the Cambridge Analytica case).
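The four quadrants of Fig. 1 can be expressed as a simple lookup over the two dimensions (collection and use). The names below are a purely illustrative encoding, not part of the original typology:

```python
from enum import Enum

class Legitimacy(Enum):
    DOUBLE_LEGITIMACY = 1      # lawful collection, legitimate use
    MISUSE = 2                 # lawful collection, illegitimate use (mission creep)
    WHISTLEBLOWING = 3         # unlawful collection, legitimate use (Panama Papers)
    DOUBLE_ILLEGITIMACY = 4    # unlawful collection, illegitimate use

def classify(collection_legitimate: bool, use_legitimate: bool) -> Legitimacy:
    """Map the two dimensions of Fig. 1 onto a quadrant."""
    return {
        (True, True): Legitimacy.DOUBLE_LEGITIMACY,
        (True, False): Legitimacy.MISUSE,
        (False, True): Legitimacy.WHISTLEBLOWING,
        (False, False): Legitimacy.DOUBLE_ILLEGITIMACY,
    }[(collection_legitimate, use_legitimate)]

print(classify(False, True).name)  # WHISTLEBLOWING
```

The encoding makes the chapter's point explicit: legitimacy of collection and legitimacy of use are independent axes, and a dataset must be positioned on both before use.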
The Data Management Plan (DMP)

The immediate effect of claiming that the quality of the data collected is in line with the objective of the research conducted is that one has to provide evidence of the reality of this claim. It is not enough to declare oneself in conformity with all the best practices and prerequisites for data management, and to list texts describing conformance, to ultimately be compliant. Explicit planning and commitments must be in place before research is undertaken, and this is where the DMP comes in. A DMP is a written document that describes (and defines) the data expected to be acquired or generated during a research project; how those data will be managed, analyzed, accessed, sorted, and stored; and what mechanisms will be used at the end of the project to share and preserve them. In short, all of what is detailed in the above sections (information on the data ecosystem, data quality, the nature and extent of data collection, identification, classification, standardization, architecture, etc.) must be described and presented in the DMP, though this is seldom done. In any case, making all the constituent elements of a dataset visible in a single document makes it easier to monitor, certainly from a legal point of view, but above all from the angle of the trust that can be placed in it. As such, the DMP could be considered the ultimate data quality assessment tool (Sallans and Lake 2014). Overabundant content and fast-moving networks and devices have redefined the current understanding of data and its uses and created novel ethical challenges, specifically as to how the structure and speed of the data profoundly determine its quality. Issues remain that coincide with the emergence of new ethical questions.
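A DMP is a document, not code, but the elements it is expected to cover can be sketched as a structured record. The field names below are illustrative, not a mandated template:

```python
from dataclasses import dataclass

@dataclass
class DataManagementPlan:
    """Minimal skeleton of the elements a DMP is expected to document."""
    project: str
    data_description: str          # what data will be acquired or generated
    collection_methods: str        # ecosystem and collection methodology
    quality_controls: str          # cleaning, metadata, standardization
    storage_and_security: str      # architecture (lake, warehouse), access control
    sharing_and_preservation: str  # end-of-project mechanisms
    legal_basis: str               # consent, licenses, legitimacy of sources

dmp = DataManagementPlan(
    project="Example survey study",
    data_description="Anonymized questionnaire responses",
    collection_methods="Online survey with a documented sampling frame",
    quality_controls="Duplicate removal; metadata recorded per response",
    storage_and_security="Encrypted institutional repository",
    sharing_and_preservation="Deposit in an open archive after embargo",
    legal_basis="Informed consent; GDPR-compliant processing",
)
```

Filling such a record forces the researcher to make every constituent element of the dataset visible in one place, which is the monitoring and trust function the DMP is meant to serve.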
With this in mind, we now address the ethical issues posed by new data architectures and the underlying question of maintaining the use of data as a tool for measuring the validity and integrity of a research process.
Current Ethical Challenges when Using Data in Research
As mentioned above, the different routes and strategies currently used by data technologies have a significant impact on research activities. Conventional concepts and certainties could very well be undermined by the present situation, which calls into question principles long considered immutable, such as data architecture standards or automated information acquisition frameworks.
Challenges Posed by Data Ethics
To consider all of the challenges faced by researchers in their use of data, it is necessary to highlight the extent to which the questions approached through data ethics have an impact on the standards of ethical research practice as defined by the various codes of conduct developed in the field, most notably the Code of Conduct of the All European Academies (ALLEA), which serves as a de facto standard. In this 20-page document, data is mentioned 28 times, repeatedly presented as paramount to the research environment, and receives differentiated treatment in the form of a dedicated chapter. By way of comparison, integrity, which is the core subject
17
Research Ethics in Data: New Technologies, New Challenges
315
of the code, appears 30 times. This shows the importance of data as subject matter. In their approach to data, the code's authors focus on five principles that must be addressed when using data in research (see Fig. 2). In a view founded on data ethics, these critical paths can be summarized as two imperatives: (1) respect for data ethics and (2) respect for the ethics related to data practices. Data ethics is characterized as 'the branch of ethics that studies and evaluates moral problems related to data (including generation, recording, curation, processing, dissemination, sharing and use), algorithms (including the use of artificial intelligence, artificial agents, machine learning and robots) and corresponding practices (including responsible innovation and attitudes toward data, such as avoiding fabrication (making up results and recording them as if they were real) and falsification (manipulating research materials, equipment or processes, or changing, omitting or suppressing data or results without justification), programming, hacking and professional codes), to formulate and support morally good solutions (e.g. right conducts or right values)' (Floridi and Taddeo 2016).
Fig. 2 Conjunctions and differences in approaches: good research practices and data ethics
Respect for ethics related to data practices appears to be at the core of current state-of-the-art research. Fig. 2 shows that the ALLEA code focuses more on research practice issues (at the core of the five principles quoted within the code defining adequate data treatment: access, stewardship, transparency, legitimacy, and ownership) than on the technical concerns generated by the data (raised by only three of the five concerns). However, it is these very technological considerations that raise the most significant questions (Fig. 3). Indeed, in the current data ecosystem, the researcher is confronted with two major topics:
• Data legitimacy (adequacy and right to use)
• Data manipulation (materialized, e.g., by the arrival of bots and artificial intelligence that can give the illusion of the existence of a respondent in an online survey even though it is an artificial creation) and associated topics such as data reliability, which can be called into question by multicentrism (the conservation of elements of the same data in several centers, as in the blockchain; geo-distribution; or the difficulties of access to raw data vs. concatenated data; etc.), as well as data fabrication and falsification, which are facets of one and the same concern: the quality of the data (ISACA Auditing Artificial Intelligence 2018).
Fig. 3 When research ethics meet data ethics: current challenges (adapted from Floridi and Taddeo 2016). The figure maps three areas. Ethics of data: re-identification of individuals through data mining, linking, and merging; re-use of large datasets; group privacy and group discrimination; trust and transparency. Ethics of algorithms: increasing complexity and autonomy of algorithms; moral responsibility and accountability; unforeseen and undesired consequences and missed opportunities. Ethics of practices: responsibilities and liabilities of the people and organizations in charge of data processes, innovation, and usage; protection of the rights of individuals and groups (consent, user privacy, secondary use); responsible innovation. Against these, the figure sets questionable research practices: data legitimacy (right to use), data manipulation, data reliability, and data falsification.
The reason for this focus is to be found in what researchers ultimately expect from data: not only material on which to conduct their research but also sets through which any third party can “replay” that research (Lokanan 2019).
To New Situations: Renewed Imperatives
In the context of current data and data polymorphism, the technical possibility of repetition is largely undermined, in particular because a data usage pathway can rarely be replicated identically and in full: there is always something that has changed by the very second the initial process is completed (Rivers and Lewis 2014). Likewise, it is difficult to determine at what precise point in time data may become “false,” and whether the responsibility of the researcher, as a good-faith user of that false data, can be engaged.
Using live data has known drawbacks. Chief among them, because the network is “live,” it repeatedly introduces novel results which need to be considered in time. This is an excellent way to capture the evolution of a topic in line with current Big Data standards (enhanced veracity through enhanced live data usage). The legal context can also influence live search results. Data is not always structured now, so there are multiple paths to obtain it. In short, one can access the same data with different questions, while identical questioning at a different point in time can give entirely different results without these being fraudulent. This is very obvious in forensic science, for example, and in all areas where facts occur continuously and cannot be isolated other than through a temporal cutoff.
Nonetheless, even if the collection methodology is faultless, the data may have been manipulated or transformed in different ways without the researcher’s knowledge. Yet expectations in terms of data validation (the need for controls via metadata) are poorly addressed in guidelines, and the responsibilities involved in unknowingly reproducing and using falsified data are not addressed at all (Anoop et al. 2019). It is therefore easier to focus on “ways of doing things” as a marker of the reproduction of a process.
This highlights that the codes in force are concerned above all with these points (the methodology) in their approach to research data ethics. In any case, this leaves a number of questions unanswered, particularly concerning the standardized tools for the ethical monitoring of data use practices that should be considered (Ryan et al. 2007).
Revising Data-Based Tools for Assessing the Validity of the Research
In light of this discussion, several questions arise:
• As the validity of a research endeavor must be measured over time, is it reasonable to keep reproducibility as a reference, knowing that the polymorphism of the data and the methods of conserving it (e.g., the deployment of data lakes) will make this operation more and more delicate? What value should
be given to the reproduction of an experiment on a set of data collected and provided by the researcher for this sole purpose and with this single objective?
• Is it reasonable, in the context of massive data collection, to point out errors and treat them as falsifications when they may be due to falsification earlier in the data collection chain (data reliability)? There is thus a need to determine the starting point of the falsification in order to attribute it. This is not easy, because such situations are often detected quite late in the chain, such as when results are exploited, published, or have found their way onto the market.
• Is it reasonable to consider data as “neutral” and above all “finite,” when the use of algorithms to construct data (e.g., feature extraction) is developing, particularly in the context of predictive analyses?
In this respect, many cases can be observed in both research and industry. The Volkswagen “emission scandal” (Ewing 2017) and “clean diesel” (Ewing 2018) cases exemplify this, as it was, both times, necessary to go back up the data chain to determine the starting point of the fraud. Nonetheless, it seems that such a forensic approach to analysis is not currently favored in academic practice (and it might be of interest to investigate why in another paper). Furthermore, the good practices advocated by the various codes of conduct do not engage with new topics such as the veracity of the data. While the notion appears in research, particularly when it comes to verifying the statements of patients participating in surveys (Smith-McDonald 2016), the term itself is never used in the codes of conduct. Indeed, as shown in Simone Rebaudengo’s project “Ethical Things,” results can be distorted using human intelligence tasks, for which the only motivation is a minimal gain and whose results are increasingly biased (Kan and Drummey 2018).
(A project showing how biased the use of Amazon Mechanical Turk’s so-called human intelligence tasks can be: http://www.simonerebaudengo.com/project/ethicalthings)
Similarly, in the context of machine learning, how can one prevent artificial intelligence from delivering biased results? This is not addressed within the existing literature (Dieterich et al. 2016). There is, however, a positive point here: researchers increasingly have a range of tools (some of them open source) that allow bias detection in built datasets. These tools are based on probabilistic “metrics” which, if they deviate from statistical normality, will generate alerts and leads for solving the problem. (The “normality assumption” implies that the set has been tested prior to certain regressors being applied and that the data roughly fit a bell curve shape (Lumley et al. 2002).) The challenge is to choose the right metrics for a given artificial intelligence. This is done on a case-by-case basis, and some hidden biases may never be discovered depending on the metrics chosen. Nonetheless, current research guidelines remain silent on these indicators. Although the EU published Ethics Guidelines for Trustworthy AI in April 2019 (HLEG 2019), they are not specific to the field.
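The alerting mechanism just described can be illustrated with a deliberately minimal sketch: a single moment-based metric (skewness) compared against a threshold. The function and threshold here are assumptions for illustration only; real open-source bias-detection tools use far richer probabilistic metrics.

```python
import statistics

def sample_skewness(values):
    """Crude moment-based skewness: near 0 for a roughly symmetric sample."""
    m = statistics.fmean(values)
    s = statistics.pstdev(values)
    if s == 0:
        return 0.0
    n = len(values)
    return sum((x - m) ** 3 for x in values) / (n * s ** 3)

def normality_alert(values, threshold=1.0):
    """Raise a flag when a column deviates strongly from a bell-curve shape."""
    return abs(sample_skewness(values)) > threshold

symmetric = [1, 2, 3, 4, 5, 6, 7, 8, 9]
lopsided = [1, 1, 1, 1, 1, 1, 1, 1, 100]  # e.g., a flood of near-identical bot responses
print(normality_alert(symmetric), normality_alert(lopsided))  # False True
```

The point of such a metric is only to generate a lead: as noted above, a deviation signals that the dataset deserves scrutiny, not that fraud has occurred.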
Conclusion
As we have seen, the ethical issues raised by the evolution of architectures and approaches to data collection are numerous, and rapid changes in data collection architectures, uses, and methodologies are likely to influence good research practices in the short to medium term. For now, in the absence of a much-needed in-depth consideration of technological imperatives, the deployment of thorough DMPs, taking into account all aspects of the data (including “voluntary fabrication” such as feature extraction in the case of AI) and not just the issues related to data practice, constitutes the best safeguard against the different types of data misuse.
Unfortunately, DMPs are too often seen as tools that enhance technocratic control of research, whereas the original ambition was to serve it and to improve the transparency of its practices. So that this does not remain a pious wish, it is the duty of the structures supporting the scientific community not to treat this resource as an administrative whim, as is too often the case, but as a practical tool for improving efficiency and trust in the data used in R&D operations (Dabrowski 2018).
References
ALLEA (2017) The European Code of Conduct for Research Integrity. Last accessed 3 Sept 2018. https://www.allea.org/wp-content/uploads/2017/05/ALLEA-European-Code-of-Conduct-forResearch-Integrity-2017.pdf
Anoop K, Gangan MP, Deepak P, Lajish VL (2019) Leveraging heterogeneous data for fake news detection. In: Linking and mining heterogeneous and multi-view data. Springer, Cham, pp 229–264
Arbes V (2018) Fair data accreditation: ‘Antidote in the wake of a scandal’. B&T 2826:26
Ateniese G, Burns R, Curtmola R, Herring J, Kissner L, Peterson Z, Song D (2007) Provable data possession at untrusted stores. In: Proceedings of the 14th ACM conference on computer and communications security. ACM, pp 598–609
Bai H (2018) Evidence that a large amount of low quality responses on MTurk can be detected with repeated GPS coordinates. Retrieved from: https://www.maxhuibai.com/blog/evidence-thatresponses-from-repeating-gps-are-random
Berg BL, Lune H (2004) Qualitative research methods for the social sciences, vol 5. Pearson, Boston
Berti L (1997) Out of over information by information filtering and information quality weighting. In: IQ, pp 187–193
Berti L (1999) Quality and recommendation of multi-source data for assisting technological intelligence applications. In: International conference on database and expert systems applications. Springer, Berlin/Heidelberg, pp 282–291
Bruening P, Leta Jones M, Abrams M (2013) Big data and analytics: seeking foundations for effective privacy guidance. A discussion document, February 2013
Dabrowski A (2018) Productivity through data management (aka writing an effective data management plan)
De Mauro A, Greco M, Grimaldi M (2015) What is big data? A consensual definition and a review of key research topics. In: AIP conference proceedings, vol 1644, no 1, pp 97–104. AIP
Dieterich W, Mendoza C, Brennan T (2016) COMPAS risk scales: demonstrating accuracy equity and predictive parity. Northpointe, USA
Ewing J (2017) Faster, higher, farther: the inside story of the Volkswagen scandal. Random House, USA
Ewing J (2018) 10 monkeys and a beetle: inside VW’s campaign for “clean diesel”. The New York Times 25
Fang H (2015) Managing data lakes in big data era: what’s a data lake and why has it become popular in data management ecosystem. In: 2015 IEEE international conference on cyber technology in automation, control, and intelligent systems (CYBER), pp 820–824. IEEE
Fisher CW, Lauría EJ, Matheus CC (2007) In search of an accuracy metric. In: ICIQ, pp 379–392
Floridi L, Taddeo M (2016) What is data ethics? Philos Trans R Soc A 374:20160360. https://doi.org/10.1098/rsta.2016.0360
Friedman J, Hastie T, Tibshirani R (2001) The elements of statistical learning, vol 1, no 10. Springer series in statistics. Springer, New York
Govindarajan V (1984) Appropriateness of accounting data in performance evaluation: an empirical examination of environmental uncertainty as an intervening variable. Acc Organ Soc 9(2):125–135
Hernández MA, Stolfo SJ (1998) Real-world data is dirty: data cleansing and the merge/purge problem. Data Min Knowl Disc 2(1):9–37
HLEG A (2019) Ethics guidelines for trustworthy AI. https://ec.europa.eu/futurium/en/ai-allianceconsultation. Retrieved 30 Sept 2019
Holt TJ, Smirnova O (2014) Examining the structure, organization, and processes of the international market for stolen data. Research report submitted to the U.S. Department of Justice. https://www.ncjrs.gov/pdffiles1/nij/grants/245375.pdf
Homewood J (2004) Consumer health information e-mails: content, metrics and issues. In: Aslib proceedings, vol 56, no 3, pp 166–179. Emerald Group Publishing
ISACA Auditing Artificial Intelligence (2018) ISACA white paper. http://www.isaca.org/KnowledgeCenter/Research/ResearchDeliverables/Pages/Auditing-Artificial-Intelligence.aspx. Retrieved 21 Aug 2019
Jamali HR, Asadi S (2010) Google and the scholar: the role of Google in scientists’ information-seeking behaviour. Online Inf Rev 34(2):282–294
Johannesen N, Stolper T (2017) The deterrence effect of whistleblowing: an event study of leaked customer information from banks in tax havens
Kan IP, Drummey AB (2018) Do imposters threaten data quality? An examination of worker misrepresentation and downstream consequences in Amazon’s Mechanical Turk workforce. Comput Hum Behav 83:243–253
Krippendorff K (2008) Reliability. In: The international encyclopedia of communication
Lapaire JR (2018) Why content matters. Zuckerberg, Vox Media and the Cambridge Analytica data leak. ANTARES: Letras e Humanidades 10(20):88–110
Lazer D, Kennedy R, King G, Vespignani A (2014) The parable of Google Flu: traps in big data analysis. Science 343(6176):1203–1205
Lokanan ME (2019) Methodological problems with big data when conducting financial crime research
Lumley T, Diehr P, Emerson S, Chen L (2002) The importance of the normality assumption in large public health data sets. Annu Rev Public Health 23(1):151–169
Mariner WK (2007) Mission creep: public health surveillance and medical privacy. Boston Univ Law Rev 87:347
Maydanchik A (2007) Data quality assessment. Technics Publications, USA
Mihaila GA, Raschid L, Vidal ME (2000) Using quality of data metadata for source selection and ranking. In: WebDB (informal proceedings), pp 93–98
Onwuegbuzie AJ, Leech NL (2005) On becoming a pragmatic researcher: the importance of combining quantitative and qualitative research methodologies. Int J Soc Res Methodol 8(5):375–387
Paolacci G, Chandler J (2014) Inside the Turk: understanding Mechanical Turk as a participant pool. Curr Dir Psychol Sci 23(3):184–188
Pedrycz W, Chen SM (eds) (2014) Information granularity, big data, and computational intelligence, vol 8. Springer, Cham
Pipino LL, Lee YW, Wang RY (2002) Data quality assessment. Commun ACM 45(4):211–218
Popper K (2005) The logic of scientific discovery. Routledge, London
Rahm E, Do HH (2000) Data cleaning: problems and current approaches. IEEE Data Eng Bull 23(4):3–13
Rivers CM, Lewis BL (2014) Ethical research standards in a world of big data. F1000 Research, vol 3, 15 pp
Ross J (2003) Creating a strategic IT architecture competency: learning in stages
Ryan F, Coughlan M, Cronin P (2007) Step-by-step guide to critiquing research. Part 2: qualitative research. Br J Nurs 16(12):738–744
Sallans A, Lake S (2014) Data management assessment and planning tools. In: Ray JM (ed) Research data management: practical strategies for information professionals, pp 87–107
Sartin B (2006) ANTI-forensics: distorting the evidence. Comput Fraud Secur 2006(5):4–6
Smith-McDonald J (2016) Patient self-report data and assessment measure correlation. Doctoral dissertation, The Chicago School of Professional Psychology
Woodside AG (2016) Embrace complexity theory, perform contrarian case analysis, and model multiple realities. In: Bad to good: achieving high quality and impact in your research. Emerald Group Publishing, Bingley, pp 57–81
Zeeberg BR, Riss J, Kane DW, Bussey KJ, Uchio E, Linehan WM et al (2004) Mistaken identifiers: gene name errors can be introduced inadvertently when using Excel in bioinformatics. BMC Bioinf 5(1):80
Zins C (2007) Conceptual approaches for defining data, information, and knowledge. J Am Soc Inf Sci Technol 58(4):479–493
Data Sources
IBM Marketing Cloud Study. Last accessed 15 Aug 2018. https://public.dhe.ibm.com/common/ssi/ecm/wr/en/wrl12345usen/watson-customer-engagement-watson-marketing-wr-other-papersand-reports-wrl12345usen-20170719.pdf
A Best Practice Approach to Anonymization
18
Elaine Mackey
Contents
Introduction 324
What Is Anonymization 325
The Absolute Versus Risk-Based Approach 327
What Should We Expect: Risk and Utility 328
Assessing and Managing Reidentification Risk 330
Data Environment Perspective 330
Functional Anonymization 331
Anonymisation Decision-Making Framework 331
Anonymisation Decision-Making Framework 332
Concluding Remarks 340
Cross-References 341
References 341
Abstract
The need for clear guidance on anonymization is becoming increasingly pressing for the research community given the move toward open research data as common practice. Most research funders take the view that publicly funded research data are a public good which should be shared as widely as possible. Thus researchers are commonly required to detail data sharing intentions at the grant application stage. What this means in practice is that researchers need to understand the data they collect and hold and under what circumstances, if at all, they can share data; anonymization is a process critical to this, but it is complex and not well understood. This chapter provides an introduction to the topic of
E. Mackey (*) Centre for Epidemiology Versus Arthritis, Faculty of Biology, Medicine and Health, University of Manchester, Manchester, UK e-mail: [email protected] © Springer Nature Switzerland AG 2020 R. Iphofen (ed.), Handbook of Research Ethics and Scientific Integrity, https://doi.org/10.1007/978-3-030-16759-2_14
anonymization, defining key terminology and setting out perspectives on the assessment and management of reidentification risk and on the role of anonymization in data protection. Next, the chapter outlines a principled and holistic approach to doing well-thought-out anonymization: the Anonymisation Decision-making Framework (ADF). The framework unifies the technical, legal, ethical, and policy aspects of anonymization.
Keywords
Anonymization · Anonymisation Decision-making Framework · Data environment · Personal data · General Data Protection Regulation
Introduction
Anonymization is essentially a process to render personal data nonpersonal. Given this description, it may seem a simple process, but this is not the case. Anonymization is complex and not well understood, yet it is integral to collecting, managing, and sharing data appropriately and safely. The need for clear guidance on anonymization is becoming increasingly pressing, in particular for the research community given the move toward open research data as common practice. Most research funders take the view that publicly funded research data are a public good which should be made openly available, with as few restrictions as possible and within a well-defined time period (see Concordat on Open Research Data 2016; Open Research Data Taskforce Report 2017, 2018). As a consequence, researchers are frequently required to detail data sharing intentions at the grant application stage. What this means in practice is that researchers need to understand the data they collect and hold, and under what circumstances, if at all, they can responsibly share data.
This chapter provides an introduction to the topic of anonymization; it is divided into two parts. In the first part, the key terminology associated with anonymization is defined within the framework of European (including UK) data protection legislation. The discussion then turns to consider how the way one thinks about the reidentification problem influences the way in which risk is understood and managed. The reidentification problem refers to the inherent risk of someone being reidentified in a confidential dataset. In the final section of part one, the role of anonymization in data protection is examined through two approaches, the absolute approach and the risk-based approach, which take opposing views on what the risk of identification in a confidential dataset should realistically be.
In the second part of the chapter, a framework for doing well-thought-out anonymization is outlined, the Anonymisation Decision-making Framework (ADF), which is written up in a book of the same name (Elliot et al. 2016). The ADF is a first attempt to unify into a single architecture the technical aspects of doing anonymization with legal, social, and ethical considerations. The framework is underpinned by a relatively new way of thinking about the reidentification problem called the data environment perspective. This (data environment) perspective shifts
the traditional focus on data toward a focus on the relationship between data and data environment to understand and address the risk of reidentification (Mackey and Elliot 2013; Elliot and Mackey 2014).
What Is Anonymization
If you know anything about anonymization, you will invariably have noted the complex nature of the topic. The term itself is not uniformly described, nor is there complete agreement on its role in data protection or on how it might be achieved. In this section, some of the inherent complexities underpinning anonymization are drawn out, and current thinking on the topic is outlined, in line with the General Data Protection Regulation (GDPR 2016/679) and the UK’s Data Protection Act (DPA 2018).
The terms anonymization and de-identification are sometimes taken to mean the same thing. However, these two terms have quite different meanings when applied in a European (and UK) context. As an aside, it is worth noting that the USA, Canada, and Australia use this terminology differently from Europe. Anonymization as a concept is understood within a legal framework, so to elaborate further, one must look to GDPR and in particular at how the Regulation defines personal data. GDPR defines personal data as meaning:
any information relating to an identified or identifiable natural person; an identifiable natural person is one who can be identified, directly or indirectly . . . Article 4(1)
The key part of this definition, for the purpose of describing anonymization, is the clause stating that a person may be identified from data either:
1. Directly or
2. Indirectly
The process of de-identification refers to the removal, replacement, and/or masking of direct identifiers (sometimes called formal identifiers) such as name, address, date of birth, and unique (common) reference numbers; as such, it addresses no more than the first condition, i.e., the risk of identification arising directly from data (Elliot et al. 2016). The process of anonymization, in contrast, should address both conditions 1 and 2, i.e., the risk of identification arising both directly and indirectly from data. The removal of direct identifiers is necessary, but rarely sufficient on its own, for anonymization. Thus, one will be required either to further mask or alter the data in some way or to control the environment in which the data exists (Elliot et al. 2016). Anonymization, therefore, might best be described as a process whereby personal data, or personal data and its environment, are modified in such a way as to render data subjects no longer identifiable. Data that has undergone the process of anonymization is
considered anonymous information and is not in scope of GDPR. Recital 26 of GDPR stipulates that “the principles of data protection should therefore not apply to anonymous information” (2016/679).
There is a third term critical to this discussion, that of “pseudonymization,” a concept newly introduced into data protection legislation by GDPR. Pseudonymization is defined as “the processing of personal data in such a manner that the personal data can no longer be attributed to a specific data subject without the use of additional information, provided that such additional information is kept separately and is subject to technical and organisational measures to ensure that the personal data are not attributed to an identified or identifiable natural person” (GDPR Article 4(5) 2016/679). Reading this definition, we might reasonably surmise that pseudonymization maps onto the description given previously of de-identification; most especially, that pseudonymization, like de-identification, addresses no more than the risk of identification arising directly from data. The idea that pseudonymization addresses only part of the reidentification risk fits with the very important point that, within the framework of GDPR, data that has undergone the process of pseudonymization is considered personal data.
The discussion has thus far provided a description and definition of the core concepts of anonymization, de-identification, and pseudonymization (for the remainder of the chapter, the term de-identification will not be used). The next step is to apply the concepts of anonymization and pseudonymization in practice, and this is where it gets complicated. How these concepts are applied will be shaped by how one thinks about and manages the reidentification problem.
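The distinction can be made concrete with a minimal sketch of pseudonymization, assuming a keyed hash as the masking technique (a common choice, though GDPR does not prescribe any particular one); all names, fields, and the key below are illustrative.

```python
import hmac
import hashlib

# The secret key plays the role of GDPR Article 4(5)'s "additional information":
# it must be kept separately, under technical and organisational safeguards.
SECRET_KEY = b"stored-separately-under-access-control"  # illustrative only

def pseudonymize(record: dict) -> dict:
    """Replace the direct identifier with a keyed token; keep other fields."""
    token = hmac.new(SECRET_KEY, record["name"].encode(), hashlib.sha256).hexdigest()[:12]
    out = dict(record)
    del out["name"]
    out["pseudonym"] = token
    return out

record = {"name": "Jane Doe", "age_band": "40-44", "condition": "asthma"}
pseudo = pseudonymize(record)
assert "name" not in pseudo
# Whoever holds SECRET_KEY can regenerate the same token and re-link the
# record to Jane Doe, which is one way of seeing why pseudonymized data
# remains personal data under GDPR.
```

Note that the remaining fields (age band, condition) are untouched indirect identifiers, which is exactly the risk pseudonymization does not address.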
Traditionally, researchers and practitioners have addressed the reidentification problem by focusing (almost exclusively) on the data to be shared, meaning that the models built to assess and control reidentification risk, whilst statistically sophisticated, are largely based on assumptions about real-world considerations such as the how and why of a reidentification attempt (Mackey and Elliot 2013). Furthermore, this approach does not take into account the issue of perspectives, that is, who is looking at the data: a “data controller,” a “data processor,” or a “data user.” A data-centric approach is likely to lead to the conclusion that data that has undergone the process of pseudonymization is always personal data, but there may be particular sets of circumstances under which one can argue that this is not the case; recall the definition of anonymization just given, which may be achieved by modifying data, or data and its environment.
To explain, and this point is fundamental: data does not exist in a vacuum. Rather, data exists in an environment that Mackey and Elliot (2013) describe as having four components in addition to the data to be shared: the presence or absence of other data (that can potentially be linked to the data to be shared), agents (which incorporates the notion of who is looking at the data), the presence or absence of governance processes, and infrastructure (expanded on in section “Data Environment Perspective”). Thinking about data in relation to environments puts a spotlight on a crucial point: data should not be viewed as a static, unchanging object, for example as “anonymous” forever once the researcher has removed/masked direct identifiers and masked/altered (some of) the statistical properties of the data. Rather, data should be understood as a dynamic object; its status as personal data or anonymous information is dependent
18
A Best Practice Approach to Anonymization
327
on the interrelationship between data and environment, not just the data. If factors in either or both the data and the environment change, the status of the data, as personal data or anonymous information, might well change. This point about the dynamic nature of data underpins Mourby et al.'s (2018) argument that data that has undergone the process of pseudonymization may or may not be personal data; it will crucially depend on the data–data environment relationship (also see Elliot et al. 2018). To illustrate this point further, imagine the following scenario: one detailed dataset and two possible share environments, one an "open publication" environment, the other a highly controlled "safe haven" environment. The dataset is described thus: direct identifiers are removed (i.e., participants' names, addresses, dates of birth); it contains demographic data (3-digit postcode, age in yearly intervals, gender, and ethnicity), clinical data (current diseases and conditions, past medical history, and concomitant medication), and event date data (hospital admissions and outpatient clinic attendance). Taking a data-centric approach invariably leads one to ask how risky the data is (as a basis for classifying the data as pseudonymized personal data or anonymous information). A more pertinent question would be: given the data, how risky is the data environment? With that in mind, consider the dataset in respect of the following two environments – the open publication environment and the safe haven. In the case of an open publication environment such as the Internet, where there are few or no infrastructure and governance controls and no restrictions on access to other data or on who can access the data (potentially anyone in the world), the reidentification risk is likely to be very high. Given this, the data should be considered pseudonymized, meaning indirectly identifying personal data.
In the case of the safe haven environment, let us suppose that there are strict controls on both the who and how of access, such as a project approval process, researcher accreditation training, physical and IT security infrastructure, and governance protocols on what can be brought into and out of the physical safe haven infrastructure; the reidentification risk is likely to be remote. Given the data and the features of the safe haven environment – where the risk is determined to be remote and the agent looking at the data does not have reasonable means to access the directly identifying information (sometimes referred to as keys) – one might reasonably make a case for classifying the data as anonymized, or what Elliot et al. (2016) refer to as functionally anonymized (the concept of functional anonymization is outlined in section "Functional Anonymisation").
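The environment-dependent classification just described can be caricatured in a few lines of code. The control flags and the decision rule are entirely illustrative assumptions for exposition, not legal tests:

```python
def classify_data_status(has_direct_identifiers, environment):
    """Coarse illustration: the same dataset may count as pseudonymized
    personal data in one environment and functionally anonymous in another.

    `environment` is a dict of illustrative boolean controls; real
    assessments weigh many more factors and require judgment.
    """
    if has_direct_identifiers:
        return "personal data"
    controlled = (environment.get("access_restricted", False)
                  and environment.get("governance_controls", False)
                  and not environment.get("keys_accessible", True))
    return "functionally anonymous" if controlled else "pseudonymized personal data"

# Two hypothetical share environments for the same de-identified dataset.
open_web = {"access_restricted": False, "governance_controls": False,
            "keys_accessible": False}
safe_haven = {"access_restricted": True, "governance_controls": True,
              "keys_accessible": False}
```

The same dataset is classified differently in the two environments, which is the chapter's central point about the data–environment relationship.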
The Absolute Versus Risk-Based Approach There is an ongoing debate on the role of anonymization in data protection. This debate is dominated by two approaches: the absolute approach and risk-based approach. The absolute approach has its origins in the fields of computer science and law, while the risk-based approach has its origins in the field of statistical disclosure control (SDC). In a nutshell, scholars in computer science and law have questioned the validity of anonymization – suggesting that it has failed in its purpose (and promise to data subjects) to provide an absolute guarantee of data confidentiality. Conversely, scholars in the field of SDC have long acknowledged that
anonymization is not a risk-free endeavor – and as a consequence, they have built an enormous body of work on risk assessment and disclosure control methodology for limiting reidentification risk (see, e.g., Willenborg and De Waal 2001; Duncan et al. 2011; Hundepool et al. 2012). Both approaches, it is fair to say, have influenced academic and legal literature and indeed practice in data protection. These approaches are considered, albeit briefly, next. From the 1990s onward, there was a series of what are referred to as "reidentification demonstration attacks" by computer scientists, the purpose of which was to illustrate that reidentification was possible in datasets considered to be anonymous (e.g., Sweeney (1997), the AOL (Arrington 2006), Netflix (CNN Money 2010), and New York Taxi (Atokar 2014) cases, and the surname inference and identification of genomic data by Gymrek et al. (2013)). One of the most famous demonstration attacks was carried out by Latanya Sweeney in 1996; she identified a Massachusetts Governor, Governor Weld, in a confidential public-release hospital insurance dataset. Sweeney matched the hospital insurance dataset with a voter registration file, which she had bought for $20, matching on date of birth, gender, and zip code. She had in addition "other external information" from media reports, as the Governor had recently collapsed at a rally and been hospitalized (see Barth-Jones 2016). This case, as with other more recent examples, has led to the questioning of the validity of anonymization to do what it had been expected to do, i.e., to ensure that confidential data remain confidential (Rubinstein 2016). Paul Ohm, a US law professor, discusses at length the AOL, Netflix, and Governor Weld reidentification demonstrations, suggesting in his 2010 paper, titled Broken Promises of Privacy: Responding to the Surprising Failure of Anonymization, that reidentification could occur with relative ease.
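The mechanics of a Weld-style linkage attack are simple enough to sketch. The records below are invented, but the join mirrors Sweeney's matching on date of birth, gender, and zip code:

```python
# "De-identified" hospital records: names removed, quasi-identifiers kept.
hospital = [
    {"dob": "1945-07-31", "gender": "M", "zip": "02138", "diagnosis": "collapse"},
    {"dob": "1962-03-14", "gender": "F", "zip": "02139", "diagnosis": "asthma"},
]
# Publicly purchasable voter register: names present, same quasi-identifiers.
voters = [
    {"name": "W. Weld", "dob": "1945-07-31", "gender": "M", "zip": "02138"},
    {"name": "J. Doe",  "dob": "1971-01-02", "gender": "F", "zip": "02139"},
]

def link(hospital, voters, keys=("dob", "gender", "zip")):
    """Join the two sources on their shared quasi-identifiers."""
    index = {tuple(v[k] for k in keys): v["name"] for v in voters}
    return [(index[tuple(h[k] for k in keys)], h["diagnosis"])
            for h in hospital
            if tuple(h[k] for k in keys) in index]

matches = link(hospital, voters)
```

Generalizing any one of the three quasi-identifiers (e.g., year of birth instead of full date) would have broken the unique match, which is precisely the kind of control the SDC literature studies.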
Ohm argued that data could be useful or anonymous, but not both; he said, "no useful database can ever be perfectly anonymous, and as the utility of the data increases, the privacy decreases" (Ohm 2010: 1706). Ohm's position on anonymization has been widely critiqued (for a comprehensive critique see Elliot et al. 2018; Barth-Jones 2012, 2015, 2016). One of the key points made by those critiquing Ohm's position was that in the reidentification cases such as Governor Weld, AOL, Netflix, and New York Taxis, the data were poorly protected compared to modern standards of anonymization. More especially, the approach used in some of the cases was more akin to pseudonymization (as described in this chapter) than to anonymization. It is important to recognize that just because one claims that data is anonymous does not mean that it is.
What Should We Expect: Risk and Utility The notion of absolute or irreversible anonymization rests on an expectation of zero risk of reidentification in a confidential dataset. The argument put forward by Ohm (2010), that "no useful data can ever be perfectly anonymous," is, as Elliot et al. (2016, 2018) note, perfectly true – but, and this is the critical issue, Ohm's argument misses the fundamental point that anonymization is not just about data protection. It
is a process inseparable from its purpose: to enable the sharing and dissemination of useful data. There is, after all, little point in sharing data that does not represent what it is meant to represent. Low utility is problematic on two accounts: (i) if the data are of little, or no, use to users, the data controller will have wasted their time and resources on them, and there may still be a reidentification (disclosure) risk present but no justifiable use case, and (ii) the data could lead to misleading conclusions, which might have significant consequences if, for example, the data are used for making policy decisions (Elliot et al. 2016). Anonymization is essentially a risk management process. Elliot et al. suggest that "'anonymised' [should be] understood in the spirit of the term 'reinforced' within 'reinforced concrete'. We do not expect reinforced concrete to be indestructible, but we do expect that a structure made out of the stuff will have a negligible risk of collapsing" (Elliot et al. 2016: 1).
The role of the data controller, when anonymization is understood in this way, is to produce useful data for the intended audience(s) while managing risk such that it is remote. It is not a matter of utility versus data privacy as Ohm (2010) had suggested (i.e., as the "utility of the data increases, the privacy decreases," Ohm 2010: 1706); you can have both. Achieving low risk and high utility, however, requires considering data in relation to its environment when assessing and managing reidentification risk. A risk-based approach to anonymization has broad agreement, among academics at least (Rubinstein 2016), and is implemented in practice by National Statistical Institutes around the world. For example, the UK's Office for National Statistics carries out an enormous amount of work in the area of statistical disclosure risk assessment and control to provide confidential census data in varying formats to a wide range of audiences. It is also a position supported by the UK's statutory authority, the Information Commissioner's Office, which stated in its 2012 Code of Practice on Anonymisation: "The DPA (1998) does not require anonymisation to be completely risk free – you must be able to mitigate the risk of identification until it is remote. If the risk of identification is reasonably likely the information should be regarded as personal data - these tests have been confirmed in binding case law from the High Court" (2012: 6).
Although not explicitly stated in the legislation, it would seem that a risk-based approach is supported by GDPR. Recital 26 stipulates: . . . To determine whether a natural person is identifiable, account should be taken of all the means reasonably likely to be used, such as singling out, either by the controller or by another person to identify the natural person directly or indirectly. To ascertain whether means are reasonably likely to be used to identify the natural person, account should be taken of all objective factors, such as the costs of and the amount of time required for identification, taking into consideration the available technology at the time of the processing and technological developments. (GDPR 2016/679)
Note that the condition for determining identifiability turns on the means reasonably likely to be used to identify, not on the risk of identification. It is beyond the scope of this chapter to discuss this point further, other than to say that it places a spotlight on the issue of how (the means by which) identification might occur.
Assessing and Managing Reidentification Risk In addition to the different perspectives on the role of anonymization in data protection, there are also differing perspectives, as introduced in section "What is Anonymisation," on the way in which the reidentification problem is understood and addressed: the data-centric approach and the environment-centric approach. The data-centric approach is the dominant position; it sees risk as originating from and contained within data. This approach undoubtedly underpinned, for example, the Governor Weld case, whereby those who released the hospital insurance database had failed to take account of the environment in which they were releasing the data, namely, an open environment with no governance or infrastructure controls, where potentially a large number of people could access the data and where other data existed that could be linked to it (i.e., the voter register and media coverage about the Governor's health as a high-profile figure). The focus on data comes at the expense of other wider and key considerations such as how, or why, a reidentification might happen or what skills, knowledge, or other data a person would require to ensure his or her attempt was a success (Mackey and Elliot 2013). In contrast, the environment-centric approach – the data environment perspective – seeks to address the limitations arising from a preoccupation with the data in question.
Data Environment Perspective The data environment perspective has been developed from work undertaken over a number of years (Mackey 2009; Elliot et al. 2010, 2011a, b; Mackey and Elliot 2011, 2013; Elliot and Mackey 2014). This perspective posits that you must look at both the data and the environment to ascertain realistic measures of risk and to develop effective and appropriate risk management strategies. Its component features are described thus: • Other data is any information that could be linked to the data in question, thereby enabling reidentification. There are four key types of other data: personal knowledge, publicly available sources, restricted access data sources, and other similar data releases. • Agents are those people and entities capable of acting on the data and interacting with it at any point in a data flow. • Governance processes denote how agents' relationships with the data are managed. This includes formal governance, e.g., data access controls, licensing arrangements, and policies which prescribe and proscribe agents' interactions, as well as behavior shaped through norms and practices, for example, risk aversion and a culture of prioritizing data privacy (or not).
• Infrastructure denotes how infrastructure and wider social and economic structures shape the data environment. At its narrowest level, infrastructure can be thought of as the set of interconnecting structures (physical, technical) and processes (organizational, managerial) that frame and shape the data environment. At its broadest level, it can be thought of as those intangible structures, such as political, economic, and social structures, that influence the evolution of technologies for data exploitation, as well as data access, sharing, and protection practices. The data environment perspective leads to a particular anonymization approach, that of functional anonymization.
Functional Anonymization Functional anonymization was a term first coined by Dibben et al. (2015) and later applied and developed by Elliot et al. (2016) in the Anonymisation Decision-Making Framework book and further discussed in the Elliot et al. (2018) paper, Functional Anonymisation: Personal Data and the Data Environment. Functional anonymization (FA) is, essentially, a form of anonymization which posits that one must consider both data and environment to determine realistic measures of risk and to manage that risk through reconfiguring the data and/or the environment. In the case of an open public release environment, where the environment is predetermined, the approach directs one either to rethink the dissemination environment (is it appropriate?) or to rethink what is done to the data (i.e., further mask or alter it) to manage risk. The Anonymisation Decision-Making Framework is a decision-making tool for doing functional anonymization, which guides one to consider how the law, ethics, and social expectations may interact with the process of anonymization.
Anonymisation Decision-Making Framework The Anonymisation Decision-Making Framework was developed by Mark Elliot, Elaine Mackey, Kieron O'Hara, and Caroline Tudor. It represents a broader collaborative piece of work undertaken by the UK Anonymisation Network (UKAN), of which the authors of the ADF are founding members. The ADF's broader collaborative underpinnings come from the input of UKAN's core network of 30 representatives (drawn from the public, private, and charity sectors) in considering two fundamental anonymization questions: 1. How should anonymization be defined and described, given the many different perspectives on it? 2. What should practical advice look like, given that anonymization is a complex topic requiring skill and judgment?
The core network’s input, in to addressing these questions, was captured in a series of workshops. The ADF developed, thereafter, was a ten-component framework, spanning three core anonymization activities, that of, (i) data situation audit, (ii) risk assessment and management, and (iii) impact management. The purpose of developing the ADF was to fill the gap between guidance given in the ICO’s 2012 Code of Practice on Anonymisation, and that which is needed when grappling with the practical reality of doing anonymization. In early 2018, with the advent of the General Data Protection Regulation UKAN undertook a further engagement exercise (over a 6-month period) meeting with legal and privacy experts from the UK and Europe, and UKAN’s user community, to consider two issues: 1. The likely impact of GDPR on the ADF 2. How the framework had so far been received and applied in practice, since its publication in 2016 As a result of this engagement work, a variety of new materials are being developed including a second edition of the ADF book. In providing an overview of the ADF, in this chapter material from the 2016 publication and from more recent developments has been drawn on. It is worth noting that work on the second edition of the ADF is ongoing and not yet published; the details of the component framework given here may differ to that which is given in the second edition – however the essence of what is being written about will not.
Anonymisation Decision-Making Framework The framework is founded on four principles: • Comprehensiveness principle: posits that one cannot determine identification risk by examining the data alone (the data-centric approach); you must consider both data and environment (the data environment perspective). • Utility principle: posits that anonymization is a process to produce safe data, but it only makes sense if what you are producing is safe useful data. • Realistic risk principle: is closely associated with the utility principle; it posits that zero risk is not possible if one is to produce useful data. Anonymization is about risk management. • Proportionality principle: posits that the measures you put in place to manage reidentification risk should be proportional to the likelihood and (likely) impact of that risk. The ADF has, since it was first written about, evolved into a more nuanced 12-component framework, and those components are: 1. Describe the use case. 2. Sketch the data flow.
3. Map the properties of the environment(s).
4. Describe the data.
5. Map the legal issues.
6. Meet your ethical obligations.
7. Evaluate the data situation.
8. Select the processes you will use to assess and control risk.
9. Implement your chosen approach to controlling risk.
10. Continue to monitor the data situation.
11. Plan how to maintain trust.
12. Plan what to do if things go wrong.
These components cover the same three anonymization activities noted previously: data situation audit, risk assessment and risk management, and impact management. The framework can be used to (i) establish a clear picture of one's processing activities and (ii) assess and manage risk. Anonymization, however, is not an exact science; one will still need to make judgment calls as to whether data are sufficiently "anonymized" for a given data situation (Elliot et al. 2016).
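As a rough aide-mémoire, the twelve components and three activities can be held as a simple checklist structure. The grouping of components into activities shown here is my own reading of the chapter, not a published mapping:

```python
# The 12 ADF components grouped by the three anonymization activities
# (grouping is an illustrative assumption based on component order).
ADF_COMPONENTS = {
    "data situation audit": [
        "Describe the use case", "Sketch the data flow",
        "Map the properties of the environment(s)", "Describe the data",
        "Map the legal issues", "Meet your ethical obligations",
        "Evaluate the data situation"],
    "risk assessment and management": [
        "Select the processes you will use to assess and control risk",
        "Implement your chosen approach to controlling risk",
        "Continue to monitor the data situation"],
    "impact management": [
        "Plan how to maintain trust", "Plan what to do if things go wrong"],
}

def outstanding(completed):
    """Return the components not yet marked complete, in ADF order."""
    all_steps = [s for steps in ADF_COMPONENTS.values() for s in steps]
    return [s for s in all_steps if s not in completed]
```

Such a structure can anchor an audit trail of which components have been worked through for a given data situation.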
Data Situation Audit The term data situation is used to capture the notion of data in an environment. A data situation can be either static or dynamic. Static data situations are those where data exist within a single (closed) environment – no data in or out. Most data situations are, however, dynamic: data moves because it is shared internally and/or externally. A data situation audit can be undertaken as: • A standalone piece of work, to provide an audit of one's processing activities and to demonstrate compliance with the GDPR (2016) and DPA (2018). • An input into the anonymization activity of risk assessment and control. In presenting the components of the ADF in this chapter, a simple share scenario is used for ease of illustration, involving the flow of data between three environments; data flows are commonly more complex than this.
Component 1: Describe the Use Case The use case is principally the rationale for a data share (internal or external) or release. It is likely to be a strong determinant of what data is shared, or released, to whom, and by what means, so it is important to establish the use case early on. The use case is intrinsically connected to decisions about the utility–risk balance. The aim, of course, is to provide high-utility, low-risk data that meet the requirements of the intended audience. Component 1 is about considering the why, who, and how associated with a use case, by identifying:
1. The rationale for sharing or releasing data. 2. Which groups may want to access the data being shared or released. 3. How those accessing the data might want to use it. Hypothetical Scenario of a Data Share
Let us imagine that a research team (based at University A) applies for an extract of motor vehicle accident data held by the Department of Health & Safety (DHS). The DHS agrees to share the data extract; under the terms of a data sharing agreement, the DHS will securely transfer the data extract to a safe haven at University A. In the data sharing agreement the DHS also stipulates for what purpose the research team can analyze the data and the conditions for publishing research results (i.e., research outputs must be checked against ESSNet guidelines to ensure they are not disclosive). The purpose of the research is to better understand the factors and impacts associated with vehicle accidents. This scenario is used as a basis for outlining the other components. Component 2: Sketch the Data Flow Sketching a data flow from the origin of data collection across the environments in which it will be held allows one to visualize the parameters of a data situation. Hypothetical Scenario of a Data Share
Figure 1 illustrates the data flow between the DHS, the safe haven environment, and the publication environment. Component 3: Map the Properties of the Environment(s) It is important to describe data environments in a structured way, taking into account other data, agents, infrastructure, and governance.
Fig. 1 Data flow between DHS, University A “safe haven,” researcher, publication environment
Table 1 Safe haven environment
Data to be shared: Extract of data from DHS
Other data: No unauthorized data can be brought into or removed from the safe haven; all research outputs checked to ensure they are not disclosive
Agents: Research team at University A; researchers must have completed an accreditation training course
Infrastructure: Restrictions placed on who can access the safe haven; secure IT infrastructure (ISO 27001 compliant); controls on the workspace
Governance processes: User agreement between research team and safe haven; SOP on how to work in the safe haven and penalties for misuse of the safe haven
Table 2 Research publication environment
Data to be shared: Aggregate data derived from the extract of shared data and checked against ESSNet guidelines
Other data: Potential data sources in the public domain include (1) publicly accessible sources, e.g., public records, social media data, registers, newspaper archives, etc.; (2) personal knowledge; (3) other similar datasets; (4) restricted access datasets
Agents: Potentially anyone in the world
Infrastructure: Open, few infrastructure controls
Governance processes: Open, no governance controls
This information feeds into: • Component 7: to help evaluate the data situation • Component 8: if further work on risk assessment and management is required Hypothetical Scenario of a Data Share
University A acts as the data controller for the data extract provided to it; in order to fully understand the risks associated with processing the data, the research team needs to establish the properties of the share and publication environments. Let us imagine that the safe haven environment is as described in Table 1 and that the research publication environment is as described in Table 2. Component 4: Describe the Data As well as understanding the data environment(s), you need to understand the data; one way of doing this is to create a risk profile for the data. A risk profile can be established by specifying: • The data structure: i.e., whether the data in question is numerical, text, film, an image, etc.
Table 3 Hypothetical data share – DHS–University A: data risk profile (proposed extract of shared data)
The data structure: Numerical
The data type: Microdata
The data population type: Population data
The variable types: No direct identifiers included. Quasi-identifiers: accident type, date of accident (month and year), number of vehicles involved, number of persons involved, number of fatalities, number of persons with life-changing injuries, road conditions, accident location, demographics of persons involved (age in single years, gender). Special category data: ethnic origin, concomitant health, concomitant medication
The dataset property type: Cross-sectional, large-scale dataset; age of data – 2 years old; hierarchical data – yes, family relationships captured; quality of data – good
Topic area type: Sensitive
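A risk profile like the one in Table 3 lends itself to being captured as a structured record. The following sketch is hypothetical; the field names are shorthand of my own, not ADF terminology:

```python
from dataclasses import dataclass, field

@dataclass
class DataRiskProfile:
    structure: str                  # e.g. "numerical"
    data_type: str                  # "microdata" or "aggregate"
    population_type: str            # "sample" or "population"
    direct_identifiers: list = field(default_factory=list)
    quasi_identifiers: list = field(default_factory=list)
    special_category: list = field(default_factory=list)
    sensitive_topic: bool = False

    def headline_flags(self):
        """Top-level flags that would feed the data situation evaluation."""
        flags = []
        if self.direct_identifiers:
            flags.append("direct identifiers present")
        if self.data_type == "microdata":
            flags.append("individual-level records")
        if self.special_category:
            flags.append("special category data")
        if self.sensitive_topic:
            flags.append("sensitive topic")
        return flags

# The hypothetical DHS extract, abbreviated from Table 3.
dhs_extract = DataRiskProfile(
    structure="numerical", data_type="microdata", population_type="population",
    quasi_identifiers=["accident type", "accident date", "age", "gender"],
    special_category=["ethnic origin", "concomitant health",
                      "concomitant medication"],
    sensitive_topic=True)
```

Recording the profile in a machine-readable form makes it easier to feed into components 7 and 8, and to re-audit the data situation later.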
• The data type: i.e., whether the data is microdata (at the individual level) or aggregate data (at the group level). • The data population type: i.e., whether the data represents a sample or a whole population. A whole population may be a census or all people in a particular population, such as all passengers on a shipping manifest. • The variable types: i.e., whether there are any special features, such as direct identifiers, quasi-identifiers, and special category data. • The dataset property type: this includes identifying relationships in the data structure (e.g., households), data quality, size of dataset, age of data, and time period captured (e.g., one-off or over time in the case of longitudinal data). • Topic area type: i.e., whether the topic area may be considered sensitive. Hypothetical Scenario of a Data Share
Let us imagine that the extract of shared data provided by DHS to the research team at University A has the risk profile given in Table 3. A risk profile is a top-level assessment of the data which feeds into: • Component 7: to evaluate the data situation • Component 8: if further work on risk assessment and management is required Component 5: Map the Legal Issues The movement of data across multiple environments can complicate the issue of roles and responsibilities. Establishing a profile of the legal issues can help clarify who the data controller, data processor, and data user are. The profile can be established by specifying: • The provenance of the data
Table 4 Hypothetical data share: DHS–University A legal issues profile
Provenance of the data (one needs to know where the data originates because this helps in determining processing responsibilities): DHS collates accident data across the UK from data provided by local police forces; DHS is the data controller for this dataset
Means and purpose of processing: University A is considered a data controller for the extract of shared data because the research team has determined the means and purpose of the processing in this scenario. University A's responsibilities as a joint data controller for the extract of shared data are agreed in a contract between DHS and University A
GDPR considerations: The research team's legal basis for processing the accident data is Article 6(1e) – public task; in addition, for processing special category data, Article 9(2j) – scientific and research purposes. As the proposed share is new and is on a sensitive topic area, a Data Protection Impact Assessment will be carried out by the research team; this will provide a clear audit trail of University A's processing activities and ensure that a data protection by design approach is embedded in the project at the planning stage
Mechanism for sharing: Data sharing agreement between DHS and University A
• Who has determined the purpose for which, and the manner in which, personal data is processed • GDPR requirements, i.e., a legal basis for processing, whether a data protection impact assessment is needed, etc. • Other relevant legislation enabling the share (e.g., Part 5 of the Digital Economy Act 2017; the common law duty of confidence) • The method used for formalizing the share, e.g., a data sharing agreement, contract, or licensing agreement Hypothetical Scenario of a Data Share
Let us imagine that the data extract shared between DHS and University A has the profile described in Table 4 (legal issues profile). Component 6: Meet Your Ethical Obligations Ethics is an important consideration because anonymization invariably involves the processing of personal data; and even once data has gone through the anonymization process, there are reasons for thinking about ethics: (i) data subjects might not want data about them being reused
Table 5 Hypothetical data share – DHS–University: assessment of data situation sensitivity
1. Is the proposed purpose for which the data will be reused likely to be considered inconsistent with the original data collection purpose? No
2. Is the planned data share new? No
3. Does the project involve a commercial sector partner, investor, or benefactor? No
4. If a commercial sector partner is involved, will they have access to the data? No
5. Are special category data being accessed? Yes
6. Have data subjects consented to the proposed reuse of the data? No
7. Have any public engagement activities been planned? Yes
8. Is information about the project clear and accessible? Yes
Evaluation: The more you answer yes to questions 1–5 and no to questions 6–8, the greater the data situation sensitivity. Given the answers to questions 1–8, the level of data situation sensitivity is considered relatively low
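The yes/no assessment in Table 5 can be scored mechanically, bearing in mind that the chapter offers a heuristic rather than a formula; the point-based scoring below is an illustrative assumption of my own:

```python
def sensitivity_score(answers):
    """Score the eight yes/no questions from the chapter's heuristic.

    `answers` maps question number (1-8) to True (yes) / False (no).
    A yes to Q1-Q5 and a no to Q6-Q8 each add one point, so higher
    scores (max 8) indicate a potentially more sensitive data situation.
    """
    score = sum(1 for q in range(1, 6) if answers[q])
    score += sum(1 for q in range(6, 9) if not answers[q])
    return score

# The answers as given in Table 5: only Q5 is yes among 1-5;
# Q6 is no; Q7 and Q8 are yes.
table5 = {1: False, 2: False, 3: False, 4: False, 5: True,
          6: False, 7: True, 8: True}
```

The Table 5 answers score 2 out of a possible 8, consistent with the chapter's judgment that the data situation sensitivity is relatively low.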
in general, by specific third parties, or for particular purposes and (ii) the risk of a confidentiality breach is not zero. The notion of data situation sensitivity is key here; underpinning this concept is the idea of reasonable expectations about data use and/or reuse – more especially, the need not to violate data subjects' and the public's expectations about how data can be used. An assessment of a data situation's sensitivity can be carried out by answering the following questions: 1. Is the proposed purpose for which the data will be reused likely to be considered inconsistent with the original data collection purpose? 2. Is the planned data share new? 3. Does the project involve a commercial sector partner, investor, or benefactor? 4. If a commercial sector partner is involved, will they have access to the data? 5. Are special category data being accessed? 6. Have data subjects consented to the proposed use/reuse of the data? 7. Have any public engagement activities been undertaken? 8. Is information about the proposed data processing clear and accessible to key stakeholders? The more you answer yes to questions 1–5 and no to questions 6–8, the greater the likelihood of a potentially sensitive data situation (Mackey and Thomas 2019). Hypothetical Scenario of a Data Share
Let us imagine that the data situation sensitivity for the extract of shared data is as described in Table 5 (assessment of data situation sensitivity). Component 7: Evaluate the Data Situation The parameters of a data situation can be determined by mapping the expected flow of data across the environments involved in the data share or dissemination. The data flow can then be populated with the key relevant information: across the data flow as a whole, the use case and sensitivity profile; and within each environment, the data risk
profile, the description of the environment, the legal issues profile, and the data sensitivity profile. This evaluation should feed into the next anonymization activity: risk assessment and risk management.
Risk Assessment and Risk Management

Component 8: Select the Processes You Will Choose to Assess and Control Risk

A formal assessment may include all three parts of the process described below or only some of them.

1. An analysis to establish relevant plausible scenarios for the data situation under review. Scenario analysis considers the how, who, and why of a potential breach.
2. Data analytical approaches to estimate risk, given the scenarios developed under procedure 1.
3. Penetration testing to validate the assumptions made in procedure 2, by simulating attacks using "friendly" intruders.

The first procedure is always necessary, while the second and third may or may not be required depending on the conclusions drawn from the preceding procedures.

Component 9: Implement Your Chosen Approach to Control Risk

Processes for controlling risk essentially attend to either or both elements of a data situation: the data and their environment. If the risk analysis in Component 8 suggests that stronger controls are necessary, then there are two non-exclusive options:

1. Change the data specifications.
2. Reconfigure the data environment.

Component 10: Continue to Monitor the Data Situation

Once you have shared or disseminated data, you need to continue to monitor the risk associated with those data over time. This may involve monitoring technological developments and the availability of other data, as well as keeping a register of all data shared to cross-reference with future releases.
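Component 10's advice to keep a register of all data shared, cross-referenced with future releases, can be sketched as a minimal structure. The fields and the overlap check below are illustrative assumptions, not a prescribed design; a real register would record far more detail.

```python
# Minimal sketch of a data-share register (Component 10).
# Field names are illustrative; a production register would hold more detail.
from dataclasses import dataclass, field

@dataclass
class DataShare:
    recipient: str
    date: str             # ISO date of the share
    variables: frozenset  # variables included in the extract
    environment: str      # e.g. "secure lab", "public release"

@dataclass
class ShareRegister:
    shares: list = field(default_factory=list)

    def record(self, share: DataShare) -> None:
        self.shares.append(share)

    def overlapping(self, proposed_vars: set) -> list:
        """Return past shares whose variables overlap a proposed release,
        so the combined disclosure risk can be re-assessed."""
        return [s for s in self.shares if s.variables & proposed_vars]
```

Before a new release, `overlapping` flags earlier shares containing any of the same variables, prompting a fresh assessment of what the releases might reveal in combination.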
Impact Management

Much of what has been set out thus far has been framed in terms of risk management, but one should in addition prepare for the worst. Impact management is about putting in place a plan for reducing the impact of a reidentification should one happen.

Component 11: Plan How to Maintain Trust

Effective communication with key stakeholders is crucial to building trust and credibility, both of which are critical in difficult situations where a data controller might need to be heard, understood, and believed. The key point is that a data controller is better placed to manage the impact of a reidentification if they and their stakeholders have developed a good working relationship.

Component 12: Plan What to Do if Things Go Wrong

In the rare event that a breach does occur, it is recommended that:

1. All processing activities are clearly documented to provide a clear audit trail. This is in fact a requirement under the GDPR (2016/679), which has newly introduced the principle of accountability.
2. Plans and policies are developed and communicated to ensure it is clear who should do what, when, and how if a breach occurs.
Concluding Remarks

This chapter provides an introduction to the topic of anonymization, first outlining the key terminology of pseudonymization and anonymization. Pseudonymization is described as addressing no more than the risk arising directly from data, while anonymization should address the risk arising both directly and indirectly from data. To classify and address identification risk arising directly and indirectly from data, it has been argued in this chapter that one must take account of data in relation to the environment in which they exist. Thus anonymization is defined as a process whereby personal data, or personal data and their environment, are modified in such a way as to render data subjects no longer identifiable. The discussion then moves on to illustrate how the way one thinks about the reidentification problem influences how one applies the concepts of pseudonymization and anonymization. There are two approaches to thinking about the reidentification problem: the data-centric and the environment-centric approach. The data-centric approach is the traditional and dominant one – it focuses on data at the expense of key considerations associated with the who, how, and why of reidentification. In contrast, the data environment approach considers data in relation to their environment(s) in order to take account of the who, why, and how of reidentification. It is this latter approach that informs and underpins the Anonymisation Decision-making Framework. Just as there are two perspectives on how to understand and address the reidentification problem, there are two approaches to the role of anonymization in data protection: the absolute approach, which argues that anonymization should be irreversible, and the risk-based approach, which argues that there is an inherent risk of reidentification in all useful data, so that the role of the data controller is to ensure that the risk is remote.
It is the latter approach that underpins the Anonymisation Decision-making Framework. Anonymization should be understood as a process inseparable from its purpose of analysis and dissemination, which means it only makes sense if what is produced is useful anonymized data. It is possible to have both high-utility and low-risk data if one considers data in relation to their environment(s). Anonymization understood in this way is what Elliot et al. (2016, 2018) call functional anonymization. The ADF is a tool for doing functional anonymization. In the final section of the chapter, a decision-making tool for doing well-thought-out anonymization was introduced: the Anonymisation Decision-making Framework. The ADF unifies the legal, ethical, social, and technical aspects of anonymization into a single framework to provide best practice guidance.
Cross-References

▶ Big Data
▶ Biosecurity Risk Management in Research
▶ Creative Methods
▶ Ethical Issues in Data Sharing and Archiving
▶ Privacy in Research Ethics
▶ Research Ethics Governance
▶ Research Ethics in Data: New Technologies, New Challenges
References

Arrington M (2006) AOL proudly releases massive amounts of user search data. TechCrunch. http://tinyurl.com/AOL-SEARCH-BREACH. Accessed 30 May 2016
Atokar (2014) Riding with the stars: passenger privacy in the NYC taxicab dataset. http://tinyurl.com/NYC-TAXI-BREACH. Accessed 30 May 2016
Barth-Jones D (2012) The identification of Governor William Weld's medical information: a critical re-examination of health data identification risks and privacy protections, then and now. https://fpf.org/wp-content/uploads/The-Re-identification-of-Governor-Welds-Medical-Information-Daniel-Barth-Jones.pdf
Barth-Jones D (2015) How anonymous is anonymity? Open data releases and re-identification. Data & Society. https://datasociety.net/pubs/db/Barth-Jones_slides_043015.pdf
Barth-Jones D (2016) Why a systems-science perspective is needed to better inform data privacy public policy, regulation and law. Brussels Privacy Symposium, Nov 2016
CNN Money (2010) 5 data breaches: from embarrassing to deadly. http://tinyurl.com/CNNBREACHES/. Accessed 30 May 2016
Concordat on Open Research Data (2016). https://www.ukri.org/files/legacy/documents/concordatonopenresearchdata-pdf/
Dibben C, Elliot M, Gowans H, Lightfoot D, Data Linkage Centres (2015) The data linkage environment. In: Harron K, Goldstein H, Dibben C (eds) Methodological developments in data linkage, first edition. John Wiley & Sons
Duncan GT, Elliot MJ, Salazar-Gonzalez JJ (2011) Statistical confidentiality. Springer, New York
Elliot M, Mackey E (2014) The social data environment. In: O'Hara K, David SL, de Roure D, Nguyen CM-H (eds) Digital enlightenment yearbook. IOS Press, Amsterdam
Elliot M, Lomax S, Mackey E, Purdam K (2010) Data environment analysis and the key variable mapping system. In: Domingo-Ferrer J, Magkos E (eds) Privacy in statistical databases. Springer, Berlin
Elliot M, Smith D, Mackey E, Purdam K (2011a) Key variable mapping system II. In: Proceedings of the UNECE worksession on statistical confidentiality, Tarragona, Oct 2011
Elliot MJ, Mackey E, Purdam K (2011b) Formalizing the selection of key variables in disclosure risk assessment. In: 58th congress of the International Statistical Institute, Dublin, Aug 2011
Elliot M, Mackey E, O'Hara K, Tudor C (2016) The anonymisation decision-making framework. UKAN Publication, Manchester, United Kingdom
Elliot M, O'Hara K, Raab C, O'Keefe C, Mackey E, Dibben C, Gowans H, Purdam K, McCullagh K (2018) Functional anonymisation: personal data and the data environment. Comput Law Secur Rev 34(2):204–221
ESSNet (2007) Guidelines for the checking of output based on microdata research. Workpackage 11, Data without Borders, Project No. 262608. Authors: Steve Bond (ONS), Maurice Brandt (Destatis), Peter-Paul de Wolf (CBS). Online at https://ec.europa.eu/eurostat/cros/content/guidelines-output-checking_en
Fienberg SE, Makov UE, Sanil A (1997) A Bayesian approach to data disclosure: optimal intruder behaviour for continuous data. J Off Stat 13(1):75–89
Gymrek M, McGuire AL, Golan D, Halperin E, Erlich Y (2013) Identifying personal genomes by surname inference. Science 339(6117):321–324. https://doi.org/10.1126/science.1229566
Hundepool A, Domingo-Ferrer J, Franconi L, Giessing S, Nordholt ES, Spicer K, de Wolf PP (2012) Statistical disclosure control. Wiley, London
ICO (2012) Anonymisation: managing data protection risk code of practice. https://ico.org.uk/media/1061/anonymisation-code.pdf
Mackey E (2009) A framework for understanding statistical disclosure control processes. PhD thesis, The University of Manchester, Manchester
Mackey E, Elliot M (2011) End game: can game theory help us explain how a statistical disclosure might occur and play out? CCSR working paper 2011-02
Mackey E, Elliot M (2013) Understanding the data environment. XRDS 20(1):37–39
Mackey E, Thomas I (2019) Data protection impact assessment: guidance on identification, assessment and mitigation of high risk for linked administrative data. Report for the Administrative Data Research Partnership
Mourby M, Mackey E, Elliot M, Gowans H, Wallace S, Bell J, Smith H, Aidinlis S, Kaye J (2018) Anonymous, pseudonymous or both? Implications of the GDPR for administrative data. Comput Law Secur Rev 34(2):222–233
Ohm P (2010) Broken promises of privacy: responding to the surprising failure of anonymization. UCLA Law Rev 57(1701):1717–1723
Open Research Data Taskforce with Michael Jubb (2017) Research data infrastructure in the UK landscape report. https://www.universitiesuk.ac.uk/policy-and-analysis/research-policy/openscience/Pages/open-research-data-task-force.aspx
Open Research Data Taskforce (2018) Realising the potential. Open Research Data Taskforce final report. https://www.gov.uk/government/publications/open-research-data-task-force-final-report
Rubinstein I (2016) Brussels Privacy Symposium on identifiability: policy and practical solutions for anonymisation and pseudonymisation – framing the discussion. In: Proceedings of Brussels Privacy Symposium: identifiability: policy and practical solutions for anonymisation and pseudonymisation, Brussels, Nov 2016. https://fpf.org/wp-content/uploads/2016/11/Mackey-Elliot-and-OHara-Anonymisation-Decision-making-Framework-v1-Oct-2016.pdf
Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation). Online at https://eur-lex.europa.eu/legal-content/EN/TXT/?qid=1568043180510&uri=CELEX:32016R0679
Sweeney L (1997) Weaving technology and policy together to maintain confidentiality. J Law Med Ethics 25(2–3):98–110. https://doi.org/10.1111/j.1748-720X.1997.tb01885.x
UK Data Protection Act (2018) The Stationery Office, London. Online at http://www.legislation.gov.uk/ukpga/2018/12/contents/data.pdf
Willenborg L, de Waal T (2001) Elements of statistical disclosure control. Springer, New York
Deception
Its Use and Abuse in the Social Sciences
19
David Calvey
Contents

Introduction ................................................. 346
Framing Deception ............................................ 346
Classical and Contemporary Exemplars: The Deceptive Diaspora . 349
The Ethical Regulation of Deception .......................... 355
Deception in Action: A Covert Study of Bouncers in the Night-time Economy . 357
The Future of Deception: Cyber Research and Autoethnography .. 359
Conclusions: Deception as a Creative Part of the Ethical Imagination and Research Toolkit . 361
References ................................................... 363
Abstract
Deception is a controversial and emotive area, which is strongly associated with the transgression, violation, and breaching of research integrity. Deception has been an object of both fear and fascination for many researchers and practitioners for a lengthy period of time. For some, deception runs counter to the established principle of informed consent and hence has no place in ethical decision-making. However, for others, deception does have a rich, albeit submerged, role to play in the critical imagination and toolkit of the researcher. Deception occupies a classic love-or-loathe position in social research, which often results in extreme responses from its audiences on both sides. For me, deception can be justified and has been successfully used in various research settings across the social sciences to gain rich insider knowledge and creatively manage the problem of artificiality. It has long been demonized, maligned, and stigmatized in various research communities and is underused. This chapter shall frame the usage of deception in different contexts – popular culture, occupational, and social scientific. The author shall explore the diaspora of classical and contemporary exemplars of deception, followed by some reflections on the longitudinal use of deception in his covert sociological study of bouncers in the night-time economy of Manchester. The chapter shall also examine the increasing ethical regulation of deception as well as investigate the future landscape for its use and abuse.

Keywords

Bouncers · Covert · Criminology · Cyber · Deception · Ethnography · Ethics · Sociology · Violence

D. Calvey (*)
Department of Sociology, Manchester Metropolitan University, Manchester, UK
e-mail: [email protected]

© Springer Nature Switzerland AG 2020
R. Iphofen (ed.), Handbook of Research Ethics and Scientific Integrity, https://doi.org/10.1007/978-3-030-16759-2_15
Introduction

The chapter is organized into seven sections. Following this introduction, deception shall be framed by exploring the different contexts and categories of its use, including definitions and genealogies. Third, a diaspora of classical and contemporary exemplars from the social sciences shall be critically outlined. Fourth, the author's covert ethnography of bouncers in the night-time economy of Manchester shall be reviewed as a display of deception in action. Fifth, the ethical governance and regulation of deception shall be examined, which has broadly stifled the use of many deceptive strategies in social research. Sixth, the future landscape of deception shall be investigated in the form of cyber research and autoethnography, followed finally by a conclusion. The rationale here is the rehabilitation of deception in scientific research, such that it can be used creatively in appropriate circumstances rather than subjected to a crude and restrictive blanket rejection as a frowned-upon, marginalized, and stigmatized last resort and outlaw methodological pariah (Calvey 2017, 2018, 2019). Detailed and comparative genealogies of deception are limited, stifled, subsumed, and glossed over in many research methodology literatures, partly due to the inflated emotive and ethical responses to the uses and consequences of deception. Deception should form part of a robust scientific research imagination and a creative toolkit rather than be locked away in a closet as an unethical horror.
Framing Deception

Deception has interested scholars from multidisciplinary backgrounds for some time (Levine 2014). Masip et al. broadly define deception as:

The deliberate attempt, whether successful or not, to conceal, fabricate, and/or manipulate in any other way, factual and/or emotional information, by verbal and/or nonverbal means, in order to create or maintain in another or others a belief that the communicator himself or herself considers false. (2004, p. 147)
In sociological traditions, which trade on a covert approach (Calvey 2017), deception is typically associated with fake passing within a specific identity group and/or subculture under study in order to gain firsthand insider knowledge. What Rohy usefully describes as: "passing designates a performance in which one presents oneself as what one is not" (1996, p. 219). Gamer and Ambach (2014) argue that "Research on deception has a long tradition in psychology and related fields. On the one hand, the drive for detecting deception has inspired research, teaching and application over many decades" (2014, p. 3). Lie detector tests, or polygraphs, have been in popular use by police and security agencies worldwide since the technology was invented at the University of California, Berkeley, United States, in 1921 by John Larson, a medical student and police officer. Various psychologists have focused on detecting deception in body and face demeanor and micro-expressions (Ekman 1985). Robinson (1996) makes a useful distinction between the interpersonal and institutional contexts of deception and views lies, falsity, belief, and intentionality as the core areas of the multidisciplinary study of deception. Methodologically, deception has been used in various psychological field and laboratory experiments through deliberate staging, managed misinformation, and the use of confederates who fake and feign various psychological states and symptoms. For those in this tradition, the use of deception is a key part of experimental control and is justified to help manage artificiality and avoid overly reactive subjects. Using deception in such traditions is typically legitimated by the detailed retrospective debriefing of the subject. Korn (1997) argues that deception is used in social psychology to create "illusions of reality," which has led to some of the most dramatic and controversial studies in the history of psychology. Herrera, in a similar historical review, stresses that "deception is still treated as if it is a necessary evil, which it may be for some time" (1997, p. 32).
Brannigan (2004) views experiments as theaters and provocatively argues that the new regulatory ethical environment will ultimately lead to the end of experimental social psychology. He hopes that such a death will help rehabilitate the field. For Brannigan (2004), deceptive experimental methodology has ironically contributed to both the rise and fall of social psychology. Deception is bound up with lying in complex ways. Lying is typically perceived as a common form of deception and deceit in everyday life. What Adler (1997) usefully characterized as "falsely implicating." Moral philosophy has long explored the topic as related to the development of our shared moral compass and conscience in society from childhood. What Bok (1978) refers to as a series of "moral choices" in both our private and public lives. Barnes (1983), in his innovative sociological view of lying, argues that "lying has been a human activity for a long time, and is not merely a recent and regrettable innovation" (1983, p. 152). Barnes adds that it is "generally regarded as a form of deviance that needs to be explained, whereas truthfulness is taken for granted as the normal characteristic of social intercourse" (1983, p. 152). Barnes (1994, p. 1) later opens his book by boldly stating: "Lies are everywhere." Barnes views lying as a mode of deception where "different contexts, however, provide different possibilities for deceit" (1994, p. 19). Deception is also bound up with secrecy in society, which is an accepted systemic feature of modern societies (Simmel 1906; Bok 1982; Barrera and Simpson 2012). Deception is the core specialist business of MI5 and MI6 in the United Kingdom
and the Federal Bureau of Investigation (FBI) and the Central Intelligence Agency (CIA) in the United States, which are familiar names to the public. The Official Secrets Act (1889, 1911, 1920, 1989) is still in operation in the United Kingdom for the purposes of protecting national security, and there is very similar legislation in Hong Kong, India, Ireland, Malaysia, Myanmar, and Singapore. Clearly, spying and espionage have long roots in several societies (Goldman 2006). What Pfaff and Tiel (2004) describe as specific "military and intelligence ethics." In many ways, this is the accepted face of professionalized deception, particularly in the current and intensified context of global counter-terrorism. In this sense, state deception is bound up with ideas of public trust (Shilling and Mellor 2015). Turning to the different contexts of usage for deception, it is very clear that deception is woven into the fabric of everyday life and is part of the public imagination and popular culture. This context involves media scandals, exposé documentaries, and comedic dupery from mass entertainment. We have become a voyeur nation (Calvert 2000), obsessed with, and indeed normalized to, peering at and watching others and strangers. Undercover journalists like Nellie Bly, Mimi Chakarova, Barbara Ehrenreich, Anna Erelle, Stuart Goldman, John Howard Griffin, Stetson Kennedy, Suki Kim, Tim Lopes, Mazher Mahmood, James O'Keefe, Donal MacIntyre, Antonio Salas, Roberto Saviano, Gloria Steinem, Chris Terrill, Polly Toynbee, Eva Valesh, Norah Vincent, Günter Wallraff, and Peter Warren, to name some of the most celebrated, have become household names in their respective countries and are probably more familiar to the public than academics using deception. These journalists have investigated a long list of sensitive topics: wrongdoings, corruption, cover-ups, breadline poverty, violence, abuse, and criminal, deviant, and extremist organizations.
Broadly, their investigative logic would often be a paradoxical one of "lying to get to the truth" in a sort of "quick and dirty realism" manner. Many of them have pushed the envelope and placed themselves in considerable personal risk and danger. The evocative, partisan, cavalier, and heroic populist image of deception has been tied up with these figures rather than academic ones, particularly when their deceptive investigations have been widely televised, caused headline scandals and media moral panics, and in some cases become major films. Such an image partly causes a problem of credibility and legitimacy for academic deceptive researchers. The imperatives and aims of their deception game differ from academic ones, and the differences should not be simply collapsed. Indeed, some academics fear that deceptive research has been effectively hijacked by investigative journalism (Van Den Hoonaard 2011). In a very different context, the invented characters of controversial comedians Sacha Baron Cohen, Dom Joly, Novan Cavek, and Marc Wootton provocatively use antagonistic deception in their performances to satirize and challenge current thinking, celebrity culture, political figures, and household institutions. Such television hoaxing programs have attracted and sustained high viewing figures over a number of years. Clearly, deception sells well, and the cogent narrative it can weave has stood the test of time. In the commercial world, deception has a credible place. The large supermarkets in the United Kingdom extensively use mystery shoppers and simulated clients as a
way of gaining data to profile consumption patterns. The Department for Work and Pensions in the United Kingdom routinely uses surveillance tactics to investigate suspected fraudulent benefit claims. Various councils use undercover agents to review the health and safety, hygiene, and fair trading standards of the restaurant and hospitality industry. In the high-end culinary world, revered Michelin stars are still awarded on the basis of covert dining visits, and private detectives are regularly used as an alternative way to spy on failing intimate relationships (Calvey 2017). The medical world has also routinely used placebo experiments. This typically involves controlled and comparative experiments in clinical trials where participants are deceived as to what they are and are not consuming. Such accepted deception has a long history in medical science (Beecher 1955; Gotzsche 1994). Lichtenberg et al., in a review of the role of the placebo in clinical practice, provocatively argue:

The placebo is the most commonly-employed treatment across cultures and throughout history. Today's physician, resting on her evidence-based laurels, might have no trouble accepting this claim when considering the medical practice of yore. After all, what else can one make of the potions, herbs, leechings and rituals of our distant colleagues of an earlier age—medicine men, shamans, wizards—if not that they were, wittingly or ignorantly, purveyors of placebos? (2004, p. 215)
Deception has also played a significant role in police culture in different ways. As expected, the police covertly investigate a wide range of sensitive topics including pedophilia, drug abuse, football hooliganism, people trafficking, and counterfeit goods. What Loftus and Goold (2012) elegantly describe as the invisibilities of policing. There is an industry of glamorized and gritty firsthand accounts of the undercover work of former police and security professionals, some of which have been popularized in films. Again, this can produce a rather glossed, mythic, heroic, and romantic image of deception. These different professional and practitioner contexts are more practically than theoretically driven and have different agendas, aims, challenges, and legal sensitivities than academic ones, which must be recognized and appreciated. Thus, certain forms of deception in our society that are seen to be motivated by the public interest and are contained in the hands of specialist professionals with expert knowledge are more readily accepted and expected. What is clear is that deception has been used in hybrid ways over the years and is intrinsically woven into society. It is clearly still an object of both fear and fascination.
Classical and Contemporary Exemplars: The Deceptive Diaspora

There is a wide and dispersed range of studies and topics in the deceptive diaspora. On further granulation, many of these deceptive studies are not purist but employ more mixed strategies such as gatekeeping and key informants. The diaspora then is more akin to a continuum than a fixed state of deception. The studies, from
different eras, are drawn from various fields across the social and human sciences, including anthropology, investigative journalism, psychology, and sociology. Some of the studies in the diaspora are what can be termed the "usual suspects" (Calvey 2017), which conventionally frame the field and often attract ongoing scholarship, while others are less popular but still instructive gems. Because deception studies is not an incremental, integrated, or cross-fertilized field, some of the studies have a rather stand-alone status in their respective fields. I will present them in chronological order. Nellie Bly, an alias for Elizabeth Jane Cochrane (1864–1922), was an inspirational icon for feminists, with her courageous and early covert study of a women's lunatic asylum in New York, evocatively titled Ten Days in a Mad-House (1887). She revealed the brutalization of inmates, which resulted in police investigations and legal reform. This was one of her first undercover assignments as a young reporter, at 23 years old. She gained entry after feigning hysteria and a court appearance. Bly struggled to get released from the asylum and needed the support of her newspaper to verify her story as a genuine journalist. The Tuskegee syphilis study (1932), in Alabama, United States, was a notorious example and historical marker of belligerent and harmful deceptive research, which strategically used medical misinformation and controlled nontreatment. In many ways, this became a negative landmark and milestone public health study that changed the face of modern ethical regulation of research, principally the primacy of informed consent and the protection of the subject against harm. Sensitive concerns about the institutionalized racism that the study clearly displayed (Brandt 1978) are well documented and still quoted today.
Thomas and Quinn sum up the perception of this flawed and reckless public health experiment when they stress that "there remains a trail of distrust and suspicion" (1991, p. 1498). Paul Cressey, from the University of Chicago's famous Department of Sociology, wrote The Taxi-Dance Hall (1932) after longitudinal materials on taxi dance halls were gathered over a 5-year period by a team of covert investigators acting as "anonymous strangers." This was a pioneering early study of the commercialization of sex work, which influenced future scholars researching sexual deviance and sex work. While serving as a case worker and special investigator for the Juvenile Protective Association in Chicago in the summer of 1925, Cressey (1899–1969) was asked to report on the new and morally controversial "closed dance halls," open to male patrons only, as prototypical early strip joints. For Cressey, the growth of such spaces was an inevitable feature of modernity and leisure capitalism. Cressey sympathetically points toward subjugation as he states: "Feminine society is for sale, and at a neat price" (1932, p. 11). In his exploration of racial prejudice and discrimination, psychologist Richard LaPiere (1934) travelled throughout the United States with a Chinese couple, visiting restaurants and using deceptive tactics along the way. The work of Leon Festinger and colleagues within social psychology was seminal in the study of religious cults, particularly the application of cognitive dissonance to the management of individual and group delusion. Festinger and colleagues did not use
gatekeeping arrangements and used a team of trained researchers, including the authors, posing as “ordinary members” during their work in publishing When Prophecy Fails (1956). This work was a seminal psychological study of religious cults. Festinger et al. (1956) state “Our observers posed as ordinary members who believed as the others did” (1956, p. 237) and stress that their work was “as much a job of detective work as of observation” (1956, p. 252). Melville Dalton, an organizational sociologist, explored management work culture and bureaucracy in Men Who Manage: Fusions of Feeling and Theory in Administration (1959). It is very distinctive in the length of time he spent in a sustained covert role, which was around a decade gathering rich longitudinal organizational data. His justification of his covert stance centers around his critique that “controlled experiments which are not suited to my purpose” (1959, p. 275). Dalton innovatively used an extensive network of key informants or what he describes as “intimates.” Seminal sociologist Erving Goffman’s iconic book Asylums (1961) was to have a major influence on the social science community, including healthcare fields and the anti-psychiatry movement. Goffman spent a year doing fieldwork in the mid-1950s in this gatekeeping commissioned piece of covert ethnography, which radically put the patient perspective at the heart of the analysis. Despite the recognized methodological glosses, his study cogently explored “the mortification of self” within a total institution. His covert insider account, as an assistant to the physical director, was an innovative part of developing “a sociological version of the self” (1961, p. xiii) in such a setting. Goffman later commented, published after his death, on fieldwork strategies: . . 
. . . with your 'tuned up' body and with the ecological right to be close to them (which you've obtained by one sneaky means or another) you are in a position to note their gestural, visual, bodily responses to what's going on around them and you're empathetic enough – because you've been through the same crap they've been taking – to sense what it is they're responding to. (1989, p. 125)
Psychologists Latane and Darley (1969) explored "bystander apathy" and the "bystander effect" (Darley and Latane 1970) in strangers helping people in public, using deceptive confederates in a number of field experiments. This work was later extended by Darley and Batson (1973), with similar deceptive field tactics, in their famous "Good Samaritan" experiments. Similar psychological field experiments on public honesty and stealing have used lost letters and lost money, both early on (Merritt and Fowler 1948) and more recently (Farrington and Knight 1979). Sociologist Laud Humphreys' Tearoom Trade (1970) is an infamous landmark study found in most ethics handbooks. The semi-covert study was based on his sociology doctoral thesis and analyzed "the social structure of impersonal sex" (p. 14), which was a criminal act of sexual deviance at the time in the United States. The covert stages included his covert participant observation as a voyeuristic "watch queen" and his transgressive role as a fake health researcher doing home interviews, combined with the less recognized overt interview data from his "intensive dozen" of key
352
D. Calvey
informants. His work was to have a seminal impact on sexuality studies. The Humphreys trope forms part of sociological folklore, and the ethical landscape was never the same again. Humphreys was a gay man himself, and this was a profoundly partisan and activist piece of work which attempted to de-stigmatize homosexuality. Part of this was his valiant effort to protect the participants' anonymity, despite intense police pressure to incriminate them, the clear threat of personal prosecution, and attempts to discredit him academically. Humphreys reflects: "There was no question in my mind that I would go to prison rather than betray the subjects of my research" (1975, p. 230). Humphreys defiantly describes his cruder critics as "Ayatollahs of Research Ethics" (1980, p. 714). Criminologist James Patrick, a pseudonym, provides a rich covert participant observation account of a juvenile gang in Glasgow over 4 months in the mid-1960s in A Glasgow Gang Observed (1973). His account of brutality and violence, alongside camaraderie and fictive kinship, became a seminal study of juvenile delinquency and a precursor to modern research on youth gangs. This undercover study would not have been possible without the secret collusion of a key informant, a gang leader whom he had previously taught. His passing as a gang member, particularly in blurring any age differences, was very artfully done. Patrick resolved to be a "passive participant," but this still presented him with a complex set of ethical dilemmas in terms of what he witnessed during his fieldwork. Ultimately, for Patrick: "In fact it was the internal struggle between identification with the boys and abhorrence of their violence that finally forced me to quit" (1973, p. 14). The Rosenhan experiment (1973), provocatively titled "Being Sane in Insane Places," was a very influential covert pseudo-patient study, which had a considerable impact on the anti-psychiatry movement.
The field experiment consisted of eight pseudo-patients, including psychologist David Rosenhan of Stanford University, feigning the same mental health problem in order to gain admission to different psychiatric hospitals. The majority of them gained admission after being wrongly diagnosed and were quickly given medication for schizophrenia. Rosenhan polemically asks of psychiatric diagnosis: "If sanity and insanity exist, how shall we know them?" (1973, p. 250). Rosenhan astutely claims that a "psychiatric label has a life and an influence of its own" (1973, p. 253) and that these environments "seem undoubtedly counter therapeutic" (1973, p. 257). Rosenhan justifies his deceptive stance by stressing: "Without concealment, there would have been no way to know how valid these experiences were" (1973, p. 258). The study caused outrage among various professional psychiatrists. Stanley Milgram's work in social psychology, particularly his obedience to authority experiments (1974), often popularly characterized as the pain or torture experiments, was to become a landmark which impacted on a very broad audience both inside and outside academia. The experiments were undertaken at Yale University in the 1960s, and repeated with 18 different variations, before being collected in Obedience to Authority (1974). The Holocaust provides the wider political and rather emotive backdrop to his work.
Milgram was clearly influenced directly by his mentor Solomon Asch and his conformity experiments, which extensively used confederates, as well as by Allport's work on personality theory. Milgram's highly staged experiments involved various types of deception, including confederates and a fake electric shock machine. They were ultimately designed to "shock" and produce counterintuitive results. Milgram (1977), in an interesting later defense of his experiments, states: A majority of the experiments carried out in social psychology use some degree of misinformation. Such practices have been denounced as "deception" by critics, and the term "deception experiment" has come to be used routinely, particularly in the context of discussions concerning the ethics of such procedures. But in such a context, the term "deception" somewhat biases the issue. It is preferable to use morally neutral terms such as "masking", "staging", or "technical illusions" in describing such techniques. (1977, p. 19)
Milgram became the standard trope on deception, attracting what Miller accurately describes as "unprecedented and continuing interest and impact" (2013, p. 17); Blass (2004), an authoritative figure in Milgram scholarship, titled his book The Man Who Shocked the World. Questions of ethics and data reliability have been consistently raised (Baumrind 1964, 2013, 2015; Perry 2013; Brannigan et al. 2015), with Brannigan et al. (2015) recently calling for "unplugging the Milgram machine." The American Association for the Advancement of Science initially awarded Milgram a research prize in 1964, but this was later revoked on ethical grounds over the question of Milgram causing deliberate harm to the participants. Psychologist Philip G. Zimbardo's Stanford prison experiment (Haney et al. 1973; Zimbardo 2007) was an extreme simulation funded by the Office of Naval Research and approved by the Stanford Human Subjects Research Review Committee. Zimbardo, who took the role of prison superintendent as well as researcher, was influenced by Milgram in terms of his situationalist analysis. The experiment was an attempt to explore labelling, survival group dynamics, and situational extremity but was pulled after only 6 days because of the increasing brutality of the guards toward prisoners and the harm caused to the participants. Although participants knew about the simulation, a degree of deception-induced stress was caused as prisoners were subjected to unexpected mock arrests at home, in full view of neighbors, followed by police custody. Debriefing was done by both Milgram and Zimbardo, but doubts have been raised about its consistency and the genuine level of support offered. Working within the controversial pseudo-patient tradition is R. W. Buckingham and colleagues' provocative and underutilized Living with the Dying (1976).
This study was conducted by medical anthropologists who, with careful gatekeeping, used covert participant observation to explore the culture of treating terminal cancer patients in a palliative care ward in a hospital in Montreal, Canada. Despite lasting only 9 days, this was a very intense form of passing. The units were told they were being evaluated, but not in what form. Buckingham passionately committed himself to an embodied covert role, assuming the role of a patient with terminal pancreatic cancer, with a second medical anthropologist acting as his cousin and his regular key research contact.
Buckingham et al. stress the emotional angst of his disguised role and his feelings of going native: "he identified closely with these sick people and became weaker and more exhausted. He was anorexic and routinely refused food. He felt ill" (1976, p. 1212). Their results showed that, although the needs of the dying and their families are widely recognized, the patient perspective still needs emphasizing. Some distancing by medical staff resulted in feelings of isolation and abandonment for the dying. They argued that such vulnerable groups need better-resourced specialized care. Buckingham et al. humanely conclude:

There is a need for comfort, both physical and mental, for others to see them as individuals rather than as hosts for their disease, and for someone to breach the loneliness and help them come to terms with the end. (1976, p. 1215)
Nancy Scheper-Hughes, a medical anthropologist, conducted a controversial undercover ethnography of global organ trafficking (2004), which took her to 12 different countries and into a protected medical power elite, and which was to have a concrete impact on policy and criminal prosecutions. She later describes her politicized work as a form of "engaged ethnography" (Scheper-Hughes 2009). She used mixed methods, which included several important deceptive roles to access delicate and protected information. Among other roles, she briefly posed as a kidney buyer in a suitcase market in Istanbul, travelled incognito with a private detective from Argentina investigating organ theft from inmates in a state home for the vulnerable, and posed as the relative of a patient looking for a kidney broker, dealing with sellers and brokers in person and over the telephone. She passionately stresses: "deceptions are no longer permissible for researchers operating under the strict guidelines of human subjects protection committees. But there are times when one must ask just whom the codes are protecting" (Scheper-Hughes 2004, p. 44). She asked for the project to be given exceptional dispensation, akin to that of a human rights investigative reporter, which was granted. Geoff Pearson, from a sports sociology and socio-legal studies background, has openly used deception in his autoethnographic research on football hooliganism since the mid-1990s. Pearson poses challenging questions about deception: "Can participation in criminal activity by a researcher be justified on the grounds that it is necessary to prevent the distortion of the field? Alternatively, can the difficulties in gaining and maintaining access in such spheres excuse such conduct?" (2009, p. 243). Pearson honestly reflects: "I found myself both witnessing criminal offences and being put under pressure to commit them personally" (2009, p. 245).
Because of the sensitive topic under study, Pearson walked a risky legal tightrope throughout, including pitch invasion, threatening behavior, and illegal alcohol consumption, which were all rites and rituals of credible acceptance in that subculture. Pearson stresses:

Little formal guidance is provided to researchers in the social sciences who wish to carry out ethnographic research within 'criminal' fields. . .Without researchers who are willing to embark upon covert research, and are sometimes willing to break the law in order to gather
this data, some aspects of society will remain hidden or misunderstood. . .researchers will continue to operate in potentially dangerous research fields without adequate risk assessment or guidance. (2009, pp. 252–253)
The Ethical Regulation of Deception

The standard contemporary ethics debates, which have structured much discussion of deception, have centered on informed consent, debriefing, protection, and harm in various guises. Such debates help drive standard research practices. In such a framework, deception must be clearly justified and presents itself as a typical ethical and moral dilemma to be avoided or at best minimized. Deception, in various psychological fields, is a backbone of experimental control and a long-established and accepted way to manage reactivity and artificiality (Herrera 1997). Sociology, anthropology, and allied disciplines do not typically frame their concerns within an experimental tradition but in participant observation contexts; the clear concerns with violating informed consent, causing harm, and protecting the subject are nonetheless shared. Barrera and Simpson (2012) stress that divergent disciplinary views of deception and the norms governing its usage "stifle interdisciplinary research and discovery" (2012, p. 383). These ethical sensibilities and obligations are firmly enshrined in the various professional codes and associations which typically inform and guide the social science disciplines. Most take a standard prescriptive and pejorative view of deception, which is frowned upon in different ways as a "last resort methodology" (Calvey 2008). Thus, the methodological orthodoxy presents deception as a pariah (Calvey 2018). The rationalizing tendencies of ethical review boards deny ambiguity in the research relationship, which is problematic in real-world research. Most sensible researchers are not against informed consent per se but are skeptical of the pervasive "one-size-fits-all" mentality and, in some cases, the strict application of an outdated medicalized model to social science topics and fields.
In many ways, the much-quoted essay by Kai Erikson (1967) displays the conventional stance of condemnation against what was then generally described as "disguised observation" and "secret observation" in sociology. Erikson argues that "It is unethical for a sociologist to deliberately misrepresent the character of the research in which he is engaged" (1967, p. 373). For Erikson, deceptive strategies may be appropriate for espionage and journalism, but professional sociology has different rules of engagement. Homan (1991), a well-quoted figure, firmly occupies the standard position in his opposition to deception in social research and takes up the Erikson position in a modern context. For him, covert methods are seldom necessary. As a counterargument to this, a smaller number of dissident authors are broadly more sympathetic to the appropriate use of deception (Spicker 2011). Bulmer (1982) stresses the need to recognize a wider variety of observational research strategies that
are not captured by the crude binary dichotomy of covert and overt research. Such simplistic reasoning, which does not recognize complexity, for Bulmer, "stultifies debate and hinders methodological innovation" (1982, p. 252). Similarly, Mitchell disagrees with crude debates about deception and stresses: "Secrecy in research is a risky but necessary business" (1993, p. 54). Barrera and Simpson suggest that researchers should adopt "a pragmatic, evidence-based approach to the question of when deception is advisable" (2012, p. 406). Similarly, Roulet et al. (2017) argue for a reconsideration of the value of covert research, as informed consent is typically ambiguous in many participant observation research settings. Roulet et al. stress: "Covert participant observation can enable researchers to gain access to communities or organizations and to collect knowledge that would otherwise remain unavailable. In some situations, covert participant observation can help create knowledge to change society for the better" (2017, p. 512). Linked to this, there is a growing dissident literature on informed consent (Crow et al. 2006; Librett and Perrone 2010; Sin 2005), with many researchers viewing informed consent as ultimately partial, contingent, dynamic, and shifting rather than fixed and absolute. This is not to say that informed consent does not have a valuable role to play in social research or that it is only ever ceremonial. Rather, it is my contention that we effectively have a "fetish" for informed consent; it attracts a sort of blind faith which, in its simple form, denies the use of deception. Of course, there is reason to restrict covert research in certain areas and with vulnerable populations. In such instances, there is relevant and appropriate legislation and related codes of practice, such as the Mental Capacity Act 2005 and the Safeguarding Vulnerable Groups Act 2006.
Clearly there are very particular contexts and settings where the vast majority of sensible researchers would ordinarily be compliant with specific rules. The point here is that most research settings are more mundane and less extreme. Hence, ethics is clearly not, and never can be, a "one-size-fits-all" blanket situation. Put simply, covert research can be applicable in some settings and not in others, rather than never applicable in any setting. Lawton (2001) and Marzano (2007) each discuss the challenges and messy complexities of doing research in palliative care units, which involved liminal mixtures of both deception and informed consent at different stages of the research process. Concerns about the restrictive regimentation of research are not unique to deception, but what we have in the current regime is a distinct intensification of ethical regulation and governance. There is also a growing dissident literature on ethical governance and regimentation (Israel 2015; Van Den Hoonaard 2011, 2016). What researchers are effectively faced with is an "ethics creep" (Haggerty 2004), an "audit creep" (Lederman 2006), and a focus on reputational brand management (Hedgecoe 2016). Hammersley (2009, 2010), Hammersley and Traianou (2012), and Dingwall (2006, 2008) rightly caution the social science community about the growing disconnect of the ethical regime from the complexities of real-world research. Hammersley (2010) provocatively describes the process as "creeping ethical regulation and the strangling of research." Moore (2018) argues that the current reactionary views toward deception are bound up with the repressive
transparency and openness agenda, sensitivities over public trust, and the development of the audit and ethics culture. Uz and Kemmelmeier (2017) adopt a more sympathetic approach to forms of managed deception in social and behavioral research, stressing that "rather than a type of research reserved for exceptional cases, there should be no prejudice against the use of deception" (2017, p. 104), with the important caveat that "at the time of consent, participants must be informed that the research procedures they are about to experience may include deception" (2017, p. 104). Perhaps this type of middle-ground position might be more palatable to more researchers in the social science community. It seems as if the inflated reactions and responses to deceptive research are based on extremity and certain assumptions about the integrity of deception. It is as if, to put it bluntly, all deceptive research resulted in the inevitable harm and brutalization of both researched and researcher. Logically, then, it would follow that there can be no place for deception, which I think is reductionist and reactionary. Deceptive research can be, although not always, a creative and reflective endeavor and is not utterly devoid of scientific integrity. Rather, it requires a different type of situated integrity which involves nuanced ethical self-regulation. Put more boldly, in dismissing deceptive research, the wider social science community could be missing a trick. Clearly, it is not to everyone's intellectual taste and will not become mainstream, but it still has a different and valuable contribution to make to research methodology. Deception, for me, is part of a necessary and sensible process of ethics rupture (Van Den Hoonaard 2016). This is not an extreme relativistic position urging the removal of ethical scrutiny and review but a relaxation of it as regards deception.
Some universities are thankfully encouraging more flexibility in ethical research governance by constituting discipline-specific ethics committees rather than institution-wide pseudo-medicalized ones. We certainly cannot apply a simple "checklist mentality" to complex ethical decision-making (Iphofen 2009, 2011). Ethics committees do valuable work, and they are needed as checks and safeguards on research, particularly with regard to the costs and consequences of deception. Clearly, we have to work out ways of having sensible dialogues among the various stakeholders in the research community, including deceptive researchers.
Deception in Action: A Covert Study of Bouncers in the Night-time Economy

My original 6-month deceptive fieldwork (Calvey 2000, 2008, 2013) was based in a range of pubs and clubs in Manchester, where I both lived and worked. It was a nomadic and embodied ethnography that never quite finished; by both drift and opportunism, it became a longitudinal one (Calvey 2019). Despite managed avoidance after the fieldwork period had finished, I was regularly invited into various night clubs and pubs as "ponytail Dave," my door nickname, for a number of years. It was commonly assumed that I was in between doors and looking for some door
work, with any refusal of free entry being seen as a clear offense. As my door community network was quite extensive and unbounded, it was proving problematic and messy to cleanly exit the door world. Hence, I was never 'off duty' as a sociologist in these liminal days. Indeed, my "bouncer gaze," if you will, was quickly resurrected as I went "back into character" and, ironically, it became a source of further rich immersive data (Calvey 2019). My deception was now an unforeseen longitudinal one. My covert study was a purist piece of deception with no gatekeepers, key informants, or retrospective debriefing. I had come close to developing a key informant in the fieldwork period but finally decided against it, given the potential information spread that would be difficult to control. I had also seriously considered retrospective debriefing in the post-fieldwork period, but this can be dangerous, messy, and impractical. I also felt, with some bouncers, that I would cause them emotional distress by this revelation, so, ironically, the continued deception seemed a way of managing fake friendships and sensitive confessions. This continued deception was a source of considerable anxiety and guilt for me. In a complex way, the deceptive bouncer self, which was a fabricated, manipulated, and engineered one, became intimately tied to my biography and true identity in ways that I had not planned for. The analytic push was to debunk a deeply demonized subculture and occupation (Calvey 2000; Monaghan 2003; Hobbs et al. 2003; Rigakos 2008; Sanders 2016; Winlow 2001) and investigate the everyday world of bouncers as a "lived experience" (Geertz 1973) of doing the doors. I wanted to resist a type of analytic exotica and avoid producing yet another zoo-keeping study of deviance (Gouldner 1968). My deceptive role and manufactured hyper-masculine bouncer self were deeply dramaturgical (Goffman 1967) throughout.
Part of my autoethnographic layered portrait (Rambo 2005) of bouncing was about managing my "secret self," which involved emotional management and guilty knowledge around "ethical moments" (Guillemin and Gillam 2004) in the field, such as witnessing violence and faking friendship. In this account of deception in action, I wanted to reject and resist a heroic, belligerent, and cavalier view of deception and replace it with a more nuanced one which involved types of ethical self-regulation and self-censorship throughout the fieldwork and beyond. Related to this, I made deliberate choices as to what I would publish in order to help protect the participants. A useful example of the ambivalent nature of the situated ethics of this deceptive role was when I withheld information from the police about an assault on a bouncer, which I had witnessed, at the strong request of the bouncer, who was the actual victim of the assault. My personal ethics were at odds with this, but the priority was to suspend my own moral code in the setting and not judge or correct those I was researching. In many ways, I was walking a type of legal tightrope (Pearson 2009) that could have gone wrong. In this instance, the ethics of the other (Whiteman 2018) ran counter to my own but was ultimately privileged over mine. My deceptive ethnography was a "lived intensity" (Ferrell and Hamm 1998), an "experiential immersion" (Ferrell 1998), and a form of edgework (Lyng 1990).
I perceive deception in this context as craft-like, demanding empathy, flexible passing, and nuanced mimicry. Deception got me closer to the everyday realities of being a bouncer and, in turn, to a more sensible and realistic view of how they performed violence and masculinity as a "doing" which involved shades of bravado, deterrence, and persuasion in both obvious and subtle ways. I wanted to disrupt the rather one-dimensional caricature of bouncers that many commentators, both academic and journalistic, had painted.
The Future of Deception: Cyber Research and Autoethnography

In the cyber or virtual world, which is very different from traditional fieldwork locations, ethical concerns have not gone away but are indeed more difficult to regulate in this diffuse and fragmented environment. Cyberspace has, in some ways, become what I describe as a "covert playground" (Calvey 2017). Informed consent takes on new challenges in such an arena, which has become a serious concern for researchers in their ongoing attempts to develop specific Internet ethics and protocols for various online ethical dilemmas (Buchanan 2004; Flick 2016; Hine 2005). Carusi and Jirotka (2009) accurately characterize the field of Internet and online ethics as an "ethical labyrinth," with the Association of Internet Researchers consequently producing ethical guidelines in 2002, modified in 2012. Granholm and Svedmark (2018) caution that online research with vulnerable populations can harm both the researcher and the researched, which ethics boards and committees must be actively vigilant about. A diverse range of sensitive topics, including Internet sex (Sharp and Earle 2003), cosmetic surgery (Langer and Beckman 2005), and extreme dieting (Day and Keys 2008), has been explored by online lurking, and it is reasonable to assume that this is likely to increase. Murthy argues that "the rise of digital ethnographies has the potential to open new directions in ethnography" (2008, p. 837). Moreover, Murthy describes digital ethnography as a "covert affair," stressing: "my survey of digital ethnographies reveals a disproportionate number of covert versus overt projects" (2008, p. 839). Social media is also saturated in forms of deception. The cases of bullying, intimidation, racist comments, and textual hate by internet trolls (Jane 2013; Hughey and Daniels 2013) typically involve fake cyber selves (Calvey 2017).
Similarly, fake selfies produced by the public are deliberately designed to shock and to fool as many people as possible. Instances of "fake news" are also massively on the rise in social media. As Tandoc et al. stress: "Fake news has real consequences, which makes it an important subject for study" (2018, p. 149). Indeed, Google and Facebook now employ more staff specifically to try to curb the dissemination of fake news. Whistleblowing, although still based on anonymity, is one side of the cyber coin and can be empowering (De Maria 2008). The other side is the recent data harvesting and behavior modeling scandal associated with the corporate giants Facebook, Amazon, and Google, which caused outrage and government inquiries. Zuboff (2018) cautions us that complex
forms of deception are part of the new logic of surveillance capitalism. As big data research projects using social media become much more commonplace in the future, the problem of deceptive Internet research ethics will not go away but, if anything, will intensify and become more important. Deception is not an "outdated" methodology consigned to the past, as some critics might assume. It has been successfully applied more recently to critically explore a range of topical contemporary issues in autoethnographic forms. Autoethnography is a relatively recent development in qualitative research (Ellis 1999, 2004, 2007), which comes in a variety of styles. Despite criticisms of autoethnography as narcissistic (Delamont 2009; Tolich 2010), it is increasingly popular in various ethnographic communities. Some styles and uses of it can involve deception, with the justification being that it is biographically and experientially based. Ronai and Ellis (1989) explored the interactional strategies of strippers and table dancers in the United States, with Ronai retrospectively reflecting on her past biography as an erotic dancer. This was a dancer-as-researcher autoethnographic role, a way of accessing and exploring a closed deviant subculture, and it involved some deceptive tactics, particularly with the customers. Similarly, this liminal insider position from the "dancer's perspective," which involved some deceptive moves and tactics, was also used by Frank (2002) in her 6-year immersion in stripping in the United States and by Colosi (2010) in her 2-year study of lap dancing in the United Kingdom. Because the community is unbounded and transitory, it is difficult to maintain any standardized form of informed consent. Ward (2010) faced similar challenges in her autoethnographic study of raving and recreational drug culture in London. Her study started out as overt but shifted to covert by drift rather than design, due to the fluid nature of the dance community.
Woodcock (2016) went undercover for 6 months in a UK call center which sold life insurance, to explore surveillance, control, resistance, and coping mechanisms in such places. What Woodcock describes as "working the phones" involved the use of standardized customer-focused scripts and resulted in stressed employees with little autonomy. Call centers are typically known as the modern sweatshops and precariously employ a large number of young people in the service sector. Woodcock employs a Marxist analysis to unpack the alienation of this modern workplace. Similarly, Brannan went undercover for 3 months to explore "the mechanics of mis-selling" (2017, p. 641) and how it becomes embedded in the organizational practice of an international financial services call center. Brannan explored the training, induction, and initial work of direct sales agents as a set of sales rituals and legitimacy arrangements, which run counter to the increasing regulation of such activities. Some key informants were used by Brannan to further unpack misbehavior in the workplace. Zempi (2017) used deception in her research on victimization and Islamophobia by wearing a Muslim veil in public, at the suggestion of her interview participants. Zempi, a woman from a Christian background, argues that her covert autoethnographic experience was a way of gaining insider knowledge and an opportunity to "generate appreciative criminological data" (2017, p. 9).
What these discussions partly display is the successful use of deception as part of a mixed or multiple methods strategy. For me, such hybrid methodological moves will become more popular in the future. Deception can then more sensibly be viewed as a complementary rather than a necessarily antagonistic field research strategy.
Conclusions: Deception as a Creative Part of the Ethical Imagination and Research Toolkit

Deception has not gone away and is still a very controversial topic, which arouses suspicion, shock, fascination, and emotive outcries. Although there is generally much less deception used now, it has not been confined to the history bin and realistically probably never will be. The award-winning documentary films The Twinning Reaction (2017), Three Identical Strangers (2018), and Secret Siblings (2018) investigate secretive psychological experiments on twin and sibling separation by Dr. Viola Bernard, a child psychologist and adoption consultant, and Dr. Peter Neubauer, a psychiatrist and the principal investigator. The longitudinal experiment started in 1960 and was closed 20 years later, in 1980. The participants, including various sets of twins and triplets, were babies from the Louise Wise Services, a prominent Jewish adoption agency in New York. They were part of a comparative adoption study, including extensive film footage and observational fieldwork data, with babies being separated and strategically placed in different homes. The key deceptive element here was that the researchers actively withheld from both parents and children, over the entire study period, the information that they had any siblings. None of the participants was at any point informed about the true aims of the secretive experiment. At least three individuals have committed suicide so far, and genuine doubts remain as to whether all the participants are even now fully aware of the hidden nature-versus-nurture experiment, which was covered up for a lengthy period of time. Although Bernard and Neubauer, both psychoanalytic practitioners linked to the Freud family, were widely published in conventional adoption science, this belligerent and cruel deceptive study itself has never been published.
Clearly, such an experiment stands alongside the infamous Tuskegee syphilis experiment, conducted between 1932 and 1972, for ethical belligerence and human cruelty. Drs. Neubauer and Bernard have both since died, the adoption agency that colluded in the experiment has closed, and the project data are sealed in the Yale University archives until 2065, with very restricted access in the meantime. The study is the source of ongoing litigation and is a seismic game changer in the genealogy of ethics, as it came to light only recently through the committed activism of filmmakers, investigative journalists, and television producers. In a very different field, the left-field activities of undercover police officers investigating targeted activist groups, exposed during 2011, have been the subject of ongoing media attention and are now under formal investigation in the United Kingdom.
362
D. Calvey
Such practitioner work is typically justified as part of the protection of national security. Various male officers, the most prominent being Mark Kennedy, had become involved in long-term intimate relationships while still undercover, including marriage and fatherhood, and had used the fake identities of dead children. Several women have taken legal action against the Metropolitan Police, and the Special Demonstration Squad was disbanded. Compensation packages totalling millions have been paid to the victims, and a public inquiry into undercover policing in England and Wales since 1968, chaired initially by the late Sir Christopher Pitchford and then by Sir John Mitting, was launched in July 2015. The inquiry is investigating the covert tactics used by around 144 officers on around 1,000 political groups; it is expected to report its final findings in 2023 and has cost around £10 million to date. These are clearly two extreme minority cases and should not be taken to represent, nor to spoil, the use of deception in other areas. They are, however, cautionary and provocative tales of the consequences of the potentially reckless use of deception. It appears that deception is far more extensive and diverse than first anticipated. This is compounded by some rather sanitized research accounts, which underplay and gloss over deceptive tactics and moves. What we realistically have are "varieties of deception" along a "continuum of deception," since not all researchers are doing the same thing when they use deception. My intention here is not to develop a prescriptive recipe manual on how to do deceptive research. Rather, it has been to compare and contrast different deceptive scenarios and to reflect on what lessons can be learnt from the deceptive condition. It is my strong contention that the history and direction of the social sciences would not have been the same without deception.
The deceptive outsiders, outcasts, heretics, and pariahs are needed to provocatively breach and positively disrupt the canonical methodological rules around doing social research. A number of provocative questions remain. What is the rationale for deliberate deception? Can deception be reasonably justified as a means to an end? Can deception provide a more intimate, nuanced, and layered view? Can deception provide something different, something left field, that pushes the envelope and sits outside the box? Can deception legitimately be used to blow the whistle and dig the dirt on wrongdoing? Is social science missing a trick by excluding deception as a research strategy? In terms of justifying deception within a typical means-end schema, the principle of beneficence can still come into play, not automatically, but on a case-by-case basis via self-regulation. Clearly, some research settings involving vulnerable groups, sensitive topics, and legal boundaries would not be appropriate for the use of deception. My point is that such cases are a small, extreme, and untypical minority, with the vast majority of research settings being more open to the use of deception. My focus in this chapter has been on deliberate deception, whereas some common forms of deception are more akin to actively blurring one's research identity, faking agreement, and concealing one's true feelings and views in order to develop rapport with research participants. Many overt researchers use such chameleon tactics in
19
Deception
363
the field, some without much ethical reflection. In this sense, some forms of subtle deception are more widespread than anticipated. Deception and covertness are thus not fixed states but negotiated and shifting ones. Deceptive methodology is not a heroic panacea (no research methodology is), and it carries serious moral and ethical baggage that must be borne in mind, but it should not be ignored, marginalized, or censored. Deception can and should be recognized and appreciated more creatively as part of a wider and robust ethical imagination and research toolkit. Using deception will realistically remain a niche position rather than become the mainstream, but its minority status does not mean that it lacks value. Surely we need a broad range of methodological strategies to cope with the complexity of real-world research. Deception should be afforded a fairer and less prejudiced reading. Being free to choose deception in appropriate research settings is part of our research freedoms and, ultimately, a form of creative research integrity and research ethics.
Part IV Research Methods
20 Ethical Issues in Research Methods: Introduction

Ron Iphofen
Contents
Introduction ............................................. 372
Innovation and Technological Change ...................... 373
Sampling ................................................. 375
Methodological Innovation ................................ 376
Conclusions .............................................. 378
References ............................................... 378
Abstract
This chapter explores only some of the ethical issues related to methodology – issues that have to be borne in mind when selecting the most appropriate research method or methods through which to conduct a study. Some methods are more ethically appropriate to some study topics and target populations than to others. While reviewers of research proposals often seek to separate method from ethics, the distinction is far from clear. There is, though, a relatively clear distinction between what is likely to be most effective in producing answers to research questions and what is ethical. Some methods distance the researcher too far from their participant study population; others require far too close an engagement. There is no one "best" method for all study populations and all research settings. The choice of method, like the perception of ethical risks, is highly context-dependent. Researchers, reviewers, funders, and subjects should all be in a position to make a judgment call about which method is most appropriate to their needs and risks in the specific research setting.
R. Iphofen (*) Chatelaillon Plage, France e-mail: [email protected] © Springer Nature Switzerland AG 2020 R. Iphofen (ed.), Handbook of Research Ethics and Scientific Integrity, https://doi.org/10.1007/978-3-030-16759-2_54
371
372
R. Iphofen
Keywords
Experiment · Sampling · Technological innovation · Big data · Methodological innovation · Creative methods · Feminism · Research ethics
Introduction

It would be quite a challenge to cover all the available methods for conducting research across the sciences; that would require at least another, bigger handbook on the ethical consequences of methodologies alone. Necessarily, what is presented in Part IV is a set of key methods that have seen applications in most research disciplines. Thus, the experimental method is not exclusive to the natural sciences. Increasingly, research in social sciences and social policy has developed experimental designs that avoid the kind of heightened ethical risks that might have been associated with the forms of experimental design conventionally employed for clinical trials. What we have sought to cover in the section is some conventional methods (experiments, observations, and ethnography), some "movements" that have inflected research and raised reflexivity for some years (discourse analysis and feminism), and more recent methodological challenges to how good research can be accomplished (creative methods and the wide range of new data analytics). In the simple terms of the ethical questions raised in the Introduction to Part II – who is doing what, to whom, how, and why? – methodological issues point up the necessary overlap between each of these elements. The "who" and "to whom" elements link researcher and researched: some gender experts would argue that gender matching is essential to this relationship, just as some indigenous researchers argue for matched ethnicity. In a similar vein it has been argued that persons with a disability are best investigated by other persons with a disability, the old by the old, the young by the young, and so on. Of course this challenges the more methodological notion that the expertise required for research is evenly distributed between the "whos" and the "whoms" – that is, who does what to whom? The "why" element relies upon adequate justification for any research engagement – that it has a useful purpose.
The focus, however, for any methodological question has to be upon "what" is to be done and "how" it is to be done. To take an illustrative example, the interview is a well-recognized "what." Thus, it may be proposed that one of the best ways to explore people's views, opinions, thoughts, and behavior is to ask people about them. But the range of "hows" for interviews is enormous: one-to-one, face-to-face, by telephone, online, on paper, in groups, solo, during activity, reflectively or not. The nature of the experiment is similarly open to variation – a randomized trial (clinical or not), prospective, field or laboratory – to say nothing of the range of experimental designs: case-control, case series, Solomon four-group, and many more. None of these methods is inherently more ethical than the others; the selection of any specific method will depend upon the "why" and the "who" – in other words, the actual context in which the research is to be conducted.
20
Ethical Issues in Research Methods
373
This introductory chapter discusses just a few key issues to illustrate the embedded ethical concerns that need to be made evident to avoid the complacency of routinized methodological choices. The section that follows addresses these issues in more depth.
Innovation and Technological Change

A constant feature of life is change. The safest prediction one can make about the social and the natural worlds is that change is endemic. The most challenging problem for research methodology is how to confront and examine such change. Social and technological innovation is a main driver of change, and new technologies impact people's everyday lives as never before. The modes of innovation processes have changed significantly. Driven mainly by new IT tools, interactions between a wide range of stakeholders in innovation processes have intensified, and co-creation of new technologies seems to be the new way to ensure competitive advantage. These developments confront various stakeholders with unique challenges – not least for how research is conducted and how it is evaluated. Ethical considerations are crucial in guiding, monitoring, and maybe even regulating such developments via the research effort that lies behind them. Innovation necessarily entails research: new knowledge is generated, refined, and applied by all forms of innovation. The ethics appraisal of research proposals or plans is almost always "anticipatory": it attempts to assess, disclose, and warn researchers of the risks of a research plan or proposal and to alert other stakeholders (research participants, funders, civil society, etc.) to potential ethical risks. However, innovation all too often does not emerge in such a "planned" manner – sometimes it does, but more often innovation is under way before anyone thinks of whether some form of ethics appraisal or monitoring for ethical impact should take place. There are several concepts addressing ethical considerations in technology development processes that focus on their means as well as their ends.
Thus, one can ask how innovation processes take place and how they can be made more inclusive, interactive, and anticipatory; and what are the impacts and the unexpected and unintended implications of innovations, e.g., in addressing key challenges in fields such as the environment and health and in improving the quality of life and well-being of citizens. The overarching problem is what has become known as the Collingridge (1980) dilemma: that technology proceeds much faster than the means for controlling, regulating, monitoring, or checking it. Or as Hans Jonas (1974) put it: "The ethical problem of technology development could be summed up by the two-speed shift: the first is that of our increasingly powerful and rapid technological action; the second is our ability to predict the consequences." The main means of control, for example in the ethical appraisal that is required for all research within the European Commission's well-funded Framework Programmes, has been the precautionary principle, which, in the extreme, could stifle innovation and, when applied too often
in the ethics review process, becomes one of the grounds for treating such reviews as obstacles that inhibit scientific development. One of the more contemporary challenges to ethical research is in the field of gene editing. This offers a useful illustrative example of managing the balance between technological development and the risks of harm to humans, or at least to human research participants. One of the most commonly assumed "institutional imperatives" of science is the encouragement of the communal sharing of knowledge (Merton 1942), the idea being that new knowledge ought to be freely shared with other scientists to enable scientific progress. Of course Merton was not naïve enough to believe that this "ideal" was always realized in practice; priority disputes and intellectual property rights have always obstructed the pursuit of the ethos of scientific "communism." The problem in the contemporary situation is that, even without such principles of scientific "openness," modern forms of communication permit the rapid sharing of knowledge and information. In the case of gene editing, the technology is easy to do and does not require much "tooling-up." This exacerbates the Collingridge dilemma, since the technology is no longer restricted to the well-resourced, established laboratory facilities in which a precautionary "hesitancy" might be exercised. In contexts where regulatory or cultural restraints apply, more cautious steps are taken, since awareness of the restraints is high. In contexts where such restraints are only just emerging, the temptations to proceed apace are hard to constrain. This appears to have been what happened when a Chinese scientist, Dr. He Jiankui, announced the birth of twins (girls Lulu and Nana) from embryos whose genome had been modified prior to their transfer into the womb.
The modification was done using CRISPR-Cas9 to disable copies of the CCR5 gene in human embryos, with the stated aim of preventing the transmission of an HIV infection from the father's side. While this rationale might be seen as a reasonable moral justification for the action taken, the lack of any confirmed assessment of the implications of the action, as well as the availability of other noninvasive means to achieve this prevention, cast doubt on the scientist's motives and raised international concerns about the premature application of such a ground-breaking technology. Dr. He did not initially present any solid proof or witnessed confirmation that this experiment on human embryos had actually taken place, and the lack of ethical procedures, assessments, and guidance that characterized the project shocked the scientific community. To change the public perception of his actions, Dr. He tried to argue that the procedure was safe and that no other genes were affected. However, CCR5 is just one potential route of virus cell entry, which clearly indicates that this procedure could not be seen solely as a response to a real medical need but rather as a proof of concept. More precisely, this was an experiment on human embryos that could not be seen as absolutely necessary for the care of the unborn (Hirsch et al. 2019). Countries vary in the degree to which such regulation applies, and scientific culture is often slow to catch up. Perhaps the growth of technology is exponential relative to regulatory or professional and cultural cautions, which makes international variations inevitable. In other words, the Collingridge problem is even greater than it
20
Ethical Issues in Research Methods
375
was. As technology moves so quickly, ethical deliberation cannot keep pace with it – and gene editing is not the only field: Big Data, drone technology, AI, robotics, 3D printing, online communication forms, and so on are all moving so fast that ethics risks always being an afterthought.
Sampling

One inevitable feature of research methodology is the need to "sample" populations, although it is seldom treated as thoroughly as the ethical issue it ought to be. Sampling is usually taken to be a technical statistical term; I use it here to refer to the need to be highly "selective" when studying most populations. Research coverage of a total population is rarely possible, since the extent of a total population may be unknown, and even when participation is legally required, such as in a national population census, it is known that not all members of the population take part. So how to take a sample ethically, and how to treat that sample ethically, becomes vital to the reliability and validity of the outcomes of a study: "reliable" in that a similar sample taken on another occasion would produce at least a similar result, and "valid" in that the sample is as accurate as possible a representation of the population. If the reliability and validity of a sampling process are poor, then the whole study is called into question, and money and resources are wasted. Hence sampling procedures that lack rigor or are not seen as systematic are often treated as unethical. Some sampling procedures are denigrated as "convenience" samples – often conducted as a means of rapidly evaluating some forms of service delivery – a quick gathering of information from those members of the relevant "population" who happen to be available. Such sampling is not inherently unethical, depending on the claims that are made for its validity. Indeed, I have heard eminent statisticians argue that all samples are "convenient" in the sense that, once a sampling frame is established, access to the relevant population to sample must be "convenient" or the sampling cannot take place. Moreover, convenience, along with "utility," is an important criterion for any measurement device.
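The validity worry about convenience samples can be made concrete with a small simulation. Everything here is invented for illustration – the population, its age range, and the "availability" rule standing in for who happens to be reachable – only the general point reflects the text above: an estimate drawn from a biased pool of available members can be systematically wrong no matter how large the sample.

```python
import random
import statistics

random.seed(42)

# Hypothetical population: 100,000 people with ages uniform on 18-90.
population = [random.randint(18, 90) for _ in range(100_000)]
true_mean = statistics.mean(population)

# Simple random sample: every member of the population equally likely.
srs = random.sample(population, 500)

# Convenience sample: drawn only from members who "happen to be
# available" -- here, an assumed biased subset (the under-40s).
available = [age for age in population if age < 40]
convenience = random.sample(available, 500)

print(f"population mean   {true_mean:.1f}")
print(f"random sample     {statistics.mean(srs):.1f}")
print(f"convenience sample {statistics.mean(convenience):.1f}")
```

Repeating the random sample would give a similar answer each time (reliability) close to the population mean (validity); repeating the convenience sample would also give a similar answer each time – reliably wrong, because the bias lies in who was available, not in the sample size.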
Published tests or established measures are valuable for comparative purposes, and a lot will already be known about their validity and reliability; hence, their convenience is an ethically sound criterion. Over and above convenience, sample size and randomization are often seen as key criteria for ethical sampling. However, both features could be regarded as unethical with certain population categories. Research with social services clients, for example, requires careful thinking about what might constitute an "optimal" sample size, and the chance of its being realistically randomized is low due to unreliable participation rates with such a population (Dattalo 2010). There is a widespread belief that studies are unethical if their sample size is not large enough to ensure adequate statistical power. But sample size influences the balance between the burdens that participants accept and the clinical or scientific value that a study can be expected to produce. It can be argued that the value per participant declines as the sample size increases and that smaller studies therefore have more favorable ratios of
projected value to participant burden. Lower power does not make a study unethical (Bacchetti et al. 2005). Nevertheless, larger sample studies may be desirable for reasons other than ethical ones, much as public health policies may challenge the rights and wellbeing of individuals while striving to care for society as a whole. And many assumptions are made about the value of the analysis of large data sets. But here researchers must remain aware of the latent dangers of sampling larger data sets, despite their attraction and popularity. While inductive generalizations from the gathering of qualitative data are often subject to critical assessment – both for ethical reasons and for reservations about their validity and reliability – we may often be too complacent about the adequacy of quantitative data. For example, computational models can be distorted as a result of biases contained in their input data and embedded algorithms. Even well-engineered computer systems holding "big data" can produce unexplained outcomes or errors, either because they contain bugs or because the conditions of their use change, invalidating assumptions on which the original analytics were based and leading to possible contributions to economic or political disaster (O'Neil 2016). Algorithms can be opaque, making it impossible to determine when their outputs may be erroneous due to biases in source data, and giving rise to potential discrimination. Examples of such flawed assumptions can be found in Ziliak and McCloskey's (2009) account of how faith in the "t" test and the Standard Error has "cost jobs, justice and lives." The lack of transparency around algorithms is a serious ethical issue: if we do not know exactly how they make decisions, we cannot judge the quality of those decisions or challenge them when we disagree. And the unchallenged conventions about how best to analyze quantitative data leave us further subject to unnecessary ethical risks.
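The claim that value per participant declines as sample size grows (Bacchetti et al. 2005) can be sketched numerically. The sketch below is not from the cited paper: the normal-approximation power formula, the effect size of 0.3, and the function name are all illustrative choices made here, using only the Python standard library.

```python
import math
from statistics import NormalDist

def approx_power(n_per_group: int, effect_size: float, alpha: float = 0.05) -> float:
    """Normal-approximation power of a two-sided, two-sample comparison
    of means, for a standardized effect size (Cohen's d)."""
    z = NormalDist()
    z_crit = z.inv_cdf(1 - alpha / 2)                 # critical value for two-sided alpha
    noncentrality = effect_size * math.sqrt(n_per_group / 2)
    return z.cdf(noncentrality - z_crit)

# Power rises with n, but the gain per additional participant shrinks.
for n in (50, 100, 200, 400):
    p = approx_power(n, effect_size=0.3)
    print(f"n per group={n:4d}  power={p:.3f}  power per participant={p / (2 * n):.5f}")
```

The exact figures depend on the approximation, but the pattern in the last column – a steadily declining ratio of projected value to participant burden as samples grow – is the trade-off the text describes.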
The main attractions of big data for researchers are its apparent ease of availability, its "preconsented" nature, its assumed comprehensive "population" coverage, and the possibility of massive computational quantitative data analysis – one does not have to actually talk to people. But much as response rates for conventional survey techniques have declined, the reliability of such data would decline if users became less inclined to spend so much time on their digital devices, as would its attractiveness to researchers (Newport 2019).
Methodological Innovation

The most effective response to rapid societal and technological change, when conventional research methods appear limited, is to devise novel methods of investigation and analysis. One of the more innovative responses to ongoing and accelerating evidence of social inequalities, claims to objectivity, and gender-biased assumptions was the development of feminist methodologies. Martyn Hammersley (1993, Chap. 3) has offered a thorough account of the origins and consequences of feminist methodology. Feminist research methods, and connected ideas about research ethics, emerged out of a critique of then-current social science methods and assumptions. It was argued that girls and women were
frequently left out of investigations, even though the findings claimed to be about people in general – thus purportedly inclusive of females. But the critique also involved a perceived failure to take account of the distinctive experiences of females. In short, the research results produced suffered from male bias. Furthermore, existing research claimed to be objective, or value-neutral, whereas in fact it fitted the interests of males. In response to this, feminists insisted that research is necessarily shaped by the personal characteristics of the researcher, that it cannot be impersonal: so there is a need for reflexivity – attention to the way the "nature" of the researcher influences the researched. Feminists sought to explore the experiences of women, and their research was explicitly aimed at serving their interests: the task was to emancipate and empower them to challenge patriarchy, and thus it became ideological. This led to its increasingly being carried out in a participatory manner, so as to reduce the hierarchy that had previously operated within the process of inquiry between researcher and researched. All of these features had important implications for research ethics, and feminists elaborated these in a number of directions. One was what came to be called the ethics of care. Another focused on the importance of reciprocity, or even equality, between researcher and researched. However, feminists were also committed to the more usual principles of research ethics, such as minimizing harm and respecting autonomy. These various principles led to some dilemmas in practice, and feminists' reflections on research ethics also arose from addressing these dilemmas in carrying out the new forms of research that they pioneered. One dilemma related to the very concept of women's experience. For example, it came to be insisted that the experience of women differs sharply according to ethnic and social status, sexual orientation, and whether or not they have a disability.
This had implications not just for what was investigated but also for who should carry out research on what, and how this affected the knowledge produced. Another dilemma was that women were not always willing to engage as full participants in the process of inquiry. Furthermore, the commitment to reflexivity led to questions about whether research accounts had become too focused on the researcher and needed to be moved back to the researched.

The argument that there is a specifically feminist methodology implies not just that feminists select research topics on a different basis to non-feminists, but that when a feminist investigates a particular topic, the whole process of research reflects her commitment to feminism. (Hammersley 1992, p. 191)
It might seem odd to some to hold up feminist research methodology as innovative, given its now lengthy pedigree. However, it is highly illustrative of how ongoing reflective critique of existing practices can produce alternative perspectives that further enhance our understanding of the world. Nor is there anything static about feminist methodologies – they continue to challenge both other feminist views and conventional research forms. They have certainly helped the development of reflexive methods, community participatory approaches, interactive qualitative approaches, and discourse methods and, some
might suggest, the growth of "creative" methods. Creative methods utilize a range of elicitation techniques, with researchers inviting participants to produce drawings and collages, or to work with sandbox materials or other forms of arts-based strategies. They offer insights that may not be possible via more direct strategies; hence they raise different ethical concerns about conventional issues such as those associated with the potential for harm, the nature of consent, confidentiality, and anonymity. What seems like a more cooperative, gentler research engagement could raise unanticipated emotional consequences. As with all methodological innovation, tests of validity and reliability must run alongside vigilance about consequences for both researcher and researched.
Conclusions

The issues raised here were chosen to illustrate the complex interrelationship between methods and morals, between research design and its ethical consequences. Different methods raise different ethical issues, and while combining methods in a form of triangulation is appealing for both methodological and ethical reasons, the multimethod or mixed-method approach offers no guarantees for reducing risks of harm. It cannot be overstated that the choice of method remains highly context-specific: a full consideration of the population to be researched and the setting for the research should guide the choice of method. While the impact of innovation can rarely be accurately "predicted," principles of "anticipatory thinking" (thought experiments?) can be promoted via transparency, collaboration, co-creation, and wide stakeholder management of innovatory processes. Even in terms of research ethics appraisal systems, perhaps reviewers should become more involved in societal impact assessment – horizon scanning and such anticipatory thinking should become more a part of what ethics review does. In any case, there is no doubt that researchers themselves should be thinking ahead in terms of assessing the potential consequences of their chosen methods of investigation. The rest of the chapters in Section IV of this handbook explore these and more issues in further detail.
References

Bacchetti P, Wolf LE, Segal MR, McCulloch CE (2005) Ethics and sample size. Am J Epidemiol 161(2):105–110
Collingridge D (1980) The social control of technology. Frances Pinter, London
Dattalo P (2010) Ethical dilemmas in sampling. J Soc Work Values Ethics 7(1):12–23
Hammersley M (1992) On feminist methodology. Sociology 26:187–206
Hammersley M (1993) The politics of social research. SAGE, London
Hirsch F, Iphofen R, Koporc Z (2019) Ethics assessment in research proposals adopting CRISPR technology. Biochem Med 29(2):206–213
Jonas H (1974) Philosophical essays: from ancient creed to technological man. University of Chicago Press, Chicago
Merton RK (1973[1942]) The normative structure of science. In: Merton RK (ed) The sociology of science: theoretical and empirical investigations. University of Chicago Press, Chicago
Newport C (2019) Digital minimalism: on living better with less technology. Penguin, London
O'Neil C (2016) Weapons of math destruction: how big data increases inequality and threatens democracy. Allen Lane, London
Ziliak ST, McCloskey DN (2009) The cult of statistical significance: how the standard error costs us jobs, justice, and lives. The University of Michigan Press, Ann Arbor
21 On Epistemic Integrity in Social Research

Martyn Hammersley
Contents
Selecting and Developing Research Questions ........ 384
Resourcing Research ........ 386
Use of the Existing Literature ........ 387
Selecting Cases for Investigation ........ 389
Data Collection or Production ........ 390
Data Analysis ........ 394
Reporting the Research ........ 396
Engaging with Critics ........ 398
Conclusion ........ 399
References ........ 400
Abstract
The concept of research integrity has gained increasing public salience in recent times. The chapter deals with a crucial aspect of this in the context of social research – what can be referred to as epistemic integrity. This amounts to a practical commitment to values that are intrinsic to all research activity, given that its goal is the production of worthwhile knowledge. These values include: truth and justifiability of findings, their relevance to human concerns, feasibility of strategies, and honesty about the research process. The implications of these values are outlined in relation to the various stages of research, from selecting and developing research questions to reporting research findings and engaging with critics who dispute the findings or question the methods. A central theme is that, as with ethical issues, these values do not amount to specific injunctions, nor can they be satisfied simply by following standard procedures. Instead, researchers must exercise reflective judgment about what epistemic integrity demands in particular circumstances.

Acknowledgment: Much of the thinking behind this chapter was stimulated by a seminar on Scientific Integrity in Qualitative Research organized by Lakshmi Balachandran Nair, Utrecht University, September 2017. Gerben Moerman, of the University of Amsterdam, in particular, supplied encouragement and information about recent cases of scientific fraud in the social sciences.

M. Hammersley (*) Open University, Milton Keynes, UK e-mail: [email protected]

© Springer Nature Switzerland AG 2020 R. Iphofen (ed.), Handbook of Research Ethics and Scientific Integrity, https://doi.org/10.1007/978-3-030-16759-2_16

Keywords
Research integrity · Epistemic integrity · Commitment to truth · Honesty in reporting research

The term "research integrity" (or sometimes "researcher integrity") has come to be widely used in recent years, especially in official documents relating to the governance of scientific research (see Banks 2015, 2018; see ▶ Chap. 2, "Regulating Research," this volume). Its current popularity is quite closely related to the rise of ethical regulation; and there are problems with what it is taken to entail that arise from this – a tendency to interpret it as requiring the following of procedures or protocols, or adherence to principles. This implies a "compliance" conception of integrity that is open to question. Nevertheless, the popularity of the concept of research integrity has served a useful function in drawing attention to aspects of the role obligations of researchers that have not always received the attention they deserve: those relating to the actual task of producing knowledge, as compared with (equally important) ethical issues to do with the treatment of participants in research.1 There is, though, some uncertainty about what the phrase "research(er) integrity" means in current discourse, and clarification is clearly required if it is to be used effectively. The uncertainty lies, in part, in its relationship with "research ethics." Also relevant is whether it is research findings, institutions, individual researchers, or their practices that are the focus. In some usage, the two terms appear to be treated as complementary, so that "integrity" is taken to refer to important aspects of researchers' behavior that are not always included in, and are certainly not usually central to, discussions about research ethics, such as avoidance of plagiarism, the declaration of conflicts of interest, and a commitment to research rigor (see, for instance, Shamoo and Resnik 2015; see ▶ Chap. 9, "Research Misconduct," this volume).
However, at other times, the term "research integrity" appears to operate as an overarching category that includes those issues normally discussed under the heading of "research ethics."2 In this chapter, the term will be used in an overarching sense, to refer to all of the role obligations of social researchers. However, a distinction will be drawn between "epistemic" and "ethical" aspects of research integrity. To some degree, this distinction points to two different sets of values that are relevant to the research process, though what is perhaps more important is the function that these values play. Truth is the most obvious epistemic value, but we can add justifiability, relevance to human concerns, feasibility of research strategies, and honesty about the research process. Elsewhere it has been argued that these values have an intrinsic relationship to the practice of research, while ethical (along with prudential) values operate as important external constraints (see Hammersley and Traianou 2012, Chap. 2). However, for the purposes of this chapter, the reader does not need to accept this argument, only that there is an important aspect of research integrity that is concerned with epistemic matters; in other words, with the production of worthwhile knowledge, one that relates to researchers both individually and collectively as members of research teams and communities. The importance of epistemic integrity at the present time can hardly be exaggerated. More than ever, we live in a world in which factual claims are distorted, or simply made up – by advertisers, governments and politicians, tabloid newspapers, online "news" sources, and others. Moreover, in public disputes, for example those around climate change, scientists are sometimes pushed by the media, or by their own strongly held commitments, to go beyond the evidence, while those who find particular research findings not to their liking raise spurious questions about them.

1 Of course, integrity has long been recognized as an important moral quality, in relation to research and more generally. As regards research, it is central to Weber's (1917, 1919) notions of "value freedom" and "science as a vocation," and its general ethical importance can be traced back to the writings of Aristotle.
2 The task of gaining clarity here is not eased by the fact that there are also problems with the meaning of "research ethics": Hammersley and Traianou 2012, Chap. 1.
While it may be an exaggeration to say that we live in a "post-truth world," these processes are increasingly frequent, and they seriously damage the production and distribution of sound evidence relevant for important decisions facing not just policymakers and practitioners of various kinds but also individual service-users and consumers.3 There are also specific threats to the quality of research at the present time. Israel (2014:3) has argued that "the pressures on academic integrity are growing. The greater dependence of universities and their researchers on sponsorship and the linking of government grants and salary increments to research performance have heightened the prospects of unethical behaviour by researchers."4 In this situation, it is incumbent upon researchers not only to uphold in public the need for integrity in producing and handling evidence but also (more than ever) to try as best they can to meet the requirements of integrity in their own work. Above all, epistemic integrity requires that a researcher strive to make sound judgments regarding what would be best in pursuing a particular project so as to produce sound knowledge, about the validity of the findings produced in studies, and concerning the current state of knowledge in her or his field. While some aspects of this have been identified and discussed by a number of authors (Macfarlane 2009; Hammersley and Traianou 2012, Chap. 1; Banks 2015, 2018), the full range of relevant issues has rarely been spelt out, and this is what will be attempted here, under headings that relate to key aspects of the research process.
3 On the notion of a post-truth world, Leith (2017) provides a useful review of some books on this topic.
4 In the terms I am using here, and the point I am making, the last part of what Israel writes should read: "the prospects of a lack of epistemic integrity on the part of researchers."
Selecting and Developing Research Questions

A first requirement for epistemic integrity is to ensure that research questions, whether these are relatively vague and open-ended in character or constitute specific hypotheses, have some worth, in that the answers would, at the very least, be of general human relevance or interest. This is a complex and contentious matter, in the sense that it is subject to differential interpretation – ranging from very narrow to much broader conceptions of what would be of value. It is a common complaint about some kinds of social research that they deal with matters of only "academic" interest, where this word is taken to imply that they are of no relevance to the lives of ordinary people, or that they do not address important public issues.5 By contrast, others argue that academic research should not be tied to current policy priorities or even to lay people's most pressing concerns – that researchers should be free to investigate a wide range of issues, including ones that may seem relatively trivial in themselves, so long as these have at least some indirect human relevance or importance (in general, or at some particular place and time). Such differences in view take us into the question of what should be the social function of research, and how it ought to be organized and controlled. In recent years – in the UK, and elsewhere – there have been increasing attempts strategically to manage social research, for example, with funders identifying priority areas, as against operating in responsive mode (where they simply consider whatever applications for funds, on whatever topic, are submitted by researchers). Universities have also begun to engage in specifying research topics on which "their research" will concentrate, requiring that the work of academics be related to these.
Such strategic management has frequently involved a blurring of the distinction between applied and academic research, with the latter tending to be reduced to the former (Hammersley 2011: Intro). Also involved here is a conflict between two very different conceptions of how academic research needs to be organized if it is to flourish. Many years ago, Polanyi (1962) offered a strong critique of attempts strategically to control natural science, outlining the endogenous mode of organization that had facilitated its success in the nineteenth and early twentieth centuries.6 However, this mode of operation has largely been abandoned in many fields of natural science today, partly under the influence of the high costs involved in such research, as well as both commercial and governmental pressure to focus on areas that are taken to be of the highest priority in technological or practical terms (Ziman 2000). And, in the field of social research, there have been increasing calls for a move away from the traditional mode of operation, close in character to that recommended by Polanyi, to a new, more flexible interdisciplinary, in fact postdisciplinary, mode that is concerned with tackling specific, practical problems (see, for instance, Gibbons 2000; Huff 2000; Novotny et al. 2001). These different conceptions of the basis on which topics for research should be selected, and of how research should be organized, tend to lead to very different conclusions about what are and are not justifiable research questions. It is my view that the narrow conceptions of relevance, along with attempts strategically to control academic research, are undesirable and have damaging consequences, intended and unintended (Hammersley 2011: Intro). Equally, though, social research must be strongly oriented collectively to building knowledge over time that relates to important social issues. For the purposes of the present discussion, however, my point is simply that research questions should be formulated with a view to what is worth investigating, and that the conception of worth employed in deciding this needs to be given explicit consideration. A second, equally important, consideration regarding research questions is that they must be capable of being answered by means of empirical research. It perhaps needs to be emphasized that not every interesting and important matter is open to effective investigation; indeed, much of the time, only a small proportion are. One reason for this is that being able to answer some questions (and sometimes even being able to formulate them effectively) relies upon having answered prior ones. In this important sense, social research is, or should be, a developmental process; so that what questions can be addressed effectively will depend upon the relative stage of development in the field concerned.

5 This criticism has also been applied to some areas of natural science. For a spirited recent contribution to the debate about "meaningful" research, see Alvesson et al. (2017).
6 The attempts at strategic management that Polanyi criticized (which, interestingly, were partly prompted by the Soviet Union's "planning" of science) are also at odds with academic freedom, which has an elective affinity with the endogenous model of scientific organization he proposes. On academic freedom and the threats to it, see Fish (2014), Traianou (2015), and Hammersley (2016).
More than this, though, many “big questions” are not open to empirical investigation in principle because they concern, for instance, whether a policy or practice is desirable or undesirable, right or wrong; empirical research, by its very nature, cannot answer such questions on its own (Weber 1917; Hammersley 2014). Furthermore, while it can, in principle, answer questions about what happened in some situation, about the features of particular events or institutions, and about causes and effects, it cannot tell us what will happen in the future. The best it can do is to provide some of the resources that are necessary for answering value questions and for anticipating future outcomes. Equally important is the issue of available material resources. There are research questions that it may be possible to address in principle at the present stage of inquiry but that would require a level of resources that is not likely to be available. This, again, should rule out some research questions, however important they may be. While there is much pressure to tackle “big,” or highly policy-relevant, questions, attempts to answer many of these will, very often, be futile, especially through a single study and with the level of resources usually available to most academics in the social sciences. So, epistemic integrity requires that researchers only tackle questions that are open to effective investigation, and it also demands that they are honest about the limits to this. Above all, it proscribes their pretending to have answered questions that their research did not tackle effectively and perhaps could not have tackled. While both the issue of what is worth investigating and what is realistically open to effective inquiry can be difficult to resolve with certainty, nevertheless some
decision about what to focus on must be made at the beginning of any research project, at least in broad terms; though, of course, the judgments involved can often be revised over the course of inquiry. Indeed, they may need to be revised, since all manner of contingencies can occur, anticipated and unanticipated, that can change what is a worthwhile and feasible focus of inquiry. So, the issue of whether the research questions being addressed are appropriate must be continually revisited as the research develops.
Resourcing Research

Not only must researchers take into account the resources likely to be available to them, they will also, of course, often need to bid for those resources, and here too issues arise that are relevant to epistemic research integrity. A first issue concerns the sources from which funds for research should not be accepted. Some potential sources may be ruled out on ethical grounds, most obviously organized crime, but perhaps some commercial and governmental sources too. However, epistemic integrity is also relevant where there are likely to be conflicts of interest involved with particular funders. For instance, if one were planning to investigate the reasons that lead people to quit smoking, one might reasonably hesitate before accepting funds from a tobacco company for this work (were they to be available). Closely related is the question of the terms on which funds are allocated, in other words what "strings" are (or may be) attached. It is not uncommon to find funding bodies laying down various requirements and restrictions. These may involve the provision of interim reports privately to them about the progress of the research (opening up the possibility of termination of the research if they are not happy with the direction in which it is going). They may also require permission to be sought before publication of the findings, or even insist on the right to embargo or modify research reports. Such restrictions can be contained in government contracts for research and in commercial ones as well. While they may be reasonable enough from the point of view of the funder, they threaten the process by which research can be carried out effectively, especially given the extent to which this is dependent upon the validation of any knowledge produced by the wider research community. For this reason, careful thought needs to be given to what is and is not legitimate here, from the perspective of academic research.
Moreover, any restrictions under which the research was funded should be made public in research reports. Another issue concerns how, in applying for funds, the research should be presented to the funding body. There may be a temptation to overestimate the value of the likely findings and to downplay the problems that could be involved in doing the research. One may also be inclined to exaggerate the scope of the inquiry that will be possible, for example proposing investigation of a larger sample than is likely to be feasible in practice, aiming at a higher response rate than will probably be achievable, proposing a more in-depth or more extensive analysis than will likely be possible, and so on. Clearly this verges on, if it does not amount to, dishonesty.
21
On Epistemic Integrity in Social Research
387
However, this is not a simple matter, any more than is deciding what restrictions exercised by a funder are acceptable. Prudential considerations are also involved. What if many other researchers oversell their proposed research, and only those who do this are successful in getting funds? What if the expectations of funding bodies are unreasonable about what can be investigated with a given level of resources? Does having integrity allow engaging in “reasonable exaggeration,” or does it rule this out? Even from a “strategic” point of view, one would, of course, be wise not to promise a great deal more than is likely to be deliverable. But should we at least adopt an “optimistic” rather than a “pessimistic” assessment of what will be possible? Here, as elsewhere, reasonable discretion must be exercised, but what is and is not reasonable is a matter of judgment and will clearly be open to dispute.
Use of the Existing Literature

A researcher has an obligation adequately to search for relevant literature and to provide sufficient information to readers about how this was done. Of course, searches can never be absolutely exhaustive. One reason for this is that time and other resources are scarce, and those that are allocated to searching the literature are not available for other activities, including reading and reviewing what is found. More fundamentally, there are often no built-in limits to what could count as relevant literature. Indeed, it would be better to think of relevant research literatures, since there are often different ones relating to different aspects of any set of research questions, as well as to the methods that it is proposed to use in investigating them. Moreover, in each case, what could be relevant extends indefinitely, potentially, so that some judgment has to be made about cut-off points. Nevertheless, if researchers do not search effectively for relevant literature, there is a danger that they will go over much the same ground without learning from the past. And in my experience this frequently happens. We should also note that what is relevant literature for a project may well change as research questions develop, and as methodological strategies are adapted to deal with emerging conditions and the developing process of inquiry. There is a need, then, to recognize the changing requirements of a project as regards use of the literature, and to carry out new searches as appropriate. It is thus unlikely that one will be able to write the final version of a literature review, to be incorporated in the research report, before one has collected and analyzed the data. Equally important, time and effort must of course be put into reading and assessing the relevant literature that has been found.
There are different kinds of reading, and for some purposes some relevant literature can simply be skimmed, but it is essential that the most relevant material is studied in depth and with care. Furthermore, crucial to this is making assessments of the likely validity of the findings of studies, and of how well those studies were carried out, with a view to learning all that can be learned from them for the purposes of one’s own research. There is danger in relying entirely on secondary sources, such as previous reviews of the literature, accounts in textbooks, etc., since these may be inaccurate. At the same
388
M. Hammersley
time, it is important that relevant secondary literature is given attention, particularly that which itself engages in critical assessment of the works concerned. Failure to do this is not uncommon. An example, on a considerable scale, can be found in research that draws on so-called “post-structuralist” French philosophy, where Derrida, Foucault, Deleuze, et al. are quoted from English translations of their work, but without much apparent attention to the substantial critical literature, in French and English, that has grown up around this work. There is also a research integrity issue regarding how one should present relevant previous studies in research reports. There may be a temptation here to downplay or even misrepresent their contribution, in order to clear the way for one’s own study. This is perhaps especially likely when studies are used as illustrations of the inadequacies of previous investigations, or as exemplifying a misguided approach. Unfortunately, it is not uncommon for previous work to be caricatured in this process, which is not to deny that it may well have major failings. Research integrity requires that the literature is shown appropriate respect; in other words, sufficient effort must be made to grasp what previous authors have done and why; the evidence and methods they have used; and the reasons for the choices they made in carrying out their work. It is easy to pigeon-hole studies under a general category, or to dismiss them completely on the basis of some perceived flaw, or on the grounds of their supposed political commitments or motives. Equally to be avoided is the misinterpretation of previous work where it is used as a positive source or model for one’s own work: here, there may be a tendency for criticisms that have been made of these studies to be neglected or downplayed. 
It is not uncommon for past work to become seriously misrepresented in this way; indeed, a tradition of misrepresentation sometimes builds up that results in authors routinely being taken to have argued almost the opposite of what they did in fact argue.7 Once again, this points to the need to read the original work, not just commentaries on it. More specifically, care must be taken in using quotations from other people’s writings, to try to make sure that these are not “presented out of context,” in the sense that, as quoted, they carry different implications from what seems originally to have been intended. It is not unusual, for example, to find a quotation being used from part of a text where the author was presenting two sides of an argument, while the quotation only relates to one of these sides. Errors may also occur where a quotation cuts out words from within a sentence, even when the omission is indicated. For example, a qualification an author attaches to a statement may be omitted. Such errors can occur inadvertently as a result of relying on one’s notes rather than going back to the original source. Checking the sources of quotations is essential. Much more obviously, integrity requires that plagiarism is avoided: the incorporation of others’ words into one’s own writing without any indication that quotation
7. Examples include Becker’s article “Whose side are we on?” (Becker 1967), on which see Hammersley (2000, Chap. 3), and Rosenthal and Jacobson’s (1968) Pygmalion in the Classroom. Sometimes this sort of distortion derives from the feeling that some ideas or findings are “too good to be false” (see Hammersley 2011, Chap. 5).
is involved or without appropriate citation. Also to be guarded against, though, is the inclusion of too much quotation, a practice for which a justification has been offered in the notion of “uncreative writing” (Goldsmith 2011). Sometimes this is associated with a failure to read carefully what has been quoted, and frequently it involves a failure to consider how it could have been expressed better. But, equally important, excessive quotation amounts to failing to take full responsibility for the argument being presented; even though, of course, direct quotation is often essential to illustrate what is being discussed. While the literature must be respected, this does not, of course, mean that it should not be critically assessed, especially in the case of the key studies directly relevant to one’s work; and this assessment must be made explicit. There is an obligation to engage in such criticism, and this can focus on a number of features: the concepts used – how well-formulated they are, how appropriate, and how well they are used; the formulation of research questions; the selection of cases for investigation; the types of data employed and how these are presented in the report; and the likely validity of the findings and the conclusions. Such evaluations are sometimes done with a view to making a case for a new research proposal or a new study, and I noted the dangers of bias resulting from this earlier. But reviews can be research products in their own right, designed to summarize what is known about some topic. Where this is done for a lay audience, there may be a danger of oversimplification or a threat of bias resulting from a desire to maximize the “impact” of the research findings, or even to move a policy issue up the public agenda or to push policy in a particular direction. These, too, are risks to be guarded against in the name of research integrity.
Selecting Cases for Investigation

It is an obvious requirement that the cases selected for study must be appropriate for the research questions being investigated. However, cases can serve different functions. Is the aim to select one or more cases that are typical of some category or representative of some population? If so, what category or population is involved, what exactly does “typicality” or “representativeness” mean in this case, and how can it be best achieved? Or is the aim comparative analysis of some sort? In which case, decisions need to be made about whether the aim is to minimize or maximize differences between the cases, as well as about which differences are relevant. Of course, a case may be selected initially as an instance of something interesting and/or important, with a view to providing a description and explanation of its unique features. But in most research, including much qualitative inquiry, more general claims will come to be made explicitly or implicitly, and when this happens the rationale for generalization (empirical or theoretical) needs to be considered and stated. There is a whole host of issues here, then, that researchers must take into account if they are to do their research well. A number of ancillary points can be mentioned. One is that while sampling based on statistical theory can be a very useful technique, it does not, in itself, facilitate the
task of selecting cases for comparative analysis. Nor is it always feasible or necessary even when the concern is with what is typical or representative. Its use is certainly not mandatory for research integrity. What is mandatory is attention to how well the sampling strategy adopted serves the purposes of the research, both in principle and in practice (for example, when “nonresponse” is taken into account). It also needs to be remembered that if the research questions change over the course of an inquiry, it may be necessary for the sampling strategy to be modified: what that strategy must serve is the research questions that are eventually addressed in the research report, not those that were initially formulated. Thus, where the aim is generalization to some finite population, the nature of this population must be made clear, and the grounds for assuming that the case(s) studied are representative examined. Similarly, where the goal is to identify some conditional causal relationship among categories of phenomena, the nature of that relationship and the evidence for concluding that it operates must be carefully considered. Neither kind of generalization is unproblematic in the context of social science (Gomm et al. 2000; Cooper et al. 2012; Hammersley 2014) and achieving either with a high level of likely validity is challenging. It is important that there is honesty about what is being attempted, and the degree of success likely to be achieved or actually achieved. The other side of this issue is that, given that it is not possible to represent any case exhaustively, there must be clarity about which aspects of cases are to be represented and why. Of course, in the early stages of research, what is relevant may not be clear to the researcher: it is unavoidable, indeed often desirable, to operate on the basis of “hunch,” relying on one’s best judgment about what will be fruitful.
But, as the research goes on, the rationale for what is being focused on and what ignored in studying particular cases (along with why study of these cases is appropriate) needs to be made as explicit as possible and evaluated.
Data Collection or Production

A first question that needs to be addressed under this heading is whether, in fact, new data are required. There should be at least some consideration given to whether the necessary data are already publicly available or can be accessed in an archive. There is now a considerable amount of archived research data, especially quantitative but also qualitative; it is often argued that this is underutilized. While, very often, these data will not be sufficient to address the research questions of a new study, they may sometimes be; and they can also be a worthwhile supplement to new data. Given that what type of question is being addressed can make a significant difference to what (if any) new data need to be collected, careful attention must be given to this relationship, throughout the research process. For example, if the aim is to produce a description of some phenomenon, what is necessary will be different from where the aim is to explain the occurrence or nature of that phenomenon. Producing explanations still requires descriptions (of cases in which what is to be explained occurs or is absent, and perhaps also of those in which various potential
explanatory factors are present or absent, or are present to some degree), though here the descriptions will need to be tailored to the explanatory task. And explanation also requires some sort of comparative analysis, even if this amounts to a thought experiment rather than a systematic comparison of actually existing cases of different kinds. Careful consideration needs to be given to what is the most productive comparison, and what conclusions can be drawn from it. And some assessment of this ought to be provided in research reports. Furthermore, there can be change in the requirements of description and explanation, as the research questions become clarified and the research process becomes more progressively focused. All this reinforces a point made earlier: that integrity requires that we be as clear as possible at each stage of the research about our goal, and about the requirements of achieving it, making adjustments to the research design as appropriate. Even with clarity about the intended product of the research, how to obtain the data required is usually by no means a straightforward matter: it requires the exercise of intelligence if the research is to be done well – which is what integrity requires our goal to be. For example, setting up interviews with people and asking them questions designed directly to answer our research questions will rarely be effective. Instead, ways will usually need to be found to gain data that will enable us to answer the research questions indirectly. Another important element of epistemic integrity is that the researcher must consider the full range of methods that could be used to obtain data relevant to answering the research questions. There may be a tendency for researchers to choose from a relatively narrow range of methods; it has been claimed, for instance, that there is an increasing tendency among qualitative researchers to opt immediately for interviews. 
It is important to remember not only that there are several other sources of social data available (observation, use of documents) but also that interviews (like other methods) can be carried out in a variety of ways: the number of participants may vary (on both sides), as can where the interviews are carried out (for example, on whose territory), the projected length, whether a single interview is to be employed or repeated interviews, the sorts of question to be asked (not just whether these are strongly or weakly structured but also what form they take – such as invitations to reflect, requests for detailed description, challenges to claims that have been made, etc.), whether prompts of various kinds are to be used (photographs, video extracts, magazines, etc.), and so on. Given that there is an array of methods that social researchers can employ, and internal diversity within the use of particular methods, careful (and continual) attention must be given to the particular manner in which data are to be collected.8
8. Of course, deciding what data are required and how to obtain them is not simply based on what is most appropriate for the initial research questions. Indeed, as I have noted, these questions may change, not least as a result of the data collected – an interactive process is involved here. Also relevant is the existing competence of the researcher in relation to particular methods. Few, if any, researchers can be competent in the use of all methods. However, what this ought to mean is that research questions are selected partly in terms of whether they are amenable to the methods which the researcher is able to deploy.
Another consideration often involved in the selection of methods concerns the assumptions built into the use of particular methods. For instance, there are researchers who believe that the goal of science is causal explanation of phenomena, and that experimental method is the gold standard in pursuing this goal. At the other end of the spectrum are those who insist that human experience and social life are too complex to be grasped in causal terms, and that the first priority is detailed description of personal experience and/or of behavior. There are clearly fundamental – and, in some respects at least, reasonable – disagreements here about what is a possible and desirable research product, and about how it can best be achieved. Given this, neither of these views can be legislated as part of research integrity. However, at the same time, integrity extends beyond “being true to one’s paradigm.” There is an obligation to reflect carefully on the assumptions built into whatever approach is being used, and its competitors, and to modify one’s position, as appropriate, on the basis of these reflections (not least because paradigms typically come in changing varieties, and because in practical terms the differences among them are often less than claimed). A further point is that, in using particular methods, there is an obligation to employ these in ways that reflect an understanding of what has been learned about them in the past by other researchers. This means that some familiarity with the methodological literature is required, but also reflection upon the method and how it would be best to use it in one’s own project. Often this is a matter of balancing different potential features of a method. For example, an essential element of interviewing is to listen very carefully to what informants say. 
This is particularly important in relatively unstructured interviews, where the next question one asks should usually be based upon what the informant has just said (rather than following a prearranged sequence of preformulated questions); but listening is important to some degree in all kinds of interviewing. Any tendency to force what people say into one’s own framework must be resisted. At the same time, people do not always tell, or know, the truth, nor do they always produce responses that are authentic in other respects (they may say what they assume the researcher wishes to hear, or what would show them in the best light). Nor is what they say always relevant to the research, though care must be exercised before treating it as irrelevant. Therefore, it is sometimes necessary to challenge what people say, or to stimulate them to reflect on what they have said, or to test out the implications of what they seem to be saying, or to steer them back to what is relevant to the research. This is not incompatible with listening to people carefully, with a view to understanding the logic and validity of what they are saying. However, in the context of the interview, and in that of analysis, there may well be a tension between these two concerns. Integrity requires that an appropriate balance is struck, as far as possible. Of course, once again, what this means in any particular case is a matter of judgment and may well be contestable. A rather different aspect of research integrity relating to data collection is that the researcher will need to resist attempts, for example, by gatekeepers who control access to informants or to particular settings, to shape what data are to be collected, and perhaps also how they are analyzed. This is a source of potential bias, and it is
the responsibility of the researcher to try to minimize it, along with other possible biases. Of course, this may well be implicated in complex negotiations, in which ethical and prudential considerations will also need to be taken into account.9 Nevertheless, judgments about what restrictions to accept, and which to challenge, must treat methodological considerations as a priority. Even aside from attempts by gatekeepers, and perhaps also by participants, to shape the data and analysis in particular directions, demands may be made on researchers, for example for services of various kinds, that may affect what data can be collected and/or reduce the time available for collecting, processing, and analyzing the data. For instance, in the case of research in schools, it has been quite common for researchers who are trained teachers (and sometimes for those who are not) to be asked to “look after” a school class for some period of time or even to do supply teaching. The response made to such requests (or demands) must take account of the consequences for the research, as well as of ethical considerations regarding the students. At the same time, of course, other considerations, including ethical ones regarding reciprocity, may lead to such services being provided, despite the fact that they reduce the time available for data collection, or may affect the type of data produced.10 Deciding what sorts of data are required and how to obtain them is often conceived as a matter of research design, and this is reasonable so long as we do not assume that the process of design takes place entirely at the beginning of research – it goes on throughout inquiry, even if it is more of a priority in the early stages. Also important to recognize is that decisions will have to be made in conditions of considerable uncertainty – particularly about what the consequences of any decision will be for the data collected and for the analysis. 
The aim can only be to make the most reasonable judgment at each point in time. Once an initial decision about the types of data to be used has been made, and the process of data production has begun, there must be continual monitoring of what data have been obtained, along with reflection on how well these data can serve to answer the research questions, what further data
9. Prudential considerations here relate to such matters as whether a particular method would carry a serious risk of some sort for the researcher. For example, while the best data for a study may come from direct observation by the researcher in some setting, careful consideration should be given to what this would involve and the dangers that may be associated with it. For a discussion of this issue in the case of qualitative research, see Bloor et al. (2010). Ethical considerations are relevant as well, of course, but I leave these aside here because they have been well-covered in the literature on research ethics.

10. While it may be possible to collect important data through being a participant, it must be remembered that the demands of a role such as that of teacher may prevent collection of much of the data that an observer in the classroom could obtain, even if there is experiential data made available thereby that an observer would have less easy access to. Furthermore, playing an established role in the field will shape one’s relationships with others, beneficially in some ways perhaps (for example, to continue with the example, improving relations with other teachers in the school) but also perhaps in undesirable ways (for example, in shaping relations with the students).
may be required, how the data collection process will need to be modified to provide what is required, and so on. In flexible forms of qualitative research, especially, this will be an extremely demanding task, and one that can only be achieved in a rough and ready way at best. However, such ongoing attention to research design is essential, and this is true of quantitative research too. Even when this requirement is met, it is likely that at the end of the project, looking back, one may well see gaps in the data that it might have been possible to have avoided, so that a revised evaluation is reached about some of the decisions made during data production. But this is almost unavoidable. What is important, from the point of view of research integrity, is that there is honesty about any weaknesses in the data collection process and about their implications for the likely validity of the findings. It may be that they undercut the possibility of reaching any sound conclusion at all about the main research questions; if this is the case, it must be acknowledged. However, usually, weaknesses in the data production process simply indicate that qualifications are required regarding the likely validity of particular findings. And these could perhaps be remedied through further research.
Data Analysis

What is required here can be described fairly simply: development of the most appropriate mode of analysis for answering the research questions, as currently constituted, while also taking account of the nature of the data. However, even more than data production, analysis is by no means simply a matter of choice from a range of well-defined options. The most obvious axis of variation is quantitative versus qualitative analysis: between an approach in which data are structured so as to provide counts, rankings, and/or measurements, and one that assigns data to various categories or themes that are not mutually exclusive or exhaustive, and may be significantly reformulated in the analytic process.11 In the first approach, a clearly defined set of categories, or scale, is developed within which each relevant data item can be assigned to a unique place, with the category system or scale being exhaustive of the relevant data. In the second approach, the data are assigned to categories in a way that is flexible and multiple, these categories serving as
11. It is important to recognize that what is involved here is not a simple dichotomy but a multidimensional space, so that there is much scope for variation. We should note, for example, that Qualitative Comparative Analysis and perhaps also Analytic Induction, methods that may be qualitative rather than quantitative in other respects, require a categorization process that allocates items to one and only one of some set of categories, by contrast with grounded theorizing and many other kinds of qualitative analysis. Furthermore, there is no reason, in principle, why thematic categories cannot be refined and developed into the sort of category system required for counting and recording frequencies.
sensitizing concepts that are refined or modified so as better to serve the development of explanatory ideas. However, there is a range of strategies under these two broad headings. If the data have been structured in a way that allows counts, rankings, or measurements, there is still considerable scope for variation in mode of analysis, from the use of relatively simple descriptive statistics (percentage differences, rates, indexes, averages and measures of variation, and so on) to much more sophisticated techniques that can be very demanding in the requirements they place upon the data. These techniques can vary, too, according to whether they are aggregate-based or case-focused (see Byrne and Ragin 2009).12 Similarly, if the data have been structured in a looser way to facilitate the development of “thick descriptions” or explanatory theories, there are various possibilities regarding how this is to be done, from grounded theorizing to narrative or discourse analysis.13 Choice among ways of analyzing the data, as well as the particular substantive categories or scales developed, must show due care and diligence, avoiding overly superficial analysis, but also eschewing the use of complex techniques that make excessive demands on the data available. Up to now the aim has been to outline the sorts of reflection required in the selection of analytic strategies. Also relevant under this heading, as under others, is an obligation to minimize the risk of bias – in other words, systematic rather than haphazard error. Bias in analysis can arise from a number of sources. One of these is the background preferences of the researcher: most of us will find some lines of analysis and some conclusions much more appealing than others, and may even find some unpalatable. But we should be prepared to pursue whatever conclusions appear most likely to be true: we must follow the analysis wherever it leads. 
A second potential source of bias is a concern to produce “big news,” or to generate positive rather than negative findings. These types of conclusion may well be desirable, but they must not be forced out of the data. As noted earlier, there are often external pressures as well as internal temptations to do this. A further kind of bias, particularly relevant in qualitative research, is a concern with producing a coherent story: the danger here is overlooking inconsistencies in the dataset and/or bending data to fit what is taken to be the emerging picture or theme. It is a fact of life that the world is complex, so that there may be apparent contradictions. For instance, people’s perspectives are by no means always internally consistent – there may also be situational variation in what they say and do – and this must not be “tidied up,” at least not without a clear indication of what has been done. Similarly, the narrative process of events is often complicated and uncertain, sometimes meandering back upon itself, and there is a temptation to reduce it to a simpler, more direct, pattern. But this temptation must be resisted, at least up to the point when an explicit process of modelling begins, which necessarily involves simplification.
12. There are also more specific issues, for example, about how far to go in data reduction, where there may be a trade-off between delicacy in representing variation and ensuring that categories have a sufficient number of cases in them for statistical or some other form of analysis.

13. Significantly, there are several versions of each of these approaches.
M. Hammersley
Reporting the Research

A first requirement here is that the research questions eventually addressed (which as noted earlier will almost always be somewhat different from those identified at the start) are presented clearly and distinguished from the assumptions that have underpinned the research. This may seem obvious, but it is not uncommon to find research reports that do not clearly present the specific research questions being addressed, or which give the impression that rather grander ones have been tackled, and/or that conflate findings with what appear to have been guiding assumptions.

Also required is that sufficient information is provided about how the research was carried out. In some fields, notably psychology, what may be demanded is sufficient information for the research process to be replicated. But the more basic requirement is sufficient information for readers to understand what was done and why, and for them to be able to assess the likely validity of the findings by taking account of potential threats to validity. It may be added that too much information about how the research was carried out, and/or about the researcher, can be almost as bad as giving insufficient information. This is because it clutters up and obscures the necessary information provided, or it may deprive other parts of the report of sufficient space. However, just what level and kinds of information are necessary, and how this should be presented, is not determinable precisely in general terms – reasonable judgments are required. And it may turn out that further information needs to be supplied in response to requests and criticisms from audiences.14

It is also important that sufficient evidence is provided in research reports. Here, again, it may be difficult to determine what is required. It is necessary to think about what functions the provision of evidence is intended to serve.
It has been argued that this ought to allow readers to replicate the analysis carried out by the researcher – this is sometimes suggested by conversation analysts, for instance, and such replication is also possible to a degree with some quantitative research and with Qualitative Comparative Analysis. However, this requires that all of the evidence is supplied to readers.15 Once again, though, the basic requirement is that sufficient
evidence is provided in research reports to allow readers to assess the likely validity of the findings. Most empirical research reports provide only summaries and/or small samples of the data, and very often this will be all that is necessary. Once again, though, researchers must be prepared (where possible and ethical) to provide more evidence should this be requested by audiences.

A further requirement is that the findings are presented consistently as neither more nor less likely to be true than is reasonable. Knowing what likely validity to assign to conclusions is not easy, and is never a precise matter, but here too sound judgments can be made – and unreasonable ones are usually easily identifiable by the researcher, if not always by readers. It might be thought that researchers would never put forward their findings as less likely to be true than they actually are, but this sometimes occurs in parts of a research report where they are anticipating criticism; for example, the research may be presented as only exploratory, whereas elsewhere the findings are put forward as conclusive. Qualitative researchers, in particular, sometimes oscillate between emphasizing the tentative character of their research findings and presenting these confidently as true, even if they are hesitant to use that word.

Another issue concerns the audience for research reports. There is a great deal of pressure on researchers today to address lay audiences and to maximize the "impact" of their work thereby. However, in my view, communication with lay audiences ought usually to take the form of a review of all the relevant literature, rather than the presentation of findings from a particular study. Indeed, it can be suggested that promoting the findings of a single study in the public sphere is an offence against research integrity. This is because those findings will not yet have been subjected to critical appraisal by fellow researchers, and judgments about their validity must take account of the findings of other relevant studies. Of course, there is a great deal of pressure on researchers at the present time to make their findings immediately and widely available, in order to justify public expenditure on research. This is one of several respects in which epistemic integrity is under pressure today.

In line with a point made earlier about research questions, epistemic integrity also demands that the "findings" presented must not be of a type that empirical research cannot validate on its own. In particular, they should not be practical evaluations and prescriptions. Such value conclusions can only be legitimate if put forward in conditional terms, in other words as dependent upon the adoption of a particular set of value judgments. However, it is not uncommon for them to be presented as if they derived directly from the research evidence; and, often, the value assumptions involved are not made explicit, let alone provided with any justification. Also ruled out by the requirements of integrity, in my view, are admittedly fictional accounts based on research data. Fictions have long been of value in scientific research in the form of idealizations and composite types, where their function is either to facilitate the production of knowledge or to summarize

14 It perhaps needs to be underlined here that what information is necessary may vary according to the audience being addressed. For instance, there is an interesting and difficult question about how much information ought to be provided about methodology to lay audiences: they often have little interest in this, and yet they ought to take it into account in evaluating the findings.
15 This is rarely possible within the constraints of an article or even a book. It may be possible, of course, for the data to be supplied in appendices or online, though the protection of confidentiality may be a barrier to this. There is also the problem that in some kinds of qualitative research, notably ethnography, even if all of the recorded data were archived this would not give access to what are sometimes referred to as "headnotes": memories and tacit knowledge built up by the researcher during the course of fieldwork (Sanjek 1990; Pool 2017; van der Port 2017). It is perhaps also worth emphasizing that, even where all the evidence is provided, readers must still exercise trust in accepting what is presented as authentic, at least until there are signs that trust is not warranted. Research cannot be made "fully transparent." And there are deep questions about what level and kind of trust it is reasonable to expect readers to have in researchers – here we go back to the problem of a post-truth world and the question of whether social science research has itself become corrupted.
it. However, some qualitative researchers have presented their findings in the form of poems, stories, or plays (examples can be found, for instance, in the journal Qualitative Inquiry).16 This falls foul of scientific integrity, in my judgment, and indeed many of the researchers adopting this mode of presentation would deny any commitment to science, advocating "arts-based research" instead. Presumably, a different notion of integrity applies to this.

Finally, there is the extreme case of research fraud, where findings and perhaps also research procedures are simply invented. This can take a variety of forms, from the supplementation of data with additional cases that are imaginary to research reports that rely entirely on fabricated data presented as genuine. There have been a number of scandals relating to fraud of these kinds, from that surrounding the work of the British psychologist Cyril Burt (see Tucker 1997) and the American anthropologist Carlos Castaneda (see de Mille 1978, 1990) in the 1970s to more recent ones concerning the Dutch social psychologist Diederik Stapel (see Levelt et al. 2012) and the anthropologist Mart Bax (see Baud et al. 2013). Similar scandals have occurred in the natural sciences, and the publication of fraudulent findings and data is perhaps the most serious threat to epistemic integrity of all.

In general, it should be clear from my discussion that in order to maintain integrity, researchers must continually assess the judgments they have made and reflect on their character and consequences, as well as on their implications for future decisions. At the core of this assessment are assessments of the validity and worth of what they are producing, as well as the effectiveness (alongside ethicality and prudence) of what they have done. And some of these reflections may need to be included in the research report.
Engaging with Critics

In my view, the research production process does not end with the publication of a research report: the dialectic of communal assessment is an essential element of it (Hammersley 2011, Chap. 7).17 Any knowledge claims produced by a single study must be assessed by the relevant research community. And, for this to be done effectively, the researcher must engage with colleagues, not least with those who may be sharply critical of the study. Of course, the researcher will have already engaged in this dialectic in producing a literature review, but this engagement must continue after the research report has been published. In the course of discussions in the research community, issues and arguments may surface that did not emerge for
16 This journal also sometimes publishes poems not based on research, see for instance Hammersley 2019.
17 There are further aspects of integrity associated with playing the role of a reviewer for journals, publishers, and funding bodies; editing journals; evaluating colleagues in appointment and promotion committees; and so on. In relation to editing journals and refereeing articles, there are questions not just about detecting fraud, or about what is and is not worth publishing, but also about the danger of publication bias arising from the failure to publish statistically nonsignificant, inconclusive, or negative findings. There are also, of course, issues of integrity relating to academic teaching.
the researcher in the course of carrying out the investigation, as well as ones that did. What is required in such engagement is that researchers seek to understand any criticisms properly, and to respond to them in a way that contributes to the collective task of building knowledge. In particular, criticisms must not be immediately dismissed as the product of ignorance, incompetence, malice, or political commitment. Of course, not all criticism of research will be accurate, judicious, or well-intentioned, but the researcher's starting assumption must be that it is – even if this judgment comes to be revised later.

I do not pretend, of course, that this dialectic currently operates well in all research communities. There are several respects in which the process frequently falls short (sometimes a long way) of what ought to happen. For one thing, because of the sheer volume of articles and books now produced, the findings of many studies are never assessed in a sustained way, or perhaps even at all. This may reflect the treatment of publications by prevailing regimes of research governance as "outputs" to be ranked and the pressure to produce them: reviews do not count for much in this process of assessment. But, even where publications are reviewed, what is done often does not meet the requirements I outlined earlier. For example, it is common to find book reviews that do not provide a clear and accurate account of the arguments and evidence presented in the book, and/or that make little critical assessment of these. At the other extreme, reviews sometimes engage in dogmatic criticism – methodological, theoretical, ethical, or political. Furthermore, where there have been disputes about particular studies, it has been quite common for there to be failure, on one or both sides, to engage with the arguments of the other.18
Conclusion

At the beginning of this chapter, it was noted that the concept of research integrity, or researcher integrity, has become prominent in recent years, especially in official documents relating to research governance (see Banks 2015, 2018; see ▶ Chap. 5, "Research Ethics Codes and Guidelines," this volume), but that there is some uncertainty about the meaning of the term. It was proposed that it should be treated as an overarching concept that incorporates both those issues that have been central to most discussions of research ethics – such as minimizing harm, preserving privacy, and respecting the autonomy of research participants – and those epistemic values and virtues that relate to the goal of research: the production of knowledge (see Hammersley 2020). Here, my focus has been entirely on this second set of considerations, because their scope and character have not been sufficiently recognized.
18 All of this raises the interesting question of what are the necessary preconditions for the healthy operation of research communities (see Hammersley 2002, Chap. 5). A good case could be made that current conditions are increasingly inhospitable to the cultivation of epistemic research integrity.
Epistemic integrity is especially important at the present time when, more than ever, there is skepticism on the part of wide sections of the public about expert knowledge claims, as well as a considerable disregard for the truth of arguments and evidence – the overwhelming preoccupation often being whose interests they support or damage. In such a climate, it is essential that we have a clear sense of what epistemic research integrity is, and that we try to live up to its requirements. However, this cannot be achieved, as some official pronouncements seem to suggest, simply by following a set of methodological and ethical injunctions. Instead, it has been argued here that integrity necessarily relies upon the exercise of judgment by researchers.

One way of conceptualizing the nature of the judgment required is that it is aimed at achieving "reflective equilibrium." This requires us to start from our sense of what is the right thing to do, about which we may experience conflicting feelings; then to try to identify what general principles underpin these; and, finally, to think about how these principles can be reconciled among themselves and with the facts of the case concerned (see Elgin 1996).

Of course, in the climate of skepticism that has fed the rejection of expertise, any such appeal to judgment is immediately suspect. But this skepticism derives from a false dichotomy between the supposed "transparency" of applying rules, on the one hand, and the obscurity, inconsistency, and possibly even nefarious character, of "subjective judgment," on the other. While challenging this myth may be difficult, the fact remains that the production and assessment of knowledge necessarily relies upon judgment, and the quality of this can vary according to the degree of relevant knowledge and experience deployed. The likely validity of research findings depends upon the quality of the judgments made by researchers – these are at the heart of research integrity.
References

Alvesson M, Gabriel Y, Paulsen R (2017) Return to meaning: a social science with something to say. Oxford University Press, Oxford
Banks S (2015) From research integrity to researcher integrity: issues of conduct, competence and commitment. Paper given at Academy of Social Sciences seminar on Virtue ethics in the practice and review of social research, London, May 2015. https://www.acss.org.uk/wp-content/uploads/2015/03/Banks-From-research-integrity-to-researcher-integrity-AcSS-BSA-Virtue-ethics-1st-May-2015.pdf
Banks S (2018) Cultivating researcher integrity: virtue based approaches to research ethics. In: Emmerich N (ed) Virtue ethics in the conduct and governance of social science research. Emerald, Bingley. (This is a revised version of Banks 2015)
Baud M, Legêne S, Pels P (2013) Circumventing reality. VU University, Amsterdam. https://www.vu.nl/en/Images/20131112_Rapport_Commissie_Baud_Engelse_versie_definitief_tcm270-365093.pdf
Becker HS (1967) Whose side are we on? Soc Probl 14:239–247
Bloor M, Fincham B, Sampson H (2010) Unprepared for the worst: risks of harm for qualitative researchers. Methodol Innov Online 5(1):45–55. http://journals.sagepub.com/doi/pdf/10.4256/mio.2010.0009
Byrne D, Ragin C (eds) (2009) The Sage handbook of case-based methods. Sage, London
Cooper B, Glaesser J, Hammersley M (2012) Challenging the qualitative-quantitative divide: explorations in case-focused analysis. Continuum/Bloomsbury, London
de Mille R (1978) Castaneda's journey. Abacus, London
de Mille R (ed) (1990) The Don Juan papers. Wadsworth, Belmont
Elgin C (1996) Considered judgment. Princeton University Press, Princeton
Fish S (2014) Versions of academic freedom. University of Chicago Press, Chicago
Gibbons M (2000) Mode 2 society and the emergence of context-sensitive science. Sci Public Policy 26(5):159–163
Goldsmith K (2011) Uncreative writing: managing language in the digital age. Columbia University Press, New York. See http://veramaurinapress.com/pdfs/Kenneth-Goldsmith_uncreative-writing.pdf
Gomm R, Hammersley M, Foster P (eds) (2000) Case study method. Sage, London
Hammersley M (2000) Taking sides in social research: essays on partisanship and bias. Routledge, London
Hammersley M (2002) Educational research, policymaking and practice. Paul Chapman/Sage, London
Hammersley M (2011) Methodology, who needs it? Sage, London
Hammersley M (2014) The limits of social science. Sage, London
Hammersley M (2016) Can academic freedom be justified? Reflections on the arguments of Robert Post and Stanley Fish. High Educ Q 70(2):108–126
Hammersley M (2019) A paean to populist epistemology. Qual Inq, Online First 13 July 2019
Hammersley M (2020) The ethics of discourse studies. In: De Fina A, Georgakopoulou A (eds) Handbook of discourse studies. Cambridge University Press, Cambridge
Hammersley M, Traianou A (2012) Ethics in qualitative research. Sage, London
Huff AS (2000) Changes in organizational knowledge production. Acad Manag Rev 25(2):288–293
Israel M (2014) Research ethics and integrity for social scientists, 2nd edn. Sage, London
Leith S (2017) Peak bullshit: how the truth became irrelevant in the modern world. Times Literary Supplement, 18 and 25, pp 3–4
Levelt W, Drenth P, Noort E (2012) Flawed science: the fraudulent research practices of social psychologist Diederik Stapel. Commissioned by Tilburg University, University of Amsterdam and the University of Groningen, Tilburg.
https://www.tilburguniversity.edu/upload/3ff904d7547b-40ae-85fe-bea38e05a34a_Final%20report%20Flawed%20Science.pdf
Macfarlane B (2009) Researching with integrity: the ethics of academic enquiry. Routledge, London
Novotny H, Scott P, Gibbons M (2001) Re-thinking science: knowledge and the public in an age of uncertainty. Polity, Cambridge
Polanyi M (1962) The republic of science: its political and economic theory. Minerva 1(1):54–73
Pool R (2017) The verification of ethnographic data. Ethnography 18(3):281–286
Rosenthal R, Jacobson L (1968) Pygmalion in the classroom. Holt, Rinehart and Winston, New York
Sanjek R (1990) A vocabulary for fieldnotes. In: Sanjek R (ed) Fieldnotes: the making of anthropology. Cornell University Press, Ithaca
Shamoo A, Resnik D (2015) Responsible conduct of research, 3rd edn. Oxford University Press, Oxford
Traianou A (2015) The erosion of academic freedom in UK higher education. Ethics Sci Environ Polit 15(1):39–47
Tucker W (1997) Re-considering Burt: beyond a reasonable doubt. J Hist Behav Sci 33(2):145–162
van der Port M (2017) The verification of ethnographic data: a response. Ethnography 18(3):295–299
Weber M (1917) The meaning of "value freedom" in sociological and economic sciences. English translation in Bruun H, Whimster S (eds) (2012) Max Weber: collected methodological writings. Routledge, London, pp 304–334
Weber M (1919) Science as a profession and a vocation. English translation in Bruun H, Whimster S (eds) (2012) Max Weber: collected methodological writings. Routledge, London, pp 335–353
Ziman J (2000) Real science. Cambridge University Press, Cambridge
Ethical Issues in Data Sharing and Archiving
22
Louise Corti and Libby Bishop
Contents

Introduction 404
Ethical Issues and Data Sharing 405
Ethical Frameworks for Research 406
Relevant Legal Frameworks for Data Sharing 406
The General Data Protection Regulation (GDPR) 407
Statistics and Registration Services Act 2007 408
Duty of Confidentiality 408
Chatham House Rule 409
Informed Consent and Data Sharing 409
When to Seek Consent 411
Informed Consent and Unknown Future Use 412
Sharing Data Where No Consent for "Data Sharing" Was Gathered 413
Access to Data: Safeguards 414
Disclosure Risk Assessment 414
Anonymization 415
Governance Strategies for Sharing Data 417
Safe Access to Data: The Five Safes Framework 418
Case Studies of Successful Data Sharing: Ethical Solutions 420
New Forms of Data and Ethical Challenges 420
Meeting the Transparency Agenda and Data Sharing 422
Conclusion 423
References 423
L. Corti (*) UK Data Archive, University of Essex, Colchester, UK e-mail: [email protected] L. Bishop Data Archive for the Social Sciences (GESIS), Cologne, Germany e-mail: [email protected] © Springer Nature Switzerland AG 2020 R. Iphofen (ed.), Handbook of Research Ethics and Scientific Integrity, https://doi.org/10.1007/978-3-030-16759-2_17
Abstract
Worries about the status of informed consent, or about unintended use of personal information collected in the research process, are often viewed as an obstacle to collating existing data for reuse. It may seem that when research "data" are shared, there is a risk of eroding the trust initially established in the researcher-participant relationship. With the General Data Protection Regulation (GDPR) in Europe gaining so much recent attention in the world of research, there can be confusion between the roles that ethical, legal, and moral duties play and represent, and how the principles of open science can mesh with the concept of open data. In this chapter some of the issues as they relate to data sharing are elucidated, and how access to data can be safely and successfully managed through a robust and trusted governance framework is demonstrated. The principles of research integrity can be met while paying close attention to ethical duties. Challenges for non-research sources, where various protections that usually apply in the traditional social research data life cycle may not have taken place, are also covered. With the potential risk of disclosure that increases with the capacity to link multiple data sources, the use of informed consent and the "Five Safes" protocol to ensure safe access to data are considered. Finally, with the pressure for research transparency and reproducibility high on the research agenda, the practical steps that can be taken to document how data have been collected and treated are considered, ensuring that ethical processes are captured.

Keywords
Research ethics · Data sharing · Qualitative data · Informed consent · Legal issues · Data protection · Five Safes · UK Data Service · Data archiving
Introduction

Citizens and researchers cannot have failed to note the increasing drive for openness and sharing, value and transparency in their daily lives. The General Data Protection Regulation (GDPR) and open access, as well as concerns about how data has been (mis)used in referendums and elections, have all raised public awareness of this push. Research data sharing and publishing arose not out of the driver for validation and replication but from a desire for open science. The 2007 report OECD Principles and Guidelines for Access to Research Data from Public Funding declared data a public good and pushed forward action on realizing open science (OECD 2007). This data sharing ideology promotes value for money through subsequent analysis of data, opportunities for collaboration and comparison, and research integrity through richer exposure for methods and validation of research results. The Royal Society (2012) further proposed that researchers and institutions should
increase openness and recognize the importance of data. These high-level calls have been answered through funders mandating data sharing, the collective use of common standards for data publishing, an appreciation of the art and role of data management, and the demand for open-source data and tools.

Worries about the status of informed consent or unintended use of personal information of data collected from the research process are often viewed as an obstacle to collating existing data for reuse. It may seem that when research "data" are shared, there is a risk of eroding the trust initially established in the researcher-participant relationship. With the GDPR gaining so much recent attention in the world of research, there is some confusion between the roles that ethical, legal, and moral duties play and represent, and how the principles of open science can mesh with the concept of open data. This chapter aims to help researchers distinguish these concepts and presents practical pathways that can enable safe data sharing. Robust and trusted governance frameworks that speak to research integrity and ethical duties are at the heart of the solution.

Data sharing relies on complying with ethical and legal expectations, but it also relies on trust on all sides, with data creator, curator, and future users working together to ensure ethical treatment of the research and its outputs. This includes protecting data subjects and their evidence as well as giving them voice. There are additional challenges for working with non-research-derived sources, such as administrative, transactional, smart meter, or social media data, where various protections that usually apply in the traditional social research data life cycle may not have taken place and the potential risk of disclosure increasingly comes with the capacity to link multiple data sources.
With the pressure of research transparency and replicability high on the research agenda, the chapter considers what practical steps can be taken to capture and document primary data, including its ethical provenance. Though the focus of this chapter is mainly on the UK and Europe, the issues raised have general applicability.
Ethical Issues and Data Sharing

Collecting, using, and sharing data in research conducted about people require that ethical and legal obligations be respected. Most countries have laws that cover the use of data, but the scope of these laws varies widely. In addition to the legal requirements when working with human subjects, every researcher is expected to maintain high ethical standards. Every academic discipline has its own tailored guidelines produced by professional bodies and relevant funding organizations. Most institutions are also signed up to a code of conduct for researchers that ensures research integrity is maintained by its researchers (UK Research Integrity Office 2018). In the aftermath of the Cambridge Analytica scandal involving inappropriate sharing of social media data, it is even more urgent that the research community complies with laws and upholds ethical standards (University of Cambridge 2015; Doward et al. 2017).
406
L. Corti and L. Bishop
The main topics most relevant to reusing and sharing data are informed consent, confidentiality, safe handling of data, and appropriate governance strategies. Following best practices and appropriate protocols, even sensitive and confidential research data can be shared legally and ethically.
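To make the idea of disclosure risk assessment (treated later in this chapter) concrete, one widely used heuristic is k-anonymity: a dataset is k-anonymous if every combination of quasi-identifying attributes is shared by at least k records. The sketch below is purely illustrative and is not drawn from the chapter or from any archive's actual tooling; the dataset, field names, and choice of quasi-identifiers are hypothetical.

```python
# Illustrative sketch: computing the k-anonymity of a toy survey dataset.
# A record whose quasi-identifier combination is unique (k = 1) is at
# higher risk of re-identification, especially if other datasets can be
# linked on the same attributes.
from collections import Counter

def k_anonymity(records, quasi_identifiers):
    """Return the smallest group size when records are grouped by their
    combination of quasi-identifier values (higher is safer)."""
    groups = Counter(
        tuple(record[q] for q in quasi_identifiers) for record in records
    )
    return min(groups.values())

# Hypothetical survey extract.
survey = [
    {"age_band": "30-39", "region": "North", "income": 21000},
    {"age_band": "30-39", "region": "North", "income": 34000},
    {"age_band": "40-49", "region": "South", "income": 28000},
]

k = k_anonymity(survey, ["age_band", "region"])
print(k)  # prints 1: the 40-49/South respondent is unique in this extract
```

A data curator might use a check of this kind to decide whether categories need coarsening (e.g., wider age bands) before deposit, which is one of the trade-offs between detail and safety that the chapter's later sections on anonymization discuss.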
Ethical Frameworks for Research

Most researchers will confront ethical considerations in practice when writing a research proposal, applying for ethics approval, or having to deal with ethics dilemmas that arise during a research project. Fortunately, there are many excellent publications and guidance that set out the core principles of ethical conduct. For example, the UK's core funder of social science research, the Economic and Social Research Council (ESRC), outlines key principles for UK social science research ethics: research should maximize benefit for individuals and society and minimize risk and harm; respect the rights and dignity of individuals and groups; ensure that participation is voluntary and informed; be conducted with integrity and transparency; clearly define lines of responsibility and accountability; and be independent, avoiding conflicts of interest (ESRC 2015).

Many funders and professional bodies also publish their own disciplinary codes of conduct that cover ethical principles. However, many still fail to cover issues around handling data and more specifically data sharing. In this respect they are falling behind the narrative of the open science agenda, largely because of embedded research traditions, for example, in recommending the destruction of data following a research project's completion. Whether this is a legal or ethical issue is hard to ascertain, but it can be tricky for the researcher to know how to do research with integrity when faced with the dilemma of data sharing mandates from their funder and data destruction recommendations from their professional body. Researchers must be ready to question older and sometimes out-of-date guidance and consider how they can best meet funders' and journals' policies that are driving data sharing.
There is a range of persuasive ethical arguments for the extended use of data beyond the original project, for example, those clearly set out in a recent UK patient data campaign that seeks to persuade the public of the benefits of sharing research data to improve the nation's health outcomes, as discussed later in this chapter. The core principles of research ethics that have a direct bearing on sharing or archiving research data include informed consent, promises of anonymity, and future uses of data. Before these are introduced, a brief scan of the relevant legal landscape is provided.
22 Ethical Issues in Data Sharing and Archiving

Relevant Legal Frameworks for Data Sharing

While legal frameworks differ from country to country, in the UK there are a few important legislative and common law constraints that, together with ethical considerations, have an impact on data sharing. Most researchers will be familiar with the values of the European Convention on Human Rights (ECHR), an international treaty that sets out the fundamental rights needed to live a free and dignified life, including the right to respect for one's private and family life, home, and correspondence (European Court of Human Rights 1998). Most countries have legislation that covers the use of personal and organizational information and national data sources. In Europe, a cross-national approach has been taken, embodied in the European Union Data Protection Directive, adopted in 1995, and, since 2018, the EU General Data Protection Regulation (GDPR), which harmonizes the inconsistent implementation and enforcement of data protection legislation across EU member states. From 1995, EU member states were required to bring their national laws into compliance with the original Directive, and the UK adopted its Data Protection Act in 1998. In Germany, considered to have the strictest data protection regulation in the world, privacy is a constitutional right under the Volkszählungsurteil of 1983, and the Federal Data Protection Act, the Bundesdatenschutzgesetz, formulates legal requirements to inform people of the purposes of collecting, processing, or using personal data and stipulates that consent must be given in written form. In other European countries, the nature of protection varies greatly, with some countries that are not EU members following the GDPR and others having very few protections in place. In the USA, there is no similar comprehensive legislation, but there are sector-specific laws that define how personal information may be used in particular areas, such as medical and credit records, with more reliance on self-regulation (Singer 2013). A comprehensive list of privacy and data protection legislation in countries around the world is provided by Kruse and Thestrup (2017).
The General Data Protection Regulation (GDPR)

The GDPR and a new Data Protection Act came into force in the UK in 2018 (Gov.UK 2018). From the research perspective, it is anticipated that the GDPR will bring greater accountability and transparency to those who process personal data. Personal data under the GDPR means: "information relating to natural persons who can be identified or who are identifiable, directly from the information in question; or who can be indirectly identified from that information in combination with other information". "Personal data may also include special categories of personal data or criminal conviction and offences data. These are considered to be more sensitive and you may only process them in more limited circumstances. Pseudonymised data can help reduce privacy risks by making it more difficult to identify individuals, but it is still personal data". (Information Commissioner's Office 2018)
What does not count as personal data is also important to understand. Information about deceased persons, companies, or public authorities does not constitute personal data and is not subject to the GDPR; and once personal data are fully anonymized, the anonymized data are also no longer subject to the GDPR (Information Commissioner's Office 2018).
408
L. Corti and L. Bishop
The new legislation offers enhanced rights to individuals whose data are being processed, and at the same time, the various safeguards that it mandates reflect current research integrity and best practice. The ESRC provides a concise summary of relevance to social research and highlights some key points (ESRC 2018). The first is that research can be viewed as "a task in the public interest." Research organizations need a lawful basis to collect, use, or store personal data, and also when processing special categories of personal data, for example, on health or ethnicity. The GDPR recognizes that this can be necessary for scientific research purposes, in accordance with safeguards. The key points here are, first, that participants in research can be assured that research organizations will use their data for the public good and that their privacy will be protected and, second, that the GDPR recognizes the value of research data that can be kept for the long term, so that important collections do not need to be destroyed. Long-term retention must be justified, and the decisions documented and periodically reviewed. Under this clause, data can be used for multiple research purposes regardless of the initial reason for their collection. Finally, various technical and organizational measures need to be in place, focusing on robust security and access systems and on storage in pseudonymized or anonymized form where possible. The robust research governance and ethics systems already in place in academia are well placed to meet these requirements.
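The pseudonymization safeguard mentioned above can be illustrated with a short sketch: direct identifiers are replaced with random codes, and the key linking codes back to people is stored separately under stricter access controls. This is a minimal illustration only, not a prescribed GDPR procedure; the record structure and field names are invented.

```python
import secrets

def pseudonymize(records, id_field="name"):
    """Replace a direct identifier with a random pseudonym.

    Returns (pseudonymized_records, key_map). The key map should be
    stored separately from the research data, under stricter access
    controls, so that the data alone no longer identify participants.
    Field names here are illustrative, not from any standard.
    """
    key_map = {}
    out = []
    for rec in records:
        rec = dict(rec)              # work on a copy; keep the input intact
        real = rec.pop(id_field)
        if real not in key_map:
            key_map[real] = "P" + secrets.token_hex(4)
        rec["pseudonym"] = key_map[real]
        out.append(rec)
    return out, key_map

records = [{"name": "Ann Smith", "age": 34},
           {"name": "Ann Smith", "age": 34}]
pseudo, keys = pseudonymize(records)
```

The same participant always receives the same pseudonym, so records remain linkable for analysis while the identifying key can be destroyed or locked away when no longer needed.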
Statistics and Registration Services Act 2007

The Statistics and Registration Services Act 2007 covers the structure and function of the UK Statistics Authority and applies only to data designated as Official Statistics (The National Archives 2007). Data access is a statutory function of the Statistics Authority, and the Act defines the legal gateways under which personal information can be disclosed. The Act allows disclosure of personal information to an "Approved Researcher" for the purposes of statistical research and states that disclosure of personal information outside of the legal gateways is a criminal offense. While the Act does not apply to individual researchers managing confidential research data not designated as Official Statistics, researchers might wish to adapt the Approved Researcher model for access to highly confidential data, as, for example, the UK Data Service has done for access to ESRC-funded data (see the Five Safes below). In the UK, there is additional international and national legislation that may have an impact on the sharing of confidential data. There are two further "rules" relevant to data collection and sharing in research: the first is the common law duty of confidentiality, and the second is the Chatham House Rule.
Duty of Confidentiality

In research, in addition to robust data protection and ethical frameworks, the common law duty of confidentiality is important to consider when information is gained through research. The common law only applies to information that is not
already in the public domain and when information is offered to a third party in confidence. Where an explicit statement of agreement has been captured, such as via a consent form, this could constitute a contract, and it need not be in writing. Subsequent disclosure of the information could constitute a breach of the duty of confidentiality and, in law, possibly a breach of contract. For research, it is important to think carefully about what one is promising before fieldwork starts and whether this can be upheld. Information sheets and consent forms offer an opportunity to set out what kinds of information will and will not be shared. The duty of confidentiality is not absolute and is not protected by legal privilege; researchers may be required to hand over research data in response to a court subpoena or to the police as part of an ongoing investigation.
Chatham House Rule

The world-renowned Chatham House Rule can be invoked at meetings to encourage openness and the sharing of information. Chatham House, home of the Royal Institute of International Affairs in the UK, is a think tank whose aim is independent analysis of, and informed debate on, critical global, regional, and country-specific challenges. The Chatham House Rule is used to help facilitate free speech and confidentiality at meetings. The Rule states that: When a meeting, or part thereof, is held under the Chatham House Rule, participants are free to use the information received, but neither the identity nor the affiliation of the speaker(s), nor that of any other participant, may be revealed. (Chatham House 2018)
For a researcher, this means that anything recorded or minuted during fieldwork under the Chatham House Rule needs to be flagged as such, with a clear record of where the Rule applies to the recording or minutes of a meeting, so that the speaker's request is upheld. When it comes to data sharing, it is important not to share such detail; best practice is to share anonymized notes or transcriptions, or to return to the participants to discuss what can be shared.
Informed Consent and Data Sharing

In accordance with good ethical conduct, researchers are usually expected to ensure that every person from whom data are gathered for the purposes of research consents freely to participate on the basis of sufficient information. Thus consent should be valid, competent, informed, and voluntary. Further, participants must, during the data gathering phases, be able to freely withdraw or modify their consent and to ask for the destruction of all or part of the data that they have contributed. (See ▶ Chap. 12, "Informed Consent and Ethical Research," in this Handbook.)
Thinking about the data and their status, researchers should clearly convey what will happen to the outcomes of research, e.g., publications, and how confidentiality will be maintained where this is required, as well as, under the GDPR, how personal data will be treated or processed. Researchers can be unsure where to put this information: in the consent form or in an information sheet. Statements about outputs, including uses of anonymized data, can often be handled in a good information sheet rather than by adding to the consent form. While more recent best practice has been to advise researchers not to use consent forms that preclude "data sharing," such as by promising to "destroy data" unnecessarily, the GDPR has offered some clarity on the meaning of "data" in these cases. Where published research outputs and anonymized data outputs for reuse are expected, consent for future use is likely not needed, but informing participants can and should still be part of good ethical conduct. Where personal information will be passed on, explicit consent must usually be gained unless there are legal grounds for accessing personal data. The UK's Medical Research Council offers useful advice on when consent should be sought for medical research (MRC 2018). It notes that in medical research, data are usually anonymized before sharing, ensuring that adequate controls are in place to prevent (re)identification. Identifiable information can only be shared beyond the original collection purpose if there is a legal route for disclosure, if the sharing is what research participants would have expected, and if the amount and sensitivity of the information are minimized. Participant expectations are managed through the (explicit) consent process, with sufficient and transparent information about intended sharing being made available at the outset.
Any onward use of personal information also requires appropriate information security measures and safeguards, set out later in this chapter. For longitudinal studies, consent to participate in each new data collection is normally gained, especially where material such as blood samples and/or administrative data will be linked. Informed consent to participate in research, or for the collection and handling of personal data, can be obtained in writing, using a detailed consent form, by means of an informative statement, or verbally, depending on the nature of the research and on what data will be gathered and how. In the UK, a printed information sheet and a signed consent form are recommended to assure compliance with the GDPR and with the ethical guidelines of professional bodies and funders. Verbal agreement can be audio recorded after the information sheet details have been read out. Putting the last example into context: for surveys where personal identifiers such as people's names are not collected and are not included in the digital data file, or where responses have been aggregated, explicit written consent for onward data handling and sharing does not need to be gathered, but consent to participate might. A good information sheet will set out the purpose of the survey, with a clause stating that an individual's responses will not be used in any way that would allow his or her identification. The information sheet should also detail the nature and scope of the study, the identity of the researcher(s), and what will happen to the data collected, including any data sharing.
In qualitative or experimental research, video data with recognizable faces and voices must be treated as personal data and so would require explicit permission for onward sharing. Corresponding interview transcripts, if they can be fully redacted, would not need explicit written consent to share, but an information statement can be used to describe the outcomes that will be released from the research. If anonymization is not possible, then various mitigating safeguards around data access can be explained to participants. A range of template consent forms and appropriate wording is available from sources such as the NHS Health Research Authority (2018) and the UK Data Service (2018).
When to Seek Consent

As with all aspects of consent, the timing of when to seek it should balance the need for consent to be informed with careful consideration of when participants are best able to make decisions. Consent needs, eventually, to address not only research participation but also the dissemination and sharing of any personal data. Depending on the nature of the research, consent can either be a one-time, one-off occurrence or a more ongoing process.

One-off consent is simple and practical and avoids possibly burdensome requests to participants for re-consent. Consent is gained early in the research process and covers all aspects of participation and use of personal information. For research where no confidential or sensitive information is gathered, for example, in a simple one-off survey, or where there will be only a single contact with the participant, this is usually sufficient and most practical. When research is more exploratory, when not all data uses, research outputs, and methods are known in advance, or for longitudinal research, a more process-oriented approach is preferable.

Process consent is considered throughout the research cycle and assures active informed consent from participants. This is generally recommended, such as by the ESRC's Framework for Research Ethics (2015), and is especially important in a research design that involves more than one point of contact with a participant. Consent for various uses of personal data, such as linking a survey to administrative records, can be sought each time a new linkage is performed. Longitudinal studies, such as the UK Millennium Cohort Study (MCS), which charts the social, economic, and health advantages and disadvantages facing children born at the start of the twenty-first century, use an approach to consent that is explicit and consistent (University of London 2018). The MCS sought informed parental consent from the outset (Shepherd 2012).
Letters and leaflets sent in advance of the surveys summarize what participation in the survey will involve, and written consent is sought from parents for their participation (interview) and the participation of their child(ren) (e.g., assessments, anthropometric
measurements, collection of oral fluids and saliva, linking of administrative data on education and health, teacher surveys). Where parents give consent to the participation of their child(ren) in one or more elements of a survey, the inclusion of the child(ren) also requires their own agreement and compliance. Parents were not asked to consent on behalf of the child but were asked for their permission to allow the interviewer to speak to the child and ask for the child's consent to participate in each element. Additionally, for linking of health records to the study data, around 92% of cohort members' parents consented to linking survey data with birth register and/or hospital records, and maternity data were obtained from 92% of the cohort mothers (Tate et al. 2006). Research datasets from the MCS are curated by the UK Data Service. Finally, a risk of the process approach to obtaining consent is overburdening respondents with repeated consent requests. The Open Brain Consent project provides useful sample consent forms and information sheets, written with data sharing in mind, for studies using magnetic resonance imaging (MRI) (Open Brain Consent 2018).
Informed Consent and Unknown Future Use

Data sharing can raise questions around the requirement that consent to participate in a study be fully "informed." Where the intention is to share anonymized research data with future researchers, specific uses cannot be known in advance. However, participants can be informed about some of the likely ways in which their information will be used and by whom. Researchers can provide positive examples to participants of how similar data have been reused in the past and can inform them about possible options for limiting the use of any personal or sensitive data. For example, informed consent is gathered for taking and storing human tissues in biobanks, where future uses may be unknown. Broad consent, as it is known, has produced high rates of participation (over 99%), such as for the Wales Cancer Bank (2007). There is still an overriding presumption on the part of researchers that participants want their data destroyed or kept confidential and inaccessible. However, evidence suggests that they are often willing to have their data shared with other researchers if appropriate pseudonyms and other protections are provided. A study in Finland by the national social science data archive found that participants were far more open to future uses of data by others than researchers often believe (Kuula 2011). Participants from previous studies were contacted to see whether they might agree to sharing their interviews, and issues about the research, archiving, and terms of future use of the data were discussed. The team was able to find and recontact 169 research participants, of whom 98% (165) agreed to archive their data; only four did not accept the idea of archiving. They stated that they had initially participated in the research because they had thought the subjects of the interviews were worth studying and that they wished to help advance science.
Others were slightly irritated by the recontact because they already believed that any future and as yet unknown use of the data by researchers would not conflict in any way with
their original decision to participate in the research. All the datasets included unique and personal stories and occasionally sensitive experiences about the issues at hand. The UK Caldicott report sets out comprehensive recommendations on sharing patient data (Caldicott 2017). Following this, some excellent advocacy videos have been made to promote the benefits of using patient data to improve health outcomes (Imperial College 2018; Understanding Patient Data 2018).
Sharing Data Where No Consent for "Data Sharing" Was Gathered

A common challenge arises when a researcher wants to share research data from a project conducted many years ago, but consent to participate in the research was either not gathered (it was not required at the time, although this would be considered unethical today), or the consent form used highly restrictive language, such as "No one else other than the research team will see the data." Almost no data collector in the social sciences, certainly in Europe before 2000 or so, gained explicit consent to share or reuse "data," and the definition of "data" here is blurry. Many surveys, even in recent years, do not ask for explicit consent to share their "anonymized data" or findings; rather, it is the act of taking part and the trust placed in the organizers not to reveal personal data that have been key to keeping high ethical and legal standards in social and biomedical research. Thus, for many legacy data collections, consent forms recording an explicit agreement to participate in research do not exist. However, information sheets that informed participants about the nature of the study and gave assurances of confidentiality, which were at the time legally and ethically acceptable, often do exist. Having this documentation available is important when assessing whether to make older study data available for future use. Examining this situation in closer detail, and assuming the term "data" refers to "personal data" and not anonymized data, then under personal data legislation (e.g., the GDPR and the duty of confidentiality), there is a legal and ethical responsibility to safeguard the information; future requests would need to be carefully reviewed and assessed, with all the mitigating ethical and legal safeguards required under the GDPR put in place. The important factor here is ensuring that any duty of confidentiality (DoC) made at the time of the original research is upheld.
Where participants are deceased, cannot be contacted, or the resources required to contact them would be too costly, one can apply a "reasonable expectations" test (e.g., via a research ethics review committee) to determine whether a new intended use is in keeping with the broader aims of the study and/or in the public interest. Restrictive legacy forms can then be reinterpreted so that future approved research, under suitable safeguards, becomes an extension of the broader scientific endeavor of the project, on the grounds that respondents would likely not object to the extended use of the data as long as the DoC is upheld and safeguards are in place. With appropriate de-identification methods and appropriate safeguards for access, legacy research data can be made available.
Access to Data: Safeguards

When data are to be shared, consideration needs to be given to with whom, for what, where, and how. The ethical and legal frameworks help guide decisions about whether data need to be anonymized and, in the case of personal data, about putting legal gateways and safeguards in place. Where there is some lower degree of identification risk in data, there are also strategies for restricting access, using closure and embargoes, plus safeguards. The three solutions discussed here are risk and privacy assessment, anonymization, and access control using appropriate governance frameworks.
Disclosure Risk Assessment

As noted earlier, personal information should never be disclosed unless a participant has given specific consent to do so, ideally in writing. In some forms of research, for example, where oral histories are recorded or in some anthropological research, it is customary to publish using the names of the people in the studies, for which they have given their consent. Disclosure occurs when a person or an organization uses published data to find and reveal sensitive or unknown information about a data subject. A person's identity can be disclosed from direct identifiers, such as names, addresses, postcodes, telephone numbers, email addresses, IP addresses, or pictures, or from indirect identifiers which, especially in combination with other publicly available information sources, could identify someone, such as information on workplace or residence, occupation, or exceptional values of characteristics like salary or age. Demographic information can be particularly revealing when combined, for example, a detailed employment description together with a very localized geographic area. Further, text and open-ended variables in surveys can contain detailed unique information. Sensitive information, such as health or religious status or information relating to crime, drugs, and so on, also needs to be considered. Where there is no identifying information in a dataset and it cannot be combined with other sources, such variables are not in themselves risky. Once assessed, data can be either treated or access controlled. Data publishers aim to minimize the potential risk of disclosure to an appropriate level while sharing as much data as possible. However, the risk is likely never zero, and best practice is to minimize it. Much research data shared with onward researchers has safeguards placed upon it, including legal agreements signed by users, for example, not to disclose any confidential information.
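A simple way to screen for the combination risk described above is to count how many records share each combination of indirect (quasi-)identifiers; combinations held by very few records are candidates for treatment or restricted access. The sketch below, with invented field names and data, illustrates the idea in the spirit of a k-anonymity check; real disclosure control uses richer methods.

```python
from collections import Counter

def risky_records(rows, quasi_identifiers, k=3):
    """Return rows whose combination of indirect identifiers is shared
    by fewer than k records; a basic uniqueness screen in the spirit
    of k-anonymity. Field names are invented for illustration."""
    combos = Counter(tuple(row[q] for q in quasi_identifiers)
                     for row in rows)
    return [row for row in rows
            if combos[tuple(row[q] for q in quasi_identifiers)] < k]

rows = [
    {"occupation": "teacher", "area": "NW1", "age_band": "30-39"},
    {"occupation": "teacher", "area": "NW1", "age_band": "30-39"},
    {"occupation": "teacher", "area": "NW1", "age_band": "30-39"},
    {"occupation": "surgeon", "area": "NW1", "age_band": "50-59"},
]
# The surgeon's combination is unique, so that record is flagged as risky.
flagged = risky_records(rows, ["occupation", "area", "age_band"])
```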
Where research data are to be widely shared, confidentiality is usually maintained through de-identification or anonymization. This may be for ethical reasons or for legal reasons, for example, protecting confidential, sensitive, or illegal information, disguising a research location, or avoiding disclosure of personal data under the GDPR or reputational damage, e.g., libel and slander, or for commercial reasons, such as avoiding revealing company secrets or patented information. Where legal gateways are in place, disclosive information can be shared, such as when using administrative or register data for research.
Privacy assessment strategies therefore need a plan for striking a balance between removing or replacing disclosive information and retaining as much meaningful information as possible, that is, balancing risk with utility. A Data Protection Impact Assessment (DPIA) can help identify and minimize the data protection risks of a project and is required for processing personal data that is likely to result in a high risk to individuals (Information Commissioner's Office (ICO) 2018a). The ICO supplies useful questions and guidance, identifying factors such as: assessing the nature, scope, context, and purposes of data processing; appraising the necessity, proportionality, and compliance measures; identifying and assessing risks to individuals; and identifying measures to mitigate those risks. The UK Anonymisation Network (UKAN) also offers practical guidance on anonymization strategies, with worked-through examples, aiming to be less technical than the statistics and computer science literature (Elliot et al. 2016). In the USA, a team at Harvard introduced the idea of "DataTags" as a means of clearly identifying handling and access requirements for numeric data. Using a decision-making tree, the system assigns "tags" that encode levels of handling and sharing restrictions. A set of six tags covers options from data having no risk to data requiring maximum protection (Sweeney et al. 2016). Like the Five Safes framework discussed later, this model offers a useful framework for research labs, research repositories, government repositories, multinational corporations, and institutional review boards. The system has been used to classify medical and educational data.
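The decision-tree idea behind DataTags can be sketched as a few yes/no questions leading to a handling level. The three levels and questions below are invented for illustration only; the actual system (Sweeney et al. 2016) uses six tags and a much richer questionnaire.

```python
def assign_tag(has_direct_identifiers, has_indirect_risk, consent_for_sharing):
    """Toy decision tree inspired by the DataTags idea: walk a few
    yes/no questions and return a handling level. Levels and
    questions are illustrative, not the real six-tag system."""
    if not has_direct_identifiers and not has_indirect_risk:
        return "open"        # e.g., public download, open licence
    if consent_for_sharing and not has_direct_identifiers:
        return "controlled"  # e.g., registered users, end-user licence
    return "restricted"      # e.g., secure environment, approved researchers

assign_tag(has_direct_identifiers=False,
           has_indirect_risk=True,
           consent_for_sharing=True)
```

The value of such a tree is less in the code than in forcing an explicit, documented decision for every dataset, which supports the transparency and accountability goals discussed in this chapter.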
Anonymization

Direct identifiers are often collected as part of the research administration process but are usually not essential research information and can therefore easily be removed from the data and dealt with separately, following the GDPR requirements. Otherwise, data need to be assessed for the risk of disclosure. Terminology in this field can be confusing. De-identification refers to a process of removing or masking direct identifiers in personal data, while anonymization involves reducing the risk of somebody being identified in the data to a negligible degree. Invariably this means doing more than simply de-identifying data and often requires that data be further altered or masked. Depending on the nature and complexity of the data, e.g., if collected over time, the process can be time-consuming. Extensive detailed advice about anonymization procedures is available, for example, that published by the UK ICO (Information Commissioner's Office 2018b) and the UK ONS (Office for National Statistics 2018a). Anonymization techniques for quantitative data may involve removing or aggregating variables or reducing the precision or detailed textual meaning of a variable. Special attention may be needed for relational data and sources that could be linked, where connections between variables in related datasets can disclose identities (Corti et al. 2014). Decisions need to be made around the utility of the data. For example, year of birth can be used rather than the day, month, and year to help protect against
identification, yet some studies might need exact age, so compromises on other identifying variables may need to be made. It is important to document such changes, so that users of the data understand what has been done to the original data. Software exists to help with assessing risk in microdata files, usually by identifying unique patterns of values for selected variables. Examples include the free R package sdcMicro; μ-Argus, a software package recommended by Eurostat for government statisticians; and ARX (Templ et al. 2018; Statistics Netherlands 2018; ARX 2018). Qualitative or textual data sources that are to be shared, such as transcribed interviews and textual or audiovisual data, should be anonymized when this was promised during the informed consent process. The process is often bespoke and less open to automation. Pseudonyms or more general descriptors can be used to disguise identifying information, or small, especially problematic segments can be completely removed. This is preferable to blanking out large parts of the information, which reduces the historical value quite significantly. Discussing and agreeing with participants during the consent process what may and may not be recorded or transcribed can be a much more effective way of creating data that accurately represent the research process and the contribution of participants. Consistency in any approach across the research team and throughout the project is important, as is noting any redaction within the text and keeping a record of replacements as part of the documentation. Techniques are available to partially automate some of the most straightforward processes of anonymizing text, for example, word processing macros or algorithms (e.g., using named entity recognition) to identify regular expressions, but these are usually bespoke, as subject areas vary so much from one research study to another. "Search and replace" techniques should be used with care so that unintended changes are not made.
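Two of the techniques above, coarsening a date of birth to year only and replacing names in transcripts with pseudonyms while keeping a replacement log, can be sketched as follows; the names and interview text are invented examples.

```python
import re

def dob_to_year(dob):
    """Coarsen an ISO date of birth ("1983-07-21") to year only."""
    return dob.split("-")[0]

def pseudonymize_text(text, names, log):
    """Replace each real name with a numbered pseudonym, recording
    every replacement in `log` so the project keeps the auditable
    record of replacements recommended in the text."""
    for i, name in enumerate(names, start=1):
        pseudonym = f"[Participant {i}]"
        text, n = re.subn(re.escape(name), pseudonym, text)
        if n:
            log.append((name, pseudonym, n))
    return text

log = []
cleaned = pseudonymize_text(
    "Maria said she worked with John at the clinic.",
    ["Maria", "John"], log)
```

As the chapter cautions, blind search-and-replace needs care (a participant named "Art" would clobber the word "article"), which is why a replacement log and a manual review pass remain essential.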
Audiovisual data (audio, video, images) are much harder to anonymize, and this should be done sensitively. While real names or place names can be bleeped out, voices altered by changing pitch, or faces obscured by pixelating an image, these measures can reduce the usefulness of the data for research. Further, they are highly labor-intensive and expensive. If the confidentiality of audiovisual data is an issue, it is better to obtain the participant's consent to use and share the data in an unaltered form, or to use legal gateways to enable controlled access to the data. MRI data are routinely used in psychology and cognition studies and can be revealing, so some kind of de-identification is needed to protect the privacy of the subject. In MRI images, certain features such as the eyes, nose, or mouth could lead to the recognition of a familiar individual. A process of "defacing" uses algorithms to disguise full facial features; tools recommended by the OpenfMRI project are Pydeface and mri_deface. Other accompanying identifying attributes are likely to contain medical details, which can be removed or generalized, as above, using tools such as DEID and the MITRE Identification Scrubber Toolkit. Social science data archives provide training on specialist types of research approaches, particularly in complex areas such as consent, data formats, copyright, anonymization, and description of methodology (UK Data Service 2018b; Finnish Social Science Data Service 2018a).
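The removal or generalization of identifying attributes that accompany imaging data can be illustrated with a small sketch. The field names and rules below are invented for illustration; real projects would use the dedicated scrubbing tools mentioned above rather than hand-rolled code.

```python
# Invented field names and rules; real pipelines use dedicated tools.
IDENTIFYING = {"patient_name", "address", "exact_dob"}
GENERALIZE = {"exact_dob": lambda v: v[:4]}   # keep year of birth only

def scrub(metadata):
    """Drop direct identifiers and generalize dates in a metadata record."""
    out = {}
    for field, value in metadata.items():
        if field in GENERALIZE:
            out["dob_year"] = GENERALIZE[field](value)  # generalized value
        elif field in IDENTIFYING:
            continue                  # remove the identifier entirely
        else:
            out[field] = value        # keep non-identifying research fields
    return out

record = {"patient_name": "A. Example", "exact_dob": "1983-07-21",
          "scan_type": "T1w"}
clean = scrub(record)
```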
22
Ethical Issues in Data Sharing and Archiving
417
Governance Strategies for Sharing Data

While there remains wide scope for using different models of data access depending on the study and data type, models that support transparency (making information on how researchers can access data publicly available) and accountability (having clear data access mechanisms and responsibility for decision-making) are preferable. Public bodies providing services, for example in health, have their own regulatory protocols that also cover access to data for research purposes. The UK's NHS Health Research Authority (2018a) offers useful guidance on governance arrangements for Research Ethics Committees. The Expert Advisory Group on Data Access (EAGDA) report on Governance of Data Access recommended that the process for seeking and gaining access to study datasets should be readily available to prospective users, together with a range of criteria that helped reduce the burden on researchers (Expert Advisory Group on Data Access 2015). These recommendations were especially welcomed by users of longitudinal and cohort resources and spurred data owners to improve their published access information and processes. However, differential data access and governance models still exist across the UK's longitudinal research, including cohorts and clinical trials, very much defined by the source of funding and the cultural values and practices of their constituent disciplines.
For example, ESRC-funded studies like the Millennium Cohort Study tend to provide an independently archived dataset that is free at the point of access, with trust established through authorization and project approval, whereas cohorts supported by the Medical Research Council (MRC) and the Wellcome Trust (such as the Avon Longitudinal Study of Parents and Children, ALSPAC) primarily allow access to bespoke datasets on a case-by-case basis, using a cost-recovery model and often restricting access to "bona fide researchers" and collaborators. Such access models tend to be deeply ingrained, their origins embedded in the often long history of each study. The potentially disclosive nature of small study populations makes re-identification a real risk, so the trust models used are much tighter than those used for access to national studies. In summary, governance mechanisms vary in access policies and arrangements, have different sustainability models (such as whether or not a centralized data repository is used), and may place a burden on the user in terms of cost and researcher accreditation. The Five Safes framework for safe access to data, introduced below, meets a wide range of ethical and legal constraints and could usefully be deployed across all areas of data provision to create a more level playing field. Many large data infrastructure resources have established secure research environments, such as data safe havens, secure labs, and safe rooms. Accreditation varies across these environments, but a common model could increase efficiency and the portability of a recognized bona fide status, as calls increase for governance panels and data access committees to be more transparent in their processes and procedures. For many social science data archives, such as the UK Data Service, most data collections are not in the public domain. Their use is restricted to specific
purposes after user registration. Users agree to an end-user license, which has contractual force in law, in which they accept certain conditions, such as not disseminating any identifying or confidential information on individuals, households, or organizations, and not using the data to attempt to obtain information relating to an identifiable individual. Thus users can use data for research purposes but cannot publish or exploit them in a way that would disclose the identities of people or organizations (UK Data Service 2018a). For confidential data held by the UK Data Service, additional access controls may, in discussion with the data owner, be imposed:
• Requiring specific authorization from the data owner to access the data
• Placing confidential data under embargo for a given period of time until confidentiality concerns are no longer pertinent
• Providing access to approved researchers only
• Providing secure access to data by enabling remote analysis of confidential data but excluding the ability to download or take data away
Access controls should always be proportionate to the kind of data, the level of confidentiality involved, and an assessment of the risks of disclosure. Mixed levels of access control may be put in place for some data, combining standard access to nonconfidential data with controlled access to confidential data through safe havens, as described in the Five Safes section below.
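The proportionality principle can be sketched as a simple decision rule. The three tier names follow the open/safeguarded/controlled model commonly used by data services; the rules themselves are invented for illustration and any real assessment would involve a data owner's judgment:

```python
def access_route(contains_personal_data, consent_for_sharing, disclosure_risk):
    """Pick a proportionate access route for a dataset (illustrative rules only)."""
    if not contains_personal_data and disclosure_risk == "low":
        return "open"         # no registration needed
    if consent_for_sharing and disclosure_risk in ("low", "medium"):
        return "safeguarded"  # registered users under an end-user license
    return "controlled"       # safe-haven access for approved researchers only

print(access_route(False, True, "low"))    # open
print(access_route(True, True, "medium"))  # safeguarded
print(access_route(True, False, "high"))   # controlled
```

The point of the sketch is the ordering: the strictest route is the fallback, so a dataset is only released more openly when every less restrictive condition is positively met.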
Safe Access to Data: The Five Safes Framework

Where data are not deemed "safe" (e.g., they are not open, or would need to be de-identified or anonymized), researchers can be granted access to detailed disclosive data within a controlled environment. Data safe havens or secure labs provide trusted researchers with controlled access to sensitive or confidential data, enabling them to access and use datasets in a secure and responsible way. A model of remote access to restricted social and economic data is also used, providing a confidential, protected virtual environment within which authorized researchers can access sensitive microdata, rather than requiring researchers to visit secure sites in person (Foster 2018; Lane et al. 2008). These secure access services allow researchers to analyze data remotely from their institution, on a central secure server, with access to familiar statistical software and office tools. No data travel over the network; the user's computer becomes a remote terminal, and outputs from the secure system are only released after statistical disclosure checks. Limited data, often from business, social, and biomedical surveys, are available with detailed geographic information, such as postcode-level variables or other fine-grained information. Access to these restricted data is provided through an "approval" model, in which researchers have prospective projects approved (for ethics and feasibility) and undergo the training, support, and advice necessary to manage and analyze the data in the safe haven, so as to maximize
research outputs while protecting respondents' privacy. Based on trust, a two-way commitment is upheld through user agreements and breach penalties. The UK Data Service and the Office for National Statistics (ONS) call this security philosophy the "Five Safes" framework, a set of principles that enable data services to provide safe access to data for research, meeting the needs of data protection but also fulfilling the demands of open science and transparency. Five Safes has been adopted by a range of secure labs, including the Office for National Statistics (UK Data Service 2017a; Office for National Statistics 2017). The five simple protocols provide complete assurance for data owners and researchers by ensuring the following "Safes":
• Safe data: data are treated to protect any confidentiality concerns.
• Safe projects: research projects are approved by data owners for the public good.
• Safe people: researchers are trained and authorized to use data safely.
• Safe settings: a secure lab environment prevents unauthorized use.
• Safe outputs: outputs are screened and approved as non-disclosive.
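The five "Safes" function as jointly necessary conditions: access is appropriate only when all are met. A minimal sketch of such a gatekeeping check (the dimension names come from the framework; the code structure is our illustration):

```python
FIVE_SAFES = ("safe_data", "safe_projects", "safe_people",
              "safe_settings", "safe_outputs")

def grant_access(assessment):
    """Grant access only when every one of the Five Safes is met;
    otherwise report which dimensions still need attention."""
    failed = [s for s in FIVE_SAFES if not assessment.get(s, False)]
    return (len(failed) == 0, failed)

ok, failed = grant_access({"safe_data": True, "safe_projects": True,
                           "safe_people": True, "safe_settings": True,
                           "safe_outputs": False})
print(ok, failed)  # False ['safe_outputs']
```

In practice the dimensions interact (a very secure setting may justify less heavily treated data, for example), so real assessments weigh the Safes together rather than ticking boxes independently.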
In this model, researchers have access to personally identifiable data in a controlled environment, under an appropriate legal gateway; they agree to conditions for handling personal data and the associated breach penalties, undertake training to become an "Approved Researcher," and their host institution acts as guarantor on their behalf. Projects are approved using review panels or access committees run by the data owner. A decision-making framework is set up using specific criteria for assessment, including thresholds for approval or rejection and transparency of application outcomes. Ethics review of the project through the host institution is also required. Panel membership typically includes appropriate stakeholder representation, with both professional and lay members. An example in the UK is the National Statistician's Data Ethics Advisory Committee (NSDEC) (UK Statistics Authority 2018). In terms of researcher accreditation, the "Approved Researcher Scheme" established by the ONS provides a day's training course on security and assessing risk in outputs, and includes a test (Office for National Statistics 2018). The controlled environment enables researchers to work freely with the data, but all statistical results and outputs are thoroughly checked for confidentiality, using a process known as output-based statistical disclosure control (SDC); this contrasts with the more traditional input-based SDC, in which a dataset is anonymized prior to research access (Ritchie 2007). Checks are made to see whether particular individuals can be identified from the content of the output. While some checks can be undertaken on tabular data using tools such as DataSHIELD, the context is also important, as are the disclosure risks and the agreed level of acceptable risk (Wilson et al. 2017). The method is judged sufficient when a guarantee of confidentiality can be maintained, for example, according to a pre-agreed standard.
Threshold rules are usually set; for example, cell counts in tables might be considered nonconfidential if the frequency of units is at least five. Higher thresholds may be necessary if there is insufficient variation in the data or the data can be identified with a small number of statistical units.
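A threshold rule of this kind is straightforward to check mechanically. A sketch (the threshold of five follows the example in the text; the table format is an assumption):

```python
def disclosive_cells(freq_table, threshold=5):
    """Return cells in a frequency table whose count falls below the
    threshold and would therefore fail an output check.
    Empty cells (count 0) are ignored here, though in practice
    structural zeros may also need review."""
    return {cell: n for cell, n in freq_table.items() if 0 < n < threshold}

# Hypothetical cross-tabulation of sex by age band.
table = {("male", "under 25"): 41, ("female", "under 25"): 37,
         ("male", "75+"): 3, ("female", "75+"): 12}

print(disclosive_cells(table))  # {('male', '75+'): 3}
```

As the text notes, passing such a rule is necessary but not sufficient: context, differencing against other published tables, and the agreed level of acceptable risk still require human judgment.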
Case Studies of Successful Data Sharing: Ethical Solutions

The UK Data Service has successfully archived a number of studies from qualitative research and from the linking of different kinds of data to longitudinal and cohort studies, all of which encountered ethical challenges where the sensitivity of the research topic could have been a barrier to making data available beyond the research team. Each required treatment through solid ethical planning, anonymization, or revisiting participants to gain consent.
(A) A health-related sensitive topic: Accessing HIV Post-Exposure Prophylaxis: Gay and Bisexual Men in the UK Describe Their Experiences (Dodds et al. 2017)
(B) An emotionally sensitive topic: Managing Suffering at the End of Life: A Study of Continuous Deep Sedation Until Death (Seymour 2017; Seymour et al. 2011)
(C) A politically sensitive topic: Health and Social Consequences of the Foot and Mouth Disease Epidemic in North Cumbria, 2001–2003 (Mort 2006)
(D) A politically sensitive topic: Being a Doctor: A Sociological Analysis, 2005–2006 (Nettleton 2009; Corti et al. 2018)
Stories from these and other depositors are available from the UK Data Service website (UK Data Service 2017b).
New Forms of Data and Ethical Challenges

Researchers are increasingly excited about the potential for analysis afforded by new forms of data and technology. Since 2010, big data from sources like social media, digital sensors, and financial and administrative transactions have become available as data commodities for the social scientist. The promise of big data has led to a focus on developing powerful analytics, such as predictive modelling and text mining, sometimes at the expense of questions of sustainability and reproducibility, ethics, and data protection. Data archivists focus on the tenets of their trade (data provenance and trustworthiness, ethical and legal entitlements, usability, and data structure and quality) and are adapting older methods to deal with these issues. For new forms of data, typically generated for non-research purposes, consent has usually not been obtained, and archives are often unable to host such sources unless they are anonymized. The ethical challenges raised by research that uses these new and novel data can seem daunting; the risks are both real and substantial. However, the opportunities are also great, and with a growing collection of guidance and examples, it is possible to pursue many such opportunities in an ethical manner. But what makes research ethics for big data any different from the ethical matters discussed above? When it comes to data curation and sharing, issues of privacy, informed consent, de-identification, unequal access to data, and research integrity are all pertinent.
Because the data were typically not collected for research, the usual ethical protections applied at several points in the research data life cycle have not taken place, including:
• The data collection was not subject to any formal ethics review process, e.g., by research ethics committees (RECs) or institutional review boards (IRBs).
• Protections at the point of collection (e.g., informed consent) and processing (e.g., de-identification) will not have been implemented.
• Using the data for research may differ substantially from the original purpose for which they were collected (e.g., data gathered to improve direct health care and later used for research), and this was not anticipated when the data were generated.
• Data are less often held as discrete collections; indeed, the value of big data lies in the capacity to accumulate, pool, and link many data sources (Bishop 2017).
The relationship between data curators and data producers is often indirect and variable. An OECD (2017) report argues that this relationship is often weaker or nonexistent with big data, limiting the capacity of repositories to carry out key activities to safely manage personal or sensitive data. The ethical issue of consent arises because, in big data analytics, very little may be known about intended future uses of data at the point of collection. With such uncertainty, neither benefits nor risks can be meaningfully understood, so it is unlikely that consent obtained at the point of data collection (one-off) would meet a strict definition of "informed consent." While procedures exist for "broad" and "generic" consent to share genomic data, these are criticized on the grounds that such consent cannot be meaningful in light of the risks of unknown future genetic technologies. Obtaining informed consent may be impossible or prohibitively costly due to factors such as scale or the inability to privately contact data subjects.
The validity of consent obtained by agreement to terms and conditions is debatable, especially when agreement is mandatory to access a service. Changes in the data landscape present new challenges for traditional practices used to uphold rights to privacy. The systems and tools for analyzing big data are becoming more powerful, with new algorithmic methods of analysis raising questions about the effectiveness of existing techniques of anonymization and de-identification. As Narayanan and Felten (2014) found, there is no "silver bullet." Once this is acknowledged, it is possible to move away from thinking in black-and-white terms and to focus on minimizing, but not eliminating, risks, balancing them against the benefits to society from the use of these data. The growing capacity to link diverse data sources presents challenges for the mechanisms that control access to data (Barocas and Nissenbaum 2014). The problems with the processes of anonymization, consent, and access are not restricted to big data, but big data has transformed the "fault lines" in existing practices into "impassable chasms" (Barocas and Nissenbaum 2014, 45). The problems that big data brings to light extend beyond legal or technical inconveniences and reveal fundamental conceptual challenges.
The tension between safeguarding the privacy of people's (or households') data and the benefits of research using new forms of data needs to be considered. As the risk of "jigsaw" identification through linkage with available public sources increases, enabling safe and trusted access to linked data becomes essential, for example through the Five Safes framework described earlier. The UK's Data Ethics Framework sets out high-level principles for undertaking such data science ethically, with a useful accompanying workbook (Department for Digital, Culture, Media and Sport 2018). Finally, cross-country differences in legal, ethical, and administrative rules for nonstandard survey data gathering also need to be considered when working cross-nationally. An example of linking register data from various countries to the Survey of Health, Ageing and Retirement in Europe (SHARE), a multidisciplinary and cross-national panel study running since 2004, is covered by Schmidutz (2018). In this case, consent for record linkage is unambiguously separated from consent for participation in the survey and is tailored to the specific linkage project in each country. Across countries there are differences in the national actors involved; in the variables being linked, which in some cases include special categories of data; in the exact purposes of and procedures for data processing, which vary in accordance with the particular linked administrative data source (e.g., health data or pension-related data); and in the requirements of the providers of the administrative data.
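The "jigsaw" identification risk mentioned above can be made concrete: joining a nominally anonymized file to a public source on shared quasi-identifiers can restore names. A toy sketch (all data invented):

```python
# Nominally anonymized survey extract: no names, but quasi-identifiers remain.
survey = [{"postcode": "AB1 2CD", "birth_year": 1956, "condition": "diabetes"},
          {"postcode": "EF3 4GH", "birth_year": 1990, "condition": "asthma"}]

# Publicly available register with the same quasi-identifiers plus names.
register = [{"name": "A. Smith", "postcode": "AB1 2CD", "birth_year": 1956},
            {"name": "B. Jones", "postcode": "XY9 8ZW", "birth_year": 1990}]

# The join re-identifies anyone unique on (postcode, birth_year).
reidentified = [
    (pub["name"], row["condition"])
    for row in survey for pub in register
    if (row["postcode"], row["birth_year"]) == (pub["postcode"], pub["birth_year"])
]
print(reidentified)  # [('A. Smith', 'diabetes')]
```

This is why assessing a dataset in isolation is insufficient: disclosure risk depends on what other sources an attacker could plausibly link to it.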
Meeting the Transparency Agenda and Data Sharing

There has been growing interest in data sharing, driven by research funder mandates and the open science agenda to preserve and publish data and realize its reuse value. Since 2013, the "research transparency agenda" has also re-emerged, this time in response to calls for scientific openness and research audit in the wake of evidence of fraud, plagiarism, and failure to publish negative or null findings (Unger et al. 2016). In quantitative methods, reproducibility is held as the gold standard for demonstrating research integrity, with some journals now requiring data, syntax, and prior registration of hypotheses to be made available as part of peer review. Springer Nature journals have led the way in facilitating compliance with research funder and institutional requirements to share data, providing a set of standardized research data policies that can easily be adopted by journals and books (Springer Nature 2017). The shift in semantics from "open access" to "transparency" and "replication" in the research space is important but, in some ways, quite troublesome for qualitative research, for ethical and logistical reasons. It is important to counter mistrust in research findings, yet qualitative researchers might envisage another impending "attack" on qualitative research, for which it is typically harder to demonstrate transparency. For example, replicating the analysis of large volumes of data gathered from a long-running anthropological study would likely be impossible. The American Journal of Political Science (AJPS) is one of the few journals that offer simplified guidance on what to submit to meet its reproducibility mandate. While
most articles submitted to predominantly quantitative journals of this kind easily fit these instructions, the AJPS suggests that alternatives are sometimes necessary to accommodate papers based on qualitative analyses (AJPS 2016). The guidance focuses on describing how texts were collected, assembled, and processed prior to analysis, offering context and a summary of ethical issues. Corti et al. (2018) further argue that, instead of retreating, scholars should greet the challenge positively. Every researcher can share something from their methods and data, and can see "transparency" as an opportunity rather than a threat. Providing ethics review and informed consent documentation can be incredibly useful for methodological insight and review.
Conclusion

In summary, data, including confidential and sensitive research data, can be made available using a range of access pathways that meet legal and ethical obligations. The three pillars to be checked and met are: the status of informed consent, including possible ethics review at a later stage to enable data access; protection of identities through anonymization; and the use of access controls where necessary. The four case studies offered show how researchers who have led studies that are challenging in terms of consent and confidentiality have worked to ensure that at least some elements of their data can be shared.
References

AJPS (2016) Qualitative data verification checklist, American Journal of Political Science, Version 1.0. https://ajpsblogging.files.wordpress.com/2016/05/qual-data-checklist-ver-1-0.pdf
ARX (2018) ARX data anonymization tool. http://arx.deidentifier.org/
Barocas S, Nissenbaum H (2014) Big data's end run around procedural privacy protections. Commun ACM 57(11):31–33. https://cacm.acm.org/magazines/2014/11/179832-big-datas-end-run-around-procedural-privacy-protections/abstract
Bishop L (2017) Big data and data sharing: ethical issues. UK Data Service Guide. https://www.ukdataservice.ac.uk/media/604711/big-data-and-data-sharing_ethical-issues.pdf
Caldicott F (2017) Impact and influence for patients and service users: National Data Guardian for Health and Care 2017 report. Gov.uk. https://www.gov.uk/government/publications/national-data-guardian-2017-report
Chatham House (2018) Chatham House Rule. Chatham House. https://www.chathamhouse.org/chatham-house-rule
Corti L, Van den Eynden V, Bishop L, Woollard M (2014) Managing and sharing research data: a guide to good practice. Sage, London. ISBN: 978-1-44626-726-4
Corti L, Haaker M, Janz N, Nettleton S (2018) Show me the data: research reproducibility in qualitative research. UK Data Service Impact Blog, September 18, 2018. UK Data Service, University of Essex. http://blog.ukdataservice.ac.uk/show-me-the-data/
Department for Digital, Culture, Media and Sport (2018) Data Ethics Framework: guidance. https://www.gov.uk/government/publications/data-ethics-framework/data-ethics-framework
Dodds C, Keogh P, Weatherburn P (2017) Accessing HIV post-exposure prophylaxis: gay and bisexual men in the UK describe their experiences [data collection]. UK Data Archive, Colchester. https://doi.org/10.5255/UKDA-SN-852745
Doward J, Cadwalladr C, Gibbs A (2017) Watchdog to launch inquiry into misuse of data in politics. The Observer, 4 Mar 2017. https://www.theguardian.com/technology/2017/mar/04/cambridge-analytics-data-brexit-trump
Elliot M, Mackay E, O'Hara K, Tudor C (2016) The anonymisation decision-making framework. UK Anonymisation Network. http://ukanon.net/wp-content/uploads/2015/05/The-Anonymisation-Decision-making-Framework.pdf
ESRC (2015) Framework for research ethics. Economic and Social Research Council, Swindon. http://www.esrc.ac.uk/files/funding/guidance-for-applicants/esrc-framework-for-research-ethics-2015/
ESRC (2018) Why GDPR matters for research. ESRC Blog, 25 May 2018. https://blog.esrc.ac.uk/2018/05/25/why-gdpr-matters-for-research/
European Court of Human Rights (1998) European convention on human rights. https://www.echr.coe.int/Documents/Convention_ENG.pdf
Finnish Social Science Data Archive (2018a) Anonymisation and personal data. https://www.fsd.uta.fi/aineistonhallinta/en/anonymisation-and-identifiers.html
The Expert Advisory Group on Data Access (2015) Governance of data access. The Wellcome Trust. https://wellcome.ac.uk/sites/default/files/governance-of-data-access-eagda-jun15.pdf
Finnish Social Science Data Archive (2018b) Anonymisation and personal data. https://www.fsd.uta.fi/aineistonhallinta/en/anonymisation-and-identifiers.html
Foster I (2018) Research infrastructure for the safe analysis of sensitive data. Ann Am Acad Pol Soc Sci 675(1):102–120. https://doi.org/10.1177/0002716217742610
Gov.UK (2018) Guide to the general data protection regulation. https://www.gov.uk/government/publications/guide-to-the-general-data-protection-regulation
Imperial College (2018) The importance of patient data sharing [video]. Imperial College. https://www.youtube.com/watch?v=wja81uz7L5A
Kruse F, Thestrup J (eds) (2017) Research data management – a European perspective. De Gruyter Saur, Berlin/Boston. https://www.degruyter.com/view/product/430793
Lane J, Heus P, Mulcahy T (2008) Data access in a cyber world: making use of cyberinfrastructure. Trans Data Priv 1:2–16
Information Commissioner's Office (2018) Key definitions. The GDPR. Information Commissioner's Office. https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/key-definitions/
Information Commissioner's Office (2018a) Data protection impact assessments. https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/accountability-and-governance/data-protection-impact-assessments/
Information Commissioner's Office (2018b) Anonymisation: managing data protection risk code of practice. https://ico.org.uk/media/1061/anonymisation-code.pdf
Kuula A (2011) Methodological and ethical dilemmas of archiving qualitative data. IASSIST Q 34(3–4) and 35(1–2):12–17. https://iassistdata.org/sites/default/files/iqvol34_35_kuula.pdf
MRC (2018) Data sharing and publishing. Using information about people in health research. Medical Research Council (MRC) Regulatory Support Centre. http://www.highlights.rsc.mrc.ac.uk/PIHR/index.html#/lessons/j9vZgLPLH49AA5cPhzPd8j-zrengyWWl?_k=6ne476
Mort M (2006) Health and social consequences of the foot and mouth disease epidemic in North Cumbria, 2001–2003 [computer file]. UK Data Archive [distributor], Colchester. SN: 5407. https://doi.org/10.5255/UKDA-SN-5407-1
Narayanan A, Felten E (2014) No silver bullet: de-identification still doesn't work. http://randomwalker.info/publications/no-silver-bullet-de-identification.pdf
NHS Health Research Authority (2018) Informing participants and seeking consent. https://www.hra.nhs.uk/planning-and-improving-research/best-practice/informing-participants-and-seeking-consent/
NHS Health Research Authority (2018a) Governance arrangements for research ethics committees. https://www.hra.nhs.uk/planning-and-improving-research/policies-standards-legislation/governance-arrangement-research-ethics-committees/
Nettleton S (2009) Being a doctor: a sociological analysis, 2005–2006 [data collection]. UK Data Service. SN: 6124. https://doi.org/10.5255/UKDA-SN-6124-1
OECD (2007) OECD principles and guidelines for access to research data from public funding. Organization for Economic Co-operation and Development. http://www.oecd.org/dataoecd/9/61/38500813.pdf
Office for National Statistics (2018a) Policy for social survey microdata. ONS. https://www.ons.gov.uk/methodology/methodologytopicsandstatisticalconcepts/disclosurecontrol/policyforsocialsurveymicrodata
Office for National Statistics (2017) The 'five safes' – data privacy at ONS. ONS Blog. https://blog.ons.gov.uk/2017/01/27/the-five-safes-data-privacy-at-ons/
Office for National Statistics (2018) Approved Researcher Scheme. https://www.ons.gov.uk/aboutus/whatwedo/statistics/requestingstatistics/approvedresearcherscheme
Open Brain Consent (2018) Make open data sharing a no-brainer for ethics committees. https://www.open-brain-consent.readthedocs.io
Schmidutz D (2018) Implementing consent for record linkage in a cross-national survey: a practical example from SHARE Wave 8. SERISS. https://seriss.eu/wp-content/uploads/2018/11/Schmidutz_Consent_SHARE_linkage.pdf
Singer N (2013) Data protection laws, an ocean apart. The New York Times, 2 Feb. http://www.nytimes.com/2013/02/03/technology/consumer-data-protection-laws-an-ocean-apart.html?_r=0
The National Archives (2007) The Statistics and Registration Services Act 2007. https://www.legislation.gov.uk/ukpga/2007/18/contents
Sweeney L, Crosas M, Bar-Sinai M (2016) Sharing sensitive data with confidence: the DataTags system. Technology Science, October 16, 2015. http://techscience.org/a/2015101601
Ritchie F (2007) Statistical detection and disclosure control in a research environment. Mimeo, Office for National Statistics. http://doku.iab.de/fdz/events/2007/Ritchie.pdf
The Royal Society (2012) Science as an open enterprise. The Royal Society Policy Centre report 02/12. http://royalsociety.org/uploadedFiles/Royal_Society_Content/policy/projects/sape/2012-06-20-SAOE.pdf
Seymour J, Rietjens J, Brown J, van der Heide A, Sterckx S, Deliens L (2011) The perspectives of clinical staff and bereaved informal care-givers on the use of continuous sedation until death for cancer patients: the study protocol of the UNBIASED study. BMC Palliat Care 10(5). http://www.biomedcentral.com/1472-684X/10/5
Seymour J (2017) Managing suffering at the end of life: a study of continuous deep sedation until death [data collection]. Economic and Social Research Council, Colchester. https://doi.org/10.5255/UKDA-SN-850749
Shepherd P (2012) Millennium Cohort Study ethical review and consent. https://cls.ucl.ac.uk/wp-content/uploads/2017/07/MCS-Ethical-review-and-consent-Shepherd-P-November-2012.pdf
Springer Nature (2017) Research data policies. https://www.springernature.com/gp/authors/research-data-policy/journal-policies/15369670
Statistics Netherlands (2018) μ-ARGUS. http://neon.vb.cbs.nl/casc/mu.htm
Tate R, Calderwood L, Dezateux C, Joshi H (2006) Mother's consent to linkage of survey data with her child's birth records in a multi-ethnic national cohort study. Int J Epidemiol 35:294–298. https://doi.org/10.1093/ije/dyi287
Templ M, Meindl B, Kowarik A (2018) sdcMicro. https://cran.r-project.org/web/packages/sdcMicro/vignettes/sdc_guidelines.pdf
Unger JM, Barlow WE, Ramsey SD et al (2016) The scientific impact of positive and negative phase 3 cancer clinical trials. JAMA Oncol 2:875. https://doi.org/10.1001/jamaoncol.2015.6487
University of Cambridge (2015) Computers using digital footprints are better judges of personality than friends and family. Research, 12 Jan 2015. https://www.cam.ac.uk/research/news/computers-using-digital-footprints-are-better-judges-of-personality-than-friends-and-family
University of London, Institute of Education, Centre for Longitudinal Studies (2018) Millennium Cohort Study: sixth survey, 2015, 4th edn [data collection]. UK Data Service. SN: 8156. https://doi.org/10.5255/UKDA-SN-8156-4
Understanding Patient Data (2018) Data saves lives animations. Understanding Patient Data. https://understandingpatientdata.org.uk/animations
UK Data Service (2018a) Terms and conditions of access. https://www.ukdataservice.ac.uk/get-data/how-to-access/conditions/eul
UK Data Service (2018b) UK Data Service anonymisation of qualitative data. https://www.ukdataservice.ac.uk/manage-data/legal-ethical/anonymisation/qualitative
UK Data Service (2017a) Secure lab: the 5 safes. https://www.ukdataservice.ac.uk/use-data/secure-lab/security-philosophy.aspx
UK Data Service (2017b) Depositor stories. https://www.ukdataservice.ac.uk/deposit-data/stories.aspx
UK Research Integrity Office (2018) Code of practice for research. UK Research Integrity Office. https://ukrio.org/publications/code-of-practice-for-research/
UK Statistics Authority (2018) National Statistician's Data Ethics Advisory Committee. https://www.statisticsauthority.gov.uk/about-the-authority/committees/nsdec/
Wales Cancer Bank (2007) Annual report 2006–2007. Wales Cancer Bank. https://www.walescancerbank.com/getfile.php?type=reports&id=WCB_AnnualReport_2006-07.pdf
Wilson RC, Butters OW, Avraam D, Baker J, Tedds JA, Turner A, Murtagh M, Burton PR (2017) DataSHIELD – new directions and dimensions. Data Sci J 16:21. https://doi.org/10.5334/dsj-2017-021
23 Big Data: A Focus on Social Media Research Dilemmas
Anja Bechmann and Jiyoung Ydun Kim
Contents

Introduction
Ethics in Social Media and Big Data Research
Ethical Dilemmas: A Big Data Study of Facebook Groups
  Informed Consent from All Data Subjects
  Anonymity: From Aggregated Data to Natural Language Processing
  Anonymity: Gender Coding from User Names
  IRBs in Different Countries: Cross-Cultural Comparison
Conclusion
References
Abstract
Big data research is an umbrella term that characterizes research in many fields. This chapter focuses specifically on big data research tied to the use of social media, primarily within humanities and social science research. Social media as a data source provides opportunities to understand how people individually and collectively communicate and socialize, and to critically scrutinize platform infrastructures, exposure, and interaction logic. However, the data and the subsequent processing are closely tied to important ethical issues, especially concerning tensions between privacy on the one side and accountability/transparency on the other. Through an illustrative big data case study of Facebook groups, supplemented with existing literature, the chapter explores ethical dilemmas that occur in connection with social media big data research. The chapter argues that we need to justify our research design by balancing the protection of individuals against the aim of creating knowledge for the good of society.
A. Bechmann (*) · J. Y. Kim DATALAB, Center for Digital Social Research, Aarhus University, Aarhus N, Denmark e-mail: [email protected]; [email protected] © Springer Nature Switzerland AG 2020 R. Iphofen (ed.), Handbook of Research Ethics and Scientific Integrity, https://doi.org/10.1007/978-3-030-16759-2_18
Keywords
Ethics · Social media · Big data · Privacy · Facebook
Introduction

Big data (Kitchin 2014; Mayer-Schönberger and Cukier 2013) has been a hot topic over the last 10 years within the humanities and social sciences due to the ever-increasing size and number of datasets made available. This, in turn, has driven various methods of computing forward (Heiberger and Riebling 2016). Such methods and large-scale research designs have generated new knowledge of the social world, but cases of data breaches, exposures, and exploitations have also drawn strong criticism from the research community and the broader society when it comes to privacy law and ethics. One case in particular stands out in the context of social media data: the case of Cambridge Analytica in early 2018, in which social media data (e.g., total number of likes) on several hundred thousand participants and millions of their friends were used to inform the company's political campaigns without those friends explicitly consenting to this particular purpose. Cambridge Analytica is a consultancy company that builds psychological profiles of voters through established big social media data methods (Kosinski et al. 2013) to help clients win elections. The data acquisition used for political campaigning happened with the help of a researcher at Cambridge University who transferred data to commercial use even though the data had been collected for research purposes (Madrigal 2018; Tufekci 2014). This is clearly ethically unacceptable and created outrage in the research community. However, the case also had a profoundly damaging effect on the reputation of academic research and hampered the opportunity to do ethically sound large-scale social media research in general. Due to the public outrage, Facebook closed down access in 2018 for both commercial and research use through its application programming interface (API), which had been the main data collection tool for researchers worldwide and across disciplines (Bechmann and Vahlstrup 2015; Lomborg and Bechmann 2014).
Taking such drastic measures effectively shut down large-scale access to social media data for third-party companies. At the same time, Facebook also made sure that knowledge of social media communication and behavior, and the associated knowledge of social collective behavior and social systems, stayed in the hands of the private company, making the knowledge gap between public and private research immense. This is convenient for Facebook as a social media company in order to mitigate public scrutiny and to hold the power that such data may have for the prediction and manipulation of collective behavior on a large scale (Bruns et al. 2018). It calls into question the freedom of science, not only as a human right (Guidotti 2018; Porsdam Mann et al. 2018) but also as a common good for society, for building knowledgeable societies, and for the research community's ability to uphold the standards of accountability and reproducibility that are keystones of academic research. In the light of this controversial case, the chapter argues that we need to strike a balance between protecting the individual participants in our research and
creating knowledge from social media data for the common good of society, so that modern societies do not end up in a situation where only commercial companies hold important societal knowledge. In order to reach this goal, researchers need to account for and discuss important ethical dilemmas when using social media data. The aim of this chapter is to outline some of these dilemmas and discuss potential solutions. In doing so, we will outline dilemmas addressed by the existing literature in the field, followed by a big data study of Facebook groups in order to examine the practical matters in greater detail. The case study chosen covers Facebook groups across two countries. The purpose of selecting this particular case study is to highlight different dimensions of consent, openness, and cultural differences in ethics. The chapter concludes by drawing parallels between the general discussion and the practical setup in pursuit of potential solutions for the future.
Ethics in Social Media and Big Data Research

Even though privacy regulation is often mentioned in the context of research ethics, it is important to distinguish between legal and ethically sound research. All research needs to adhere to existing regulation. Yet, even if a research project is legal, it is not per se ethically sound, and the focus of this chapter is on ethics. Recently, the European General Data Protection Regulation (GDPR) introduced a stricter approach to the privacy of individuals and groups in (digital) societies. GDPR holds important definitions of, for instance, personal data, data minimization, informed consent, identifiers, pseudonyms, and anonymous data.
Personal data: any information relating to an identified or identifiable natural person ("data subject"); an identifiable natural person is one who can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier, or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural, or social identity of that natural person.

Data minimization: the practice of collecting only the minimal personal data needed for the particular purpose. Personal social data must not be collected just in case they might be useful in the future; there has to be a clearly specified need for collecting the personal data.

Informed consent: any freely given, specific, informed, and unambiguous indication of the data subject's wishes by which s/he, by a statement or by a clear affirmative action, signifies agreement to the processing of personal data relating to the person in question.

Identifier: a name, an identification number, location data, an online ID, or one or more factors specific to the physical, physiological, genetic, mental, economic, cultural, or social identity of that natural person.

Pseudonyms: the practice of replacing identifiers with a case ID instead of, for instance, a personal identification number or name. Information on the original values and techniques used to create the pseudonyms should be kept organizationally and technically separate from the pseudonymized data.

Anonymous data: created when the separately kept identifying information (in pseudonymous data) is destroyed and thus cannot be linked to the original personal data (Directive (EU) 2016/680; Vollmer 2018).
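The relation between pseudonymous and anonymous data can be made concrete with a short sketch (ours, not part of the GDPR text): identifiers are replaced with case IDs, the lookup key is stored separately from the research data, and destroying that key is what turns pseudonymous data into anonymous data.

```python
# Sketch of GDPR-style pseudonymization: identifiers are replaced with
# case IDs, and the mapping (the "key") is kept organizationally and
# technically separate. Destroying the key makes the data anonymous.
import itertools

def pseudonymize(records, id_field="name"):
    """Replace the identifier field with a case ID; return data and key."""
    counter = itertools.count(1)
    key = {}            # identifier -> case ID: store this separately!
    pseudonymized = []
    for rec in records:
        identifier = rec[id_field]
        if identifier not in key:
            key[identifier] = "case-{:04d}".format(next(counter))
        out = dict(rec)
        out[id_field] = key[identifier]
        pseudonymized.append(out)
    return pseudonymized, key

records = [{"name": "Jane Doe", "posts": 12},
           {"name": "John Smith", "posts": 3},
           {"name": "Jane Doe", "posts": 5}]
data, key = pseudonymize(records)
# `data` now holds only case IDs; deleting `key` renders it anonymous.
```

The design point is the separation: as long as the key exists, the data is merely pseudonymous and still counts as personal data under GDPR.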
Beyond determining whether a research project handles personal data and is compliant with the regulation or not, these definitions are also useful as a backdrop for ethical discussions, and we will refer to some of them in this chapter. GDPR takes a more liberal approach to academic research than to commercial companies, as academic research is expected to adhere to stricter ethical standards and legal control. This highlights the need to take ethics more seriously in order to mitigate unexpected consequences when striving to protect individuals and to meet potential public critique of the research modus operandi. Among the most widely used ethical standards (Ess 2013; Zimmer and Kinder-Kurlanda 2017) for social media research within the humanities and social sciences are the ethical guidelines issued by the Association of Internet Researchers (AoIR), namely IRE 1.0 (Ess and the AoIR Ethics Working Committee 2002), IRE 2.0 (Markham and Buchanan 2012), and IRE 3.0 (forthcoming 2019). These guidelines emphasize that different countries have different ethical standards and encourage researchers to ask themselves whether those standards cover Internet/social media research adequately, as they are often framed around health research objectives (see, e.g., the Belmont Report – Department of Health, Education, and Welfare 1978). The guidelines highlight different dimensions of ethically sound social media research and associated questions to pose, even though they focus neither solely on social media nor on big data studies but on Internet studies more generally. Privacy is the overarching dimension in the guidelines, understood as protecting individuals from harm by securing and preserving anonymity, confidentiality, and informed consent.
Consideration of informed consent is important in big data research projects, as recruitment is done either through application programming interfaces (APIs) and other scraping tools (collecting data on subjects who did not consent), through Internet panels with consent, or through a combination of both. In all cases, many people do not read clickwrap agreements and are therefore not informed (Bechmann 2014) about either the research project in question or the terms of service of the specific platform, which might state that the platform shares data with third parties and that all communication must therefore be treated as public (e.g., the Terms of Service for Twitter – Spring 2019, https://twitter.com/en/tos). Privacy as an ethical concept goes beyond the most basic obligations of Human Subjects Protections in the foundations of IRE 1.0 and 2.0. These guidelines suggest that subjects unambiguously give consent for personal information to be gathered, know why data is collected, and have the ability to correct erroneous data, opt out, and be protected from transfer of data to different legal jurisdictions (e.g., another country). At the same time, the guidelines highlight contextual expectations (Nissenbaum 2009) and, for instance, distinguish between participants being subjects/persons or authors of text/data, and between platforms containing what subjects presume to be private communication or public communication (IRE 1.0 & 2.0). In both distinctions, the former case requires stricter protection of the subjects (i.e., anonymity/pseudonymity, confidentiality, and informed consent). Yet, in the case of social media research, it is less clear when communities are public or private. How many followers and/or members are needed to make a group public, or does this follow solely from the privacy setting of the group? For instance, Facebook groups could
be public but contain highly sensitive topics debated among few members, whereas secret groups could contain nonsensitive topics debated among a very large group of people. The expectations of the subjects have been the focal point in the guidelines (IRE 2.0) as well as in the research debate in general (Zimmer 2010), and at the same time IRE 2.0 clearly defies "attempts to universalize experience or define in advance what might constitute harmful research practice" (p. 7), arguing for a case-based approach instead. This is arguably an excellent approach, but in big data ethics it leaves researchers with the challenge of how, then, to understand the experience of the individual subject included in the study. At the same time, the guidelines suggest considering the potential risk that subjects may endure, and especially highlight minors and vulnerable groups (IRE 2.0). Risk is evaluated differently depending on whether the researcher takes a utilitarian or a deontological approach to the question of what benefits might be gained from the research. Taking a utilitarian approach means that benefits (including the greatest good for the greatest number) might outweigh costs and potential harm for the subjects involved, whereas taking a deontological approach means that the research could never violate basic principles and rights even if the benefits are substantial (Ess and the AoIR Ethics Working Committee 2002, p. 8). This is an extremely interesting discussion in view of our initial claim that ethical social media research needs to strike a balance between freedom of science and the privacy of subjects.
In the utilitarian approach, freedom of science is in the interest of society and might thereby overrule potential privacy breaches, whereas it is less clear whether a deontological approach considers freedom of science an equally important principle and right in current research practice or in the debates in the light of Cambridge Analytica (see, e.g., Bruns et al. 2018). It is worth pointing out that the different ethical approaches have different emphases; the point is not simply which works best but rather to underline the difficulty of balancing privacy and freedom of science, especially in the light of Cambridge Analytica, where privacy tends to be favored at the cost of knowledge for the good of society. We will go into more detail on this discussion later, as we find it central to the discussion of social media research dilemmas in their current form, where data is controlled by private companies. An associated discussion that the guidelines only briefly touch upon is the possibility that the terms of service of the platforms might themselves not be ethical (IRE 2.0, p. 8). For instance, the terms of service of Facebook in Spring 2019 (https://www.facebook.com/terms.php) ban researchers from scraping data from individuals and organizations using the platform for research purposes. Yet, if this is the only way to retrieve data, due to the proprietary rights of social media platforms, then the ban might prevent society from understanding fundamental problems such as privacy issues, viral effects that undermine democracy, and the amplification of cyberbullying. A construct that on the surface is designed to "protect" privacy by keeping data inside the companies might end up undermining privacy, because academic research is not able to point to specific problems due to the proprietary rights and legal status of the terms of service.
A similar dilemma arises when balancing privacy and freedom of science in the case of collecting data and processing it anonymously. IRE 2.0 only briefly mentions two dilemmas that are central to big data studies –
collecting data that reveals links to, and text/data of, people beyond the participants who consented to the study (second- and third-degree data), and destroying knowledge in the data when processing it anonymously to protect the privacy of the individual. In the first case, studying second- and third-degree interactions belonging to other people in the network might be necessary in order to say anything substantial about the logics of virality in a given case of, for instance, disinformation spread. In the second case, processing anonymously data that is designed to disclose identity might prevent important research questions from being asked that could benefit society at large, for instance by disclosing collective biases (Bechmann and Bowker 2019), as we shall see from the case study presented in the next section. The shift from an individual concept of selfhood and identity toward a more relational concept is critical to consider, especially when working with relational metadata in big data studies; yet at the time of writing (Spring 2019), such relational data is being shut down for research access even though it is a substantial source of knowledge for humanities and social science research. Instead, such potent knowledge is kept within private companies. Platforms argue that this happens because they want to protect user privacy, but they do not distinguish between commercial use and legitimate academic research use, for instance through differentiated application programming interface (API) solutions. We will return to this dilemma in the conclusion, after exemplifying dilemmas in our case study.
Ethical Dilemmas: A Big Data Study of Facebook Groups

Our big data case study on Facebook groups aims to discuss systematic gendering in social media by comparing two countries, Denmark and South Korea, with very different gender roles in society (Hausmann et al. 2012). We examine this by looking at patterns of gender differences and by predicting them. In this analysis, we have measured gender differences in personal network size and in participation in groups with different privacy levels (Kim et al. 2019a, b). Based on the research questions, we thus collect relational data on the number of friends and group memberships, and on the amount of comments and posts in the groups. We are interested in the use of Facebook groups as a form of online community with three different privacy levels. We also aim to study how the privacy level of an online community correlates with topics by gender in the two countries. For instance, are some topics debated more by females than other topics, and/or does this happen in groups with particular privacy settings? Facebook provides users with a subcommunity called a "group," a place where people interact with others in a more or less controlled environment. Facebook groups are increasing in popularity, and not just among users: at Facebook's F8 developer conference, Mark Zuckerberg announced that the Facebook newsfeed algorithm had been changed to give priority to friends, family, and groups over Facebook page posts, which are mostly for businesses and brands (Facebook for Developers – F8 2018 Day 1 Keynote n.d.). As outlined in Table 1, there are three types of Facebook groups based on their privacy settings, each with different associated attributes. The most open setting for a group is public, meaning that anyone
Table 1 Public, closed, and secret groups on Facebook and associated privacy settings

| | Public | Closed | Secret |
|---|---|---|---|
| Who can see the group's name? | Anyone | Anyone | Current and former members |
| Who can see the group description? | Anyone | Anyone | Current and former members |
| Who can see the list of members in the group? | People on Facebook | Current members | Current members |
| Who can see admins and moderators in the group? | People on Facebook | People on Facebook | Current members |
| Who can see what members post in the group? | Anyone | Current members | Current members |
| Who can find the group in Facebook search? | People on Facebook | People on Facebook | Current members |
| Who can request to join? | People on Facebook | People on Facebook | Former members |
| Who can see stories about the group on Facebook (e.g., newsfeed and search)? | People on Facebook | Current members | Current members |
can see who is in the group and what has been posted. Public groups are similar to Facebook pages, which are created by users, companies, celebrities, and brands to engage with their audience. While only the administrators of a page can post to it, members of a public group can both post and comment in the group. When you create a group, the closed privacy setting is the default, and the privacy settings of closed groups fall between those of public and secret groups. A closed group provides a more private forum for sharing information than an open group. Closed groups still allow people to search for the group information and see who the members are, but people will not be able to see any posts or information within the closed group unless they are a member. Even people not logged in to Facebook can see the names and descriptions of public and closed groups, and they can see posts in public groups. A secret group has a privacy setting that does not allow anyone on Facebook other than the group members to see or search for the group. To become a member of a secret group, people need to be invited or to create the group themselves. Because of this restriction on membership, the level of openness is the lowest of the three group types.
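The visibility rules in Table 1 can be summarized in a small lookup structure. The sketch below is our own illustrative encoding (the attribute names and audience labels are not part of any Facebook API); it captures the key point that only public groups expose posts to non-members:

```python
# Table 1 condensed into a lookup: privacy setting -> attribute -> audience.
# Illustrative encoding only, not an actual Facebook data structure.
VISIBILITY = {
    "public": {"name": "anyone", "description": "anyone",
               "posts": "anyone"},
    "closed": {"name": "anyone", "description": "anyone",
               "posts": "current members"},
    "secret": {"name": "current and former members",
               "description": "current and former members",
               "posts": "current members"},
}

def can_non_member_see_posts(privacy):
    """Only public groups expose posts to people outside the group."""
    return VISIBILITY[privacy]["posts"] == "anyone"
```

For a researcher, such a structure makes it explicit which group data is visible before any collection decision is taken, although, as argued below, the label alone does not determine how sensitive the content actually is.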
Informed Consent from All Data Subjects

To conduct our study, we collected data using the Digital Footprints software (http://digitalfootprints.dk/) developed by DATALAB at Aarhus University. Digital Footprints is data extraction software that allows researchers to extract data from Facebook's public streams as well as private data from closed and secret groups, with the consent of the users obtained through a four-step procedure (see Bechmann and Vahlstrup
2015). Data from a total of 1,121 Korean and 1,000 Danish participants were retrieved, including data from the 20,783 groups that they were members of at that particular time. We collected profile and description data and retrieved likes, posts, and comments from all the groups since each group was created. The data was collected between the 1st of January 2014 and the 30th of April 2015. A total of 10,662,832 posts, 44,732,331 comments, and 67,787,993 likes were collected from 20,783 groups with 9,557,986 group members. After April 2015, Facebook changed the APIs, and this kind of data collection is no longer available to the independent researcher, but the dataset, and the potential knowledge that can be derived from it, serves as an excellent case for discussing the balance between societal interests and the protection of individual subjects. The data retrieved in the project is stored on university servers with the highest degree of protection, and the data is not portable. Only anonymized, aggregated data is allowed to be downloaded onto single laptops and to travel with researchers. As pointed out in the review of the ethical literature, there is an ethical dilemma in our data collection concerning whether it is ethically sound to collect network/relational data, and this concern has increased in the light of the case of Cambridge Analytica, which arose 3 years after the collection took place. For example, the participants' friends, whose full names appear in friend lists, and the group members whose comments and posts we collected have not consented to participate in the research project.
Even though the project is legally sound – Facebook permitted the app used for data collection, and people consented by signing the terms of service (Beauchamp and Childress 2001) – it is worth discussing whether the project in question (and similar big data studies) should live up to the gold standard of informed consent: having informed consent from all subjects included in the study is desirable (Beauchamp 2011; McKee and Porter 2012; Porter and McKee 2009). Here, consent could mean that every time we process a data point (a like, comment, or post), we should explicitly reach out to the particular subject who made it and ask for permission, instead of relying on their legal and general consent to this activity (Facebook's terms of service and privacy policy). Data collection in big data social media studies depends on the private companies that own the data. This is different from traditional data collection methods, such as survey studies, which draw on the relationship between researchers and individual subjects agreeing to participate through direct consent, potentially with an Internet panel as a recruitment platform. Even though it is legally sound due to the generic sign-up to Facebook's terms and conditions, it would be more ethically sound for data collection – bridging between independent research for society and privacy – if Facebook, for instance, instead of shutting down access completely, allowed individuals to be notified every time their personal data was used for independent research, in order to secure second- and third-degree relational consent and the ability to opt in to such research (Proferes and Zimmer 2014). This ethical dilemma raises two questions on data collection: What is the best or feasible way to obtain informed consent for network/relational data, for both independent academic researchers and participants in these networked surroundings? How
can the researcher interpret the expected sensitivity level of social media big data before collecting the data, in order to mitigate any potential harm to the subjects involved? Even though the secret group collection presented an explicit warning to the participants before collection, this could be improved to make users more aware of this kind of extensive collection (Markham and Buchanan 2012, p. 8). In this case study, we built a four-step consent procedure to increase the likelihood of the consent being informed and not only consensual. In this context, it can also be argued that each member of each open, closed, and secret group may have a different level of perceived expectations of group privacy, based on community functions, content, the size of the community, and similar attributes. Sensitivity to such differences could be supported by a more advanced learning algorithm, rather than relying purely on three generic settings typically identified once as a static entry. This would allow for a more contextual approach to privacy, in which privacy can shift over time by default (van Dijck 2013) depending on how the community evolves, thereby also providing different access restrictions for independent academic researchers collecting data for research purposes. According to the explanation of Facebook group privacy (Table 1), we can only assume the level of sensitivity in each group, as we do not know the content and network data beforehand. Even though data from an open group would seem collectible and analyzable like Twitter data, we should not adopt such expectations blindly. In other words, we do not know whether an open group contains sensitive topics before we have analyzed it. Some secret groups may discuss public issues like sports or music among a considerable number of people, and some public groups may discuss sensitive or personal topics among a small number of members. Our study (Kim et al.
2019b) shows that, among the three privacy levels in the Danish groups, secret groups have the highest average number of active group members (223.1), followed by closed groups (198.1) and open groups (187.6). The three privacy levels provided by Facebook are a way for administrators to label the groups, but it is difficult for researchers to measure all group members' expectations regarding the use of their activity patterns for research. Reflecting on our data collection, we ask: do we need to give up our research questions on friend lists, on group member activities, and on privacy levels in online communities? Obtaining the consent of all 9,557,986 group members is impossible and is a dilemma that all social media big data researchers are confronted with. Minimization of the collection of personal data is necessary in the research plan, and apart from safe space solutions, we will in the next sections discuss in more detail what could be done to minimize potential harm when processing such data. Yet some research questions of importance – for instance on biases in society, like ours – can only be answered using network/relational data. It is almost impossible for an individual researcher to obtain informed consent from 9 million participants. If money were not a problem, researchers could buy the participants' data from Facebook just as private companies do for advertising. Would that be more ethically sound? We would argue against such logic, as it would create less
transparency and would only allow rich academic projects and universities to investigate such data – not a more ethically sound way but, we would argue, a less ethically sound one. The same goes for solutions where a selected few scholars get extensive access with the permission of the platforms (Bruns et al. 2018). We need to give university-employed big data researchers the leverage to collect enough data, inductively and deductively, to produce knowledge for society, but at the same time require them to comply with the highest ethical standards of the community. The distinction of what data is acceptable to collect also remains unclear: is the list of friends, or open group data, more acceptable to collect than closed and secret group data? At the same time, we hope that our discussion shows that context matters. In order to take the expectations of the subjects into consideration, we need to consider them when deciding what to collect, for what purpose, and how to process and store data accordingly. We explain the lists of data we will collect in our informed consent procedure, e.g., the list of friends and group data. However, we cannot guarantee that the participants take time to reflect on our description of the data types to be collected and the research purpose, even though we confront them with this information in several steps – for example, that they entrust us with their friends' names and the content of the groups they are members of, on behalf of everyone in those groups. In the new paradigm on privacy within research ethics, there is an identified need to shift from individual concepts of privacy to a concept of social, relational, and networked privacy, especially in the context of social media big data research (Zimmer and Kinder-Kurlanda 2017, p. 26).
In this context, we could talk not only about a gold standard of consent but also about a gold standard of trust: that researchers establish trusted procedures for information, storage, processing, and publication that live up to the highest standards (see also IRE 3.0).
Anonymity: From Aggregated Data to Natural Language Processing

When analyzing the collected material, the first task for the research team is to minimize the possibility of de-anonymizing the data at any stage of research. We build our database as safely as possible with pseudonymization, replacing the real screen name with a numeric ID, and we only travel with aggregated data for safety reasons. In the process of anonymization, we first categorize full screen names into gender labels (female, male, and unknown), so that our participants cannot be identified as individuals but only as a gender category. Second, we aggregate data to make visible the type of posts and the frequency of posts, comments, shares, likes on posts, and likes on comments at the group level. Aggregation enables quantitative, statistical analysis for summarizing and comparing groups. Lastly, we detach the posts and comments from personal information and link them instead to gender categories and group levels. Table 2 shows the data type, consent information, anonymization method, and the ethical concerns after anonymization. Full screen names of the 2,021 participants, of participants' friends, and of group members are pseudonymized to IDs in the
23
Big Data
437
Table 2 Facebook data on data type, consent information, anonymization method, and low, medium, and high ethical concerns in our case study
Data type : after anonymization Full-screen name of 2,021 participants: gender category Full-screen name of 2,021 participants’ friends: gender category Full-screen name of all members of open groups (8,759) where 2,021 participants are a member of: gender category Full-screen name of all members of closed groups (8,756) where 2,021 participants are a member of: gender category Full-screen name of all members of secret groups (3,268) where 2,021 participants are a member of: gender category The frequency of posts, comments, shares, likes on posts, the likes on comments of 20,783 open, closed, and secret group: numbers Natural language data from posts and comments 20,783 open, closed, and secret group: stemmed words by group
Consent Yes Partially
Anonymization method Remove/changed to ID Pseudonymous Remove/categorize Pseudonymous
Ethical concerns after anonymization Low Low
Partially
Remove/categorize Pseudonymous
Low
Partially
Remove/categorize Pseudonymous
Medium
Partially
Remove/categorize Pseudonymous
High
Partially
Aggregation based on groupand gender-categorized participants
Low
Partially
Stemming and lemmatization based on group- and gendercategorized participants
Medium
database and coded with gender for the portable dataset for the researchers, trying to meet ethical concerns on de-anonymization in the best possible way. The ethical concern after these procedures is reduced, but it will never disappear as following a gold standard of trust will require the team to always be concerned with potential risk of subject harm. A high awareness of this makes the research team always consider ethics at each step of the data processing and storing as a basic requirement in doing social media big data research. For instance, in this case having converted screen names to gender was a less ethical concern of ours than the perceived expectations of communication in secret groups, even though we have identified this label not to match completely with the type of content and network size expected from this label in comparison with more public groups. After the posts and comments are detached from the individual, the text is stemmed to words level based on group and gender. This create a medium level of ethical concern of ours distinguishing between low, medium, and high level as identified in Table 2.
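The pseudonymization-plus-aggregation pipeline described above can be sketched in a few lines. This is a hypothetical illustration, not the authors' actual code: the field names (`author`, `group`, `action`) and the `gender_of` callback are our own stand-ins, and in practice the name-to-ID table would be stored separately from the portable dataset.

```python
from collections import defaultdict

class Pseudonymizer:
    """Assigns stable numeric IDs to screen names; the mapping stays local."""
    def __init__(self):
        self._ids = {}

    def to_id(self, screen_name):
        # Reuse the existing ID for a known name, otherwise issue the next one.
        return self._ids.setdefault(screen_name, len(self._ids) + 1)

def pseudonymize(posts, pseudo, gender_of):
    """Strip names: keep only numeric ID, gender category, group, and action."""
    return [
        {"id": pseudo.to_id(p["author"]),
         "gender": gender_of(p["author"]),
         "group": p["group"],
         "action": p["action"]}
        for p in posts
    ]

def aggregate(records):
    """Count actions per (group, gender) pair: the portable, aggregated view."""
    counts = defaultdict(int)
    for r in records:
        counts[(r["group"], r["gender"], r["action"])] += 1
    return dict(counts)
```

Only the output of `aggregate` (and the pseudonymized records) would travel with the researchers; the `Pseudonymizer`'s internal table, which could re-identify participants, stays behind.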
438
A. Bechmann and J. Y. Kim
In order to analyze our data, we built Bayesian Poisson regression models with separate aggregated numbers of online participation (the number of friends, group memberships, posts, and comments at the three privacy levels of groups) as count outcomes. Big data methods, as quantitative methods, can statistically analyze data in two different ways: finding patterns in variables inductively or testing hypotheses deductively. Moreover, machine learning methods can, by way of correlation, classification, and training, produce an algorithm that predicts outcomes based on probability (Bechmann and Bowker 2019; Zimmer and Kinder-Kurlanda 2017, p. 98). One of the profound ethical dilemmas in terms of anonymization in social media big data research is the gray zone around conducting natural language processing on user data: is it pivotal to have informed consent from each individual occurring in the dataset in order to process their comments and posts through the mathematical model? We argue here that doing distant big data analysis on natural language data may actually be more ethically sound than a traditional close reading in textual analysis of such data, in terms of safeguarding individuals' confidentiality (privacy). In natural language processing, researchers do not read the text but only look at aggregated results from the processing and only use aggregated labeling. However, a profound ethical concern is to justify the validity of such labels to make sure that the data is not misinterpreted. This in turn requires researchers to use mixed methods, in which a close reading of a randomized subset would be relevant, and therefore ties back into the dilemma of confidentiality: in this case we risk de-anonymizing the data and disclosing the relationship between identity and content.
Still, in our case we try to mitigate this by replacing all names with gender classes, supporting the famous yet controversial comment that most big data studies are not about knowing who the individual participant is but what they are like (Matzner 2014).
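The "distant reading" idea (researchers see only aggregated term counts, never the raw posts) can be illustrated with a minimal sketch. The crude suffix-stripping stemmer below is a stand-in for a real stemmer or lemmatizer, and the record fields (`group`, `gender`, `text`) are our own assumptions, not the chapter's actual pipeline.

```python
import re
from collections import Counter

def stem(token):
    # Crude suffix stripping; a stand-in for a real stemmer/lemmatizer.
    for suffix in ("ing", "ed", "s"):
        if token.endswith(suffix) and len(token) > len(suffix) + 2:
            return token[: -len(suffix)]
    return token

def distant_term_counts(posts):
    """Map records with 'group', 'gender', and 'text' fields to stemmed term
    counts per (group, gender); the analyst only ever sees these aggregates."""
    counts = {}
    for p in posts:
        tokens = re.findall(r"[a-z]+", p["text"].lower())
        key = (p["group"], p["gender"])
        counts.setdefault(key, Counter()).update(stem(t) for t in tokens)
    return counts
```

The confidentiality trade-off discussed above shows up here concretely: validating that these aggregated labels mean what we think they mean would require close reading of a subset, which is exactly the step that risks re-identification.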
Anonymity: Gender Coding from User Names

As illustrated in the last section, it is an ethical dilemma not only to connect aggregated data with natural language processing but also to use real names to identify gender. How, then, do we ethically and responsibly identify gender in large datasets? We argue here that it is important for society to maintain the analysis of some classes in order to analyze potential biases that the algorithmic society might magnify or at least sustain; otherwise we become blind to such biases (Bechmann and Bowker 2019). From a practical point of view, identifying names across countries is not a trivial task, especially when dealing with non-English-speaking countries, and acquiring lists of names from the Danish and Korean statistics departments to act as ground truth created problems. For the Danish names, we obtained the list of names from the Danish statistics department and assigned a gender using a Naive Bayes classifier, built from real-world data using a simple prediction model (Kim et al. 2019a). However, in an international communication community like Facebook groups, we cannot sort only on Danish names: the subjects are diverse in terms of nationality, so recognition is only partial. We allocated gender to 9,557,986 users based on their names, assigning gender to users whose "actor_name" on Facebook either begins with or includes a Danish first name. This method assigned gender to 7,496,327 user names. The Gender API (gender-api.com), the biggest platform on the Internet for determining gender from a first name, was used to assign gender to Korean names and thus to make explicit the way gender categories were assigned to linguistic attributes in Hangul (Korean). However, Korean user names on Facebook appear in many different forms: written in Korean with the surname first, as 김지영, with 김 (Kim) as the surname and 지영 (Jiyoung) as the given name; written in Korean with the given name first, as 지영김; written in English with the given name first, as Jiyoung Kim; or with the surname first, as Kim Jiyoung. There are also different ways to spell Korean names in English, for instance Jiyoung Kim as Jiyeong Kim or Jiyeong Gim, or with a hyphen in the given name, as Ji-young Kim. Unlike the Danish name list, we could not obtain a list of Korean first names from Korean statistics. Still, we used the Gender API to provide gender coding of the Korean names in our dataset. Since results from the Gender API may be unreliable if the sample in its database is small or if the reported accuracy is close to 0.5, we used a threshold of 0.6 to maximize accuracy. Theoretically, participants' self-identification should be the gold standard for ascribing gender categories, but we have self-identified gender data only for our 2,021 participants. Larson (2017) argues that applying gender as a variable in natural language processing in this way may create ethical issues.
You risk placing people in categories they do not belong to or interpreting gender in a strictly binary sense. Researchers in big data therefore need to apply gender variables thoughtfully and to discuss explicitly in their publications how the study might reach false conclusions due to, for instance, errors in name lists or binary interpretations of gender used as ground truth measurements.
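A simplified sketch of the two-stage gender coding described above: match against a ground-truth first-name list where one exists (as in the Danish case), and otherwise fall back to a lookup service that returns a gender together with an accuracy score, accepting its answer only above the 0.6 threshold mentioned in the text. The tiny name list and the `api_lookup` callback are hypothetical stand-ins; the chapter's Danish pipeline actually used a Naive Bayes classifier rather than exact list matching.

```python
# Stand-in for a statistics-office first-name list (ground truth).
NAME_LIST = {"anna": "female", "jens": "male"}

def code_gender(actor_name, api_lookup, threshold=0.6):
    """Assign a gender category to a user name, or 'unknown' if uncertain."""
    first = actor_name.split()[0].lower()
    if first in NAME_LIST:                       # ground-truth list wins
        return NAME_LIST[first]
    gender, accuracy = api_lookup(actor_name)    # e.g., a Gender API-style call
    return gender if accuracy >= threshold else "unknown"
```

Returning "unknown" below the threshold at least makes low-confidence cases explicit rather than forcing a binary guess.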
IRBs in Different Countries: Cross-Cultural Comparison

The last dilemma that we discuss in this chapter concerns cultural boundaries in social media and communication data ethics and in the ethical modus operandi. This is especially relevant in cross-country comparative studies with a maximum variation setup. Countries like Denmark and South Korea, for instance, have profoundly different cultures and legal protections, and even on the same platform, such as Facebook, the expectations of privacy in open, closed, and secret groups may differ from country to country. Especially since social media research hopefully takes on a global scope, it is important here to exemplify some of the differences in ethical standards between South Korea and Denmark. To collect data in the two countries, we had to apply separately for Institutional Review Board (IRB) approval in Denmark and in Korea. The Korean data collection was approved by the university's Institutional Review Board, which required the submission of a research plan, a résumé of the research director, a pledge of bioethics compliance, a conflict-of-interest disclosure form, and a copy of the research ethics education certification of all researchers. In the Danish case, the application and subsequent permission were handled by the Danish Data Protection Agency, a national review board that, in line with the EU approach already laid out in this chapter, focuses on, e.g., the type of data collected, the purpose of the project, and whether these are proportional in the context of research. The agency also required an explicit duration for the project, together with information on informed consent procedures, on whether data is shared with third parties, on how and where data is stored, and an up-to-date record of who is involved in the project at all times during its duration. After the GDPR, the application and permission processes have become more similar to the IRB approach: applications are sent to the legal departments of the universities in Denmark, still requiring compliance with both law (the GDPR) and ethics. For our research project, this meant in practice that the two countries differed in their requirements for data sharing and mobility and in their requirements for records of who had access to the data at any given time. We therefore argue for applying the highest standards in such designs, and also for creating the highest standards in international research communities such as the Association of Internet Researchers (AoIR), so that researchers can discuss and benchmark their actions against such standards even when they already comply with national codes of practice. In order not to be paternalistic, this requires such communities to be truly representative of cultures worldwide and not Western-centric.
Cross-cultural awareness is almost always required, as the great majority of social media big data research projects inevitably involve researchers from diverse national or cultural backgrounds.
Conclusion

This chapter has argued for balancing the principle of privacy with the need for public knowledge derived from social media data held outside private companies. This means that the Cambridge Analytica case cannot be allowed to prevent all big data social media research. Instead, we need to focus more strongly on how to do such research in an ethically sound way, creating a gold standard for a trusted relationship between data subjects and researchers. The chapter has also shown that, in order to do so, we need to combine the dominant generic discussion of big data ethics with the dilemmas of specific projects. Laying out such considerations is a fragile matter for researchers, as doing so makes us vulnerable to attacks from both public and private actors. But it is necessary in order to advance the discussion, to specify the potential problems that arise, and to find solutions that might create a more sustainable ecology and work process, instead of taking the easy way out and simply abandoning big data social media research in order to safeguard academic careers and reputations at the cost of both transparency and new (public) knowledge.
Pursuing this fragile strategy, the chapter has tried to illustrate the choices made and the discussions that arose in a very specific case study, touching upon the dilemmas of informed consent, topic identification, the meaning of the label "secret," identifying gender from names, and obtaining cross-country IRB approval. All these dilemmas build on the societal need for knowledge derived from research as a fundamental principle, which must be balanced against the highest possible standards for privacy. The chapter thus discusses not whether but how this balance can be achieved. Here, concerns about safeguarding individual identity have been essential, along with always asking how the perceived or expected level of privacy matters to the individual, since publications in big data research consist of aggregated reports, even though descriptions of the dataset might disclose identity (see, e.g., Zimmer 2010). The chapter has worked with high, medium, and low levels of ethical concern during all stages of research, with a focus on data collection and processing. By focusing not on whether but on how such big data research can take place, the reader might assume that we choose a utilitarian approach to ethics. Instead, we argue deontologically that knowledge is a prerequisite for securing privacy; the two cannot be separated as distinct and mutually exclusive principles, because privacy cannot be protected in a networked and platform-dominated society if we do not know what knowledge can be derived from the data held by those private companies. In this way, platforms controlling access for research through terms of service, and centralized solutions that select only a few researchers to have access in a specific way to address specific questions (e.g., Social Science One, socialscience.one), hamper the opportunity to safeguard principles fundamental to our society.
This is also relevant when access is shut down to the relational data, and to knowledge of the logics, that form the backbone of our networked society. Relational data on Facebook is at the moment only accessible through collaboration with the company, making it possible for commercial parties to control what is made available and for what academic purpose. For instance, Facebook's data science team and academics together conducted a massive-scale experiment on emotional contagion through social networks, manipulating the sentiment of Facebook newsfeed content (Kramer et al. 2014). Such collaboration blurs the responsibility for checking whether ethical considerations and potentially harmful implications have been discussed, or whether potential risk scenarios have been taken care of. The study also illustrates the ability of the companies to manipulate citizens through data, and what independent researchers cannot discover unless they work together with those same companies (Zimmer and Kinder-Kurlanda 2017, p. 213). This dependency, we argue, is not beneficial for the freedom of science and therefore, from an ethical point of view, needs to be addressed in future policy and regulation at an international level. Instead, we suggest a designated and legitimate route of access for academic research that allows for the screening of ethical considerations and approval by the university in question, something that did not take place before the Cambridge Analytica case. In a similar way, the chapter has demonstrated that the privacy label "secret" means very different things to individuals, groups, and cultures, and suggests more advanced labeling, and subsequent ethical considerations, that take into account network size and the content shared in such groups. The amount of personal data in our society will presumably increase, but in big data research, data is treated as "a resource for very different modes of inquiry – rather than as entries with a fixed meaning" (Zimmer and Kinder-Kurlanda 2017, p. 55). Big data analysis allows researchers not only to look for something in the data but also to "listen" to the data for what to look for, so the practice of identifying a purpose of research needs to leverage this opportunity to some degree. This resembles a qualitative approach to inquiry, in which you look for new potential knowledge instead of testing deductive hypotheses. It is therefore relevant to ask: do the ethical issues increase proportionally with the volume of data? In this chapter we argue that it is still a matter of how we collect data, for what purpose, and whether we manage the data ethically. This will hopefully re-establish trust in independent academic humanities and social science researchers working with big data. This chapter has only discussed some possible solutions to regaining this trust; much more research needs to target this important matter, as data types and volumes will only accelerate with the next generations of Internet devices, services, standards, and protocols.
References

Beauchamp TL (2011) Informed consent: its history, meaning, and present challenges. Camb Q Healthc Ethics 20(4):515–523
Beauchamp TL, Childress JF (2001) Principles of biomedical ethics. Oxford University Press, New York
Bechmann A (2014) Non-informed consent cultures: privacy policies and App contracts on Facebook. J Media Bus Stud 11(1):21–38. https://doi.org/10.1080/16522354.2014.11073574
Bechmann A, Bowker GC (2019) Unsupervised by any other name: hidden layers of knowledge production in artificial intelligence on social media. Big Data Soc 6(1). https://doi.org/10.1177/2053951718819569
Bechmann A, Vahlstrup PB (2015) Studying Facebook and Instagram data: the Digital Footprints software. First Monday 20(12):1–13. https://doi.org/10.5210/fm.v20i12.5968
Bruns A, Bechmann A, Burgess J, Chadwick A, Clark LS, Dutton WH, … Howard P (2018) Facebook shuts the gate after the horse has bolted, and hurts real research in the process. Internet Policy Review
Directive (EU) 2016/680 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data by competent authorities for the purposes of the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, and on the free movement of such data, and repealing Council Framework Decision 2008/977/JHA
Ess C (2013) Digital media ethics. Polity Press, Cambridge
Ess C, The AoIR Ethics Working Committee (2002) Ethical decision-making and Internet research: recommendations from the AoIR Ethics Working Committee (IRE 1.0). Retrieved from https://aoir.org/ethics/
Facebook for Developers – F8 2018 Day 1 Keynote (n.d.) Retrieved 2 Apr 2019, from the Facebook for Developers website: https://developers.facebook.com/videos/f8-2018/f8-2018-day-1-keynote/
Guidotti TL (2018) Scientific freedom and human rights. Arch Environ Occup Health 73(1):1–3. https://doi.org/10.1080/19338244.2017.1364522
Hausmann R, Tyson LD, Zahidi S (2012) The global gender gap report 2012. World Economic Forum, Geneva
Heiberger RH, Riebling JR (2016) Installing computational social science: facing the challenges of new information and communication technologies in social science. Methodological Innov 9:2059799115622763. https://doi.org/10.1177/2059799115622763
Kim JY, Fusaroli R, Bechmann A (2019a) Systemic gendering in Facebook group participation. Presented at the International Association for Media and Communication Research
Kim JY, Park HW, Bechmann A (2019b) A typology on Facebook groups: a large-scale case study of measuring network age, network size, gender and topics. Presented at the XXXIX Sunbelt social networks conference of the International Network for Social Network Analysis (INSNA), Montréal
Kitchin R (2014) The data revolution: big data, open data, data infrastructures and their consequences. Sage, London
Kosinski M, Stillwell D, Graepel T (2013) Private traits and attributes are predictable from digital records of human behavior. Proc Natl Acad Sci 110(15):5802–5805. https://doi.org/10.1073/pnas.1218772110
Kramer ADI, Guillory JE, Hancock JT (2014) Experimental evidence of massive-scale emotional contagion through social networks. Proc Natl Acad Sci 111(24):8788–8790. https://doi.org/10.1073/pnas.1320040111
Larson B (2017) Gender as a variable in natural-language processing: ethical considerations. In: Proceedings of the first ACL workshop on ethics in natural language processing, pp 1–11. https://doi.org/10.18653/v1/W17-1601
Lomborg S, Bechmann A (2014) Using APIs for data collection on social media. Inf Soc 30(4):256–265. https://doi.org/10.1080/01972243.2014.915276
Madrigal AC (2018) What took Facebook so long? The Atlantic. Retrieved from https://www.theatlantic.com/technology/archive/2018/03/facebook-cambridge-analytica/555866/
Markham A, Buchanan E (2012) Ethical decision-making and Internet research: recommendations from the AoIR Ethics Working Committee (Version 2.0)
Matzner T (2014) Why privacy is not enough: privacy in the context of "ubiquitous computing" and "big data". J Inf Commun Ethics Soc 12(2):93–106. https://doi.org/10.1108/JICES-08-2013-0030
Mayer-Schönberger V, Cukier K (2013) Big data: a revolution that will transform how we live, work, and think. Houghton Mifflin Harcourt, New York
McKee HA, Porter JE (2012) The ethics of archival research. Coll Compos Commun 64(1):59–81. Retrieved from JSTOR
Nissenbaum H (2009) Privacy in context: technology, policy, and the integrity of social life. Stanford University Press, Stanford
Porsdam Mann S, Donders Y, Mitchell C, Bradley VJ, Chou MF, Mann M, … Porsdam H (2018) Opinion: advocating for science progress as a human right. Proc Natl Acad Sci U S A 115(43):10820–10823. https://doi.org/10.1073/pnas.1816320115
Porter JE, McKee H (2009) The ethics of internet research: a rhetorical, case-based process (first printing edition). Peter Lang Inc., International Academic Publishers, New York
Proferes NJ, Zimmer M (2014) A topology of Twitter research: disciplines, methods, and ethics. Aslib J Inf Manag 66(3):250–261. https://doi.org/10.1108/AJIM-09-2013-0083
The United States National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research (1978) The Belmont report: ethical principles and guidelines for the protection of human subjects of research, vol 2. Dept. of Health, Education, and Welfare, Washington, DC
Tufekci Z (2014) Engineering the public: big data, surveillance and computational politics. First Monday 19(7). Retrieved from http://firstmonday.org/ojs/index.php/fm/article/view/4901
van Dijck J (2013) The culture of connectivity: a critical history of social media. Oxford University Press, Oxford/New York
Vollmer N (2018) Article 4 EU General Data Protection Regulation (EU-GDPR), 5 September. Retrieved 1 Apr 2019, from http://www.privacy-regulation.eu/en/article-4-definitions-GDPR.htm
Zimmer M (2010) "But the data is already public": on the ethics of research in Facebook. Ethics Inf Technol 12(4):313–325. https://doi.org/10.1007/s10676-010-9227-5
Zimmer M, Kinder-Kurlanda K (eds) (2017) Internet research ethics for the social age: new challenges, cases, and contexts (new edition). Peter Lang Inc./International Academic Publishers, New York
Ethics of Ethnography
24
Martyn Hammersley
Contents
Introduction 446
Background 446
Minimizing Harm 447
Autonomy and Informed Consent 450
Privacy 452
Reciprocity 453
Conclusion 455
References 455
Abstract
The ethical issues relevant to social research take distinctive forms in the case of ethnography. One reason for this is that it involves entering the territories of others, rather than inviting them into that of the researcher (for the purposes of carrying out an experiment or an interview, or administering a questionnaire). Participant observation in "natural" settings is usually involved, taking place over weeks, months, or even years. Also significant is the flexible character of ethnographic research design, which means that only quite limited information can be provided at the start of data collection, when access is initially being negotiated, about exactly what the research will entail. A further distinctive feature is that relatively close relationships are established with some research participants, setting up obligations of one kind or another. Furthermore, these people and their activities are described in detail in ethnographic reports, with the result that they may be identifiable – at least by those who know the setting investigated. In this chapter, some of the central commitments of research ethics – minimizing harm, respecting autonomy, preserving privacy, and offering some reciprocity – are examined as they arise in ethnographic work. In particular, the issue of informed consent is considered. This is a common requirement in the context of ethical regulation, but there are difficulties involved in achieving it in ethnography, and its appropriateness is sometimes open to question.

M. Hammersley (*) Open University, Milton Keynes, UK. e-mail: [email protected]

© Springer Nature Switzerland AG 2020. R. Iphofen (ed.), Handbook of Research Ethics and Scientific Integrity, https://doi.org/10.1007/978-3-030-16759-2_50

Keywords
Research ethics · Ethnography · Qualitative research · Harm · Privacy · Autonomy · Reciprocity
Introduction

Generally speaking, the term "ethnography" is taken to refer to an exploratory form of social research in which the researcher visits one or more social settings relevant to the research topic; participates there in some role (even if only as a visitor or onlooker) for weeks, months, or years; observes and records what happens (using field notes but possibly also photos and audio or video recording); interviews participants informally and perhaps more formally; and collects documentary data or perhaps even material artifacts. While, traditionally, participant observation has been face-to-face, there is now, of course, a considerable amount of "online ethnography," using parallel means of data generation in "virtual" environments.

If we take what are widely regarded as the central principles of research ethics – minimizing harm, respecting autonomy, protecting privacy, and offering some reciprocity – ethical issues arise in relation to all phases of ethnography, from initial negotiations with funding bodies and ethical regulators, through access negotiations and field relationships, to leaving the field, writing reports, and disseminating the findings. Furthermore, the character of ethnography has significant implications for research ethics. One reason for this is that it involves entering the territories of others, and this has implications not just for what ought and ought not to be done but also for the level of control that the ethnographer can exercise over what happens. Furthermore, the flexible design of ethnography means that, at the outset, when initial access is being negotiated, only quite limited information can be provided about what will be involved. Finally, relatively close relationships are likely to be established with some participants, building up obligations; and these people (and other participants) may be identifiable in publications, at least by those who know the setting concerned.
Background

There is a long history of concern with ethical issues on the part of ethnographers. Very often, this has centered on criticisms of particular studies and responses to these. A few examples will be mentioned here (for a more extensive list, see Hammersley and Traianou 2012: Intro.). In the 1950s, there were challenges to Festinger et al.'s (1956) covert study of a small apocalyptic religious group: they joined this and pretended to share its beliefs, in order to study the reactions of its members when the world was not destroyed on the date predicted. Critics questioned whether the deception involved could be justified and suggested that the research may have had detrimental effects on participants' psychological health (see Riecken 1956; Erikson 1967; Bok 1978).

Around the same time, Vidich and Bensman's (1958) study of a small town in upstate New York also attracted criticism. Their book emerged out of a larger investigation and was specifically intended to counter the "positive," "bland" account of the town portrayed by the main study, focusing instead on conflicts within the community and on the power of some key community members. Critics raised issues about how individuals and the community were portrayed, given that some people were easily identifiable. Also, there were questions about the relationship between the authors and the larger project in which they had participated (see Vidich and Bensman 1964; Bell and Bronfenbrenner 1959; Becker 1964).

More recently, there was the "El Dorado Scandal," which arose when an investigative journalist published a book (Tierney 2001) criticizing a well-known US anthropologist, Napoleon Chagnon, for behaving unethically in relation to the Yanomamö people, whom he had studied over a long period of time. It was claimed, for example, that he had exaggerated the violent behavior of tribe members and that, on his own account, he had violated some of the principles of the American Anthropological Association's ethics code, for example using deceitful means to obtain information about names and genealogy. And it was argued that his work had been used against the interests of the Yanomamö by Brazilian mining companies (see Geertz 2001; Pels 2005; Dreger 2011; Fluehr-Lobban 2003; Borofsky 2005; Chagnon 2013).
A more recent major controversy, of a rather different kind, has surrounded Alice Goffman’s (2014) study On the Run, in which she studied the lives of the residents of a black neighborhood in Philadelphia, focusing on a group of young black men and the effects of intensive policing on their lives. While Goffman’s research was acclaimed by many, some critics argued that there were inconsistencies and inaccuracies in her account that reflected insufficient rigor in data collection and analysis. There were also ethical criticisms, in particular of her driving one of the men round the neighborhood when he was looking for the killer of his friend and was carrying a gun (see Lewis-Kraus 2016; Lubet 2018). These controversies raised a wide range of issues about ethnographic work, but most relate to a small set of principles that form the core of research ethics: the obligations to minimize harm, respect autonomy, protect privacy, and ensure some acceptable degree of reciprocity. However, these principles have been subject to different interpretations, and they can conflict in their implications, as well as with the requirements of successfully carrying out ethnographic research. In the remainder of this chapter, I will take each principle, outline broadly what it involves, and then examine some of the ways that it can apply in ethnographic work and the issues that these generate.
Minimizing Harm

Causing harm is sometimes a danger in ethnographic research, though this does not usually arise in the manner assumed by biomedical ethics, where harm results from a treatment whose effectiveness is being tested by the researcher(s). There is no such
M. Hammersley
treatment in ethnography, but harm can arise from a participant role that the ethnographer takes on, as in the case of Goffman’s becoming a friend and driving her key informant around the neighborhood or Whyte’s joining in illegal repeat voting with the “corner boys” he was studying (Whyte 1993:313–4; see also Whyte 1984:66–7). It can also stem from the sheer presence of the ethnographer, which may encourage undesirable behavior on the part of participants. For example, Durão (2017:232) expresses concern that the Portuguese police officers she was studying detained a young man and tried to get him convicted in order to demonstrate to her their skills in detective work. Harm can also arise from an ethnographer’s failure to play a participant role effectively, whether through inability or ethical qualms. Even simply asking people questions about sensitive or upsetting matters may be deemed harmful, especially if it reinforces rather than eases trauma. This is especially likely where people are already in stressful situations and/or are particularly vulnerable in some respect. An example would be people with terminal illnesses, and their relatives (Cannon 1992; Seymour 2001).

However, there are complexities surrounding both the concept of vulnerability and that of harm. In the case of the first, we must ask, in any particular case: vulnerable to what, and to what degree (see, for instance, Oeye et al. 2007; van den Hoonaard 2018)? Moreover, it is by no means always obvious what constitutes harm, nor is the causing of harm always undesirable (it may be necessary to bring about good), and of course what counts as harm as regards one person or group of people may constitute benefit or even justice for some others. Equally important, harm can vary in severity. In my view, most harm potentially resulting from ethnographic fieldwork is minor, but this does not mean that the issue should be ignored.
Harm can result not just from ethnographic fieldwork but also from the publication of research findings (see Murphy and Dingwall 2001:341–2). An extreme case from many years ago concerns Condominas’s anthropological account of Sar Luk, a mountain village in South Vietnam, published in French in 1957. This was subsequently used by the US Army in the Vietnam War as part of “ethnographic intelligence” (see Barnes 1979:155–6). Aside from such indirect use, some anthropologists have worked directly for the US army. As part of the Human Terrain System, researchers were embedded with soldiers in war zones in Afghanistan and Iraq, the research being designed to improve the army’s effectiveness through providing local cultural knowledge (McFate and Lawrence 2015; Durkin 2015; González 2018). It is important to recognize the arguments on both sides of debates about the use of ethnographic knowledge and skills for such purposes. For those who reject the policy aims and military strategies of the US Government, such direct involvement in these is ethically unacceptable. And the same may be true for those who oppose the subordination of ethnographic work to political or practical goals of any kind. However, on the other side, it has been argued that such work may counter ethnic stereotypes within the US military and reduce casualties. It is also sometimes suggested that opposition to military involvement by ethnographers reflects a naïve pacifism that ignores the unavoidable complexities of both national defense and legitimate foreign intervention. A whole host of difficult and contentious issues are involved here (see Wakin 2008; Albro et al. 2012).
Ethics of Ethnography
A more mundane example of the sort of harm that can arise from publication is provided by Chege’s (2015:473) study of “beach boys” in Kenya. They were concerned that she: [. . .] would misrepresent them as individual beach workers and as an occupational group, by producing negative representations related to their interactions with tourists. This could then pose a threat to their livelihoods. For example, during an individual introductory meeting I had with Willi, his concern was that the data generated would be published by the media and would expose their experiences with female tourists in fallacious or negative ways to the world. He said this would in turn endanger their sources of subsistence.
A parallel case is Ditton’s study of “fiddling and pilferage” among bread salesmen. He opens the preface to his book in the following way: I am lucky enough to have a number of friends and colleagues. Probably not as many of the former [. . .] now that this book has been published. I don’t expect that many of the men at Wellbread’s will look too kindly on the cut in real wages that this work may mean to them, and my bakery self would agree with them. (Ditton 1977:vii)
These cases are also complex. For instance, it is true that Ditton’s exposure of “fiddling and pilferage” caused harm not only to the fortunes and reputations of those who worked for the bakery he studied but probably also to people working in other bakeries as well – workers who were on relatively low wages. At the same time, the behavior exposed was illegal. It is interesting to consider whether our judgment about this would be different if those exposed had been bankers rather than bakers (see Ho 2009).

Publication may not only damage people’s material circumstances but can also affect their reputations or sense of self. McKenzie (2015) reports sleepless nights while writing her book about a working-class estate where she and her family lived, worrying about how the stories could be used by the media to further stigmatize working-class people. Clearly, publicity can damage the reputations of individuals, organizations, and locations, as well as hurting the feelings of those involved. Not all of the responsibility for this lies with the researcher, of course, but these dangers must be given attention. At the same time, it should be said that most ethnographic work attracts little or no media attention, and the identities of participants are usually successfully protected through anonymization.

Finch (1984) raises a more general issue about harm in relation to her work on playgroups and clergymen’s wives. She argues that it is difficult even for feminists “to devise ways of ensuring that information given so readily in interviews will not be used ultimately against the collective interests of women” (1984:83). Of course, it is not always clear what is in whose interests, and some would argue that the value of scientific knowledge, or the public right to know, outweighs such considerations. But many ethnographers insist on the importance of trying to ensure that the knowledge produced by their research is used for good, and not for bad, purposes.
How far this can be done, and how well-grounded their judgments are about what is good and bad, is open to debate. Furthermore, ethnographers will usually have little control over the consequences of publishing their work.
The issue of harm also arises when ethnographers witness harmful actions by people within the field. Alcadipani and Hodgson (2009:137) provide an example from their research on a printing company: While in the field, I routinely witnessed instances of sabotage, bullying and racism. Some forms of sabotage are particularly dangerous when a press is running at 80mph, not only disrupting production [. . .] but also putting other individuals at risk. Racist comments were continuously addressed towards Asian workers and managers, and bullying was widely practiced.
In a very different context, the study of an impoverished family, VanderStaay (2005:399) asks himself: “how much neglect a researcher can observe before placing the health and welfare of a child above all other priorities.” Here, again, what counts as ethically appropriate behavior on the part of the ethnographer is by no means clear or uncontentious. Moreover, the possibility that intervention will worsen the situation of those harmed must also be taken into account. Indeed, VanderStaay’s experience bears this out, since his actions may have contributed to a murder. As should be clear from this discussion, the issue of harm can arise in different ways, and it can take diverse forms and vary considerably in seriousness. Furthermore, there can be debate about what is and is not harmful, who is responsible for any harm, and whether causing harm may sometimes be acceptable. However, it must be reiterated that most ethnographic studies do not involve threats of severe harm, compared with some other kinds of research – such as many medical trials – or with the risks that people encounter in their ordinary lives. What this indicates is that a sense of proportion, as well as awareness of the complexities involved, must be exercised in thinking about the issue.
Autonomy and Informed Consent

The second ethical principle I will discuss – autonomy – is also complex, and the priority given to it is, arguably, more culturally variable: it is sometimes taken to be a distinctively Western value. A commitment to protect people’s exercise of autonomy is the main principle underpinning the requirement of informed consent that is frequently regarded as essential in social research. The argument is that people should have the right not to be involved in research, that they should opt into it, and should be able to opt out at any point they wish.

It must be noted that the requirement of informed consent was initially developed in the context of medical treatment and in relation to medical research involving interventions that could affect people’s health. Here there is a close relationship with the issue of harm: the key purpose of informed consent was for people to be aware of the risks involved, alongside potential benefits, and to make a judgment about these as well as about the wider benefits of the research, so as to decide whether or not to participate. In the case of ethnography, and many other kinds of social research, as noted earlier, there is no “treatment” involved. Nevertheless, as we have seen,
participation in it can affect people’s lives, even if usually much less dramatically. However, respecting autonomy is often also treated as required in itself, irrespective of whether harm is likely.

In ethnography, the issue of informed consent arises, first of all, in negotiating entry to settings. Very often this has to be done via one or more gatekeepers, who may or may not themselves be participants in the setting to which access is being negotiated and may or may not legitimately be taken to speak on behalf of participants. Of course, in some cases ethnographers can avoid negotiating initial access through carrying out research in public settings or by covert investigation, but both of these strategies raise questions about what respect for others’ autonomy entails. In the case of public settings, it may be argued that entry is open to the ethnographer just as it is for anyone else, though it may nevertheless be found necessary on prudential grounds to inform people that research is taking place or perhaps even to obtain their consent to be researched. At the same time, there can be practical constraints on providing information and securing consent in public settings: the number of people involved may be large and subject to continual change, and/or they may be involved in pressing or engrossing activities and not want to be disturbed. Here, as elsewhere, the autonomy of the researcher must be weighed against that of participants.

In the case of covert research, by definition, participants (or most of them) are not informed or given the option to consent or to refuse consent. Studies of this kind have generated considerable controversy, in large part because they breach most interpretations of the principle of autonomy. But they have also been defended, for example on the grounds that this will sometimes be the only way to secure important knowledge.
It is also often pointed out that lack of informed consent and deception are common in everyday life and especially in the activities of some professions, including journalism (Calvey 2017).

Even where access is openly negotiated by ethnographers, issues arise about what information should be given and what scope there is for refusal of consent. Indeed, securing access from a gatekeeper may make it difficult for the ethnographer to obtain informed consent from participants. Very often, ethnographic research is carried out in communities or organizations with hierarchical power structures, and, in order to gain access, ethnographers must negotiate with those who are in power. Of course, they can still seek to inform participants and allow them to opt out of the research, but there can be severe constraints on doing this, not least that it could be interpreted as challenging the authority of the gatekeeper and thereby threaten continued access. Furthermore, if a gatekeeper has given the go-ahead for the research, participants may not feel they are free to opt out of it: they could fear sanctions from the powerholder, or from others, if they try to do so (see, for instance, Alcadipani and Hodgson 2009). We should note, though, that this is an aspect of a more general problem about what constitutes “free consent,” and whether it can ever be realized (Hammersley and Traianou 2012:ch4).

There are also questions to be asked about what information gatekeepers and participants should, and can, be given. As already noted, since ethnography involves a flexible research design, the researcher will not usually know initially, in detail or with certainty, exactly what the research will involve. It could also be argued that
seeking to ensure fully informed and free consent right at the start is inadvisable. This is because the fact that the research will involve the ethnographer participating in the setting means that people will often be primarily concerned with her or his personal characteristics, especially trustworthiness, so that they need to be given time to get to know the researcher, rather than being forced to decide whether or not to be involved at the beginning (Murphy and Dingwall 2007:5). We should also note that ethnographers seek to shape the impressions of gatekeepers and participants in negotiating access and field relationships. And, from an ethical point of view, this indicates a significant tension at the heart of ethnographic research: much of the ethnographer’s effort goes into building rapport with participants with a view to minimizing reactivity – the impact of being researched on their behavior. In effect, this amounts to encouraging participants to forget that they are being researched. Duncombe and Jessop (2002:111) have made a similar point specifically in relation to interviews: If interviewees are persuaded to participate in the interview by the researcher’s show of empathy and the rapport achieved in conversation, how far can they be said to have given their ‘informed consent’ to make the disclosures that emerge during the interview?
So, while the principle of autonomy is certainly a consideration that ethnographers take into account in their work, doing this is by no means straightforward. And this principle, like others, can rarely, if ever, be fully realized. At the same time, it must be remembered that there are questions around the achievement of informed consent in all kinds of social research, as well as in other sorts of social practice, not just in ethnography.
Privacy

Protecting people’s privacy is another central theme in research ethics, and one that is of particular relevance to ethnography: this kind of research is often concerned with finding out what really goes on in some setting, what people really believe, as against the public accounts they give about their behavior and beliefs. Given this, it could seem that the invasion of privacy is built into ethnography – it has sometimes been caricatured as “making public people’s private lives.” At the same time, a concern with the protection of privacy is evident in ethnographers’ anonymizing of people and places in research reports. Of course, these efforts are by no means always successful, particularly where visual or online data are involved. Furthermore, even if not identifiable by outsiders, participants will often be able to identify one another in research reports. More fundamentally, the desirability of anonymizing places and even people has been questioned. Not only do some participants want their own names included in research reports, but anonymization, and the changing of details that is often required to maintain anonymity, can make it difficult for readers to assess the
evidence offered (for discussion of these issues, see Hammersley and Traianou 2012:126–31 and Jerolmack and Murphy 2017).

Ethnographers, like other social researchers, also aim to keep data confidential, not disclosing who told them what, so as to preserve privacy, though, in addition, it may protect informants from potential harm. However, keeping data confidential is not always easy, as Alcadipani and Hodgson (2009:136) illustrate from research on a media organization, when the Production Director (who had played a major role in facilitating the research) asked for inside information acquired by the researcher. In response, the researcher underlined his commitment to confidentiality and the need to follow strict research ethics guidelines, but the Production Director’s response was: “Come on, mate. Life is about trade-offs. Your research has to be good for all of us.” In this case, as in others, the ethnographer was caught up in complex relationships that could only be negotiated with difficulty and in ways that may be felt to be not entirely satisfactory in ethical terms, not least because they relate to conflicting ethical principles.

The problem of maintaining confidentiality can become particularly severe when informants reveal information in confidence that indicates serious lawbreaking. Tomkinson (2015:40) illustrates this from her research on immigration tribunals, when she discovered that an applicant had deceived the tribunal:

Within ten minutes into our meeting, Aadil asked me in his broken English, ‘How do you think my performance at the hearing?’ I was surprised, and did not really understand his question. He continued, ‘I am not homosexual, my wife is in Pakistan and my children,’ and he showed me their photos. He was laughing and kept saying ‘performance, good performance.’ I laughed with him, without really knowing what else to do. I was taken aback, considering how truthful and genuine his ‘performance’ seemed at the hearing.
He explained to me the whole process of finding an agent who helped him with the narrative, matched him with another male claimant as if they were a couple, and arranged their passports, visas and flights to Canada.
Protecting privacy and maintaining confidentiality are important ethical considerations, but what they require in particular cases, and how conflicts with other principles should be handled, cannot be determined in the abstract; this must take into account the particular circumstances involved.
Reciprocity

The principle of reciprocity requires that there be a reasonable balance as regards the costs and benefits deriving from the research for participants and for researchers. It is sometimes claimed that ethnographic (and other) research involves the exploitation of those studied: that people supply information which is then used by the researcher, while those studied get little or nothing in return. Furthermore, some commentators suggest that since, typically, ethnographers study those who are less powerful than themselves, they are able to establish a research bargain that advantages them and disadvantages those they study. This is a problem that can even arise in those
situations where the researcher has an intellectual and emotional commitment to the people studied and seeks to establish a nonhierarchical relationship with them (Finch 1984; Ribbens 1989; Stacey 1991; Tang 2002; Oakley 2016).

The concepts of reciprocity and exploitation imply a comparison between what is given and what is received, taking into account what each side contributes to the research. Of course, there are often benefits as well as costs for participants in ethnographic research, but neither is easy to assess, nor is the balance between them. As a result, it is difficult to decide when reciprocity has been achieved or when exploitation has occurred, except in extreme cases, and there is scope for substantial disagreement. Furthermore, what can matter as much as whether reciprocity has actually been achieved is whether participants believe that they are being exploited.

The argument about the exploitative potential of ethnographic research has led commentators to make a variety of recommendations. These include that researchers should give something back, in the way of services or payment; that participants ought to be empowered by becoming part of the research process; or that research should be directed toward studying the powerful and not the powerless.

It has been common for ethnographers to provide minor services for, and/or to give gifts to, at least some participants, or even to pay them. However, there is much scope for disagreement about what is appropriate here. There are also questions about the effects of this on the quality of the data. Howarth (2002:25) found that, despite what she assumed was prior agreement, informants reacted angrily to the payment she offered, on the grounds that it was insufficient and was therefore racially discriminatory.
In her study of impoverished Qur’anic students, Hoechner (2018:3) raised the question of whether research across deep socioeconomic differences can be justified, on the grounds that offering payment can do “no more than reaffirm the existing inequality” (see Bleek 1979:200–201). Meanwhile, VanderStaay (2005:388) provides an example of a very different view. The mother of the young man he was studying, Clay, asked whether he knew of a church or charity that could help her with her water bill: I said I would try and did, phoning a local minister and a few agencies. Unable to find anyone to help with the bill, I paid it myself but told Serena a church had done so. I mentioned this to ‘Daniel,’ Clay’s restitution officer. A newcomer to the youth court, Daniel had expressed interest in my research and had spoken to me at length about his hopes for his new position and his own work with Clay. ‘Do you know you did that family a great disservice?’ Daniel began. ‘You merely enabled them to continue to shuck their responsibility.’ Daniel, one of the court’s only white employees, held to a strict 12-step philosophy and said that I was codependent with the family. He had been a drug addict himself, he explained, and knew from experience that Clay and his mother would not change until they hit rock bottom, a process my assistance had postponed. It was because of people like me, he said, that there was a ghetto in the first place.
While not accepting this diagnosis, VanderStaay (2005:402) nevertheless remarks that: “my overarching fear of exploiting my subjects proved counterproductive to the work of treating them as equals.”
It is also worth noting that ethnographers may sometimes face requests or demands from participants that could lead them toward behavior that is unethical or even illegal. Mac Giollabhuí et al. (2016:641) provide an example from a study of undercover policing:

One of the most common dangers faced by surveillance teams is a serious collision with another vehicle. The potential for this hazard is constant: surveillance routinely involves driving at high speed and often involves dangerous manoeuvering. The following excerpt describes an occasion when we shared the risk: The worst has happened: the team has lost the subject vehicle. The cars fan out to try to recover it. We drive down a narrow road on a built-up residential estate at 60 mph, making for a stretch of open road that the suspect might have taken. In the gap between two rows of parked cars, we come within a hair’s breadth of hitting another motorist. I see the look of fear and disbelief on her face as our vehicle charges the gap. As we pass through the eye of the needle, David lets out a roar: ‘Fuck mate! I thought I was going to have to ask you to lie there!’ [. . .] These moments of complicity with our research participants were not uncommon, especially as time wore on.
What is appropriate reciprocity, and what counts as exploitation, are contestable matters, then. Nevertheless, this is an ethical and practical issue that ethnographers cannot avoid.
Conclusion

In this chapter we have examined a range of ethical issues that arise in ethnographic research and explored some of the complexities that can be involved. It should be clear that while ethnographers must bear in mind the principles of minimizing harm, respecting autonomy, preserving privacy, and providing reciprocity, there are no prescriptions that will prevent ethical problems from arising, nor recipes for immediately resolving them. Instead, the best that can be done is to make appropriate judgments in the circumstances, taking into account the full range of relevant considerations: not just key ethical principles but also what is feasible, what is required if the study is to be adequate in methodological terms, and so on. This makes the application of ethical codes to ethnography, and the operation of regulatory regimes, very problematic; though the same is also true of other kinds of social research (Hammersley 2009).
References

Albro R, Marcus G, McNamara L, Schoch-Spana M (eds) (2012) SecurityScape: ethics, practice, and professional identity. Left Coast Press, Walnut Creek
Alcadipani R, Hodgson D (2009) By any means necessary? Ethnographic access, ethics and the critical researcher. Tamara J 7(4):127–146
Barnes J (1979) Who should know what? Social science, privacy and ethics. Penguin, Harmondsworth
Becker HS (1964) Problems in the publication of field studies. In: Vidich et al (eds) Reflections on community studies. Wiley, New York
Bell E, Bronfenbrenner U (1959) “Freedom and responsibility in research”: a comment. Hum Organ 18(2):49–50
Bleek W (1979) Envy and inequality in fieldwork. An example from Ghana. Hum Organ 38(2):200–205
Bok S (1978) Lying: moral choice in public and private life. Harvester Press, Hassocks
Borofsky R (2005) Yanomami: the fierce controversy and what we can learn from it. University of California Press, Berkeley
Calvey D (2017) Covert research. Sage, London
Cannon S (1992) Reflections on fieldwork in stressful situations. In: Burgess RG (ed) Studies in qualitative methodology, vol 3: learning about fieldwork. JAI, Greenwich
Chagnon N (2013) Noble savages: my life among two dangerous tribes – the Yanomamö and the anthropologists. Simon and Schuster, New York
Chege N (2015) “What’s in it for me?” Negotiations of asymmetries, concerns and interests between the researcher and research subjects. Ethnography 16(4):463–481
Ditton J (1977) Part time crime: an ethnography of fiddling and pilferage. Macmillan, London
Dreger A (2011) Darkness’s descent on the American Anthropological Association: a cautionary tale. Hum Nat 22:225–246. https://doi.org/10.1007/s12110-011-9103-y
Duncombe J, Jessop J (2002) Doing rapport, and the ethics of “faking friendship”. In: Miller T, Birch M, Mauthner M et al (eds) Ethics in qualitative research. Sage, London, pp 108–121
Durão S (2017) Detention: police discretion revisited (Portugal). In: Fassin D (ed) Writing the world of policing: the difference ethnography makes. University of Chicago Press, Chicago
Durkin T (2015) Symbolic interaction, public policy, the really really other, and human terrain system training. Symb Interact 38(2):298–304
Erikson K (1967) A comment on disguised observation in sociology. Soc Probl 14(4):366–373
Festinger L, Riecken H, Schachter S (1956) When prophecy fails. University of Minnesota Press, Minneapolis
Finch J (1984) “It’s great to have someone to talk to”: the ethics and politics of interviewing women. In: Bell C, Roberts H (eds) Social researching: policies, problems and practice. Routledge & Kegan Paul, London
Fluehr-Lobban C (2003) Darkness in El Dorado: research ethics, then and now. In: Fluehr-Lobban C (ed) Ethics and the profession of anthropology, 2nd edn. Altamira, Walnut Creek
Geertz C (2001) Life among the anthros. The New York Review of Books, February 8
Goffman A (2014) On the run: fugitive life in an American city. University of Chicago Press, Chicago
González R (2018) Beyond the human terrain system: a brief critical history (and a look ahead). Contemp Soc Sci. https://doi.org/10.1080/21582041.2018.1457171
Hammersley M (2009) Against the ethicists: on the evils of ethical regulation. Int J Soc Res Methodol 12(3):211–225
Hammersley M, Traianou A (2012) Ethics in qualitative research. Sage, London
Ho K (2009) Liquidated: an ethnography of Wall Street. Duke University Press, Durham
Hoechner H (2018) Accomplice, patron, go-between? A role to play with poor migrant Qur’anic students in northern Nigeria. Qual Res 18(3):307–321
Howarth C (2002) Using the theory of social representations to explore difference in the research relationship. Qual Res 2(1):21–34
Jerolmack C, Murphy A (2017) The ethical dilemmas and social scientific trade-offs of masking in ethnography. Sociol Methods Res. https://doi.org/10.1177/0049124117701483
Lewis-Kraus G (2016) The trials of Alice Goffman. New York Times, January 12. http://www.nytimes.com/2016/01/17/magazine/the-trials-of-alice-goffman.html?r50
Lubet S (2018) Interrogating ethnography: why evidence matters. Oxford University Press, New York
Mac Giollabhuí S, Goold B, Loftus B (2016) Watching the watchers: conducting ethnographic research on covert police investigation in the United Kingdom. Qual Res 16(6):630–645
McFate M, Lawrence J (2015) Social science goes to war: the human terrain system in Iraq and Afghanistan. Oxford University Press, New York
McKenzie L (2015) Getting by: estates, class and culture in austerity Britain. Policy Press, Bristol
Murphy E, Dingwall R (2001) The ethics of ethnography. In: Atkinson P, Coffey A, Delamont S, Lofland J, Lofland L (eds) Handbook of ethnography. Sage, London
Murphy E, Dingwall R (2007) Informed consent, anticipatory regulation and ethnographic practice. Soc Sci Med 65:2223–2234
Oakley A (2016) Interviewing women again: power, time and the gift. Sociology 50(1):195–213
Oeye C, Bjelland A, Skorpen A (2007) Doing participant observation in a psychiatric hospital – research ethics resumed. Soc Sci Med 65:2296–2306
Pels P (2005) “Where there aren’t no ten commandments”: redefining ethics during the Darkness in El Dorado scandal. In: Meskell L, Pels P (eds) Embedding ethics. Berg, Oxford
Ribbens J (1989) Interviewing – an ‘unnatural situation’? Women’s Stud Int Forum 12(6):579–592
Riecken H (1956) The unidentified observer. Am J Sociol 62(2):210–212
Seymour J (2001) Critical moments – death and dying in intensive care. Open University Press, Buckingham
Stacey J (1991) Can there be a feminist ethnography? In: Gluck S, Patai D (eds) Women’s words. Routledge, New York
Tang N (2002) Interviewer and interviewee relationships between women. Sociology 36(3):703–721
Tierney P (2001) Darkness in El Dorado. Norton, New York
Tomkinson S (2015) Doing fieldwork on state organizations in democratic settings: ethical issues of research in refugee decision making [46 paragraphs]. Forum Qual Sozialforschung/Forum: Qual Soc Res 16(1):6. http://nbn-resolving.de/urn:nbn:de:0114-fqs150168
Van den Hoonaard W (2018) The vulnerability of vulnerability. In: Iphofen R, Tolich M (eds) The SAGE handbook of qualitative research ethics. Sage, London
VanderStaay SL (2005) One hundred dollars and a dead man: ethical decision making in ethnographic fieldwork. J Contemp Ethnogr 34(4):371–409
Vidich A, Bensman J (1958) Small town in mass society. Princeton University Press, Princeton. Revised edition, University of Illinois Press, Champaign, 2000
Vidich A, Bensman J (1964) The Springdale case: academic bureaucrats and sensitive townspeople. In: Vidich et al (eds) Reflections on community studies. Wiley, New York, pp 313–349
Vidich A, Bensman J, Stein M (eds) (1964) Reflections on community studies. Wiley, New York
Wakin E (ed) (2008) Anthropology goes to war: professional ethics and counterinsurgency in Thailand. University of Wisconsin Press, Madison
Whyte WF (1984) Learning from the field. Sage, Newbury Park
Whyte WF (1993) Street corner society, 4th edn. University of Chicago Press, Chicago. First published 1943
25 Experimental Design: Ethics, Integrity, and the Scientific Method

Jonathan Lewis
Contents

Introduction
A Brief Sketch of Experimental Design
The Ethical Dimension of Controlled Variation and Randomization
The Causal Presuppositions of Experimental Design
Reliability, Validity, and Problems of Causal Inference
Conclusions
References
Abstract
Experimental design is one aspect of a scientific method. A well-designed, properly conducted experiment aims to control variables in order to isolate and manipulate causal effects and thereby maximize internal validity, support causal inferences, and guarantee reliable results. Traditionally employed in the natural sciences, experimental design has become an important part of research in the social and behavioral sciences. Experimental methods are also endorsed as the most reliable guides to policy effectiveness. Through a discussion of some of the central concepts associated with experimental design, including controlled variation and randomization, this chapter will provide a summary of key ethical issues that tend to arise in experimental contexts. In addition, by exploring assumptions about the nature of causation and by analyzing features of causal relationships, systems, and inferences in social contexts, this chapter will summarize the ways in which experimental design can undermine the integrity of not only social and behavioral research but also policies implemented on the basis of such research.

J. Lewis (*)
Institute of Ethics, School of Theology, Philosophy and Music, Faculty of Humanities and Social Sciences, Dublin City University, Dublin, Ireland
e-mail: [email protected]

© Springer Nature Switzerland AG 2020
R. Iphofen (ed.), Handbook of Research Ethics and Scientific Integrity, https://doi.org/10.1007/978-3-030-16759-2_19
Keywords
Experimental design · Randomization · Controlled variation · Deception · Informed consent · Causal relationship · Causal inference · Reliability · Internal validity · External validity · Validity
Introduction

It is possible to distinguish levels of scientific discourse and associated practice (Guala 2009; Arabatzis 2014). The most specific level concerns the precepts of an experiment in a certain discipline. It includes the rules governing the correct use of apparatus and instruments in particular experiments. The next is made up of discourse about experimental design and the practices associated with, for example, laboratory experiments, quasi-experiments, and randomized controlled trials (“RCTs”). At a more abstract level, discussions concern the nature of science in general and, in particular, the nature of scientific theories and the concepts associated with theory appraisal. In recent decades, attention in both science and philosophy has increasingly been paid to discourse about the design and implementation of experimental setups. At the same time, there has been a marked increase in the use of experimental methods in traditionally nonexperimental fields.

With regard to the ethical dimension of experimental design, the social and behavioral sciences tend to operate within a framework that was devised primarily with a view to regulating biomedical research. As a result, the methodology literature on the ethics of social and behavioral research often addresses the objectification of research participants, potential harms, purported benefits, coercive and manipulative practices, and issues of privacy and consent. Generally speaking, such concerns do not apply to experimental research in the natural sciences. There will also be issues that arise in one discipline but not another. For example, unlike traditional biomedical experiments, social science research frequently facilitates interventions that have winners and losers, create risks for some and not for others, harm or benefit nonparticipants, and operate without the consent of all parties affected by them (Humphreys 2015).
Furthermore, not all the ethical issues associated with social and behavioral research result from the employment of experimental methods; many could just as readily occur in the context of nonexperimental studies. Nevertheless, experimental design raises specific ethical problems (though that is not to say that these problems are expressed equally in all branches of empirical research). Indeed, some consider experiments and ethics to be at odds with one another as a result of the tension between the vulnerability of research participants and the interests of pursuing valid and reliable science (Miller and Brody 2003). Others have claimed that experimental design undermines ethical relationships between researchers and participants. For example, experiments not only frequently fail to meet the standards of genuine informed consent, they often involve deception (Sieber 1982). As a result of various biases, experimental design can also impinge upon the integrity of research, specifically, the reliability and validity of research. Some sorts
of bias are not particular to experimental design: value biases can shape practices at different stages of any research, from the research questions posed to the way they are framed, from decisions about what data to collect to methods for gathering and processing that data, and from the inferences drawn to the reporting and dissemination of results (Douglas 2014). Financial and publication interests, for example, can justify the use of experimental designs in place of observational and more theory-laden approaches (Crasnow 2017). Research funders can impose their values in order to influence experimental design, data interpretation, and the dissemination of results (Wilholt 2009; Brown et al. 2017). Social scientists can act to make the social world more like their models, a “performativity” practice that requires social decision-making and political engagement (Risjord 2014). Value biases can also arise as a result of the tension between non-epistemic values and epistemic values, for example, when considering the possible social consequences of accepting an inference as evidenced when it is not or, conversely, rejecting a claim as invalid when it is, in fact, true. Both non-epistemic and epistemic values can play a part in shaping the reliability and validity of the research. There are, however, varieties of inferential bias, including selection bias (errors resulting in overrepresentation of one or more factors in the comparison groups) and failures to take into account confounders (omitted variables) (Crasnow 2017), which can arise because experimental methods have been employed. Understanding how these sorts of bias are particular to experimental design requires an engagement with specific theories of causation.
Following a brief introduction to the principal aims and methods of experimental design (section “A Brief Sketch of Experimental Design”), the second section will summarize the key ethical issues that arise when experimenters seek to control various variables and randomly assign participants to treatment and control groups (section “The Ethical Dimension of Controlled Variation and Randomization”). The following section will articulate the causal presuppositions of experimental design (section “The Causal Presuppositions of Experimental Design”). Subsequently, the section “Reliability, Validity, and Problems of Causal Inference” will summarize the main problems associated with causal inference in social contexts and thereby illustrate some of the ways in which experimental design can affect the reliability and validity of social and behavioral research as well as any policies implemented on the basis of such research.
A Brief Sketch of Experimental Design

On standard accounts, laboratory experiments aim to isolate purported causes and manipulate causal factors in order to test scientific theories, identify causal relationships, or make particular causal inferences. The laboratory experiment is considered to be an ideal, one that other experimental methods aim to approximate. Indeed, arguments for the value of experimental design tend to invoke the standards of ideal experiments. However, when it comes to investigating social phenomena, laboratory
experiments either cannot be created or they isolate systems to such a degree that inferences within an experiment cannot be extrapolated to real social situations (Risjord 2014; Crasnow 2017). In order to mimic the logic of laboratory experiments in social environments, two experimental methods tend to be employed. The first is quasi-experimentation, whereby two groups are studied in (more or less) “real-world” situations – one group receives the experimental intervention (the “treatment” group), and the other does not (the “control” group). By creating treatment and control groups, the aim is to ensure that circumstances are controlled so that the only significant difference between the two groups is some causal factor of interest. In other words, the aim is to rule out all explanations for any observed difference in outcome among those in the treatment group apart from the explanation concerning the average effect of the treatment. By controlling for the other variables in this way, the manipulated variable (X) is considered to be independent of any other possible causes. The problem is that an intervention could produce a correlation between two variables being investigated (X and Y) without the independent variable (X) causing the change in the other variable (Y). For example, an intervention could have modified a third variable (Z) in addition to the independent variable (X), thereby resulting in a spurious correlation. In order to avoid this kind of correlation, a second experimental method involves the random assignment of participants to treatment and control groups. Randomization is meant to ensure that there are no systematic differences between the two groups. The idea is that any potential causes that can affect the outcome of the experiment are evenly distributed. 
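The spurious-correlation scenario just described can be illustrated with a small simulation. This is a hypothetical sketch, not part of the chapter's argument: the variable names (X, Y, Z) follow the text, but the parameter values are illustrative assumptions. Here Z drives both X and Y, X has no causal influence on Y, yet X and Y end up strongly correlated.

```python
import random

random.seed(1)

# Z is the third variable: it drives both X and Y.
# X itself has no causal influence on Y.
n = 10_000
z = [random.gauss(0, 1) for _ in range(n)]
x = [zi + random.gauss(0, 1) for zi in z]      # X moves with Z
y = [2 * zi + random.gauss(0, 1) for zi in z]  # Y is caused by Z alone

def corr(a, b):
    """Pearson correlation coefficient of two equal-length sequences."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    va = sum((ai - ma) ** 2 for ai in a)
    vb = sum((bi - mb) ** 2 for bi in b)
    return cov / (va * vb) ** 0.5

print(round(corr(x, y), 2))  # strongly positive despite no X -> Y link
```

An observer who saw only X and Y would find a correlation of roughly 0.6 and might wrongly infer that manipulating X changes Y.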
Proponents claim that randomization controls for known, unknown, and unconsidered confounders; it controls for selection bias; it makes the variable on which we are intervening independent of all other variables and thereby allows for conclusions about whether a specific treatment caused the significant difference in outcome between two groups (Conner 1982; Urbach 1985; Papineau 1994; Worrall 2002, 2007; Crasnow 2017). RCTs, which implement randomization in a thoroughgoing way, are considered to be as free from bias as any trial could be outside of the laboratory. Consequently, as Urbach (1985) and Worrall (2002, 2007) suggest, proponents – especially in the biomedical profession – deem RCTs to be necessary for scientific validity or, at the very least, to carry special epistemic weight. Furthermore, and despite the fact that there is little discussion of what justifies these claims, grading schemes for policy effectiveness regularly state that RCTs provide the most reliable scientific evidence, ahead of quasi-experimental research, case studies, and interpretive approaches (Cartwright 2012).
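The logic attributed to randomization above can also be sketched in simulation. This is a hypothetical illustration (the confounder, effect size, and sample size are invented for the example): because assignment is decided by a coin flip, the treatment is independent of the confounding covariate, and the simple difference in group means recovers the true average effect.

```python
import random

random.seed(2)

# Illustrative setup: covariate Z raises the outcome; the treatment
# adds a true effect of 1.0. Random assignment makes treatment status
# independent of Z, so no adjustment for Z is needed.
n = 20_000
true_effect = 1.0

units = [random.gauss(0, 1) for _ in range(n)]          # confounder Z per unit
assignment = [random.random() < 0.5 for _ in range(n)]  # coin-flip randomization

outcomes = [
    z + (true_effect if treated else 0.0) + random.gauss(0, 1)
    for z, treated in zip(units, assignment)
]

treated_mean = sum(y for y, t in zip(outcomes, assignment) if t) / sum(assignment)
control_mean = sum(y for y, t in zip(outcomes, assignment) if not t) / (n - sum(assignment))
estimate = treated_mean - control_mean
print(round(estimate, 2))  # close to the true effect of 1.0
```

As the later sections argue, this tidy behavior depends on large samples and on assumptions (noninterference, unit homogeneity) that are far harder to secure in real social settings than in a simulation.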
The Ethical Dimension of Controlled Variation and Randomization

The logic of controlled variation is considered to be a hallmark of all forms of experimentation (Guala 2009). In order to make genuine inferences within an experiment, a researcher needs to control the variable that is being manipulated and those that are being fixed (Sobel 1996; Goldthorpe 2001; Gangl 2010; Guala
2012). It follows that groups should be situated in conditions whereby extraneous factors are controlled such that the variable of interest can be manipulated in order to observe changes in a second variable. In principle, the employment of a control group facilitates the controlled approach to variable manipulation. However, in order to achieve a sufficient amount of experimental control, researchers in the social and behavioral sciences often employ methods of deception. Even when deception is not used, participants tend not to be fully informed about the research. In addition, for social interventions in particular, researchers regularly do not seek prior consent from their participants. Indeed, informed consent is considered to be impossible in large-scale social experiments (Baele 2013).

A research participant is deceived when any false information is deliberately given or information is deliberately withheld in order to mislead them into believing that something is true when, in fact, it is not (Geller 1982; Sieber 1992; Bordens and Abbott 2013). Consequently, deception is distinct from the usual experimental practice of not fully informing research participants in advance (Hegtvedt 2014). Deception can vary from false information about the main purpose of the study or certain experimental procedures to the presentation of false feedback. It is claimed that deception can undermine a participant’s self-confidence or enhance self-doubt, anxiety, and emotional distress and, in extreme cases, can result in the objectification or dehumanization of research participants (Kelman 1982). Even if a certain deception is considered to be fairly innocuous, a standard claim in biomedical ethics is that researchers who either deliberately present false information or deliberately conceal information violate the ethical principle of “respect for autonomy” (Gillon 1994; Beauchamp and Childress 2013).
Broadly speaking, by employing deceptive practices, researchers undermine not only a participant’s ability to make genuine or authentic commitments and decisions but a participant’s responsibility for those commitments and decisions. Particularly in experimental research, deceptive practices are methodologically and epistemologically justified. For example, without deception, participant behavior may no longer be “natural” (Clarke 1999). Such reactive behavior can undermine the conditions needed for the successful implementation of controlled variation and thereby compromise the reliability of the experiment and the validity of the data. Furthermore, even if a participant is forewarned of the possibility of deception without an account of what it will entail, they may try to determine the nature of the deception and adapt their behavior accordingly (Geller 1982).

In order to mitigate these methodological and epistemological concerns, some recommend debriefing sessions for both pretest and study participants. During these sessions, participants can be informed about the deceptive practices employed with the aim of building trust between researchers and participants, demonstrating researcher respect for participants, restoring participants’ positive well-being, and removing any self-doubt, anxiety, or emotional distress caused by the experiment (Holmes 1976). Although deception is considered to be distinct from the practice of not fully informing participants about the research, it is claimed that a deceived participant does not fully understand the nature of the research and, therefore, cannot be fully informed (Clarke 1999). The autonomy of participants can be given a measure of
respect if their consent is sought to use the results of deceptive research after debriefing has taken place. However, this form of post-experimental consent is not an adequate substitute for genuine informed consent. After all, a research participant might not have chosen to participate had they been properly informed about the nature of the experiment and the deceptive practices involved. In such circumstances, it can be argued that not only has informed consent not been given, but a researcher has failed to respect the participant’s individual autonomy. In a laboratory, participants are typically required to give their consent beforehand (though the level of information that is provided can vary from case to case). By contrast, in social experiments, participants tend neither to be informed nor consenting. The reason is that an intervention needs to be perceived by the research participants as naturally occurring (Humphreys 2015). The argument is the same as the one that is used to justify deception; namely, ignorance prevents reactive behavior – Hawthorne effects, John Henry effects, and “looping effects” – that could threaten the scientific outcome by introducing bias (Hacking 1999; Baele 2013). In general, alternative forms of acceptable consent, on the one hand, are challenged on the basis of the methodological and epistemological aims of experimental design. On the other, they are criticized because they do not adequately fulfill the demands of genuine informed consent. This adds weight to the claim that experiments and ethics are fundamentally at odds with one another (Sieber 1982; Miller and Brody 2003). One of the most frequent ethical concerns raised in connection with controlled variation is the denial of potentially beneficial services for eligible participants (Conner 1982). 
Moreover, in large-scale social interventions, control-group members (as well as those that do not meet criteria for participation) may actually be worse off as a result of the experiment. Of course, depending on the intervention, control participants may also benefit by not receiving potentially harmful treatments. However, according to one argument, if an experimental intervention is expected to be more beneficial than current social programs, then all individuals of an eligible group should have an equal right to the benefits of the intervention (Conner 1982). Indeed, if a treatment is readily available, then it may be offered to members of the control group after the experiment has concluded. However, if the treatment is beneficial, and if the claims to the treatment are significantly greater for the control group, then, by not ensuring a certain level of well-being for the worse-off group before maximizing the well-being of the treatment group, experimental interventions can violate the prioritarian moral principle (Baele 2013). Furthermore, social interventions tend to operate with relatively scarce resources such that treatments cannot be given to all who stand to benefit from them. It is claimed that the method of randomization, as well as controlling for confounders and selection bias, ensures that all possible targets of a social experiment have an equal chance of being selected to receive the benefits of the intervention (Conner 1982). If a benefit or harm cannot be distributed equally, then randomization – if properly implemented – appears to be a way of guaranteeing that participants are assigned to treatment and control groups in an equitable manner (Lilford and Jackson 1995). In other words, it seems that it can guarantee some sort of fairness.
Randomization, however, is not ethically neutral. Although primarily epistemologically motivated, it carries extraordinary ethical weight (Worrall 2007). By randomly assigning participants to treatments and controls, experiments advantage some people and disadvantage others even though proponents of randomization tend to consider them to be equal (Baele 2013). Conner (1982) argues that such inequities are amplified in laboratory experiments because the participants of the control group are unlikely to receive the standard services available to them in the social world. By contrast, when it comes to RCTs in social settings, control group members are usually not prevented from seeking other available services. Secondly, randomization is not a guarantee of fairness. In order for a potential participant to be treated fairly in a context where a potentially beneficial treatment cannot be distributed equally, their claim to the treatment needs to be taken into account. This is not a right to the treatment but merely a duty to acknowledge the participant’s claim to the treatment. As Broome (1984) observes, randomization is a fair option only when the participants potentially affected by an experiment have (more or less) equal claims to the treatment. Consequently, it is not enough for a researcher to advocate randomization in a specific context because they assume that all potential participants have equal claims to the treatment; a positive argument is required to show that the claims are actually equal (or roughly equal). Randomization, therefore, commits researchers to the judgment that the members of a potential target group have (more or less) equal claims to the treatment. If a beneficial treatment cannot be distributed equally among both treatment and control groups and if certain participants have greater claims to the treatment, then it seems as if the fairest thing to do in that situation would be to give each person a chance proportional to their claim. 
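Broome's proposal — a chance proportional to each claim rather than an equal chance — can be sketched as a weighted lottery. This is a hypothetical illustration; the candidate names and claim weights are invented, and the drawing procedure is just one simple way such a lottery could be implemented.

```python
import random

random.seed(4)

# Illustrative claims: candidate A's claim to the scarce treatment is
# three times as strong as each of the others'.
candidates = {"A": 3.0, "B": 1.0, "C": 1.0, "D": 1.0}

def weighted_lottery(claims, k):
    """Draw k winners without replacement, with probability
    proportional to each remaining candidate's claim."""
    pool = dict(claims)
    winners = []
    for _ in range(k):
        names = list(pool)
        weights = [pool[name] for name in names]
        pick = random.choices(names, weights=weights, k=1)[0]
        winners.append(pick)
        del pool[pick]
    return winners

# Over many runs, A should win about 3.0 / 6.0 = 50% of the time,
# versus 25% under an equal-chance lottery.
wins = sum(weighted_lottery(candidates, 1)[0] == "A" for _ in range(10_000))
print(wins / 10_000)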
In order to justify the use of randomization in controlled trials, researchers in biomedical and public health contexts tend to argue for a necessary state of “clinical equipoise,” that is, a state of genuine uncertainty on the part of the expert community regarding the comparative therapeutic merits of each arm in an RCT (Freedman 1987). Some have claimed that there should be a state of “personal equipoise” whereby researchers should be indifferent to the therapeutic value of the experimental and control treatments (Alderson 1996). Others have suggested that both clinical and personal equipoise are necessary for a truly unbiased RCT (Cook and Sheets 2011). The assumption of a state of uncertainty has been identified as the central ethical principle for randomization in human experimentation (Lilford and Jackson 1995). Underlying equipoise is the norm that no patient should be randomized to a treatment known or thought by the expert community to be inferior to the established standard of care (Freedman et al. 1996). This follows from what is perceived to be a clinical researcher’s duty of care to their participants (Miller and Weijer 2006), a duty that places severe ethical restrictions on the use of placebo-controlled groups (Fried 1974; Freedman 1987; Miller and Weijer 2006).

It is debatable whether a state of clinical or personal equipoise can be achieved in practice. Indeed, even if a state of equipoise exists prior to the commencement of an RCT, events during a clinical trial, such as the emergence of unexpected adverse events or early signs of efficacy, can quickly disturb clinical and personal equipoise. Furthermore, when it comes to the deployment of experimental methods in
nonclinical settings, specifically, in social contexts, the state of equipoise is either indeterminate or unattainable (Oakley et al. 2003; Hammersley 2008). On the one hand, critics claim that the use of randomization is ethically impermissible because social interventions fail to meet the standards of equipoise in clinical settings. On the other, where a state of equipoise cannot be achieved or when equipoise is dramatically disturbed during the course of a study, randomization cannot escape the ethical questions regarding fairness addressed above. Furthermore, when circumstances make it impossible to distribute a treatment equally to both treatment and control groups, randomization (even when a state of equipoise has been achieved) will violate the duty of fairness if participants’ claims to the treatment are not taken into account. Although the principle of equipoise is adopted to justify randomization, this particular justification is deemed to be necessary when the ethics of research are closely aligned with the duty of care. As we have seen in the context of both controlled variation and randomization, the means to effect experimental design can lead to problems when researchers attempt to uphold that duty. It has been claimed that there is a fundamental conflict between pursuing valid and reliable science and ensuring that no participant is denied a beneficial treatment (Gifford 1986; Miller and Brody 2003). Furthermore, critics argue that this tension cannot be resolved due to a fundamental distinction between the ethics governing experimental research and the ethics of practice-based care (Levine 1979; Churchill 1980; Miller and Brody 2003). Indeed, once we distinguish research ethics from the ethics of care, equipoise can be considered to be no longer relevant to the justification of randomization (Veatch 2007).
The Causal Presuppositions of Experimental Design

An understanding of how experimental design can impinge upon the integrity of research requires a shift from mid-level discussions about experimental design to high-level discourse concerning the causal assumptions held by proponents of experimental design (Guala 2009; Cartwright 2014). From there, it is possible to identify and analyze the problems of causal inference that arise particularly in social contexts and that contribute to the overall validity and reliability of research as well as any policies implemented on the basis of such research.

What is important about experimental practice is not so much observational results and the products of experiments but the design and implementation of experimental setups that reveal or produce causal relationships in a reliable manner. In other words, one of the main aims of experimental design is to make reliable causal inferences about the effects of some particular causal relationship. In principle, according to the logic of controlled variation, this is done by controlling the variables that are being fixed and by isolating and manipulating the purported cause of the variation in the dependent variable. Consequently, causal relationships should be understood in terms of the relations of dependence between variables that remain invariant under interventions. This “interventionist” (also known as a
“manipulationist”) theory of causation entails that a fair test of a causal relationship is to apply an intervention on a variable (X) that will change another variable (Y) when, and only when, other causes, preventatives of the effect and other causal relationships involving the effect, are held fixed at some value (Woodward 2008; Cartwright 2014; Kaidesoja 2017). The interventionist view implies that an experiment, when properly implemented, enables researchers to make a clear distinction between spurious correlations and genuine causal relationships by ensuring that the manipulated variable is the only factor that determines the direction of causality.

According to the interventionist theory, two conditions need to be met in order to identify genuine causal relationships (Brady 2011). Firstly, there should be no “interference” across treatment and control groups. In other words, the two groups must be kept separated, isolated, and unable to communicate with each other. The non-interference condition assumes that each supposedly identical treatment really is identical. It also helps to ensure that treatment and control groups are as similar as possible except for the difference in treatment. Brady (2011) claims that if this condition fails, then it will be either difficult to make generalizations about an experiment or impossible to interpret the results. Secondly, treatment and control groups should be, on average, “identical” except for the existence of the putative cause. This “identity” or “unit homogeneity” condition assumes that a variable (X) is independent of any other characteristic that could influence the effect. The two conditions are obviously related; were a control group to interfere with a treatment group or vice versa, the identity condition would no longer hold. Such circumstances could generate (potentially innumerable) unforeseen causal factors that impact upon the causal relationship the researcher is attempting to isolate and manipulate.
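The interventionist test can be made concrete with a toy simulation. This is a hypothetical sketch (the causal structure and coefficients are invented for illustration): Z causes both X and Y, and X has a genuine but small effect on Y. The observed dependence of Y on X overstates the causal effect, whereas an intervention that sets X exogenously — severing the Z-to-X link while leaving the rest of the system intact — recovers only the genuine X-to-Y contribution.

```python
import random

random.seed(3)
n = 50_000

def draw(x_fixed=None):
    """One unit from a toy causal system: Z -> X, Z -> Y, and X -> Y
    with a true coefficient of 0.5. Passing x_fixed mimics an
    intervention do(X = x_fixed), which severs the Z -> X arrow."""
    z = random.gauss(0, 1)
    x = z if x_fixed is None else x_fixed
    y = 2 * z + 0.5 * x + random.gauss(0, 1)
    return x, y

# Observed dependence: the regression slope of Y on X picks up the
# Z-mediated association as well as the genuine effect.
xs, ys = zip(*(draw() for _ in range(n)))
mx, my = sum(xs) / n, sum(ys) / n
obs_slope = sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / sum(
    (a - mx) ** 2 for a in xs
)

# Interventional contrast: do(X=1) versus do(X=0) isolates the true effect.
do_effect = (
    sum(draw(1.0)[1] for _ in range(n)) - sum(draw(0.0)[1] for _ in range(n))
) / n

print(round(obs_slope, 1), round(do_effect, 2))
```

The observed slope comes out near 2.5 while the interventional contrast comes out near the true 0.5 — the gap is exactly what the fixing of other causes is meant to eliminate.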
Consequences of interference include compensatory rivalry between treatment and control groups, resentful demoralization on the part of control groups, attempts by researchers to overcome inequalities between groups, and the diffusion of treatment among members of the control group (Cook and Campbell 1986). Although unit identity is commonly assumed in laboratory work, it cannot be taken for granted in quasi-experiments and RCTs in social settings (Brady 2011; Risjord 2014). Furthermore, it is impossible to obtain sufficient knowledge about individuals to ensure that the two groups are, on average, identical. If unit homogeneity cannot be guaranteed, then it is a possibility that treatment and control groups are substantially different. It follows that researchers will be required to identify confounders in order to rule out spurious correlations. As we have seen, in order to eliminate the need to identify confounders yet still ensure that an intervening variable (X) is not itself objectively dependent on any other causal factor that could influence the effect on variable (Y), randomization of the study population into experimental and control groups is deemed to be necessary (Papineau 1994; Pearl 2000). There are doubts, however, that randomization can guarantee unit homogeneity to control for confounders and thereby justify causal inferences. While the theoretical underpinnings of randomization support the idea that confounders can be eliminated, it would be a miracle if, in practice, a single random allocation resulted in balanced groups given the innumerable unknown (possible) causes involved (Worrall 2007). Indeed, in any particular case, randomization – even
when properly implemented and even when the two groups seem clearly comparable with respect to all known factors – might result in an unknown or unconsidered distinction between treatment and control groups that plays a significant role in the effect being measured. It is suggested that repeated trials diminish the problem (Binmore 1999; Brady 2011; Crasnow 2017). However, for many social interventions, limited resources make it impossible to facilitate the level of repetition needed to overcome unit imbalance. In addition, repeated draws come with their own risks, primarily an increase in the likelihood of reactive behavior and interference. Consequently, an experimental design with repetition does not automatically guarantee or improve the validity of an experiment (Guala 2009). Even if randomization cannot, in practice, control for unknown causes, proponents might still claim that it is necessary for ensuring that two groups are comparable with respect to all known factors. However, such balancing can be achieved in a number of ways: by deliberately matching treatment and control groups, by adjusting data ex post, or by checking for “baseline imbalances” (as in clinical trials) and randomizing again when these imbalances are discovered. The point is that randomization is not epistemologically superior to any other method that can be deployed to ensure that there is no positive reason to think that treatment and control groups are unbalanced (Worrall 2007). Of course, proponents might yet claim that randomization is necessary to avoid selection bias. However, Worrall (2007) has shown that, from a theoretical point of view, it is the blinding process, which is effected by randomization, that is ultimately responsible for controlling for selection bias. Again, randomization is not the only means to facilitate this process. Furthermore, it is not clear that, in practice, randomization can fully eliminate selection problems (Crasnow 2017). 
For all these reasons, it is argued that randomization is neither necessary nor sufficient for guaranteeing genuine causal inferences (Worrall 2002, 2007; Guala 2009).
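Worrall's point that a single random allocation need not balance the groups on unknown causes, even though allocations balance out on average, can be made concrete with a small simulation. The following Python sketch is purely illustrative (invented numbers; NumPy assumed) and is not drawn from the works cited above:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical study population of 40 units with an *unmeasured*
# prognostic factor -- a stand-in for the "innumerable unknown causes"
# that no finite list of baseline checks can cover.
n = 40
hidden = rng.normal(0, 1, n)

def allocate():
    """One simple random allocation into equal treatment/control halves."""
    idx = rng.permutation(n)
    return idx[: n // 2], idx[n // 2 :]

def imbalance(treat, ctrl):
    """Absolute difference in the hidden factor's group means."""
    return abs(hidden[treat].mean() - hidden[ctrl].mean())

# A single draw -- the situation of any one actual trial.
single_draw = imbalance(*allocate())

# The defence of randomization is distributional: the signed imbalance
# averages out to zero over many allocations, but any particular draw
# can sit far out in the tail.
draws = [imbalance(*allocate()) for _ in range(2000)]
mean_imbalance = float(np.mean(draws))
worst_case = float(np.max(draws))

print(f"imbalance in one draw:      {single_draw:.3f}")
print(f"mean over 2000 draws:       {mean_imbalance:.3f}")
print(f"worst case over 2000 draws: {worst_case:.3f}")
```

The worst-case draw is several times the typical one, which is the gap between the theory of randomization and any single implemented trial; checking "baseline imbalances" and re-randomizing, as in clinical trials, only helps for factors one already knows to measure.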
Reliability, Validity, and Problems of Causal Inference

In the context of experiments involving social phenomena, there are, in general, two types of inference that a researcher can make: firstly, inference from the data/evidence to a cause and, secondly, inference from a particular experiment to other experimental and social contexts (Guala 2012). Given that genuine causal inferences within an experiment demand the ideal conditions of noninterference and unit homogeneity, biases that result from, for example, reactive behaviors, omitted variables, and overrepresentation can affect the strength of the inference drawn from the evidence. Proponents of experimental design in behavioral and social sciences claim that, as a result of randomization, we can be confident that the experimental and control groups are comparable. In addition, by successfully implementing the experimental design, we can isolate different causal factors from possible confounders. Nevertheless, in practice, it is difficult to make a genuine inference that some treatment causes some effect. There is the possibility that interference and reactive behaviors may lead to the reorganization of studied groups in a way that
25
Experimental Design
469
compensates for the disruption caused by the intervention (Mitchell 2009). Even when aspects of the underlying system have been controlled to the degree that treatment and control groups seem balanced with respect to all known causal factors, there will be many more unconsidered and unknown biases (Worrall 2007). The problem is exacerbated in the case of quasi-experiments; with randomization no longer an option, researchers are forced to make careful determinations about the possible confounders and biases that might be present. The issue is that the reliability and validity of causal inferences based on the data will depend on the ability of the researchers to consider and adjust for these possible confounders and biases, which will depend upon their specific knowledge of the background causal system that gives rise to causal relationships in an experiment (Guala 2005, 2009). Since no real experiment is ideal and since we have no way of knowing how near to the ideal a real experiment is (save for baseline imbalances and other known confounders), any particular experiment – whether randomized or not – may mislead about causes (Worrall 2007). Problems surrounding causal inferences within experiments do not turn solely on the question of whether it is possible, in practice, to control the variables that should be fixed. According to the logic of controlled variation, researchers must also control the variable that is being manipulated. Nevertheless, in the social and behavioral sciences, there can be a number of different variables that cannot be directly manipulated. It can also be extremely complicated – both in theory and in practice – to isolate the intervening variable from the background causal structures of the experiment. Even if it is the case that the variable of interest can be identified, it may not be possible to manipulate it in isolation (Bogen 2002; Woodward 2008). 
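The reliance of nonrandomized designs on identifying and adjusting for confounders, discussed above, can be sketched numerically. In the hypothetical data below, a background factor Z drives both the exposure X and the outcome Y, with a true direct effect of X on Y fixed at 1.0; a researcher who has not identified Z has no way of noticing that the naive estimate is biased. All coefficients are invented for illustration (NumPy assumed):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5000

# Invented causal system for a quasi-experiment: a background factor Z
# (a confounder) influences both "treatment" exposure X and outcome Y.
# The true direct effect of X on Y is set to 1.0.
z = rng.normal(size=n)
x = 0.8 * z + rng.normal(size=n)
y = 1.0 * x + 2.0 * z + rng.normal(size=n)

def ols(design, outcome):
    """Ordinary least-squares coefficients for a design matrix."""
    return np.linalg.lstsq(design, outcome, rcond=None)[0]

ones = np.ones(n)

# Regressing Y on X alone -- what a researcher who has not identified
# the confounder would do -- badly overstates the effect.
naive = float(ols(np.column_stack([ones, x]), y)[1])

# Adjusting for Z recovers the true effect, but only because we know,
# by construction, that Z is the confounder to adjust for.
adjusted = float(ols(np.column_stack([ones, x, z]), y)[1])

print(f"true effect 1.0 | naive {naive:.2f} | adjusted {adjusted:.2f}")
```

The adjustment works here only because the confounder is known by construction; in the field, the same regression with Z omitted would look equally authoritative while being wrong.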
Slight changes in the underlying causal system of an experiment or manipulations of other variables can affect causal processes (Guala 2005), tap into different causal factors (Sullivan 2009), or even subject circumstances to entirely different causal principles (Cartwright 2012). These problems can be dealt with by modifying the ideal design of an experiment accordingly. However, because the circumstances in which social interventions take place are never ideal, changes to the ideal design can trigger a trade-off whereby one problem is solved at the cost of introducing another (Guala 2009). The legitimacy of causal inferences, as we have seen, depends upon the success of a particular design in controlling for both the variable that is to be manipulated and the variables that are to be held fixed. This issue is typically discussed under the general concept of “validity” (Feest and Steinle 2016). The “internal validity” of an experiment pertains to the inferences within experiments from the data/evidence to the cause. This is contrasted with “external validity,” which pertains to the inferences from particular experiments to other contexts. The question of external validity concerns whether the same causal mechanisms operate in other contexts. Some claim that causal inferences made within laboratory experiments are more reliable than those within field experiments precisely because researchers are better equipped to control the relevant variables (Kuorikoski and Marchionni 2014). Others suggest that laboratory conditions simplify the background causal structures of an experiment, thereby making the situation more epistemically manageable (Guala 2005,
2009). Due to the isolation of variables of interest or the simplification of underlying causal systems, target situations are deemed to correspond to the experimental design because the causal relationships involved are believed to be context independent. However, when it comes to the social and behavioral sciences, it cannot be assumed that the causal relationships are context independent to the degree that would allow straightforward inferences from laboratory conditions to the phenomena of interest (Kuorikoski and Marchionni 2014). The methodological or epistemic control exerted in laboratory experiments often results in highly confined and artificial conditions. This is one of the motivations behind field experiments, including quasi-experiments and RCTs, which, according to proponents, have greater external validity precisely because they are conducted in real-world, natural environments over which a researcher has only limited control beyond the intervention (Morton and Williams 2010). The problem is that greater external validity comes at the cost of internal validity. Due to the fact that social phenomena are situated in highly complex causal systems that give rise to compound causal relationships and mechanisms of causal force, limited control over variables can make it difficult to identify genuine causes of social events (Risjord 2014). Nevertheless, proponents claim that the evidence that supports a causal inference in a particular quasi-experiment or RCT is more likely to be generalizable to other real-world contexts. On first appearance, this seems like a fair assumption; if the intervention has, in fact, identified a genuine causal relationship between the manipulated and outcome variables, then presumably the causal factors of interest are context independent such that that relationship can be generalized to other cases. 
However, this argument depends on important additional assumptions about the similarities between the circumstances under which the experiment is carried out and the circumstances to which the results of the particular experiment are extrapolated. Even if we assume that a genuine causal inference can be drawn within a localized intervention, the flux of the social world may undermine the conclusion that the same causal inference holds beyond the particular case (Crasnow 2017). Consequently, without additional knowledge of the similarities and differences, it may not be justifiable, or even possible, to generalize the causal inferences drawn within a particular experiment to other experimental situations let alone general social contexts. It is widely believed that there is a trade-off between the reliability of our inferences within the confines of the experiment and the reliability of our extrapolations from the experiment. In other words, there is a trade-off between internal and external validity (Guala 2005, 2009; Cartwright 2012; Kuorikoski and Marchionni 2014; Feest and Steinle 2016; Crasnow 2017). External validity is an important concept in the context of much of today’s experimental research. With an ever-increasing concern for “evidence-based policy” and “social impact,” public and private research funders tend to favor those projects, disciplines, and research methods that can show us “what works.” The effectiveness of a proposed policy is regularly explained and justified on the basis of the reliability of a specific type of research. RCTs and quasi-experiments are typically presented as the most reliable forms of research (Cartwright 2012). As Cartwright (2012) suggests, the effects of an experiment are reliable if they are the results of causes that
happen under the governance of a causal relationship, which, in turn, results from an underlying causal structure. In the case of an experiment that aims to infer a causal relationship between teacher-pupil ratio and pupil achievement, the causal system might include factors such as the frequency of pupil attendance; the ages of the students; whether the teacher is the same in all classroom situations; the fact that schooling is mandatory; the ability, competency, and qualifications of the teacher; the socioeconomic environment in which the school is situated; and so on. The problem is that the causal relationships involved in social experiments are claimed to be local and fragile (Cartwright 1999, 2007). They are local because they depend on the organization of the underlying causal system and thereby they are deemed to hold only when the “socioeconomic machine” is in place to support them. Policy interventions may involve a different complex of causal factors to that of the initial experiment because of differences in the background causal systems. The causal relationships are fragile because policy interventions made on the basis of a particular causal inference within a specific RCT or quasi-experiment are likely to change the organization of the background system such that the causal relationships no longer hold. Furthermore, by employing randomization in order to insulate an intervention from all known and unknown confounders and biases, even experiments themselves can alter the background causal system that makes a causal relationship possible. As a result, randomization is not a guarantee of external validity. To warrant the belief that a causal inference can be extrapolated from a particular, well-implemented experiment to other contexts, the proposed techniques of causal inference stress the importance of multiple kinds of evidence and methods (Kuorikoski and Marchionni 2014). 
For Cartwright (2012), what is required is knowledge of the nature and stability of the causal factors involved and the background causal structures of both the experimental setup and the target setup. Such knowledge, she claims, cannot be underwritten by any particular RCT; it depends on a complicated balance of theory and empirical studies. More importantly, external validity claims require that any additional interventions do not disrupt the causal system that supports the causal relationship identified in the particular RCT or quasi-experiment. Broadly, the problem of extrapolation can be overcome in a similar fashion to the problem of causal inferences within experiments, namely, by way of knowledge of the context and background conditions of the research. It has been suggested that the data provided by experimental design can contribute to external validity only when it is supplemented with analyses of detailed “on-the-ground” evidence generated through multiple, independent nonexperimental methods, including qualitative methods such as case studies and process tracing (Risjord 2014; Crasnow 2017).
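The locality claim can also be given a toy illustration: below, a class-size intervention has an effect only while a background support structure (Cartwright's "socioeconomic machine") is in place, so an internally impeccable trial at the study site says nothing about the policy site. The code and numbers are hypothetical (NumPy assumed):

```python
import numpy as np

rng = np.random.default_rng(7)

def run_trial(support, n=4000):
    """A toy RCT of a class-size intervention whose effect operates only
    when a background 'support' structure (mandatory attendance,
    qualified teachers, ...) is in place."""
    treated = rng.integers(0, 2, n)   # random assignment, 0 or 1
    effect = 2.0 * support            # the effect rides on the background
    outcome = effect * treated + rng.normal(size=n)
    return float(outcome[treated == 1].mean() - outcome[treated == 0].mean())

# At the study site the supporting structure holds; at the policy site
# it does not, so the "same" intervention does nothing.
effect_at_study_site = run_trial(support=1)
effect_at_policy_site = run_trial(support=0)

print(f"estimated effect, study site:  {effect_at_study_site:.2f}")
print(f"estimated effect, policy site: {effect_at_policy_site:.2f}")
```

Internally, both trials are well conducted; the failure is entirely one of external validity, which no amount of randomization at the study site can repair.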
Conclusions

This chapter has articulated the main ethical issues associated with experimental design, specifically, those issues that arise when experimental interventions seek to control variables by randomly assigning participants to different groups.
Furthermore, by exploring assumptions about the nature of causation and by analyzing features of causal relationships, this chapter has illustrated some of the ways in which experimental design can undermine the reliability and validity of causal claims, thereby affecting the integrity of research and evidence-based policy.
References

Alderson P (1996) Equipoise as a means of managing uncertainty: personal, communal and proxy. J Med Ethics 22(3):135–139
Arabatzis T (2014) Experiment. In: Curd M, Psillos S (eds) The Routledge companion to philosophy of science, 2nd edn. Routledge, London, pp 191–202
Baele S (2013) The ethics of new development economics: is the experimental approach to development economics morally wrong? J Philos Econ 7(1):2–42
Beauchamp T, Childress J (2013) Principles of biomedical ethics, 7th edn. Oxford University Press, Oxford
Binmore K (1999) Why experiment in economics? Econ J 109(453):16–24
Bogen J (2002) Epistemological custard pies from functional brain imaging. Philos Sci 69(3):59–71
Bordens K, Abbott B (2013) Research design and methods: a process approach. McGraw-Hill, Boston
Brady H (2011) Causation and explanation in social science. In: Goodin R (ed) The Oxford handbook of political science. Oxford University Press, Oxford, pp 1054–1107
Broome J (1984) Selecting people randomly. Ethics 95(1):38–55
Brown A, Mehta T, Allison D (2017) Publication bias in science: what is it, why is it problematic, and how can it be addressed? In: Jamieson K, Kahan D, Scheufele D (eds) The Oxford handbook of the science of science communication. Oxford University Press, Oxford, pp 93–101
Cartwright N (1999) The dappled world: a study of the boundaries of science. Cambridge University Press, Cambridge, UK
Cartwright N (2007) Hunting causes and using them. Cambridge University Press, Cambridge, UK
Cartwright N (2012) RCTs, evidence, and predicting policy effectiveness. In: Kincaid H (ed) The Oxford handbook of philosophy of social science. Oxford University Press, Oxford, UK, pp 298–318
Cartwright N (2014) Causal inference. In: Cartwright N, Montuschi E (eds) Philosophy of social science: a new introduction. Oxford University Press, Oxford, pp 308–337
Churchill L (1980) Physician-investigator/patient-subject: exploring the logic and the tension. J Med Philos 5(3):215–224
Clarke S (1999) Justifying deception in social science research. J Appl Philos 16(2):151–166
Conner R (1982) Random assignment of clients in social experimentation. In: Sieber J (ed) The ethics of social research: surveys and experiments. Springer, New York, pp 57–77
Cook C, Sheets C (2011) Clinical equipoise and personal equipoise: two necessary ingredients for reducing bias in manual therapy trials. J Man Manipulative Ther 19(1):55–57
Cook T, Campbell D (1986) The causal assumptions of quasi-experimental practice. Synthese 68(1):141–180
Crasnow S (2017) Bias in social science experiments. In: McIntyre L, Rosenberg A (eds) The Routledge companion to the philosophy of social science. Routledge, London, pp 191–201
Douglas H (2014) Values in social science. In: Cartwright N, Montuschi E (eds) Philosophy of social science: a new introduction. Oxford University Press, Oxford, pp 162–182
Feest U, Steinle F (2016) Experiment. In: Humphreys P (ed) The Oxford handbook of philosophy of science. Oxford University Press, Oxford, pp 274–295
Freedman B (1987) Equipoise and the ethics of clinical research. N Engl J Med 317(3):141–145
Freedman B, Glass K, Weijer C (1996) Placebo orthodoxy in clinical research II: ethical, legal, and regulatory myths. J Law Med Ethics 24(3):252–259
Fried C (1974) Medical experimentation: personal integrity and social policy. Elsevier, New York
Gangl M (2010) Causal inference in sociological research. Annu Rev Sociol 36:21–47
Geller D (1982) Alternatives to deception: why, what, and how? In: Sieber J (ed) The ethics of social research: surveys and experiments. Springer, New York, pp 38–55
Gifford F (1986) The conflict between randomized clinical trials and the therapeutic obligation. J Med Philos 11:347–366
Gillon R (1994) Medical ethics: four principles plus attention to scope. Br Med J 309(6948):184–188
Goldthorpe J (2001) Causation, statistics, and sociology. Eur Sociol Rev 17(1):1–20
Guala F (2005) The methodology of experimental economics. Cambridge University Press, Cambridge
Guala F (2009) Methodological issues in experimental design and interpretation. In: Kincaid H, Ross D (eds) The Oxford handbook of philosophy of economics. Oxford University Press, Oxford, pp 280–305
Guala F (2012) Experimentation in economics. In: Mäki U (ed) Philosophy of economics. Elsevier/North-Holland, Oxford, pp 597–640
Hacking I (1999) The social construction of what? Harvard University Press, Cambridge, MA
Hammersley M (2008) Paradigm war revived? On the diagnosis of resistance to randomized controlled trials and systematic review in education. Int J Res Method Educ 31(1):3–10
Hegtvedt K (2014) Ethics and experiments. In: Webster M, Sell J (eds) Laboratory experiments in the social sciences. Academic, London, pp 23–51
Holmes D (1976) ‘Debriefing after psychological experiments: I. Effectiveness of postdeception dehoaxing’ and ‘Debriefing after psychological experiments: II. Effectiveness of postexperimental desensitizing’. Am Psychol 32:858–875
Humphreys M (2015) Reflections on the ethics of social experimentation. J Glob Dev 6(1):87–112
Kaidesoja T (2017) Causal inference and modeling. In: McIntyre L, Rosenberg A (eds) The Routledge companion to philosophy of social science. Routledge, London, pp 202–213
Kelman H (1982) Ethical issues in different social science methods. In: Beauchamp T et al (eds) Ethical issues in social science research. Johns Hopkins University Press, Baltimore, pp 40–98
Kuorikoski J, Marchionni C (2014) Philosophy of economics. In: French S, Saatsi J (eds) The Bloomsbury companion to the philosophy of science. Bloomsbury, London, pp 314–333
Levine R (1979) Clarifying the concepts of research ethics. Hast Cent Rep 9(3):21–26
Lilford R, Jackson J (1995) Equipoise and the ethics of randomization. J R Soc Med 88(10):552–559
Miller F, Brody H (2003) A critique of clinical equipoise: therapeutic misconception in the ethics of clinical trials. Hast Cent Rep 33(3):19–28
Miller P, Weijer C (2006) Fiduciary obligation in clinical research. J Law Med Ethics 34(2):424–440
Mitchell S (2009) Unsimple truths: science, complexity, and policy. University of Chicago Press, Chicago
Morton R, Williams K (2010) Experimental political science and the study of causality: from nature to the lab. Cambridge University Press, Cambridge, UK
Oakley A et al (2003) Using random allocation to evaluate social interventions: three recent UK examples. Ann Am Acad Pol Soc Sci 589(1):170–189
Papineau D (1994) The virtues of randomization. Br J Philos Sci 45:437–450
Pearl J (2000) Causality: models, reasoning, and inference. Cambridge University Press, Cambridge, UK
Risjord M (2014) Philosophy of social science: a contemporary introduction. Routledge, London
Sieber J (1982) Ethical dilemmas in social research. In: Sieber J (ed) The ethics of social research: surveys and experiments. Springer, New York, pp 1–29
Sieber J (1992) Planning ethically responsible research: a guide for students and internal review boards. Sage, Newbury Park
Sobel M (1996) An introduction to causal inference. Sociol Methods Res 24(3):353–379
Sullivan J (2009) The multiplicity of experimental protocols: a challenge to reductionist and nonreductionist models of the unity of neuroscience. Synthese 167:511–539
Urbach P (1985) Randomization and the design of experiments. Philos Sci 52:256–273
Veatch R (2007) The irrelevance of equipoise. J Med Philos 32(2):167–183
Wilholt T (2009) Bias and values in scientific research. Stud Hist Phil Sci 40(1):92–101
Woodward J (2008) Invariance, modularity, and all that: Cartwright on causation. In: Cartwright N et al (eds) Nancy Cartwright’s philosophy of science. Routledge, New York, pp 198–237
Worrall J (2002) What evidence in evidence-based medicine? Philos Sci 69(3):316–330
Worrall J (2007) Why there’s no cause to randomize. Br J Philos Sci 58(3):451–488
Ethics of Observational Research
26
Meta Gorup
Contents
Introduction  476
Questions of Ethics and Integrity in Shadowing Research  478
  Official Ethics Versus Ethics in the Field  478
  Voluntary and Informed Consent  480
  Privacy, Confidentiality, and Anonymity  481
  Participant and Researcher Well-Being  482
  Rapport and Researcher Position in the Field  484
  Questions of Representation and Betrayal  487
  Informing and Critiquing Policy and Practice  488
Concluding Remarks  490
Cross-References  490
References  490
M. Gorup (*)
Ghent University, Ghent, Belgium
e-mail: [email protected]

© Springer Nature Switzerland AG 2020
R. Iphofen (ed.), Handbook of Research Ethics and Scientific Integrity, https://doi.org/10.1007/978-3-030-16759-2_20

Abstract

This chapter addresses issues of ethics and integrity in observational research, with a specific focus on qualitative shadowing. Qualitative shadowing is a form of nonparticipant observation, which includes following selected individuals and a detailed recording of their behavior. Shadowing thus implies a level of intrusiveness, which results from the close proximity between shadowers and shadowees. As such, it enables access to the otherwise invisible parts of the studied settings, resulting in intimate insights into the shadowees’ lives. At the same time, shadowing is highly unpredictable as shadowers’ access and exposure depend on shadowees. The invasive, relational, intimate, and emergent nature of shadowing gives rise to numerous ethical and integrity dilemmas: the potential discrepancy between acquiring a formal ethical approval and engaging in ethical practice once in the field; the challenge of gaining a truly voluntary
and informed consent; the difficulty of maintaining privacy, confidentiality, and anonymity; risks to participant and researcher well-being; the complexity of managing relations and researcher roles in the field – and the effects thereof on the research process; the danger of causing feelings of betrayal among research participants; and the contradictions arising from shadowing research when it is employed as a tool for change or critique. While the present chapter cannot claim to provide a set of fixed guidelines for an ethical and rigorous conduct of shadowing, its aim is to sensitize shadowers to potential challenges and possible ways of addressing them by drawing on the rich experience of shadowing researchers across a variety of academic disciplines.

Keywords
Ethics · Integrity · Observational research · Nonparticipant observation · Qualitative shadowing
Introduction

In their seminal text on social research methods, Moser and Kalton (1971) describe observation as “the classic method of scientific enquiry” (p. 244). Broadly speaking, “[t]he distinguishing feature of observation /. . ./ is that the information required is obtained directly, rather than through reports of others” (Moser and Kalton 1971, p. 244). However, the generic term “observation” may refer to a number of different methodological approaches used in a variety of academic disciplines, from social and behavioral ones, such as psychology, communication and media studies, sociology, and anthropology, to various medical sciences. Across these varied disciplines, the use of observation techniques implies distinct ethical challenges. On one side of the spectrum, in medicine, observational studies are considered naturalistic and nonexperimental in that they do not interfere with individuals under study for the purposes of research. Though medical researchers conducting observational research need to pay close attention to the basic principles of research ethics, the largely noninvasive character of such studies diminishes ethical concerns more common in other forms of medical research, such as randomized clinical trials (for further discussions of ethics in medical observational research, see, e.g., Orchard 2008; Kho et al. 2009; Yang et al. 2010; Norris et al. 2012; Boyko 2013; Rasmussen et al. 2014). In social sciences, on the other hand, whether researchers engage in participant or nonparticipant observation, they will necessarily alter the observed settings due to their presence. Moreover, particularly if employing qualitative approaches, this type of observational research is likely to result in close proximity between the researchers and the researched, raising further ethical dilemmas (Czarniawska 2007; Johnson 2014).
Bearing the ethically charged character of social observational studies in mind, the remainder of the chapter focuses on observational research in the context of social sciences. With participant observation addressed elsewhere in this handbook (see ▶ Chap. 24, “Ethics of Ethnography” by Hammersley), the
present text more specifically looks at nonparticipant observational research and in particular qualitative shadowing as one of its forms. Qualitative shadowing has been defined as an observational methodology which involves the researcher typically following a limited number of individuals as they go about their everyday tasks. During shadowing, researchers generally do not participate in their shadowees’ activities; however, they may encourage their research participants to comment on and explain the observed situations with the aim of grasping their points of view (McDonald 2005; Czarniawska 2007). While issues of ethics and integrity arising in shadowing research bear a similarity to those prominent in other forms of qualitative inquiry, the intensity of some of these challenges may be heightened due to the often intimate, sometimes rather intrusive, inherently relational, and highly unpredictable nature of shadowing. Given the close proximity between the researcher and the researched, frequently over extended periods of time, shadowing has a “distinctively intimate character” (Gill et al. 2014, p. 86) with researchers accessing “invisible and intimate spaces” (Gill 2011, p. 130). In addition to disrupting their routines, this might give shadowees the feeling of their professional and personal lives being intruded upon, potentially making shadowing an uncomfortable experience (Bartkowiak-Theron and Sappey 2012; Ferguson 2016). The intimacy and intrusiveness of shadowing imply the central importance of the character of the relationship between shadowers and their research participants. Field relations and the position of the researcher in the field carry consequences not only for the ethics of research but also for its methodological integrity, making the acknowledgment of researcher positionality and the significance of reflexivity in shadowing studies all the more crucial.
Shadowers need to pay constant attention to how they engage with their research participants and how the nature of these engagements shapes the processes of data collection and analysis (Gill 2011). These circumstances make shadowing an uncertain and emergent research endeavor: it is nearly impossible to predict in detail how a study will evolve or which ethical dilemmas one might face (McDonald 2018). No “one-size-fits-all” set of ethical guidelines can be provided as issues, and solutions to them, vary from project to project (Czarniawska 2007; Johnson 2014). It is thus crucial for shadowers to remain flexible and accept that potential risks need to be continuously revisited and addressed according to specific circumstances (Gill et al. 2014). While the present chapter cannot claim to provide a set of fixed guidelines for an ethical and rigorous conduct of shadowing, its aim is to sensitize shadowers to potential challenges and possible ways of addressing them by drawing on the rich experience of shadowing researchers across a variety of academic disciplines. The following sections focus on a number of issues of ethics and integrity arising from the characteristics pertaining to shadowing research identified above: the potential discrepancy between acquiring a formal ethical approval and engaging in ethical practice once in the field; the challenge of gaining a truly voluntary and informed consent; the difficulty of maintaining privacy, confidentiality, and anonymity; risks to participant and researcher well-being; the complexity of managing relations and researcher roles in the field – and the effects thereof on the research
process; the danger of causing feelings of betrayal among research participants; and the contradictions arising from shadowing research when it is employed as a tool for change or critique. Although each of these issues is addressed in a separate section, the reader should bear in mind that they are closely connected and often overlap. The chapter closes with concluding remarks suggesting that drawing from cases and researcher experience rather than attempting universal ethical guidelines offers the best way forward.
Questions of Ethics and Integrity in Shadowing Research

Although some of the classic shadowing studies and/or their predecessors were published decades ago, such as Mintzberg’s famous analysis of managers’ work (Mintzberg 1970, 1973) and Wolcott’s 2-year observation of a school principal (Wolcott 1973/2003), shadowing remains a relatively underused and underdiscussed observational technique. In-depth methodological debates of shadowing – and of particular interest to this chapter, conversations regarding shadowing ethics – are fairly recent. Nevertheless, there is a growing body of literature dedicated to shadowing across a number of disciplines and spheres of interest, such as educational, organizational, and healthcare studies. These will be drawn upon throughout this chapter.
Official Ethics Versus Ethics in the Field

Like most research projects involving human participants, shadowing studies are required to obtain approval from one or more ethics committees prior to data collection. Researchers will typically have to provide detailed information on research purpose and contributions, participant selection and informed consent, potential risks and benefits, and dealing with privacy, confidentiality, and anonymity. This process, although undeniably necessary and arguably helpful for researchers working to diminish potential risks for their participants, has frequently caused frustration among shadowing researchers; the ethical approval process is not well adjusted to a methodology so open-ended and flexible in nature, and ethics committees are often unfamiliar with shadowing and its standards of methodological rigor (Johnson 2014). Ethics committees usually require a rigid research design and a detailed plan for ensuring an ethical research conduct. This has been referred to as “procedural ethics” (Guillemin and Gillam 2004), that is, the official process which determines whether a research project is granted an ethical approval based on agreed provisions. Similar to other scholars adopting emergent research designs, shadowers frequently experience this process as too inflexible because the reality of shadowing practice rarely neatly corresponds to ethics committees’ ideas of ethical research design and
26
Ethics of Observational Research
479
conduct. In other words, the so-called “ethics in practice” (Guillemin and Gillam 2004), that is, ethical decision-making that takes place when researchers face ethical dilemmas in the field, may not be consistent with the requirements set by and agreed on with the ethics committee. Not only might some responses to ethical challenges not work as planned, shadowers are likely to encounter issues not raised and addressed in the process of gaining ethical approval (Bartkowiak-Theron and Sappey 2012; Johnson 2014). For example, ethics committees typically require that shadowing researchers clarify the criteria for participant selection, not only in relation to shadowees but also people to be observed interacting with them. This can be problematic because shadowers’ access and exposure to particular events and people depend on shadowees’ activities and access negotiations, both of which are difficult to predict and control. Gorup (2016) reports on difficulties foreseeing who she would be meeting while shadowing university department heads. As additional groups of people emerged as potential participants, she resubmitted her application for ethics approval several times, thus complying with procedural ethics. But would her practice really be unethical if she did not? In another case, while shadowing a nurse, Quinlan (2008) gained unplanned access to the nurse’s consultations with her patients. The researcher seized the opportunity despite not having gained prior ethical approval. Did this make her observations unethical even if both the nurse and the patients agreed to her presence? These examples raise questions of what it means to be an ethical researcher: complying with ethics committees’ prescriptions or making sure your research participants are not at risk? Such dilemmas have no simple answers, need to be dealt with within short time frames, and are difficult to foresee. 
While not necessarily improving the procedural aspect of ethics, one possible way to plan better for ethical decision-making is to discuss potential ethical challenges with the individuals to be shadowed; they have better insight into the site under study, into the ethical dilemmas that may emerge, and into how these can be overcome with the specific setting in mind (Wolcott 1973/2003).

Beyond ethics committees’ frequent misunderstanding of the emergent character of shadowing, they sometimes also fail to grasp the criteria for its quality and validity. Johnson (2014) shares his experience of an ethics committee’s concern that his small sample was not representative of the group of managers under study. Small samples are common in shadowing because of its intensity and consequently very rich datasets, and generalization is typically not the purpose of shadowing studies. Nevertheless, rather than arguing about standards of rigor with the ethics committee and thus delaying approval, Johnson (2014) decided to appease it by presenting his initial results to larger groups of managers for validation. While this is not to say that validating findings with members of the broader community under study is wrong, the case points to the ethics committee’s misunderstanding of the goals and validity standards of shadowing research. It suggests that shadowing researchers might sometimes need to compromise on their ideal research design in order to acquire ethical approval more promptly.
M. Gorup
Voluntary and Informed Consent

One of the central requirements of ethical research conduct – and, as shown below, one that is also fraught with discrepancies between “procedural ethics” and “ethics in practice” (Guillemin and Gillam 2004) – is acquiring voluntary and informed consent from all research participants. This is especially important in the context of shadowing, since the principle applies not only to the shadowees who are the focus of the study but also to the other individuals observed in the process of shadowing (Bartkowiak-Theron and Sappey 2012). Obtaining consent from these two participant groups involves distinct challenges.

With regard to shadowees, it is typically considered good practice to discuss thoroughly what shadowing will entail prior to any data collection (Johnson 2014; Bøe et al. 2016). At the same time, however, researchers might want to be careful not to share too many details of their research interests, so as not to make shadowees uncomfortable about particular features of their behavior (Gill et al. 2014). Once shadowees agree to participate, data collection can begin, but this does not mean that the researcher has been granted indefinite consent; despite a signed consent form, participants can ask the researcher to pause the observations at any time (Quinlan 2008; Johnson 2014). Although this may seem fairly uncomplicated, shadowers are likely to find themselves in situations where shadowees do not explicitly ask not to be observed, yet the researcher feels uncertain about what can be used as data and what should be left out. For example, Vasquez et al. (2012) report their uncertainty over whether a conversation with a shadowee over lunch was to be considered data. Given the informal setting, did the participant realize that they were being studied?
Following that exchange, the researcher resolved the tension by openly asking the shadowee for permission to use their lunch conversation as data, to which the participant agreed. Ferguson (2016) dealt with similar situations by distinguishing between topics that were directly related to her research interests and those that were largely unrelated; she decided not to take notes on discussions that were not central to her study. In these instances, shadowers engaged in ethical decision-making to protect their shadowees from involuntarily becoming part of something they had not agreed to. Yet Ferguson (2016) also describes the continuing hesitation of some of her participants despite their consent to take part in the study. She experienced those encounters as extremely uncomfortable and decided never again to shadow anyone who does not convey complete certainty about participating.

Issues like these are even more acute for “secondary” participants, that is, those observed in addition to the shadowees. They are likely to be informed about the shadowing in far less detail and might thus not understand the exact nature of the researcher’s presence (Wolcott 1973/2003; Johnson 2014). Johnson (2014) advises informing internal organizational members about the shadower’s presence ahead of the study and contacting them prior to any scheduled meetings that are to be observed. For unscheduled meetings, he suggests first explaining the researcher’s presence and gaining verbal consent, then seeking written consent as soon as possible after the meeting. However, this might not always be possible, especially
when the observed individuals are external to the organization under study. Johnson (2014) reports cases in which he was not introduced to external attendees who left right after a meeting, leaving them entirely unaware that they had been unwitting participants in a study (see also McDonald 2018). Others have described situations where participants knowingly took part but gave only verbal consent, thereby not fulfilling the ethics committee’s requirement for a written one (Quinlan 2008; Gilliat-Ray 2011). Furthermore, Vasquez et al. (2012) and Johnson (2014) refer to instances in which, although all participants formally agreed to participate, the shadowers had doubts about the voluntary nature of their participation. Given that the shadowees held management roles and were hierarchically superior to the other observed individuals, their participation might have taken place under pressure. When shadowers sensed that this might be the case, or that some of the observed individuals felt uncomfortable with the observer’s presence, they paused their observations (see also Arman et al. 2012; Bøe et al. 2016).

The above examples illustrate that, despite precautions and against researchers’ intentions, shadowing might turn into a “quasi-covert” research endeavor in which not all participants voluntarily and knowingly take part (Johnson 2014). While this is commonly seen as unethical practice, some shadowing researchers have argued that seeking written consent might significantly disrupt the observed situation (Ferguson 2016), and hence that not informing participants about the research might be less intrusive and consequently more ethical than insisting on satisfying ethics committees’ requirements for signed consent forms (Johnson 2014). As in the case of participant selection described earlier, there is likely to be a mismatch between “procedural ethics” and “ethics in practice” (Guillemin and Gillam 2004) when seeking voluntary and informed consent.
Again, no clear answers can be provided. What is crucial, if such issues arise, is that shadowers openly discuss them with participants whenever possible; where this is not feasible, shadowers should remain attentive to their surroundings and to the contexts in which voluntary and informed consent is – or is not – granted.
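For researchers who keep systematic records of who has consented and in what form, the practice sketched above – verbal consent gained first, written consent sought as soon as possible afterwards, observations paused on request – can be captured in a simple consent log. The following Python sketch is purely illustrative: the record structure, field names, and statuses are assumptions for this example, not an established research instrument.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class ConsentRecord:
    """Tracks one participant's consent status during a shadowing study.

    Illustrative only: statuses follow the verbal-then-written practice
    suggested by Johnson (2014) for unscheduled meetings.
    """
    participant: str
    role: str                              # "shadowee" or "secondary"
    verbal_consent: Optional[date] = None
    written_consent: Optional[date] = None
    paused: bool = False                   # participant asked to pause observation
    notes: list = field(default_factory=list)

    def observable(self) -> bool:
        # Observe only with at least verbal consent and no active pause.
        return self.verbal_consent is not None and not self.paused

    def needs_written_follow_up(self) -> bool:
        # Verbal consent given but the written form is still outstanding.
        return self.verbal_consent is not None and self.written_consent is None

# Hypothetical example: an external attendee gave verbal consent during a
# meeting; the written form should still be sought afterwards.
attendee = ConsentRecord("P07", "secondary", verbal_consent=date(2020, 3, 2))
print(attendee.observable())               # True
print(attendee.needs_written_follow_up())  # True
```

A log like this does not resolve the ethical dilemmas discussed above, but it makes visible at any moment which observations rest on verbal consent only and where written follow-up is still owed.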
Privacy, Confidentiality, and Anonymity

Another cornerstone of research ethics is the protection of participant privacy, confidentiality, and anonymity. Fully complying with these principles in the conduct of shadowing, however, can be challenging.

Let us begin with privacy, that is, participants’ right to keep selected information about themselves private. As noted above, informed consent in shadowing sometimes remains implicit, and thus respect for participant privacy lies in the hands of a researcher committed to ethical decision-making when faced with such dilemmas. This may include pausing observations or not taking notes on private information (Ferguson 2016); considerations of privacy may also apply to settings in which shadowees spend significant amounts of time working alone in their offices, even if this does not involve sharing any information whatsoever (Arman et al. 2012). While shadowing will almost certainly entail moments of insight into the private details of participants’ lives, shadowers should pay close attention
to avoiding extensive intrusion into individuals’ privacy and work to diminish participants’ “negative feelings of personal and professional invasion” (Bartkowiak-Theron and Sappey 2012, p. 13).

Maintaining confidentiality, that is, the responsibility to keep certain information confidential, can also prove difficult, for several reasons. Ferguson (2016) relates that, owing to the small size of the community in which she conducted her shadowing, everybody knew who the shadowees were even though she never shared that information with others. Given that the identities of her shadowees were not confidential, she notes being extremely careful in choosing what to report in her findings (see also Engstrom 2010). Describing situations in which he was unsure what was considered private and confidential and what was not, Johnson (2014) gave his shadowees the opportunity to review his reports before publication, to ensure that he did not publicly discuss potentially sensitive or harmful topics. Conducting shadowing among hospital nurses, Urban and Quinlan (2014) report agreeing with the ethics committee to observe their shadowees only in common areas, so that patient confidentiality – understood as an agreement between patients and the medical professionals treating them – would not be put at risk by the researcher’s presence. Despite this effort, however, the shadower frequently overheard conversations about patients and observed “hallway nursing” (p. 54). While on paper patient confidentiality was seemingly protected, in practice it was impossible to achieve. Although this does not mean that the shadower reported the confidential information she was privy to, patient confidentiality was breached simply by her presence.
Anonymization – the effort to remove identifying information that could lead to the recognition of participants’ identities – is also crucial, given that shadowing studies often focus on a small number of participants about whom the researcher collects detailed data on their working and private lives. Wolcott (1973/2003) discusses this issue in some detail. In his study of a school principal, he relied on pseudonyms; however, he recognizes that those close to the setting in which the shadowing took place might still have been able to speculate about the research participants’ identities. At the same time, Wolcott (1973/2003) argues that anonymization should not come at the cost of withholding details important to the research findings, and suggests that the typically rather long period between fieldwork and publication may itself aid anonymization, since people tend to forget what they and others said. He suggests that the best way to address the issue of anonymity is to limit the number of people with whom we share information about our research location, thus emphasizing that anonymity is not something to consider only in the write-up phase but should concern shadowers throughout the research process.
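In practice, the substitution of pseudonyms can begin as early as the transcription of field notes. The Python sketch below shows one minimal way to apply a reusable name-to-pseudonym mapping; the names are hypothetical, and – as Wolcott’s experience shows – real anonymization also requires attention to indirect identifiers (roles, places, dates) that simple textual substitution cannot catch.

```python
import re

def pseudonymize(text: str, mapping: dict) -> str:
    """Replace each real name in `text` with its pseudonym.

    Whole-word, case-sensitive matching; longer names are substituted
    first so that "Anna Lind" is not partially rewritten via "Anna".
    """
    for real in sorted(mapping, key=len, reverse=True):
        text = re.sub(rf"\b{re.escape(real)}\b", mapping[real], text)
    return text

# Hypothetical field note and mapping for illustration only.
pseudonyms = {"Anna Lind": "the principal", "Anna": "Teacher A"}
note = "Anna Lind asked Anna to reschedule the parents' meeting."
print(pseudonymize(note, pseudonyms))
# the principal asked Teacher A to reschedule the parents' meeting.
```

Keeping the mapping in a single place, stored separately from the notes themselves, also supports Wolcott’s point that anonymity must be managed throughout the research process rather than only at the write-up stage.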
Participant and Researcher Well-Being

As implied at several points above, shadowers should do their best to diminish the potential negative effects of shadowing on their participants at all stages of research. This means that shadowers should not only be concerned about the effects of the
findings on their participants – an issue raised earlier and addressed further below – but should also do their best to ensure participant well-being during the conduct of shadowing itself. Given the constant presence and proximity of the observer, however, shadowers might find this goal difficult to achieve. The mere act of being followed around for most of the day can be very stressful (Gill 2011), not least because shadowing necessarily disrupts the normalcy of one’s routine (Quinlan 2008; Engstrom 2010; Johnson 2014). Shadowers can be seen by shadowees as “just one more thing for them to accommodate in their already overwhelming schedules” (Urban and Quinlan 2014, p. 57). Furthermore, shadowees are likely to feel that they are being judged by the researcher, even when assured that their behavior will not be assessed or reported to management (Arman et al. 2012; Ferguson 2016).

In spite of this, McDonald (2005) has argued that most shadowees adjust to the researcher’s presence fairly quickly and that the initial awkwardness is likely to fade. Indeed, Cooren et al. (2007) were told by their participants that, despite being not only shadowed but also video recorded, they quickly forgot about the presence of the researcher and the camera. Bøe et al. (2016) had a similarly positive experience while shadowing and video recording early childhood leaders. They suggest that this might have been related to their purposeful recruitment of experienced rather than freshly minted leaders, assuming that the latter might have felt more intimidated by their presence. Shadowers should thus keep in mind that careful participant selection may enhance the ethical conduct of shadowing. However, not all shadowees get used to being observed easily. Vasquez et al. (2012, p.
154), for example, were faced with explicit unease, and one shadowee even openly asked them: “What should I do?” Czarniawska (2007) and Ferguson (2016) also report on shadowees who never seemed to become completely adjusted to the researcher’s presence and displayed more or less overt signs of discomfort or annoyance. While not all shadowees experience such disquiet, observers should be prepared to deal with uncomfortable situations.

Several strategies have been proposed for shadowers to avoid, or at least mitigate, participant anxiety. Gill et al. (2014) suggest that shadowers should talk openly with shadowees about the trade-offs of shadowing and be frank about potentially awkward situations. They argue that shadowers’ openness in this regard may encourage shadowees to bring up issues they feel distressed about, should they arise. Another example of good practice is to establish protocols that allow the times and intensity of shadowing to be modified (Bartkowiak-Theron and Sappey 2012), and to do so in a manner that empowers the participants – rather than solely the researchers – to set temporal and spatial boundaries (Gill 2011). Furthermore, although shadowing can be emotionally demanding for shadowees, researchers may encourage participants to engage in self-reflection, which may prove valuable for them (Bartkowiak-Theron and Sappey 2012; Ferguson 2016). While these approaches to dealing with participant discomfort may succeed in some instances, Ferguson (2016) suggests that participants may sometimes never overcome the stress caused by shadowing, in which case she recommends that observations be discontinued. At the most basic level, shadowers should pay constant attention to shadowees’
comfort (Ferguson 2016), even if this results in lost opportunities for data collection (Johnson 2014).

Having acknowledged the potential negative effects of shadowing on shadowees, it is also important to recognize that shadowing can cause significant stress to those conducting it. Shadowers may experience mental discomfort and anxiety linked to the unpredictable nature of shadowing and to the large amounts of time spent with another person, during which they need to be constantly careful not to intervene in the observed situations and to remain conscious of how they present themselves to their research participants (Czarniawska 2007; Gill 2011; van der Meide et al. 2013; Gill et al. 2014). Shadowing involves significant amounts of “emotional labor” (Gill et al. 2014, p. 76), which researchers should be trained to cope with, though this does not yet seem to be common practice. One way of grappling with the emotional strain may be to employ it as a source of learning and insight, both for better understanding shadowees’ emotions – which might be similar to those of the shadower – and for researchers’ self-understanding (Czarniawska 2007, 2008; Gill et al. 2014). Perhaps most important to researcher well-being during shadowing – which can be a rather isolated endeavor – is ensuring that shadowers have social support, that is, somebody with whom they can regularly share and discuss shadowing-related problems (Gill et al. 2014).
Rapport and Researcher Position in the Field

Given that shadowing involves close proximity between the researcher and the researched, acknowledging the relationships shadowers build and the positions they take in the field as observers is crucial to ethical and rigorous research conduct. As discussed in detail below, shadowers can position themselves in various ways in terms of their (non)participation, their role in co-constructing the observed environments, their proximity or distance to their research participants, and their insider or outsider status. This suggests that there is no one right way of building rapport and positioning oneself in the field. Rather, what is crucial is to foster an awareness of how a given relationship and researcher position affect the research process.

Central to this question is an acknowledgment of researcher positionality and a commitment to reflexivity. Positionality refers to the researcher’s awareness of her or his position in regard to the research, for example in relation to her or his identity or view of the issue under study, recognizing that these may affect the way in which the study and the findings are shaped. Reflexivity is closely related to positionality: a reflexive researcher will openly and honestly reflect on and address her or his role in making sense of the field and subsequently formulating the analysis (Gill 2011). The following sections discuss the various ways in which shadowers have positioned themselves in relation to their research participants and settings and how this has shaped their research practice. The reader should note that the issues discussed below are closely connected and implicated in each other; they are addressed separately only for clarity of presentation.
To Participate or Not to Participate

It was argued earlier that shadowing is a form of nonparticipant observation (Czarniawska 2007). However, shadowers’ close proximity to their participants and their continuous, often long-term presence in the field raise questions about the notion of nonparticipation. While shadowing researchers typically do not engage in the activities they observe, their presence may result in some form of participation, be it by choice or by necessity. For example, while Wolcott (1973/2003) saw himself as merely an observer in most situations, he decided to take a participant role on occasions such as a Christmas party, leaving his notebook behind. Johnson (2014) describes examples of becoming a participant by necessity, for instance when he felt he could not turn down the observed managers’ requests for help, fearing that a refusal would damage the rapport that was crucial for his continued access. He also saw this as a question of ethics: is it unethical to refuse to help when help is needed? Czarniawska (2007, p. 55) takes this a step further, arguing that “all direct observation is indeed participatory – one’s mere physical presence and human decency requires participation.” This suggests that, from an ethical standpoint, shadowers cannot and should not fully detach themselves from the observed individuals and situations, even if they do not actively participate in the observed activities.

Observer Effect and Co-construction of the Field

A shadower’s presence and level of participation also raise questions about the so-called Hawthorne or observer effect (Mintzberg 1970; Quinlan 2008), that is, the possibility of “participants changing their behaviour due to the researcher’s presence” (Gorup 2016, p. 147).
Some shadowers have argued that individuals under observation need to accomplish their tasks regardless of being shadowed, implying that their routine is not disrupted to any significant extent (Mintzberg 1970; Cooren et al. 2007). Others, however, have reported on shadowees falling behind schedule (Quinlan 2008), attempting to be more efficient than usual (Arman et al. 2012), or engaging in activities not common in their regular workday (Gilliat-Ray 2011). It has been suggested that extended presence might reduce the observer effect, as the impact of the observer’s presence is likely to diminish over time (McDonald 2005; Czarniawska 2007). In addition, researchers have proposed triangulation of data as a way to mitigate potential empirical gaps caused by the observer effect, by – in addition to shadowing – conducting diary studies or interviews with shadowees and other members of the studied community, or by comparing the data with existing studies (Arman et al. 2012; Bartkowiak-Theron and Sappey 2012; Gorup 2016). It has also been suggested that video recording – although it does not eliminate the fact that what is recorded is chosen by the observer – may provide more detailed accounts than audio recording or note taking (Cooren et al. 2007).

That said, many shadowing researchers openly acknowledge and take the observer effect for granted, calling for open conversations with participants about the researcher’s influence on shadowees’ practices rather than attempting to eliminate
or mitigate it (Wolcott 1973/2003; McDonald 2005; Engstrom 2010; Gilliat-Ray 2011; Gorup 2016; McDonald 2018). Shadowing can be conceptualized as an intersubjective endeavor in which it is recognized that the researcher will always have an impact on the studied individuals and settings. The site of the study is seen as co-constructed by the researcher and the researched. Once this is acknowledged, the way forward is for shadowers to engage in the practice of reflexivity, that is, to discuss openly and honestly how their presence and predispositions shaped the research. In doing so, shadowers should, among other things, pay attention to how their experiences and interpretations are influenced by gender, age, sexuality, class, and ethnicity (Meunier and Vasquez 2008; Engstrom 2010; Gill 2011; Vasquez et al. 2012).
Managing Closeness and Distance

Acknowledging the co-constructed nature of research settings requires shadowers to pay close attention to the character of the field relations they develop, because these affect data collection and analysis and carry ethical implications. As mentioned earlier, at the most basic level shadowers should make sure to be respectful and empathic toward their participants (Czarniawska 2007). Beyond that, researchers have taken various approaches to building relationships with their participants. A number of authors have argued that developing rapport is important if shadowees are to feel comfortable with an observer’s presence. At the same time, however, while qualitative shadowing is often an interpretive endeavor, shadowers should be careful not to build so close a relationship with shadowees that they lose the ability to develop a critical, analytical stance toward the studied topics and instead fully adopt participants’ perspectives (Wolcott 1973/2003; McDonald 2005; Engstrom 2010). Arman et al. (2012) recognize that researchers may develop an empathic view of their participants but suggest it is important for them to acknowledge how their positive – or negative – attitudes toward their participants affect their interpretations.

Others have explicitly embraced the position that shadowers and shadowees can be friends, although these researchers also warn of the importance of maintaining some distance (Quinlan 2008; Ferguson 2016). Quinlan (2008), for instance, argues that owing to her unfamiliarity with the studied environment, an element of distance was inherent in her study, as she could critically examine the discourses and practices she observed. At the same time, this did not mean that she and her participants could not share their “hopes, apprehensions, and frustrations about [their] jobs, families, and lives in general” (Quinlan 2008, p. 1491).
She developed what she terms a “paradoxical, close-but-distant relationship with [her] informants” (Quinlan 2008, p. 1496). Ferguson (2016) also developed friendships while in the field, observing that some of her participants treated her as a confidant. She acknowledges that these friendships affected the data and her interpretation thereof, which is why she was particularly conscious of delivering a balanced portrayal of the field.

Managing the balance between proximity to and distance from research participants is a question of research ethics as well as of integrity. While a level of empathy is
crucial, too close a relationship may result in feelings of betrayal at a later stage of research, as discussed in more detail below. With regard to methodological integrity, the examples above show that – although they occupy different points on the spectrum – most shadowers aim for a balance between closeness and distance that allows for both a thorough understanding of the shadowees’ perspectives and an academic analysis thereof. This is what Gilliat-Ray (2011) calls “critical proximity,” a combination of “critical distance” and “sympathetic proximity.” In other words, “critical proximity” acknowledges “the need to be self-aware and reflexive, but also sympathetically and fully engaged in the process of understanding the world of another” (p. 482).
Dynamics of Insiderness and Outsiderness

Adding to the complexity of shadowers’ dual role as empathic and analytical observers is their status as insiders or outsiders to the research setting. Several shadowing scholars argue that it is beneficial not to “go in cold” (McDonald 2005, p. 460); that is, it is advisable to have some prior knowledge of the studied setting in order to grasp it more quickly (Engstrom 2010; Bøe et al. 2016; Ferguson 2016). Being an insider, in the sense that the shadower has previous experience of the work done by the participants, means that the researcher will understand the “language” spoken by those under study (Wolcott 1973/2003, p. 12). An insider status can also be advantageous in building field relations, as participants can more easily accept a shadower whom they know can closely relate to their experiences (Wolcott 1973/2003; Urban and Quinlan 2014; Bøe et al. 2016).

Others have pointed out some downsides of insider status. Engstrom (2010), for example, avoided revealing his previous work experience in the studied field. He suggests that although this might be seen as unethical, it resulted in his gaining more information, because participants felt that he, as an outsider, needed more detailed explanations. Drawing on the earlier discussion of positionality, though, this assumption can be challenged: a different researcher status does not necessarily result in more or less data but rather in different data and potentially access to different sites (Gorup 2016). Urban and Quinlan (2014) report on additional ethical issues that can arise from insider status. Having prior experience as a nurse was valuable in building rapport with the shadowed nurses and in being included in patient-related conversations; however, the shadower felt that engaging in those discussions would not be ethical, particularly when they included confidential patient information.
Questions of Representation and Betrayal

The effects of balancing closeness and distance discussed above become particularly visible when reporting the findings (Johnson 2014). While shadowing often involves a rather close relationship between the observer and the observed – enabling the shadower to gain an in-depth understanding of shadowees’
experiences – the voices of the researcher and the researched remain distinct (Gill et al. 2014; McDonald 2018), and it is shadowers who ultimately control the analysis (Ferguson 2016). In most cases, shadowers will preserve a level of outsiderness, which may cause feelings of betrayal or disappointment among participants who expect an empathic analysis (Wolcott 1973/2003; Gill et al. 2014). Researchers describe feeling concerned that their research participants might be offended by, or not fully approve of, the findings (Wolcott 1973/2003; Johnson 2014; Ferguson 2016).

To alleviate such distress, some have suggested being clear with shadowees about the research goals from the very beginning (Wolcott 1973/2003; Engstrom 2010). While this does not fully eliminate the potential for feelings of betrayal upon reading the report, it prepares participants for the likely discrepancy between their own and the researcher’s interpretations. Additionally, while writing, shadowers should try to imagine how participants would react to the researcher’s account and whether they would accept it (Wolcott 1973/2003). Ferguson (2016) describes how she tried to be neither too critical nor too complimentary as a strategy to avoid her participants’ perceiving that she had broken their trust. Another way of dealing with the issue of betrayal is to share the analyses with shadowees before finalizing them. Engaging participants at this stage of research gives them a level of agency over the interpretation of results (Quinlan 2008; Engstrom 2010; Gill et al. 2014; Gorup 2016). At the same time, this may improve the quality of the research, as it involves the validation of findings by the participants. While they might not fully agree with the researcher’s analyses, this practice spurs a more collaborative approach to reporting the results (Quinlan 2008), and their comments can be used to enrich the existing dataset (Arman et al. 2012).
Informing and Critiquing Policy and Practice

Beyond research ethics narrowly defined as pertaining to the conduct of research, shadowing scholars should also consider the broader implications of their studies and the ethics thereof. How do they impact policy and practice? And what is the place of critique in shadowing research? Shadowers report on various ways in which shadowing can help to inform policy and practice at multiple levels. Several scholars have shared that shadowing had a beneficial effect on their shadowees' professional development. Wolcott (1973/2003), for instance, reports that being observed caused his shadowee "to take a closer look at himself," which resulted in the participant's "professional growth" (p. ix). Bøe et al. (2016) went a step further and specifically designed their study to simultaneously develop insights into their research topic and to serve as a tool for "reflective practice" (p. 617) among their shadowees. They encouraged and supported their participants in reflecting on their practice as early childhood leaders based on the actual actions observed during shadowing. Shadowing scholarship can also have implications for practice beyond individual shadowees. Wolcott (1973/2003), for example, was convinced that his study of school principalship should represent a crucial contribution to informing the process
Ethics of Observational Research
of training future principals. He specifically argued that a detailed study based on shadowing can provide a more realistic picture of the system under study and thus better inform future practice than the often superficial findings of the more common survey research. Bartkowiak-Theron and Sappey (2012) see shadowing as a technique that not only enables detailed description of selected policies or programs but also carries potential for explanatory insights which are central to improving said initiatives. Importantly, these authors draw on the critical potential of shadowing research, arguing for it to be "a pluralist tool of systematic, critical evaluation" (p. 14). Employing shadowing as a tool for informing change and developing critique, however, comes with some risks. The "pluralist" nature of shadowing pointed out by Bartkowiak-Theron and Sappey (2012) challenges the integrated vision of organizations typically embraced by management. This may result in shadowers having difficulty in gaining access in the first place (Gorup 2016) and, if they do negotiate access, raises the chances of discrepancy between official organizational views and those of the researcher. Czarniawska (2007) suggests this can be particularly problematic in environments where research participants are theoretically informed about the topics of research – but may draw on theories and explanations that contradict those adopted by the shadower. There may also be differences in perceptions of the benefits of a study. For example, whereas Wolcott (1973/2003) saw his accounts as helpful in informing future principals of the reality of the job, the shadowed principal worried that the researcher's account would discourage people from considering the job of a principal.
Critique may raise ethical questions not only in regard to research participants and broader audiences; some researchers have also wondered about their ethical obligation toward other shadowers: if a shadower is too critical of the studied setting, will this hamper the chances of gaining access for future shadowers (Engstrom 2010; Johnson 2014)? While some participants welcome constructive critique, shadowers might want to be cautious in their formulations of critical insights to avoid negative consequences for participants and researchers alike (Engstrom 2010). Another way to address these tensions may be to embrace more collaborative research approaches. Sclavi (2014), for instance, promotes shadowing as a method "particularly attuned to action research" (p. 66). She has combined it with consensus building approaches in order to stimulate organizational change informed by the perspectives of various organizational actors. Gill (2011) calls for shadowees to be involved in research as "co-collaborator[s]" (p. 129) who contribute to designing the inquiry. In this way, research questions serve not only the researcher's interests but also the interests of those under study. Finally, given that shadowing often enables access to otherwise invisible settings, shadowers should be prepared to respond to potential unethical practices among their participants. This issue has to date not been thoroughly addressed in the shadowing literature; however, it calls for careful consideration for two reasons: the implications of such actions for the studied environments and the effects of not reporting such behaviors on the validity of findings (Bartkowiak-Theron and Sappey 2012; Urban and Quinlan 2014).
Concluding Remarks

The present chapter has addressed a number of ethical dilemmas and questions of integrity in shadowing research by drawing on the rich experiences of shadowers across a number of academic disciplines. In doing so, it has promoted Johnson's (2014) notion of "ethical reflexivity," which encourages shadowers to carefully consider the ethical implications of their research and to develop "contingency plans" for ethical challenges likely to arise (p. 35). Given the relational and emergent nature of shadowing, which has prevented shadowing scholars from developing a fixed set of ethical guidelines, learning about and reflecting on other researchers' experiences may be the best way forward. At the same time, Johnson (2014) argues, making such discussions more visible in the broader research community will provide precedents for ethics committees responsible for assessing shadowing projects and advising shadowers. While this chapter has intended to provide guidance to shadowing scholars, the examples shared throughout suggest that very few clear answers to the identified dilemmas can be given. Rather, they may have raised further questions, but hopefully reflecting on them will help the reader to better plan for and engage in ethical decision-making throughout the research process. Indeed, it is hoped that in this way the chapter contributes to a continuing discussion of methodological and ethical issues arising in shadowing research and the ethical implications of observational methods in general.
Cross-References

▶ Ethics of Ethnography
References

Arman R, Vie OE, Åsvoll H (2012) Refining shadowing methods for studying managerial work. In: Tengblad S (ed) The work of managers: towards a practice theory of management. Oxford University Press, Oxford, pp 301–317
Bartkowiak-Theron I, Sappey JR (2012) The methodological identity of shadowing in social science research. Qual Res J 12(1):7–16
Bøe M, Hognestad K, Waniganayake M (2016) Qualitative shadowing as a research methodology for exploring early childhood leadership in practice. Educ Manag Adm Leadersh 45(4):605–620
Boyko EJ (2013) Observational research – opportunities and limitations. J Diabetes Complicat 27(6):642–648
Cooren F, Matte F, Taylor JR et al (2007) A humanitarian organization in action: organizational discourse as an immutable mobile. Discourse Commun 1(2):153–190
Czarniawska B (2007) Shadowing: and other techniques for doing fieldwork in modern societies. Copenhagen Business School Press, Copenhagen
Czarniawska B (2008) Organizing: how to study it and how to write about it. Qual Res Organ Manag Int J 3(1):4–20
Engstrom CL (2010) Shadowing practices: ethnographic accounts of private eyes as entrepreneurs. Dissertation, Southern Illinois University Carbondale
Ferguson K (2016) Lessons learned from using shadowing as a qualitative research technique in education. Reflective Pract 17(1):15–26
Gill R (2011) The shadow in organizational ethnography: moving beyond shadowing to spect-acting. Qual Res Organ Manag Int J 6(2):115–133
Gill R, Barbour J, Dean M (2014) Shadowing in/as work: ten recommendations for shadowing fieldwork practice. Qual Res Organ Manag Int J 9(1):69–89
Gilliat-Ray S (2011) 'Being there': the experience of shadowing a British Muslim hospital chaplain. Qual Res 11(5):469–486
Gorup M (2016) Studying higher education close-up: unexplored potentials of "shadowing" in higher education research. In: Huisman J, Tight M (eds) Theory and method in higher education research. Emerald Group Publishing Limited, Bingley, pp 135–155
Guillemin M, Gillam L (2004) Ethics, reflexivity, and 'ethically important moments' in research. Qual Inq 10(2):261–280
Johnson B (2014) Ethical issues in shadowing research. Qual Res Organ Manag Int J 9(1):21–40
Kho ME, Duffett M, Willison DJ et al (2009) Written informed consent and selection bias in observational studies using medical records: systematic review. BMJ 338:b866
McDonald S (2005) Studying actions in context: a qualitative shadowing method for organizational research. Qual Res 5(4):455–473
McDonald S (2018) Going with the flow: shadowing in organizations. In: Cassell C, Cunliffe AL, Grandy G (eds) The SAGE handbook of qualitative business and management research methods: methods and challenges. Sage, London, pp 205–218
Meunier D, Vasquez C (2008) On shadowing the hybrid character of actions: a communicational approach. Commun Methods Meas 2(3):167–192
Mintzberg H (1970) Structured observation as a method to study managerial work. J Manag Stud 7(1):87–104
Mintzberg H (1973) The nature of managerial work. Harper and Row, New York
Moser CA, Kalton G (1971) Survey methods in social investigation, 2nd edn. Ashgate, Aldershot (2001 reprint; first published 1958)
Norris A, Jackson A, Khoshnood K (2012) Exploring the ethics of observational research: the case of an HIV study in Tanzania. AJOB Prim Res 3(4):30–39
Orchard J (2008) For debate: should observational clinical studies require ethics committee approval? J Sci Med Sport 11(3):239–242
Quinlan E (2008) Conspicuous invisibility: shadowing as a data collection strategy. Qual Inq 14(8):1480–1499
Rasmussen LS, Gisvold SE, Wisborg T (2014) Ethics committee approval for observational studies. Acta Anaesthesiol Scand 58(9):1047–1048
Sclavi M (2014) Shadowing and consensus building: a golden bridge. Qual Res Organ Manag Int J 9(1):66–68
Urban A-M, Quinlan E (2014) Not for the faint of heart: insider and outsider shadowing experiences within Canadian health care organizations. Qual Res Organ Manag Int J 9(1):47–65
van der Meide H, Leget C, Olthuis G (2013) Giving voice to vulnerable people: the value of shadowing for phenomenological healthcare research. Med Health Care Philos 16(4):731–737
Vasquez C, Brummans BHJM, Groleau C (2012) Notes from the field on organizational shadowing as framing. Qual Res Organ Manag Int J 7(2):144–165
Wolcott HF (1973/2003) The man in the principal's office: an ethnography, 2nd edn. AltaMira Press, Walnut Creek
Yang W, Zilov A, Soewondo P et al (2010) Observational studies: going beyond the boundaries of randomized controlled trials. Diabetes Res Clin Pract 88:S3–S9
Creative Methods: Anonymity, Visibility, and Ethical Re-representation
27
Dawn Mannay
Contents
Introduction 494
Creative Data Production: Background 496
Creative Data Production: Key Issues and Debates 498
Creative Dissemination and Impact: Key Issues and Debates 501
Concluding Thoughts 503
Notes 504
References 505
D. Mannay (*)
School of Social Sciences, Cardiff University, Wales, UK
e-mail: [email protected]
© Springer Nature Switzerland AG 2020
R. Iphofen (ed.), Handbook of Research Ethics and Scientific Integrity, https://doi.org/10.1007/978-3-030-16759-2_21

Abstract

Researchers employ creativity in their studies when designing, conducting, and presenting their data; in this way, creativity is central to academic practice. Within this more general sense of being creative, "creative methods," as presented in this chapter, refer to creativity in the literal sense, where researchers and participants are involved in producing visual images, artifacts, or other representations, through a range of arts-based or performative techniques. This creative and novel "making" is often associated with the field of creative methods in visual studies, particularly in relation to photography and film. The recognizability of people and places in these photographic modes has invoked tensions between revealing and concealing the visual images we produce in social research, in relation to confidentiality and anonymity. This becomes more problematic in a climate where the burgeoning use of new technologies means that images are more easily shared, disseminated, and distorted. Accordingly, once a visual image is created, it becomes very difficult to control its use or remove it from public view if participants decide that they no longer want to be represented in a fixed visual trope for time immemorial or if they decide to withdraw their data from a study. Arguments around anonymity and participant visibility are most closely related to photographs and film. However, it remains important to explore the ethical issues around other creative practices where researchers and participants make something new and understand that such issues can also be contentious. This chapter focuses on techniques of data production, including drawing, collaging, and sandboxing, where participants are involved in creating some form of artifact to represent aspects of their lives and experiences. It examines how these creative approaches generate a number of uncertainties around voice, confidentiality, informed consent, avoidance of harm, and future use. The chapter also explores the opportunities that creative approaches, which produce novel outputs, can offer in re-representing and revisualizing research data, engaging audiences in nontraditional formats, and increasing the impact of research studies. In this way, the chapter offers an insight into the ethical risks and potentialities of creative methods as tools of both data production and dissemination.

Keywords

Anonymity · Creative methods · Collaging · Drawing · Dissemination · Ethics · Qualitative research · Revisualization · Sandboxing · Visual methods
Introduction

Creative methods, as presented in this chapter, refer to creativity in the literal sense, where researchers and participants are involved in producing visual images, artifacts, or other representations, through a range of arts-based or performative techniques. For a long time, researchers have made use of photography, film, and more recently digital video, to creatively produce and represent empirical knowledge. As technology becomes less expensive and more user-friendly, it becomes relatively easy to generate visual data and to edit, crop, and alter these images with computer software. At the same time, image sharing platforms feature pervasively in everyday life, and mobile photography carries the conflicting narratives of control and loss (Uimonen 2016). For example, in terms of control, individuals can edit their online presentations to create idealized images of self and the family, carefully selecting occasions and staging events to curate an acceptable private archive and online presence (Blum-Ross and Livingstone 2017; Lindgren and Sparrman 2014), while simultaneously there are concerns about loss, where mobile devices are stolen or social media accounts hacked, exposing personal data to reuse, reimagining, and recirculation. In this way, everyday photographic practices in the digital era are embraced by users who "act, interact, form identity and play out their social lives online" (Tonks et al. 2015: 326). It is unsurprising then that the use of photographic images and their dissemination has been a key area of ethical interest in academic practice as once research data are placed in the public domain, their impact and interpretation become
extremely difficult to control and they are fraught with tensions around revealing and concealing participants' identities (see, for example, Helen Lomax, Ethics in Visual Imagery Work, this volume). Therefore, photographic techniques in social science research often place researchers within a "moral maze of image ethics" (Prosser 2000: 116), with debates about how images are seen by, shared by, and interpreted by different audiences. However, creative methods are not restricted to the realms of photography and film. As discussed earlier, creative methods can include any technique where researchers and participants are involved in making artifacts to represent aspects of their lives and experiences. Accordingly, research ethics and scientific integrity need to be considered in relation to a far wider range of techniques. There are numerous creative approaches employed by social researchers, including photography, film, timelines, photographic diaries, Plasticine modeling, Lego Serious Play, and drawing, collaging, and sandboxing, the last three of which will be the focus of this chapter. These three techniques, rather than other approaches, were selected for this chapter for several reasons. The ethics of photography and film featuring research participants has been widely debated (see, for example, Gubrium et al. 2014; Hannes and Parylo 2014; Lomax et al. 2011; Prosser 2000; Wiles et al. 2011, 2012); and the ethics of photography is the subject of another chapter in this volume (Helen Lomax, Ethics in Visual Imagery Work). Therefore, a decision was made not to include these forms but to explore more low-technology creative methods, which have received less attention in relation to anonymity, visibility, and ethical practice more widely. The rationale for focusing on drawing and collaging is related to their accessibility; the equipment is relatively inexpensive and portable, meaning that they are modes that researchers can, and do, easily embed into their practice.
However, the associated ethical risks are not always considered; consequently, more literature pertaining to ethical practice can be seen as advantageous. Sandboxing (a technique incorporating miniature figures and trays of sand) is explored here as, although it is rooted in psychoanalytical practice (an ethical issue in itself, which will be discussed later), it is a relative newcomer in the social researcher's toolkit, and there is a lacuna of writing on this mode in terms of research ethics and scientific integrity. This chapter begins with a concise overview of these three forms of creative data production that are drawn on in qualitative research, namely, drawing, collaging, and sandboxing. For each of these techniques, there will be a consideration of the central ethical issues in the process of creating data. As Burnard (2018) argues, creative methods are sharp tools; consequently, participants' involvement in producing creative forms of data can engender more emotive and emotional engagements and reflexivity. The ways in which the creative can provoke such sensitivities can come as a surprise to both participants and researchers, problematizing the concept of "informed" consent. This unconscious element of relationality is often linked with an association between creative tools and psychoanalytical practice, questioning whether psychoanalytically informed techniques can, or should, be applied outside of therapeutic spaces. The chapter will explore this question and the balance between the avoidance of harm and the generation of meaningful data that centralizes the
voice of participants. The experiences, perspectives, and meaning-making of participants are prioritized in the case examples presented in this chapter, which, in line with many qualitative approaches, centralize participants' subjective understandings of social worlds. The ethical tensions underlying these approaches can be problematic both in terms of the actual process of creating data and how this is then communicated to wider audiences. Consequently, the chapter then moves to an exploration of approaches to dissemination. While the outputs associated with drawing, collaging, and sandboxing may not be subject to the levels of recognizability that are presented by photographic images, the chapter contends that researchers still need to guard against the potential for identifying characteristics to threaten the anonymity of participants. As well as considering these dangers, there will also be an exploration of the possibilities that creative methods incorporating multimodal arts-based genres can offer researchers who want to engage audiences with their research findings and generate impact. This can involve the revisualization of research data and working with professionals from the creative industries to develop material outputs that re-represent participants' accounts, without threatening the ethical imperative of confidentiality, where participants have been assured anonymity. The following sections will draw on a range of studies to illustrate the potentialities and limitations of creative approaches and consider how these can be best applied in projects that seek to maintain ethical integrity and promote best practice in social research.
Creative Data Production: Background

Creative data production, as presented in this chapter, represents a group of linked techniques applied to explore the perceptions, subjectivities, and lived experiences of participants, including drawing, collaging, and sandboxing. This approach to data production can be seen as creative because in each case participants are involved in representing their views through making a novel artifact. This section offers the reader a brief overview of these three techniques, providing a contextual base to support the evaluation of these approaches in relation to ethical practice. Drawing is an ancient communicative form that can be a simple sketch or a finished work and can be a representation of an objective reality or of the subjective ideas, imaginings, and designs of the artist. It is a genre that the reader will be familiar with and that most will have engaged with, either as a feature of schooled subjects and work-related tasks or for personal interest and pleasure, utilizing paper, canvas, pens, charcoal, paint, or, more recently, digital technologies. Drawing has been used frequently in research studies with both children and adult participants (Darbyshire et al. 2005; Lyon and Turland 2016; Mitchell 2006; Ross 2005). Collaging presents an opportunity for participants to create a visual representation of their worlds using pictures, photographs, and words from various sources, including magazines, newspapers, and the Internet, which can be physically cut out and glued or created digitally. Although less common than drawing as a technique of qualitative data production, this approach has been increasingly utilized
Fig. 1 Example of a sandscene
by researchers with an interest in identity over the last two decades (Awan 2007; Mannay 2010, 2016; Scott 2018). Sandboxing has been more recently adapted as a method of qualitative inquiry (see Mannay et al. 2017a, b; Mannay and Turney in press). Drawing on the psychoanalytical therapeutic tool the "world technique" (Lowenfeld 1950), sandboxing enables participants to metaphorically represent their ideas and experiences [1]. As illustrated in Fig. 1, this is enabled by participants creating three-dimensional scenes, pictures, or abstract designs in a tray filled with sand, using a range of miniature realistic and fantasy figures and everyday objects. In all three of these research techniques, once the creative activity is completed, researchers often engage in an elicitation interview in which participants describe what they have produced and explain the subjective meanings attached to each element in the completed drawing, collage, or sandscene. This descriptive and explanatory element is absent in some collections created by artists, where the product is deliberately left to "speak for itself," while in some psychoanalytical and therapeutic practices, emphasis is placed on the creative outputs themselves and the therapist's interpretation of what the drawing, collage, or sandscene "means" (Weinrib 2004). The examples discussed in this chapter differ in this respect as they are interested in exploring participants' understandings and meaning-making. Therefore, auteur theory is applied, in which the most salient aspect in understanding an image or artifact is what the creator intended (Rose 2001). This necessitates an elicitation interview where data is produced in the conversation between participant and researcher about what has been created in response to a research topic; and the
practice of asking participants to engage in this “show and tell” model has become a common feature of social science research (Belin 2005; Darbyshire et al. 2005; Lomax et al. 2011; Richardson 2015a).
Creative Data Production: Key Issues and Debates

Creative methods offer some important benefits, but they also raise ethical problems in the process of production, which will be explored in this section. In the case of drawing, there are many different forms applied in the study of social life (see Lyon in press); however, the act of drawing itself can be situated as both experiential and relational:

Drawing is an intimate occupation; it is by nature a First Person activity because of the direct connection between the individual and the marks (s)he makes. Its most fundamental characteristic is that it evolves as it progresses – it is a process. (Cain 2011: 265)
This centralization of process is also important in considering collaging and sandboxing. Even in the role of viewer of an artwork, there is a form of sentient engagement, which is characterized by an active interface between viewer and artwork that is relational, embodied, and affective. When we ask participants to become involved in creative activities, this process of creating also produces an affective encounter, as introducing a visual element to the process of data production can potentially provide different ways of knowing and understanding (Gauntlett 2007). This process asks participants to slow down their perception, to linger and to notice, to reflect on their lives, and to represent them for the researcher, a process that engenders reflexivity and defamiliarization (Mannay 2010). Creative practice, therefore, may be an element that can overcome the restrictions of more traditional question-and-answer techniques, open up experience, and make the familiar strange for participants. Participants set the agenda through the artifact they create (albeit in response to a research theme or question); they also often lead the conversation around their creation, which can quieten the researcher voice and enable more spaces of reflection and listening that broaden the associated conversations. However, this also brings an element of uncertainty to the research process as by providing a gateway to new destinations, the participant, and indeed the researcher, may be confronted with information that was not envisaged at the outset of the project. As discussed later in the chapter, the same could be said of open-ended qualitative interviews; however, rather than providing an account in the moment, participants often create artifacts over hours, days, or weeks, extending the timeframe for reflexive engagement. 
For example, in my own work, and in other research, participants have discussed how the process of drawing, collaging, and sandboxing has brought to the fore new aspects of their lives, commenting that they did not fully realize the significance of particular facets of their lived experience until they engaged in a creative mode of data production (Mannay 2010, 2016; Mannay et al. 2017b; Richardson 2015a; Scott 2018).
Visual and verbal metaphors then can potentially enhance individuals' self-understanding; however, this can be challenging as what surfaces in these accounts can be unexpected and unintended [2]. In terms of ethical practice, this sets up tensions around informed consent and the avoidance of harm. In terms of the former, the open nature of creating a visual representation may take the participant, and the researcher, on a different pathway than that set out in the original project brief. Institutional and regulatory structures ask researchers to obtain informed consent in the early stages of a research project, which mistakenly implies that researchers can know in advance all the purposes data will be put to and the nature of all the data that is produced (Clark in press). Furthermore, as with all social research, there are questions around how "informed" participants actually are, particularly where they have no connection with academia or the creative arts, or represent marginalized communities. Participants may well sign the informed consent sheet with a hasty scribble that notionally meets institutional requirements. However, if they do not know the nature of an academic article, conference, or seminar and do not understand the ways in which engaging with creative practice can generate new insights in their own lives, we should not deceive ourselves that such consent is in any way informed or ethical (Mannay 2014). Informed consent then becomes highly problematic, but there are also issues with the avoidance of harm. This links to the arguments against researchers applying techniques that have a foundation in psychoanalysis or therapeutic practice (Frosh 2010). Drawing, collaging, and sandboxing all feature in clinical practice as tools both to interpret individuals' interior experience and to begin the process of amelioration of the disturbances and discomforts experienced by individuals (Lowenfeld 1950).
In the field of research, particularly in interpretivist approaches, there is also often a desire to understand the unique subjectivities and lifeworlds of participants. However, researchers are not trained therapists and are not in a position, at least not directly, to provide a professionalized route toward amelioration. Does this then mean that creative practice should be avoided by the researcher? The argument posed here is that creative techniques of data production should not be abandoned but that researchers should apply them with particular care. As discussed earlier, the concept of informed consent is problematic, whether or not the project involves creative techniques. However, researchers can take practical steps to introduce participants to the field in which their stories will feature. This can be achieved to some extent by taking participants to a conference session or seminar showcasing other studies, and/or by sharing the textual and visual outputs of previous studies in other ways, to communicate to potential participants the explicit nature of research outputs. Additionally, consent needs to be continually negotiated with a consideration of situated ethics, rather than ending with the signing of a consent form. This involves extending the ethical imperative so that participants are included in decisions about research ethics in a process of ongoing negotiation and adaptation (Gubrium et al. 2014; Kara 2018; Edwards 2019). This more flexible approach could involve participants in the design of a study; offer sufficient opportunity for feedback, clarification, and adaptation in the fieldwork process; and consult with participants about where and how their data will be shared and disseminated (see Lomax 2015).
500
D. Mannay
However, these more participatory approaches also raise ethical issues in relation to who is ultimately responsible for the research process. The requirements of those funding research, institutional practices, and the aims of individual researchers may act to curtail the extent to which participants can authentically contribute to decisions about research ethics and actively negotiate changes. Accordingly, there are both possibilities and limitations attached to initiatives aimed at enabling participants to have a voice in the research process. While some studies can engender forms of meaningful participation, others are accused of being tokenistic and simply "ticking boxes and missing the point" (Batsleer 2008: 141). Reflecting on the potential for these techniques to elicit harm, there is also a responsibility for researchers to make participants aware of the ways in which creative techniques engender reflexivity, defamiliarization, and potential new insights. As Ahmed (2010: xvii) contends:

'Secrets' aren't simply information or details that are passed or not passed. A secret might be something we keep from ourselves, something that is too hard or too painful to come to light.
Consequently, careful groundwork needs to be put in place so that conversations with participants around the purpose and nature of creative projects are sufficiently attentive to the complex dynamics of emotion, memory, and reflexivity. Accordingly, it is important for researchers to explicitly explain to participants that the process of creating an artifact can open up new topics and avenues that were not envisaged or discussed at the beginning of the research process [3]. This imperative is not only focused on the participant. It is essential to "trouble taken-for-granted research practices, in which the researcher is assumed to maintain a neutral and objective standpoint" (Fink 2018: xx), and work needs to be undertaken to recognize the researcher's feelings, investments, and subjectivities. Therefore, researchers also need to be aware of, and equipped to respond to, sensitive topics engendered by creative techniques that fall outside of their original research remit and were not necessarily expected or intended. For example, in my own work exploring education, employment, and intergenerational patterns, the open and reflexive nature of the creative drawing and collaging activities produced accounts that instead centered on domestic and familial violence and abuse (Mannay 2013). It is a key responsibility for researchers to reflexively consider the emotional weight of relational encounters in the field and how creative activities may trigger a wide range of memories and emotions. Of course, this proviso can be applied to qualitative research in a more general sense, as troubling accounts and emotionality are not only a feature of projects that adopt creative techniques. Undertaking research is often an intensely emotional experience (Ehn and Löfgren 2007), and in empirical qualitative studies, the researcher inevitably becomes embedded in the personal, emotional, embodied, and affective worlds of those being researched (Gabb 2008; Loughran and Mannay 2018).
Additionally, this form of embeddedness is necessary to explore participants' unique subjectivities and the boundaries of sensitive topics, which can contribute to informed, rather than ignorant, social policy, interventions, and solutions. However,
27
Creative Methods
501
creative research techniques are sharp tools (Burnard 2018), which present both potentialities and risks, and as with any other sharp tools, the researcher has to use them with care. One way in which researchers can begin this process is by trialing creative methods on themselves, and with colleagues and social networks, before they enter the field to work with participants. This attention to carefulness and reflexivity is necessary before and within the process of creative data production and, as discussed in the following section, equally necessary in projects of dissemination.
Creative Dissemination and Impact: Key Issues and Debates

Researchers regularly have to negotiate decisions about what is "the unsayable and the unspeakable," "who to represent and how," and "what to omit and what to include" (Ryan-Flood and Gill 2010: 3). This can be particularly problematic where issues of anonymity and identification are related to visual materials (Wiles et al. 2008), even when these are not film or photographic representations. For example, a drawing of a house may include the door number, or a collage may be annotated with family members' names to signify the meaning of particular images. For this reason, researchers need to be vigilant with the visual data generated and consider what is safe to present at conferences or in academic articles: once images are made public, they are extremely difficult to control and may remain available indefinitely (Brady and Brown 2013). Of the techniques discussed here, only sandboxing avoids the ethical difficulties associated with the dissemination of other, more recognizable forms of data. The generic nature of sandboxing figures, and their lack of attachment to individual participants beyond the situated and transient nature of the fieldwork, mean that they can be used in ethical, yet impactful, strategies of dissemination (Mannay et al. 2017a). Beyond these concerns with anonymity, there is a wider question about the purpose of research and the ethical obligations that researchers hold in relation to disseminating their findings. Research studies often generate the standard outputs of chapters and journal articles that may seek to preserve the confidentiality of participants while attempting to communicate their messages; but engagement with these outputs is often restricted to academic audiences (Timmins 2015).
However, there has been a growing emphasis on the potential for research to deliver "change or benefit to the economy, society, culture, public policy or services, health, the environment or quality of life, beyond academia" (REF 2011) [4]. This can mean rethinking the modes and means of sharing research to engage wider and more diverse audiences and increase the impact of social research. As Becker (2007: 285) contends, "there is no best way to tell a story about society. . . the world gives us possibilities among which we choose." Therefore, if researchers take the view that creative methods of dissemination and more traditional written forms are not in competition, but rather in collaboration with one another, then the potential for engaging in both modes of communication simultaneously, rather than only one or the other, becomes apparent. As illustrated
in the following examples, creative mediums of dissemination are increasingly utilized by researchers, who have an ethical imperative to both protect the confidentiality of their participants and audience their work beyond the confines of academia to increase understandings and negotiate social change. These examples work on the basis of a creative re-representation and revisualization of research findings. Drawing on the genre of drama, Richardson (2015b: 615) engaged with performance-based dissemination by employing “theatre as a safe-space.” The play, titled Under Us All, built on the testimonies of participants, exploring how notions of masculinity, Irishness, religion, and family have shifted over time. Verbatim accounts of research participants were performed by a professional actor to (re)present their accounts but also ensure anonymity. Although participants were not actors, they were partners in the co-creation of the play and provided input to the script. For Richardson, organizing a theatre commission moved dissemination “beyond the confines of the ‘Ivory Tower’, outside the bounded walls of words in books and texts – and through spoken word and performance” (Richardson 2015b: 621). Film can also be used while respecting the confidentiality of participants, illustrated by the work of Hernandez-Albujar (2007) who was interested in the migratory experiences of Latin American citizens in Italy. Hernandez-Albujar intended to produce an ethnographic film of participants’ lived experiences; however, the tenuous legal status of the participants meant that many did not want to be captured on film. This led to a change in approach, and in-depth qualitative interviews were conducted. Yet, rather than simply communicating these in text-based outputs, Hernandez-Albujar produced a symbolic representation of the participants’ accounts through the medium of film. 
Drawing on shots of the local neighborhood and objects selected to act as metaphors for key themes, the film, although a representation, enabled an alternative way for viewers to “get to know” migrant communities. Similarly, Mannay et al. (2017a, c) conducted research with care experienced children and young people in Wales, UK. Given the educational and lifecourse inequalities faced by those in and leaving care, the participants were eager that their messages and recommendations were seen, heard, and acted upon. Again, this posed difficulties in terms of direct representation because the requirement to maintain anonymity meant that images of participants and their voices could not be featured, which can create a disjuncture in participatory approaches premised on “giving voice” to children and young people. Yet it was important to produce multimodal resources that could engage teachers, practitioners, foster carers and young people. Drawing on the verbatim interviews with participants, the research team worked with professionals in the creative industries to re-represent and revisualize these accounts. The outputs included five films; six graphic art materials; four music videos; a magazine for young people, Thrive; a magazine for foster carers, Greater Expectations; and an education charter. The online community of practice ExChange: Care and Education was also developed [5], which offers a platform for these multimodal project outputs alongside a range of other free resources for foster carers, young people, and other key stakeholders. Producing a range of creative multimodal outputs can extend both the reach and impact of research studies and go some way toward raising the voice of participants
while still protecting their confidentiality. However, the creation of multimodal outputs is not without issue, and such outputs must be seen as a re-representation and revisualization of participants' accounts. In the third study discussed in this section, the researchers worked carefully with artists, and with young people outside of the research, to create voice-overs and translate the key research messages from interview transcripts into artwork, songs, and films. This work involved some tensions, and it was often difficult to ensure that the re-representations aligned with the initial interviews (see Mannay et al. 2019; Staples et al. 2019). This was important because, although creative outputs are often associated with poetic license [6], a key priority for the researchers was to retain, as closely as possible, the experiences of and messages from the participants. For example, the children and young people who provided voice-overs in the films were not the original research participants; some were care experienced and others were not, and these voices were not always well matched. It is difficult to recreate the tone, emphasis, and depth of meaning of the original accounts, and this was not always achieved; nevertheless, the central messages of the accounts were retained. Similarly, the artwork, magazine articles, and songs were also not direct representations, but rather versions edited to suit each mode of media. However, overall, the researchers and the children and young people who have seen and commented on the outputs feel that they have kept an active sense of the stories shared, the experiences documented, and, importantly, the key messages about what needs to change.
Concluding Thoughts

This chapter has reflected on ethical issues and challenges in the field of creative research. Firstly, it explored creative data production, in relation to the techniques of drawing, collaging, and sandboxing, and the ways in which these can intensify difficulties with informed consent and the avoidance of harm. As discussed, creative techniques have much to offer in terms of their potential to engender reflection, enable more participant-led data production, and move beyond the confines of the set question-and-answer interview style to broaden and extend discussions of, and outside of, the topic under examination. However, this flexibility can have unintended consequences, and for this reason, researchers need to consider the ethics of creative approaches. As outlined, there are practical preparatory steps that researchers can take to ensure informed consent is more "informed," and actively engaging with a range of creative approaches first hand can provide a better understanding from which to apply them in the field. Once in the field, it is important to apply situated ethics and a high level of reflexivity (Coffey 1999) and to be open to changing and adapting techniques where required. Secondly, the chapter considered modes of creative dissemination, emphasizing the requirements to consider the risks of being seen, and how researchers need to be vigilant with the materials they share in order to protect participants' anonymity. Alexandra's (2015: 43) question, "what impact does voice have if no one is listening?," was also
explored in relation to the ethical imperative for researchers to effectively communicate their findings and enable forms of political, personal, and practice-based audiencing. Many busy practitioners and key stakeholders may not have time to read detailed reports, and the often dry and obtuse nature of academic writing may not be the best way to engage target audiences. However, creative outputs can facilitate a momentary connection with the accounts of and messages from participants and potentially encourage greater attention to research findings and associated actions. In this way, multimodal forms of creative output can facilitate a productive dialogue with diverse viewing publics. Although they are re-representations and revisualizations, they can be designed to retain anonymized traces of the participants and enable a differential form of authentic voice, providing an opportunity to translate, re-represent, and revisualize research findings in creative, ethical, engaging, and impactful ways.
Notes

1. Developed by Margaret Lowenfeld, the 'world technique', employing sand trays and figures, is predominantly used as a form of child and adolescent therapy. The technique provides a safe, secure, and thought-provoking environment to elicit emotional states, and an emphasis is placed on the child's meaning making and explanation of the scenes they create with figures in the sand trays.
2. Examples of the use of visual metaphors are commonly seen in collaging and sandboxing activities. For example, in a sandboxing activity one participant used a battery to represent falling energy levels, a basket to represent their brain, and a figure of a karate man to act as a metaphor for a range of barriers in their life that were experienced like a 'punch in the face' (see Mannay and Turney (in press)). These objects and figures did not have an actual existence in the participant's life, but they represented their subjective experiences of their situation. In this way, the objects were used as figurative metaphors for the participant, and their meaning was communicated to the researcher through the accompanying elicitation interview.
3. This point is also important in relation to training students and researchers to use creative data production techniques. In my workshops, I discuss the potential for creative methods to bring up troubling data even outside of the research process. For example, in one of my training activities I have asked delegates to 'draw a map of their journey to the workshop'. The task itself seems an easy one, and not self-evidently distressing. However, one participant found this activity upsetting because drawing the journey, which included leaving her child in day care, made her reflect on this separation and the intense feelings of loss that she had been attempting to repress in her everyday working life.
4. It should be noted that this emphasis on change and impact beyond academia has been critiqued. For example, it can put additional pressures on academic staff and be seen as part of an audit culture where researchers are continually measured and evaluated (see Knowles and Burrows 2014).
5. ExChange: Care and Education - http://sites.cardiff.ac.uk/cascade/looked-after-children-and-education/
6. Poetic license is the act, by a writer, poet, filmmaker, or other member of the creative industries, of changing facts or rules to make a story or poem more interesting or effective.
References

Ahmed S (2010) Foreword. In: Ryan-Flood R, Gill R (eds) Secrecy and silence in the research process. Routledge, London, pp 1–12
Alexandra D (2015) Are we listening yet? Participatory knowledge production through media practice: encounters of political listening. In: Gubrium A, Harper K, Otañez M (eds) Participatory visual and digital research in action. Left Coast Press, Walnut Creek, pp 41–56
Awan F (2007) Young people, identity and the media: a study of conceptions of self-identity among youth in southern England. PhD thesis, Bournemouth University
Batsleer J (2008) Informal learning in youth work. Sage, London
Becker HS (2007) Telling about society. University of Chicago Press, Chicago
Beilin R (2005) Photo-elicitation and the agricultural landscape: "seeing" and "telling" about farming, community and place. Vis Stud 20(1):56–68
Blum-Ross A, Livingstone S (2017) 'Sharenting', parent blogging, and the boundaries of the digital self. Pop Commun 15(2):110–125
Brady G, Brown G (2013) Rewarding but let's talk about the challenges: using arts based methods in research with young mothers. Methodol Innov Online 8(1):99–112
Burnard P (2018) Arts-based research methods: a brief overview. Presented at the Creative Research Methods Symposium, 2 July 2018, University of Derby
Cain P (2011) Evolution of the practitioner. Intellect, Bristol
Clark A (in press) Visual ethics beyond the crossroads. In: Pauwels L, Mannay D (eds) The Sage handbook of visual research methods, 2nd edn. Sage, London, pp xx–xx
Coffey A (1999) The ethnographic self: fieldwork and the representation of identity. Sage, London
Darbyshire P, MacDougall C, Schiller W (2005) Multiple methods in qualitative research with children: more insight or just more? Qual Res 5(4):417–436
Edwards V (2019) How might we work more ethically with children and young people: the case of ethics.
http://www.exchangewales.org/single-post/2019/05/15/How-might-we-work-more-Ethically-with-Children-and-Young-People-The-%E2%80%98Case-of-Ethics%E2%80%99
Ehn B, Löfgren O (2007) Emotions in academia. In: Wulff H (ed) The emotions: a cultural reader. Berg, Oxford, pp 101–117
Fink J (2018) Foreword. In: Loughran T, Mannay D (eds) Emotion and the researcher: sites, subjectivities and relationships. Studies in qualitative methodology, vol 16. Emerald, Bingley, pp xix–xxi
Gabb J (2008) Researching intimacy in families. Palgrave Macmillan, Basingstoke
Gauntlett D (2007) Creative explorations: new approaches to identities and audiences. Routledge, London
Gubrium A, Hill H, Flicker S (2014) A situated practice of ethics for visual and digital methods in public health research and practice: a focus on digital storytelling. Am J Public Health 104(9):1606–1614
Hannes K, Parylo O (2014) Let's play it safe: ethical considerations from participants in a photovoice research project. Int J Qual Methods 13(1):255–274
Hernandez-Albujar Y (2007) The symbolism of video: exploring migrant mothers' experiences. In: Stanczak GC (ed) Visual research methods: image, society and representation. Sage, London, pp 281–386
Kara H (2018) Research ethics in the real world: euro-western and indigenous perspectives. Policy Press, Bristol
Lindgren A, Sparrman A (2014) Blogging family-like relations when visiting theme and amusement parks. Cult Unbound 6:997–1013
Lomax H (2015) Seen and heard? Ethics and agency in participatory visual research with children, young people and families. Fam Relatsh Soc 4(3):493–502
Lomax H, Fink J, Singh N, High C (2011) The politics of performance: methodological challenges of researching children's experiences of childhood through the lens of participatory video. Int J Soc Res Methodol 14(3):231–243
Loughran T, Mannay D (2018) Introduction: why emotion matters. In: Loughran T, Mannay D (eds) Emotion and the researcher: sites, subjectivities, and relationships. Studies in qualitative methodology, vol 16. Emerald, Bingley, pp 1–18
Lowenfeld M (1950) The nature and use of the Lowenfeld world technique in work with children and adults. J Psychol 30(2):325–331
Lyon P (in press) Using drawing in visual research: materializing the invisible. In: Pauwels L, Mannay D (eds) The Sage handbook of visual research methods, 2nd edn. Sage, London, pp xx–xx
Lyon P, Turland M (2016) Manual drawing in clinical communication: understanding the role of clinical mark-making. Vis Methodol J 5(1):39–44
Mannay D (2010) Making the familiar strange: can visual research methods render the familiar setting more perceptible? Qual Res 10(1):91–111
Mannay D (2013) 'I like rough pubs': exploring places of safety and danger in violent and abusive relationships. Fam Relatsh Soc 2(1):131–137
Mannay D (2014) Storytelling beyond the academy: exploring roles, responsibilities and regulations in the open access dissemination of research outputs and visual data. J Corp Citizenship 54:109–116
Mannay D (2016) Visual, narrative and creative research methods: application, reflection and ethics.
Routledge, Abingdon
Mannay D, Turney C (in press) Sandboxing: a creative approach to qualitative research in education. In: Delamont S, Ward M (eds) Handbook of qualitative research in education. Edward Elgar, Cheltenham, pp xx–xx
Mannay D, Staples E, Edwards V (2017a) Visual methodologies, sand and psychoanalysis: employing creative participatory techniques to explore the educational experiences of mature students and children in care. Vis Stud 32(4):345–358
Mannay D, Creaghan J, Gallagher D, Marzella R, Mason S, Morgan M, Grant A (2017b) Negotiating closed doors and constraining deadlines: the potential of visual ethnography to effectually explore private and public spaces of motherhood and parenting. J Contemp Ethnogr. https://doi.org/10.1177/0891241617744858
Mannay D, Evans R, Staples E, Hallett S, Roberts L, Rees A, Andrews D (2017c) The consequences of being labelled 'looked-after': exploring the educational experiences of looked-after children and young people in Wales. Br Educ Res J 43(4):683–699
Mannay D, Roberts L, Staples E, Ministry of Life (2019) Lights, camera, action: translating research findings into policy and practice impacts with music, film and artwork. In: Mannay D, Rees A, Roberts L (eds) Children and young people 'looked after'? Education, intervention and the everyday culture of care in Wales. University of Wales Press, Cardiff, pp 210–244
Mitchell LM (2006) Child centered? Thinking critically about children's drawings as a visual research method. Vis Anthropol Rev 22(1):60–73
Prosser J (2000) The moral maze of image ethics. In: Simons H, Usher R (eds) Situated ethics in educational research. Routledge, London, pp 116–132
REF (2011) Assessment framework and guidance on submissions. REF 02.2011. www.ref.ac.uk/pubs/2011-02/. Accessed 16 Aug 2018
Richardson MJ (2015a) Embodied intergenerationality: family position, place and masculinity. Gend Place Cult 22(2):157–171
Richardson MJ (2015b) Theatre as safe space?
Performing intergenerational narratives with men of Irish descent. Soc Cult Geogr 16(6):615–633
Rose G (2001) Visual methodologies: an introduction to researching with visual materials. Sage, London
Ross NJ (2005) Children's space. Int Res Geogr Environ Educ 14(4):336–341
Ryan-Flood R, Gill R (eds) (2010) Secrecy and silence in the research process. Routledge, Abingdon
Scott C (2018) Elucidating perceptions of ageing through participatory drawing: a phenomenographic approach. Unpublished PhD thesis, University of Brighton
Staples E, Roberts L, Lyttleton-Smith J, Hallett S, CASCADE Voices (2019) Enabling care experienced young people's participation in research: CASCADE voices. In: Mannay D, Rees A, Roberts L (eds) Children and young people 'looked after'? Education, intervention and the everyday culture of care in Wales. University of Wales Press, Cardiff, pp 196–209
Timmins F (2015) Disseminating nursing research. Nurs Stand 29(48):34–39
Tonks A, Lyons AC, Goodwin I (2015) Researching online visual displays on social networking sites: methodologies and meanings. Qual Res Psychol 12(3):326–339
Uimonen P (2016) I'm a picture girl: mobile photography in Tanzania. In: Gomez Cruz E, Lehmuskallio A (eds) Digital photography and everyday life: empirical studies on material visual practices. Routledge, London, pp 19–34
Weinrib EL (2004) Images of the self: the sandplay therapy process. Temenos, Cloverdale
Wiles R, Prosser J, Bagnoli A, Clarke A, Davies K, Holland S, Renold E (2008) Visual ethics: ethical issues in visual research. ESRC National Centre for Research Methods Review Paper NCRM/011. ESRC National Centre for Research Methods, University of Southampton. http://eprints.ncrm.ac.uk/421/1/MethodsReviewPaperNCRM-011.pdf. Accessed 16 Aug 2018
Wiles R, Clark A, Prosser J (2011) Visual ethics at the crossroads. In: Pauwels L, Margolis E (eds) Sage handbook of visual research methods, 1st edn. Sage, Thousand Oaks, pp 685–706
Wiles R, Coffey A, Robinsons J, Heath S (2012) Anonymisation and visual images: issues of respect, 'voice' and protection.
Int J Soc Res Methodol 15:41–53
Ethics of Discourse Analysis
28
Meta Gorup
Contents
Introduction . . . . . . . . . . 510
Questions of Ethics and Integrity in Discourse Analysis . . . . . . . . . . 511
Discursive Psychology . . . . . . . . . . 513
Research Planning and Acquiring Ethical Approval . . . . . . . . . . 514
Obtaining Voluntary and Informed Consent . . . . . . . . . . 514
Researching Sensitive Topics . . . . . . . . . . 515
Securing Anonymity and Confidentiality . . . . . . . . . . 516
Enhancing Analytical Rigor . . . . . . . . . . 516
Discursive Psychology Beyond Academic Analyses . . . . . . . . . . 517
Narrative Inquiry . . . . . . . . . . 518
Facing Ethics Committees . . . . . . . . . . 518
Gaining Informed Consent . . . . . . . . . . 519
Researcher-Participant Relationships . . . . . . . . . . 519
Protecting Participant Confidentiality and Anonymity . . . . . . . . . . 520
Interpretation and Representation . . . . . . . . . . 521
Narrative Inquiry as Empowerment . . . . . . . . . . 523
Critical Discourse Analysis . . . . . . . . . . 523
Rigorous and Reflexive Critical Discourse Analysis . . . . . . . . . . 524
Critical Discourse Analysis as an Ethical Endeavor . . . . . . . . . . 526
Concluding Remarks . . . . . . . . . . 527
References . . . . . . . . . . 528
Abstract
This chapter discusses ethical dilemmas and questions of integrity characteristic of the conduct of discourse analysis (DA). Following a general introduction to the ethics and integrity of DA, the text expands on three discourse analytic approaches: discursive psychology, narrative inquiry, and critical discourse analysis. The chapter presents how ethical and integrity concerns manifest themselves and how they may be addressed in regard to the specifics of each of these forms of DA. Readers are introduced to the basic principles of research ethics – such as voluntary and informed consent, confidentiality and anonymity, and minimizing harm – as well as those distinctive to the interpretive and critical endeavors characteristic of much DA. These closely relate to questions of research integrity, and more specifically to the grounds of analysts' interpretations, representation of research participants, and the place of discourse scholars' critique and application of their analyses. The chapter concludes by calling for a continual and honest discussion of ethical dilemmas if we are to ensure that consideration of ethics becomes inherent to any discourse analytic endeavor.

M. Gorup (*)
Ghent University, Ghent, Belgium
e-mail: [email protected]

© Springer Nature Switzerland AG 2020
R. Iphofen (ed.), Handbook of Research Ethics and Scientific Integrity, https://doi.org/10.1007/978-3-030-16759-2_22

Keywords
Ethics · Integrity · Discourse analysis · Discursive psychology · Narrative inquiry · Critical discourse analysis
Introduction

Discourse analysis (DA) is here defined as the analysis of various forms of texts, wherein it is understood that social reality is constructed and given meaning through language use. This is not to say that there is no reality outside the text, but that elements of social reality – such as power relations, individual and collective identities, or concepts like gender and race – are constructed through social interactions. Hence, at its most basic, the interest of the discourse analyst lies in language use and its role in the construction of social reality. As an endeavor whose central focus is represented by texts produced by and/or about people, DA entails a number of ethical issues common to research involving human participants. Discourse analysts should pay attention to issues of informed and voluntary consent, confidentiality and anonymity, and minimizing the risk of harm. Furthermore, they should be mindful of ethical and research integrity dilemmas likely to arise in the process of interpretation, publication, and potential application. When is a discourse analytic interpretation considered valid? Are researchers' analyses true to participants' perspectives – and are they supposed to be? Should discourse analysts criticize the status quo and attempt to change society? Before we turn to the discussion of the ethical and integrity concerns raised above, it is important to recognize that DA is not a unified field of study. Rather, it encompasses an array of discourse analytic approaches, including conversation analysis; discursive psychology; various forms of linguistic discourse analysis, such as sociolinguistics and applied linguistics; narrative inquiry; poststructuralist discourse analysis; and critical discourse analysis. This chapter focuses specifically on three forms of DA: discursive psychology (DP), narrative inquiry (NI), and critical discourse analysis (CDA). These approaches and their aims vary significantly.
28  Ethics of Discourse Analysis  511

DP scholars look at language as constructive of psychological phenomena and focus on micro-level elements of interaction, such as turn-taking or the use of specific words and phrases. In doing so, they commonly detach
themselves from study participants and the details of their particular experiences. Scholars conducting NI, on the contrary, cultivate an interest in collecting and analyzing individuals’ narratives. In the process, they frequently develop empathic attitudes towards participants. Critical discourse analysts dedicate themselves to a different goal altogether: they view language as reflective and constructive of ideologies that constitute an unfair social order. The aim of CDA is to provide a critique of such discourses. Thus, scholars working in this field often occupy an explicitly value-laden position towards the texts they work with.

Due to the variation across these approaches, both in terms of analytical foci and attitudes towards the analyzed texts and their authors, ethical and integrity dilemmas manifest themselves in rather different ways depending on the specifics of each form of DA. In fact, although this chapter only elaborates on three types of DA, the selected approaches illustrate the wide range of characteristics of DA at large. The following discussions of DP, NI, and CDA collectively demonstrate a broad spectrum of ethical and integrity concerns pertinent to discourse analytic endeavors of almost any kind. However, given the limitations of space, the three discussions are necessarily concise and draw a rather generalized picture, especially considering that each of the examined discourse analytic approaches is itself further fragmented. At the same time, given the commonalities across these variations of DA and the intent to provide three self-standing discussions, some repetition cannot be avoided. Despite this, and despite the fact that certain ethics and integrity questions might be more relevant to some forms of DA than others, readers will benefit from sensitizing themselves to the concerns raised by the different types of DA.
A broad understanding of what conducting ethical DA means might further contribute to one’s ethical awareness and skill in both identifying and addressing potential dilemmas. That said, while this chapter hopefully provides a set of helpful guidelines, readers should note the vital importance of context and of case-by-case consideration when engaging in ethical decision-making.

The remainder of this chapter first provides a general overview of ethics and integrity concerns in DA. It then expands on ethical issues and questions of research integrity as manifested in three forms of DA: DP, NI, and CDA. The chapter concludes with a discussion of overarching issues pertaining to ethics and integrity in the context of DA and offers suggestions on how to encourage sustained awareness of ethical and integrity dilemmas among discourse scholars.
Questions of Ethics and Integrity in Discourse Analysis

The present section provides a general outline of ethical and integrity concerns in DA. At the most basic level, discourse researchers should be aware of participants’ legal rights, such as those related to the use of protected documents – for example, medical or legal records – or those linked to ownership and secondary use of data, whereby permission might need to be obtained for the purposes of reproduction and/or re-use. Next, and very importantly, studies should aim to minimize the risk of serious harm to research participants (Taylor 2001b; Rapley 2007).
512
M. Gorup
Additionally, discourse analysts should respect participants’ autonomy by taking precautions with regard to voluntary and informed participation. This means prospective research participants should be given a clear choice as to whether or not they wish to participate. They should also be provided with information detailing the nature of their consent. However, gaining voluntary and informed consent may not be straightforward because the researcher generally possesses much more detailed knowledge of the study than the participants. This raises the question of the extent to which participants can be truly informed before consenting to be part of the research (Taylor 2001b; Rapley 2007). While participants will typically be aware of their partaking in the study, for example by agreeing to be interviewed, they are unlikely to fully realize how the data will be used and interpreted by the discourse analyst. Arguably, a certain level of deception is thus inevitable – although not necessarily harmful (Hammersley 2014; cf. Taylor and Smith 2014).

Further, while securing research participants’ confidentiality and anonymity is not always possible – or desirable – most discourse analysts pay close attention to the protection thereof. The principle of confidentiality describes the researcher’s responsibility to prevent participants from being identified as the source of the data, for example by arranging secure storage. Employing the principle of anonymity further assists in securing confidentiality and typically involves removing identifying information about the participants, for example through the use of pseudonyms and by limiting or changing the details provided about people and places. Beyond this, researchers should also respect participants’ privacy and dignity by considering how much and what type of data actually need to be collected to address their research questions.
Overall, the discourse analyst should show respect for her or his participants, making sure to consider how the study might be affecting them throughout all stages of research (Taylor 2001b; Rapley 2007).

Closely linked to questions of research ethics are integrity-related ones. Scientific integrity includes adherence to ethical research; however, it goes beyond ethics. The researcher is responsible not only for the ethical treatment of research participants but also for rigorous and honest conduct throughout a research project. In positivist social studies, the criteria that warrant rigorous research are those of validity and reliability, whereby validity assures that the findings correspond to objective reality, and reliability guarantees the stability of findings across different contexts. In constructionist research – including a number of discourse analytic approaches – drawing on reliability and validity as defined above is not feasible. Constructionist approaches define social reality as constructed through situated language in use, which implies that any element of social reality can be interpreted in multiple ways and thus that no single, objective truth is possible in the first place (Wood and Kroger 2000; Taylor 2001a, b; Rapley 2007).

Discourse analysts hence generally draw on an alternative set of criteria, one that emphasizes credibility and plausibility (Rapley 2007). Credibility of discourse analytic research can be enhanced in a number of ways: by conducting an analysis of deviant cases, by comparing one’s findings to existing work, and by discussing one’s findings with others, including colleagues or, in some cases, research participants (Rapley 2007; see also Wood and Kroger 2000; Taylor 2001a). Definitions of credibility, however, may differ depending on the discourse analytic tradition. Some discourse analytic approaches give more weight to participants’
perspectives – in which case feedback from participants might be helpful. Other approaches advocate framing the analyses primarily according to a selected theoretical framework, in which case so-called “member checking” is not considered an appropriate tool for the evaluation of credibility or analytic quality (Taylor 2001a, b). Contributing to the plausibility of research is a commitment to more open and critical reflection and reporting on our practices as researchers, including on ethical dilemmas faced in the research process (Rapley 2007; see also Wood and Kroger 2000; Taylor 2001a).

In addition, some authors call for the practice of reflexivity in a broader sense, whereby they recognize that certain aspects of analysts’ identities influence the research process and that these should be openly acknowledged. For example, one’s gender, age, upbringing, and world view may affect the choice of research topic and questions, data collection – especially when researchers establish direct relationships with their participants, for example during interviews – and data analysis. This practice mirrors the constructionist assumption of multiple interpretations – of which a researcher’s analysis is just one – thus encouraging the acknowledgment of “inherent researcher bias” (Baker and Ellece 2011, p. 113) and its influence on a study, rather than denying or ignoring it (Taylor 2001b; Baker and Ellece 2011).

The practice of reflexivity extends to the need for discourse scholars to reflect on the political nature of their discourse analytic endeavors. Some would argue that because all language use is political, the work of discourse analysts – that of studying how language constructs social reality – will necessarily involve dealing with politics.
By “politics” we mean that engaging in language use implies making statements about the distribution of “social goods in a society” – we directly or indirectly remark on who has power or privileged status, for example according to certain societal rules (Gee 2011, pp. 7–9). Though discourse scholars working in different strands of DA might engage in such political endeavors more or less explicitly, they should always reflect on the moral and ethical bases of their research and the implications thereof for their research participants, individually and collectively (Wood and Kroger 2000). While discourse analytic research can be, and has been, used to inform recommendations for social interventions or to help empower certain social groups, often by way of critique of the status quo, discourse scholars should always be “cautious about claiming to speak for others” (Taylor 2001a, p. 327).

While understanding the nature of ethics and integrity concerns in DA at large is important, it does not suffice, because the issues raised above manifest themselves differently in specific forms of DA. The sections below look in turn at the particulars of ethical and rigorous research practice in relation to three discourse analytic approaches: discursive psychology, narrative inquiry, and critical discourse analysis.
Discursive Psychology

Discursive psychology (DP) aims at formulating constructionist explanations of psychological phenomena. That is, by looking at language in use – often in the form of naturally occurring interactions – DP analysts aim to understand how psychological issues are constructed in and through talk (Potter and Wetherell 1987; Potter 2012a). In doing so, DP scholars are strongly committed to ethical
conduct, not least because they often study topics that may be considered sensitive – such as disability, suicide, abuse, sexism, ageism, and racism – among vulnerable populations, including people with disabilities and mental health issues or victims and witnesses of various kinds of abuse (Hepburn and Potter 2004; Potter 2010; Wiggins 2017). This section addresses some of the most pressing questions of ethics and integrity in DP.
Research Planning and Acquiring Ethical Approval

Paying attention to research ethics is important not only during data collection, when ethical concerns tend to be most apparent; ethics should already be considered in the early stages of planning a research project. From the outset, researchers should think about the group of participants they would like to recruit and the topics they aim to explore. Will participants represent a vulnerable group? Can the topic of interest be considered sensitive? If so, the ethical issues likely to arise will be especially important and challenging, and researchers should be mindful of them from the very beginning (Wiggins 2017).

DP scholars will need to think thoroughly about the ethical implications of their research when they seek approval from an institutional ethics committee to conduct a study. Most ethics committees will require the researcher to complete an application form and provide documents such as information sheets and consent forms (for an example of a consent form, see Wiggins 2017, pp. 81–82). Further, the researcher will be asked to provide details regarding the research participants and the nature of the data to be collected, along with an explanation of how the data are to be stored and used (Wiggins 2017). In some cases, ethical approval will also be needed from other organizations, when conducting research with them and/or with individuals affiliated to them. The process of gaining ethical approval can be lengthy – especially if it is required from more than one committee – and may thus feel frustrating. At the same time, however, this process gives the researcher an opportunity to engage deeply with and address potential ethical concerns well before they arise (Wiggins and Potter 2017).
Obtaining Voluntary and Informed Consent

Once ethical approval is granted, the next challenge is to establish contact with the selected organization and/or research participants and obtain organizational and/or individual agreement to conduct research. This is not only a logistical issue, but also a matter of ethics. Establishing trusting relationships with potential research participants is crucial, particularly when researching topics that are likely to be considered sensitive (Wiggins and Potter 2017). Researchers should pay close attention to potential anxieties among research participants and give participants the opportunity to discuss these issues prior to commencing the study. In some instances, agreement will need to be reached at
multiple levels, as was the case in Hepburn and colleagues’ research on the helpline run by the National Society for the Prevention of Cruelty to Children (Hepburn and Potter 2004; Potter 2012a). The project included a simultaneous study of helpline employees and callers, and gaining voluntary and informed consent from individuals in each of the two groups presented distinct challenges. For the employees, one of the central issues was respecting their right to anonymity as part of an assurance that the research would not be used to evaluate their performance. As for the callers, who typically called to report on very sensitive issues, the helpline’s managers emphasized the necessity of the callers being able to ask questions about the study prior to consenting to participate. In light of this, the management asked the researchers to adjust their initial proposal of playing a recorded message at the beginning of the phone call – stating that the call might be recorded for the purposes of research and training – on the grounds of which the callers could opt in or out of participation. Instead, the researchers prepared a consent document to be read by helpline employees at the beginning of each call, giving the employees the authority to deal with gaining informed, voluntary consent in the ways they felt most appropriate in each specific situation. This approach suited both the researchers – who needed to comply with their professional association’s standards – and the organization under study (Hepburn and Potter 2004).

Gaining voluntary, informed consent may be fraught with a number of other ethical tensions. For example, if research participants are children, consent to participation should be given by their legal guardians but also by the children themselves (Rapley 2007; cf. Wiggins and Potter 2017).
If research participants are to be recorded on video, they should always agree to it, with the researcher seeking permission in advance of the day of recording and giving participants an opportunity to withdraw their data. Additional consent should be sought if the recorded data are to be subsequently used in conference presentations and/or for secondary analysis.

Gaining informed consent may also be delicate when conducting online research, and there are conflicting views as to how best to address this issue. For example, if the research site is a forum, openly introducing the research and seeking consent from the forum’s moderators and members can be seen as an ethical requisite. However, doing so may cause anxiety among forum members, especially if the topics of discussion are considered sensitive (Wiggins 2017).
Researching Sensitive Topics

As implied above, additional care is required when researching topics potentially considered sensitive, and it has been debated whether DP scholars should pursue them in the first place. Drawing on her study conducted in suicide discussion forums, Wiggins (2017) argues that, although ethically charged, such issues should be researched precisely “because they are sensitive and life-threatening” (p. 211; emphasis in the original). On the other hand, Willig (2004), whose work has explored multiple strands of DA, including DP, argues that such research should
be avoided if it requires participants to be interviewed and to actively reflect on themes related to suffering or health problems solely for the purpose of research. Interviews are typically “facilitative” and “non-judgmental,” thus encouraging the interviewees to open up (p. 166). However, if the data are subsequently subjected to DP analysis, the analyst’s primary concern will lie not in the participants’ experiences but in their discursive practices and the resources they draw upon (Willig 2004). Interestingly, views are divided with regard to this latter point. While one could argue that engaging in such practice is unethical because it “silences” participants’ accounts of suffering (Willig 2004, p. 168), others have suggested that potential research participants might actually be reassured by the fact that DP focuses on the details of interaction rather than on individuals (Wiggins 2017).

This not only raises questions about suitable data collection techniques when researching sensitive topics from a DP perspective – implying that a study of naturalistic interactions might be less disruptive and deceptive than interviewing – but also calls for a debate regarding the goals of DP beyond providing academic analyses, further discussed in the sections below.
Securing Anonymity and Confidentiality

Once the data are collected, some practical steps should be taken to address the research participants’ right to anonymity and confidentiality, typically considered standard procedure in DP. Data files should be anonymized, password-protected, and securely stored. If necessary for the purposes of public presentations or dissemination, anonymization tools are available for audio and video recordings (Wiggins and Potter 2017). With the exception of short excerpts, unanonymized data should not be sent via email; they should preferably be shared via a protected website or a USB memory stick. Further, identifying participant details should be stored separately from the anonymized data (Wiggins 2017).
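To make these steps concrete, a minimal, purely illustrative sketch of transcript pseudonymization is given below. The excerpt, the names, and the pseudonym key are all invented for illustration; the point is simply that the mapping between real names and pseudonyms is kept as a separate record, apart from the anonymized transcripts, mirroring the advice above.

```python
import json
import re

def pseudonymize(text: str, key: dict[str, str]) -> str:
    """Replace every real name in `key` with its pseudonym.

    Word boundaries prevent partial matches (e.g. 'Ann' inside 'Anna').
    """
    for real_name, alias in key.items():
        text = re.sub(rf"\b{re.escape(real_name)}\b", alias, text)
    return text

# Invented interview excerpt and pseudonym key, for illustration only.
key = {"Maria": "P1", "Riverton": "the town"}
excerpt = "Maria: I grew up in Riverton. Maria pauses."

anonymized = pseudonymize(excerpt, key)
print(anonymized)  # P1: I grew up in the town. P1 pauses.

# The key itself would be stored apart from the anonymized transcripts,
# e.g. serialized into a separate, access-restricted file.
key_record = json.dumps(key)
```

In practice the key file would live on encrypted, access-restricted storage and be destroyed once it is no longer needed, in line with the separate-storage principle described above.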
Enhancing Analytical Rigor

Questions of ethics and integrity also surface during data analysis. As previously mentioned, discourse analytic endeavors typically rely on a set of criteria for credibility and plausibility (Rapley 2007) that does not conform to the more established positivist criteria of validity and reliability. In the context of psychological research, DA – including DP – has frequently been accused of adopting an “anything goes” approach (Antaki et al. 2003). Because of its constructionist premises, it has sometimes been considered scientifically inferior to traditional statistical and experimental psychological research. However, DP scholars have disputed the view that “anything goes” in discursive research (Edwards 2012; Potter 2012b; Stokoe et al. 2012), and a number of guidelines for rigorous, high-quality analysis have been proposed (Potter 2012a; Wiggins 2017; Wiggins and Potter 2017).
Wiggins (2017) outlines five steps for enhancing the rigor of a DP analysis. First, researchers should be explicit about the steps taken during data collection and analysis, as well as in reporting on them. A clear overview and explanation of analytical choices should be further supported by presenting examples of data, thus giving readers the opportunity to judge the content and quality of the analysis. Second, rather than drawing on the analyst’s own sense-making, analyses should be developed on the grounds of “participants’ orientations.” That is, talk should be interpreted to reflect how participants made sense of it, by employing the so-called “next turn proof procedure” (p. 137): the next turn of the analyzed interaction can show whether a researcher’s interpretation corresponds to the participant’s, by enabling direct insight into how the participant responded to the previous turn. Third, analyses should be coherent, that is, organized according to the patterns identified in the data. It is likely, however, that not all data will align with these patterns; such instances are referred to as “deviant cases” (p. 137). While the analyst’s first impression might be that deviant cases are problematic in that they may be at odds with the initial findings, such cases can in fact strengthen the analysis by pointing out the exceptions to the identified patterns. Fourth, examining the functions of discourse used in the analyzed interactions might reveal “by-products and consequences” of certain discursive practices that go beyond their primary function. According to Wiggins (2017), paying attention to and discussing these by-products and consequences further enhances the analysis. Finally, analysts should be explicit about how their analyses are embedded in and contribute to existing research (see also Potter 2012a).

Beyond paying attention to analytical rigor, DP scholars should also be aware of the ethical aspects of their analyses.
Although DP’s primary goal lies in offering insights to the various academic communities rather than in building “connections with research participants” (Potter 2010, p. 664), some analyses introduce researchers’ “moral, political or personal stance” (Antaki et al. 2003). While DP is typically not seen as a critical approach (Potter 2010) and “taking sides” does not in itself constitute an analytical endeavor (Antaki et al. 2003), analysts may decide to engage in critique or to show solidarity with certain social groups, particularly when addressing issues such as discrimination or abuse. In those instances, quoting the victims and thus “giving them voice” may be seen as a form of participant empowerment (Antaki et al. 2003). However, DP scholars generally tend not to consider endorsing their participants’ views as central or necessary to their approach (Potter 2010).
Discursive Psychology Beyond Academic Analyses

Despite what critics may see as a detached, rather than empathic, stance of DP analysts towards their research participants and topics of research (see, e.g., Willig 2004), several DP projects have resulted in applications considered helpful by the participants. For example, DP analyses have encouraged reflection on work practices, been used to support professional training, and consequently led to strategies for modifying
participants’ practices (see, e.g., Hepburn 2006; Hepburn and Wiggins 2007; Stokoe et al. 2012). Furthermore, many DP scholars have conducted – and sometimes applied – their research in “socially responsible” institutions (Stokoe et al. 2012, p. 495) and shed light on important social issues such as abuse, disability, sexism, racism, and nationalism (Potter 2010; Wiggins 2017).

Applications of DP come with ethical issues, however. When considering whether and how to apply one’s DP research, analysts should carefully weigh the implications of such engagements. Wiggins (2017, p. 222) encourages DP scholars to consider questions such as: “What is being applied? For what purpose is the research being applied? Who benefits from the application? What are the actual benefits of application?” These are not simple questions, and researchers will be required to reflect closely and critically on the “political and moral judgments” guiding their applied endeavors (Hepburn 2006, p. 328; see also Hepburn and Wiggins 2007; Stokoe et al. 2012). Nevertheless, this should not prevent them from developing DP analyses that might be considered valuable beyond academic circles.
Narrative Inquiry

Narrative inquiry (NI) refers to the interdisciplinary endeavor dedicated to the study of lived experience as narrated by those living it. The term covers the study of various narrative forms, such as life stories, personal narratives, oral histories, and performance narratives (Chase 2005). A characteristic feature of NI – and the source of many of its ethical and integrity dilemmas – is its relational nature (Ellis 2007; Etherington 2007; Clandinin et al. 2018). The narrative researcher holds a “dual role,” being at once a scholar interested in advancing knowledge and a person building intimate relationships with research participants (Josselson 2007). While this may hold true for other forms of research involving close proximity between the researcher and her participants, such as ethnography, narrative researchers in addition often explicitly “endeavor to conduct research with other people rather than on them” (Josselson 2007, p. 559; emphasis in the original). Instead of focusing solely on their scholarly interests, narrative researchers tend to take account of their participants’ needs as well – which may give rise to additional ethics and integrity tensions. Drawing largely on the literature on life story studies as a form of NI, the remainder of this section describes some of the ethical and integrity issues likely to be encountered by narrative researchers at large.
Facing Ethics Committees

Seeking approval from an ethics committee, a necessary formal step to be taken prior to initiating a study, is seen as inherently contradictory to the relational nature of NI. In the context of NI, ethics should ideally be negotiated with and adjusted to research participants rather than fixed at the outset (Clandinin and Connelly 2000; Ellis 2007). The ethics of NI cannot be represented by a set of rigid rules; they
are relational, situated, and dynamic (Smythe and Murray 2000; Josselson 2007; Adams 2008). As a consequence, narrative researchers should be prepared to not only address the technical aspects of ethics raised by the committees, but also pay attention to relational aspects that might arise throughout the study (Clandinin and Connelly 2000; Ellis 2007; Clandinin et al. 2018). Moreover, NI adopts an emergent research design, meaning that multiple applications for ethical approval might be required due to design changes during the research process. This puts narrative researchers in a less than ideal situation since this necessary step – even when ethical risks are limited – may cause delays or, worse, narrow the scope of research (Bruce et al. 2016).
Gaining Informed Consent

An illustrative example of the need for continual, rather than one-time, examination of ethical concerns is that of informed consent. Owing to the emergent research design, it is impossible to fully inform research participants ahead of data collection of the precise research focus and the analytical directions to be taken. Thus, in the context of NI, informed consent – commonly seen as something to be gained at the beginning of a study and taken for granted thereafter – “is a bit oxymoronic” (Josselson 1996, p. xii). Instead, informed consent should be understood as a process (Smythe and Murray 2000; Lieblich 2006; Ellis 2007), with some authors suggesting the use of multiple consent forms.

Josselson (2007) proposes the use of two consent forms. The first is to be signed prior to data collection and should record participants’ agreement to take part voluntarily and clarify their right to withdraw. The second consent form should address the ways in which the data will be processed and is to be introduced at the end of data collection. Explicit consent should also be sought if the data are to be archived or shared with other researchers. The end of the interview is also a good time to provide details regarding the research focus that the researcher – for methodological reasons – might have kept vague at the outset.

Smythe and Murray (2000) similarly call for multiple consent forms or verbal agreements, with narrative researchers checking at several points during the research whether participants agree to continue. In their view, an important part of gaining informed consent is to inform participants from the very beginning of the possibility of multiple interpretations, including the researcher interpreting their narratives in ways not fully aligned with their own perspectives.
This contradicts Josselson’s (2007) suggestion to reveal the nature of the analytical process only at the end of the interview, so as not to adversely affect the researcher’s relationship with her participants.
Researcher-Participant Relationships

Given the inherently relational nature of NI, the importance of the relationships developed with research participants cannot be overstated. Josselson (2007) suggests
that the better the rapport with participants, the more trust they will feel in the researcher, warranting more “openness” in sharing their narratives (p. 539). However, the narrators should not feel exploited (Ellis 2007; Josselson 2007), which is why it is important for researchers to acknowledge the likely power differences between themselves and the participants. Etherington (2007) advises researchers to reflect openly on the nature of the researcher-participant relationship together with the participants at the beginning of the study, particularly if the research relationship develops in addition to an existing or former relationship. Beyond the data collection phase, it is not unethical to develop a personal relationship with participants, although researchers should be mindful of their position in this relationship given their power to eventually publish the narratives. This might involve a balancing act between maintaining a friendly relationship with participants and reporting on issues by which the narrators might be offended (Ellis 2007).

On a more practical note, with interviews representing the most common method in NI, it is crucial for narrative researchers to be aware of certain interviewing guidelines. During the interview, researchers should be empathic and nonjudgmental (Josselson 2007) and constantly pay attention to the well-being and potential vulnerability of the participant (Smythe and Murray 2000; Sabar and Sabar Ben-Yehoshua 2017). The researcher should be prepared to deal with very emotional narratives (Josselson 2007) and should make sure not to abuse participants’ trust by seeking information interviewees might not be ready to share (Smythe and Murray 2000). In the course of the interview, it is unethical to provoke confrontation in order “to elicit more data” (Josselson 2007, p. 547); however, the researcher is free to introduce her personal reflections.
Doing so might positively affect relationship-building, though the researcher should make sure to reflect on how it influenced the narrative.
Protecting Participant Confidentiality and Anonymity

Crucial both to obtaining informed consent and to developing relationships that will make participants feel at ease sharing the details of their lives is the researcher’s commitment to protecting their right to confidentiality and anonymity. However, this can be jeopardized relatively easily because of the very detailed narratives typically generated by participants and presented in research reports. The researcher’s first steps should include anonymization of all names appearing in transcripts and notes, and keeping the details of anonymization separate from the anonymized documents (Smythe and Murray 2000; Josselson 2007).

When preparing research reports, narrative researchers are advised to use various forms of fictionalization in order to disguise participants as well as other individuals mentioned in their narratives. This may include the use of pseudonyms, changing the times and places of the described events, or even constructing composite narratives which describe what happened but blur the details of the contexts and those involved (Caine et al. 2017). Still, anonymity might be difficult to achieve when research is done in small communities, when participants are approached based
28
Ethics of Discourse Analysis
521
on snowball sampling, when researchers write about people close to them, or when on-site observations are conducted and it is thus not entirely up to researchers what information their participants share about the research (Chase 1996; Clandinin and Connelly 2000; Ellis 2007; Josselson 2007). When complete disguise is impossible, researchers should consider reaching an agreement with their participants on what can be published and what should be omitted (Lieblich 1996, 2006), as well as consider obtaining explicit consent for their reports to mention people other than the research participants themselves (Josselson 2007; Adams 2008). In approaching participants with these questions, however, narrative researchers should carefully examine whether in some cases it might be best not to publish some of the details of the narratives even if participants had agreed to it, and whether potentially having to omit certain details about people other than the participants will result in silencing the participants’ narratives (Josselson 2007; Adams 2008; Sabar and Sabar BenYehoshua 2017). Moreover, fictionalization may be ethically problematic – and inappropriate given the relational nature of NI – when researchers decide on its form without negotiating it with participants, when picking pseudonyms that do not reflect the cultural and social settings of the research, or when participants would prefer to use their real names rather than pseudonyms (Caine et al. 2017). The latter may be related to the feeling that the researcher is getting all the credit for work that others contributed to as well. In some instances research participants may decide to withdraw their anonymity in order for their narratives to be publicly recognized, or even engage in coauthorship practices with narrative scholars (Clandinin and Connelly 2000; Sabar and Sabar Ben-Yehoshua 2017). Furthermore, protecting participants’ anonymity and confidentiality raises dilemmas regarding research integrity. 
Narrative scholars may experience conflict between seeing themselves as empathic listeners – focusing on the needs of participants – and as researchers bearing the responsibility for accurate, scholarly reporting (Josselson 2007; Sabar and Sabar Ben-Yehoshua 2017). Addressing participants’ concerns by making excessive changes to the narratives may significantly affect the quality of the study (Lieblich 1996, 2006; Sabar and Sabar Ben-Yehoshua 2017). Despite this, Sabar and Sabar Ben-Yehoshua (2017) maintain that narrative researchers should do their “utmost to respect their [participants’] wish when receiving their feedback,” even if it results in losing important data. Still, researchers may engage in “respectful negotiations” (p. 421) and try to find solutions that preserve research integrity while accommodating their participants’ requests.
522
M. Gorup

Interpretation and Representation

Securing anonymity and confidentiality is also linked to more general concerns regarding the ownership and analysis of the narrative. Who owns the narrative, and whose interpretation of the narrative is (most) valid? Holding the power to write and distribute other people’s accounts, narrative researchers necessarily find themselves in a privileged position compared to their participants (Josselson 2007; Adams 2008). This raises concerns regarding how they go about using this privilege. Typically, two main positions – as well as various combinations of the two – are possible: the researcher’s principal goal might be to “give voice” to participants, that is, to employ a “supportive voice,” or she might aim to conceptualize the narratives according to a specific conceptual framework, that is, to use a more “authoritative voice” (Chase 2005; see also Josselson 2007). When employing “supportive voice,” it is the narrator’s voice that is put “into the limelight” (Chase 2005, p. 665), with the aim that the audience gets to know the narrator’s story as told by him or her, while the researcher’s remarks remain in the background. Those using “authoritative voice” focus on exploring issues that might be “taken-for-granted” (Chase 2005, p. 664) by the narrators, thus shedding light on phenomena that would not necessarily be explicit without an academic analysis. When researchers are inclined towards using “supportive voice,” ethical issues may arise in relation to their accuracy in representing the participants. To avoid misrepresentation, researchers are likely to develop collaborative relationships with participants, which may involve transcript and/or research report validation and, in some cases, even co-authorship (Clandinin and Connelly 2000; Smythe and Murray 2000; Lieblich 2006). If using a more “authoritative voice,” ethical concerns might emerge due to participants potentially feeling misrepresented and betrayed (Lieblich 1996; Ellis 2007). However, one could argue that, once the analysis is conducted, the text also belongs to the researcher, who will want to stand by her interpretation regardless of participants’ opinions (Chase 1996; Ellis 2007; Josselson 2007).
Narrative researchers are thus engaged in a challenging balancing act, wherein they attempt to do justice both to their participants and to their own scholarly efforts, pointing yet again to the deep intertwinement of ethical and integrity questions in NI (Clandinin and Connelly 2000; Smythe and Murray 2000; Josselson 2007; Adams 2008). While issues of “narrative ownership” (Smythe and Murray 2000) are likely to persist regardless of the researcher’s position on the interpretive spectrum, one step toward alleviating these concerns is to develop a reflexive practice. Although it is impossible for a researcher to be fully aware of how she influenced the participants’ narratives and of the factors affecting her interpretations thereof, narrative researchers should aim to reflect on and disclose their influence on the research findings in as much detail as possible (Smythe and Murray 2000; Etherington 2007; Josselson 2007; Adams 2008). Further, while researchers’ interpretations might be at odds with their participants’, they should always maintain a respectful tone and treat participants and their narratives with dignity and care. As mentioned above, it is also important to inform participants at an early stage that research findings and conclusions will be defined based on the narratives of a number of research participants (Josselson 2007), as well as – and this is crucial – that NI presupposes “multiple narrative perspectives,” that is, that each narrative can be interpreted in more than one way (Smythe and Murray 2000). However, even with all these precautions in place, we can never be entirely sure how individuals will respond to our research, making it difficult to be certain whether others will perceive our decisions as ethical or not (Adams 2008).
Narrative Inquiry as Empowerment

Many studies grounded in the NI tradition have been conducted among vulnerable and underrepresented groups. Narrative researchers working among such communities should be particularly mindful of the potential unintended negative consequences of their studies, such as inadvertently making their participants vulnerable to surveillance – particularly in the context of the increased use of online tools (Berger Gluck 2014) – or unwittingly stereotyping them in negative ways (Josselson 2007). Nevertheless, most NI studies can be considered beneficial in that they help us to enhance our understanding of human beings, often those we know comparatively little about (Josselson 2007). Moreover, not only can NI be emancipatory at a micro level – in that it may enable individuals to construct narratives that help them to overcome some of the traumas or difficulties faced in their lives – it can also sensitize broader audiences to various kinds of social inequalities and injustices and evoke empathy for those who experience them (Chase 2005; Lieblich 2006). If narrative researchers who aim to contribute to social justice efforts are to further this agenda, more attention should be paid to these potentially empowering features of NI. Narrative scholars should consider in more detail who their audiences should be, thinking creatively of ways to bring about change that can contribute to improving the social position of those marginalized and oppressed by dominant social groups (Chase 2005).
Critical Discourse Analysis

The roots of critical discourse analysis (CDA) can be traced to work in “critical linguistics”; however, it is scholars such as Norman Fairclough and Teun van Dijk who are considered the early pioneers of CDA as we know it today. The title of one of the seminal works in the field of CDA, Fairclough’s Language and Power (1989), implies that the essence of this discourse analytic approach lies in its commitment to exposing power inequalities, injustices, and abuses constructive of and constructed through language, for example through discourses of racism, xenophobia, anti-Semitism, sexism, and capitalism. Beyond uncovering unjust social structures, the adjective “critical” in “critical discourse analysis” suggests that critical discourse scholars also engage in developing a critique of these inequalities. This makes CDA explicitly evaluative, which raises two crucial, interrelated issues. The first concerns the integrity of CDA as a scientific research approach: its critics tend to emphasize the “unscientific” and subjective nature of CDA, which supposedly stems from its fundamentally critical approach (Graham 2018, p. 200). The second relates to the ethics of CDA: if it is inherently “critical” and thus “judgmental,” what are the grounds on which the analysts’ judgments are made? In other words, what are the ethical frameworks on which CDA rests (Graham 2018, p. 186)? Largely drawing on the recent special issue of Critical Discourse Studies (2018, Vol. 15, No. 2) and Graham’s (2018) discussion of ethics in CDA, the remainder of this section concentrates in turn on these two central questions, although, as will
be shown, they are almost impossible to disentangle. The selected focus also reflects the relative scarcity of practical discussions regarding ethics in terms of scholars’ responsibilities towards research participants – more commonplace in some other forms of DA – which might be explained by the common use of publicly available texts for the purposes of CDA.
Rigorous and Reflexive Critical Discourse Analysis

As mentioned above, CDA has often been questioned by its critics – given its openly critical and subjective nature – as to whether it can be considered scientific (Graham 2018). Many social scientists, especially those adopting qualitative methods, would contend that such arguments are flawed; Fairclough (1989) argues that every social researcher is “opinionated” (p. 5). However, this does not free social scientists from the obligation to be reflexive and clear about the basis of their analyses. This is especially important in the case of critical discourse analysts due to the primacy of their critical commitment and their aims towards societal transformation, not found to the same extent in other forms of DA. While critical discourse analysts agree on the crucial role of critique in their analytic endeavors, the grounds on which, and the process through which, critique is developed may differ. Is CDA primarily a social scientific research approach, or is it an instrument of political advocacy? The boundaries between the two are fuzzy, and any CDA will likely have elements of both (Fairclough and Fairclough 2018). Given that many critical discourse analysts conduct research on topics they feel strongly about, the main question to be addressed in this context is how scholars are to deal with the impact of their attitudes towards the studied phenomena on the process of analysis. Here different authors have taken rather different stances. On one side of the spectrum are critical discourse scholars who have placed their political views at the center of their research endeavors. For example, the early work of Norman Fairclough (1989) expressly states the author’s political stance as “socialist” (p. 5), while contending that being open about his political disposition does not mean that he, as an analyst, is not required to provide the evidence supporting his statements and arguments.
In a similar vein, van Dijk (1993) calls for critical discourse analysts to “take an explicit sociopolitical stance” (p. 252). Their work is thus necessarily normative and political; however, critical discourse analysts should be both “social critics and activists” as well as “social and political scientists” (p. 253). In this sense, CDA is more than a theory or a method of analysis; rather, it is an “academic movement of scholars” (van Dijk 2008, pp. 821–822), a political, “transformative praxis” (Roderick 2018, p. 154). In an early critique, Hammersley (1997) questions these premises of CDA. While social researchers can surely hold political views while undertaking research, he argues that most critical discourse scholars do not acknowledge the issues of validity that arise when scientific analyses are “geared to serve” the analysts’ “political commitments” (p. 239), in
other words, when analytical findings are judged “according to their political implications as much if not more than their validity” (p. 245). Towards the other end of the spectrum are authors who, although just as profoundly committed to social change, more explicitly call for critical discourse analysts to set aside their opinions when developing critique. Reisigl and Wodak (2001) maintain that critical discourse scholars should aim at reducing their bias. Their work is to present an accurate analysis grounded in triangulation of multiple disciplinary and methodological approaches as well as empirical and contextual data. Analysts should be explicit about the choices made throughout the research process, but also need to be able to explain theoretically why certain interpretations are more plausible (Wodak 2001). In a like manner, Herzog (2016) insists that although critical discourse analysts are likely to be driven by feelings of indignation, in the process of analysis they should distance themselves from this point of departure: “[u]nder no circumstances” should the analysts’ standpoints “determine the results of the research” (p. 163). As a way of reaching this goal, he proposes that analysts develop “immanent critique.” That is, rather than establishing critique on the grounds of the researcher’s own norms, critique should be built on the normative frameworks of the society under study. Crucial to such an analysis is staying close to the empirical materials while also broadening the methods of CDA: because societal norms are often implicit, discourse analysts should develop sensitivity towards – in addition to discourses – “affective reactions, practices, and material dispositions” (Herzog 2018, pp. 120–121). The recent work of Fairclough and Fairclough (2018) recognizes that CDA is “advocative” and “partisan” because it advocates for righting certain “wrongs” while siding with those who suffer from these “wrongs” (p.
169), but also calls for a CDA that is not “politically partisan” (p. 170; emphasis added). Critical discourse analysts should not advocate for “pre-determined political standpoint[s],” but rather engage in a process of deliberation by looking at a variety of arguments in a discussion and evaluating them in an impartial way, “based on factual evidence” (p. 179). To them, CDA should primarily be a social scientific method following a rigorous four-step analytical process: from developing a normative critique, to explanation of the normative critique, to explanatory critique of the current circumstances, and finally to transformative action. Underlying the discussion on ways of enhancing a rigorous practice of CDA is the principle of reflexivity. That is, critical discourse analysts should acknowledge and clarify their positions in relation to the topic, purpose, and means of their analyses (Graham 2018). Any language use, including a critical linguistic analysis, functions to convey a certain “perspective and therefore some kind of fundamental bias,” about which critical discourse scholars should be explicit (Graham 2018, p. 200; see also van Leeuwen 2018). Taking a step further, Graham (2018) argues that “critical engagement with any topic implies the need for still further criticism” (p. 189). Indeed, Fairclough and Fairclough (2018) contend that critical discourse scholars should subject their own argumentation “to systematic critical questioning” (p. 184), following the same criteria of impartiality and “factual evidence” (p. 179) applied to arguments other than their own.
Critical Discourse Analysis as an Ethical Endeavor

The principles of rigorous and reflexive critical discourse analytic practice are almost impossible to disentangle from viewing CDA as an ethical undertaking. Because CDA is necessarily critical and evaluative, it is essential for any critical discourse scholar to explicitly engage in a discussion of the grounds for their judgment, the ethical frameworks they choose to apply, and the ethical implications of their critique (for detailed discussions of possible frameworks of critique in CDA, see for example Hammersley 1997; Graham 2018; Herzog 2018). However, despite the central importance of this issue, CDA’s ethical basis for critique remains undertheorized, and critical discourse analysts need to engage further in a reflexive debate on this question (van Dijk 2008; Roderick 2018). While CDA has historically rested on critique rooted in the Frankfurt School of Marxism (van Dijk 1993), critical discourse scholars have since explicitly or implicitly drawn on a variety of normative and ethical stances. The remainder of this section presents a necessarily limited selection of possible approaches to ethical critique in the context of CDA. Calling for the accurate process of triangulation mentioned earlier, Wodak (2001) contends that CDA is not primarily about an evaluation of rights and wrongs. Nonetheless, a critical analysis does include an “ethico-practical dimension,” built on respect for human rights and the acknowledgement of suffering (Reisigl and Wodak 2001, pp. 33–34). Similarly, Herzog (2018) suggests that the point of departure for any critical discourse analytic endeavor should be “social suffering.” However, social suffering should be defined not from the analysts’ perspective, but from the viewpoint of those who suffer. Suffering is thus seen as an experience that “contradicts the normative expectations of the participants” (p. 117).
In calling for “immanent critique,” Herzog (2018) acknowledges “a multiplicity of historically changing norms” (p. 118; emphasis in the original), challenging critical discourse scholars’ common practice of applying “external norms” of critique (p. 113). Related to the above is Roderick’s (2018) assessment that CDA faces “the problem of deontology” (pp. 165–166). A deontological approach to ethics presupposes a coherent set of rules about what is right and what is wrong, derived from a rational, deliberative communication process. Roderick (2018) warns, however, that rationality is not a universal, objective concept. He calls on critical discourse scholars to adopt context-dependent, intersubjective ethical principles that are “other-oriented”; that is, principles that “recognize and accommodate the needs of an other” in a specific social setting (p. 166) rather than “a normative, universal set of principles” (p. 165). In doing so, critical analysts would avoid positioning themselves as “privileged” actors judging “the actions of others” (p. 166). In Fairclough and Fairclough’s (2018) procedural, argumentative approach to CDA – one that gives primacy to deliberation over a set of arguments – ethical critique constitutes “a part of normative critique” (p. 169). An ethical questioning of argumentation should include reflecting on multiple ethical standpoints in the process of analysis: (1) deontological ethics, that is, ethics based on rules regarding what is right and wrong; (2) consequentialist ethics, wherein the main basis for
judgment is the consequence of a certain action; and (3) virtue ethics, focusing on the moral character of the actor who carries out an action. While the process of deliberation might not result in consensus, critical discourse analysts can increase the quality of ethical critique by paying attention to a vast array of arguments, subjecting each of them to critical evaluation in the spirit of “an ethical commitment to impartiality” considered crucial to CDA as a social scientific approach (p. 170). This process of deliberation, in combination with a questioning of arguments in reference to various ethical frameworks, contributes to the development of simultaneously impartial and ethical solutions to the identified social problems. In finalizing their decisions on the most appropriate and ethical transformative action, Fairclough and Fairclough (2018) suggest that critical discourse analysts draw on the values and norms corresponding to the “limited advocacy and partisanship of CDA” (p. 181), with a consequentialist ethical perspective as “the most significant” one in determining whether an action is desired (p. 172). To summarize, recent debates on rigorous and ethical CDA emphasize the need for analysts to be explicit about their methodologies, reflexive with regard to their biases, and open to alternative interpretations and critiques of the studied phenomena. As implied above, these three prerequisites for a high-quality and ethical CDA go hand in hand, with steps taken towards a methodologically rigorous CDA – such as triangulation (Reisigl and Wodak 2001; Wodak 2001), immanent critique (Herzog 2016, 2018), or a procedural, argumentative approach (Fairclough and Fairclough 2018) – simultaneously also presenting a more ethical approach to CDA.
Concluding Remarks

The present chapter has identified ethical and integrity concerns characteristic of DA and of three discourse analytic approaches specifically – discursive psychology, narrative inquiry, and critical discourse analysis – and discussed possible ways of addressing them. In the process of developing this text, however, it became clear that conversations about ethics in the field of DA do not yet seem to be a fully natural part of the research process. While issues of integrity are more commonly addressed, the majority of handbooks on DA, even those taking an explicit step-by-step, how-to approach, either do not consider research ethics or mention it only very briefly. This is not to say that discourse analysts have not engaged in discussions of ethics – in the context of this chapter, narrative inquiry in particular stands out, with well over 20 years of explicit and active debates on ethics – but much remains to be done if we are to ensure that consideration of ethics becomes inherent to any discourse analytic endeavor. An important step forward should be a continued, more overt and conscious discussion of research ethics among discourse scholars – importantly, one that goes beyond the tick-box approach typically required by ethics committees. Ethics are necessarily dynamic and context-dependent, which is why honest and detailed reporting of actual dilemmas experienced in the research process is crucial to learning about and undertaking ethical DA. In the end, guidelines are just –
guidelines. While they can be helpful in grasping the basics of what an ethical practice of DA involves, it should by now be clear to any discourse analyst that the majority of ethical dilemmas can be answered with “well, it depends.” This is why the often very general and technical approach of ethics committees should be replaced with more supportive, method-specific discussions that would actually help to improve the ethical aspects of research rather than encourage a perception of ethics as a bureaucratic hurdle. Ongoing conversations about ethics are crucial as the practice of DA continuously evolves. New ways and means of interacting – for example, via online forums and social media – as well as an understanding of discourse that increasingly encompasses not only text but also visual imagery and bodily expression, prompt discourse scholars to reconsider and expand the sources of their analyses. This necessarily raises new ethical dilemmas, such as those related to conducting online research or research involving photographs or video recordings of people. Discourse analysts must therefore constantly adjust to new research contexts and remain engaged in rethinking what an ethical and rigorous practice of DA means and how it is to be enabled.
References

Adams TE (2008) A review of narrative ethics. Qual Inq 14(2):175–194
Antaki C, Billig M, Edwards D et al (2003) Discourse analysis means doing analysis: a critique of six analytic shortcomings. Discourse Analysis Online 1
Baker P, Ellece S (2011) Key terms in discourse analysis. Continuum, London/New York
Berger Gluck S (2014) Reflecting on the quantum leap: promises and perils of oral history on the web. Oral Hist Rev 41(2):244–256
Bruce A, Beuthin R, Sheilds L et al (2016) Narrative research evolving: evolving through narrative research. Int J Qual Methods 15(1):1–6
Caine V, Murphy MS, Estefan A et al (2017) Exploring the purposes of fictionalization in narrative inquiry. Qual Inq 23(3):215–221
Chase SE (1996) Personal vulnerability and interpretive authority in narrative research. In: Josselson R (ed) Ethics and process in the narrative study of lives. SAGE, Thousand Oaks, pp 45–59
Chase SE (2005) Narrative inquiry: multiple lenses, approaches, voices. In: Denzin NK, Lincoln YS (eds) The SAGE handbook of qualitative research, 3rd edn. SAGE, Thousand Oaks, pp 651–679
Clandinin DJ, Connelly FM (2000) Narrative inquiry: experience and story in qualitative research. Jossey-Bass Publishers, San Francisco
Clandinin DJ, Caine V, Lessard S (2018) The relational ethics of narrative inquiry. Routledge, New York
Edwards D (2012) Discursive and scientific psychology. Br J Soc Psychol 51(3):425–435
Ellis C (2007) Telling secrets, revealing lives: relational ethics in research with intimate others. Qual Inq 13(1):3–29
Etherington K (2007) Ethical research in reflexive relationships. Qual Inq 13(5):599–616
Fairclough N (1989) Language and power. Longman, London
Fairclough N, Fairclough I (2018) A procedural approach to ethical critique in CDA. Crit Discourse Stud 15(2):169–185
Gee JP (2011) An introduction to discourse analysis: theory and method, 3rd edn. Routledge, New York/London
Graham P (2018) Ethics in critical discourse analysis. Crit Discourse Stud 15(2):186–203
Hammersley M (1997) On the foundations of critical discourse analysis. Lang Commun 17(3):237–248
Hammersley M (2014) On the ethics of interviewing for discourse analysis. Qual Res 14(5):529–541
Hepburn A (2006) Getting closer at a distance: theory and the contingencies of practice. Theory Psychol 16(3):327–342
Hepburn A, Potter J (2004) Discourse analytic practice. In: Seale C, Gobo G, Gubrium JF et al (eds) Qualitative research practice. SAGE, London, pp 168–185
Hepburn A, Wiggins S (eds) (2007) Discursive research in practice: new approaches to psychology and interaction. Cambridge University Press, Cambridge/New York
Herzog B (2016) Discourse analysis as social critique: discursive and non-discursive realities in critical social research. Palgrave Macmillan, London
Herzog B (2018) Suffering as an anchor of critique: the place of critique in critical discourse studies. Crit Discourse Stud 15(2):111–122
Josselson R (1996) Introduction. In: Josselson R (ed) Ethics and process in the narrative study of lives. SAGE, Thousand Oaks, pp xi–xviii
Josselson R (2007) The ethical attitude in narrative research: principles and practicalities. In: Clandinin DJ (ed) Handbook of narrative inquiry: mapping a methodology. SAGE, Thousand Oaks, pp 537–566
Lieblich A (1996) Some unforeseen outcomes of conducting narrative research with people of one’s own culture. In: Josselson R (ed) Ethics and process in the narrative study of lives. SAGE, Thousand Oaks, pp 172–184
Lieblich A (2006) Vicissitudes: a study, a book, a play: lessons from the work of a narrative scholar. Qual Inq 12(1):60–80
Potter J (2010) Contemporary discursive psychology: issues, prospects, and Corcoran’s awkward ontology. Br J Soc Psychol 49(4):657–678
Potter J (2012a) Discourse analysis and discursive psychology. In: Cooper H, Camic PH, Long DL et al (eds) APA handbook of research methods in psychology: Vol. 2, Research designs: quantitative, qualitative, neuropsychological, and biological. American Psychological Association, Washington, DC, pp 119–138
Potter J (2012b) Re-reading discourse and social psychology: transforming social psychology. Br J Soc Psychol 51(3):436–455
Potter J, Wetherell M (1987) Discourse and social psychology: beyond attitudes and behaviour. SAGE, London
Rapley T (2007) Doing conversation, discourse and document analysis. SAGE, London
Reisigl M, Wodak R (2001) Discourse and discrimination: rhetorics of racism and antisemitism. Routledge, London/New York
Roderick I (2018) Multimodal critical discourse analysis as ethical praxis. Crit Discourse Stud 15(2):154–168
Sabar G, Sabar Ben-Yehoshua N (2017) ‘I’ll sue you if you publish my wife’s interview’: ethical dilemmas in qualitative research based on life stories. Qual Res 17(4):408–423
Smythe WE, Murray MJ (2000) Owning the story: ethical considerations in narrative research. Ethics Behav 10(4):311–336
Stokoe E, Hepburn A, Antaki C (2012) Beware the ‘Loughborough School’ of social psychology? Interaction and the politics of intervention. Br J Soc Psychol 51(3):486–496
Taylor S (2001a) Evaluating and applying discourse analytic research. In: Wetherell M, Taylor S, Yates SJ (eds) Discourse as data: a guide for analysis. SAGE, London, pp 311–330
Taylor S (2001b) Locating and conducting discourse analytic research. In: Wetherell M, Taylor S, Yates SJ (eds) Discourse as data: a guide for analysis. SAGE, London, pp 5–48
Taylor S, Smith R (2014) The ethics of interviewing for discourse analysis: responses to Martyn Hammersley. Qual Res 14(5):542–548
van Dijk TA (1993) Principles of critical discourse analysis. Discourse Soc 4(2):249–283
van Dijk TA (2008) Critical discourse analysis and nominalization: problem or pseudo-problem? Discourse Soc 19(6):821–828
van Leeuwen T (2018) Moral evaluation in critical discourse analysis. Crit Discourse Stud 15(2):140–153
Wiggins S (2017) Discursive psychology: theory, method and applications. SAGE, London
Wiggins S, Potter J (2017) Discursive psychology. In: Willig C, Stainton Rogers W (eds) The SAGE handbook of qualitative research in psychology. SAGE, London, pp 93–109
Willig C (2004) Discourse analysis and health psychology. In: Murray M (ed) Critical health psychology. Palgrave Macmillan, Basingstoke/New York, pp 155–169
Wodak R (2001) The discourse-historical approach. In: Wodak R, Meyer M (eds) Methods of critical discourse analysis. SAGE, London, pp 63–94
Wood LA, Kroger RO (2000) Doing discourse analysis: methods for studying action in talk and text. SAGE, Thousand Oaks
Feminist Research Ethics: From Theory to Practice
29
Anna Karin Kingston
Contents

Introduction . . . . . . . . . . 532
Theoretical Background . . . . . . . . . . 534
Practicing Feminist Research Ethics: Case Studies . . . . . . . . . . 537
Empowering Women Who Have Experienced Partner Abuse . . . . . . . . . . 537
Power Dynamics Documented by Pathway Researchers in Bangladesh . . . . . . . . . . 539
From Individual to Team Research: Dealing with Intimate Involvement Dilemma . . . . . . . . . . 539
Feminist Ethics and Critical Pedagogy in Student-Community Research Collaboration . . . . . . . . . . 540
Conflicting Interests in Participatory Action Research on Intimate Violence . . . . . . . . . . 542
Power Relations Dilemma in Canadian Disability Feminist Collaborative Project . . . . . . . . . . 543
Mothers of Special Needs: Reflections from an “Insider” Researcher . . . . . . . . . . 544
Conclusions and Recommendations . . . . . . . . . . 546
References . . . . . . . . . . 547
Abstract
Feminists endeavor to conduct research through a gender-conscious prism while challenging patriarchal structures in society. The topic of "feminist ethics" is well documented in the literature; "feminist research ethics," however, is less so. The empirical practice of applying a feminist ethics in research has hitherto received little attention from feminist scholars, as the debate tends to focus instead on the underpinning philosophical theories. Some feminists argue that it is unethical not to apply a gender perspective in all research, regardless of discipline. This standpoint may suggest that all feminist researchers, by virtue of claiming to be feminists, consider themselves to adhere to a feminist research ethics while conducting their research. There are, however, a myriad of feminisms and consequent research methods, which challenge the notion of a uniform feminist research ethics. This chapter attempts to address some key aspects of the topic by first introducing the reader to some of the feminist theories which underpin a feminist ethical approach to social scientific research. I will then discuss examples of the practice of feminist ethics in research internationally. There is growing concern among contemporary feminists about how research is conducted, who is involved, and, most importantly, who benefits from research results. To illustrate these ethical dilemmas, I will also discuss my own PhD research, a feminist ethnography documenting the experiences of 18 mothers of children with special needs in the Republic of Ireland. Finally, drawing from these documented practical experiences, I will summarize suggested recommendations for researchers who aim to pursue research that adheres to feminist research ethics.

A. K. Kingston (*)
School of Applied Social Studies, University College Cork, Cork, Ireland
e-mail: [email protected]

© Springer Nature Switzerland AG 2020
R. Iphofen (ed.), Handbook of Research Ethics and Scientific Integrity, https://doi.org/10.1007/978-3-030-16759-2_64

Keywords
Feminism · Empowerment · Reciprocity · Reflexivity · Ethics of care · Feminist ethnography · Feminist participatory action research
Introduction

Revaluing and appreciating women's participation in research are important cornerstones of feminist ethics. Furthermore, empowerment, reflexivity, and reciprocity are three key principles of feminist research. Firstly, empowering research is part of the feminist agenda, challenging conventional, often male-dominated, hierarchical research models and encouraging more equal relationships between the researcher and the research participant. Asking women to participate in research is seen by some feminists as encouraging personal empowerment and bringing about changes in women's lives. Mary Maynard (1994) concurs with Anne Opie's (1992) argument regarding how research can facilitate empowerment: firstly, by women telling their stories and thereby making a social issue visible; secondly, through the potential therapeutic effect this may have on the woman who reflects on and reevaluates her experience in the interview; and, thirdly, through the "generally subversive outcome that these first two consequences may generate" (p. 17). Kelly et al. (1994) also suggest that following up with the women, asking them whether they benefited from taking part, would better inform researchers: "We would then be in a much stronger position to develop appropriate conceptions of what kinds of empowerment are possible through research" (p. 37). Traditionally, feminists have focused on the empowerment of women by placing their lives at the center of their analysis. However, this feminist research approach has also come to encompass empowering members of other marginalized groups in society who suffer oppression due to class, race, ethnicity, religion, (dis)ability, age, and sexuality.

A second principle of feminist research is the researcher's position in relation to the research and how this is reflected upon. Researchers practicing reflexivity are aware of their personal/cultural/social identities and what these bring to the research project.
Reflexivity demands that the feminist researcher employs a self-critical analysis and articulates it transparently throughout the research process. Thirdly, and related to both empowerment and reflexivity, is reciprocity. A reciprocal relationship between researchers and research participants is one of the core values of feminist research. Reciprocity entails giving something back to research participants, who are very often recruited from within vulnerable communities, by aiming to ensure that the research is mutually beneficial. Reciprocity also concurs with the value of reflexivity, as the researcher continually reflects on the power dimension in the research relationship. These feminist principles, however, bring about their own challenges, increasingly documented by social scientists.

As there is no one set definition of feminism, there are also various theories on feminist research methods. It is also important to note that researchers who do not claim to be feminists may still adhere to the principles of empowerment, reflexivity, and reciprocity while conducting social scientific research. One ethical research principle, however, uniting contemporary feminists and most social scientists is the endeavor of "not doing harm" to research participants, conscious of exploitative positivist studies conducted on vulnerable members of society in the past. Many feminist researchers applying a feminist research ethics aim to go further than "not doing harm" to research participants. Knowledge production in feminist research should then contribute to "doing good" and make a difference in the lives of the researched, with researchers taking a political stance advocating social change. Focus has also shifted to the emotional rapport between the researcher and the research participant, with a stronger commitment to do research "with" or "for" participants, rather than "on" them.
Furthermore, a consensus among feminists is the acknowledgment that feminist research has its limits in changing patriarchal hierarchical structures within society at large. Spalter-Roth and Hartmann (1999), for example, discuss how feminist researchers have reflected on their own ability to bring about social change. The main concern, they argue, is to "reveal the agency of the women we study and the economic, ideological, and political context in which they make their lives" (p. 340).

The practical question of how to apply feminist ethics in data collection, analysis, and reporting continues to be debated among feminist scholars, with their positions varying according to philosophical viewpoints. It would be an impossible task to summarize all the various approaches to feminist ethics in this chapter, as there is an ongoing, and in my opinion energizing, debate informing the future of feminisms and their impact on research practice. While old and new versions of feminist epistemologies emerge and merge, core principles of feminist research practice remain: empowerment with a focus on marginalized people, and ethical responsibilities in knowledge production. This chapter will address some of the key arguments in the current debate on feminist research and the challenges feminist researchers encounter in practice while attempting to subscribe to feminist research ethics. A brief summary of some feminist theoretical approaches in research will be outlined, followed by examples of how feminist researchers have applied these theories in practice. Challenges and dilemmas encountered in these international case studies will be discussed, including my experience of researching the lived experiences of 18 mothers of children with autism, ADHD, and/or Down syndrome
in the Republic of Ireland. These examples are chosen for the purpose of this chapter with a focus on practical research experiences, and I acknowledge that there are numerous other feminist studies that could have been discussed instead.
Theoretical Background

The 1960s and 1970s saw feminists and other scholars challenging the positivist hierarchical research approach which had dominated the social sciences since the early 1900s. The so-called "objective" and "politically neutral" research methodologies were considered part of a patriarchal research discourse ignoring women's lived experiences and voices. This "gender and value neutral" research approach viewed the social world as objectively fixed, where researchers collected data and produced what was considered "true and unbiased" knowledge. Furthermore, second-wave feminists challenged perceived "male-biased" research and became researchers/activists highly critical of unequal gendered power relations in the research process. Feminist scholars put women center stage, claiming that the "personal was political," while rejecting the notion of objectivity and value-free research. Feminist policy researchers emphasized the political necessity of using their methodologies to bring about social change. A majority of feminists at this time considered women's traditional roles within the home as part of patriarchal oppression and believed that making women visible in research would lead to emancipation and access to public spheres hitherto dominated by men.

A feminist epistemology, defined as a theory about knowledge and how we know what we know (Harding 1987), challenged patriarchy's social construction of gendered roles and evolved to encompass other power inequalities in society on the grounds of class, race, ethnicity, sexuality, age, and (dis)ability. Knowledge itself was considered a social construction, and a feminist standpoint theory emerged with a focus on how knowledge production is influenced by physical/social location and cultural/historical positions. Feminists influenced by postmodernist and poststructuralist theorists rejected all forms of binary/dualistic entities that could facilitate hierarchical oppression.
Other categories of difference were considered by recognizing the fluidity and complexities of multiple intersectional identities. Postmodern feminists, while critiquing feminism as a political movement underpinned by universalist and essentialist notions, nevertheless emphasized the possibilities of empowerment through feminist research methods. Deveaux (1999), among others, argued for a feminist methodology focusing on women's agency, as opposed to viewing them as powerless victims. In doing this, feminist researchers can challenge power relations and enable political activism. An argument for "strong objectivity" was introduced, taking the form of a "strong reflexivity" (Harding 1991). A "strongly reflexive" feminist researcher then continuously reflects ethically on the epistemological and ontological assumptions in the knowledge production. The principle of reflexivity, where the researcher continually reflects on ethical decision-making during the research process, is one of the most important cornerstones of feminist research. This is closely linked to a feminist approach referred to
as an "ethic of care." This care-based ethical theory, stemming from Carol Gilligan's work on different gendered "moral" voices (1982) and Nel Noddings' discussion of ethical responsibility in relationships (1984), further challenged notions of a "detached" researcher. Women, according to Gilligan, formulate care values and priorities differently to men, which shapes an understanding of interdependence between humans. Noddings sees caring as women's response to ethical decisions and differentiates between an ethics based on rights and justice (the voice of the father) and a moral ethics, which she has called "the mother's voice" (Preissle 2007). Feminists adhering to an "ethic of care" then view the relationship between the researcher and the participants as one of the most important considerations in research, where the researcher has a caring responsibility to treat participants empathetically. Gilligan (2011) argues that an ethic of care is even more important today, referring to the reality of "interdependence and the cost of isolation; we know that autonomy is an illusion - that people's lives are interconnected" (pp. 17–18). Edwards and Mauthner (2002) concur with the argument that feminist discussions of the research process have a lot in common with the theory of an ethics of care, where daily-life dilemmas are shaped by different social contexts and each experience generates different ethical perspectives. The authors address ethical dilemmas in qualitative social research from a feminist perspective and suggest guidelines for ethical research practice grounded in a feminist ethics of care. They argue that while there is a rich body of literature on feminist ethics, there is less focus on the dilemmas encountered in practice by researchers adhering to feminist research ethics.
They also attribute a rise in concern with research ethics to factors such as academic institutions' fear of litigation, as well as to its being "rooted in a genuine and legitimate concern with issues of power" (p. 18). Scholars with a long involvement in the feminist movement contribute to the discussion on feminist research ethics by reflecting on core themes and issues. One of these writers is Judith Preissle who, in a piece called "Regrets of a Women's Libber" (Hesse-Biber and Leavy 2007), sees feminist research ethics as "self-conscious frameworks for moral decision making – helping decide whether decisions are right or wrong by feminist values and standards" (p. 205). For Preissle, the principles of an ethical framework "involve justice for women, care for human relationships, and a commitment to finding the political in the personal" (p. 205). Oakley (2016), reflecting on her classic 1981 publication, "Interviewing Women: A Contradiction in Terms?," admits that the "friendship" relationship between researcher and researched was not sufficiently discussed at the time. Building rapport between researchers and participants is, however, a currently contested topic among feminists. Duncombe and Jessop (2012), for example, suggest that this aspect of feminist research relationships has led to suggestions that the interviewer, in order to encourage participation, "fakes friendship." The authors argue that the "skills of doing rapport" have, in this sense, been commodified (2002, pp. 120–121). Oakley disagrees and perceives as "patronizing" the suggestion that researchers would force participants to make disclosures during interviews through "faked" friendship. The notion of "the gift," Oakley argues, is helpful in this discussion of the researcher-researched relationship. The participant then gives the researcher a narrative, a life story, with
an understanding that they will not in return receive control of the research product (Oakley 2016, p. 208). Some feminists who promote empowering research argue that oral history, depending on how the study is designed, may encourage collaboration and emancipation. For example, it is suggested that "the decentering of authority resonates with many feminists..." and "...also carries with it a set of politics and a host of ethical considerations linked to the empowerment of research subjects and the social activist component of feminist research" (Leavy 2007, p. 168). Participants are then more likely to feel empowered if they are fully included and have control over the narrative:

Likewise, as feminism is an earned political perspective, the value of retaining ownership over the resulting knowledge so that it can best serve the greater goals of feminism also makes sense for many researchers and their participants. The various sets of choices all have ethical considerations, and none are simply right or wrong, better or worse, more or less "feminist." (Leavy 2007, p. 170)
Members of the UK "Women's Workshop" discuss the relevance of an ethics of care, and the consequent feminist research ethics, to their practical research experience in a special issue of Women's Studies International Forum (Philip and Bell 2017). A common agreement among the contributors is that procedural or institutional ethics cannot cater for the range of different ethical issues often encountered when working in the field, including emotionally sensitive encounters that both researcher and participant may experience during the research. One of the authors, Hoggart (2017), discusses ethical dilemmas when working in partnership with policymakers and practitioners. Her applied social research projects in the field of sexual health highlight the challenges imposed on the researcher by the unequal power relationship with the project funder and the restrictive policy framework in place. Nevertheless, Hoggart argues that "feminist reflexivity at each stage of the research process should permit us to claim partial knowledge. This is arguably infinitely better than making no knowledge claims at all, or making unrealistic positivist claims to objectivity and truth" (cited in Philip and Bell 2017, p. 73).

The rigidity of institutional research ethics policies is also criticized by Australian researchers Halse and Honey (2005), who document the challenges they faced when attempting to convert a research proposal on "anorexic girls" into a form that would win ethics committee approval. The researchers struggled, for example, with the creation of a consent form that assumed that "anorexic girls" form a homogeneous population, disregarding the moral complexities involved for both researchers and participants. Halse and Honey conclude that there is no simple answer to the concerns they raise:

Our point is political. Despite advances in the theorizing and practice of feminist research, it is easy to underestimate or to fail to see the ways in which the social, organizational, and cultural practices of the research ethics process work as conceptual and concrete barriers that impede feminist research approaches and position feminist researchers in ideologically uncomfortable spaces. (2005, p. 2160)
Posthumanist feminist scholars contribute to the most recent discussion on feminist ethics by looking at human relationships with the nonhuman world (Haraway 2004). Posthumanist theories are grounded in the argument that our lives are intertwined also with nonhuman objects, for example, machines, plants, and animals. One such approach is referred to as "agential realism," developed as a metaphysical framework by materialist scholar Barad (2007) (referenced in Mauthner 2018). "Agential realism" is described as a contrast with both naturalistic and social constructivist approaches "in that it does not commit itself to the ontological existence of material and/or cultural entities" (Mauthner 2018, p. 52). Rather than representing identities, as in traditional research approaches, "agential realism" performs how we conceptualize research practices, which include both human and nonhuman identities. Ethics is considered a practice, not an attribute of the researcher, accounting for "its own material existence and its material effects in helping to constitute the world" (p. 53). Ultrasound technology is one example of ethical concern for posthumanist researchers, as ontological assumptions of its innocence can have potential moral consequences by constituting the fetus as an autonomous subject. "A posthumanist ethical practice of ultrasound technology is a practice that accounts for its own non-innocence and for its non-innocent ontological effects in the world" (Mauthner 2018, pp. 53–54).

Having outlined the evolution of feminist theories on research ethics, I will now turn to practical experiences of conducting feminist research. The remainder of the chapter will discuss a selected number of case studies where feminist researchers have encountered challenges in adhering to the feminist principles of reflexivity, reciprocity, and empowerment.
Themes include interviewing women who have experienced intimate violence, attempts to empower women in Bangladesh, different approaches to research with refugees in the USA, feminist ethics and critical pedagogy in a community-based research module in California, conflicts in research partnerships in Canada, and feminist research projects in the field of disabilities.
Practicing Feminist Research Ethics: Case Studies

Empowering Women Who Have Experienced Partner Abuse

American author Burgess-Proctor (2015) reflects on the ethical challenges involved in conducting feminist interviews with women who have experienced intimate partner abuse. Outlining several guiding principles of feminist research methodology, she offers recommendations on how to empower participants rather than simply protect them in accordance with the rules of university Institutional Review Boards (IRBs). Burgess-Proctor emphasizes the common themes in feminist methodologies while at the same time acknowledging the heterogeneity of feminist research. These themes include focusing on the lived experiences of women and girls, empowering research participants, encouraging reflexivity, embracing an ethic of care, and practicing reciprocity. She also cautions that simply applying these concepts in
research does not solve all ethical dilemmas, and that there is an ongoing feminist debate on whether an egalitarian relationship between researchers and participants is achievable (referring to Nazneen et al. 2014). More importantly, raising participants' expectations that the research will change their lives for the better can have the opposite effect and disempower rather than empower them (referring to Gillies and Alldred 2012). Burgess-Proctor goes on to illustrate the dilemmas facing feminist researchers with examples from her research with teenage women who have been in intimate violent relationships. During the interviews, she found herself having to constantly "contemplate and consider the tensions between protecting and empowering participants, and to evaluate how best to allow the women to navigate and take charge of the interview process, even when their decisions raised ethical questions for me" (p. 128). One participant is clearly traumatized by telling her story but insists on continuing the interview. Burgess-Proctor reflects on whether her repeated offers to stop were in fact violating the young woman's agency, "denying her the ability to give voice to her experiences?" She goes on to say: "using researcher reflexivity, I later considered whether I was merely importing protectionist biases into this interview" (p. 129). However, she was reassured that she had done the right thing by continuing, as the young woman seemed more at ease after the interview and even gave her a hug. Applying a feminist ethics of care, Burgess-Proctor also ensured that there was follow-up support in the shelter afterward should the woman need it. Another participant, who had recently begun to see the ex-partner who had seriously assaulted her, found herself, during the interview, realizing the risks of continuing that relationship.
This in turn made her express a wish not to accept the remuneration promised for participating in the research, as a form of gratitude toward the researcher. However, waiting until the end of the interview, when the participant was less emotional, Burgess-Proctor encouraged her to take the money. This event during the interview, according to Burgess-Proctor, reflects other feminist research demonstrating that financial remuneration – valuable though it may be – is not the only "compensation" participants may derive from being interviewed (Burgess-Proctor 2015, referring to Logan et al. 2008). Burgess-Proctor uses the examples of these interviews to demonstrate that researchers encounter "ethically important moments" during difficult and emotional interviews and that it is possible to turn these interviews into a positive experience for the participant if handled with respect, care, and compassion. She goes on to offer several strategies underpinned by feminist methodologies to help researchers empower participants and not only offer protection: (1) asking participants to choose their own pseudonyms gives them some control of their participation and offers some agency; (2) life history calendars or other ways of documenting autobiographical events can assist participants to control the interview process; (3) certificates of completion can be offered to those who can use them to their benefit; (4) expressing and reciprocating emotion improves the rapport with participants; (5) concluding interviews on a positive note; and (6) sharing research findings with participants. She emphasizes that "given the importance
participants often place on sharing their experiences in order to help other women, the need for feminist scholars to recast relationships with our participants from protection to empowerment remains great” (p. 133).
Power Dynamics Documented by Pathway Researchers in Bangladesh

Nazneen et al. (2014) introduce a collection of papers by Pathways researchers (a global research consortium on women's empowerment) who reflect on their experiences of doing feminist research in Bangladesh, Egypt, Ghana, Brazil, and Palestine. The focus is on the possibility of empowering women through the selected research methods, as well as on analyzing the practical and ethical dilemmas the researchers encountered. Examples of the challenges include Bangladeshi Pathway researchers who, as members of the community, felt inferior to the leaders of the women's organizations, who were older and more experienced activists, which "created a particular power dynamic" (p. 57). Furthermore, the women's organizations found it difficult to trust that the researchers, albeit fellow activists, would be able to interpret/represent their history and strategies using an academic lens. A second project involved taleem women, where the Bangladeshi Pathway researchers had to cover their heads and join in religious activities in order to gain access to the groups. This made the researchers feel uncomfortable, as they did not adhere to such practices in their own daily lives. Furthermore, they were unable to challenge the taleem women on their limited views of gender roles because they feared losing access. This ethical dilemma, according to Nazneen and Darkwah, may "perhaps indicate the need for having realistic expectations about bridging the power relations and also the limits of erasing the differences between researchers and participants based on feminist principles of negotiation and sharing" (p. 58).
Despite these challenges, the Pathways researchers’ collective reflections on their research practices demonstrated that “the process of empowerment is a journey in time that involves continuous negotiations and capturing changes at individual, institutional and structural levels and as such requires the use of a diverse set of methods” (p. 59).
From Individual to Team Research: Dealing with Intimate Involvement Dilemma

American sociologist Kimberley Huisman (2008) reflects on her endeavor to apply an ethics of reciprocity and positionality in feminist ethnographic research. She outlines the many ethical dilemmas encountered in her doctoral dissertation involving Bosnian Muslim refugees, concurring with Stacey (1988), who queried whether the "interpersonal, engaged nature of ethnographic work can lead to even more exploitation than traditional positivist methods, a method by which the researcher is more detached and objective throughout the research process" (Huisman 2008, p. 372). Committed to feminist principles of reciprocity and empowerment, the author became intimately
involved with participants, who came to see her as a "family member" (p. 391). Huisman experienced tensions within herself, and in her relationship with academia and the community, throughout the research process. This was provoked by the fact that the research was likely to enhance her chances of an academic career, but the outcome for the participants was less certain: "As much as I wanted the relationship to be mutually beneficial, it was not...while I was striving to eliminate hierarchies in the field, I could not escape the reality that I was structurally positioned within a hierarchical institution and one of the motivations for doing this research was to advance my position within the academic hierarchy" (pp. 380–381). Once the project was completed, she would leave the community behind. Furthermore, she found her academic institution largely unsupportive, and while she had ethical approval from the Institutional Review Board (IRB) to conduct the fieldwork, she argues that "the IRB seemed far more interested in protecting the university from lawsuits than providing any guidance about ethics in research" (p. 383).

Huisman brought the lessons learnt from her PhD experience to her next project, a community-based participatory action research (PAR) project with Somali refugees in New England. In order to strengthen and broaden the project and address the challenges of becoming too intimate with participants, Huisman formed a team of researchers from different disciplines. The components of the project involved creating a "library of real stories" through collecting narratives, as well as developing and performing a theatre script about Somali women's experiences. Community advocacy is the third part of the project, where members draw on the knowledge gained to address specific needs in the community (p. 393).
Huisman concludes that PAR as a research process "truly espouses the feminist principles of equality, democracy, reciprocity, and social change that I had strived for in my earlier work" (p. 394). The tension she experienced doing her dissertation forced her into critical self-reflection and motivated her to stay true to her feminist values as a researcher in her second project. At the time of writing the paper, Huisman and her team had been involved in this project for three years, and she had avoided many of the ethical dilemmas concerning intimacy with participants which she had experienced as an individual researcher.
Feminist Ethics and Critical Pedagogy in Student-Community Research Collaboration

An example of feminist ethics in collaborative research is discussed by Ganote and Longo (2015), who involved students at their Californian university. The authors discuss the challenges involved in practically applying feminist ethics and critical pedagogy in a collaborative community-based research model. The benefits for students and communities in successful research collaborations, they argue, are well acknowledged. However, they query the quality of this work in practice: "So, even when we know the value of successful university-community collaboration, why is it so hard to actually do this work collaboratively?" (p. 1066). The authors blame the neoliberal individualistic model for this failure, where the focus is on the rights of autonomous individuals rather than their civic duties and responsibilities in the
community. Furthermore, they argue that creating a community-based research agenda without "sustained consultation with community members" can then possibly even do harm to the community (p. 1067). Ganote and Longo propose an alternative model grounded in feminist ethics and critical pedagogy theories, where knowledge creation is "an interconnected, social process that happens within community" (p. 1071). They applied this alternative model in the creation of two linked community-based research courses designed in collaboration with the same community partner, the Women's Economic Agenda Project (WEAP), a grassroots organization in Oakland, California. The authors criticize those faculty members who choose community partners that, in their view, operate an individualist model of social change, such as homeless shelters and soup kitchens. They argue that WEAP, on the other hand, is a grassroots community organization consisting of poor women and their families, which aims to challenge societal and government policies to make long-lasting improvements in the lives of its members.

Students were prepared for the community-based research with course readings, discussions, journal writing, and in-class reflections. The learning goals included teaching students concepts of interdependence and relationality and encouraging the development of critical consciousness. This also included addressing some students' discomfort with "unfamiliar situations and/or racial biases" which, according to the authors, "could have jeopardized the relations with our community partner and the entire project if biases and negative stereotypes were reaffirmed" (p. 1074). WEAP members also intensively engaged in training the students prior to the fieldwork, when teams of students and WEAP members proceeded with the data collection. Once the data were collected, they were used by WEAP leaders for organizing campaigns.
A commitment to present the research results in whatever form the community group required was then imperative for the project. The authors emphasize the challenges involved in deviating from what they call a “mainstream positivist assumption undergirding knowledge creation” (p. 1079), where the researchers set the agenda without any involvement of the community. In their courses, the community partner was actively involved from the start, including collaboratively marking the students at the end of the course. This caused some confusion and discomfort among the students but was acknowledged by the faculty members as being part of the learning process.

Overall, we aspired to create a truly collaborative community-based research course, made rich with a foundation of feminist ethics and Freirian praxis. Praxis (melding theory and practice in an iterative process) is a relational activity in which we question our actions and work with others to achieve collaborative and ethical goals. (Freire 2000 [1970]) (Ganote and Longo 2015, p. 1076)
Preparing the students for the partnership is crucial, according to the authors. Community partners do not operate on semester terms only but are often extremely busy all year round. From their perspective, serving as co-educators of students in a community-based collaboration can be demanding of valuable resources and time: “. . .we hold great responsibility for not foisting students onto
542
A. K. Kingston
communities who will drain community resources at best, and do harm at worst” (Ganote and Longo 2015, p. 1081).
Conflicting Interests in Participatory Action Research on Intimate Violence

Participatory action research (PAR) is sometimes applied in feminist research in the belief that the very process of participation in knowledge creation can address power imbalances and underpin social change. Canadian sociologists Langan and Morton (2009) reflect on the difficulties encountered in a community-academic research partnership on intimate violence against women. The researchers wished to apply a feminist participatory action research (FPAR) approach, as they felt that critical feminist theory was the most appropriate framework for such research. The stakeholder, the provincial ministry, also expressed a wish for a feminist approach during several meetings. However, as the researchers proceeded, it became evident that the ministry disapproved of their suggested methodology due to differing views regarding the purpose of the participatory research. The main interest of the ministry, according to Langan and Morton, was to evaluate whether an expansion of service provision to abused women should be funded. This demanded examining the performance of the service providers. In contrast, the service providers were interested in how expanded services would operate and in the service users’ views on this.

So, we found ourselves in a difficult position. We were committed to a participatory research process, but the question became: ‘With whom is the participatory process taking place?’ We saw the service providers as an important voice, and they were presented to us as part of the team with whom we were working, but the ministerial officials came to be clearly identified as our ‘bosses’ and their interests were different from the service providers. (p. 169)
Langan and Morton reworked their proposal to accommodate the ministry, hoping that the requested quantitative method could be combined with the FPAR approach and still give a voice to the women who had experienced intimate violence. This was an attempt to avoid the integrity of the research becoming, as they argue, “seriously compromised” and violating their research ethics (p. 171). The revisions, however, were not deemed acceptable by the ministry, which subsequently canceled the research contract. The ministry’s stated reason for this termination, according to the authors, was not disagreement regarding research methods but the view that it was too soon to conduct an evaluation, as the initiative had yet to be implemented (p. 174). The authors argue that it was the ministry’s lack of understanding of the value of FPAR that led to the collapse of the research project. Different views on the risks involving the women became apparent. The stakeholders did not want any qualitative research done with the women, while the researchers believed that this was the only way of gaining knowledge. Citing Parnis et al. (2005, p. 649), they concurred with the sense of feeling “unable to conduct fully the feminist qualitative research we were committed to.” Power relations, then, between
the funders of the research and the sociologists undertaking the fieldwork became the ultimate obstacle to progress, and this “intensified the ethical struggles that we were already dealing with because of what we had come to recognize as the differences in our theoretical and methodological commitments” (2009, pp. 176–177). Langan and Morton recommend that theoretical and methodological issues be clarified carefully between all stakeholders at the start of research collaborations. This includes making explicit the agendas of all the parties involved and paying attention to power relations.
Power Relations Dilemma in Canadian Disability Feminist Collaborative Project

Feminist researchers challenge the unequal power hierarchies often found in health research methodologies by moving toward a partnership approach producing mutually beneficial research projects. Gustafson and Brunger (2014) discuss the challenges encountered in a Canadian feminist participatory action research (PAR) project involving women with disabilities and adaptive technologies. Negotiating power relations became problematic as each community defined boundaries and asserted individual/collective identities during the research process. A student undertaking the research was also a board member of the collaborating partner, a disability community group. Several conflicts emerged during the research process relating to power relations between academia and the disability organization. Ethical requirements from the academy were perceived by the disability organization as paternalistic and oppressive. Furthermore, ownership of data became an issue of tension: the student had initiated the research and received funding for it, but at the same time wanted to adhere to the feminist values of reciprocity and long-term outcomes for the disability community. Following the principles of PAR, the group had collaborated in the production and distribution of documents based on the findings using an ISBN (International Standard Book Number). This prevented the student from using the same data in her dissertation as originally planned. The student also felt conflicted between her academic goal of keeping within time frames and her activist commitment to the community. The authors, agreeing with Manzo and Brightbill (2007), stress that ethical restrictions cannot simply be set in place at the start of a research partnership: “such ethical conundrums are not easily predicted at the outset of a project” (p. 1000).
A notable strength of this partnership, according to the authors, was the feminist collaborative work applied in the project: “a respect for our distinct areas of expertise, our unique subjectivities, and the ever-shifting ways we make meaning of a research experience” (p. 1001). Gustafson and Brunger emphasize the importance of using contract-style relationship agreements between supervisor and student, and between student and community, beyond the researcher-community agreement. They also argue that relationship building with communities, and the subsequent negotiations, lie outside the remit of a research ethics board.
Mothers of Special Needs: Reflections from an “Insider” Researcher

My own feminist ethnography, drawing on feminist matricentric theories (Ruddick 1989; Kittay 1999; Malacrida 2003; O’Reilly 2006), adhered to the feminist principles of empowerment, reflexivity, and reciprocity as outlined in this chapter. The patriarchal construction of motherhood, in my opinion, restricts women’s agency, and mothers of children with special needs become even more marginalized within this construction. The objective of my doctoral dissertation was to make these mothers visible and to publish the research findings in a book that I felt was needed to fill a gap in the literature on both feminist mothering and disability research. Several ethical dilemmas, however, arose during the research process, particularly in relation to empowerment and reciprocity. My subjective influence on the research consisted of my cultural/social locations as a Swedish journalist now residing in Ireland, and I also brought “insider” knowledge to the research context as a mother of a child with special needs. This knowledge proved advantageous during the study, which included face-to-face interviews and follow-up meetings a year later, as I developed a great rapport with the 18 participants. Having spent years together with women like these, sharing the struggle for services and the perceived paternalistic treatment by professionals, I had a political and passionate commitment to make a difference in our lives. I also had the advantage of being relatively free to conduct the research according to my own feminist research agenda, as no funder/employer imposed other conditions. Despite my feminist stance, I chose not to discuss gender divisions and patriarchy with my participants. While not entirely withholding my personal convictions, I was reluctant to make them a theme in the interviews for fear of alienating the women.
I respected their cultural backgrounds, the majority of them being Catholic, and the fact that they did not share my concern about how the social construction of motherhood imposed so many of the difficulties in our day-to-day lives with our children. I encountered the first challenge when I offered the participants the verbatim transcripts of their interviews to read. This provoked reactions that I had not anticipated. Firstly, the sight of the spoken word written down disturbed many of the mothers. They reported that they felt “awful” and “embarrassed” when reading their own stories. In one case, I received a letter from a participant stating that she never realized she spoke so badly and that she now wondered whether she was at all capable of helping her autistic son. It appeared that, by applying the reciprocal principle of sharing data, I had in fact disempowered this mother rather than empowered her. The despondency in her letter worried me immensely, and I had to make a phone call to reassure her that her transcript was no different to anybody else’s and that, in the context of the final version, all the narratives would be edited. Another participant returned her transcript with half of the pages deleted. Much of this data contained details regarding her son’s school and her personal struggle trying to teach him social skills. She was extremely worried about revealing any information that could have identified her or her son, but I also think that she felt uncomfortable revealing certain aspects of her mothering and in this sense engaged in a form of self-censorship. At the follow-up meeting, a year after the initial interview, I was able to explain how
valuable her full story was, after which she gave me permission to use some more of what she had deleted. I would argue that this negotiation empowered the mother to believe in her own agency and to resist patriarchal social constructions of being a “good mother.” It was at this stage in the research process that I also asked for the participants’ written consent to use what they deemed acceptable from their transcripts for my dissertation and any possible publications thereafter. All mothers were given the opportunity to choose their own pseudonyms, which almost all of them wanted to do, carefully choosing names for both themselves and their children. This, I believe, gave them agency in the research partnership, rather than passively having pseudonyms assigned to them by me afterward (Burgess-Proctor 2015). The second dilemma related to the relationship dynamics with the participants. Having met them on two occasions over a period of a year, with the second meeting taking place in a restaurant, the boundaries of our research relationship were becoming somewhat blurred. Tina Miller (2017) also raises this dilemma in the context of digital technologies, where interactions/connections with participants often take place outside the immediate context of the interviews. Thus, friend requests on Facebook or LinkedIn invitations after the research has been completed can prolong the “friendship” and potentially pose an ethical challenge. My rapport with one or two of my participants did in fact result in a “digitally” continued friendship. However, in my analysis, I ascribe a “resilient agency” to many mothers of children with special needs, and in these cases, I did not fear that becoming too close would do harm. They had self-selected to take part in the study, they were all involved in support groups (hence they had heard about the research), and most of the participants chose not to remain in touch with me once I had finished my study.
The third dilemma related to the publication of the research and the principle of empowerment and making a difference. I had chosen not to reveal my feminist standpoint at the outset of the research, which in a sense contradicts the principle of transparency in research relationships. Anticipating participants’ reactions to the book therefore caused me concern: I wondered what they would think of my feminist analysis of their narratives. Perhaps I was, as a feminist researcher, overly sensitive to the ethical impact my research might have had on the participants? Tina Miller (2002, p. 66) asks this question reflecting on her research on UK women becoming mothers for the first time. Nevertheless, I found consolation in the fact that many of the mothers were delighted to receive the book and very happy with the content, and a few attended the book launch. One mother, who could not attend, sent me a fridge magnet with a feminist slogan, which both surprised and pleased me. I interpreted this as a sign that she was indeed empowered and that she did agree with my feminist agenda, now out in the open because of the book (Kingston 2007). What happened, then, to the end goal of my research and my book: challenging patriarchal structures and improving lives for mothers of children with special needs in Ireland? I have received letters from mothers living on different continents who feel empowered by reading the stories of these Irish mothers. I have given copies to professionals and policymakers in the hope of raising awareness, but I have unfortunately not received any responses, even though the book’s foreword was written by a child/adolescent psychiatrist and international expert on autism and ADHD.
Hoggart (2017) discusses the feminist researcher’s position as subjectively influencing data, up to and beyond research findings. Questions regarding whether the research will be taken seriously are then constantly reflected upon, often leaving the researcher disillusioned. Journalists, on the other hand, are sometimes instantly rewarded for their part in giving mothers of children with special needs in Ireland a platform for voicing their despair on live radio and TV, generating urgent extra funding for respite care, all without going through the rigorous scrutiny of a formal ethics board. In December 2017, for example, Irish national television (RTÉ, Raidió Teilifís Éireann) broadcast the Prime Time Investigates program “Carers in Crisis,” which prompted the Minister of State for disability issues to allocate €10 million in extra respite funding the following week (McGrath 2017). In sum, conducting a feminist ethnography underpinned by the principles of empowerment, reflexivity, and reciprocity is not without its own ethical challenges. As documented by other feminist researchers cited in this chapter, feminist research ethics cannot be formulated as a specific set of rules at the start of a research project. It needs to be contextualized and adapted to suit each specific piece of research throughout the entire research process.
Conclusions and Recommendations

There is an emerging discussion about the relevance of feminist research ethics among social scientists. While there is no single definition of either feminism or feminist research methods, some common themes can nevertheless be found in current debates. The old mantra “the personal is political” is still relevant in this context, as is the aim of deconstructing patriarchal hierarchies. A constant debate regarding how best to ensure that research participants benefit from being involved in social science research is vital. Applying feminist research ethics can thus entail some of the following guidelines:
• Empowering research participants. This does not necessarily demand a research outcome that makes a positive difference to participants’ lives through political action and policy reforms. Raising awareness among participants throughout the research process can also be a form of empowerment. This can be achieved during fieldwork, for example, in interviews, if conducted with care and empathy. Realistic expectations of the outcome of the research must nevertheless be stated at the outset for everyone involved.
• Reflexivity during the entire research process. As research is initiated, feminist researchers should raise questions such as “who is this research for?” and “who will benefit?” while considering the impact the study will have on participants. A strong reflexivity also entails a transparent acknowledgment of what the researcher brings to the project in terms of personal and social locations.
• Revisiting consent forms. Consent forms cannot be restricted to follow the rigidity of institutional ethics committees/boards but may have to be adapted to contexts and revisited during the entire research process. It is important for
participants to know what they are consenting to do. Consent forms should also clarify whether participants have a right to influence the analysis and dissemination of research findings.
• Feminist Participatory Action Research as a useful method. Careful attention needs to be paid to potential research partners and their assessment of research needs in the community. Feminist Participatory Action Research, if successfully implemented, can be a useful methodology to empower rather than exploit community members.
• Clarifying relationships at the start of the research. Research projects are subjectively influenced by the researcher before, during, and after fieldwork. It is important to reflect on the balance between rapport and friendship and on how the relationship with participants will be affected long term. This needs to be clarified at the outset.
• Reciprocity and sharing of findings. A feminist ethic of care also applies to the dissemination of research findings. Participants should be offered the opportunity to comment on findings; however, careful attention needs to be paid to how these findings are presented.
References

Barad K (2007) Meeting the universe halfway: quantum physics and the entanglement of matter and meaning. Duke University Press, Durham/London
Burgess-Proctor A (2015) Methodological and ethical issues in feminist research with abused women: reflections on participants’ vulnerability and empowerment. Women’s Stud Int Forum 48:124–134
Deveaux M (1999) Feminism and empowerment: a critical reading of Foucault. In: Hesse-Biber S, Gilmartin C, Lydenberg R (eds) Feminist approaches to theory and methodology. Oxford University Press, New York, pp 236–258
Duncombe J, Jessop J (2012) ‘Doing rapport’ and the ethics of ‘faking friendship’. In: Miller T, Birch M, Mauthner M, Jessop J (eds) Ethics in qualitative research, 2nd edn. Sage, London
Edwards R, Mauthner M (2002) Ethics and feminist research: theory and practice. In: Mauthner M, Birch M, Jessop J, Miller T (eds) Ethics in qualitative research. Sage, London, pp 14–28
Freire P (2000 [1970]) Pedagogy of the oppressed, 30th anniversary edn. Continuum, New York
Ganote C, Longo P (2015) Education for social transformation: infusing feminist ethics and critical pedagogy into community-based research. Crit Sociol 41(7–8):1065–1085
Gilligan C (1982) In a different voice: psychological theory and women’s development. Harvard University Press, Cambridge, MA
Gilligan C (2011) Joining the resistance. Polity Press, Cambridge, UK
Gillies V, Alldred P (2012) The ethics of intention: research as a political tool. In: Miller T, Birch M, Mauthner M, Jessop J (eds) Ethics in qualitative research, 2nd edn. Sage, Thousand Oaks, pp 43–60
Gustafson DL, Brunger F (2014) Ethics, “vulnerability,” and feminist participatory action research with a disability community. Qual Health Res 24(7):997–1005
Halse C, Honey A (2005) Unraveling ethics: illuminating the moral dilemmas of research ethics. Signs J Women Cult Soc 30(4):2141–2162
Haraway D (2004) The Haraway reader. Routledge, London
Harding S (1987) Introduction: is there a feminist method? In: Harding S (ed) Feminism and methodology. Indiana University Press, Bloomington
Harding S (1991) Whose science? Whose knowledge? Thinking from women’s lives. Open University Press, Buckingham
Hesse-Biber SN, Leavy PL (eds) (2007) Feminist research practice. Sage, London
Hoggart L (2017) Collaboration or collusion? Involving research users in applied social research. Women’s Stud Int Forum 61:100–107
Huisman K (2008) “Does this mean you’re not going to come visit me anymore?” An inquiry into an ethics of reciprocity and positionality in feminist ethnographic research. Sociol Inq 78(3):372–396
Kelly L, Burton S, Regan L (1994) Researching women’s lives or studying women’s oppression? Reflections on what constitutes feminist research. In: Maynard M, Purvis J (eds) Researching women’s lives from a feminist perspective. Taylor & Francis, Portsmouth, pp 27–48
Kingston AK (2007) Mothering special needs: a different maternal journey. Jessica Kingsley Publishers, London
Kittay EF (1999) “Not my way, Sesha, your way, slowly”: “maternal thinking” in the raising of a child with profound intellectual disabilities. In: Hanigsberg JE, Ruddick S (eds) Mother troubles: rethinking contemporary maternal dilemmas. Beacon Press, Boston
Langan D, Morton M (2009) Reflecting on community/academic ‘collaboration’: the challenge of ‘doing’ feminist participatory action research. Action Res 7(2):165–184
Leavy PL (2007) The practice of feminist oral history and focus group interviews. In: Hesse-Biber SN, Leavy PL (eds) Feminist research practice. Sage, London
Logan T, Walker R, Shannon L, Cole J (2008) Combining ethical considerations with recruitment and follow-up strategies for partner violence victimization research. Violence Against Women 14(11):1226–1251
Malacrida C (2003) Cold comfort: mothers, professionals and attention deficit (hyperactivity) disorder. University of Toronto Press, Toronto
Manzo LC, Brightbill N (2007) Towards a participatory ethics. In: Kindon S, Pain R, Kesby M (eds) Participatory action research approaches and methods: connecting people, participation and place. Routledge, New York, pp 33–40
Mauthner N (2018) A posthumanist ethics of mattering: new materialisms and the ethical practice of inquiry. In: Iphofen R, Tolich M (eds) The SAGE handbook of qualitative research ethics. Sage, London
Maynard M (1994) Methods, practice and epistemology: the debate about feminism and research. In: Maynard M, Purvis J (eds) Researching women’s lives from a feminist perspective. Taylor & Francis, London
McGrath F (2017) Minister of State for disability issues homepage. http://www.finianmcgrath.ie/?p=14176. Accessed 8 Oct 2018
Miller T (2017) Telling the difficult things: creating spaces for disclosure, rapport and ‘collusion’ in qualitative interviews. Women’s Stud Int Forum 61:81–86
Miller T, Bell L (2002) Consenting to what? Issues of access, gate-keeping and ‘informed’ consent. In: Mauthner M, Birch M, Jessop J, Miller T (eds) Ethics in qualitative research. Sage, London
Nazneen S, Darkwah A, Sultan M (2014) Researching women’s empowerment: reflections on methodology by southern feminists. Women’s Stud Int Forum 45:55–62
Noddings N (1984) Caring: a feminine approach to ethics and moral education. University of California Press, Berkeley
O’Reilly A (2006) Rocking the cradle: thoughts on motherhood, feminism and the possibility of empowered mothering. Demeter Press, Toronto
Oakley A (2016) Interviewing women again: power, time and the gift. Sociology 50(1):1195–1213
Opie A (1992) Qualitative research, appropriation of the ‘Other’ and empowerment. Fem Rev 40:52–69
Parnis D, DuMont J, Gombay B (2005) Cooperation or co-optation? Assessing the methodological benefits and barriers involved in conducting qualitative research through medical institutional settings. Qual Health Res 15(5):686–697
Philip G, Bell L (2017) Thinking critically about rapport and collusion in feminist research: relationships, contexts and ethical practice. Women’s Stud Int Forum 61:71–74
Preissle J (2007) Feminist research ethics. In: Hesse-Biber S (ed) Handbook of feminist research: theory and praxis. Sage, London, pp 515–532
RTÉ (Raidió Teilifís Éireann) (2017) Prime Time – Carers in Crisis. Broadcast 5 Dec 2017. https://www.rte.ie/news/player/prime-time/2017/1205/. Accessed 8 Oct 2018
Ruddick S (1989) Maternal thinking: toward a politics of peace. Ballantine Books, New York
Spalter-Roth R, Hartmann H (1999) Small happiness: the feminist struggle to integrate social research with social activism. In: Hesse-Biber S, Gilmartin C, Lydenberg R (eds) Feminist approaches to theory and methodology. Oxford University Press, New York, pp 333–347
Stacey J (1988) Can there be a feminist ethnography? Women’s Stud Int Forum 11:21–27
Part V Subjects and Participants
Acting Ethically and with Integrity for Research Subjects and Participants
30
Introduction Ron Iphofen
Contents
Introduction: The “Subjects” of Research . . . . 554
“Trapped” Subjects: Animals, Armed Forces, Prisoners, Migrants, and Students . . . . 555
Vulnerability and Safety . . . . 557
Conclusions . . . . 559
References . . . . 560
Abstract
Different research subjects or participants raise different concerns in terms of how they are accessed, recruited, and treated before, during, and after the research engagement. This chapter highlights some of the ethical issues that arise in the relationship between researchers and the researched, which are explored in further depth in this section of the handbook. While research ethics must take the participant in research as the focus of harm minimization and risk mitigation, the many hidden assumptions about the subjects of research must be made less latent: they rarely can be assumed to “typify” or be representative of the population category of which they are attributed membership, nor should the category be assumed to be homogeneous, nor necessarily more vulnerable than population members not being researched. The expectations of researchers and the requirements of research ethics committees must strive to move beyond conventional assumptions about how study subjects should be treated while at the same time ensuring their autonomy and rights are respected.

Keywords
Research subjects · Research participants · Animals · Vulnerability · Safety · CBPAR · Community action research

R. Iphofen (*)
Chatelaillon Plage, France
e-mail: [email protected]

© Springer Nature Switzerland AG 2020
R. Iphofen (ed.), Handbook of Research Ethics and Scientific Integrity, https://doi.org/10.1007/978-3-030-16759-2_55
Introduction: The “Subjects” of Research

It has almost become heresy to refer to the human participants being “studied” as part of a research investigation as “subjects.” Throughout this Handbook, we have permitted the authors to use the term they are most comfortable with, either for themselves or within their particular research field. Thus, some authors do refer to the people being studied as subjects, and I would argue strongly for allowing the continued use of the term in specific circumstances. This is more than a simple semantic concern. It relates to methodological accuracy (Corrigan and Tutton 2006). The range of meanings attributed to the term “subject” certainly complicates the issue – experimental psychologists and phenomenologists use the term in quite different ways. In the case of research conducted with and/or on humans, to apply a term accurately one must consider whether the people being studied are referred to as “participants,” “respondents,” or “subjects” according to the precise nature of their engagement with or within the research project (Birch and Miller 2002). If the people being studied are participating in a jointly conceived and disseminated project, they can clearly be referred to as “participants.” And ethnographers observing social interaction see their subjects as participants, since they are being observed “participating” in their own social interactions. However, if people are simply answering survey questions delivered to them briefly on the street or in an online questionnaire, they are responding to a set of questions created by the researcher, in which case it is accurate to refer to them as “respondents.” Indeed, the generic use of the term “participants” can be misleading, since it can also apply to the professional researchers who are participating in the study to differing degrees.
It is necessary to distinguish the professional researcher from the human subjects/participants who are more actively engaged in the design and conduct of the research project. If they are taking such an active part, perhaps it would be best to designate them as “partners,” “collaborators,” or “participant co-researchers.” In the same way, not all people under study in an ostensibly “passive” manner are “respondents” – specifically, giving answers in reply to questions in surveys, interviews, focus groups, or questionnaires. Often they are simply being observed, and their actions are only then regarded as “responsive” if the researcher’s intervention is intended to induce a self-reported change (experimentally or quasi-experimentally) in their behavior, thoughts, and/or feelings. Since no researcher can be assumed to be fully in control of the people they are studying, the range of “activity” on the part of the individual can vary, such that the notion of all respondents being equally passive can be challenged. If no self-report is sought or required, it is fair to regard the human beings under study as “subjects” of the research. One of the reasons the term “subject” is avoided is to challenge the notion that people need to be “objectified,” or necessarily are, when being researched. The concern originally arose among qualitative researchers who regard the application of the term as an ethical concern in that it suggests the kind of objectification of people one finds in more experimental or quantitative forms of research. Thus, I argue that it would be inaccurate not to see “the people being studied” as the “subjects” of study – hopefully they are deliberately chosen as the subject or as a route to the subject, they
30
Acting Ethically and with Integrity for Research Subjects and Participants
555
are certainly being “subjected” to a research intervention, and it remains an ethical responsibility of the researcher to ensure they are not “objectified” in a way that denies their humanity. Their characterization and treatment lie securely in the methods adopted by the researchers. It is for these reasons that I continue to support the use of the generic term “human subjects,” on the grounds that it is difficult to find a terminologically accurate alternative. The “people being studied” can be an individual or a group – the term “subject” encompasses both. Not all people being studied are genuinely “participating.” Some research designs aim to be as inclusive as possible (e.g., participative action research), in which case the subjects can be regarded as much as “participants” as the researchers are. (Although that says nothing about the precise balance of power in the research relationship.) All things considered, retaining the term “subject” is more accurate and does less injustice to those under study than imagining them as participating when they are not, when they might not see it that way, and when, in any case, we may be interested only in certain aspects of their life as these relate to the concerns of our study (Iphofen 2009/2011: 210–211; Oliver 2003). It was in early data protection legislation that the person about whom data was held was designated as the “data subject.” At the time there was much debate about how that led to “depersonalizing” the person of interest. It seems to me to reflect an awareness that merely holding selective data about a person does not mean one “knows” the person. Indeed, all selective data collection is a form of depersonalization, and to pretend that by holding data we have somehow captured the person’s character is misleading. That is to say nothing of more recent attempts to protect the individual’s privacy by careful control of personal identifiers. 
With the growth of online and social media research, new terms for the data subject as the unit of study have emerged: they can now be designated as “authors” or “content creators.” With this, new relationships between researcher and researched will need to be refined. The foregoing discussion might come across as an over-concern with semantics or as excessively defensive about the choice of terms. I think it important because it highlights in the researcher’s mind what exactly their relationship is to the people they are studying, how those people perceive their involvement with the research and the researcher, and what their role and value might be within that engagement. Thereafter, the researcher’s preferred term can be chosen to highlight their own perception of that relationship.
“Trapped” Subjects: Animals, Armed Forces, Prisoners, Migrants, and Students

Some human subjects or potential participants can find themselves in situations where it may be difficult for them to refuse to participate in research. One of the crucial principles assigned to the subjects of research is the “rights” issue of autonomy – the freedom to participate or not. Having the fully informed choice to decide whether to assist research by participating in it or being the subject of it is an
556
R. Iphofen
assumed right. All subjects vary in the degree to which such rights or choices are available to them – thereby restricting their autonomy. Few baulk at the allocation of the term “subjects” when nonhuman animals are the focus of research. Animals are clearly constrained to participate, since the concept of choice is rarely allowed them and, even if it were, it becomes hard for humans to judge whether animals can ever be considered adequately “fully informed” to be able to make a choice. Admittedly, animals used in research have increasingly been assigned rights through the growing pressure to replace, reduce, and refine (3Rs) the use of what are euphemistically referred to as animal “models” (see, in the UK, The National Centre for the Replacement, Refinement and Reduction of Animals in Research – https://www.nc3rs.org.uk/). Such an approach still sees a scientific value in animal models, while others stress considering animal rights more broadly (see the Nonhuman Rights Project at https://www.nonhumanrights.org/), arguing not just for the 3Rs in research but for the fair treatment of animals in all walks of their and our (human) lives. Clearly it is the power differential between researcher (humans) and researched (animals) that has permitted and sustained the ongoing use of research on animals without their explicit permission, mostly in the interests of zoological/observational and biomedical sciences. Of course, rights are not solely about autonomy (Hammersley and Traianou 2012), since rights can be assigned to animals much as they are to children, who may be assumed not to have the capacity for autonomous decision-making until an assigned state of maturity is reached. And in some countries animals such as apes and elephants have been assigned personhood as a means of guarding their rights. However, there have been research developments examining human-animal interactions and seeing animals as “participating” in such interactions. 
These models may aim to minimize animal suffering, but human wellbeing is still prioritized. Although further ethical thinking about such relationships is required, this is nevertheless a way of researching “with” nonhuman animals, rather than researching “on” them (Tumilty et al. 2018). Animals are not the only subjects of research constrained to participate by their lack of situational power. Prisoners might seem evidently at risk of lacking freedom of choice and necessarily in a position of exploitation when “asked” to participate in research (Israel 2016). In a qualitative study of prisoners’ reasons for participating in clinical research in jails and prisons in one US state, the authors sought to examine prisoners’ own considerations and motivations when deciding to participate in research by conducting interviews with adult male and female prisoners who were current or past participants in other research initiatives. A range of motives was disclosed, not unlike those of nonprison populations – being made to feel special, anticipating other rewards for cooperation, having nothing better to do, and so on. But the research also revealed problems with privacy and with fully informed consent processes, alongside the need to resist the temptation to view the prison population as necessarily more homogeneous than any other population category under study (Christopher et al. 2017). Students in many disciplines might find themselves constrained or pressured to participate in research conducted by their educators, with little opportunity to object.
The Code of Human Research Ethics of the British Psychological Society (https://www.bps.org.uk/news-and-policy/bps-code-human-research-ethics-2nd-edition-2014) offers an illustration of how this obligation is “justified.” It recognizes that students are in a dependent or unequal relationship with their lecturers and, while undergraduate participation in psychological experiments is not required for Society accreditation, it is argued that most psychological research involves human participants and that courses in psychology need to acquaint students with appropriate methods for carrying out such research. The “direct” experience of being a study subject is perceived as valuable for understanding what their own research subjects might feel when the students come to conduct their own research. Further, the Code makes the ethical argument that it could be seen as unethical for psychology students or graduates to carry out research with others unless they have been willing to participate, and have had experience of participating, in such research themselves. Despite a subsequent argument that a student’s consent must be “valid” and not coerced, especially when more sensitive research is being conducted, the final clause suggests that students should anticipate such concerns when selecting their course of study and avoid courses that might require participation. In some degree courses, penalties in the form of reductions in marks or the requirement to complete extra, more challenging assessments are employed to discourage nonparticipation. We could equally explore the range of situational factors that constrain many other population categories in their ability to freely choose to participate in research – the armed forces, children, older people, patients – and most of these populations are discussed in more detail in the following section of the handbook. 
It is important to examine the research participant’s choices in context, weighing the individual or personal reasons for participating or not against the felt communal or collective pressures that might influence those choices. And, indeed, we should not assume those choices to be static. As situations change, motives for participation can also alter.
Vulnerability and Safety

Reluctant participants may become less reluctant to be research subjects if they find themselves made more vulnerable as victims of disasters, whether caused by nature or by other humans, since highlighting their plight might assist in its alleviation. Equally, such populations could fear being made even more vulnerable by the “disclosures” of a research investigation. Migrants offer an example of such concerns, and the European Commission produced a Guidance Note to help researchers consider how best to approach research in this field (see: https://ec.europa.eu/research/participants/data/ref/h2020/other/hi/guide_research-refugees-migrants_en.pdf). Once again, balancing the personal with the communal is the key to respecting people in such difficult circumstances. Interestingly, these concerns overlap with research involving Indigenous peoples, or any research that explores ethnicity and requires accessing “hard-to-reach” populations (usually those engaged in some kind of criminal activity, though they can also include elite groups of any kind). Ethical research
requires respecting the full range of motives held by “situationally constrained” populations and avoiding any sense that they may be coerced into participation (ten Have 2018). It becomes difficult to weigh the human rights issues and the value of the research to the community against the putative vulnerability of participants, particularly during the process of research ethics review. Such dilemmas are often tested in community-based participatory action research (CBPAR) or in any action research where participant involvement must be heightened. If participation is “expected,” as in community studies or action research, the danger of enhancing vulnerability is greater. Although vulnerability as a key concept was discussed in the introduction to Part II of this handbook, what was neglected there was the role of the research ethics review process in assisting researchers to address the potential for vulnerability in their proposed research population. One could argue that the entire process of ethical review is necessarily patronizing or paternalistic – seeking to “protect” those who are perfectly capable of protecting themselves, and who would prefer to do so – and that there are enduring elements of ethics review that sustain perspectives “on” or “about” human subjects or participants that they would not endorse themselves. The idea that some types of person are particularly vulnerable, and that researchers must therefore be very careful in how they carry out research that relates to them, is enshrined in much of the regulatory culture. But it is also part of the ethical culture of researchers operating in fields with which they have gained some familiarity. Those striving to access the hard-to-reach groups discussed later in this section – users of addictive drugs and/or alcohol – and, more so, those researching people with a disability, are often well aware of when their research engagement exacerbates any vulnerability or when it might be conceived of as liberating. 
Oversensitivity by ethics reviewers can lead to severe restrictions on research of this kind, or even discourage researchers from carrying it out. Thus, the concept of vulnerability requires much closer attention, with more analysis of the variation in what people are vulnerable to, how research may relate to this, and “degrees” of vulnerability and degrees of harm. Moreover, injunctions about how “vulnerable” people ought to be treated run a serious risk of reducing their opportunities to exercise autonomous decision-making, amounting to a form of paternalism. In light of all this, a more nuanced approach towards the issue of vulnerability is required, and van den Hoonaard’s chapter in the following section offers a few suggestions about what is needed. Categorical notions of vulnerability have often been inherited from biomedicine, and relational notions have been inserted into that frame. This might not work well because power relations are not static; rather, they involve flux and flow. We touch on another fairly controversial topic in this section when researcher safety is seen to be just as ethically important as participant safety. It has been suggested that the “health and safety” of researchers is a governance issue, rather than something ethics review needs to consider. In other words, the responsibility should lie with research management since they incur the indemnity costs. That is a rather narrow view. It leaves aside a responsibility for the “care” of researchers that has consequences for research methodology and for field practices and, in any case,
may have repercussions for the reciprocal care of participants and their communities. Although the chapter from Tolich and colleagues offers a “case study” related to a specific discipline (social science/social research) and a related methodology (qualitative research/ethnography), both the concern for the physical and emotional safety of the researcher and the proposed supportive mechanism could work equally well in most other disciplines and methodologies. Researcher safety is as likely to be neglected by research ethics committees (RECs) and by research managers as is qualitative research in all its forms. For example, most RECs overseeing research “on” animals do not ask whether the researcher has any emotional concerns about “using” the animals and what they can do about them. Perhaps they should ask that and similar questions, whatever the field of study and whoever the subject/participant in focus may be.
Conclusions

Regardless of the terms applied, ethical research seeks to ensure that the researcher’s perspective, while sustained, does not lead to a neglect of the subject/participant/respondent’s perspectives, nor become hegemonic, narrow, exclusive, or subject to confirmation bias. It was once assumed that adequate objectivity or detachment would help towards “truth.” The growth of qualitative and reflexive research ensured a “turn” towards the perspective of the participant who was the subject of the research. Several of the chapters in this section endorse and expand upon that turn. Edward and Greenough’s chapter perhaps sums this up in advocating “queer literacy” as having relevance to all who have been “othered.” The ethical skills required of researchers include emotional intelligence, the management of visibility, and an understanding of the processes of self-identification and their consequences for empowerment. For all of these subjects and/or participants, parallel developments in social and economic infrastructure mediate how these emergent processes are managed. Thus, an understanding of the Internet (for knowledge gain and production), social media (for public sharing of personal experience), and legislative process (for awareness of regulatory change) must all become part and parcel of the ethical researcher’s repertoire of skills. The discourse and dialogue that must be accessed to gain insights into these processes is not confined to a closed community. Similarly, for all such research engagements the dialogical process is essential to ethical outcomes. Staying close to and engaged with the participants and their interests is vital to their personal well-being and to the population “categories” (not necessarily “communities”) of which they may be deemed representative. 
For that to be the case, most of the authors of this handbook might endorse the exhortation of Edward and Greenough: “We advocate a move from pompous prose which often renders academic texts dense and impenetrable to those outside of academia, to a use of clear language which ensures the research is accessible. In terms of language, we advocate fluency, coherency, accuracy and accessibility.” While this might not help in a research recruitment phase, it may be that only in that
way can those who have been the key subjects of the study relate to what is said and written about them and engage with the consequences as “participants.” Still, much will depend upon how effectively research outcomes are communicated to those who have been studied and to the larger society which hopefully benefits from their contribution.
References

Birch M, Miller T (2002) Encouraging participation: ethics and responsibilities, Chapter 5. In: Mauthner M, Birch M, Jessop J, Miller T (eds) Ethics in qualitative research. Sage, London, pp 91–106
Christopher PP, Garcia-Sampson LG, Stein M, Johnson J, Rich J, Lidz C (2017) Enrolling in clinical research while incarcerated: what influences participants’ decisions? Hast Cent Rep 47(2):21–29
Corrigan O, Tutton R (2006) What’s in a name? Subjects, volunteers, participants and activists in clinical research. Clin Ethics 1:101–104
Hammersley M, Traianou A (2012) Ethics in qualitative research: controversies and contexts. Sage, London
Iphofen R (2009/2011) Ethical decision making in social research: a practical guide. Palgrave Macmillan, London
Israel M (2016) A history of coercive practices: the abuse of consent in research involving prisoners and prisons in the United States. In: Adorjan M, Ricciardelli R (eds) Engaging with ethics in international criminological research. Routledge, London, pp 69–86
Oliver P (2003) The student’s guide to research ethics. Open University Press, Maidenhead, pp 3–9
ten Have H (2018) Disasters, vulnerability and human rights, Chapter 11. In: O’Mathuna DP, Dranseika V, Gordijn B (eds) Disasters: core concepts and ethical theories. Springer Open, Cham, pp 157–174
Tumilty E, Smith CM, Walker P, Treharne G (2018) Ethics unleashed: developing responsive ethical practice and review for the inclusion of non-human animal participants in qualitative research, Chapter 26. In: Iphofen R, Tolich M (eds) The SAGE handbook of qualitative research ethics. Sage Reference, London, pp 396–410
Ethical Issues in Community-Based, Participatory, and Action-Oriented Forms of Research
31
State of the Field and Future Directions Adrian Guta and Jijian Voronka
Contents
Introduction ... 562
  Reflexive Note ... 562
Background ... 563
  Defining Community-Based, Participatory, and Action-Oriented Forms of Research ... 563
  The Problem of Ethics ... 564
Key Issues ... 565
Current Debates ... 566
  Ethics Review as Barrier ... 566
  Ethics Review as Opportunity ... 566
  Beyond the Traditional Review ... 567
  Ethics and Rigor in CBPAR ... 568
  Critical Intersections ... 568
  Toward a Resolution? ... 570
Future Issues ... 570
Conclusion ... 571
References ... 572
Abstract
This chapter explores ethical issues in community-based, participatory, and action-oriented forms of research (CBPAR). These approaches to research have evolved from diverse philosophical, theoretical, and disciplinary traditions but share a commitment to bringing researchers, community members (those most affected by a social or health issue), and other relevant stakeholders together in meaningful ways to conduct research (e.g., to co-develop the research questions, collect and analyze the data, and disseminate the findings). This level of collaboration between the researcher and the researched is understood to blur traditional boundaries and safeguards, and raises important questions about ethics and scientific integrity in CBPAR. This chapter will provide an overview of key debates in the social science and research ethics literature about the suitability of contemporary research ethics review processes for CBPAR. In addition to discussing the ethics review process, this chapter will also explore emerging scholarship about what it means to be an ethical researcher working in a CBPAR tradition and negotiating different conceptions of science, ethics, and worldviews.

A. Guta (*) · J. Voronka
School of Social Work, University of Windsor, Windsor, ON, Canada
e-mail: [email protected]

© Springer Nature Switzerland AG 2020
R. Iphofen (ed.), Handbook of Research Ethics and Scientific Integrity, https://doi.org/10.1007/978-3-030-16759-2_24

Keywords
Community-based research · Participatory research · Action research · Peer researchers · Ethics · Barriers · Opportunities
Introduction

This chapter explores ethical issues in community-based, participatory, and action-oriented forms of research (henceforth CBPAR) which have grown in popularity in the past few decades among researchers, communities, charities and nonprofits, health and social services, research funders, and governments (Milton et al. 2011; Salimi et al. 2012; Woolf et al. 2016). These approaches to research have evolved from diverse philosophical, theoretical, and disciplinary traditions but share a commitment to bringing researchers, communities (those most affected by a social or health issue), and other relevant stakeholders together in meaningful ways to conduct research and generate useful evidence (e.g., to co-develop the research questions, collect and analyze the data, and disseminate the findings) (Aldridge 2016; Chevalier and Buckles 2013; Wallerstein et al. 2017). These approaches may improve the quality of the research and have broad impact through policy change that improves community-level conditions and health outcomes (Balazs and Morello-Frosch 2013; Greenhalgh et al. 2016; Jagosh et al. 2015). This level of collaboration between the researcher and the researched is understood to blur traditional boundaries and safeguards and raises important questions about ethics and scientific integrity (Fox 2003). This chapter will provide an overview of key debates in the social science and research ethics literature about the suitability of contemporary research ethics review processes for CBPAR. Beyond the review process, this chapter will also explore what it means to be an ethical researcher working in a CBPAR tradition and negotiating different conceptions of science, ethics, and worldviews.
Reflexive Note

Before entering this discussion, we believe it is important to locate ourselves as scholars in relation to CBPAR. Together we have lengthy experience conducting research on sensitive issues with socially and economically marginalized communities, navigating the ethics review process across numerous institutions, and responding to complex ethical issues in the field. Adrian Guta has training in social work, public health, and bioethics and has been involved in community-engaged research for
15 years in Canada. With HIV as a central focus, these studies have explored elements of sexuality, substance use, and other sensitive issues, and he has written about the ethical and methodological implications of CBPAR (Guta et al. 2010b, 2012, 2013a, 2016; Strike et al. 2015, 2016). His service on institutional review boards (IRBs) includes a unique HIV-specific review board at the University of Toronto with a mandate to review CBPAR. Jijian Voronka has training in sociology, equity studies, and social justice education and uses critical disability and mad studies perspectives to elucidate confluences of power that negatively affect disabled people within health, social service, research, and education systems. Her work prioritizes mental health service user knowledges through community-based, service user-led, and narrative inquiry and analysis (Voronka 2013, 2017, 2019a). She also holds experiences of “being included” in research projects as a peer researcher, most notably the At Home/Chez Soi Project, a national research demonstration project on how best to house and serve the “chronically homeless mentally ill” in Canada (Adair et al. 2016; Nelson et al. 2016; Piat et al. 2015; Silva et al. 2014; Voronka 2019b).
Background

Defining Community-Based, Participatory, and Action-Oriented Forms of Research

A comprehensive historical overview of community-based, participatory, and action-oriented forms of research, each with their own histories, is beyond the scope of this chapter, but we highlight some key elements under the broad umbrella of community-engaged research and scholarship (Calleson et al. 2005; Mikesell et al. 2013). Strand et al. (2003a) have identified three core influences: (1) the popular education model, (2) the action research model, and (3) the participatory research model. The popular education model draws on the critical pedagogy of Paulo Freire (1993) and aims to democratize education for those whose access has been restricted and whose knowledge has been excluded. As applied to research, this approach emphasizes the need to engage communities in identifying problems and generating solutions to improve their local conditions. Second, the action research model is most often associated with the pioneering work of psychologist Kurt Lewin, who advanced the claim that social research could be coupled with action components and influence social change (Adelman 1993; Lewin 1946). The emphasis on producing change within local organizations and systems has been taken up in the development of partnerships between universities, communities, nonprofits, and charitable groups. Finally, the participatory research model that emerged out of community development efforts throughout the 1960s–1970s in the global south has been highly influential (McTaggart 1997) and has been taken up in the global north to address a range of social issues (Cornwall and Jewkes 1995; Nelson and Wright 1994). These approaches recognize the importance of involving “lay” people in research for their lived expertise, which complements the technical knowledge of researchers. Most recently, the role of community has become central. While
community is a complex and contested term, it is understood to reflect shared conditions and experiences (e.g., people who use drugs, people living with HIV, people labelled with intellectual and psychiatric disabilities). Two related approaches have emerged, the first being community-based research (CBR) in the sociological tradition of Stoecker (2003, 2012), which includes community organizing, collaboratively identifying community issues, action planning, and implementation and evaluation. The second has been community-based participatory research (CBPR), which has been widely advanced for public health research (Israel et al. 2008; Minkler et al. 2003; Wallerstein et al. 2017). Offering a synthesis of multiple traditions, CBPR has promoted a set of core principles to guide research: being community driven; having community relevance; fostering collaboration and partnerships; building capacity; attending to process; recognizing multiple forms of knowledge; and being action and outcomes oriented (Israel et al. 1998; Strand et al. 2003b). Our respective approaches to CBPAR draw on theoretical insights and applied techniques from these bodies of literature but emphasize the importance of employing critical theory and reflexivity for understanding how power operates within university and community collaborations and the potential for exploitation (Guta et al. 2013a, 2014b; Voronka 2016).
The Problem of Ethics

Orientations to research which engage those most affected by a social or health issue in the research process are often described as responding to a broad ethical mandate to democratize the research process (Park 1997) and as promoting a higher standard for ethical conduct in research than is typically required (Flicker et al. 2007). Yet, the “transgressive” nature of this research, which has been described as blurring the lines between the researcher and the researched (Fox 2003), has come into conflict with normative conceptions of research ethics. Typically, research ethics review frameworks, guidelines, and review bodies rely on a positivistic biomedical approach to research ethics with narrow conceptions of autonomy, vulnerability, and harm (Hoeyer 2006; Hoeyer et al. 2005). These issues are not limited to CBPAR and have been well discussed in relation to social science research broadly (Haggerty 2004) and qualitative research in particular (Hedgecoe 2008). While we refer readers to other chapters in this volume (see issues covered in Parts I and II of this handbook in particular), we note that biomedical research is typically characterized by designs where risks for participants are known, well articulated, and mitigated through minimal contact with research “subjects” and fixed research procedures. Ethics review boards have been criticized for being more favorable to protocols that follow these standard approaches while penalizing those using more fluid and relational approaches. In the early years of the debate, Downie and Cottrell (2001, pp. 9–10) argued that traditional IRBs are not equipped to review “nontraditional” methods such as those used in CBPAR; that the review process offers little guidance with respect to community-engaged and participatory research; and that the process is “frustrating and demoralizing,” takes too long, and fails to address ongoing ethical issues. 
31
Ethical Issues in Community-Based, Participatory, and Action-Oriented. . .
565

The early literature typically contrasted traditional ethics review with the goals of community-engaged research (Blake 2007; Boser 2006, 2007; Martin 2007; Rolfe 2005; Shore 2006). In the following sections, we explore key issues and consider recent developments in the scholarship.
Key Issues

Following some early important contributions to the literature on ethical issues in CBPAR, an entire field of scholarship has emerged. This includes reviews of the literature about ethical issues in CBPAR (Fouché and Chubb 2017; Kwan and Walsh 2018; Mikesell et al. 2013; Souleymanov et al. 2016; Wilson et al. 2018) and considerable attention in ethics journals (see, e.g., the numerous articles in the Journal of Empirical Research on Human Research Ethics). Drawing on this growing body of literature, we identify some key issues which, while relevant to all forms of social science research, may pose unique challenges for CBPAR teams during the review process. As noted previously, CBPAR is likely to employ innovative community engagement strategies and research methods which blur traditional lines between the researcher and the researched. Such approaches include the use of arts-based methods (Creighton et al. 2018; Switzer et al. 2015), developing new sampling and recruitment techniques (Hanza et al. 2016; Simon and Mosavel 2010; Travers et al. 2013), and the involvement of community members in various stages of the research process (Flicker et al. 2015; Greer et al. 2018a; Newman et al. 2011; Switzer et al. 2018). From the perspective of research teams, this level of engagement enables community members to be actively involved in the research process and to collect different forms of evidence. However, IRBs may not be familiar with these approaches and may have concerns about the potential for participants (or those whom they understand to be participants, regardless of their actual role in the project) to be harmed. CBPAR often focuses on highly marginalized communities which are considered vulnerable to research exploitation because of their social location or individual characteristics (e.g., people who use drugs, people engaged in sex work, homeless people, and people living with HIV).
566
A. Guta and J. Voronka

Researchers have argued that the use of CBPAR can mitigate participants' vulnerability and ensure individual and collective needs are considered in the research process (Campbell-Page and Shaw-Ridley 2013; Perez and Treadwell 2009). A related issue which has received much attention is compensation, with the concern being that economically marginalized individuals (especially people who use drugs) will be unduly influenced to participate. The approach taken by CBPAR projects is to equitably compensate community members for their knowledge (Black et al. 2013). Finally, and often of greatest concern to IRBs, are the issues of informed consent and confidentiality. CBPAR often engages community leaders, which is understood as necessary to get community "buy-in," but concerns have been raised that other community members may feel pressured to participate when "invited" by high-profile members of their community (Brear 2018). As well, CBPAR is often conducted in partnership with community-serving organizations (e.g., the sole health center in a community that offers services for sex workers) and in organizational settings (e.g., recruitment conducted in the lobby). For IRBs this raises concerns about whether participants will feel they can refuse to participate when the invitation comes from a trusted source or healthcare provider who controls access to programs, services, and resources (Strike et al. 2016). Finally, given the involvement of community members and organizations with pre-existing relationships, concerns have been raised about the limits of confidentiality in small, close-knit, and identifiable communities (Guta et al. 2014a; Petrova et al. 2016). The concern from IRBs is that individual-level data can become identifiable to community members on the research team involved in analysis, and even to the broader community during dissemination. This is further complicated by the desire of some community members to be named in the research despite the potential risks to them (Teti et al. 2012).
Current Debates

Ethics Review as Barrier

In the years since scholars first raised concerns about the relevance of traditional IRB review processes for CBPAR (Downie and Cottrell 2001), this has remained a pressing issue (Fouché and Chubb 2017; Wilson et al. 2017). Some ongoing barriers identified are slow processes, administrative oversights, difficult communications, and long delays (Tamariz et al. 2015). Considering the need for an ethics certificate to release funds for a project to start, and pressures to produce within academic tenure and promotion cultures which may not understand CBPAR processes (e.g., why it takes so long to build relationships) (Castleden et al. 2015), delays in IRB review can have serious consequences for CBPAR teams. One reported reason for long delays has less to do with research ethics than with reputational and risk management concerns for IRBs and universities (Malone et al. 2006). Others have argued that IRB reviews can introduce new ethical issues based on poor understandings of community needs (e.g., asking for protections that may inadvertently increase risk to participants) or overlook important ethical issues because of a lack of knowledge about CBPAR and community norms (Cross et al. 2015; Guta et al. 2010b). Unfortunately, long delays and unhelpful comments from IRBs have led some researchers to see ethics review as little more than a "hoop" to jump through, one which does little to prepare them for the process issues which actually emerge in the field (Guta et al. 2016).
Ethics Review as Opportunity

In the aptly titled "The research ethics committee is not the enemy," Wolf (2010) identified misunderstanding and confusion within and between IRBs and CBPAR teams but argued there are opportunities for improving these relationships and the review process for all. Notably, some IRBs have improved their review processes by explicitly recognizing CBPAR, providing board members and staff with
education, and doing outreach to researchers and their community partners (Guta et al. 2010b, 2012). In a study of IRB stakeholders, participants described wanting more relational review models (similar to popular practices in CBPAR) that would allow them to work more closely with researchers to flag potential ethical issues, suggest how to address them, and avoid harsh and protracted reviews (Guta et al. 2012). However, they described being constrained by external factors such as a lack of funding for IRBs, which limits the amount of time they can spend on individual files (Guta et al. 2013b). In the same study mentioned above, which identified ethics review as a "hoop," some participants characterized it, despite its challenges and limitations, as an important part of the research process and a necessary requirement to protect participants and communities (Guta et al. 2016).
Beyond the Traditional Review

Following critiques that IRBs are not adequately preparing CBPAR teams with their reviews, early work by Flicker et al. (2007) called on IRBs to shift their review paradigm toward community interests and expand their review criteria to consider community involvement in the design, minimizing barriers to community participation, protecting vulnerable groups (not just individuals), risks and benefits at the community level, how unflattering data will be handled, and community-level consent and confidentiality. In parallel with the process to improve and expand the traditional review has been a growing interest in "ethical work" in CBPAR, which is done after the review, in the field, and over the life of the project and is framed as relational and contextual. Minkler (2004) has called on CBPAR researchers to reflect on whether they are (a) achieving a true "community-driven" agenda; (b) reflecting on insider-outsider tensions; (c) addressing real and perceived racism; (d) considering the limitations of "participation;" and (e) anticipating and preparing for issues involving the sharing, ownership, and use of findings for action. More recently, Banks et al. (2013) have emphasized the social justice nature of CBPAR and the need to consider the ethics of partnership, collaboration, blurring of boundaries between researchers and the researched, community rights, community conflict, and democratic participation. Some scholars have turned away from principles-based thinking to nonnormative conceptions of ethics which emphasize researcher qualities and characteristics, such as virtue ethics, which identifies key virtues (e.g., compassion and humility) that researchers should cultivate (Banks et al. 2013; Schaffer 2009).
Relatedly, but drawing on poststructural understandings of ethics attuned to history, power, and language, others have asked how CBPAR researchers cultivate an individual ethics through their interactions with members of their research team, participants, and the broader communities they work with (Guta et al. 2016; Voronka 2019a). In all, this emerging body of research recognizes that CBPAR brings researchers and communities together in ways that are not typical of other forms of research (e.g., a long partnership development phase pre-funding, numerous meetings, extended engagement in the field, and collaborative analysis, writing, and dissemination activities) which are understood to improve the quality
568
A. Guta and J. Voronka
of the research but also create opportunities for conflict, unmet expectations, and harmful power dynamics. In sectors where CBPAR is an expectation (e.g., HIV research and research with Indigenous peoples in Canada), the stakes of making partnerships succeed are very high for both academic and community partners.
Ethics and Rigor in CBPAR

The close working relationships that characterize CBPAR, and the considerable investment of time and resources by those involved, have led some to raise questions about the scientific quality of the scholarship. In a recent scoping review, Wilson et al. (2018) identified a number of issues related to validity and research integrity in community-engaged research, including the lack of appropriate expertise within a community to conduct the research, the potential for coercive relationships and their impact on the quality of the data, challenges related to balancing validity with community needs, the altering of unflattering data, and an emphasis on relationships at the expense of consistent application of rigorous research protocols. Early in the debate, Melrose (2001) asked "why would you want to" in response to the issue of rigor in action research and argued that while ethical considerations are of central concern, rigor has numerous definitions and can also be interpreted to include requirements for flexibility and responsiveness. Many of the challenges to rigor and quality in CBPAR are relevant to all forms of research, including clinical trials (e.g., when trial staff are poorly trained and administer an intervention inconsistently). However, this appears to have been made more of an issue in CBPAR because of its explicit political commitments. With respect to managing "unflattering data," we suggest CBPAR teams explore in advance the potential for such data to emerge, consider the implications for the partnering organizations and the broader community, and plan strategies for handling such data that reduce the risk of arbitrary decisions to hold back certain findings.
Finally, some of these challenges should be linked to the tension created by government funding bodies, which push universities and community-serving organizations together without adequate resources and link program funding to positive research outputs (Guta et al. 2014b). It is perhaps not surprising, then, that some community-based organizations might be reluctant to share findings which could reduce their ability to obtain future funding and continue to serve their community constituents.
Critical Intersections

In this section we focus on an issue that crosses both traditional and emergent conceptions of research ethics and with which we have personal experience. An approach for engaging community members with lived experience in CBPAR projects, especially in the HIV and mental health sectors, is popularly referred to as "peer research." Peer researchers [also known by other names such
as “community researchers”] are members of the target community being researched who have relevant lived experience (e.g., a mental health diagnosis, injecting drugs, engaging in sex work) and who actively participate as co-researchers (Greene et al. 2009; Guta et al. 2013a; Voronka et al. 2014). There is no single definition or approach to engaging peers, and in some projects, they may be central members of the research team (e.g., a co-investigator listed on the grant which funds the project), whereas in other projects their role is more instrumental (e.g., they are hired after the grant is funded to facilitate participant recruitment). IRBs which are not familiar with peer research approaches may not understand the difference between peers and research participants and may consider the paid work they are doing to fall under their purview (e.g., raising concerns about the vulnerability of peers). This may be further complicated when peers serving as project leads are paid (recognizing that unlike university-based researchers, they need financial support to attend meetings and contribute) and when teams also capture process data from peers about their involvement to share with other CBPAR teams using the approach (Greene et al. 2009). Thus, peers may be part of the research team, project staff, and participants. IRBs have also been concerned about whether peer researchers can maintain ethical standards when working in their communities in such roles as recruiters (Bean and Silva 2010; Simon and Mosavel 2010). Being part of the community is what makes peer researchers excellent recruiters and data collectors, but this can also put individual peer researchers in uncomfortable situations when researching with people they know (Guta et al. 2013a; Logie et al. 2012). 
Some safeguards for peers and participants include asking peers not to collect data from anyone they know personally (although this may undermine their ability to do their work and should be considered on a case-by-case basis) and giving participants the option to be interviewed by a peer or another member of the research team if they prefer not to share personal information with someone from their own community. Beyond the IRB review, there are additional ethical issues related to how peers are integrated, trained, and supported (Flicker et al. 2010; Guta et al. 2010a; Roche et al. 2010). Whereas graduate students may receive considerable training and support before entering the field for the first time, and throughout the life of a project, peers, it seems, are often simply expected to know what to do and manage their affects (e.g., being emotionally triggered when entering certain community spaces). While peer researchers play a very important role in projects, sometimes being the difference between community buy-in and success or failure, they have reported feeling poorly treated by members of their research teams, not having their knowledge and contributions valued, and even being penalized for not conforming to university norms despite having been recruited for their lived experience of marginalization (Damon et al. 2017; Guta et al. 2013a). The confusion peer researchers pose to IRBs often extends to other parts of universities, such as human resources and finance departments, which are unsure whether peers are employees and how to pay them. Payment for peers is a highly contested issue, with rewards often falling well below those of other research staff, despite calls for peer compensation to be fair and equitable. Complicating this are the lack of payment standards and considerations for peers receiving social supports, which could be compromised if they are seen to be employed (Greer
et al. 2018b). The lack of standards for peer work, combined with peers' structural vulnerability, can lead to unfair and exploitative working conditions and unmet promises. Damon et al. (2017) have suggested the need for a grievance process to hold researchers accountable in highly researched communities. Finally, concerns have been raised about the sustainability of peer models: projects tend to have bursts of activity in which peers are intensively engaged, followed by long periods of inactivity in which peers feel isolated, with no guarantee of future funding beyond a single project (Guta et al. 2016).
Toward a Resolution?

Scholars have offered best practices to resolve the tensions between CBPAR teams and IRBs. These include modifying existing review requirements to be more responsive to partnered research models, permitting flexibility, allowing for multistage or staggered review processes, promoting training of researchers and IRB stakeholders in each other's respective languages and needs, reducing the requirements for final research instruments, and adopting strategies from emergency clinical research to better respond to emerging challenges (Cross et al. 2015). We find these to be promising options but add that the work should start at higher levels if it is to be taken up by individual IRBs. For example, the governing ethics framework in Canada is the Tri-Council Policy Statement: Ethical Conduct for Research Involving Humans (TCPS-2) (2014b), which is in its second iteration and recognizes community-based and participatory forms of research, research partnerships, and the need to adapt to community-level practices and culture. The TCPS-2 recognizes that, "In some cases, participants hold equal or greater power in the researcher-participant relationship, such as in community-based and/or organizational research when a collaborative process is used to define and design the research project and questions. . ." (see TCPS-2 Chap. 10, Qualitative Research 2014a, p. 142). Having the governing ethics document not only acknowledge CBPAR approaches but also advance a nuanced understanding of power holds the potential for systemic change, as all Canadian IRBs must be in line with the TCPS-2. This may be more challenging in the United States, with the Belmont Report, multiple regulations, and state laws, and internationally, with hundreds of competing protections (International Compilation of Human Research Protections 2019).
We do not mean to suggest that IRB reviews should be standardized, especially globally, but there may be ways of streamlining them that would benefit CBPAR and researchers in general.
Future Issues

As CBPAR continues to move from the "margins to the mainstream" (Horowitz et al. 2009), we are concerned that increased inducements for researchers to engage communities have not been matched with improved training and supports to ensure
that ethical standards are met. Indeed, CBPAR may be moving away from its original social justice goals toward becoming enmeshed in the academic/industrial research complex. There remain ongoing concerns about tokenism and inauthentic partnerships, the increased burden placed on individuals, communities, and organizations, and failed promises of capacity building and reciprocity for community members. A key feature of some traditions within the CBPAR umbrella is the goal of building the capacity of communities to conduct their own independent research without university-based researchers (e.g., the Canadian HIV community-based research movement and the mental health survivor research movement), but the trend has been toward more complex research designs, larger teams (comprising numerous researchers and partnering organizations), and few opportunities for community-based organizations to take the lead (for a discussion of ethical issues in survivor research, see Faulkner 2004). If anything, the need for both a community and a university partner has been reinforced. There have been calls in the literature for nuanced explorations of power within CBPAR teams, with Muhammad et al. (2015) calling for greater reflexivity and resistance in the academy to promote social justice research. Kwan and Walsh (2018, p.
370) have identified a number of gaps in the CBPAR literature related to ethical guidance in "(i) balancing community values, needs, and identity with those of the individual; (ii) negotiating power dynamics and relationships; (iii) working with stigmatized populations; (iv) negotiating conflicting ethical requirements and expectations from [IRBs]; and (v) facilitating social action emerging from the findings." We invite scholars to move beyond uncritical celebrations of community involvement and claims that "the entire community was involved" to consider what is made possible by CBPAR and the risks and broader impacts on communities when these things go wrong (Mayan and Daum 2016). We end with some challenging and provocative questions raised by Glass et al. (2018) about the possibility of doing ethical research given "the ethicality of research practices and universities themselves," questions that CBPAR cannot address on its own. This invites readers to think about the ways in which certain philosophies, approaches, and methods become ethically problematic within oppressive research contexts, and to recognize that the work of improving IRB reviews is about more than expediting the flow of research dollars; it is about promoting a conception of "research ethics as a praxis of engagement with aggrieved communities in healing from and redressing historical trauma" (Glass et al. 2018, p. 503).
Conclusion

In this chapter we have identified and discussed major ethical issues in CBPAR, an umbrella term we have used to cover a range of community-based, participatory, and action-oriented forms of research characterized by the coming together of researchers and those who are typically researched. There has been considerable interest across a range of disciplines in the potential for such research partnerships to improve the quality of research and its usefulness. CBPAR has pushed the boundaries of traditional methods and expanded the role of the academic to include
community engagement. However, these approaches have raised new ethical issues both in terms of formal IRB reviews which rely on principles and procedures and for those immersed in the work who grapple with the ethical dimensions of their research practice. There are no easy solutions or quick fixes in complex projects which bring highly privileged university-based researchers into partnerships with community-based organizations (sometimes running on shoestring budgets) and with individual community members who may be socially and economically marginalized. Rather, what is created is an opportunity for ethical imagination and thinking about how to promote and reconcile (where possible) multiple conceptions of ethics.
References

Adair CE, Kopp B, Distasio J, Hwang SW, Lavoie J, Veldhuizen S, Voronka J, Kaufman AF, Somers JM, SR LB, Cote S, Addorisio S, Matte D, Goering P (2016) Housing quality in a randomized controlled trial of Housing First for homeless individuals with mental illness: correlates and associations with outcomes. J Urban Health 93(4):682–697. https://doi.org/10.1007/s11524-016-0062-9
Adelman C (1993) Kurt Lewin and the origins of action research. Educ Action Res 1(1):7–24
Aldridge J (2016) Participatory research: working with vulnerable groups in research and practice. Policy Press, Bristol
Balazs CL, Morello-Frosch R (2013) The three Rs: how community-based participatory research strengthens the rigor, relevance, and reach of science. Environ Justice 6(1):9–16
Banks S, Armstrong A, Carter K, Graham H, Hayward P, Henry A et al (2013) Everyday ethics in community-based participatory research. Contemp Soc Sci 8(3):263–277
Bean S, Silva DS (2010) Betwixt & between: peer recruiter proximity in community-based research. Am J Bioeth 10(3):18–19
Black KZ, Hardy CY, De Marco M, Ammerman AS, Corbie-Smith G, Council B et al (2013) Beyond incentives for involvement to compensation for consultants: increasing equity in CBPR approaches. Prog Community Health Partnersh 7(3):263
Blake MK (2007) Formality and friendship: research ethics review and participatory action research. ACME Int EJ Crit Geogr 6(3):411–421
Boser S (2006) Ethics and power in community-campus partnerships for research. Action Res 4(1):9–21
Boser S (2007) Power, ethics, and the IRB: dissonance over human participant review of participatory research. Qual Inq 13(8):1060–1074
Brear M (2018) Ethical research practice or undue influence? Symbolic power in community- and individual-level informed consent processes in community-based participatory research in Swaziland. J Empir Res Hum Res Ethics 13(4):311–322
Calleson DC, Jordan C, Seifer SD (2005) Community-engaged scholarship: is faculty work in communities a true academic enterprise? Acad Med 80(4):317–321
Campbell-Page RM, Shaw-Ridley M (2013) Managing ethical dilemmas in community-based participatory research with vulnerable populations. Health Promot Pract 14(4):485–490
Canadian Institutes of Health Research, Natural Sciences and Engineering Research Council of Canada, and Social Sciences and Humanities Research Council of Canada (2014a) Tri-council policy statement: ethical conduct for research involving humans. Chapter 10. Qualitative research, pp 139–142, Ottawa. Retrieved from http://www.pre.ethics.gc.ca
Canadian Institutes of Health Research, Natural Sciences and Engineering Research Council of Canada, and Social Sciences and Humanities Research Council of Canada (2014b) Tri-council policy statement: ethical conduct for research involving humans. Ottawa. Retrieved from http://www.pre.ethics.gc.ca
Castleden H, Sylvestre P, Martin D, McNally M (2015) "I don't think that any peer review committee... would ever 'get' what I currently do": how institutional metrics for success and merit risk perpetuating the (re)production of colonial relationships in community-based participatory research involving indigenous peoples in Canada. Int Indigenous Policy J 6(4):2
Chevalier JM, Buckles DJ (2013) Participatory action research: theory and methods for engaged inquiry. Routledge, Abingdon
Cornwall A, Jewkes R (1995) What is participatory research? Soc Sci Med 41(12):1667–1676
Costa L, Voronka J, Landry D, Reid J, Mcfarlane B, Reville D, Church K (2012) Recovering our stories: a small act of resistance. Stud Soc Justice 6(1):85–101
Creighton G, Oliffe JL, Ferlatte O, Bottorff J, Broom A, Jenkins EK (2018) Photovoice ethics: critical reflections from men's mental health research. Qual Health Res 28(3):446–455
Cross JE, Pickering K, Hickey M (2015) Community-based participatory research, ethics, and institutional review boards: untying a Gordian knot. Crit Sociol 41(7-8):1007–1026
Damon W, Callon C, Wiebe L, Small W, Kerr T, McNeil R (2017) Community-based participatory research in a heavily researched inner city neighbourhood: perspectives of people who use drugs on their experiences as peer researchers. Soc Sci Med 176:85–92
Downie J, Cottrell B (2001) Community-based research ethics review: reflections on experience and recommendations for action. Health Law Rev 10(1):8–17
Faulkner A (2004) The ethics of survivor research: guidelines for the ethical conduct of research carried out by mental health service users and survivors. Policy Press, Bristol
Flicker S, Travers R, Guta A, McDonald S, Meagher A (2007) Ethical dilemmas in community-based participatory research: recommendations for institutional review boards. J Urban Health 84(4):478–493
Flicker S, Roche B, Guta A (2010) Peer research in action 3: ethical issues. Wellesley Institute, Toronto
Flicker S, O'Campo P, Monchalin R, Thistle J, Worthington C, Masching R, Guta A, Pooyak S, Whitebird W, Thomas C (2015) Research done in "a good way": the importance of indigenous elder involvement in HIV community-based research. Am J Public Health 105(6):1149–1154
Fouché CB, Chubb LA (2017) Action researchers encountering ethical review: a literature synthesis on challenges and strategies. Educ Action Res 25(1):23–34
Fox NJ (2003) Practice-based evidence: towards collaborative and transgressive research. Sociology 37(1):81–102
Freire P (1993) Pedagogy of the oppressed: new revised 20th anniversary edition. Continuum, New York
Glass RD, Morton JM, King JE, Krueger-Henney P, Moses MS, Sabati S, Richardson T (2018) The ethical stakes of collaborative community-based social science research. Urban Educ 53(4):503–531
Greene S, Ahluwalia A, Watson J, Tucker R, Rourke SB, Koornstra J, Sobota M, Monette L, Byers S (2009) Between skepticism and empowerment: the experiences of peer research assistants in HIV/AIDS, housing and homelessness community-based research. Int J Soc Res Methodol 12(4):361–373
Greenhalgh T, Jackson C, Shaw S, Janamian T (2016) Achieving research impact through co-creation in community-based health services: literature review and case study. Milbank Q 94(2):392–429
Greer AM, Pauly B, Scott A, Martin R, Burmeister C, Buxton J (2018a) Paying people who use illicit substances or 'peers' participating in community-based work: a narrative review of the literature. Drugs Educ Prev Policy:1–13
Greer AM, Amlani A, Pauly B, Burmeister C, Buxton JA (2018b) Participant, peer and PEEP: considerations and strategies for involving people who have used illicit substances as assistants and advisors in research. BMC Public Health 18(1):834
Guta A, Flicker S, Roche B (2010a) Peer research in action 2: management, support and supervision. Wellesley Institute, Toronto
Guta A, Wilson MG, Flicker S, Travers R, Mason C, Wenyeve G, O'Campo P (2010b) Are we asking the right questions? A review of Canadian REB practices in relation to community-based participatory research. J Empir Res Hum Res Ethics 5(2):35–46
Guta A, Nixon S, Gahagan J, Fielden S (2012) "Walking along beside the researcher": how Canadian REBs/IRBs are responding to the needs of community-based participatory research. J Empir Res Hum Res Ethics 7(1):17–27
Guta A, Flicker S, Roche B (2013a) Governing through community allegiance: a qualitative examination of peer research in community-based participatory research. Crit Public Health 23(4):432–451
Guta A, Nixon SA, Wilson MG (2013b) Resisting the seduction of "ethics creep": using Foucault to surface complexity and contradiction in research ethics review. Soc Sci Med 98:301–310
Guta A, Flicker S, Travers R, Wilson M, Strike S, Gaudry S, Binder L, O'Campo P, Kuzmanovic D (2014a) HIV CBR ethics fact sheet #6: confidentiality in close-knit communities. Improving the accessibility of research ethics boards for HIV community-based research in Canada. Toronto, ON
Guta A, Strike C, Flicker S, Murray SJ, Upshur R, Myers T (2014b) Governing through community-based research: lessons from the Canadian HIV research sector. Soc Sci Med 123:250–261
Guta A, Murray SJ, Strike C, Flicker S, Upshur R, Myers T (2016) Governing well in community-based research: lessons from Canada's HIV research sector on ethics, publics and the care of the self. Public Health Ethics 10(3):315–328
Haggerty KD (2004) Ethics creep: governing social science research in the name of ethics. Qual Sociol 27(4):391–414
Hanza MM, Goodson M, Osman A, Capetillo MDP, Hared A, Nigon JA, Meiers SJ, Weis JA, Wieland ML, Sia IG (2016) Lessons learned from community-led recruitment of immigrants and refugee participants for a randomized, community-based participatory research study. J Immigr Minor Health 18(5):1241–1245
Hedgecoe A (2008) Research ethics review and the sociological research relationship. Sociology 42(5):873–886
Hoeyer K (2006) "Ethics wars": reflections on the antagonism between bioethicists and social science observers of biomedicine. Hum Stud 29(2):203–227
Hoeyer K, Dahlager L, Lynöe N (2005) Conflicting notions of research ethics: the mutually challenging traditions of social scientists and medical researchers. Soc Sci Med 61(8):1741–1749
Horowitz CR, Robinson M, Seifer S (2009) Community-based participatory research from the margin to the mainstream: are researchers prepared? Circulation 119(19):2633–2642
International Compilation of Human Research Protections (2019) Compiled by: Office for Human Research Protections, U.S. Department of Health and Human Services. Retrieved from www.hhs.gov
Israel BA, Schulz AJ, Parker EA, Becker AB (1998) Review of community-based research: assessing partnership approaches to improve public health. Annu Rev Public Health 19(1):173–202
Israel BA, Schulz AJ, Parker EA, Becker AB (2008) Critical issues in developing and following community-based participatory research principles. In: Community-based participatory research for health. Jossey-Bass, San Francisco, pp 47–62
Jagosh J, Bush PL, Salsberg J, Macaulay AC, Greenhalgh T, Wong G, Cargo M, Green LW, Herbert CP, Pluye P (2015) A realist evaluation of community-based participatory research: partnership synergy, trust building and related ripple effects. BMC Public Health 15(1):725
Kwan C, Walsh CA (2018) Ethical issues in conducting community-based participatory research: a narrative review of the literature. Qual Rep 23(2):369–386
Lewin K (1946) Action research and minority problems. J Soc Issues 2(4):34–46
Logie C, James L, Tharao W, Loutfy MR (2012) Opportunities, ethical challenges, and lessons learned from working with peer research assistants in a multi-method HIV community-based research study in Ontario, Canada. J Empir Res Hum Res Ethics 7(4):10–19
Malone RE, Yerger VB, McGruder C, Froelicher E (2006) "It's like Tuskegee in reverse": a case study of ethical tensions in institutional review board review of community-based participatory research. Am J Public Health 96(11):1914–1919
31
Ethical Issues in Community-Based, Participatory, and Action-Oriented. . .
575
Martin DG (2007) Bureacratizing ethics: institutional review boards and participatory research. ACME Int E-J Crit Geogr 6(3):319–328 Mayan MJ, Daum CH (2016) Worth the risk? Muddled relationships in community-based participatory research. Qual Health Res 26(1):69–76 McTaggart R (ed) (1997) Participatory action research: international contexts and consequences. SUNY Press, Albany Melrose MJ (2001) Maximizing the rigor of action research: why would you want to? How could you? Field Methods 13(2):160–180 Mikesell L, Bromley E, Khodyakov D (2013) Ethical community-engaged research: a literature review. Am J Public Health 103(12):e7–e14 Milton B, Attree P, French B, Povall S, Whitehead M, Popay J (2011) The impact of community engagement on health and social outcomes: a systematic review. Commun Develop J 47 (3):316–334 Minkler M (2004) Ethical challenges for the “outside” researcher in community-based participatory research. Health Educ Behav 31(6):684–697 Minkler M, Blackwell AG, Thompson M, Tamir H (2003) Community-based participatory research: implications for public health funding. Am J Public Health 93(8):1210–1213 Muhammad M, Wallerstein N, Sussman AL, Avila M, Belone L, Duran B (2015) Reflections on researcher identity and power: the impact of positionality on community based participatory research (CBPR) processes and outcomes. Crit Sociol 41(7-8):1045–1063 Nelson N, Wright S (1994) Power and participatory development: theory and practice. Intermediate Technology Publications, Bradford Nelson G, Macnaughton E, Curwood SE, Egalité N, Voronka J, Fleury MJ et al (2016) Collaboration and involvement of persons with lived experience in planning Canada’s At Home/Chez Soi project. Health Soc Care Commun 24(2):184–193 Newman SD, Andrews JO, Magwood GS, Jenkins C, Cox MJ, Williamson DC (2011) Peer reviewed: community advisory boards in community-based participatory research: a synthesis of best processes. 
Prev Chronic Dis 8(3) Park P (1997) Participatory research, democracy, and community. Pract Anthropol 19(3):8–13 Perez LM, Treadwell HM (2009) Determining what we stand for will guide what we do: Community priorities, ethical research paradigms, and research with vulnerable populations. Am J Public Health 99(2):201–204 Petrova E, Dewing J, Camilleri M (2016) Confidentiality in participatory research: Challenges from one study. Nurs Ethics 23(4):442–454 Piat M, Polvere L, Kirst M, Voronka J, Zabkiewicz D, Plante MC, Isaak C, Nolin D, Nelson G, Goering P (2015) Pathways into homelessness: Understanding how both individual and structural factors contribute to and sustain homelessness in Canada. Urban Stud 52(13):2366–2382 Roche B, Guta A, Flicker S (2010) Peer research in action 1: models of practice. Wellesley Institute, Toronto Rolfe G (2005) Colliding discourses: Deconstructing the process of seeking ethical approval for a participatory evaluation project. J Res Nurs 10(2):231–233 Salimi Y, Shahandeh K, Malekafzali H, Loori N, Kheiltash A, Jamshidi E, Frouzan AS, Majdzadeh R (2012) Is community-based participatory research (CBPR) useful? A systematic review on papers in a decade. Int J Prev Med 3(6):386 Schaffer MA (2009) A virtue ethics guide to best practices for community-based participatory research. Prog Community Health Partnersh 3(1):83–90 Shore N (2006) Re-conceptualizing the Belmont Report: a community-based participatory research perspective. J Community Pract 14(4):5–26 Silva DS, Bourque J, Goering P, Hahlweg KA, Stergiopoulos V, Streiner DL, Voronka J (2014) Arriving at the end of a newly forged path: lessons from the safety and adverse events committee of the at home/ Chez Soi Project. IRB 36(5):1–7 Simon C, Mosavel M (2010) Community members as recruiters of human subjects: ethical considerations. Am J Bioeth 10(3):3–11
576
A. Guta and J. Voronka
Souleymanov R, Kuzmanović D, Marshall Z, Scheim AI, Mikiki M, Worthington C, Millson MP (2016) The ethics of community-based research with people who use drugs: results of a scoping review. BMC Med Ethics 17(1):25 Stoecker R (2003) Community-based research: from practice to theory and back again. Michigan J Commun Serv Learn 9(2). N/A Stoecker R (2012) Community-based research and the two forms of social change. J Rural Soc Sci 27(2):83 Strand K, Marullo S, Cutforth NJ, Stoecker R, Donohue P (2003a) Principles of best practice for community-based research. Michigan J Commun Serv Learn 9(3):5–15 Strand KJ, Cutforth N, Stoecker R, Marullo S, Donohue P (2003b) Community-based research and higher education: principles and practices. Wiley, San Francisco Strike C, Guta A, De Prinse K, Switzer S, Carusone SC (2016) Opportunities, challenges and ethical issues associated with conducting community-based participatory research in a hospital setting. Research Ethics 12(3):149–157 Switzer S, Guta A, de Prinse K, Carusone SC, Strike C (2015) Visualizing harm reduction: methodological and ethical considerations. Soc Sci Med 133:77–84 Switzer S, Carusone SC, Guta A, Strike C (2018) A seat at the table: designing an activity-based community advisory committee with people living with HIV who use drugs. Qual Health Res https://doi.org/10.1177/1049732318812773 (e-pub ahead of print) Tamariz L, Medina H, Taylor J, Carrasquillo O, Kobetz E, Palacio A (2015) Are research ethics committees prepared for community-based participatory research? J Empir Res Hum Res Ethics 10(5):488–495 Teti M, Murray C, Johnson L, Binson D (2012) Photovoice as a community-based participatory research method among women living with HIV/AIDS: ethical opportunities and challenges. J Empir Res Hum Res Ethics 7(4):34–43 Travers R, Pyne J, Bauer G, Munro L, Giambrone B, Hammond R, Scanlon K (2013) ‘Community control’ in CBPR: challenges experienced and questions raised from the Trans PULSE project. 
Action Res 11(4):403–422 Voronka J (2013) Rerouting the weeds: the move from criminalizing to pathologizing “troubled youth” in the review of the roots of youth violence. In: Le Francois B, Menzies R, Reaume G (eds) Mad matters: a critical reader in Canadian mad studies. Canadian Scholars Press, Toronto, pp 309–3222 Voronka J (2016) The politics of ‘people with lived experience’ experiential authority and the risks of strategic essentialism. Philos Psychiatry Psychol 23(3):189–201 Voronka J (2017) Turning mad knowledge into affective labor: the case of the peer support worker. Am Q 69(2):333–338 Voronka J (2019a) Slow death through evidence-based research. In: Daley A, Costa L, Beresford P (eds) Madness, violence, power. University of Toronto Press, Toronto (in press) Voronka J (2019b) Storytelling beyond the psychiatric gaze: resisting resilience and recovery narratives. Can J Disabil Stud 8(2):25 p (in press) Voronka J, Wise Harris D, Grant J, Komaroff J, Boyle D, Kennedy A (2014) Un/helpful help and its discontents: peer researchers paying attention to street life narratives to inform social work policy and practice. Soc Work Mental Health 12(3):249–279 Wallerstein N, Duran B, Minkler M, Oetzel JG (eds) (2017) Community-based participatory research for health: advancing social and health equity. Wiley, San Francisco Wilson E, Kenny A, Dickson-Swift V (2018) Ethical challenges in community-based participatory research: a scoping review. Qual Health Res 28(2):189–199 Wolf LE (2010) The research ethics committee is not the enemy: oversight of community-based participatory research. J Empir Res Hum Res Ethics 5(4):77–86 Woolf SH, Zimmerman E, Haley A, Krist AH (2016) Authentic engagement of patients and communities can transform research, practice, and policy. Health Aff 35(4):590–594
32
“Vulnerability” as a Concept Captive in Its Own Prison

Will C. van den Hoonaard
Contents
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 578
Vulnerability in Research Ethics Codes . . . . . . . . . . . . . . . . . . . 580
Explicit Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 581
Justification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 581
Mention of Specific Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . 582
Explanations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 583
Toward a New Framework of Defining and Operationalizing Vulnerability in Research Ethics Codes . . . . . . . . . . . . . . . . . . . 583
Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 587
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 587
Abstract
The concept of vulnerability appears to capture many important ethical aspects of research. However, it is one of the most ill-defined and least operationalizable concepts in research ethics codes. The argument of this chapter is that the concept is archaic and outdated. Its overuse has made it lose its force. The medical research ethics codes (which led to the establishment of research ethics codes in general) have employed this concept over a long period, but the concept is of relatively little use in the social sciences. Research ethics codes provide inadequate definitions, justifications, and explanations for its application. The list of vulnerable groups is either inaccurate, inappropriate, or incomplete. The chapter concludes with practical suggestions that members of research ethics committees might wish to follow to assist in resolving issues associated with the concept of vulnerability. Unless amended, vulnerability, as conceived in ethics codes, will remain captive in its own conceptual chains.

W. C. van den Hoonaard (*)
Department of Sociology, University of New Brunswick, Fredericton, NB, Canada
e-mail: [email protected]

© Springer Nature Switzerland AG 2020
R. Iphofen (ed.), Handbook of Research Ethics and Scientific Integrity, https://doi.org/10.1007/978-3-030-16759-2_25
Keywords
Vulnerability · Disadvantaged groups · Research ethics codes · Inadequacy of medical framework of research ethics
Introduction

The concept of vulnerability is derived from the highly vaunted medical research ethics framework which, in turn, forms the root of all contemporary formal research ethics codes. The concept persists to this day – a sad remnant of a time when unreliable medical experiments with human subjects were de rigueur – and, thanks to the US Belmont Report of 1979, it has been given an ongoing life. My argument in this chapter is that vulnerability, as currently lodged in research ethics codes, should be retired. At present, it sits in its own prison, unable to function as originally intended. Vulnerability, as Levine et al. (2004) explain, is one of the least examined concepts in research ethics and yet one of the most used in relation to how researchers connect to research participants and how ethics committees weigh the relevance of research proposals. What is more, there seems to be no other concept in research ethics codes that ensnares as many other facets as vulnerability: it touches on informed consent, research design, the interaction of the researcher with participants, voluntariness, the prevention or reduction of harm, obligations to research participants, human dignity, justice, and the storing of research data. This chapter demonstrates how the concept of vulnerability must be shifted away from its original meaning and its applied use in research ethics codes. It underscores the archaic use of the concept in current research ethics codes and the fetishization of vulnerability. It explores how several research ethics codes employ the concept. It also notes that, with the frequent use of “vulnerability,” the concept is losing its force.
Ironically, at the same time, the emphasis that ethics codes place on vulnerability has laid the basis for stereotyping “the vulnerable.” The chapter follows through on what some of the most prevalent ethics codes offer in terms of definitions, justifications, and explanations of vulnerability. I have argued elsewhere that the concept should be abandoned by researchers in the social sciences (van den Hoonaard 2018); here I broaden the argument to suggest clarification of its use in all research ethics codes. What is needed, the chapter suggests, is to move urgently toward a new framework that relocates vulnerability in research ethics codes and, hence, in research ethics review processes. Toward the end of the chapter, I propose nine ways that vulnerability as a concept can become more relevant in today’s world. For all practical purposes, the prevailing definition and use of vulnerability in the ethics codes express archaic and anachronistic views, not only wholly out of step with contemporary discourse about vulnerability but also impractical in use. Gregor Bankoff (2011) offers the following statements (his original quotes refer to the perceived vulnerability of regions and zones across the globe, but I have substituted “individuals” in his texts):
[Vulnerability] might reflect particular cultural values to do with the way in which certain [individuals] are usually imagined. (Bankoff 2011: 19)
He further argues that: ... vulnerability form[s] part of one and the same essentializing and generalizing cultural discourse that denigrates [individuals] as disease-ridden, poverty-stricken and disaster-prone. (Bankoff 2011: 19)
Vulnerability has thus in recent years taken on a new, unintended meaning that has given rise to stereotypes and the assumption of a lack of agency on the part of the vulnerable. It is not difficult to see how the concept portrays such individuals and groups as stereotypically “deficient” in some respects. There is an important need to remove “vulnerability” as a persistent category in research ethics codes. This new take on “vulnerability” will constitute a major challenge to researchers and research ethics committees whose work is bound up with promoting the archaic notions of vulnerability. The removal of “vulnerability” from its usual spot in research ethics codes has several implications. We have already alluded to the declining force of the concept, the basis of stereotypical thinking, and its being out of step with contemporary discourse. Why maintain “vulnerability” when it proves impossible to establish its objective reality? All the concept now serves is to fan deeply held subjective notions of vulnerability – ones that seem to cast a pall on innumerable groups whom researchers and ethics reviewers have defined a priori as “vulnerable.” Such a list of groups typically includes pregnant women, human fetuses and neonates, prisoners, children, cognitively impaired persons, students and employees, minorities, economically and/or educationally disadvantaged persons, AIDS/HIV+ subjects, terminally ill subjects, and women undergoing in vitro fertilization, to name the most obvious examples to be found in research ethics codes. There has been an exponential increase in the number of these individuals and social groups. The fetishization of vulnerability is now part and parcel of the research ethics review process. This fetish underscores our present era of perceived susceptibility to omnipresent dangers, helplessness, and powerlessness. There are researchers who would welcome abandoning the concept altogether. Levine et al.
(2004), for example, argue that there are “unequal power relationships between politically and economically disadvantaged groups and investigators or sponsors,” in addition to the fact that “[s]o many groups are now considered to be vulnerable ... that the concept has lost force.” Levine et al. (2004) wonder whether, despite the concept’s insistent appearance in research ethics codes, it has become so stereotypical that it has lost its efficacy. They also argue that classifying groups as vulnerable not only stereotypes them but also may not reliably protect many individuals from harm. However, “certain individuals require ongoing protections of the kind already established in law and regulation...”
One wonders whether assumptions about vulnerability might betray an arrogance on the part of policy-makers who categorically decide who is, or is not, vulnerable. More significant, how can we still acknowledge the “vulnerability” of the wheelchair-bound person (an iconic stereotype of a vulnerable person) in an age when we could all claim to be vulnerable in some way? Or, what (new) understanding can we derive from the critical-disability movement literature that aims to restore the dignity of persons? Or, is there a way of restoring a truly viable meaning of vulnerability? One should not doubt that labeling some people as vulnerable came from good intentions, but it still “others” people, making them into something separate on a binary that does not exist, i.e., vulnerable/normal and disabled/normal. “The unexamined idea of another’s vulnerability,” says a reader of an early draft of this chapter (Butler-Hallett 2018), “too easily becomes a weapon to use against them, however unintentionally.” The awkwardness of the term “vulnerability” gains significance even when it is known that biologic medicines generally do not work as well in female bodies as in male ones. Females are often barred from medical studies because they are of childbearing age and no one wants to be responsible for another problem like thalidomide. Some women take oral birth control every day; other women are infertile through accident or design. Michelle Butler-Hallett (2018) asks why they should not be part of a study. Would a menstrual cycle, in a medical study, be considered a vulnerability? She also notes that, in trying to protect women of childbearing age and their theoretical fetuses, medical researchers are more likely to produce drugs that behave one way in male bodies and another, unexpected way in female bodies. The female patients then suffer needlessly (see Shayna Korol 2018). Within its original conceptual space, Bracken-Roche et al.
(2017) aver that the concept of vulnerability signaled “mindfulness for researchers and research ethics boards to the possibility that some participants may be at higher risk of harm or wrong, but the concept has taken a life beyond” its intended meaning. When it comes to defining vulnerability objectively, international and national ethics codes offer very meager definitions; they seem unable to establish a satisfactory meaning of the concept. Of the 11 ethics codes Bracken-Roche et al. have examined, 5 do not provide any details, namely, the EU Clinical Trials Directive and Clinical Trials Regulation, the ICH GCP, the UK Research Governance Framework, and the US Common Rule. The remaining six codes offer us practical instances that involve vulnerability. We shall now turn to explore the particular place and significance of these specific research ethics codes.
Vulnerability in Research Ethics Codes

There are plenty of grounds to be confused about the meaning and application of the term vulnerability as presented in research ethics codes. As affirmed by Bracken-Roche et al. (2017), very few policies and guidelines “explicitly define vulnerability.” Most policies rely “on implicit assumptions and the delineation of vulnerable groups and sources of vulnerability.”
Explicit Definitions

According to the CIOMS (Council for International Organizations of Medical Sciences), “‘[v]ulnerability’ refers to a substantial incapacity to protect one’s own interests owing to such impediments as lack of capability to give informed consent, lack of alternative means of obtaining medical care or other expensive necessities, or being a junior or subordinate member of a hierarchical group” ([18], p. 18). The ICH GCP (International Conference on Harmonisation – Good Clinical Practice) offers a slightly different perspective, defining vulnerable subjects as “[i]ndividuals whose willingness to volunteer in a clinical trial may be unduly influenced by the expectation, whether justified or not, of benefits associated with participation, or of a retaliatory response from senior members of a hierarchy in case of refusal to participate” ([23], Art. 1.61). Aside from visualizing a medical setting in a hospital with strict hierarchical rules, it is not easy for social scientists to imagine the conditions of vulnerability that might apply to their own research. When the Canadian Panel on Research Ethics (2006) was apprised of this individual variant of vulnerability, it quickly incorporated the social dimensions or contexts of vulnerability. As a consequence, the Panel (responsible for drafting and promoting the TCPS 2 (Tri-Council Policy Statement: Ethical Conduct for Research Involving Humans)) (CIHR et al. 2010) then readily and explicitly extended the idea that vulnerability is context-dependent and is experienced “to different degrees and at different times, depending on [an individual’s or group’s] circumstances” ([24], p. 210).
In contrast to the definitions found in the CIOMS or the ICH GCP, the Declaration of Helsinki, the Australian National Statement, and the Belmont Report, respectively, provide a nuanced approach to participants who are “particularly vulnerable” [17], “more-than-usually vulnerable” [23], or “especially vulnerable” [26].
Justification

Bracken-Roche et al. (2017) discovered that the guidelines or policies in research ethics codes provide some “explicit ethical argumentation relating to vulnerability and/or vulnerable subjects.” The CIOMS grounds the protection of dependent or vulnerable persons in the principles of respect for persons and justice, while the UNESCO Declaration summons up respect for human vulnerability and personal integrity as “a fundamental principle in this framework.” The Australian National Statement relates the need to protect vulnerable people to “respect for persons, research merit and integrity, justice, and beneficence.” Canada’s TCPS2 also adds the notions of justice and concern for welfare. The Belmont Report repeats several elements found in other codes, including respect for persons, beneficence, and justice, all of which call forth special obligations relating to vulnerability, but the justifications are not clearly spelled out.
Mention of Specific Groups

Surely, one of the most striking findings by Bracken-Roche et al. (2017) is that all research ethics codes provide examples of vulnerable individuals or groups, but none provide any supporting explanation of how those categories of individuals or groups were identified as “vulnerable.” Further, only four policies provide any explanation of why such groups are vulnerable, namely, the CIOMS, the Australian National Statement, TCPS2, and the US Common Rule (Bracken-Roche et al. 2017: Table 2). There are 11 major groups for whom the ethics codes offer no explanation or where the explanation is unclear. The EU Clinical Trials Directive and Clinical Trials Regulation, the ICH GCP, the UK Research Governance Framework, and the US Common Rule, however, explain the vulnerability of 28 other categories. These include subordinate members of hierarchies or relationships; economically disadvantaged persons; institutionalized persons; women; countries or communities with limited resources; educationally disadvantaged persons; neonates in intensive care; patients in terminal care; participants and researchers in research that uncovers illegal activities; those with diminished capacity for self-determination; the least organizationally developed communities; groups defined by patient/participant condition; children, minors, or young people; persons with mental illness or mental health problems; elderly persons; persons with limited (or no) freedom or capacity to consent; pregnant or breastfeeding women; mentally disabled persons; persons who have serious, potentially disabling, or life-threatening diseases; very sick persons; and persons with a cognitive impairment or intellectual disability. Both lists, however, contain both medically derived definitions and socially derived meanings of vulnerability. Bracken-Roche et al.
(2017) remind us that the groups most frequently identified are children, minors, or young people, prisoners, as well as persons with mental health issues, patients in emergency settings, and certain ethnocultural, racial, or ethnic minority groups. It is unclear in some cases, however, how the vulnerability of young people is related to capacity and consent. For over half of the groups, Bracken-Roche et al. (2017) found that an explanation of their vulnerability was “unclear or lacking entirely.” What is noteworthy is that ethics codes list some groups as vulnerable on account of their historical vulnerability: they “have, at times, been treated unfairly and inequitably in research, or have been excluded from research opportunities.” The operationalization of vulnerability on account of historical and social background or processes is problematic. First, historically disadvantaged groups have been known to work against those disadvantages and can later emerge as groups with a new spirit and a less vulnerable status. Second, aside from being inaccurate, the lists are also incomplete. The world continues to see new groups, such as the 46 million slaves today, found in such countries as China, India, Russia, Uzbekistan, and Pakistan (https://www.google.ca/search?q=contemporary+slaves&rlz=1C1DVJR_enUS784US784&oq=contemporary+slaves&aqs=chrome..69i57j0l5.6453j0j7&sourceid=chrome&ie=UTF-8). Ethics codes cannot be constantly revised to keep up with the rate of change in the world. The very idea of such lists remains an illusion. It is that illusion which is vulnerable.
Explanations

Ethics codes often attach explanations as to why or how individuals or groups are disadvantaged. These explanations are derived from reading historical texts, even when that reading misconstrues the history of a group. There is no agreed-upon “correct” reading of history, and certainly the voices of the “vulnerable” groups are absent or silent. Lacking historical veracity, we invent motives or conditions that have given rise to “vulnerability.” Moreover, a sociologist or a historian reading such a list would be very tempted to study those “historically disadvantaged” groups. The exclusionary practices in research created by such a list would, by themselves, prove to be a detriment to progress in our understanding of the world. What is one to make of the category of “women” as vulnerable, a potentially research-limiting practice? Medical research practitioners may well consider women’s vulnerability within the context of drug trials, but it makes no sense to transfer such caution generically to the rest of the research world. In any case, the suspicion remains that the concept as applied to clinical trials is more related to a “risk” assessment.
The most commonly used phrasings include such explanations as “historically considered vulnerable” and “have, at times, been treated unfairly and inequitably in research, or have been excluded from research,” or “may continually be sought as research subjects due to ready availability and administrative convenience; have a dependent status and, frequently, compromised capacity for free consent; are easy to manipulate as a result of their illness or socioeconomic conditions.” Relatedly, one also finds substantiation of the vulnerability of “subordinate members of hierarchies or relationships.” Another common explanation underscores the fact that “voluntary consent may be compromised by expectations of benefit or repercussions from superiors.” There are a few other circumstances that research ethics codes have outlined as giving rise to vulnerability, including “various forms of discomfort and stress.” Within the context of this chapter, it is not possible to render the whole panoply of explanations that Bracken-Roche et al. (2017) found in their survey of the major research ethics codes in the world, but they all point to areas that researchers in the social sciences and in history have a great interest in researching.
Toward a New Framework of Defining and Operationalizing Vulnerability in Research Ethics Codes

There are those who advocate mindfulness about vulnerability as an essential ingredient of research ethics codes. Among them, C.H. Ho (2017) believes that the concept of vulnerability “could ... serve as a robust analytic for the evaluation of situational and pathogenic (or structural) contributions to susceptibilities to harm.” Awareness of the concept, he further mentions, “could also provide better guidance on how to differentiate among varying types and degrees of harm, rather than merely noting their presence [in the research].” The specter of vulnerability becomes
especially salient when we refer to the kinds of harm incurred when a research participant is tricked into giving consent or is being exploited. The task of defining, operationalizing, and dismantling vulnerability is vast, larger than I had anticipated. I offer the following nine suggestions:

1. One should recognize that the concept of vulnerability in research ethics codes reflects anachronistic and archaic views. There is an urgent need to rid the ethics codes of their embedded paternalism. The deductive conceptions of vulnerability are, to a large extent, ideas cooked up from stereotypical and judgmental views. Without such a change, research ethics codes will leave the term languishing in the field of stereotypes. Researchers need either to arrive at inductively conceived ideas about vulnerability or to find them in pre-existing literature and research. As it stands today, the medically defined concept of vulnerability has leaked into other sciences. The ethical principles about vulnerability that sustain researchers in the social sciences, for example, should be entirely divorced from the medical research ethics framework.

2. Ethics codes need to reclaim disability from the taken-for-granted ableist practices of research. There is always the danger of identifying individuals or groups as vulnerable a priori, rather than believing it possible to fully understand those individuals and groups and seeking to understand any potential vulnerability from the perspective of the research participants themselves. Lester and Nusbaum (2017) represent a growing body of scholarly literature that is reclaiming disability from the taken-for-granted ableist practices of even qualitative research.
As editors of a special issue of Qualitative Inquiry (2017/2018), Lester and Nusbaum have started to consider "new possibilities for the place and practice of critical qualitative methodologies and methods in research involving disabled people." Some scholars (e.g., Lange et al. 2013), however, realizing that ethics code documents tend to rely on enumerating "vulnerable groups rather than an analysis of the features that make such groups vulnerable," advocate the view that we need to open up the concept of vulnerability. They refer to the work of Luna (2009), who suggests that vulnerability is irreducibly contextual and that Institutional Review Boards (Research Ethics Committees) can only identify vulnerable participants by carefully examining the details of the proposed research. Still, as recently as 2016, the point of departure of any discussion about vulnerability was the idea of "protecting" the vulnerable as one of the chief goals of policy (Zagorac 2016). That protection slips too easily into paternalism.

3. Research ethics codes should refrain from using the notion of "vulnerability" as a catch-all category of principles. The social world, in its history and contemporary features, is far too complex to permit categorical statements about which individuals or groups are subject to being vulnerable. Two things are needed. Above all, researchers need to approach research participants with such dignity that any notion of vulnerability fades into the background. In research in the social sciences and humanities, in contrast to medical research ethics, the principle of human dignity reigns supreme. To avoid the
32 "Vulnerability" as a Concept Captive in Its Own Prison
arbitrary use of informed consent for research that might cause harm, the TCPS provides the idea that research must proceed along a different path, namely, the path of preserving human dignity. The idea of human dignity is significantly different from "autonomy." Autonomy perpetuates the belief that research is about subscribing to the independence of individuals. The life of every individual is, however, characterized by a web of interrelations; in that sense, no individual is functionally autonomous.

4. We need a far narrower criterion for vulnerability. This criterion should stress only the specific vulnerability that individuals and/or social groups would suffer as a result of a specific program of research. This approach would open a door through which individuals and groups traditionally considered vulnerable could exit, while permitting the entry of others whose vulnerability would be more evident than ever before. By retaining the traditional placement of vulnerability in research ethics codes, "vulnerability" has become a captive in its own prison. What matters, it seems, is not whether an individual or a group is vulnerable in the objective or even the subjective sense of the concept, but whether one's research makes research participants more vulnerable to harm, or not. For example, Iphofen proffers the idea that "we should not be asking: 'Are these subjects vulnerable?' Instead the question should be: 'Are these subjects made more vulnerable than they might ordinarily be in their daily lives as a result of their participation in this research?'" (2009: 108). Bracken-Roche et al.
(2017: 13) "hypothesized, based on the literature ..., that research ethics guidance on vulnerability should include at least the following basic content: (1) a definition of vulnerability, (2) a discussion of the sources or circumstances from which vulnerability can arise and/or identification of groups likely to be in those circumstances, [and] (3) an explanation of the ethical justification of the concept to aid in its application." The main problem I am trying to identify is the potential failure to take into account the diversity of those research participants whose lives are noted within and outside their social circles (see, e.g., van den Hoonaard 2008).

5. New works with such subtitles as "Methods for Rethinking an Ableist World" (see, e.g., Berger and Lorenz 2015) express an ongoing frustration with traditional research conceptions about people with vulnerabilities or disabilities. One should advocate a sort of emancipatory research which highlights collaboration with research subjects and which urges us "to be on guard against exploiting informants for the purpose of professional aggrandizement and to engage in a process of ongoing self-reflection" to clear ourselves of research biases. Prince (2016: 5–6) reminds us that many social care and human service agencies are undergoing deliberate changes to adopt new values and innovative approaches, shifting from an ethic of benevolence and compassion toward a "philosophy of self-determination and person-centred supports." It is time that research ethics codes integrate these new concepts of ethics based on a philosophy of self-determination and person-centered support, step away from a priori definitions of vulnerability, and distance themselves from "an ethic of benevolence and compassion."
6. One should be cognizant that the role, function, or aim of anonymization might play out quite differently in the research field. There are the obvious ways of deflecting potential harm by using anonymity, discontinuous identity, and other means familiar to researchers in the social sciences. These means become stringent when the protection of research participants is paramount. However, the researcher must also consider the unintended harm caused by praise resulting in envy or jealousy. Naturally, researchers cannot be held accountable for the reactions of others from the same social setting. One should also consider that while the need for anonymity is important, it cannot always be guaranteed in small communities or rural locales (van den Hoonaard 2003). Moreover, in some settings, research participants may well want to broadcast their achievements or histories for the greater good. What also comes to mind is the harm that research can create through the researcher's own misinterpretation of events.

7. Research ethics codes are typically preoccupied with individuals and their capacity to give informed consent. Any lack of capacity is seen as a threat to the exercise of "autonomy." As a reviewer of this chapter (who is well versed in dealing with physical vulnerability) mentioned, the "threats of harm or of exploitation seem to be covered indirectly through the notion that a fully-informed and freely consenting participant can make decisions that minimize the risk of unacceptable harm or of exploitation." One could argue that this approach "is a fiction that potentially allows a researcher to carry out research that runs the risk of unacceptable harm to participants on the grounds that they have consented to it" (Butler-Hallett 2018).

8. There is currently no excuse for members of research ethics committees not to be apprised of the new discourse on vulnerability.
My own bibliography of some 900 entries on research ethics review contains some 50 reports and articles about the experience of researchers working with ethics and vulnerable populations. It struck me while writing The Seduction of Ethics (van den Hoonaard 2011) that few members of research ethics committees are familiar with the research pertaining to the issues they are asked to decide on. It is sad to see all that research not being relied on to help research ethics committees think more thoroughly and helpfully about the decisions they are required to render. Rather than feeling overwhelmed by this abundance of empirical research literature, an ethics committee member can always home in on the smaller number of works within the literature that deal with specific vulnerabilities.

9. Dan Goodley, a professor at the University of Sheffield in the UK, has devoted multiple books (e.g., 2014) and articles (see, e.g., Goodley et al. 2014) to challenging (and, in a sense, reversing) the relationship between the academy and the disability movement. Disability researchers based in the academic world who align themselves with the social model of disability face contradictory aims and values in attempting to understand the world of the vulnerable. Governmental service provisions and government bureaucracies, litigation, tribunal hearings, government lobbying, and artistic and cultural politics each reshape the definition of vulnerability according to their own needs.
Conclusion

These nine proposed avenues for improving the ethics policies of research involving people with "vulnerabilities" (and disabilities) not only demonstrate the need for research ethics policies to make a major turnaround in their descriptions, toward a more contemporary and viable understanding of vulnerability, but also constitute a clarion call for each member of any research ethics committee to read the abundance of research materials on the topic. No one wants a research proposal itself to be vulnerable to unfounded speculation and stereotypical judgments by committee members. Only then will the concept of vulnerability as applied to research participants be released from its own prison, one that has constrained the concept in an outworn and dated perspective.
References

Bankoff G (2001) Rendering the world unsafe: 'vulnerability' as western discourse. Disasters 25(1):19–35
Berger RJ, Lorenz LS (2015) Disability and qualitative inquiry: methods for rethinking an ableist world. Routledge, Abingdon
Bracken-Roche D, Bell E, Macdonald ME, Racine E (2017) The concept of 'vulnerability' in research ethics: an in-depth analysis of policies and guidelines. Health Res Policy Syst. https://doi.org/10.1186/s12961-016-0164-6
Butler-Hallett M (2018) Email message from St. John's, Newfoundland to W.C. van den Hoonaard, Douglas, New Brunswick, 18 October
Canadian Institutes of Health Research, Natural Sciences and Engineering Research Council of Canada, and Social Sciences and Humanities Research Council of Canada (2010) Tri-council policy statement: ethical conduct for research involving humans, vol II. Panel on Research Ethics, Ottawa
Goodley D (2014) Dis/ability studies: theorising disablism and ableism. Routledge, London
Goodley D, Lawthom R, Runswick-Cole K (2014) Dis/ability and austerity: beyond work and slow death. Disabil Soc 29(6):980–984
Ho CW (2017) CIOMS guidelines remain conservative about vulnerability and social justice. Indian J Med Ethics 2(3):175–179. https://doi.org/10.20529/IJME.2017.061
Iphofen R (2009) Ethical decision-making in social research: a practical guide. Palgrave Macmillan, Basingstoke
Korol S (2018) Male patients with ankylosing spondylitis respond better to biologics, study finds. Becker's Spine Review, 10 October. https://www.beckersspine.com/biologics/item/42985-male-patients-with-ankylosing-spondylitis-respond-better-to-biologics-study-finds.html. Accessed 18 Oct 2018
Lange MM, Rogers W, Dodds S (2013) Vulnerability in research ethics: a way forward. Bioethics. https://doi.org/10.1111/bioe.12032
Lester JN, Nusbaum EA (2017) "Reclaiming" disability in critical qualitative research: introduction to the special issue. Qual Inq 24(1):3–7. https://journals.sagepub.com/toc/qixa/24/1
Levine C, Faden R, Grady C, Hammerschmidt D, Eckenwiler L, Sugarman J (2004) The limitations of 'vulnerability' as a protection for human research participants. Am J Bioeth 4(3):44–49
Luna F (2009) Elucidating the concept of vulnerability: layers not labels. Int J Fem Approaches Bioeth 2(1):121–139
Panel on Research Ethics (2006) Proportionate approach to research ethics review in the TCPS: proposed textual changes for the concept of vulnerability in the TCPS. Ottawa. http://www.pre.ethics.gc.ca/eng/archives/policy-politique/reports-rapports/cv-nv/. Accessed 21 Sept 2018
Prince MJ (2016) Reconsidering knowledge and power: reflections on disability communities and disability studies in Canada. Can J Disabil Stud 5(2):2–30
van den Hoonaard WC (2003) Is anonymity an artefact in ethnographic research? J Acad Ethics 1(2):141–151
van den Hoonaard WC (2008) Re-imagining the 'subject:' conceptual and ethical considerations on the participant in health research. Cien Saude Colet 13(2):371–379
van den Hoonaard WC (2011) The seduction of ethics: transforming the social sciences. University of Toronto Press, Toronto
van den Hoonaard WC (2018) The vulnerability of vulnerability: why social science researchers should abandon the doctrine of vulnerability. In: Iphofen R, Tolich M (eds) The SAGE handbook of qualitative research ethics. Sage, London
Zagorac I (2016) How should we treat the vulnerable? Qualitative study of authoritative ethics documents. J Health Care Poor Underserved 27:1655–1671
33 Researcher Emotional Safety as Ethics in Practice: Why Professional Supervision Should Augment PhD Candidates' Academic Supervision

Martin Tolich, Emma Tumilty, Louisa Choe, Bryndl Hohmann-Marriott, and Nikki Fahey

Contents
Introduction
Background
Expecting the Unexpected
Professional Supervision for PhD Candidates
Discussion
References
Abstract
Guillemin and Gillam's concept of ethics in practice takes it as a given that unexpected ethical dilemmas emerge within qualitative research's iterative frame, reconfiguring how researchers manage potential harm to participants. Not so widely acknowledged is the threat that the emergence of ethical dilemmas creates for researchers' own physical and emotional safety,
M. Tolich (*) University of Otago, Dunedin, New Zealand e-mail: [email protected] E. Tumilty Institute for Translational Sciences, University of Texas Medical Branch at Galveston, Galveston, TX, USA e-mail: [email protected] L. Choe · B. Hohmann-Marriott Sociology, University of Otago, Dunedin, New Zealand e-mail: [email protected]; [email protected] N. Fahey Graduate Research School, University of Otago, Dunedin, New Zealand e-mail: [email protected] © Springer Nature Switzerland AG 2020 R. Iphofen (ed.), Handbook of Research Ethics and Scientific Integrity, https://doi.org/10.1007/978-3-030-16759-2_26
especially that of PhD candidates. This chapter explores a PhD student's emotional safety when her research design unfolded unexpectedly, leaving her to ask the question, "What just happened?" Her two PhD supervisors, a bioethicist, and a health professional provide an answer and a solution that is generalizable to qualitative research PhD students in general. A review of the literature finds this situation remarkably commonplace, yet academic supervisors are either oblivious to such situations or limited in what they can offer students. The professional supervision offered to this PhD student was an example of best practice, allowing her to reveal her vulnerabilities in a neutral setting, outside a normal academic supervision hierarchy that routinely inhibits these disclosures.

Keywords
Ethics in practice · Postgraduate research · Researcher safety · Qualitative research
Introduction

Qualitative research comes in many forms. Sometimes it involves discrete one-time interviews with participants, and sometimes it involves multiple interviews with participants over time. In these interviews, participants are asked to relay their experiences of something, and researchers recognize that this recounting may bring up negative or difficult emotions, offering additional support to participants as needed (e.g., referrals to agencies). When, in social science research, researchers conduct multiple interviews with participants, this allows them to grasp the past as well as to learn how the participants deal with the present. In ongoing interviews, researchers themselves are, to a greater or lesser degree, drawn into the participant's life. This can become disruptive when interviews that afford entry into participants' lives develop into an ethnography that includes participant observation. Here the research ethics considerations escalate: the researcher records not only the participants' voices but also the researcher's own thoughts and emotions in relation to the participants. This process is initially unfiltered. The rapidly evolving situation is not always predicted, or predictable, by researchers. The researcher's formal ethics review application to their (usually university) research ethics committee may be circumscribed. That is not unexpected. Qualitative researchers routinely find their ethical considerations obscured in the formal procedural ethics review system designed mainly for biomedical researchers (Bosk and De Vries 2004). While researchers are able to describe what the research is about and what ethical issues are relevant, they cannot predict with any certainty how the research might develop new ethical considerations in the field (Tolich and Fitzgerald 2006).
Guillemin and Gillam's (2004) demarcation between procedural ethics (or formal ethics review) and ethics in practice predicts with some certainty that the iterative and emergent research design that accompanies qualitative research produces an
ethics in practice assessment of ethical considerations by qualitative researchers. This is a "known unknown" expectation. Punch points out that many ethical problems "have to be resolved situationally and even spontaneously" (1994, p. 84). The focus of these ethical considerations is on the safety of the research participants. This chapter expands that focus to how unexpected ethics in practice creates situations that challenge the safety of the researcher. Thus, the focus of this chapter is less on the ethical considerations for the research participant and more on the researcher, for whom ethics in practice can manifest unexpectedly, endangering researcher safety and well-being. To bring these issues to life, what follows presents the case of a single ethnographer whose research followed all the prescribed and proscribed steps, as if research ethics followed a well-worn path. The formal ethics review application focused on the experience of adolescent girls whose unstable housing disrupted their lives. It included referral plans for participant safety. These girls experienced unstable housing first-hand, but not in a synchronic form; their experience was diachronic, meaning that the researcher conducted multiple interviews and observed the girls' experience of unstable housing first-hand. The researcher's personal research philosophy, and therefore her approach to her participants, enmeshed her in a situation which itself created vulnerability for her. Not only were the ethical issues in this study of unstable housing emergent for the researcher, but so too were threats to her physical, emotional, and professional safety as she immersed herself in her participants' world by meeting them weekly to window shop or drink hot chocolate at McDonald's.
What is new in this chapter is how it highlights ethics in practice viewed not from the participants' perspective but focused on the researcher's vulnerability, beyond the physical and obviously emotional vulnerabilities already described elsewhere (Choe forthcoming). This vulnerability reveals invisible boundaries between participants and researcher, demonstrating how procedural ethics is insufficient in addressing the philosophies and practices of social science researchers in the field. Training for postgraduate students conducted prior to entering the field may also be insufficient (Pollard 2009). For this researcher, academic supervision bolstered by active professional supervision provided the ability to create new boundaries and overcome unexpected panics. Academic supervisors can certainly help if academic and personal crises occur, but in their Eleven Practices of Effective Postgraduate Supervisors, James and Baldwin (1999, p. 34) advise:

Don't try to act as a counsellor. To do so is exhausting and dangerous if you are not trained in counselling skills. . . . your role is to be generally and genuinely supportive. Serious problems require expert help, and the University offers a good deal of this. Know where to refer your students, and, if necessary, help them to make the contact and even the appointment.
Social workers often have clinical supervision (Howe and Gray 2013), that is, regular meetings with a peer external to their chain of command with whom they can debrief and discuss the emotions arising from their work. The professional
supervision the PhD student received in this case study was outsourced to a health professional within the university whose task was to support PhD students during their studies. This role had been in existence for only 6 months, and we discuss it to highlight the importance of such a service as best practice for PhD students, augmenting academic supervision. This chapter brings together a bioethics professional, the PhD student, her two academic supervisors, and the health professional employed by the University of Otago. Together, they examine a case study the postgraduate student provides, capturing the harm caused when good intentions led to unravelling boundaries that created problems for the researcher both emotionally and professionally. The section "Background" begins with a brief overview of the original research question, followed by a narrative in which the PhD student describes how the situation evolved and eventually threatened her safety. The section "Expecting the Unexpected" positions this case within a wider literature that suggests this case study is not unique. The section "Professional Supervision for PhD Candidates" explores the issues that arose and how they were managed differently by academic supervision and professional supervision, ending with recommendations for others' future practice that enhance researcher safety. Although this chapter is a "case study" related to a specific discipline (social science/social research) and a related methodology (qualitative research/ethnography), the issues raised should be of concern for the physical and emotional safety of any researcher, and the proposed supportive mechanism could work equally well in most other disciplines and methodologies.
Background

The origin of the PhD student's research question lay in her work as a bank teller in a low socioeconomic suburb. The banking behaviors of poor or low-income customers were apparent – the poor paid more, not only in terms of bank service fees but also in terms of emotional costs and time. Once the monthly rental payments and other direct debit payments were taken out of their accounts, customers living on a government welfare benefit were often left with a small sum or nothing at all for other living expenses such as medical bills or food. As such, they were often overdrawn on their accounts and were charged hefty fees and interest for keeping their banking facility. The "private troubles" faced by poor and low-income customers at the bank were a "public issue" in a C. Wright Mills (1959) sense. The PhD student's reading of Desmond's Evicted: Poverty and Profit in the American City (2016), describing ground-level research on how eight families struggled with their lives after spending almost everything on rent, informed her enquiry. Desmond (2016) described families who paid more not only in monetary terms but also in terms of opportunity costs, security, time, and hassle; for them, there was no buffer against the poverty penalty. The student's research question asked if
"the poor paid more," especially within the domain of rental housing instability in New Zealand. Access to the adolescent girls was gained by volunteering as a mentor at youth groups in various parts of New Zealand. The mentoring opened doors into the life worlds of girls aged 15–18 years. Friendships were formed, and many of the girls became informants and collaborators in the research. A group of 12 senior girls regularly shared their experiences of unstable housing with the researcher. Some of these themes were previously written about in a book chapter entitled "Faking Friendship" (Choe forthcoming), discussing friendship as a methodology (Tillmann-Healy 2003). This friendship strategy ran the risk of being inauthentic and one-sided during the research process (Duncombe and Jessop 2002). The potentially inauthentic nature of the research and the fear that it was extractive are central to this chapter. Together they produced a sense of guilt culminating in an affective response not predicted by the researcher, her supervisors, or the ethics review system. In particular, the experience with one girl, who we are calling Hannah, exacerbated this feeling of guilt in unexpected ways.

Louisa's Narrative: What Just Happened?

I went into the research hoping to learn about the girls' past experiences of moving houses, but I failed to realize that if they were always moving then they were probably going to continue moving during the course of this 3-year study. I was not prepared for how this instability would embroil me, and I had to adapt and learn as the research unfolded. I became part of this ethnography, but I never consciously consented to this level of involvement. Previously, I had enrolled in an ethics class, learning from mistakes made by other researchers, but nothing prepared me for minimizing harm to myself and the need to establish boundaries to protect myself physically, emotionally, and professionally.
The initial part of my study focused on the girls' experiences of moving houses. I built rapport and trust, allowing the girls to share their lived experience with me. Outside formal interviews, as is the case in small-city research, I would "bump" into the girls or meet them by chance in public places, such as the supermarket or outside McDonald's. They would update me on their recent housing moves. However, there was a period, after I had finished transcribing my second round of interviews, when the girls felt that I had "neglected" them because I had not been replying to their messages. In their words, "Why are you snobbing us?" This exchange made me more conscious about how I maintained my relationship with the girls. I did not want the research to be extractive. I wanted to make sure that they did not feel as though rapport was built only because I wanted information from them. I wanted an ongoing rapport in which I could be supportive as a mentor with basic life skills, such as writing a job application. At first I created and kept my boundaries with the girls. One girl said, "If you want to know about how we live you can come for a sleepover." She added that another participant frequently stayed over at her place when she ran away from home. As a novice researcher, I thought, "what's a better way to know about something than to experience it?" However, when I asked my supervisors, they raised the ethical issues, highlighting my own physical safety. That was my first revelation: was my research becoming too personal? Boundaries were important, but they could easily be undermined. I wanted to be close, but not too close. Recently, one of my participants, Hannah, ran away from a violent situation in her home and found herself homeless. Hannah reached out to me in a text stating that "if I wasn't desperate, or that if it wasn't my last resort, I wouldn't contact you." It was then I realized that I had become part of these unstable housing stories. That's when I panicked.
I did not know how to respond. I read Hannah's message as serious. I tried calling her immediately, but her phone was turned off or out of battery. I took a screenshot of the message and sent it off in an email to my two supervisors. I was desperate. Deep down, I knew that Hannah could not stay at my house, as I had concerns about how this would affect my family, especially as her request stemmed from a threat of violence from her uncle. After Hannah charged her phone, she contacted me, telling me that if she had stayed one more night at home she could have died, and that she was in danger. I went straight into "solution mode." Again, I knew her staying at my home was not possible, so we started brainstorming options. My main concern was preventing her from being homeless that night. Hannah's plan was to sleep at the 24/7 McDonald's after the public library closed at 8 pm. Focusing on solutions, I contacted Women's Refuge and found she was ineligible. We then went to the city's Night Shelter; their admission policy was more straightforward but short term, allowing a maximum of 5 nights in any 3 months. After a few basic questions, they gave her dinner and a bed. We met the next day for lunch, and that's when Hannah opened up more about what had happened. The problem was triggered by a lack of money. Both her parents were in prison, and she was living under the care of her uncle, who had not been doing well financially. She normally paid a set amount for board to her uncle, but that week he wanted more. Her uncle threatened to hit her and to kick her out of the house if she was not going to pay more. During lunch, we sought a solution, attempting to phone [the government's] Work and Income office to make an appointment for an emergency benefit. However, in a public place with background noise, it was challenging to make that phone call, especially when talking to an automated phone system. We failed.
Instead, we walked into the Work and Income office to schedule an appointment, but one could not be arranged within 5 days, putting it outside the Night Shelter timeframe. The first thing Work and Income asked for was her ID, but she did not have it with her because she had left home in a rush. It was frustrating trying to secure an appointment, and witnessing this time poverty firsthand was insightful. I could sense Hannah’s anxiety about the appointment process. Having to witness her navigate the bureaucratic system, with its frustrations and disappointments, left me feeling guilty. I was not physically threatened, but I had not prepared myself to be part of this story of unstable housing and its consequences. My response was emotional and guilt-ridden. This was not the first time Hannah had exposed my boundaries. The initial “alarm bells” were some months prior, when I went shopping with a number of participants and Hannah asked me to buy her some headphones. I said no; I did not have the money for that. Hannah looked disappointed, but she said she “got it.” When we left the store, I noticed that the two other participants were really angry and upset. Initially, I failed to pick up on what had happened until I saw the headphones in Hannah’s hands. The headphones were the same pair she had asked me to purchase. I was shocked. When I confronted her about how she had the headphones, she explained that she had already had them before, and that she had wanted me to purchase another set. I felt compromised and embroiled as an “accomplice” in a shoplifting incident. Now, months later, when I said no to taking Hannah to my house, I questioned whether my actions had contributed to her homelessness. It is difficult to maintain a relationship with this participant and keep boundaries in place. That’s why I panicked. I was blindsided. I feared she was going to be homeless, and the danger of living on the street made saying no harder in this situation. Part of my emotion was a feeling of academic guilt.
I always remember Kevin Carter taking a photograph of a starving child in Africa with a vulture perched behind her. He went on to win the 1994 Pulitzer Prize, but what happened to the child is unknown. My fear is that social research could end up like that: a thesis written through the generosity of the girls sharing their stories, but the girls’ lives remaining the same (see Stacey 1988).
33 Researcher Emotional Safety as Ethics in Practice
That situation disturbed me, highlighting how inequality can be perpetuated within research settings. For example, the girls ended up homeless, but after listening to their stories I go back to transcribing interviews within the safety of my own home. I find this reality uncomfortable. I resolved many of these situations through the support of my supervisors and their debriefing sessions. These allowed me to articulate and revisit my priorities. If I had said yes to sleeping over at a participant’s house, or to allowing Hannah to spend the night in my own house, I would have done more harm than good. Learning that by saying no I was minimizing harm for my participants helped me to reconcile my decision. In the end, I do not regret saying no. But this does nothing to curtail the tears of guilt. I was grateful for the opportunity to speak to a health professional to place the situation into perspective. This perspective is that the benefit of the research stems from talking to my participant-collaborators; what they really wanted was for more people to be aware of their stories. I too want their voices to be heard. The more people know about their experiences and hear their stories, the more they might reflect on how unstable housing is affecting those around them. What I have learned from these boundary incidents is that being reflexive is not just about being ethical towards your participants but also towards yourself as a researcher. The duty of care goes both ways. I have now revisited the resource sheet in my participant information sheet. Initially, that’s where I made my blunder. I realize that the information sheet was designed as a resource within an “interview” setting: for example, if during the course of an interview the participant feels uncomfortable, there is a list of services where they can find people to talk to. But my study had evolved into an ethnographic one. The resource sheet needed to be relevant to the whole process.
I now include other resources, such as Women’s Refuge, Night Shelter, and Rape Crisis, on top of the mental health support helplines. The resource sheet also needs to be appropriate to the young person’s literacy and numeracy; they wanted something more practical. This resource sheet is for me, too, as it protects both me and my participants in times of crisis.
Expecting the Unexpected

In Louisa’s narrative, we see multiple vulnerabilities that she confronts during the course of her work: the potential for physical harm in staying at a participant’s house, the emotional stress of dealing with a participant in crisis, the professional jeopardy of being with participants engaging in illegal behaviors, and the reconciling of one’s identity within a situation of unequal power relations. The emotional vulnerability of a researcher in the field stems in part from the unknown ways she may relate to her participants. When designing research and explaining it to ethics committees for approval, the relational aspects of the work are never fully interrogated. How the researcher might approach a participant, how participants will be invited and consent to participate, how the researcher will communicate with them, and how the researcher will ensure the participants’ physical and emotional safety along with their own physical safety are the general aspects ethics committees ask to be described in social research work. But the questions of what your attitude as a researcher will be towards your participants, what the necessary boundaries are, how they will be maintained, and how you might feel spending time with
these participants in the field are never raised. As social science scholars have called for greater relational practice with participants to avoid exploitation and accusations of being a “tarmac professor” (one who explores the lives of others by metaphorically, and sometimes physically, “flying in and out” of a given situation), the idea of what boundaries are and how they are maintained has become complex and, for emergent researchers especially, fraught. What is the purpose of our research, and how do we conceptualize exploitation in undertaking it? This is the first step to understanding how we define our roles in the field with our participants and what boundaries should be established. The 2000 Safety in Social Research guidelines (Craig et al. 2000) cover some aspects raised in Louisa’s case. In response to the growing awareness of the risks involved in social research, the guideline group called for a wider discussion of safety aspects for social researchers and the need to develop a code of practice for safety that increased awareness of potential risks (Craig et al. 2000). These included:

• Risk of physical threat or abuse.
• Risk of psychological trauma as the result of actual or threatened violence.
• The risk of being in a compromising situation leading to improper behavior.
• Increased exposure to general risks of everyday life and social interaction from travel, infectious illness, or accident.
The new code of practice recognized that risk was carried on both sides of a research interaction, by both the researcher and the researched. Social researchers were vulnerable; they often work in private situations, and the topics of enquiry may be sensitive and invoke strong feelings among those participating (Lee 1993). New staff can be forewarned of some of these risks in induction training, yet even with this training unexpected or unpleasant situations may arise for researchers who are following the guidelines. The effects of physical or emotional violence, or the threat of violence, may be traumatic (Craig et al. 2000). Gender is highlighted as a factor in these safety guidelines. Craig et al. (2000) claim:

Lone female researchers are, in general, more vulnerable than lone males. Even when the threat of physical violence is not the issue, certainly more orthodox cultures may find woman researchers unacceptable and react with hostility.
Having identified these gendered risks, the social research guidelines point out that some risk is inherent in fieldwork:

Researchers are often left to rely on their own judgement or intuition in arranging and conducting an interview. The quality of much social research depends on establishing the appropriate distance between the researcher and respondent – a distance neither over familiar nor too detached. It may not always be easy even with prior briefing, to know what that distance should be. (Craig et al. 2000)
Hubbard et al. (2001) suggest that one of the reasons the emotional impact on a researcher is low on the list of concerns is the assumption that we
tend to “screen ourselves out” of projects that we consider personal danger areas (2001, p. 120). Equally, Hubbard et al. say researchers may not always anticipate emotional challenges, and the threat of emotional spillover goes beyond the research itself. Thus, more work is needed to understand the potential for emotional threats and to discuss their amelioration. Lalor et al. (2006) discuss the emotional vulnerability not only of researchers but also of transcribers and supervisors when dealing with research on sensitive or traumatic issues, while Woodby et al. (2011) discuss the emotions that can arise when coding transcripts relaying distressing content. There is an emotional vulnerability in being exposed to the difficult stories of others: how do we plan for, and then deal with, our responses to hearing the suffering or sadness of our participants (see Granholm and Svedmark 2018)? Researchers may be more aware of these issues than ethics committees, which generally ask researchers how they will protect participants’ emotional safety but rarely ask about researchers’ plans for their own emotional well-being. In part, we could speculate that this is not only, as Hubbard et al. (2001) described, because of a presumption of “screening out,” but also because ethics committees that grew out of a biomedical tradition (Israel and Hay 2006) see the researcher as a removed observer. They may fail to grasp the relational nature of some qualitative work with participants. How are researchers, especially emergent researchers, meant to conceive of their vulnerabilities in research? Pollard’s (2009) study of 16 anthropology PhD students from three universities who had conducted fieldwork found they were not prepared for the potential risks they experienced. From her thematic analysis, she identified the following themes in these students’ experience.
They felt: Alone, ashamed, bereaved, betrayed, depressed, desperate, disappointed, disturbed, embarrassed, fearful, frustrated, guilty, harassed, homeless, paranoid, regretful, silenced, stressed, trapped, uncomfortable, unprepared, unsupported, and unwell.
Pollard’s (2009) conclusion was that fieldwork training courses for anthropology students may be inadequate. Supervisors cannot and do not always provide appropriate support. Her suggestion is that PhD students should be prepared for a wide range of difficulties in the field, and that a significant number may face difficulties they never anticipated. The consensus among the students Pollard (2009) interviewed was that the pre-fieldwork training course provided by the department was not satisfactory, with little substantive preparation for novice researchers entering the field. The 16 respondents’ main criticism was that the training was too theoretical and not practical enough. Yet no training, however practical it may be, can ever really prepare you for fieldwork; you only learn how to do fieldwork when you are doing fieldwork. Adding to the mystery of what fieldwork actually is was the claim by the students that they were not privy to reading the field notes of senior anthropologists (Pollard 2009, p. 29).
One impediment to these PhD students getting support was their expectation that negative experiences were part and parcel of their education. For example, Pollard found the students feared that a supervisor would see any claims of harm
as a weakness on their part, a sign that they could not handle the rigors of being in the field. Thus, no matter how well they do their jobs, student anthropologists will always be reticent about revealing difficulty in fieldwork, because they are worried that this will damage their fragile, emerging reputations as academic professionals (Pollard 2009, p. 37). To alleviate this concern, Pollard (2009) recommended a mentoring system, in which former fieldwork students could act as mentors for pre-fieldwork students. This mentoring system was premised on the idea that PhD students need support from people who understand ethnographic fieldwork but who have as little power as possible over their professional careers. The type of professional supervision offered by a health professional, discussed below, is similar to but significantly different from a mentoring program: those offering professional supervision are trained professionals, and they are not part of the academic supervision team. Hanson and Richards’ (2017) study of researcher safety focuses on women who, as PhD students, experienced unwanted sexual advances in the field. Similar to Pollard, they found that PhD students felt they needed to deal with situations themselves, making ethnography a process of trial by fire, as if withstanding the difficulties of conducting ethnographic research is what makes one a worthwhile scholar (Hanson and Richards 2017, p. 8). One student reported that she felt she was supposed to develop these really amazing, immense intimate relationships with informants (Hanson and Richards 2017). Intimacy may create access to high-quality data, but it also opens the researcher up to unwanted attention or advances, followed by blame and critique when these occur (Greenhill 2007). Intimacy and boundaries are set up as a dichotomy: how can these required elements of research be reconciled as symbiotic?
Experienced researchers have developed strategies in the field to elicit cooperation while maintaining professional roles. They need to share this experience with emerging researchers, but this kind of sharing is largely haphazard insofar as it depends on the choice of supervisors and their experience. As social researchers, it is particularly important that we share how we conduct research in the field not just methodologically but interpersonally. What makes any social science PhD difficult is that it is set up on an individual basis: collaborative research is virtually impossible, and there is very little training in how anyone should respond to dangerous situations in the field. We suggest universities provide professional supervision for PhD students. Some scholars have written about the emotional vulnerability of working with marginalized and vulnerable populations. Similar to Louisa’s case, others describe the distress that can arise when working with vulnerable populations (Davison 2004; Law 2016), especially those suffering economic hardship. Law (2016) specifically describes the importance of understanding one’s own privileged perspective in relation to the participants, and the greater context of systems that create power dynamics historically, politically, and socially, especially in research. Understanding these dynamics is important in ensuring that researchers pay their participants the requisite respect both personally and epistemologically (Law 2016).
This chapter adds to an ongoing conversation by discussing how we can sustain the safety of emerging researchers by blending academic and professional supervision. Had the 16 students in Pollard’s study had access to professional supervision, many of their problems could have been addressed. In section “Professional Supervision for PhD Candidates,” the Graduate Well-being Coach at the University of Otago explains her emerging role.
Professional Supervision for PhD Candidates

My position as the Graduate Well-being Coach at the University of Otago was established in 2018 and is managed by the Graduate Research School. I am a registered occupational therapist with experience working in mental health settings. The initial thinking was that I would coach PhD students who were struggling to be productive, supporting them to look after their well-being while achieving their goal of successfully completing their postgraduate studies, but as I visited the university support services, a need within the wider postgraduate community was identified. Graduate Well-being Coaching is for any postgraduate student who needs some help with motivation, managing procrastination and imposter syndrome, keeping up with writing targets, concentration, managing the demands of their course of study, or any other issue related to keeping on track academically. It also focusses on general lifestyle balance and overall self-care and well-being, to see if any changes can be made to support academic performance. The service also acts as triage, directing students to other services and supports they may find helpful. The other part of my role provides guidance to staff, when requested, about supporting student well-being. The development of my role has been quite straightforward. What has been unexpected is that a number of students have sought out the service in the past 6 months reporting that they were struggling with the research experience itself, to the point where it had caused them significant distress. For these students, I had to draw upon my past experience of professionally supervising mental health clinicians to provide a different kind of support that their academic supervisors potentially could not, or were not in a position to, offer.
These meetings focussed more on debriefing and guiding the student through critically reflecting on the situation, their emotional response and resulting actions, and then supporting the student to consider how they may change their approach or reframe their thinking based on their new learning. As the service is confidential and I have a level of distance from the research and department, it is a safe space for students to openly explore the situation at an emotional level and receive specific practical advice regarding establishing healthy boundaries and risk management. As I grow into the role, I realize there is a real need to support students who face a crisis of confidence during their studies and to normalize their experiences of this. It is difficult to predict what challenges may occur in the field, or the emotional responses the students may experience. An increased openness about the potential for unexpected situations and reactions to occur may help to encourage students to
proactively seek support sooner. As part of this, it is useful for academic staff to know there is an alternative type of supervision I can offer, similar to professional supervision, that can complement the academic supervision they provide. Health professionals routinely receive clinical supervision as a safety net for both parties involved (clinician and service user). For researchers who are forming relationships with vulnerable populations, access to this alternative form of supervision is one practical way to enhance researcher safety and well-being.
Discussion

Many of Hanson and Richards’ (2017) researchers were blindsided by their experiences and, as a result, tried to ignore or set them aside. Louisa was also blindsided by her experiences with Hannah. She generated rich data, sharing the lived experience of the frustration of being homeless and attempting to gain an emergency benefit, but she felt guilty for a number of reasons. Was she there only because she was collecting data? At any time, she could go home to the comfort of her warm house and loving family. We see in Louisa’s story her genuine desire to be a certain kind of researcher, one who is not “extractive” when working with vulnerable and marginalized populations. What we also see is a transformation in her understanding of what “extractive” means in this context. Her initial feelings showed her struggling with her professional identity: what does going home to a warm house mean for the researcher working with those living in precarious situations? There is an internal struggle to define reciprocity, to establish that the relationship is not extractive and out of balance. This is accomplished in the end by describing the benefits of the research for the participants: their ability to have their story heard and the potential for the research to impact society. In moving forward and sharing practice for the purposes of conducting better and safer research for all parties involved, it is worth mapping out what issues arose, how they were dealt with, and how we can deal with them in future. In this case, Louisa was never physically threatened. The one instance where physical safety may have been an issue was when her participants asked whether she wanted to attend a sleepover. She dealt with this in discussion with her supervisors and politely relayed her refusal to the participants. Her supervisors were there to help her establish boundaries when none were apparent.
Her professional vulnerability came to the fore when she spent time with the participants outside of interviews. The shoplifting incident that may have occurred was unpredicted and arguably unpredictable. As social researchers working in the field, we need a dynamic adaptability to the ethical situations that arise, as discussed earlier in this chapter. How we train researchers to have this skill is somewhat undetermined. Even the establishment of roles and boundaries does not necessarily provide answers to such a scenario; the murkiness of the situation leaves no clear pathway for researcher action. Discussing in advance the actions that may end a researcher-participant relationship, such as the researcher becoming aware of illegal activity, may be advisable. It is clear that
one can explain this to a participant as necessary to keep both the participant and the researcher safe. The need to share the strategies one finds for dealing with such situations is apparent. Her emotional vulnerability was twofold. On the one hand, Louisa obviously felt sympathy, worry, and stress for a participant and her situation. Her empathy and relationship with the participant were such that she even thought through the possibility, suggested by the participant, of the participant staying in her home. Though she rightly decided, along with her supervisors, that this was inappropriate, the subsequent time spent helping the participant arrange an alternative was involved and caused emotional turmoil. One way of dealing with this was a practical one generated in a supervision session: together, Louisa and her supervisors developed a more robust resource in the form of a participant information sheet that mapped out for her participants the resources they could use in the case of homelessness. A second aspect of Louisa’s emotional vulnerability was her sense of self as a researcher in her relationship to the participant. As she describes it, she was guilt-ridden. She felt that her relationship with her participants, who struggled with the precariousness of their living situations, was one-sided: she reaped the benefits without providing them anything in return. This was especially so during the episode of handling the one participant’s homelessness in real time (rather than through interview). Understanding one’s role as researcher, the boundaries, and what reciprocity means within the researcher-participant relationship is crucial in ensuring both researcher and participant well-being within relationships that are by definition imbalanced. The idea of the disinterested, objective researcher is anathema to many qualitative researchers because of what it implies in relation to the participant (i.e., the participant as an object to be observed).
The drive to use the word “participant” over “human subject” was a move to recognize that people are active participants in research (Oakley 1981). Moves for more relational practice, however, still require researcher-participant boundaries. These boundaries are required to underpin one’s work, but also to protect all parties. What the outcome of one’s work is, what it means for participants (directly or indirectly), and what benefits participants gain from being in the research need to be thought through prior to the work and, as with much else in qualitative research, iteratively throughout the process. To understand one’s own position and feel comfortable about that position, it is necessary to have scoped out what it is one is doing, how one is doing it, and to what end. By understanding this, we can convey it to participants, and they can actively choose their informed participation. In dealing with these emotional struggles, Louisa was able to access a health professional on the staff of the University of Otago to discuss issues that arose from her thesis. Such services make sense for all PhD students as they progress through their research, but especially for students working with vulnerable and marginalized populations in the field. This chapter demonstrates the need for social science researchers conducting fieldwork to have access, if needed, to professional supervision in order to enhance researcher safety. It also demonstrates that we need to be more open and explicit about our vulnerabilities in the field and how we manage
them for shared learning and growth. These vulnerabilities and our responses to them, rather than exposing some failing on our part as researchers, show our ability to be ethically dynamic and our need for collaborative support and practices.
References

Bosk CL, De Vries RG (2004) Bureaucracies of mass deception: institutional review boards and the ethics of ethnographic research. Ann Am Acad Pol Soc Sci 595(1):249–263
Choe L (2019, forthcoming) How contradictory friendships disrupted my study of working-class girls’ residential instability. In: Billett P, Humphry J, Hart M (eds) Complexities of researching with young people. Routledge, London
Craig G, Corden A, Thornton P (2000) Safety in social research. Soc Res Update 29:68–72
Davison J (2004) Dilemmas in research: issues of vulnerability and disempowerment for the social worker/researcher. J Soc Work Pract 18(3):379–393
Desmond M (2016) Evicted: poverty and profit in the American city. Broadway Books, New York
Duncombe J, Jessop J (2002) ‘Doing rapport’ and the ethics of ‘faking friendship’. Sage, London, pp 107–122
Granholm C, Svedmark E (2018) Research that hurts them and me: ethical considerations when studying vulnerable populations online. In: Iphofen R, Tolich M (eds) The SAGE handbook of qualitative research ethics. Sage, London, pp 501–509
Greenhill P (2007) Epistemological reflections on sex and fieldwork. Resour Fem Res 32(3/4):87–99
Guillemin M, Gillam L (2004) Ethics, reflexivity, and “ethically important moments” in research. Qual Inq 10(2):261–280
Hanson R, Richards P (2017) Sexual harassment and the construction of ethnographic knowledge. Sociol Forum 32(3):587–609
Howe K, Gray I (2013) Effective supervision in social work. Sage, London
Hubbard G, Backett-Milburn K, Kemmer D (2001) Working with emotion: issues for the researcher in fieldwork and teamwork. Int J Soc Res Methodol 4(2):119–137
Israel M, Hay I (2006) Research ethics for social scientists. Sage, London
James R, Baldwin G (1999) Eleven practices of effective postgraduate supervisors. Centre for the Study of Higher Education and The School of Graduate Studies, University of Melbourne, Parkville
Lalor JG, Begley CM, Devane D (2006) Exploring painful experiences: impact of emotional narratives on members of a qualitative research team. J Adv Nurs 56(6):607–616
Law SF (2016) Unknowing researcher’s vulnerability: re-searching inequality on an uneven playing field. J Soc Polit Psychol 4(2):521–536
Lee RM (1993) Doing research on sensitive topics. Sage, London
Mills CW (1959/1976) The sociological imagination. Oxford University Press, New York
Oakley A (1981) Interviewing women: a contradiction in terms. In: Roberts H (ed) Doing feminist research. Routledge, London, pp 30–62
Pollard A (2009) Field of screams: difficulty and ethnographic fieldwork. Anthropol Matters 11(2):1–23
Punch M (1994) Politics and ethics in qualitative research. In: Handbook of qualitative research
Stacey J (1988) Can there be a feminist ethnography? Womens Stud Int Forum 11(1):21–27
Tillmann-Healy L (2003) Friendship as method. Qual Inq 9(5):729–749
Tolich M, Fitzgerald M (2006) If ethics committees were designed for ethnography. J Empir Res Hum Res Ethics 1(2):71–78
Woodby L, William B, Wittich A, Burgio K (2011) Expanding the notion of researcher distress: the cumulative effects of coding. Qual Health Res 21(6):830–838
Research Involving the Armed Forces
34
Simon E. Kolstoe and Louise Holden
Contents

Introduction
The Difference Between Research Integrity, Governance, and Ethics
What Are Research Ethics Committees Looking For?
Social or Scientific Value: Scientific Design and Conduct of the Study (Including Involvement of Patients, Service Users, and the Public, in the Design, Management, and Undertaking of the Research)
Recruitment Arrangements, Access to Health Information, and Fair Research Participant Selection
Favorable Risk-Benefit Ratio: Anticipated Benefits/Risks for Research Participants (Present and Future)
Care and Protection of Research Participants: Respect for Potential and Enrolled Research Participants’ Welfare and Dignity
Informed Consent Process and the Adequacy and Completeness of Participant Information
Suitability of the Applicant and Supporting Staff
Independent Review
Suitability of Supporting Information
Other General Comments
Consider and Confirm the Suitability of the Summary of the Study
Woman in Ground Close Combat Research Program: A Case Study
Researchers’ Experience
References
604 605 608
609 611 612 613 613 615 615 616 616 617 618 618 620
S. E. Kolstoe (*)
University of Portsmouth, Portsmouth, UK
e-mail: [email protected]

L. Holden
British Army, Andover, UK
e-mail: [email protected]

© Springer Nature Switzerland AG 2020
R. Iphofen (ed.), Handbook of Research Ethics and Scientific Integrity, https://doi.org/10.1007/978-3-030-16759-2_27
Abstract
Human participant military research conjures up images of ballistic experiments, biological warfare, or top-secret experiments with dubious objectives. In reality, the majority of human participant military research is defense-oriented and more widely applicable to health or other fields. It ranges from the translation of pioneering battlefield treatments into civilian medicine, to psychological insights gained from studies of how people operate under stressful situations, to management studies aiming to understand supply chains or explore team dynamics. In many cases research is carried out by civilian academics applying their expertise to military situations, or perhaps supervising postgraduate students who happen also to be military officers. Here, rather than the subject of the research being problematic, ethical challenges often arise from the specific military context. This chapter offers advice to researchers from all fields (medical, quantitative, qualitative, and mixed methods) as they design their studies and consider the ethical requirements relevant to conducting human participant research in or for the military.

Keywords

Ethics · Military · Research · Governance · Integrity
Introduction

There is a distinction between the ethics of war and the ethics of research conducted in a military context or environment. Regardless of whether war should or should not be an accepted part of broader ethical frameworks, almost all nations have militaries, and almost all societies have soldiers, sailors, and aircrew. This situation is unlikely to change, making the consideration of how research is conducted within this context an important and valid activity, especially given the significant level of resources that societies assign to their militaries.

From an ethical perspective, those who serve in the armed forces also agree to put themselves in positions of considerable risk and danger on behalf of their civilian populations. This significant and sometimes tragically ultimate sacrifice places a moral imperative on governments and societies to ensure that their soldiers, sailors, and aircrew are well cared for and able to take advantage of the latest technology and knowledge appropriate to their situation. Whether such innovations are offensive or defensive is relevant, but the distinction becomes considerably blurred if the overall aim is to protect combatants, reduce the duration or likelihood of major conflicts, improve understanding of human performance, develop medical care, or produce technologies that may also have considerable civilian utility. It is a fact of history that many key discoveries and innovations were originally driven by necessity at times of conflict or risk to national survival. It is also notable, and perhaps encouraging, that international conventions exist to limit activities and research conducted in areas such as chemical, biological, radiological, and nuclear (CBRN) weapons.

While this philosophical topic must remain important and the subject of considerable reflection, this chapter does not consider whether military research "should" occur but rather seeks to provide practical guidance for both civilian and uniformed academics and clinicians seeking to conduct lawful research within the military context. It draws on the authors' experiences of civilian and military UK Research Ethics Committees (RECs) and considers how well-established principles of research ethics can be applied to researching the military. This chapter is relevant to human participant research spanning all disciplines, from intrusive medical research to quantitative, qualitative, or mixed methods approaches more likely to occur in social sciences, management, or policy development contexts.

[Fig. 1 The structure of "Good research"]
The Difference Between Research Integrity, Governance, and Ethics

A key confusion for many researchers (and often regulators) is the difference between Research Integrity, Research Governance, and Research Ethics. The three areas are undoubtedly closely linked but at the same time represent three different parts of "good research." One way of understanding the distinction is shown in Fig. 1.

Research Integrity is often taken to mean the reliability, robustness, and trustworthiness of the results of research, but it is far broader than just a consideration of research methodology or results. Instead, perhaps the most useful conception of Research Integrity comes out of a consideration of the ethical virtues of the researchers themselves. Briefly, a virtue represents the "golden mean" between a related vice of deficiency and a vice of excess. Although a variety of virtues can be recommended for researchers, surveys such as the 2014 Nuffield Council on Bioethics Report on Research Culture (Joynson and Leyser 2015) provide examples such as being rigorous, accurate, original, honest, transparent, collaborative, multidisciplinary, open, and creative. Virtue theory is also familiar within military settings, where key virtues often include loyalty, competence, selflessness, integrity, and pride.

However, the primary responsibility for the promotion of virtues, be they scientific or military, resides within the professional communities themselves, not with regulators, ethics committees, or research managers. While everyone within the research environment has a responsibility to promote research integrity, the task of defining the virtues that make up Research Integrity within any given research context must necessarily sit with the relevant professional communities of researchers who are most familiar with the methods used in their area. Others in the system should be free to raise concerns, but not try to define for researchers what their professional virtues must be. This is an extremely important point because considerable angst is caused when researchers see others as trying to judge or dictate the virtues that they should be exhibiting. The issue is perhaps further confounded within military settings, where uniformed researchers often approach their work with a well-developed understanding of the military virtues. While these are not always identical to the virtues required within their area of research, it is the responsibility of academic supervisors, not ethics committees or governance officers, to help military personnel understand the virtues required for Research Integrity.

Research Governance is very different from Research Integrity. Whereas Research Integrity focuses on the researcher, Research Governance asks whether the research as proposed is legal and follows relevant guidelines and policies. Often Governance responsibilities are defined by law (such as in the EU Clinical Trials Regulations (European Commission 2014)) or by research funders who may need to impose certain contractual agreements on institutions or research teams prior to handing out the funding.
Perhaps one of the most convenient summaries of Governance responsibilities is contained in the UK Research Integrity Office's (UKRIO's) "checklist for researchers" (UKRIO), which was broadly developed from the responsibilities described in the UK Policy Framework on Health and Social Care Research (Health Research Authority). Many other national, institutional, or subject-specific versions are available. Research Governance is a relatively straightforward "check box" procedure to ensure that appropriate contracts are in place, research facilities are available, health and safety issues are covered, indemnities are arranged, and all other legal or policy requirements are accounted for prior to the research commencing. The responsibilities for these checks sit better with Research and Development (R&D) officers or Clinical Trial Units (CTUs) than with researchers themselves, as the researchers are unlikely to have all the expertise, knowledge, or authority to sign off on such issues. The individual signing off on governance checks is therefore referred to as the "Research Sponsor" and must be distinct from the "Chief Investigator." It must also be noted that in this context the Research Sponsor and the Research Funder are often (but not always) two different entities.

Identifying the appropriate Research Sponsor within the military context may be challenging because, along with providing reassurance that all research governance responsibilities are carried out, there may be additional military or security arrangements needed for the research to proceed. While military research agencies exist that are experienced in fulfilling this role, military research is often conducted by university-based researchers whose research offices may not have much experience with the military. Here it is important that an officer of appropriate seniority (often at
OF5 (Colonel) or above) is identified early in the planning stages to provide or arrange for military-specific advice, assurances, and permissions. Likewise, if research is being conducted by military personnel as part of a program of study, it is important that the educational institution attempts to identify a military contact of appropriate seniority to help assure governance arrangements rather than relying on the student to identify and arrange all the military-specific permissions. It is also key that the senior military contact is aware of the role and governance responsibilities of the Research Sponsor, even if they are not fulfilling all aspects of the sponsor's role themselves. A vague assurance from a very senior officer that they support the idea of the research in principle is seldom appropriate on its own.

While Research Integrity is mainly concerned with the virtues of the researcher, and Research Governance looks at legal or policy compliance, Research Ethics is a judgment on whether the research project, as proposed in a defined protocol, is safe and acceptable for participants and meets wider standards of ethical acceptability. This judgment is complex and is better suited to a Research Ethics Committee (REC) composed of a combination of members with research or clinical experience, alongside "lay" members to represent wider society. Reviews by RECs are necessarily pragmatic as they examine a research protocol in detail and then consider issues that researchers need to take into account during the course of their work.
As a result, whereas Research Integrity and Research Governance proceed continually alongside the research process, Research Ethics review necessarily occurs at a point in time prior to the recruitment of participants (as recommended by the authoritative Declaration of Helsinki (WMA 2013)) and answers the question "If the research proceeds as described in the protocol, will it be ethically acceptable?" This is not to say that RECs should not have a continuing role in the research process, perhaps by reviewing amendments or providing advice, but rather that it is not the role of a REC to police researchers in the way that professional bodies (who are concerned with Research Integrity) or Research Sponsors (who are concerned with Research Governance) might.

RECs do not approve research; that is the job of the Research Sponsor. Instead, RECs offer an opinion on the ethical acceptability of the research, albeit an opinion that is taken sufficiently seriously by Research Sponsors that they will often not approve research without a favorable ethics opinion. There may also be certain laws relating to research (see below) that require review by an appropriately constituted and authorized REC before the work is legally allowed to proceed. Similarly, insurance or indemnity providers may not cover research activities unless they see evidence of a favorable ethics opinion.

However, while it is ideal for the roles of Research Sponsor and REC to be separated, it must be noted that some nations run "Institutional Review Boards" (IRBs) that cover both Research Governance and Research Ethics responsibilities. This is not ideal because the ethics review cannot then be independent of the institution (perhaps a university or hospital) conducting the research, making it difficult to demonstrate independent accountability if ethical or governance issues arise.
The independence of RECs is perhaps even more pertinent in the military context, as military-based research is often perceived as contentious due to the potential for research outcomes to be used in offensive as well as defensive activities. Here the recommendation is to form a civilian committee, but with the appropriate clearances and military advice to be able to review and provide an opinion on protocols that may, on occasion, be highly sensitive.

In summary, one model for conceiving of good research, including that conducted in the military context, is to consider it as comprising three interlinked components. Research Integrity is primarily the responsibility of academic communities and research supervisors familiar with the virtues required to conduct successful research in their field; there may also be important and relevant military virtues that apply. Research Governance depends on the legal and policy environment, requires the identification of a suitably qualified and authoritative Research Sponsor, and requires support from an institutional infrastructure with experience of dealing with legal, regulatory, and policy issues. One of the responsibilities of the Research Sponsor is to assist the individual researcher in obtaining a suitable ethics review. In this context, Research Ethics refers specifically to the process of ethics review by a REC. The remainder of this chapter focuses on this process, with the aim of providing practical advice to researchers as they proceed through ethics review, specifically addressing challenges inherent to conducting research in the military context.
What Are Research Ethics Committees Looking For?

Research Ethics Committees and Institutional Review Boards do not all operate in the same way and therefore are unlikely to be consistent in their review of projects (Trace and Kolstoe 2017). This inconsistency is a source of significant frustration to researchers. The problem is compounded if RECs are working within different organizations or administrative structures and, as a result, are using different operating procedures. In such a diverse environment, consistency can only really occur if committees and academics develop suggestions for the structure, or at least the topics, that RECs need to consider during reviews, and hope that the various RECs or IRBs come across these suggestions and try to apply them.

However, one significant advance in developing consistency has come from the drawing together of multiple ethics committees within the UK National Health Service (NHS), initially under the umbrella of the National Research Ethics Service (NRES) and more recently the UK Health Research Authority (HRA). While this process involved a consolidation from about 150 RECs down to the current 60 or so, it also provided the opportunity for reflection on how these committees worked, the common problems they encountered, and how REC review could be improved. There was already literature discussing the form and content of REC review (Emanuel et al. 2000), and the UK National Research Ethics Advisors' Panel (NREAP) was able to consider this literature alongside experience from across the NHS RECs to formulate ten review domains (National Research Ethics Advisors Panel (UK) 2014). These were swiftly incorporated into a "Lead Reviewers Template" and circulated to all NHS RECs to try to encourage consistency during the review process. Initially this template was not well accepted by committee members, but the argument for acceptance was
significantly helped by the HRA being given authority to determine which RECs were recognized by the UK Ethics Committee Authority (UKECA) to provide statutory ethics review as required by the European Clinical Trials Regulations, the UK Human Tissue Act 2004, and the UK Mental Capacity Act 2005, among others (for more details, see the UK Policy Framework for Health and Social Care Research (Health Research Authority)). This allowed the HRA to incorporate the ten review domains into the definition of an ethics review for the purposes of complying with the law, requiring RECs to use the domains when reviewing these types of research. Although this strategy was certainly not without critics within the REC community, over time there has been a gradual acceptance that the domains provide a useful guide for committees. Indeed, as more qualitative and social science-based research has been reviewed by these committees, the ten review domains have proven useful for the review of research that is quite far removed from the original clinical context within which they originated.

Within a military setting, the UK's Ministry of Defence REC (MODREC) is a UKECA-recognized committee and must use the review domains when providing a statutorily required review of studies. MODREC also reviews a wide range of other studies and, although not required to use these domains for other types of studies, has found them to be a useful guide in almost all cases. As a result, and especially if submitting to committees not experienced with military research, researchers would be well advised to ensure their application covers the following ten areas.
Social or Scientific Value: Scientific Design and Conduct of the Study (Including Involvement of Patients, Service Users, and the Public, in the Design, Management, and Undertaking of the Research)

There is significant debate as to how appropriate it is for ethics committees to comment on the justification or methodology of a research project (Trace and Kolstoe 2018). On one hand, a poorly justified project or an inappropriate method can be viewed as ethically problematic because it wastes time, resources, and effort. Furthermore, the burden of research that is intrusive or carries risks must be borne by the research participants, so researchers and ethics committees can be seen as having a duty to ensure that the goodwill and forbearance of research participants are not wasted. On the other hand, RECs review a wide range of research, and although they do have expert members, there may not always be someone in the committee room who is an expert in the specific scientific or methodological area being addressed by any given project. While RECs do become quite experienced over time at reviewing a wide range of projects and come to understand how certain methodologies or research designs are commonly used, this does not make their review equivalent to scientific peer review.

The issue becomes even more complex in military or security environments, where RECs may be primarily composed of civilians with little experience of the research context. While REC members may be able to appreciate or empathize with university students being recruited on campus for a study, the situation is quite different if participants are young soldiers
from a wide variety of social backgrounds being recruited in a highly disciplined context with a very different understanding of risk compared to civilian environments. Similarly, the justification for some research projects might only be obvious to those who have personally experienced certain military environments or situations, far removed from the relatively safe and secure lifestyles of many REC members.

The solution to this problem normally lies in reassuring REC members through external peer review of projects. The simplest cases for the REC are often the largest projects, funded competitively through grants awarded by established research commissioning programs, often made up of established research funders working in collaboration with military authorities. Here an understanding of the rigorous peer review process conducted prior to the funding being awarded is often sufficient to reassure the REC that the research questions have been suitably formulated and the proposed methods are suitably robust. Likewise, when it comes to studies or trials funded or sponsored by large defense contractors, RECs are often reassured that such relatively large investments are scrutinized, due to the (hopefully reasonable) assumption that such companies or organizations have procedures in place to prevent wasting resources or damaging their reputations. However, RECs also need to be reassured that conflicts of interest, perhaps most obvious in the case of commercial entities (but also relevant to research charities who may have certain hopes about the outcome of projects), do not influence the justification or methods chosen. How a REC considers this particular issue, and especially how a REC decides whether a peer review is suitably independent, is covered in more detail below (section "Independent Review").
More complex cases regarding the justification of projects or methods often come from relatively poorly funded researchers whose main aim may be military career enhancement through either publication or obtaining research-based degrees. These individuals are often highly motivated but, without the financial backing of large research grants or commercial organizations, may have less time and fewer resources to devote to a systematic literature review and to involving statistical or methodological expertise in the design of their projects. In these cases it is important for RECs to consider the experience of the research or supervision team in the subject of the research and the methods proposed, and also to try to get a feel for whether the protocol has been written in a convincing style with appropriate argumentation and references to the literature.

One other way researchers can help reassure RECs that a proposed piece of research is appropriately justified and designed is to involve those who may be affected by the outcome of the research in the initial design. Certainly within clinical settings, much research is focused on improving patient health or experience, and while clinicians may have a good understanding of the scientific or technical background, patients or the public can often provide an alternative view that helps prioritize research questions and benefits research design. While this type of input is increasingly expected within healthcare settings, it can also be important in military settings, where input received from target populations can be presented to the REC as part of the evidence that the research questions are justified and that the proposed methods address them appropriately.
Recruitment Arrangements, Access to Health Information, and Fair Research Participant Selection

Recruitment arrangements are a particularly important topic for RECs, especially when research is being conducted in a military environment. Although social hierarchies are relevant in civilian research (alongside power imbalances such as the doctor/patient or employer/employee relationship), the military is well known for its clearly hierarchical rank structure and emphasis on following orders. This issue is exacerbated because researchers are often officers while research participants are often enlisted or noncommissioned personnel with relatively little authority.

The problem is particularly pertinent for research carried out by physicians, who hold relatively high ranks. For example, in the British Army, Consultant Physicians seldom carry ranks lower than Lieutenant Colonel, the same rank as a regimental commanding officer. The involvement of such individuals in promoting research, identifying participants, and recruiting them to studies can be problematic because, even if they try not to refer to their rank (and perhaps also dress in civilian clothing), their research population is likely to guess their rank and feel less able to refuse research involvement. A mitigating factor may be that researchers who are also involved in routine clinical care may find that potential participants feel more able to engage with them than with other similarly ranked officers. However, here the therapeutic misconception must also be taken into account, as advice given as part of clinical care and advice given as part of research are two quite different things. One solution is to approach potential participants through gatekeepers such as administrative staff, or alternatively to employ nonmilitary postdoctoral research fellows or students to brief participants and recruit them into the research.
Much use can also be made of posters and notices in weekly routine ("Part I") orders, helping inform (but critically not ordering) potential research populations in a fairly indirect way. Alternatively, if such methods are thought unlikely to recruit enough participants, briefings can be organized as part of training days, health fairs, or predeployment information briefings. However, here good practice is to separate research briefings from other events where participants might be receiving information in the form of orders or directives, perhaps by making attendance at the research part voluntary, organizing the research segment to follow other sessions after a short gap, or holding it during coffee breaks.

One potential complication arises when soldiers are being recruited from units assigned to what is referred to in the UK as RAAT (Regular Army Assistance Table) duties, or international equivalents. These troops are essentially on standby to assist with any duties (including research) that may be required, and are often assigned duties through their normal command structure. However, as voluntary participation is a basic ethical principle as far as research is concerned, it is important that recruitment to research is not assigned in the same way as other duties. Likewise, it is important to ensure that potential participants are not being advised to take part in "voluntary" research as an alternative to being ordered to carry out an unsavory task. While this situation sounds unique to the military, it is not dissimilar to the case of undergraduate students being encouraged to take part in research as part of
degree requirements. For instance, some university departments insist that students sign up as participants to a certain number of research projects before being allowed to recruit participants to their own studies. Here, in a similar way to the RAAT situation, it is important that potential participants are given sufficient choice of which research projects to participate in, or alternatively are given the choice of other reasonable activities, so that they do not come to see participating in research as the less distasteful option.
Favorable Risk-Benefit Ratio: Anticipated Benefits/Risks for Research Participants (Present and Future)

Two important principles considered by RECs are beneficence and non-maleficence. Beneficence refers to whether a study has any benefits, including scientific benefit (as covered above) and also more local benefits directly to participants. These could be therapeutic benefit, psychological benefit, other helpful insights gained by participants while taking part in the research (such as access to personalized training plans), or direct payment for taking part. Although inducing people to take part in research can be problematic (see section "Care and Protection of Research Participants: Respect for Potential and Enrolled Research Participants' Welfare and Dignity"), it is not unreasonable to ensure that participants get at least some proportionate benefit from taking part, and that benefits do not accrue only to science (although many participants view benefit to science as a reason for taking part in the first place) or to the researcher's career.

Non-maleficence refers to "doing no harm" and can be viewed as an assessment of the safety of the research participants, the environment, or even the researchers themselves. While it is important that RECs consider safety, it must also be pointed out that safety requirements are often specified by law and are thus the legal responsibility of research sponsors, medical or professional regulators, or employers in the form of hospital, university, or military authorities. As a result, and similar to the case of scientific justification, researchers need to reassure the REC that safety considerations have been made, that risk assessments are in place, and that the risks and burdens of the research are being appropriately communicated to the research participants as part of the recruitment process. One example specific to the military context comes from the involvement of recruits in research.
Here there is a greater likelihood that injuries caused during research may have a significant career impact, because recruits are required to be fully fit for service at the end of training.

Regarding payment, there has long been a belief that participation in research should be based mostly on altruistic reasons rather than a motivation for material gain. There is an interesting ethics literature discussing this issue (McNeill 1997; Wilkinson and Moore 1997, 1999), but briefly the argument boils down to the fact that members of society such as firemen, soldiers, professional divers, and members of many other professions are regularly paid on a risk basis. It is therefore not clear why research participants, who have been suitably informed of the risks inherent in
taking part in a study, should not also be allowed to choose knowingly to expose themselves to research risks through an agreement similar to the one made when people decide to work in risky professions or environments. One counterargument is that a great deal of medical research involves participants who are perhaps more vulnerable due to disease or disability; however, if efforts are made to understand this vulnerability and to ensure that processes are in place so that it is not used to manipulate people into "volunteering," it is unclear why such people should not also be given the opportunity to benefit materially from taking part in research studies. The exact level of compensation depends on the study and often comes out of discussions with the REC, although in the UK the Ministry of Defence has specific guidance relating to the payment of research participants.
Care and Protection of Research Participants: Respect for Potential and Enrolled Research Participants' Welfare and Dignity

This subject is also closely related to the principle of non-maleficence. If risks are identified, it is important that processes are put in place to ensure that participants are cared for: protected from the risks in the first instance and then, should a risk materialize, supported through ongoing care. This may be through signposting participants to sources of further advice or support (such as existing medical or psychological services), providing appropriate debriefing and follow-up opportunities, or, in particularly serious cases, ensuring there is provision for ongoing specialist healthcare or support. Such ongoing care may well be covered by trial insurance or indemnity arrangements, so a related role of the REC review is to ensure that appropriate insurance provisions are in place prior to a study commencing. Indeed, as mentioned above, insurers often stipulate that a REC review is carried out prior to insurance policies coming into effect, as evidence of due diligence.

Welfare and dignity must also be considered during the course of research studies, especially if intrusive or endurance testing is involved. This may take the form of ensuring appropriate medical/first aid cover; providing rest breaks and refreshments; or more practical provisions such as appropriate car parking, suitable waiting areas, or entertainment if study visits are of long duration (for instance, the authors recall one study providing underwater televisions to participants involved in long-duration diving experiments).
Informed Consent Process and the Adequacy and Completeness of Participant Information After determining whether a research project is justified, appropriately designed, and thus should be allowed to go ahead in the first place (see section “Recruitment Arrangements, Access to Health Information, and Fair Research Participant
Selection"), the recruitment arrangements for potential participants are one of the most important considerations for any REC. Autonomy, or free choice, is an important principle enshrined in almost all ethical guidelines and statements (Gillon 2003). However, it should also be noted that the phrase "fully informed consent" is seldom used anymore. This is because there is an appreciation that it is difficult to define what "fully informed" actually means – how much information is needed for someone to be "fully informed," and how educated, or how able to understand the protocol, does a person need to be to count as "fully informed" (Beauchamp and Childress 1994)? Instead, emphasis is now placed on ensuring that potential participants are "adequately" or "suitably" informed as to what they are being asked to participate in. How much information is provided depends very much on the specific project, what the risk-benefit ratio is (see above), and an assessment by both the REC in advance and the researcher taking consent at the time of whether the potential participant understands what they are being asked to do. Indeed, although consent is often viewed as the process of signing a consent form, it is perhaps better to view consent as an ongoing/continuous process that starts upon finding out about the research and continues right the way through research participation. As a result, although the participant information and consent form are very important documents, the consent process is not solely limited to handling, reading, and signing these documents. Although a paper-based signature provides a convenient method for recording consent, this may not be appropriate for all types of research.
For instance, research designs in which participants are intended to remain anonymous may be compromised if names are recorded as part of consent, and likewise online administration of questionnaires can be severely hampered if participants are expected to print out, sign, and then send back a consent form. Although it is difficult to generalize too much, it is often considered acceptable by RECs to include a tick box on either paper or online forms so that potential participants can agree to take part in the research without necessarily writing their name down. Similarly, adding a step where the participant has to click a button labelled "submit," or alternatively actively post a questionnaire, can be considered evidence that the participant is happy to take part in the research. It is probably better, however, not to rely on such implied consent alone, but again to ensure that a tick box has been provided next to a statement of consent. For research designs that collect data primarily through audio recording (e.g., telephone interviews) or video (e.g., focus groups), it can sometimes be considered appropriate by the REC to audio or video record participants agreeing to take part in the research. One notable example was a research project needing to recruit during an overseas military operation where the extra burden of carrying participant information sheets and consent forms placed the researchers in danger. Here the solution was to use a tablet computer (that was already being carried for operational purposes) to show participants a video explaining the study and allow them to consent electronically. Copies of the participant information and consent form were then emailed to participants for review at a later date, along with the opportunity to opt out of their data being used should they change their mind once they had the opportunity for more careful consideration in a less extreme environment.
34
Research Involving the Armed Forces
615
Suitability of the Applicant and Supporting Staff Assessment of the suitability of the applicant (i.e., the research team) and supporting staff should go without saying, especially for clinical trials that may require certain qualifications as a matter of law. For research not covered by legal requirements, it is very important that RECs are satisfied that those conducting the research are suitably qualified and experienced in both the research field and the methodologies intended. Where researchers are students studying for a qualification with the aim of becoming experts in their field, it is important that they receive appropriate supervision. In many cases, it is important that multiple supervisors are involved in the project to ensure a relevant expert is providing methodological or subject-specific input in all areas that the research is intending to explore. If research teams cannot demonstrate this breadth of experience, it is not uncommon for RECs to ask that research teams identify new collaborators who can provide input into the research design and, in some cases, assistance in carrying out the research and interpreting the results. The relative scarcity of expert statisticians can cause problems in this regard, with many researchers attempting to conduct statistical analyses without fully comprehending the complexity of the method or sometimes without an appreciation of alternative methods that may be more relevant. If clinical procedures are being conducted or medical data is being collected, it is vital to include individuals with sufficient experience in conducting the procedures and interpreting the results. Research physiologists, for instance, often rightly state that they are not qualified to make a clinical diagnosis on the basis of data that they collect, but this does not mean that they may not have a duty of care to their participants to raise concerns if incidental findings related to the health of participants emerge.
Although researchers may not be in the position to assess the clinical significance of incidental findings, they can at least signpost participants to sources of clinical support along with perhaps providing a brief summary of the tests they have performed so that any clinicians dealing subsequently with the participants can either provide reassurance to the participants, or conduct further clinical investigations if required.
Independent Review The role of independent review has already been mentioned as a solution to reassure RECs that projects are suitably justified or using suitable methodology. Independent review can also be useful to assure RECs that suitable risk assessments have been conducted and clinical burdens are being appropriately dealt with. In the military setting it is also important for RECs to have at least some understanding of the types of environments where the research is being conducted. Here military advisors can help, either sitting in on REC meetings or alternatively acting as advisors to the individual research projects to ensure that any military-specific considerations are being taken into account, although such advisors are not viewed as regular members of the committee due to
their obvious conflict of interest. It is similarly important to ensure that all other "independent" reviews are suitably independent. Such reviews will vary depending on the size, complexity, and nature of a project. For instance, a piece of relatively straightforward Master's degree research could involve "independent review" from an academic supervisor or colleague within the same department, but as projects get bigger and more complex, it is important that reviewers come from other institutions or even other countries. Here reviews conducted as part of grant funding processes are particularly valuable to RECs, but in lieu of such reviews, RECs are well within their remit to request additional, independent, peer review if they identify issues on which they are unhappy, or on which they do not feel qualified to come to an opinion. Research teams may thus be asked to find additional reviewers, or alternatively RECs may try to seek additional reviews, but in the latter case will normally ensure that confidentiality agreements are in place prior to research protocols being passed on to reviewers who are not normally part of the REC. It is not uncommon for RECs to keep asking for reviews until they reach a point where they are satisfied with both the independence and quality of the review.
Suitability of Supporting Information Supporting information refers to the provision of participant recruitment information, consent or debriefing materials, questionnaires, letters of permission, peer reviews, associated publications, appendices, etc. Much of this may replicate or support information which is present in the REC application form; for instance, in the case of clinical trials, the actual research protocol and investigator's brochures are often provided as appendices in order to provide further details of the study should the REC require them. It is also important for RECs to see any questionnaires that are intended to be used, scripts of interview questions, guidance for those conducting focus groups, information documents relating to any medicines or equipment that will be used, etc. It is normally up to the research team to decide which documents would be helpful in illustrating the research they are describing in the application form. Often REC administrators can help researchers ensure that they have included the documents that individual committees may like to see, or alternatively RECs may ask researchers to subsequently provide additional documents that they think are important for the review. It should be noted that a brief letter from particularly senior officers (such as Field Marshals or the First Sea Lord) saying that they support the research in principle can be viewed unfavorably by the REC, as an attempt to put undue pressure on its independent decision-making process.
Other General Comments The above eight categories cover the majority of issues that RECs consider during the review of a project, but the innovative nature of research may mean that new or
novel issues could well be raised by researchers who are seeking to push the boundaries of knowledge or devise new methodologies. That said, almost every researcher will try to claim that their project is novel or groundbreaking in some way, when in reality the majority of research projects do not raise any significant new or novel ethical issues. Other issues that a REC may comment on might relate to changes in legislation: although the legal responsibility to comply with legislation sits with the research Sponsor, the REC's experience of how other researchers have dealt with such issues may be helpful to the research team. One recent example has been the introduction of the EU General Data Protection Regulation, which requires that certain types of information are presented using fairly specific wording. Likewise, broad research experience, coupled with input from military advisors, may allow the REC to make other observations that could be helpful to the research team.
Consider and Confirm the Suitability of the Summary of the Study This final category was added by the Health Research Authority when they were asked to publish abstracts and basic details for research projects as part of a project to improve research transparency (Health Research Authority 2019). It therefore touches on the important wider issue of research registration and dissemination. It should go without saying that there is no point conducting research if the results of the work are not accurately and appropriately disseminated. Historically, the failure to publish research has led to a collection of issues referred to as "reporting bias" and has been particularly relevant in pharmaceutical research, where commercial companies have been guilty of only publishing research that would support the marketing of drugs that they have developed. As a consequence, there now exist legal requirements for some types of research to be preregistered on public databases prior to recruiting human participants. Upcoming European Clinical Trials Regulations will also require the reporting of summary results (European Commission 2014). However, although these provisions only apply to certain types of research, it is good practice for all research teams to consider their dissemination plans prior to conducting research projects. Just stating "results will be published in a journal and presented at conferences" is seldom sufficient now that sophisticated "open access" tools exist to make research protocols, data, and analyses available more widely beyond just the readers of final journal articles. However, while making research open access is certainly viewed as good practice, it is not always appropriate.
For instance, there may be confidentiality or methodological reasons as to why data should not be made more widely available, while in the sphere of defense and security research there may well be strategic or national security reasons why research data, and even the final outcomes of research, are not publicized widely. These considerations need to be discussed with RECs who may seek more information or justification for the planned dissemination routes. For instance it may be possible to publish some parts of a research project while keeping back information that may be particularly security sensitive.
618
S. E. Kolstoe and L. Holden
Women in Ground Close Combat Research Program: A Case Study To conclude, this chapter now outlines a research program conducted by the British Army that illustrates many of the principles described above. The Women in Ground Close Combat (WGCC) research program was born out of the ministerial decision (Tri Service Review 2014) to allow women to serve in Ground Close Combat (GCC) roles. There was, however, evidence from the scientific literature that GCC roles might increase risks to musculoskeletal, mental, and reproductive health (Ministry of Defence 2016). In order to understand and control these risks, a parallel research program was launched with a reporting date of 2021. The research program was to focus on Army Basic Training (for both enlisted and officer recruits) in order to understand how women were affected by the more physically demanding GCC training.
Researchers' Experience Military recruits are a "convenient to access" research group but are vulnerable because of their junior status in the military hierarchy. There is also generally a culture of support for research in the military, which appears to be even greater in the training environment, possibly due to the more tangible benefits that improve training quality and reduce risk to future recruits. A number of projects were designed by the WGCC research team and subsequently reviewed by MODREC. Despite having worked with the ethics committee to ensure that the recruitment process was not manipulative, the researchers found that, once research was underway, ensuring freely given consent was sometimes a challenge and needed to be proactively managed. For example, when the research was advertised and the location of the participant briefing given, it was found that groups were still sometimes escorted/marched to the room by their Platoon Sergeant. Recruits would not challenge this simply because it reflected the nature of everything else that they did. Even when the briefings were clearly signed as optional, recruits would still turn up, seeing them as a part of their training obligations: "It's just another thing in the timetable." It was therefore important that the research team identified and addressed these issues, for example, by excluding military staff from the briefing process and ensuring that briefings were conducted away from recruits' living and training environments so as to remain within the bounds required by the favorable ethics opinion. The training environment in the UK Armed Forces is generally very supportive of researchers and recruit participation. Not unexpectedly, participation could be influenced by local commanders. If the platoon leader was felt not to be supportive, researchers found a noticeably reduced participation rate, although the converse was more the norm, with researchers reporting being "offered" whole platoons quite regularly.
This "desire to help" risked the research becoming more of an opt-out rather than an opt-in situation for the participants, meaning the researchers had to carefully manage the recruitment process to ensure that consent was always freely given. Perhaps this recruit willingness was an early sign of the military team ethos that
recruits would have demonstrated as part of their selection for training, but it was a concern for researchers, who said they "felt better" if the opt-in rate was less than 100% as this could be seen to demonstrate that the recruits did understand the idea of voluntary participation. Once research was underway, maintaining confidentiality could be a challenge in some environments, especially where research was taking place "on location" (in training, exercises, or on operations) rather than in a laboratory setting. This included keeping confidential who was (or wasn't) participating in the research, but also whether the research had uncovered something that the recruit was not aware of, such as a needle phobia, which, while not having any formal implications for military training, was something the recruit did not wish others to know about. Medical confidentiality could, however, be maintained through the use of Independent Medical Officers. Once participants were recruited, research interactions between the junior recruits and the sometimes quite senior-ranking military researchers were not found to be affected by the rank difference. Strategies such as the wearing of civilian clothing by military medical and nonmedical researchers ensured that military researchers were accepted in the same way as civilian members of the research team, ensuring the same researcher-participant relationship was able to develop. This relationship was important, as while the initial voluntary uptake into studies by recruits was often high, it was common for a number of study participants to subsequently withdraw from studies as they balanced the demands of training with the research study commitments.
This observation was reassuring, as it provided evidence that in this regard the military environment reflected that of any other employer, and thus the same processes were needed to ensure that employees (in this case recruits) did not feel an obligation to help their employer out of altruism or out of a concern that their employment would be more secure if they helped. Interestingly, some military researchers have anecdotally reported that they personally felt under more pressure to be a participant in research while based at universities than they ever did within a military environment. Some of the research for the WGCC program could be quite onerous in terms of demands on participants during a busy training program. Despite this, Officer Cadets in particular were found to be very interested in being part of the research. Some Cadets saw their time with research staff as an opportunity to provide feedback on their training, whether as part of focus groups in the research or simply by talking to research staff in the course of data collection. Indeed, such was the interest and enthusiasm that researchers reported they had to be very careful to avoid saying anything that could affect the study outcome, such as revealing the results of early analysis. Furthermore, it was a surprise to researchers that participation itself was seen as an enjoyable experience and appeared to have additional welfare benefits that were not available to nonparticipants. Some participants saw the research time as an opportunity to have a little "time away" from the military environment for a few hours. Some of the more time-demanding research extended over meal times and into evenings. On these occasions, researchers provided food and found that participants enjoyed this welcome break from military catering (e.g., enjoying take-away pizza) and saw this as a real benefit of taking part.
620
S. E. Kolstoe and L. Holden
The experience of researchers conducting the WGCC program provides a helpful illustration of the considerations outlined in the first part of this chapter. Furthermore, along with leading to greater care and protection of women training and serving in GCC roles, some of the research findings, e.g., on iron and vitamin D (Carswell et al. 2018), are also being exploited to improve the health of wider society. This is analogous to the improvements in trauma care and limb prosthetics transferred to the NHS from the experience of military operational healthcare during operations in Iraq and Afghanistan. WGCC also highlighted other advantages of using military populations, such as the significant social differences between university populations (whose students traditionally make up the majority of research participants) and the military recruit population, who often come from very different backgrounds. However, care must always be taken that researchers, and those allowing access to military subjects for research purposes, have considered the benefits and risks of taking part, by utilizing robust governance and ethics processes and ensuring that all researchers demonstrate integrity in their conduct. We hope that this chapter may help guide researchers seeking to conduct work in this environment.

Acknowledgments We thank the Women in Ground Close Combat Research team for providing informal feedback on their recent experiences relevant to the topic of this chapter.
References

Beauchamp TL, Childress JF (1994) The meaning and justification of informed consent. In: Principles of biomedical ethics. Oxford University Press, New York, pp 117–121
Carswell AT, Oliver SJ, Wentz LM et al (2018) Influence of Vitamin D supplementation by sunlight or oral D3 on exercise performance. Med Sci Sports Exerc 50:2555–2564. https://doi.org/10.1249/MSS.0000000000001721
Emanuel EJ, Wendler D, Grady C (2000) What makes clinical research ethical? JAMA 283:2701. https://doi.org/10.1001/jama.283.20.2701
European Commission (2014) Clinical trial regulation No 536/2014. https://www.ema.europa.eu/en/human-regulatory/research-development/clinical-trials/clinical-trial-regulation. Accessed 26 Feb 2019
Gillon R (2003) Ethics needs principles – four can encompass the rest – and respect for autonomy should be first among equals. J Med Ethics 29:307–312. https://doi.org/10.1136/JME.29.5.307
Health Research Authority (2019) Our transparency agenda. https://www.hra.nhs.uk/about-us/what-we-do/our-transparency-agenda/. Accessed 26 Feb 2019
Health Research Authority. UK policy framework for health and social care research. https://www.hra.nhs.uk/planning-and-improving-research/policies-standards-legislation/uk-policy-framework-health-social-care-research/. Accessed 19 Feb 2019
Joynson C, Leyser O (2015) The culture of scientific research. F1000Res. https://doi.org/10.12688/f1000research.6163.1
McNeill P (1997) Paying people to participate in research: why not? Bioethics 11:390–396. https://doi.org/10.1111/1467-8519.00079
Ministry of Defence (2016) Interim report on the health risks to women in Ground Close Combat roles. WGCC/Intrim-Report/10/2016, pp 1–72
National Research Ethics Advisors Panel (UK) (2014) Consistency in REC review, pp 1–8
Trace S, Kolstoe SE (2017) Measuring inconsistency in research ethics committee review. BMC Med Ethics 18:65. https://doi.org/10.1186/s12910-017-0224-7
Trace S, Kolstoe S (2018) Reviewing code consistency is important, but research ethics committees must also make a judgement on scientific justification, methodological approach and competency of the research team. J Med Ethics 44:874. https://doi.org/10.1136/medethics-2018-105107
Tri Service Review (2014) Women in ground close combat (GCC) review paper. GOV.UK
UKRIO (2009) Checklist for researchers. http://ukrio.org/publications/checklist-for-researchers/. Accessed 5 Nov 2018
Wilkinson M, Moore A (1997) Inducement in research. Bioethics 11:373–389. https://doi.org/10.1111/1467-8519.00078
Wilkinson M, Moore A (1999) Inducements revisited. Bioethics 13:114–130. https://doi.org/10.1111/1467-8519.00136
WMA (2013) WMA declaration of Helsinki – ethical principles for medical research involving human subjects. World Medical Association. https://www.wma.net/policies-post/wma-declaration-of-helsinki-ethical-principles-for-medical-research-involving-human-subjects/. Accessed 20 Aug 2018
Research Ethics, Children, and Young People
35
John Oates
Contents
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 624
Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 624
Childhood as Social Construction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 624
Children's Rights and Research Ethics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 625
Key Issues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 626
Current Debate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 628
Future Issues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 631
Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 632
Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 633
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 633
Abstract
Special considerations apply to the ethics of research with children and young people. The United Nations Convention on the Rights of the Child (United Nations, 1989) highlights the need to respect children's autonomy and agency while also recognizing the need for protection and support. For researchers, this means that particular care should be taken to ensure that children are fully involved in consenting processes, which may involve difficult ethical decision-making where local norms and values run counter to a rights-based approach. A rights-based approach also mandates a need to avoid excluding children from research that concerns them and to ensure that their voices are heard. The wide variations in how childhood is socially constructed around the world and the increasing use of social media and the Internet by children challenge researchers to adopt ethically sound practices. While children are widely seen as especially vulnerable, this should not mean that the protection and care imprimatur must dominate and override concern for autonomy.

J. Oates (*)
The Open University, Milton Keynes, UK
e-mail: [email protected]

© Springer Nature Switzerland AG 2020
R. Iphofen (ed.), Handbook of Research Ethics and Scientific Integrity, https://doi.org/10.1007/978-3-030-16759-2_28

Research with teenagers is very
different from research with younger children, and children’s capacities for understanding and relating to adults develop and change massively through childhood. The ethics of research with children and young people involves tensions between competing views and interests, and achieving good outcomes requires careful reasoning. This chapter discusses these issues and offers suggestions for solutions. Keywords
Research ethics · Children · Young people · Children’s rights · Consent
Introduction Growth in international concern for children’s rights has influenced views of what counts as ethical research practice with children and young people (Morrow and Richards 1996). This is shifting a positioning of children from being passive subjects to recognizing their agency and voice, ushering in a move from “research on” to “research with,” and generating a new field of “research by” (Lundy et al. 2011). A central focus in research with children that reflects this shift, around which other key ethics issues accrete, is the question of consent. The standard of freedom to consent assumes autonomy of the child, but does not necessarily imply an active role in the research other than as a “data subject,” a term still in current use. This chapter title includes the term “young people” to recognize that the boundary between childhood and youth is defined differently in different cultures and national legislation. While legal definitions relating to thresholds for permitted or illegal behaviors and children and young people necessarily focus on chronological ages as transition points, issues of research ethics hinge on matters that are not so easily mapped onto specific ages, but rather acknowledge developmental continua of children’s and young people’s cognitive, social, and emotional capacities (Hutchby and Moran-Ellis 1998) that researchers should take account of in order to engage ethically with them in research. Thus ethics codes, guidelines, and regulations sit in an often uneasy relationship between societal norms and values and principles that seek to be universal and sensitive to the specifics of individuals and their contexts (Abebe and Bessell 2014). This chapter seeks to highlight the main concerns that researchers should engage with if they wish to conduct research ethically with children and young people.
Background Childhood as Social Construction To properly reflect on the ethics of research with children and young people, it is important to consider how “immaturity” is represented in societies because these representations frame attitudes and practices of researchers (James and Prout 1990).
There is a long history of children being characterized as weak, passive and vulnerable, and cognitively, socially, and emotionally deficient (James et al. 1990). This positioning of children effectively disempowers them and locates adults in superior roles in relation to them, which has previously been a common approach in research (Carter 2009). At the same time, it is clear that very young children are universally dependent on the support and protection of more mature others. The very notion of “childhood” in itself has often carried a burden of “othering” children as distinct from adults and lacking in the range of qualities that define and are seen as required for full participation in decision-making and social action and hence in research (Punch 2002). National institutions, for example, of health, education, and security, and indeed parents, define, shape, and constrain in numerous ways the nature and possibilities for children’s lived experiences, in consequence constructing “childhood” as a life space apart from that of adults. The nature of childhood as a social construction rather than an empirical concept becomes especially apparent when geographical, cultural, and historical variations are taken into account. Just two examples serve to illustrate this: first the practice of child labor, which shows great variation around the world in its acceptability and extent, and second, how marriage and reproduction are controlled and enacted, with the age of legal marriage varying widely around the world. Cultural and subcultural variations in the framing of childhoods challenge the application of standardized ethics protocols and instead mandate the application of ethical reasoning to seeking solutions for conflicts between universal rights and local norms and practices (Morrow 2008, 2009; Meloni et al. 2015).
Children’s Rights and Research Ethics

In part as a consequence of globalization and of greater concern for human rights in general, the twentieth century ushered in increasingly widespread concern for safeguarding children from harmful practices that preyed on their vulnerabilities: children were positioned as relatively weak social actors, in addition to their undoubted physical weakness relative to adults and their dependence on adults for survival. These concerns led to a variety of protective measures being legislated for, again varying across the world, but generally raising the safeguarding of children higher on societal agendas. However, such moves can be seen critically as nevertheless still reinforcing an image of childhood as a time of weakness and vulnerability. Countering this positioning, and seeking to radically challenge historic conceptions of children as lacking in agency, the United Nations Convention on the Rights of the Child (United Nations 1989) set out clearly a series of rights, including rights for children’s views to be recognized and respected and for their voices to be heard in proceedings affecting them. Turning away from the view that rights depend on a person’s maturity, the UNCRC repositions and strengthens children’s roles as social actors, empowering them in ways that challenge many long-held views and practices (Jones and Welch 2010). Taking the UNCRC as a reference point, the development of rights-based approaches to the positioning of children in research has led to a number of ways of
626
J. Oates
rethinking how research practitioners should best respect and protect children’s rights in their work (Alderson 1995) while at the same time recognizing, as does the UNCRC, children’s developmental changes in mental and emotional capacity.
Key Issues

What Is a “Child” and What Is a “Young Person”?

According to the UNCRC, a child is any person under the age of 18 years. However, importantly, the Convention also states “unless under the law applicable to the child, majority is attained earlier.” This highlights the impossibility of stating categorically what, in age terms, defines “a child.” Even within a legal definition, within one nation, different minimum age limits are typically legislated for different activities such as driving a vehicle, purchasing alcohol, or being held criminally responsible, to give just a few examples. Along with these variations, it is also important to recognize the inherent variability in the development of cognitive, social, and emotional capacities as children grow up, variability that reflects a complex interaction between biological and environmental influences. This variability means that even at a stated age, two children could differ substantially in their capacities. It is widely recognized that each child’s developmental trajectory is unique, with peaks and troughs in progress, rapid advances in some areas, and slower progress in others. Because of these developmental facts, age is not a wholly reliable proxy for determining appropriate points for specific approaches to working with children. Nevertheless, ethical ways of researching with 9-month-old children clearly need to be different from those appropriate for 3-year-olds; the starting point for a researcher should still be a knowledge and understanding of the typical capacities and behaviors of the age group(s) in focus.

Age and Consent

One of the core issues in research with children involves the seeking and gaining of consent (Alderson and Morrow 2004). A widely agreed ethics principle is that participant consent should be freely given, that is, without coercion.
Parenthetically, one should note the shift in the attribution of agency implied by the move from “research subject” to “participant” as a descriptor, with particular implications for traditionally disempowered groups such as children. Further, it is widely accepted that consent freely given should be based on an adequate understanding of the implications of participation. For research with children and young people, this immediately confronts the challenge of developmental changes in cognitive, emotional, and social capacities, all of which play a part in understanding. At the same time, the role of parents, or other persons with legally defined responsibility for the care and wellbeing of children, in consenting to participation needs to be recognized. The UNCRC states the need for respect toward the “rights and duties of the parents and, when applicable, legal guardians, to provide direction to the child in the exercise of his or her right in a manner consistent with the evolving capacities of the child.” An implication here in relation to research consenting could be that the parental view
should not trump the child’s, but rather that a synthesis of parental and child views should be the aim, although this may not always be easy to achieve. Taking the stance, then, that a child has a right to engage with the consent process, at least from an age at which language is a feasible channel of communication, it behooves a researcher to explain the implications of participation in child-friendly and age-appropriate language, taking account of “the evolving capacities of the child” (Farrell 2005; Einarsdóttir 2007). To satisfy the principle of non-coercion, the researcher should also consider carefully the power relation vis-à-vis the child and take steps to minimize any bias toward consent that this may introduce. For a researcher, managing the interaction with a child when seeking consent may feel easy, but for a child, relating to an unfamiliar adult presents much more of a challenge (Perret-Clermont et al. 2004), and acquiescence may feel safer and more appropriate than showing resistance. Although this may not be classed as coercion, it nevertheless compromises a child’s freedom to decide whether or not to participate.
Incidental Findings and Disclosures

As in many types of research with people of any age, research with children and young people may reveal information outside the intended focus of interest, information that can challenge simple understandings of confidentiality and of the relationship between researcher and data subject. However, there are specific reasons why special consideration needs to be given to this area where children are the focus of research. Children may in some circumstances see a researcher as a trusted confidant with whom they can be open about matters that are troubling them, a possible side effect of establishing good rapport. They may not fully understand the possible consequences of revealing sensitive information about themselves or about other people. As well as direct disclosures of matters such as abuse or neglect, children may give indirect indications, or researchers may come to suspect that there are serious difficulties in a child’s life. Given the unfortunately high prevalence of mental health problems in childhood, which have not always been recognized or identified, there is a significant risk that researchers will encounter indications of such concerns and may be faced with decisions about how to proceed. Children may also not understand what acts are criminal, or the serious implications of some types of family secrets, so the researcher should consider what disclosures might cross a threshold beyond which some form of action should be taken. A blanket agreement of confidentiality will be challenged by such revelations or indications, so researchers should consider such risks and prepare protocols to manage them. There are no clear general agreements about breaching confidentiality and the appropriate actions to take (Cashmore 2006), so situation-specific ethical reasoning is needed, taking account of the risks of such actions (Kotch 2000).
Participation and Exclusion

While it is an increasingly recognized ethics principle that researchers should avoid, intentionally or unintentionally, selectively excluding individuals or groups who might have significant inputs to or stakes in the research topic, this concern has particular resonance where children and young people are concerned. Apart from the general
risk of failing to include children’s voices in research that concerns them and where their views and feelings are relevant, there are specific risks of excluding hard-to-reach and hard-to-research groups of children and young people (Powell and Smith 2009), whose voices and experiences may have at least equal value to those of children who are easier to access and involve.
Current Debate

Co-researchers?

Often starting from a children’s rights perspective, interest has grown among researchers in involving children more centrally in research about them (Kellett 2010). Driven in part by the wish to empower children to express their understandings and perspectives, and to have these taken account of, such greater involvement has taken many forms (Alderson and Morrow 2011; Harcourt and Einarsdóttir 2011; Bourke et al. 2017).

Consent and Assent

While childhood has been socially constructed in many different ways through history, a dominant theme across these variations has been children as possessions of their parents. This is in part a consequence of children’s biological dependency on more competent others to provide for their basic physical needs of food, warmth, and shelter, and for their psychosocial needs for attachment and cultural learning. Yet children’s autonomy has commonly been suppressed under regimes of control and “discipline,” with parents and other adults seeing their role as making decisions for their children rather than with them. This preemption has typically been bolstered by views of children’s incapacity to make well-informed autonomous decisions, reflected also in legal views on the reliability of children as witnesses (Parsons et al. 2015). At the same time, in some respects, including legally, parents are liable for their children’s actions and are generally expected to be. Often cited as a landmark, the 1985 decision by the UK House of Lords in the case of Gillick vs. West Norfolk and Wisbech Area Health Authority marked an important change in how children’s capacity (or lack thereof) for consent is viewed. While only directly applicable in England and Wales as case law on medical matters, its implications have spread internationally.
The core issue was whether children (defined in this case as under-16-year-olds) could be seen as having capacity to consent independently of their parents, and indeed without or counter to their parents’ consent on their behalf. The case involved the decision by a doctor to prescribe contraceptive medication for a child under 16 years of age, a decision made initially without parental involvement. The case ultimately hinged on whether the child concerned had sufficient understanding and intelligence to give valid consent to the treatment, which it was decided they did indeed have. This decision opened the door to questioning the previously widely held view that parents have absolute rights regarding consent on behalf of their children under 16 years of age.
However, it is rare for a researcher to seek to carry out research with children in which parents or legal guardians are kept unaware of their children’s participation. The only obvious situations in which this might be appropriate are research into child abuse or other topics where a child might be inhibited from giving information if parental consent were also sought. Generally, it is reasonable to expect that both child and parent(s) will be involved in consent, although this may involve tensions that need resolution. In its Code of Human Research Ethics, the British Psychological Society (BPS 2014) has promoted the concept of “valid” consent, rather than “fully informed” consent, as the more ethical approach. In respect of research with children, this aligns with the notion arising from the Gillick case that it is the validity of an individual’s consent that is ethically significant, not externally imposed boundaries such as age. Clearly, though, the younger the child, the more difficult it is for them to conceptualize costs and benefits, to understand the ramifications of involvement in a research project, and to reach a decision uninfluenced by the power imbalance between themselves and the researcher, all of which makes the validity of their consent more questionable. This is where involving children in developing protocols for the consenting process can be of value, as can drawing on knowledge about the development of mental capacity through childhood. Consent with older children and young people typically involves language: an information sheet, a consent form, and accompanying talk. Thus children’s levels of language comprehension need to be explicitly considered and reflected in the writing of information sheets if valid consent is to be sought.
For younger children, seeking consent on the basis of a structured conversation, in addition to or instead of the child reading an information sheet, may aid validity, as can engaging a parent in the process. Rather than tick-box responses on a consent form to indicate and record consent, it may be better suited to a child’s capacities to use icons such as smiley or sad face emojis. For very young children or infants, empowering them in the consent process clearly needs a different approach. Here, the researcher can draw on the concept of “assent,” which requires sensitivity to the signals a child may express in their behavior that indicate their willingness or unwillingness to participate, or to continue participation once started. This requires knowledge of how young or very young children show their emotions. For example, a reduction in eye contact or a reticence in reply may indicate withdrawal of assent. A contentious issue in relation to children and consent is the question of opt-in versus opt-out parental consent where research within school settings does not involve the students in activities different from those that occur as part of normal curricular practice. Opt-in consent requires a parent to actively give consent; otherwise, their child does not participate. Opt-out means that a child will be excluded from participation only if a parent expressly states that they do not wish their child to participate. The UK Economic and Social Research Council’s policy (ESRC 2014) favors acceptance of opt-out consent in such circumstances, as does the British Educational Research Association’s ethics code (BERA 2018).
Given the roles of parents and guardians as those responsible for their children’s wellbeing and best interests, their part in consenting procedures cannot usually be ignored. So, even in situations such as apparently innocuous school-based research, ensuring that parents are aware of the nature of the research and have the opportunity to opt their children out is ethically appropriate. Once consent is given, participation in a research project will almost inevitably reveal more about what it entails than can easily be described and understood in information sheets and consent forms, potentially leading a participant to rethink their prior consent. For this reason, and taking into account the concept of “valid consent,” researchers can usefully consider monitoring ongoing consent and, indeed, re-consenting if there is a high risk of consent being challenged by the experience of participation. Because children are more likely to find it difficult to envisage the experience of participation on the basis of prior information, implementing explicit re-consenting protocols and monitoring consent and assent throughout data gathering need particular consideration. However, putting children’s autonomy first is not always a simple or easily achieved principle, nor is parental consent always the only other consideration where consent is concerned. Local cultural norms may involve strong beliefs about who has responsibility for consent: a village headman or landlord, for example, may hold a firm view that they can give or withhold consent on behalf of not only children but parents as well. Gatekeepers in institutional settings such as refugee centers or orphanages may also wish to exercise overall control. Statutory and common law will also mandate limits on children’s autonomy. Research with children in care presents particularly challenging issues in respect of who, in addition to the children, can validly consent.
All of these issues challenge a rights-based approach and highlight the complexity of factors that researchers need to weigh up.
Vulnerability and Resilience

Although children and young people are often classed along with other groups (e.g., elderly people, people with disabilities or compromised mental capacity) as being especially vulnerable, labeling them in this way is controversial (Graham and Fitzgerald 2010) and may reinforce preexisting power imbalances. At the same time, recognizing that children do have specific sensitivities is important for researchers, as is understanding the developmental changes in cognitive, social, and emotional capacities that affect children’s ability to assent and consent. Researchers rarely receive training in how to take these into account when developing their ethics protocols. It is important for researchers to recognize that specific vulnerabilities and resiliences will vary from one child to another, even among children of the same age; children’s temperaments and prior life experiences affect their sensitivities. There is increasing interest in the genetic and environmental factors that seem to make some children, sometimes dubbed “orchid” children, especially sensitive to environmental stressors as well as to positive influences, while others, dubbed “dandelion” children, may be more resilient and robust (Herbert 2011).
Future Issues

Cultural Differences

Increasingly, there is interest in research that explores cultural differences in children’s lives. Given the variation around the world in how children and childhoods are viewed, and the tensions where cultural practices do not align with the UNCRC or with expectations initially formed in specific cultural niches, researchers may need to appeal to basic ethics principles in reasoning out how best to enable their work to proceed, reaching compromises that respect local constraints while maintaining, as far as possible, a justifiable ethical protocol. This may usefully include considering how the research can best share benefit with the children being studied. Research Ethics Committees will typically demand fully argued protocols where such difficult decisions are being made and may be resistant to giving favorable opinions to ethics protocols that deviate markedly from past practice.

Marginalized Children and Research Priorities

Some of the children who are most difficult to access for research purposes may be among the most important to study, while also posing some of the most challenging research ethics dilemmas. For example, looked-after children, migrant and refugee children, children in conflict zones and child soldiers, children suffering neglect or abuse, street children, children with additional needs or disabilities, and children in hospital (not an exhaustive list) are all children for whom research can potentially bestow benefits through better understanding of their conditions, the impacts on their development, and potential forms of alleviation. In addition, the children’s rights agenda points to the need to avoid excluding them from research in which their voices can be heard and which is concerned with policies that will affect them.
For such groups, access may be problematic, with gatekeepers who may not share a rights agenda, or where it is unclear who can validly consent for the participation of younger children. Establishing boundaries of duty of care can be very challenging, especially where children are in unsafe, harmful contexts. Children in such contexts may be understandably wary of strangers, may not have much understanding of research, and may have experienced such disempowerment that the idea of having autonomy to consent or withhold consent may not be in their repertoire.

Digital Spaces

Open access to the Internet and the varied forms of social media, and the widespread use of these media by children, have raised new societal concerns by virtue of the increased blurring of public and private that the virtual world has generated. Concerns about the impact of long-term screen use, exposure to adult content, children’s vulnerability to predatory adults and indeed to other children’s negative behavior, and children’s naivety regarding the risks of online involvement have all contributed to a range of research focusing on these aspects. The ambiguities around “private” and “public” data and behavior in the virtual world create new issues for researchers
to resolve, such as what limits researchers should adopt on “scraping” data from social media (Tatlow-Golden et al. 2017).
Solutions

Consent

To address the challenges of achieving valid consent from children, enlisting the assistance of appropriately experienced professionals in developing consent/assent protocols is advisable, in addition to consulting children themselves. Alongside language-mediated consent, and perhaps the use of appropriate pictorial communication, sensitivity to a child’s visible signs of assent or dissent can play a complementary role, especially for young and very young children. The balance between assent and verbal consent also needs to be struck in relation to the child’s age and maturity, with sensitivity to assent/dissent playing a greater role the younger and less mature the child. For children, and arguably for participants of any age, greater understanding of the implications of participation follows from the experience of participation itself, which highlights the need for ongoing consent to be monitored appropriately and for opportunities for withdrawal to be made clear.

Co-production

In the inception phase of research, when research topics and questions are being formulated and are open to development, children can be involved through focus groups (Hennessy and Heary 2005) or individual interviews, for example. When ethics protocols are being prepared, children can usefully take part in piloting information and consent processes, bringing added benefit in resolving issues of clarity and accessibility, as well as highlighting concerns that might not otherwise be apparent to an adult researcher. Preparing interview protocols or designing questionnaires for data gathering with children can also benefit from review by children and may usefully include prompts or questions that seek to elicit the focal child’s view on their participation (Westcott and Littleton 2005). During data analysis, children can be brought into discussions of the interpretation of findings or invited to suggest further lines of enquiry.
More radically, several examples exist of supporting and empowering children to take on leadership of research, thus becoming researchers themselves (Alderson 2008; Roberts and Nash 2009; Kim et al. 2017). As in all research involving children, conscious effort is required to ensure that the empowerment being pursued is not undermined by subtle ways in which adults’ voices may still dominate topic setting and decision-making, or by neglect of the extent to which “research training” of children may inculcate them into adult ways of thinking (O’Brien and Moules 2007; Kim 2016, 2017).

Hard-to-Access Children

Researchers can help prepare themselves to deal ethically with the challenges reviewed above by consulting persons experienced in working with children in these situations, such as aid workers or healthcare professionals. Researchers
should also consider carefully the risks of disclosures by children that might challenge confidentiality conditions that have been previously agreed with participants and perhaps with gatekeepers as well during access and consenting procedures. Research with hard-to-access children may carry increased risks of harms to participants, as well as benefits, so careful prior analyses of the range of possible harms and the associated risks are especially important for these types of research.
Conclusion

Researchers planning to gather data from children and young people should recognize that special considerations apply and that ethics protocols, and indeed the research design as a whole, should properly take account of a number of respects in which simply applying approaches taken with adult participants is not appropriate. As children mature into young people, their cognitive, emotional, and social capacities change significantly. Capacities to comprehend what is involved in research participation, emotional abilities to resist social pressures and express personal wishes, and social abilities to negotiate relationships effectively all follow developmental trajectories, and a thorough understanding of these is a prerequisite for designing ethics protocols that empower children and young people and give due regard to their autonomy.

Taking a human rights perspective, the need to respect the autonomy of children and young people leads to a broader view of what involvement in research can mean, and is generating new models of involving children and young people as more than ‘data subjects’, working with them and giving them voice in wider aspects of the research process.

Cultural differences in how children and young people are socially positioned create challenges for researchers; sound ethical reasoning is needed to resolve such challenges in ways that respect both local cultural values and the universal values enshrined in the United Nations Convention on the Rights of the Child (1989).
References

Abebe T, Bessell S (2014) Advancing ethical research with children: critical reflections on ethical guidelines. Child Geogr 12(1):126–133
Alderson P (1995) Listening to children: children, ethics and social research. Barnardo’s, London
Alderson P (2008) Children as researchers. In: Christensen P, James A (eds) Research with children: perspectives and practices, 2nd edn. Falmer Press/Routledge, Abingdon, pp 276–290
Alderson P, Morrow V (2004) Ethics, social research and consulting with children and young people. Barnardo’s, Barkingside
Alderson P, Morrow V (2011) The ethics of research with children and young people: a practical handbook. Sage, London
Bourke R, Loveridge J, O’Neill J, Erueti B, Jamieson A (2017) A sociocultural analysis of the ethics of involving children in educational research. Int J Incl Educ 21(3):259–271
British Educational Research Association [BERA] (2018) Ethical guidelines for educational research, 4th edn. London. https://www.bera.ac.uk/researchers-resources/publications/ethical-guidelines-for-educational-research-2018
British Psychological Society [BPS] (2014) BPS code of human research ethics, 2nd edn. Leicester. https://www.bps.org.uk/news-and-policy/bps-code-human-research-ethics-2nd-edition-2014
Carter B (2009) Tick box for child? The ethical positioning of children as vulnerable, researchers as barbarians and reviewers as overly cautious. Int J Nurs Stud 46:858–864
Cashmore J (2006) Ethical issues concerning consent in obtaining children’s reports of their experience of violence. Child Abuse Negl 30:969–977
Economic and Social Research Council [ESRC] (2014) Framework for research ethics. ESRC, Swindon. https://esrc.ukri.org/funding/guidance-for-applicants/research-ethics/
Einarsdóttir J (2007) Research with children: methodological and ethical challenges. Eur Early Child Educ Res J 15(2):197–211
Farrell A (ed) (2005) Ethical research with children. Open University Press, Maidenhead
Graham A, Fitzgerald R (2010) Children’s participation in research: some possibilities and constraints in the current Australian research environment. J Sociol 46:133–147
Harcourt D, Einarsdóttir J (2011) Introducing children’s perspectives and participation in research. Eur Early Child Educ Res J 19(3):301–307
Hennessy E, Heary C (2005) Exploring children’s views through focus groups. In: Greene S, Hogan D (eds) Researching children’s experience. Sage, London, pp 236–252
Herbert W (2011) On the trail of the orchid child. Sci Am Mind 25:70–71
Hutchby I, Moran-Ellis J (eds) (1998) Children and social competence: arenas of action. Falmer Press, London
James A, Prout A (1990) Constructing and reconstructing childhood: contemporary issues in the sociological study of childhood. Falmer Press, London
James A, Jenks C, Prout A (1990) Theorizing childhood. Polity Press, Cambridge, UK
Jones P, Welch S (2010) Rethinking children’s rights. Continuum Books, London
Kellett M (2010) Rethinking children and research: attitudes in contemporary society. Continuum Books, London
Kim C-Y (2016) Why research by children? Rethinking the assumptions underlying the facilitation of children as researchers. Child Soc 30(3):230–240
Kim C-Y (2017) Participation or pedagogy? Ambiguities and tensions surrounding the facilitation of children as researchers. Childhood 24(1):84–98
Kim C-Y, Sheehy K, Kerawalla L (2017) Developing children as researchers: a practical guide to help children conduct social research. Routledge, Abingdon
Kotch J (2000) Ethical issues in longitudinal child maltreatment research. J Interpers Violence (7):696–709
Lundy L, McEvoy L, Byrne B (2011) Working with young children as co-researchers: an approach informed by the United Nations Convention on the Rights of the Child. Early Educ Dev 22(5):714–736
Meloni F, Vanthuyne K, Rousseau C (2015) Towards a relational ethics: rethinking ethics, agency and dependency in research with children and youth. Anthropol Theory 15(1):106–123
Morrow V (2008) Ethical dilemmas in research with children and young people about their social environments. Child Geogr 6(1):49–61
Morrow V (2009) The ethics of social research with children and families in Young Lives: practical experiences. Young Lives, Department of International Development, University of Oxford
Morrow V, Richards M (1996) The ethics of social research with children: an overview. Child Soc 10(2):90–105
O’Brien N, Moules T (2007) So round the spiral again: a reflective participatory research project with children and young people. Educ Action Res 15(3):385–402
Parsons S, Sherwood GS, Abbott C (2015) Informed consent with children and young people in social research: is there scope for innovation? Child Soc 30(2):132–145
Perret-Clermont A-N, Carugati F, Oates J (2004) A socio-cognitive perspective on learning and cognitive development. In: Oates J, Grayson A (eds) Cognitive and language development in children. Blackwell, Oxford, pp 303–332
Powell M, Smith AB (2009) Children’s participation rights in research. Childhood 16:124–142
Punch S (2002) Research with children: the same or different from research with adults? Childhood 9(3):321–341
Roberts A, Nash J (2009) Enabling students to participate in school improvement through a students as researchers programme. Improv Sch 12(2):174–187
Tatlow-Golden M, Verdoodt V, Oates J, Jewell J, Breda J, Boyland E (2017) A safe glimpse within the “black box”? Ethical and legal principles when assessing digital marketing of food and drink to children. Publ Health Panor 3(4):537–546
United Nations (1989) Convention on the rights of the child. Office of the United Nations High Commissioner for Human Rights, Geneva. Available online at www.unhchr.ch/html/menu3/b/k2crc.htm
Westcott HL, Littleton KS (2005) Exploring meanings in interviews with children. In: Greene S, Hogan D (eds) Researching children’s experience. Sage, London, pp 141–157
36 Older Citizens' Involvement in Ageing Research

Conceptual, Ethical, and Practical Considerations

Roger O'Sullivan
Contents
Introduction  638
Researching Aging  638
Why User Involvement?  639
The Ethical and Practical Context of Involvement  641
Views and Experiences of Researchers and Practitioners  643
Implications for Practice and Ethics  648
Stage 1: Prior to Research  648
Stage 2: Setting the Priorities  649
Stage 3: Methodology and Implementation  650
Stage 4: Disseminating  650
Stage 5: Evaluating Performance  650
User Involvement Values, Ethics, and Practice  651
Conclusion  651
References  653
Abstract
Research is essential in helping plan for our aging population – both locally and globally. Academics are increasingly being encouraged to involve government, the nonprofit sector, business, and especially older people in research. However welcome this drive is, we need to understand and debate to a much greater degree how older citizens' involvement in research can be appropriate, meaningful, and beneficial for all involved. This chapter explores the nature of user involvement in a context of ethical, practical, and methodological considerations and sets out the case for how this can be strengthened to improve the quality of work and the potential for stronger impact.

R. O'Sullivan (*)
Ageing Research and Development Division, Institute of Public Health in Ireland and Ulster University, Belfast/Dublin, Ireland
e-mail: [email protected]

© Springer Nature Switzerland AG 2020
R. Iphofen (ed.), Handbook of Research Ethics and Scientific Integrity, https://doi.org/10.1007/978-3-030-16759-2_29
Keywords
Ethics · User involvement · Aging · Older people · Risk · Values · Practice · Methodology · Impact
Introduction

Although the focus of much of ethical scrutiny and ethical decision making relates to the minimisation of harm to research subjects, we sometimes do not think enough about how, by their more effective inclusion in research design, planning, data gathering, analysis and dissemination, they can participate more in the process and so contribute to their own minimisation of harm. . . . This area of concern certainly points up a tension between methodology and ethics. . . . (Iphofen 2011, p. 123)
Increasingly, academics are being encouraged, financially and morally, to involve policy makers, charities, voluntary and community organizations, and older citizens in their research as a way to help demonstrate impact. While this drive is welcome, its ethical and practical aspects have received less attention than they deserve. To help bridge this deficit, this chapter reviews the conceptual, practical, and ethical aspects of user involvement in aging research. It draws on research commissioned and published by the Centre for Ageing Research and Development in Ireland (CARDI), which became the Ageing Research and Development Division within the Institute of Public Health in Ireland in 2015. I will illustrate philosophical, practical, and ethical aspects, setting out the case for user involvement, effective processes, and existing ethical frameworks. The chapter concludes that older citizens' involvement in research must be appropriate, meaningful, and beneficial and that ethical frameworks and research processes are too often focused on older people as the subjects of, rather than the players in, the research. This chapter endeavors to avoid unnecessary duplication of topics well covered in other chapters in this collection and in Iphofen (2011).
Researching Aging

Researchers must be mindful of the need to ensure that research findings apply to all older people, not just those who are fit and well. . . . Researchers, therefore, should not avoid ethical difficulties by taking the easy option of 'safe' research that excludes hard-to-reach minority or vulnerable groups . . . [or user involvement]. (CARDI 2009)
Research is critical to planning and delivering services for our aging population, and the drive toward user involvement means that organizations and older people themselves are increasingly being invited to help inform, design, and develop research, represent older people's issues, and disseminate findings. The role of older people in aging research has gained growing attention (Walker 2007; Murtagh 2014; O'Sullivan 2018). However, there has been less discussion of
the ethical aspects of user involvement in the context of practical and methodological considerations. This chapter will not cover the exclusion of older people from clinical research as the subjects of research (in itself an ethical and moral issue), as this area has been addressed elsewhere and the implications for health care and drug development are well rehearsed (McMurdo et al. 2011). Rather, I will focus on the involvement of older people not as the subjects of research but as users and co-designers of research. Furthermore, researchers must recognize that simple definitions of "older people" by chronological age can often be problematic (Pierce and Timonen 2010). The experience of aging may be "shared" by people of different age groups or may not be shared by those of the same age – there is no homogeneity in age categories. While the rate of disability rises with age, one should not assume that older people necessarily have disabilities or, for that matter, are vulnerable (see van den Hoonaard, ▶ Chap. 32, "'Vulnerability' as a Concept Captive in Its Own Prison," in this handbook, and Farmer and Macleod (2011) on disability research). Context and characteristics – e.g., education, gender, ethnicity, geography, culture, and socioeconomic group – are therefore vital, and definitions should be refined in line with the requirements of each project.

User involvement means to concretely engage users at all stages, to design with them their role throughout the process, to take into account their needs and concerns throughout the whole process, to carefully encourage, recruit, support and train them. (FUTURAGE 2011, p. 85)
It is important to understand that there is no single accepted definition of user involvement, and it can overlap with terms such as participation or engagement (O'Sullivan 2018; Shippee et al. 2013). "Personal and public involvement" and "patient and public involvement" are commonly used to describe involving users to inform the design, delivery, or evaluation of health and social care services. For the purposes of this chapter, therefore, user involvement refers to older people being actively involved in research projects. In contrast, participation describes people taking part in a research study as the subjects of the research. INVOLVE sets out a useful distinction (Table 1).
Why User Involvement?

As well as the practical benefits of helping to ensure research quality and relevance, the underlying reasons for involving members of the public in research are also informed by broader democratic principles of citizenship, accountability and transparency. (INVOLVE n.d.)
The benefits of user involvement are, to date, more morally and ethically based than evidence based. Advocates argue that user involvement has the potential to draw on users' knowledge, experiences, values, skills, and resources to help deliver better research outcomes. According to Twocan Associates, user involvement aims to improve research in a number of ways:
Table 1 Understanding involvement and participation

Involvement: where members of the public are actively involved in research projects and in research organizations
• As joint grant holders or co-applicants on a research project
• Involvement in identifying research priorities
• As members of a project advisory or steering group
• Commenting on and developing patient information leaflets or other research materials
• Undertaking interviews with research participants
• User and/or carer researchers carrying out the research

Participation: where people take part in a research study
• Being recruited to a clinical trial or other research study to take part in the research
• Completing a questionnaire or participating in a focus group as part of a research study

Source: INVOLVE (2012, p. 7)
• Relevance and usability
• Strengthening confidence in validity
• Increasing the likelihood that the research will be used by others
• Increasing the prospects for fundraising
• Strengthening the advocacy potential (http://www.twocanassociates.co.uk)
The level at which older people are involved in research is critical, and Barnes and Taylor (2009) described four types:

• Active subjects in the research
• Advisors to researchers
• Research practitioners working on their own or in collaboration
• Direct commissioners of research to use it in campaigning work

They outlined six key reasons for the involvement of older people, including the need to:

• Produce research that is considered relevant and important by older people
• Understand what aging means to older people
• Ensure that research has a better impact
• Develop skills among older people
• Challenge ageist assumptions
• Generate data for a campaigning resource for the age sector
In summary, the promise of user involvement is that it can help research have a stronger impact and produce research that is considered relevant by older people and their organizations. It can also be a means to generate data for lobbying, advocacy, service delivery, and policy and program design, in addition to developing skills and knowledge among older people and nongovernment organizations (Barnes and Taylor 2009).
The Ethical and Practical Context of Involvement

What the current involvement of older people in research means in practice can be characterised as a continuum between two ideal types: consumerism and empowerment. The consumerist model consists of relatively small incursions by older people into the research process, most commonly as a relatively passive reference point among several to be 'consulted' for example through focus groups or membership of project advisory committees. At the other pole, much less frequently, older people are more closely involved as active research participants. This active engagement ranges from making contributions to various stages of the research process, from taking a leading role in problem definition and conceptual development, to commissioning (through representative organisations), to formal training as researchers or co-researchers. In between is a range of different practices and statuses in the relationship between researchers and older people. (Walker 2007, p. 481)
In 2007, Walker observed that the involvement of older people in research is weighted toward the consumerist end of this continuum. In the years since that article was published, this analysis, and the challenges of user involvement it identified, still hold true in the majority of cases (O'Sullivan 2018). King and Bennett (2016), in their article "The challenges of service user involvement: personal reflections," summarize the practical and ethical aspects of user involvement:

The challenges to service-user involvement revolve around two themes: logistics and ethical dilemmas. Logistics can be challenging for all parties. . . . The ethical considerations include the professional duty of care when accounts of poor care are recounted, and being sensitive to the service-users condition when discussing potentially distressing subjects such as end of life care. When any ethical issues arose, it was useful for the project manager to be mindful of any professional codes of conduct, as well as having access to an experienced supervisor for peer support.
The reference to professional codes of conduct highlights an important connection between ethical codes of conduct and research. A range of professional codes, frameworks, and guides exists to support researchers undertaking aging research that involves humans as the subjects of research. There are also formal support structures such as University Research Ethics Committees or Health Service Ethics Committees, as well as Ethics Research Networks. Each institution or professional body will have a protocol, a set of guidelines, or a range of materials to guide the researcher through the various stages. The intention is to protect the rights, dignity, and welfare of research participants, as well as to protect the rights of researchers to perform ethical research and to mitigate risk to their institution. However, the ethical aspects of the user involvement of older people in research are less well developed. Different professional associations, organizations, and institutions have various codes and policies. For example, the Social Research Association (2003) produced a set of ethical guidelines that focus on:

• Protecting research subjects
• Ensuring high-quality research
• Reassuring funders
• Helping to maintain the good reputation of the research profession
• Complying with legislation

The guidelines also cover obligations to society, funders, employers, colleagues, and subjects, but from a risk management position rather than a user perspective. They broadly reflect the statements of The Gerontological Society of America and the British Society of Gerontology. Likewise, the RESPECT code of practice has three main principles:

• Upholding scientific standards
• Compliance with the law
• Avoidance of social and personal harm (http://www.respectproject.org/)

The issue of ethics has gained increasing attention in the field of public health; despite this, teaching ethics in public health programs is often not routine (Schröder-Bäck et al. 2014). Yet an understanding of ethics is considered a key competency for public health professionals. Coggan and Viens (2017), in a government-commissioned piece of work within the context of the skills and knowledge framework for public health professionals, focused on public health ethics in practice, providing an introduction to public health ethics as a philosophy and as applied to practice and policy. Recognizing that public health professionals often receive little training and guidance on how to reach decisions based on ethical thinking, Schröder-Bäck et al. (2014) set out a framework for teaching public health ethics; although developed for teaching rather than research, it also provides a valuable checklist to assist ethical considerations for researchers in the context of user involvement:

• Non-maleficence – Will no one be harmed by the proposed intervention?
• Beneficence – Is it possible to assess whether there is more benefit than harm?
• Health maximization – Is the proposed intervention effective and evidence based?
• Efficiency – Is the proposed intervention cost-effective?
• Respect for autonomy – Is there really "informed consent" to take part in the intervention?
• Justice – Is no one (including third parties) stigmatized, discriminated against, or excluded as a consequence of the proposed intervention?
• Proportionality – Is the proposed intervention proportionate?

(Adapted from Schröder-Bäck et al. 2014)

CARDI's funding promoted the production of high-quality research and strengthened the role of older people as both users and beneficiaries of research. In 2010, in line with research governance standards and especially ethical approval, CARDI produced an ethical governance framework to ensure that research funded by CARDI was conducted to the highest scientific and ethical standards. The CARDI Research Governance Framework had three objectives:
1. To ensure that age research is carried out to the highest scientific and ethical standards
2. To ensure that our work and supported projects and activities comply with relevant, research-based legal requirements, policies, and procedures
3. To ensure that research and researchers avoid harm, minimize risk, promote user involvement, and pursue progressive ethical practice

In sum, the examples above illustrate the range of ethical frameworks, structures, and codes of practice in research, focused on aspects such as honesty, objectivity, integrity, and nonmaleficence. In the main, however, they treat older people as the subjects of research. User involvement in aging research, by contrast, is about empowerment and co-design rather than passive participation. This role, as the next section illustrates, is not fully understood or utilized by the research community.
Views and Experiences of Researchers and Practitioners

In this section, I set out the views and experiences of researchers and practitioners in aging on user involvement in research as a means of illustrating ethical and methodological issues, highlighting a number of practical and professional barriers to user involvement. The section draws on CARDI-commissioned research based on e-survey data and interviews with stakeholders across the public, private, and nongovernment sectors, as well as researchers, in the UK and Ireland (Murtagh 2014). Although drawn from a nonrepresentative sample, it provides a useful illustration of the issues, views, and challenges faced. This commissioned research formed part of a portfolio of resources that CARDI produced and published to help build capacity and increase ethical and practical awareness among academics and government and nongovernment organizations in relation to planning, undertaking, commissioning, and disseminating research and its alignment with user involvement. Table 2 sets out the profile of respondents – mainly academics and voluntary, community, or not-for-profit sector respondents.
Table 2 Respondents to the e-survey

Sector: Percentage (No.)
Academic organization: 46% (23)
Statutory sector or public agency: 8% (4)
Voluntary, community, or not-for-profit: 34% (17)
Private sector: 8% (4)
Personal response: 4% (2)
Other: 0% (0)
Total: 100% (50)
Table 3 The role of older users in research

Variable: Very important / Quite important / Not important / Don't know
Setting priorities for age research: 80% (40) / 20% (10) / 0% (0) / 0% (0)
Sitting on selection panels to award specific contracts: 35% (17) / 51% (25) / 10% (5) / 4% (2)
Involvement in the design of methodology in particular projects: 42% (21) / 36% (18) / 22% (11) / 0% (0)
Undertaking ethical reviews: 39% (19) / 55% (27) / 6% (3) / 0% (0)
Being involved directly in data collection, say as interviewers: 38% (19) / 34% (17) / 26% (13) / 2% (1)
Peer reviewing research output: 38% (19) / 40% (20) / 18% (9) / 4% (2)
Using research to lobby and campaign: 82% (41) / 14% (7) / 4% (2) / 0% (0)
Table 3 sets out responses on the role of older users in research, covering a range of areas where older people might play a role in the research process. Respondents expressed generally positive attitudes toward most activities. Eighty percent thought it very important to involve older people in setting research priorities, and 82% indicated older people should use research as a basis for lobbying and advocacy. Less than 40% stated that it was very important for older people to be involved in selection panels (35%), undertaking ethical reviews (39%), or data collection (38%). This connects with the work of Staley and Elliott (2017), who analyzed researchers' reports of user involvement in 2748 applications to Research Ethics Committees in 2014 to assess how well their approaches to involvement informed the review process. They found that researchers rarely described user involvement in enough detail to help ethics committee members understand its role and value. As set out in Table 4, there is a level of professional resistance to user involvement (44%) as well as practical obstacles such as the time it takes (52%). A total of 68% were quite concerned (44%) or very concerned (24%) that user involvement risks tokenism, while 74% (47% and 27%, respectively) felt that older people lacked the necessary technical research skills. Some 71% felt that involvement might create unrealistic expectations about its impact on research output, with 48% stating that this was a quite significant obstacle and 23% a very significant one. Similarly, 46% saw quite significant obstacles and 18% very significant obstacles in recruiting older people and ensuring that participants are representative of the wider age community (64%). Responses on ethical constraints involving vulnerable groups were quite evenly balanced between quite significant (39%) and not significant (34%).
Table 5 sets out a range of important practices in the development of user involvement. It is notable that 73% thought that it was very important and 27% quite important that older people should be involved in setting research priorities and no one said it was not important. Figures in relation to older people and data collection showed that 30% felt it was very important and 41% felt it was quite important; 89% supported
Table 4 Barriers to user involvement

Variable: Very significant / Quite significant / Not significant / Don't know
Researcher's professional willingness to engage users: 44% (20) / 42% (19) / 9% (4) / 4% (2)
It risks tokenism: 24% (11) / 44% (20) / 25% (11) / 7% (3)
It adds little value to the quality of the research: 18% (8) / 31% (14) / 44% (20) / 7% (3)
The time it takes to organize and deliver properly: 52% (23) / 25% (11) / 21% (9) / 2% (1)
Lack of technical skills among older people: 27% (12) / 47% (21) / 27% (12) / 0% (0)
Fatigue of participants, especially in longer term work: 18% (8) / 32% (14) / 39% (17) / 11% (5)
Ethical constraints involving vulnerable groups: 23% (10) / 39% (17) / 34% (15) / 5% (2)
Individual older people may not be representative of wider interests: 18% (8) / 46% (20) / 36% (16) / 0% (0)
Users might have unrealistic expectations about what they are able to influence: 23% (10) / 48% (21) / 25% (11) / 5% (2)
The financial costs of proper engagement are too high: 21% (9) / 41% (18) / 30% (13) / 9% (4)
It is difficult to access older people for this type of work: 11% (5) / 46% (20) / 41% (18) / 2% (1)
Table 5 Priorities for user involvement

Variable: Very important / Quite important / Not important / Don't know
The involvement of older people in setting research priorities: 73% (32) / 27% (12) / 0% (0) / 0% (0)
Training older people to participate in commissioning and evaluating research projects: 50% (22) / 41% (18) / 9% (4) / 0% (0)
Evaluating research reports and related outputs: 30% (13) / 59% (26) / 7% (3) / 5% (2)
Developing older people's panels or citizens' juries: 57% (25) / 34% (15) / 7% (3) / 2% (1)
Pilot testing research instruments such as questionnaires: 59% (26) / 41% (18) / 0% (0) / 0% (0)
Involving age-based NGOs in research commissioning and use: 68% (30) / 14% (6) / 14% (6) / 5% (2)
Older people trained to collect data and analyze research findings: 30% (13) / 41% (18) / 30% (13) / 0% (0)
Translating findings for use in advocacy and political activity: 71% (31) / 27% (12) / 2% (1) / 0% (0)
Funders stipulating it as a condition of grant awards: 46% (20) / 36% (16) / 16% (7) / 2% (1)
evaluating research reports (30% and 59%, respectively), highlighting an important indicator of where user involvement is being placed in the research process. On whether funders should stipulate involvement and engagement as a condition of funding, 46% indicated that it was very important and 36% quite important. Respondents indicated that user involvement should focus on translating research for advocacy (71% stating it was very important), and 68% indicated it was very important to involve nongovernment organizations in research commissioning and use. To provide a deeper analysis and to illustrate core issues requiring closer examination, a total of 18 indicators were selected from the e-survey for more detailed analysis, in an attempt to understand subsets of attitudes (Murtagh 2014; O'Sullivan and Murtagh 2017). Four main groupings emerged from this analysis and provide a useful typology of attitudes and of ethical and practical considerations.

Typology 1 is not positive about older people setting research priorities, determining ethics, or being involved in the design of methodologies. These attitudes reflect a view that involvement adds little to the quality of the research, that older people lack the basic skills to be involved, and that individual older people cannot represent a wider constituency of interests. Emphasis was placed on the importance of professional judgment, expertise, and research quality, as the following responses show:

Researchers need to be able to use their professional judgment otherwise we stop looking at quality – the research needs to be professionally done not done to include older people – this is mixing up two quite different things. It is not to say they don't have a role, only that research needs to be developed with reference to good research not the amount to which different people feel they are involved.

It is important that involvement is realistic and has a point to it otherwise it will frustrate older people and the researchers.
There needs to be time and money given as part of the grant to do this properly and funders often accept responsibility for neither!
Typology 2 sees some value in older people sitting on panels, and even a measure of peer review, but is negative about their role in data collection or ethical reviews. Issues raised concerned the time it takes to engage, the cost, and the danger of raising unrealistic expectations. These attitudes tend to be more characteristic of academic or public sector researchers:

Rather than automatically including users at all stages of the research process, it's important for both parties that user involvement is carefully considered, so that service users are involved productively. I have been at meetings where, due to the technical or specialized nature of the discussions, user reps have been unable to contribute and have not been able to follow the discussion. This is pointless – it makes users feel out of their depth and it makes researchers feel that there (is) no point having them there.
Typology 3 gives most weight to the importance of involving older people in the research process. For example, attitudes emphasize the competence of older people sitting on selection panels, participating in research design and conducting ethical reviews. Moreover, they reflect the view that older people should be involved in peer review and do not feel they lack the necessary skills or that their participation will
raise unrealistic expectations. This group tends to reflect the attitudes of respondents from the NGO sector and illustrates the desire to empower older people in defining research that matters to them:

Ethically and realistically research that involved the target group at all levels has more meaning, yet the barrier of "exhaustion" in this group is quite high in general topics . . . But if we find the topics that really impact on their lives we would find a new energy . . . particularly if the research methodologies were "age friendly" such as using the creative arts.
Typology 4 gives most weight to variables that stress the application and use of research output, especially for lobbying and advocacy. For example, attitudes emphasize that older people can undertake ethical reviews, and respondents in this group are the most likely to encourage older people's participation in data collection. Moreover, respondents scoring high on this factor are also the most likely to feel users should be involved in using research to lobby and campaign (Fig. 1):

Involving "users" as participants in the whole research process is mutually beneficial. It benefits the research but it also benefits the participants, no matter what angle you are coming from. For example a focus group will often provide a learning platform for participants to learn from each other through the sharing of opinions leading to a sharing of the background knowledge. It can have similar effect to a peer support group. Engaging older people in the entire research process will also help ensure breadth and depth to the work.
In sum, this section has shown the connections between professional, ethical, and practical aspects of user involvement. It is positive to note the support among both researchers and practitioners for older people's involvement in the research process, but it is also important to note the lean toward older people as consumers or subjects rather than as co-producers with the research community. The level and type of involvement differed according to research discipline, training, research focus, and understanding of user involvement. This again highlights the links to codes of practice and training.
Fig. 1 Typology of researchers and attitudes to user involvement
Implications for Practice and Ethics

Behaving ethically when conducting research requires the researcher to plan a route through a moral maze. To engage in ethical research one constantly has to make choices within a range of options. (Iphofen 2011, p. 7)
The previous section highlighted that the value base of the research and the researcher's professional background play a key part in the pathway through ethical decisions. The area of user involvement adds a further layer. As Iphofen astutely observes, ethical decision making is generally a dynamic process; the greater the intensity of users' involvement, the greater the ethical considerations and decisions required. The case for user involvement very much depends upon the focus of the research and its aims, design, and methods. Wright et al. (2007), on the case for user involvement in research and the research priorities of cancer patients, state:

Research governance and ethics guidelines can inadvertently promote inadequate 'tick box' forms of engagement as academics and clinicians engage with members of the public with little commitment to the user involvement agenda. This can result in dissatisfactory experiences of user involvement for both users and academics alike.
The Macmillan Listening Study of cancer patients and health professionals, on priorities for the research agenda, demonstrated the need to involve users to a greater degree in such considerations. In this particular study, the views of users differed from those of clinicians on research priorities, helpfully illustrating the value of user involvement and why the exclusion of users from research (or inappropriate limitations on their role) can result in important areas being neglected. In this final section, drawing on learning from CARDI's work and commissioned research (Murtagh 2014, 2015), I will focus on the practical aspects of how to "do" user involvement in an ethical manner. The diagram below shows that it is helpful for researchers to think about the involvement of older people at each stage of the research process rather than in a segmented manner (Fig. 2). O'Sullivan and Murtagh (2017) set out five broad phases with a suggested set of activities to improve user involvement; clearly these would need to be applied appropriately to individual research projects and will not suit all circumstances, research designs, or aims. They are suggested here for illustrative purposes.
Stage 1: Prior to Research
Building connections is an important stage in the research. The type and number of older users and when and how they will be involved should be considered at this first stage. For example, this might involve holding pre-meetings or inviting users to become a member of a steering or advisory group. There should be clear criteria to ensure an appropriate mix of users in each project. Older users, as with any group,
Fig. 2 Steps for involving older people
have distinctive needs that must be considered when facilitating their involvement, e.g., timing, accessibility, cost, and language. Some topics might be sensitive or raise emotional issues for older participants. It is important to make sure ethics and risks have been considered and that older people properly understand the implications of involvement and can provide informed consent. Ensure that, if required, participants affected by the research can be referred to the appropriate professional care and support.
Stage 2: Setting the Priorities
Most research projects are designed within a framework such as the priorities, eligibility criteria, or requirements set by funders. However, even within such constraints, older users have an important role in the formative stages of the research design. Appropriate and meaningful involvement of older people in setting the priorities for research can strengthen its relevance, usability, and impact. Older people's input can be used in a number of ways: setting or evaluating research questions, and assessing the assumptions underpinning the work or the rationale for the project. Care should be taken to ensure that there is time to engage older people properly in group discussions or meetings to avoid tokenism.
Stage 3: Methodology and Implementation
A key phase is the involvement of older people in the design and implementation of the research. It is important to make explicit the rationale for the research design as well as the detail of what is proposed. Older users can inform the design or direction of the research and ensure that older people's involvement is properly planned and resourced and is appropriate across the various stages of the research process. They can also assist in the interpretation of the data and provide a practical perspective that can strengthen the analysis of the research team. For example, older people can become involved in more direct ways: undertaking literature reviews or policy reviews from an age perspective, conducting interviews, and holding or participating in group discussions. Additionally, older people can play a valuable role in peer-reviewing research reports and validating their content.
Stage 4: Disseminating
One of the key advantages of user involvement is the ability to inform practice, services, interventions, and policy from lived experience. Older people can also potentially be involved directly in dissemination activities, including presenting results at conferences and seminars, attending policy-related meetings, and preparing peer-reviewed papers. Older users can become involved in the preparation of briefing papers, reports, or newsletters linked to the project. Again, however, it is important that practical actions relating to dissemination are meaningful and appropriate. There is also an opportunity for researchers to work with older people and NGOs to help interpret the data and so achieve a stronger policy effect.
Stage 5: Evaluating Performance
The final stage is evaluation. One can consider asking older users to reflect on their experiences. The key task is to identify what worked, what did not, and why. How these practices might be built upon or changed in future research projects would be a key outcome of this phase. The critical issue is to identify whether (and how) user involvement makes a difference to the participants, researchers, funders, and the broader policy and practitioner community. In conclusion, I am of the view that the quality and impact of research can be strengthened by the appropriate involvement of older people in aging research. The ambition of user involvement is to help make research more relevant to older people, policy makers, and those designing services. However, user involvement can also pose challenges for researchers, as they must consider whether it is appropriate for a particular research project and whether, if used, it allows older people to make a meaningful contribution that adds to the research.
36 Older Citizens' Involvement in Ageing Research
User Involvement Values, Ethics, and Practice
Have the ethical implications of the research been clearly explained to you; are there risks or potential harm to older people; can the design and delivery of the project be improved to address any ethical concerns? (Murtagh 2015, p. 10)
The avoidance of risk and harm is often seen as the primary purpose of ethical frameworks. However, ethical frameworks are also about encouraging authentic, robust, high-quality, and trustworthy research in performance, interpretation, application, and process. User involvement challenges the traditional imperative to "avoid risk to research subjects": because the process is based on co-design and wider ownership, it challenges, to a degree, traditional concepts of both risk and benefit. In such research, older people, their organizations, and service providers are essential to the research and require a level of trust, accountability, mutual respect, and fairness beyond the research subject model. The values of user involvement are important in guiding practical, ethical, and methodological considerations. In 2015, INVOLVE published a review of the literature that identified six values underlying good practice in public involvement in research: respect, support, transparency, responsiveness, fairness of opportunity, and accountability. O'Sullivan and Murtagh (2017) set out five similar values that again emphasized the process for effective and ethical user involvement.
• Quality: Involvement of older people and representative organizations must add to the overall reliability and validity of the research process.
• Accessibility: Ensuring that the involvement of older people permits them to play a serious role in the research process, avoiding tokenism.
• Application: Ensuring that the type of user involvement maximizes the opportunities for policy impact and for strengthening practice.
• Commitment: Ensuring that time, resources, and training are put in place to allow for effective involvement in the design, delivery, and dissemination of the research.
• Transparency: Setting out the relationship between the researcher and older participants at the outset and being clear that involvement does not mean that things will always change.
Ethics, values, practice, professional codes, and methodology are thus not separate concerns but parts of one research process.
Conclusion
The aim of user involvement is to help improve quality, focus, relevance, and overall impact. However, user involvement can also pose challenges for researchers at ethical, practical, and professional levels. While high ethical standards are essential in all research, it would be a mistake to think that we all share the same views on the
appropriate ethical frameworks, professional codes, and deliberations. Nonetheless, there seems to be agreement that user involvement must be meaningful and beneficial to ensure value and impact for all involved. Careful consideration must be given to whether or not user involvement enables older people to make a meaningful contribution that adds to the research and the desired outcome. When considering the involvement of users, we must be clear about roles and responsibilities for user involvement in the research; understand and clarify users' expectations about the project (Blackburn et al. 2010); and recognize power imbalances between users and researchers. Power forms a part of any research project involving users and raises important questions: who makes what decisions, under what circumstances, and about what aspects of the research? The drive of the user involvement agenda emphasizes that there are various forms of evidence and knowledge (academic, political, policy, advocate, and user), sometimes sitting peacefully alongside one another and sometimes in conflict (O'Sullivan 2018; Bannister and Hardill 2013). Clear terms of reference, a set of key terms, areas of responsibility, and expertise, if agreed by all those involved, do help, but they do not override issues of status and personality. As Foucault noted, power is not simply a top-down issue but a shifting one, varying with context and timing. Furthermore, one must not neglect traditional ethical questions, limited as they are, such as:
• What risks have been identified for user involvement and how have these been mitigated?
• Are the research methods and process appropriate for user involvement?
• Is the research project information clearly presented in a way that users understand their role and the research project, e.g., are the issues of ethics and consent, and how data will be stored and used, clearly understood?
• Have the issues of data storage and confidentiality been addressed, especially where sensitive or personal information is involved?
User involvement of older people challenges the traditional view of ethics and its focus on "avoiding risk to research subjects." A user-centered design process is more aligned to co-design and wider research ownership, and therefore involves sharing the benefits and the "risks." We cannot assume that users are passive, that they do not know how to participate in research, or that they lack knowledge of the topic that could inform its direction. The balance of "harm" and "benefit" must be carefully considered, but it seems time to evolve beyond the subject approach to both research and ethics. The quality and impact of research can be strengthened by the appropriate involvement of older people in aging research and by the evolution of ethical considerations in line with the desire for increased and wider societal involvement in aging research. The question is no longer whether older users should be involved in research, but to what extent.
References
Bannister J, Hardill I (2013) Knowledge mobilisation and the social sciences: dancing with new partners in an age of austerity. Contemp Soc Sci 8(3):167–175
Barnes M, Taylor S (2009) Summary guide of good practice for involving older people in research. ERA-AGE, London
Blackburn H, Hanley B, Staley K (2010) Turning the pyramid upside down: examples of public involvement in social care research. INVOLVE, Eastleigh
CARDI (2009) CARDI Grant Programme – terms of reference. Centre for Ageing Research & Development in Ireland, Dublin
Coggan J, Viens AM (2017) Public health ethics in practice: an overview of public health ethics for the UK Public Health Skills and Knowledge Framework (PHSKF). Public Health England, London
Farmer M, Macleod F (2011) Involving disabled people in social research: guidance by the Office for Disability Issues. ODI, London
FUTURAGE (2011) A road map for European ageing research. FUTURAGE, Sheffield
INVOLVE (2012) Briefing notes for researchers: public involvement in NHS, public health and social care research. INVOLVE, Eastleigh
INVOLVE (2015) Public involvement in research: values and principles framework. INVOLVE, Eastleigh
INVOLVE (n.d.) Briefing note three: why involve members of the public in research? INVOLVE, Eastleigh
Iphofen R (2011) Ethical decision making in social research – a practical guide. Palgrave Macmillan, Basingstoke
King C, Bennett A (2016) The challenges of service user involvement: personal reflections. Physiotherapy 102(1):e236–e237
McMurdo MET, Roberts H, Parker S, Wyatt W, May H, Goodman C, Jackson S, Gladman J, O'Mahony S, Ali K, Dickinson E, Edison P (2011) Improving recruitment of older people to research through good practice. Age Ageing 40(6):659–665
Murtagh B (2014) Building stronger user engagement in age research – user involvement and practice guidance. CARDI, Belfast
Murtagh B (2015) Getting involved in ageing research: a guide for the community and voluntary sector. CARDI, Belfast
O'Sullivan R (2018) Research partnerships – embracing user involvement: practical considerations and reflections. Qual Ageing Older Adults 19(4):220–231
O'Sullivan R, Murtagh B (2017) Older citizens' engagement in age research: conceptual and practical considerations. Innov Aging 1(1):211
Pierce M, Timonen V (2010) A discussion paper on theories of ageing and approaches to welfare in Ireland, north and south. CARDI, Dublin
Schröder-Bäck P, Duncan P, Sherlaw W, Brall C, Czabanowska K (2014) Teaching seven principles for public health ethics: towards a curriculum for a short course on ethics in public health programmes. BMC Med Ethics 15:73
Shippee N et al (2013) Patient and service user engagement in research: a systematic review and synthesized framework. Health Expect 18:1151–1166
Social Research Association (2003) Ethical guidelines. Social Research Association, London
Staley K, Elliott J (2017) Public involvement could usefully inform ethical review, but rarely does: what are the implications? Res Involv Engagem 3:30. https://doi.org/10.1186/s40900-017-0080-0
Walker A (2007) Why involve older people in research? Age Ageing 36:481–483
Wright DNM, Corner JL, Hopkinson JB, Foster CL (2007) The case for user involvement in research: the research priorities of cancer patients. Breast Cancer Res 9(Suppl 2):S3
Disability Research Ethics A Bedrock of Disability Rights
37
Anne Good
Contents
Disability Rights and Disability Research
Introduction
The Disability Rights Movement
The UN and Ethical Research in Practice
From the Disability Rights Movement to the UNCRPD
The UNCRPD and the Emerging Global Disability Research Agenda
The UNCRPD Human Rights Framework
UNCRPD: Principles for Ethical Research
Conclusions
Way Forward
References
Abstract
A. Good (*)
Chairperson, Disability Research Hub, Disability Federation of Ireland (DFI), Dublin, Ireland
e-mail: [email protected]
© Springer Nature Switzerland AG 2020
R. Iphofen (ed.), Handbook of Research Ethics and Scientific Integrity, https://doi.org/10.1007/978-3-030-16759-2_30
The UN Convention on the Rights of Persons with Disabilities (UNCRPD 2006) has the potential to transform ethical practice in disability research across the globe. This chapter outlines the current knowledge base in the field of ethics in disability research, especially with regard to the UN Convention, its contents, and their implementation. The chapter concludes with some proposals on ways to ensure that the Convention's potential is fully realized. Research which investigates the lives of disabled people is not always conducted in accordance with normal ethical principles, and the Convention explicitly addresses this problem. Disability research often presents particular challenges to the full operationalization, in this field, of the core ethical research principles, such as informed consent, confidentiality, privacy, respect, and equality. These requirements are not always met since researchers, along with research funders and
managers, may not be well trained or informed in the field. Indeed, some practitioners overtly avoid ethical requirements because of their resource implications. When these principles are inadequately addressed, the resulting research is of reduced quality and can damage trust in the research process among disabled people. There is a danger that poor-quality research conducted without careful attention to ethical practice leads to a poor knowledge base underpinning changes in policy and service provision for disabled people. This chapter discusses the impact of the UNCRPD on ethics in disability research, both in terms of its potential and the progress to date. Strategies for increasing the scope and pace of change are also proposed. The author has several decades of active disability research experience focusing on global, European, and national levels. This work has included contributing to the development of guidelines for good ethical practice in disability research, with the WHO ICF FDRG and the UN Washington City Group on Disability Statistics during the period 2003–2015, as well as for the European Union and the Irish government (2002–2007). Lessons have been drawn from this work, and general principles are accompanied by proposals for future development. Keywords
United Nations Convention on the Rights of Persons with Disabilities (UNCRPD) · Human rights · Disability · Medical model · Social model · Vulnerability · Privacy · Respect · Consent · Research literacy · Research ethics committees/boards
UNCRPD Definition of Disability
“Persons with disabilities include those who have long term physical, mental, intellectual or sensory impairments which, in interaction with various barriers, may hinder their full and effective participation in society on an equal basis with others.” Article 1
Disability Rights and Disability Research

Introduction
Both the global disability rights movement and its offspring, the UN Convention on the Rights of Persons with Disabilities (UNCRPD), have the ultimate goal of changing society and promoting positive social attitudes, policy, and practice in order to improve the lives of disabled people across the world. Article 1 of the Convention states that the purpose of the present Convention is to promote, protect,
and ensure the full and equal enjoyment of all human rights and fundamental freedoms by all persons with disabilities and to promote respect for their inherent dignity. Achieving this goal requires the creation of a knowledge base of high quality, based on ethical research and data collection, in order to inform the improvement of law, policy, and practice and to underpin reliable outcomes measurement. The United Nations recently estimated that the more than one billion people worldwide who live with a disability could, and should, benefit from effective implementation of the Convention (Bickenbach 2011; Kostansjek et al. 2013). The Convention aims to improve the lives of those one billion disabled people through promoting positive social changes in all countries which ratify its provisions. Currently, most disabled people across the globe are living lives of exclusion and disadvantage, and their human rights are violated or compromised. The change program implied by the UNCRPD is therefore enormous, including in the research field. Within its provisions, the Convention has an important research dimension, arising from the commitment to systematic monitoring and evaluation of the progress, or lack thereof, toward the vindication of the rights of disabled people which is being achieved through its implementation (Madans et al. 2011). Within this research aspect, the question of ethical practice is addressed. The problem of disability research of poor ethical quality has become more acute as the ratification and entry into force of the UNCRPD has spread across the world. To date, a total of 177 countries have ratified the Convention, as have a number of international bodies, including the European Union (https://fra.europa.eu/en/theme/people-disabilities/ratified-crpd, accessed 4 July 2019). Thus the UNCRPD is creating the most ambitious global program for change yet seen with regard to disabled people, their rights, and their lived experiences.
Its success in bringing positive change in those 177 countries is dependent on a range of factors, including the quality and reliability of the statistics, research, and data sets used for the monitoring process. Quality monitoring in turn relies on careful ethical practice in all research and data collection exercises which provide the basis for reporting on progress. Many initiatives to improve quality in disability research have been undertaken during the last three decades under the impact of the disability rights movement. However, it has been noted that while the understanding of disability as a concept has changed over recent decades through the influence of the disability rights movement, from an understanding based on the medical model to one based on the social model, research ethics mechanisms and structures for ethical scrutiny have not kept pace uniformly across the globe (Lemmy and Rule 2015). In many cases, this is an issue of lack of capacity, a need for better training and skills development at all levels and inadequate resourcing of this aspect of all research. The quality development initiatives have inspired the UNCRPD work and have been built on the scathing critique of pre-existing research by the early movement, including critiques of the inadequate attention paid to disability research ethics. These developments in turn were built on wider changes in social research and methodologies during those decades, including a greater emphasis on qualitative research and on the expanding field of emancipatory research design and methodologies. The latest
developments in this process can be best understood as a dimension of the changes resulting from the UN Convention on the Rights of Persons with Disabilities (2006). While the global knowledge base required in order to provide a solid basis for the UNCRPD’s work has not yet been created, significant progress has already been made, and more is expected under the further impact of the Convention and its reporting requirements.
The Disability Rights Movement
The disability rights movement first emerged in advanced capitalist countries in the 1960s (Hirst 2000; UPIAS 1976; Shakespeare 2013). During the ensuing decades, the movement developed into a force for global human rights and social justice, aiming to end dehumanizing treatment of disabled people (manifested by incarceration, eugenics, neglect, violence, and abuse) and to improve the lives of members of the most marginalized and disadvantaged sectors of society. This had implications for real-world disability research, both in expanding the scope of the empirical research undertaken and in developing a better understanding of the changes needed in research practice in order to ensure that quality and reliability were improved (Riddle 2016). Perhaps the most significant changes were created by a long-running and vigorous debate regarding how disability as a concept should be defined and operationalized in research and data collection exercises (Leonardi et al. 2006). The early disability rights movement challenged the dominant understanding of disability, termed the medical model. In this model, disability is defined in terms of biomedical factors (impairments of body structures and functions) and is often based on diagnostic categories. It assumes identical or very similar and inevitable consequences for all people who have a similar medical diagnosis. This model neglects or underestimates the role of social circumstances in disabling, or conversely enabling, people with impairments (Oliver 2013; WHO ICF 2000). The rights-based movement convincingly argued that disablement and exclusion of people with impairments cannot be fully understood through the medical model. Instead it asserts that the working definition of disability in research and in practice must be social in perspective.
In other words the new model must recognize how society by its actions and its omissions worsens the position of disabled people beyond the level which could be attributed solely to the direct effects of people’s physical, sensory, or mental impairments (Leonardi et al. 2006). The processes of disablement are now more generally (although not universally) accepted as resulting from complex social and structural factors enforcing, creating, or failing to alleviate, unnecessary barriers facing disabled people (including attitudinal barriers, stigmatization, stereotypes, and discriminatory practices). These social factors prevent people with impairments from living independent lives of their choice, to the fullest extent possible, and from experiencing maximum possible social inclusion. The complex processes of disablement and their roots in society as a
whole must be understood and investigated if the lives of disabled people are to improve through effective changes in law, policy, and service provision. The rights movement has worked for decades to reshape national and international disability law and policy in line with these new understandings, which have consequently gained traction in national and international organizations. Change is advocated in two main areas: traditional segregated policy and newer mainstreaming policy. Traditional segregated policy refers to laws and policies based on the medical model which are primarily characterized by a diagnostic approach, along with the practices of keeping disabled people in segregated settings outside of mainstream society. Indeed, even today, the medicalized, stigmatized, and segregated treatment of people with disabilities continues to predominate in very many countries, especially the poorest and least developed as well as the most authoritarian. Perhaps more significantly, and in tandem with critiques of the traditional approach, the disability rights movement gradually forced a new approach by vigorously advocating overt inclusion of disabled people in all mainstream state and international activities, including mainstream research and data collection. When and where the medical model dominates, both research and data collection have tended to neglect the fact that a significant proportion of the overall population is disabled (on average about 18%), and therefore that any representative population sample must include disabled people and that findings and data must be disaggregated. This means that the design and implementation of general policy and practice across all areas of society and all state activities were, and in many cases still are, in urgent need of disability proofing in depth and in scope.
Likewise research on the lives and experiences of disabled people must be expanded beyond a solely segregated type of research and also moved into mainstream social, legal, and policy enquiry with appropriate disaggregation of findings. This has major implications for disability research ethics in practice. Prior to 2006, this rethinking of our understanding of disability was gradually being applied to the research and data collection exercises underpinning policy developments, especially in advanced capitalist societies, through political action by disabled activists and researchers. As new approaches to improving the lives of disabled people grew stronger, they began, of necessity, to impact upon both disability specific research and on general social and policy research. Small-scale changes in the overall disability research field of the 1980s and early 1990s were strengthened and expanded under the impact of the 1993 UN Standard Rules on Equalization of Opportunities for Persons with Disabilities (www.un.org). In part, this impact was caused by a growing demand for the aforementioned reliable evidence to underpin, monitor, and evaluate new national and international initiatives. In part it was also caused by the recognition that poor or unethical research in the past had contaminated the research field, for example, by reducing the likelihood that disabled people would give their consent to participate in research projects where such consent was properly sought. The absence of free and informed consent as a key component of disability research practice was and is, in turn, damaging both to research participants and to the research itself. If the authentic voices of disabled
people themselves are not heard, then research is partial and misleading. Therefore, expansion of evidence-based mainstream policy-making necessitates attention to research ethics. The major changes in disability research design of the 1990s can be summarized as broadening the scope of the substantive topics investigated, a growing acceptance of the social understanding of disability, and a new emphasis on emancipatory research and research ethics.
The UN and Ethical Research in Practice
One of the most essential changes to research practice, reinforced by these activities, was attention to the particularities of developing and strengthening disability research ethics, both in scope and in application. New, more rigorous ethical research practice, which paid overt attention to both ethical issues in disability research and disability issues in all social and policy research, began to be developed by the UN. Perhaps the most globally impactful of the developments which built on these changes occurred during the 1990s and early twenty-first century through two major United Nations initiatives (led by the WHO and the UN Statistics Division, respectively) which are still ongoing. This work laid the ground for the UNCRPD research dimension. Firstly, a program of work was undertaken by the World Health Organization in the mid-1990s, in cooperation with organizations representing disabled people, to produce the International Classification of Functioning, Disability, and Health (ICF 2001) with its biopsychosocial understanding of disability (Good and McAnaney 2003; Kostansjek 2011; Schneider et al. 2003). This large-scale effort by the WHO was a direct response to severe criticism, by experts such as Rachel Hirst of the UK (who acted as expert advisor to the reform process), of the previous WHO disability classification system (ICIDH) as being based solely on the medical model. The purpose of the ICIDH had been to provide a globally accepted framework for disability research and data collection similar to the ICD (International Classification of Diseases) in medical research. The ICIDH had been rejected by activists and academics from the disability rights movement as being a reflection of the segregated medical model system.
A process of rethinking and redesign of the ICIDH was undertaken which involved, and continues to involve, disabled people's organizations along with disabled experts and academics at all stages. This program of work led to the inclusion of environmental factors, alongside the effects of individual impairments, in processes of disablement/enablement (e.g., social factors such as attitudes, access, and the mainstreaming of disability proofing across education, transport, and employment) to create the ICF model. A short appendix on research ethics was also included in the first version of the ICF, approved by the WHO and its member organizations in 2001. The ethical dimension of this work then continued with the development of WHO-approved guidance on ethical practice in disability research, once again with input at all times from organizations of disabled people. The initial (very brief) ICF
37
Disability Research Ethics
661
appendix on ethics became the starting point for the work of an expert group set up in 2005 to advise the WHO on the development and rollout of the ICF. Entitled the WHO Functioning and Disability Reference Group (FDRG), this group in turn set up a subgroup to inform ethical practice when using the ICF. The FDRG has subsequently produced a valuable range of practical resources for implementing this new approach, which can be found on the WHO website.

Almost in parallel with the WHO development of the ICF, a program of work was undertaken by the UN Statistics Division through the Washington City Group on Disability Statistics, beginning in 2001 and continuing since (Brady and Good 2005; Madans et al. 2011). The WCG focused in particular on the census of population as the most reliable and relevant global data collection exercise which could be used to gather accurate data on disabled people across the globe; indeed, it is often the only such exercise in poorer countries. The design and methodology of the census were reformed through the WCG's advice to the UNSD on developing and operationalizing appropriate disability questions for inclusion in the census. This was intended to ensure that the disabled population, often previously unidentified or excluded, would in the future be included both in the census count of overall numbers and in the further collection of socioeconomic and demographic information about the population covered (O'Donovan and Good 2010). The identification of disabled people in the census of population, implemented in the 2006 census round, allowed for disaggregation of all further data by disability and, as a consequence, for comparisons of the position of disabled with non-disabled people. The example of educational data is given in the box below.

Education
By adding questions to the census which identify disabled people within the overall population, census data on educational participation and attainment can now be analyzed to compare the educational aspects of disabled people's lives with those of non-disabled people, thereby revealing one key area of gross inequality.
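Computationally, the disaggregation enabled by a census disability identifier is a simple grouping operation. The sketch below is purely illustrative: the records and field names are invented, not drawn from any real census file.

```python
from collections import defaultdict

# Hypothetical census microdata rows (invented for illustration only).
records = [
    {"disabled": True,  "completed_secondary": False},
    {"disabled": True,  "completed_secondary": True},
    {"disabled": False, "completed_secondary": True},
    {"disabled": False, "completed_secondary": True},
    {"disabled": False, "completed_secondary": False},
]

def attainment_by_disability(rows):
    """Share of each group completing secondary education,
    disaggregated by disability status."""
    totals = defaultdict(int)
    completed = defaultdict(int)
    for row in rows:
        key = "disabled" if row["disabled"] else "non-disabled"
        totals[key] += 1
        completed[key] += int(row["completed_secondary"])
    return {group: completed[group] / totals[group] for group in totals}

rates = attainment_by_disability(records)
print(rates)  # the gap between the two groups' rates is one measure of inequality
```

The same pattern extends to any other census variable (employment, housing, income band): once the disability flag exists, every tabulation can be split by it.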
From the Disability Rights Movement to the UNCRPD

With the passing of the UN Convention on the Rights of Persons with Disabilities (UNCRPD) in 2006, a major goal of the disability rights movement was achieved. The Convention has created an even stronger impetus for expansion and changes in disability research design and data collection, including improved ethical practice. The Convention sets out how human rights are to be achieved for disabled people, just as other UN Human Rights Conventions set out pathways to achieve actualization of human rights for other neglected sections of society, such as women and children. It has been recognized that attention must be paid to the specific strategies
662
A. Good
needed to vindicate the rights of all people, not simply those of the non-disabled majority. In addition, there is an evolving recognition that these sectors are not discrete groups but form intersecting sections of the human family. This means that they overlap in complex ways, and intersectionality must form part of the analysis. (For example, women and disabled people are not separate groups: the category “women” includes disabled women, who are often doubly discriminated against, just as the category “disabled people” includes both genders and thus gender differences. Intersectionality must inform ethical practice.)

Under the terms of the UNCRPD, the United Nations has been assigned an implementation and monitoring role with regard to progress toward the Convention goals in the 177 countries which have completed ratification. This complex process has enormous implications for disability research across the globe, whether the research is investigating law, policy, universal design, service provision, attitudinal change, or demographics. The UNCRPD monitoring and evaluation dimension is set out in Articles 31 to 39 of the Convention. These articles aim to ensure that the Convention results in dynamic programs of change in the countries which have signed and ratified it. It requires states to systematically monitor and demonstrate progress toward full human rights for people with disabilities, through regular reporting to the UN, backed by the provision of quality evidence from research and from data collection at all levels. Furthermore, it sets out a framework for how ethical practice, which must be a key component of all research involving human subjects, should be effectively applied to disability research. This framework is provided both within the general provisions of the Convention and in further detail within Article 31.
The UNCRPD monitoring process is proceeding mainly on the basis of inserting the Washington Group’s questions into censuses and national surveys.
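The Washington Group short set asks about degree of difficulty in six functional domains, with four graded response categories; under the standard cutoff, a respondent is counted as disabled if they report at least “a lot of difficulty” in any one domain. A minimal sketch of that screening rule follows (the dictionary keys are illustrative labels, not the official survey variable names):

```python
# Washington Group short-set screening rule (a sketch, not official code).
# Responses use the usual graded coding:
#   1 = no difficulty, 2 = some difficulty,
#   3 = a lot of difficulty, 4 = cannot do at all
WG_DOMAINS = ("seeing", "hearing", "walking", "cognition",
              "self_care", "communication")

def counts_as_disabled(responses, threshold=3):
    """Apply the standard cutoff: at least 'a lot of difficulty'
    (code 3) in at least one of the six domains."""
    return any(responses.get(domain, 1) >= threshold for domain in WG_DOMAINS)

respondent = {"seeing": 2, "hearing": 1, "walking": 3,
              "cognition": 1, "self_care": 1, "communication": 1}
print(counts_as_disabled(respondent))  # True: 'a lot of difficulty' walking
```

Because the cutoff is a parameter, analysts can also report results under stricter or looser definitions (e.g., `threshold=4` for “cannot do at all” only), which is how comparable disability statistics are produced across countries.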
The UNCRPD and the Emerging Global Disability Research Agenda

The Convention is beginning to shape a complex and long-term global research agenda with regard to disability which has implications for research practice at all levels. One of the important concerns of the Convention in relation to this global research agenda is the field of research ethics. There are three main sets of ethical implications for disability research emerging from the Convention:

• The overall human rights framework
• The specific research principles listed as requiring particular attention in disability research, namely respect, consent, and privacy
• The challenges posed by the practical operationalization of research ethics, in areas such as the training of research staff at all levels, which require appropriate resourcing from funding bodies (Iphofen 2009)
These three sets of implications call for quality improvement efforts in research and in data collection which need to be incorporated into practice as countries expand their efforts to gather evidence for the UN monitoring exercises. These changes in themselves need to be systematically monitored. The Convention recognizes that much work remains to be completed in order to ensure that ethical requirements become a lived reality for disabled people, as well as key knowledge and skills for non-disabled people who are also active participants in the field. Beyond research participants, these other disability research actors include researchers, data analysts, trainers of researchers, research funders and managers, disseminators of research findings, and users of evidence created by research, along with governments and international bodies involved in this research field.

The impact of the UNCRPD has begun to expedite a crucial and core debate with regard to research ethics in the field of disability research. This debate must establish how the UNCRPD can and will help to improve ethical practice in disability research, thereby ensuring not only that the Convention principles apply within the world of research but also that, as a consequence, progress toward the Convention goals can be reported and evaluated on the basis of reliable, quality evidence. The UNCRPD website listed at the end of this chapter includes valuable reports, training materials, and guidance for countries developing their monitoring data sets.

The global research agenda established by the requirements of the UNCRPD covers all policy and legal fields, from education to health and from transport to universal design. The global knowledge base required to ensure that transformative change can be planned and its achievement evaluated has been steadily expanding, but it remains weak. Firstly, baseline data from which to measure change, although steadily expanding, remains incomplete.
Secondly, the quantitative and qualitative research and data collection needed to evaluate the impact of the Convention are also patchy in many countries. This means that effective attention to ethical requirements is of enormous importance as these lacunae are addressed. The Convention sets out a broad human rights framework for this work, and it also gives specific guidance on the operationalization of some key ethical principles which have often not been honored in past disability research.
The UNCRPD Human Rights Framework

Promoting the human rights and equality of disabled people is the core objective of the UN Convention in all its aspects, including research. As already noted, this does not imply that disabled people have special rights beyond those pertaining to all human beings by virtue of our common humanity. Instead, the Convention is concerned with the fact that there are specific challenges in ensuring that disabled people's rights are respected and honored in the research process. History tells us that this has not always been the case, with unethical research practice ranging from the extremes of eugenics and the systematic abuse of disabled research subjects to the less dramatic, but nonetheless important, mundane experiences of bad research
practice which many disabled people report. These often take the form of a failure to obtain informed consent or of poor attention to the well-being of disabled research participants at all stages of the research undertaking, including the dissemination stage (Kitchen 2000).

The Preamble to the Convention emphasizes a core set of important points with regard to this goal of equality and human rights for disabled people. Firstly, the Convention reminds us that disabled people are subjects in their own lives and must not be treated as mere objects of the actions of others, including researchers. This key recognition underlies major components of the Convention's philosophy, for example, the right of disabled people to make choices about their lives rather than having the circumstances of their lives determined by others, such as state agencies, charities, or families. Choice implies that disabled people must be offered real, and not just nominal, choices and that they must be provided with the requisite practical supports to make their own choices as individuals. This has major implications for research design and implementation, which must ensure that disabled people's choices about whether or not to participate in research are made freely and are facilitated with any required resourcing. In the matter of information on which to base choices, for example, this might take the form of sign language interpretation for deaf participants or plain English information for people with intellectual disabilities. In addition, the research instruments used must investigate not solely participants' factual situation but also, in the areas under investigation, the availability or otherwise of facilitators of decision-making around choice, as well as the mitigation of barriers to choice. These concepts of barriers and facilitators are unpacked in detail in the ICF, in its section on environmental factors.
Both barriers and facilitators are to be found in the realm of aids, supports, and appliances but also, perhaps most importantly, in the attitudes and resulting behavior of all non-disabled people involved in each research exercise.

Another important core human rights principle of the Convention is its emphasis on full participation by disabled people in all aspects of society. Once again this has implications for research: for how research questions are selected, how research questions and instruments are designed, and how research is conducted. The impact of the disability rights movement on the disability research field has led to the development of a range of approaches to achieving this goal, from emancipatory to participatory research practice and various types of consultation exercises. These vary according to the level of guaranteed power, influence, and involvement accorded to disabled people within the research, from full involvement at all stages to variations in the strategies of involvement implemented at key stages of the process. These include the choice of research topics, scrutiny by research advisory committees and ethical review committees, and advice from stakeholders' groups and expert advisors. These practitioners, advisors, and scrutineers must include disabled researchers as well as disabled people who hold positions as experts by experience.
These principles of honoring the human rights and equality of disabled people in all areas of life, including research, require particular sensitivities, strategies, and understandings in order to be operationalized (Mont 2017).
UNCRPD: Principles for Ethical Research

Within the overall human rights framework, the Convention sets out specific challenges which must be addressed.
Article 31

Article 31 of the Convention highlights some particularly problematic areas in applying ethical practice to disability research and data collection. These relate to the operationalization of the interconnected principles of respect, consent, and privacy. These principles are core aspects of all ethical research involving human subjects, but they have often been neglected in the treatment of research subjects who are disabled. It could be argued that the principle of respect is the primary principle from which the other two, privacy and consent, flow. It will therefore be considered first.

Respect

The principle of respect for the dignity and rights of disabled people is set out in some detail in UNCRPD Articles 3, 17, 22, and 31. Although only Article 31 focuses specifically on research and data collection, the general matters set out in the other three Articles have implications for ethics in disability research. These articles provide a backdrop to the Article 31 discussion of data collection and research as part of the UNCRPD process for monitoring and evaluating where progress is being made and, by contrast, where it is not. The following aspects of respect are covered in Article 3: respect for disabled people's inherent dignity, individual autonomy, freedom to make choices, and independence. Respect for difference and acceptance of disabled people as part of human diversity and humanity are also discussed. Article 3 also contains provisions relating to respect for the evolving capacities of children with disabilities and for their right to preserve their identities. In addition to Article 3, Article 17 states that every disabled person has a right to respect for his or her physical and mental integrity on an equal basis with others.
Article 22 addresses respect for privacy, including protection of the personal, health, and rehabilitation information of disabled people on an equal basis with that of non-disabled people. These are all areas where disabled people's rights have often been transgressed during research. These articles, along with the Convention's overall human rights framework, underpin Article 31. Article 31 states that ethical research practice requires all research participants to be treated with respect at all times. It acknowledges that
standard ethical guidelines for research cover such issues adequately for most, but crucially, not for all disability research. Therefore Article 31 advocates that particular attention be paid to the areas in which standard guidelines are not sufficient. Article 31 reiterates that information collected for UNCRPD purposes must comply with all legally established safeguards, including legislation on data protection, to ensure confidentiality and respect for the privacy of disabled persons. In addition, there are some specific complexities which must be remembered in disability research. (Malta University, for example, has provided its researchers with a supplementary guideline on applying general research ethics guidelines to disability research.) To ensure that the principle of respect is fully honored, all researchers in this field must be trained in disability awareness, and the research methods being considered at the design stage must be comprehensively and effectively disability-proofed. In particular, strategies for ensuring respectful research practice (including respect for privacy and for seeking consent) require careful planning in the following situations:

• In care settings for disabled people, where privacy, for example, may be difficult to ensure, whether those settings are public or private, full time, or intermittent (such as respite care).
• In situations where disabled people may require intimate personal care and there is a danger that boundaries may not be adequately respected, leading to abusive treatment and attitudes. There is an urgent need to empower disabled people to recognize and report such abuse, including in research projects, without fear of negative consequences.
• Where advocates, interpreters, or proxies may need to be used to facilitate communication between researchers and participants. As a general rule, the more direct the communication achieved between researcher and respondent, the better. Where intermediaries must be used, the decision must be carefully considered and managed, and they too will need training and monitoring.
• In recent years the theme of direct communication with disabled children has received special attention, as have the challenges of minimizing the use of proxies as respondents when researching disabled people who are cognitively or intellectually impaired. Since researchers' attitudes toward disabled participants began to change in the 1990s, it has proven more possible than previously believed to reduce the use of proxies, for example through newer data-collection techniques. In some cases new technologies, specialized apps, and social media have provided useful solutions.

It is the responsibility of researchers and others involved in the research process to ensure that research design takes account of these specific ethical challenges and of the range of strategies which have been developed to meet them. This implies that special training must be undertaken by all those involved in the research team, including those collecting data, those inputting and storing data, as well as the
research managers and funders. The collection and dissemination of practical examples are very valuable in this context and need to be expanded. In short, respect in disability research includes respect for disabled people's dignity and for ensuring anonymity, privacy, and confidentiality at all stages of the research, including the final stages of disseminating results and the subsequent storage or disposal of data. The UNCRPD gives separate consideration to the complex matters of seeking informed consent and of ensuring privacy for disabled participants, as discussed below.
Consent (Informed and Free)

All research with human participants must contain safeguards and provisions to ensure that consent is effectively and ethically sought. This means consent must be freely given on the basis of full information and discussion of the possible consequences of the decisions taken. Firstly, researchers need to fully comprehend and comply with all relevant legislation, including laws relating to data protection, freedom of information, child protection, and the protection of vulnerable adults. In addition, any staff involved in disability research will need to have obtained police vetting. Secondly, particular strategies may be needed to ensure that the consent obtained from disabled participants for any research exercise is both informed and voluntary. Areas to be considered include the provision of information in accessible formats, awareness of barriers to privacy and risks to free choice in certain settings, and a deep understanding of the concept of capacity to give or withhold consent in a research context. The issue of consent arises more frequently within the current disability research context, where, unlike in the past, the thrust is toward research which uses proxies as seldom as possible. If the goal is to include disabled people (both older children and adults) so that they speak for themselves rather than being spoken for by others, then particular attention may need to be paid to providing such facilitating resources as:

• Appropriate, accessible, and detailed information
• Varied and appropriate methods of communication
• Employment of advocates and interpreters where needed
• Continued consent negotiation throughout the research process
These requirements will have implications for both timeframes and resources, and so they must be incorporated into the research design and planning from the start. A further complexity arises when the research population comprises, in whole or in part, people living in residential institutions or other situations in which potential respondents/participants may feel pressured to give or withhold consent. In such cases there is a greater than usual onus on the researcher to ensure that no undue pressure is placed on individuals either to secure or, in contrast, to prevent their participation and that there are no negative consequences for those who refuse. Only then can consent be deemed genuinely voluntary.
This applies as much to overprotectionism, which can result in excluding disabled participants who may in fact wish to participate, as to consent processes which do not take into account the need to provide full information in formats accessible to people with varied impairments. Including participants only on the basis of informed and voluntary consent can mean more than a single act of giving consent. It may mean an ongoing negotiated process through the various stages of the research project. This process includes the provision of comprehensive and accessible information on the research before requesting participation; discussions with potential participants and/or parents, guardians, and advocates; the signing of consent forms and letters in various formats; reviewing consent issues as they arise during the research; and reporting on ethical practice as implemented.

At the core of the consent issue is the concept of capacity: the decision as to whether a person has, has not, or has to a diminished extent the ability to understand both what they are being asked to agree to and the possible implications of such agreement. Determining capacity may be especially complex in some disability research. This is likely to be the case, for example, where the research focus is on disabled children, people with intellectual or cognitive impairments, people experiencing mental distress, or people using some forms of medication which could give rise to doubts about their capacity to comprehend what is being asked of them. There is then a further and related question as to who can and/or should make a determination in the matter of capacity. Therefore, researchers need to:

• Develop their expertise and knowledge in this area, including legal knowledge
• Review relevant examples of good practice
• Consult with other experts as appropriate
• Recognize that these experts may include peers and family members of the disabled people concerned as well as professional staff
• Be open to diverse forms of negotiation and communication
• Provide advocates and facilitators where appropriate and needed
• Consider the appropriateness of some form of supported decision-making in relation to consent
• Use the ethical review process to gain informed input and advice
All these stages in designing and implementing the consent procedures must be documented and reviewed.
Privacy

The UNCRPD also gives separate consideration to the complex matter of ensuring privacy for disabled participants. Based on the requirements set out in Article 22, researchers need to recognize that the situation of many disabled people is such that privacy, including privacy for participation in research, can be limited. Useful examples of these issues and possible solutions can be found in research into gender-based violence perpetrated on disabled women in the work of US
researcher Margaret Nosek (see McFarlane et al. 2001). The issues of communication, consent, and intimate caring have been discussed in the section on respect, including their link to respect for privacy.

There is no doubt that research is both desired and distrusted by many people with disabilities. The sources of this distrust are many. They include negative experiences of previous research, for example among disabled people with conditions from childhood, along with significant historic discrimination and dehumanization, such as eugenics policies and superstitious beliefs that disability is caused by sinfulness. On a smaller scale, there is often fear of careless, disrespectful treatment by researchers or field staff, along with concern that research may be used negatively to limit or withdraw existing supports and programs for inclusion, perhaps for reasons of economic policy such as austerity programs. There can also be a fear that stereotypes will be reinforced through certain types of unethical research. The best way to allay such fears, which are a barrier to high-quality research, is therefore to set high standards of ethical practice and to provide the resources and skilled personnel to make those standards a reality (WHO 2017, 2018).
Conclusions

This examination of the UNCRPD has shown that the Convention has significant implications for disability research and its ethical component. These implications begin with the role of the Convention in setting out, enforcing, and supporting a very significant global disability research agenda which, in its scale and substance, has never been attempted before. Furthermore, the Convention must contribute to ethical improvements in disability research in order to create a comprehensive and high-quality knowledge base for investigating progress toward achieving its main goal, i.e., the vindication of the human rights of all disabled people. Examination of the Convention text has set out both the broad human rights framework which must inform disability research and the principles of ethical research practice which must be operationalized carefully in the particularities of this field of inquiry.

The next stage of this process is becoming clear and needs to be the focus of future work. The primary question has become that of implementation: is the Convention providing the requisite impetus toward more ethical disability research and data collection? This question can only be answered if a program of monitoring and evaluation of knowledge creation is undertaken, which would require comprehensive reporting on these matters as part of the UN's work. A secondary question relates to design and methodology: what changes and modifications in disability research are making improvements in ethical practice possible, and what problems are proving intractable? These principles must be implemented in all initiatives promoted and supported by the UN itself, and so a third element is the provision of the necessary training, supports, and resourcing for ethical practice in the disability research field as a
whole. This work must include the important components of regular exchanges of information and experiences between countries and global regions to maximize the Convention’s impact.
Way Forward

This chapter has discussed disability research ethics and their role in developing a comprehensive and reliable knowledge base for the UNCRPD. The purpose of this knowledge base is to provide a bedrock on which to build a long-term process of real and positive changes in the lives of disabled people, thus realizing the aspirations of the UN Convention on the Rights of Persons with Disabilities. The next phase requires thought, planning, and action at the micro, meso, and macro levels of research practice. While the changes in ethical practice already underway are useful and important, major further work is required in two areas. Firstly, the understanding of ethical research practice must be developed beyond the current emphases on diversity and intersectionality, important though these are. Secondly, and at the same time, the benefits of these developments must be maintained and regression must be resisted, especially in times of political and social upheaval. Recognition of these challenges is also a theme in other chapters of this handbook.

In relation to transformative change, there is a key concept often cited in disability research ethics which needs reconceptualization, as is also argued elsewhere in this handbook. I suggest that this concept, vulnerability, needs to be recognized as essentialized and individualized. In other words, it purports to explain a higher than normal risk of harm as the inevitable product of the characteristics of a particular individual or category of people, rather than as something which is socially produced. For example, a disabled person is categorized as vulnerable because of their particular impairment. That is a very different perspective from one in which the perceived higher risk of harm is understood to be created by external social factors.
These factors include unequal power relationships; the characteristics of potential perpetrators of harm; failures of society to effectively ensure the safety of disabled people; and, therefore, the tolerance of situations in which possible harm-doers feel free to act because they believe they will enjoy impunity. I would argue, therefore, that an individualized understanding of vulnerability is insufficient. Instead we need to capture the causation of greater risk through a dynamic social understanding linked to inequality, injustice, and stereotyping. This in turn means that research ethics needs to be reconstructed on the basis of the human rights and equality model. The onus is on researchers and others involved in the research process not simply to categorize people within one or other of an agreed listing of “vulnerable” people but rather to develop the skills and understandings necessary to prevent harm, by developing what was usefully termed “full human research literacy” in the chapter on LGBTI+ research in this handbook.
In addition, with regard to the way forward, long-standing discussions from a variety of fields about how to make the transition from an individualized approach to a human rights/equality approach must be revisited, this time in the field of research ethics. This transition has often been achieved in other fields through strategies such as positive action, affirmative action, piloting, mainstreaming, and exploratory research to identify transitional measures which prove effective. The question, then, is how to get from the old approach to the new in this field by incorporating into the mainstream of research ethics new knowledge about how to rectify the flaws of the old approach. This new knowledge is derived from new insights in social theory and/or from the results of pioneering research, whether large or small scale, quantitative, qualitative, or both. These processes have important implications for the training of researchers, the production of guidance and other materials, and the functioning of research ethics committees (Government of Canada 2015; WHO Research Ethics Review Committee 2018), for example. An effective new approach needs to ensure that ethical training, guidance, and practice in research are based on the broadest possible understanding of what it is to be human and of what comprises the full range of lived experiences emerging from human diversity and complexity. This means moving away from the current approach, which consists of applying ethical principles in empirical research by focusing on a mythical majoritarian “normal” using standard research ethics practice and then, in some cases, making special additions. These additions are inserted as the concept of the normal is widened, but those outside the “normal” are termed vulnerable in the process, when what is actually in operation is a particular social construction of normality and of risk.
The LGBTI chapter in this handbook is strong in emphasizing this kind of fundamental change in research ethics rather than pragmatic additions which do not address the matter holistically. It has also been interesting to note research in other fields which takes vulnerable and oppressed situations, rather than so-called vulnerable people, as the focus, and other work using the term vulnerable and oppressed populations, for example, in health research (e.g., see Flaskerud and Winslow 1998). Disability research ethics needs similar transformational thinking and development of practice. As the UNCRPD monitoring process continues and expands, a major data set is emerging which should be of crucial benefit to the transformation agenda if thoroughly investigated. To realize that potential, however, it will be important, firstly, that the UN ensures all country reports include comprehensive and reflexive sections on their ethical processes and, secondly, that regular meta-analysis of these reports is both undertaken and reported.
References

Bickenbach J (2011) The world report on disability. Disabil Soc 26(5):655
Brady G, Good A (2005) Methodological preparations for an Irish post census national disability survey in 2006. Paper presented at Washington Group on Disability Statistics 5th meeting,
672
A. Good
Rio de Janeiro, September 2005. http://www.cdc.gov/nchs/about/otheract/citygroup/products/citygroup5/WG5_Good_Brady.doc. Accessed 4 July 2019
Dench S, Iphofen R, Huws U (2004) An EU code of ethics for socio-economic research. Report 412. Institute for Employment Studies, Brighton
Flaskerud JH, Winslow BJ (1998) Conceptualizing vulnerable populations health-related research. Nurs Res 47(2):69–78
Good A (2002) Ethical guidelines for disability research. National Disability Authority, Dublin
Good A, McAnaney D (2003) Methodological preparations for the first Irish national disability survey. NDA, Dublin
Government of Canada (2015) Research ethics boards: guidelines for researchers. www.pre.ethics.gc.ca
Hirst R (2000) The international disability rights movement. www.leeds.ac.uk
International Classification of Functioning, Disability and Health (2001) World Health Organization, Geneva. www.who.int
Iphofen R (2009) Ethical decision making in social research: a practical guide. Palgrave, Hampshire
Kitchin R (2000) The researched opinions on research: disabled people and disability research. Disabil Soc 15(1):25
Kostanjsek N (2011) The use of the international classification of functioning, disability and health (ICF) as a conceptual framework and common language for disability statistics and health. BMC Public Health 11(4):S3
Kostanjsek N, Good A, Madden RM, Ustun TB, Chatterji S, Matteus CD, Officer AM (2013) Counting disability: global and national estimation. Disabil Rehabil 35(13):1065–1069
Lemmy Nuwagaba E, Rule P (2015) Navigating the ethical maze in disability research: ethical contestations in an African context. Disabil Soc 30(2):255–269
Leonardi M, Bickenbach J, Ustun TB, Kostanjsek N, Chatterji S (2006) The definition of disability: what’s in a name? Lancet 368(9543):1219–1221
Leonardi M, Chatterji S, Ayuso-Mateos JL, Hollenweger J, Üstün TB, Kostanjsek N, Newton A, Björck-Åkesson E, Francescutti C, Alonso J, Matucci M, Samoilescu A, Good A, Cieza A, Svestkova O, Bullinger M, Marincek C, Burger H, Raggi A, Bickenbach J (2010) Integrating research into policy planning: MHADIE policy recommendations. Disabil Rehabil 32(Suppl 1):S139
Madans J, Loeb ME, Altman BA (2011) Measuring disability and monitoring the UN convention on the rights of persons with disability: the work of the Washington group on disability statistics. BMC Public Health 11(Suppl 4):S4. https://doi.org/10.1186/1471-2458-11-S4-S4
McFarlane J, Hughes RB, Nosek MA (2001) Abuse assessment screen disability (AAS-D): measuring frequency, type and perpetrator of abuse towards women with physical disabilities. J Women’s Health Gender Based Med 10(9):861–866. PubMed PMID: 11747680
Mont D (2017) Training on how to ask “disability” questions on censuses and surveys. www.washingtongroup-disability.com/washington-group-blog/, 13 October 2017. Accessed Oct 2017
O’Donovan M, Good A (2010) Towards comparability of data: using the ICF to map the contrasting definitions of disability in Irish surveys and census, 2000–2006. Disabil Rehabil 32(Suppl 1):S9–S16
Oliver M (2013) The social model of disability: thirty years on. Disabil Soc 28(7):1024–1026. https://doi.org/10.1080/09687599.2013.818773
Riddle C (2016) Human rights, disability and capabilities. Palgrave Macmillan, New York
Schneider M, Hurst R, Miller J, Ustun TB (2003) The role of the environment in the international classification of functioning, disability and health (ICF). Disabil Rehabil 25(11–12):588–595
Shakespeare T (2013) Disability rights and wrongs revisited. Routledge, London
Union of Physically Impaired Against Segregation (UPIAS) and The Disability Alliance (1976) Available in National Disability Arts Collection and Archive, the-NDACA.org
United Nations Convention on the Rights of Persons with Disabilities (2006) UN, New York
WHO (2017) WHO code of ethics and professional conduct. https://www.who.int/
WHO (2018) How to use the ICF: draft practical manual for using the ICF. https://www.who.int/classifications/icf/en/
WHO Research Ethics Review Committee (2018) Informed consent template for research involving children, qualitative studies. https://www.who.int/
Key Resources

These comprehensive websites include advice on improving research ethics in their respective areas.
United Nations
UN Department of Economic and Social Affairs, Monitoring and Evaluation of Disability Inclusive Development (DESA)
https://www.un.org/development/desa/disabilities/convention-on-the-rights-of-persons-with-disabilities.html
https://www.un.org/development/desa/disabilities/convention-on-the-rights-of-persons-with-disabilities/monitoring-of-the-implementation-of-the-convention.html
UN Statistics Division
http://www.washingtongroup-disability.com/
http://www.washingtongroup-disability.com/washington-group-question-sets/short-set-of-disability-questions
World Health Organization (WHO)
Classifications: ICF
https://www.who.int/classifications/icf/en/
https://www.who.int/disabilities/world_report/2011/report.pdf
38 The Ethics of Research and Indigenous Peoples: Closing Borders and Opening Doors

Lily George
Contents
Introduction . . . . . . . . . . 676
Positioned Subjectivity . . . . . . . . . . 676
Closing Borders . . . . . . . . . . 678
Development of Research Ethics . . . . . . . . . . 680
Opening Doors . . . . . . . . . . 682
The Power of Love . . . . . . . . . . 685
Conclusion . . . . . . . . . . 688
References . . . . . . . . . . 689
Abstract
For several centuries, Indigenous peoples around the world have been “subjects” of research by western academics who considered themselves to be authoritative and objective scientists. Research therefore became a “dirty word” for many Indigenous peoples, seemingly more focused on our deficits and reinforcing negative stereotypes. From the late 1970s, however, Indigenous and other marginalized groups began protesting their lack of rights with regard to research and closed borders against non-Indigenous researchers, to the strident objection of many. In New Zealand, notions of ethical research arose with the Cartwright Inquiry (1987–1988) and the subsequent development of institutional ethics committees, with the requirement of having Māori members on health-related ethics committees. Today the legitimacy of Indigenous research and Indigenous research methodologies is recognized in many countries, although challenges remain. Indigenous researchers claim the right to develop and enact research ethics based on the premises and cultural mores from within which they research. This chapter will explore an Indigenous
L. George (*) Victoria University of Wellington, Wellington, New Zealand e-mail: [email protected]; [email protected] © Springer Nature Switzerland AG 2020 R. Iphofen (ed.), Handbook of Research Ethics and Scientific Integrity, https://doi.org/10.1007/978-3-030-16759-2_31
research project, associated ethical challenges, and their amelioration through the utilization of Indigenous cultural knowledges.

Keywords
Indigenous research · Research ethics · Kaupapa Māori research · Ethical research · Indigenous research methodologies
Introduction

For several centuries, Indigenous peoples around the world have been subjects and objects of research by western academics who considered themselves to be authoritative and objective scientists. This academic distancing enabled the use of research as a tool of colonization and intellectual arrogance enacted upon Indigenous peoples. Research therefore became a “dirty word” (Smith 1999, 2012) for many Indigenous peoples, seemingly more focused on their deficits and reinforcing negative stereotypes. From the late 1970s, however, Indigenous and other marginalised groups began protesting their lack of rights with regard to research and closed borders against non-Indigenous researchers, to the strident objection of many. Today the legitimacy of Indigenous research and of Indigenous research methodologies and methods is recognised in many countries, although challenges remain. Indigenous researchers claim the right to develop and enact research ethics based on the premises and cultural mores from within which they research, including an emphasis on developing and maintaining positive relationships with participants. Important publications on Indigenous ethics such as the San Code of Research Ethics (South African San Institute 2017) highlight respect, honesty, justice and fairness, and care as important components of research with Indigenous peoples. In New Zealand, notions of ethical research arose with the Cartwright Inquiry (1987–1988) and the subsequent development of institutional ethics committees, with the requirement of having Māori members on institutional and national ethics committees. Kaupapa Māori research methodology and methods have come to be more widely accepted in this country, helping set the foundation for Indigenous research methods elsewhere. This chapter will use as an exemplar a Māori youth development project in which a central tenet has been the use of aroha (love and compassion) to promote positive growth.
Positioned Subjectivity

A note on a significant feature of Indigenous research is required: the proximity of Indigenous researcher/s and those they research with, and the closing of the distance between them in respectful relationships. In her powerful work, Decolonizing methodologies (1999, 2012), Linda Tuhiwai Smith wrote that:
In all community approaches process – that is, methodology and method – is highly important. In many projects the process is far more important than the outcome. Processes are expected to be respectful, to enable people, to heal and to educate. (2012, p. 130)
I contend that the expectation “to heal” is a project of research that is uniquely Indigenous, arising primarily from the impact of colonization and the resultant historical and intergenerational trauma responses (see Brave Heart 2005; Evans-Campbell 2008; Faimon 2004; Fast and Collin-Vezina 2010; Sotero 2006; Walters and Simoni 2002). This expectation reminds me of who I am as an Indigenous researcher. I am an academic who primarily works outside of an academic institution, specialises in community-based health research, is current Chair of the New Zealand Ethics Committee, and is editing a book on Indigenous research ethics. I am also a Māori woman of the Te Kapotai and Ngāpuhi tribes, a descendant of Whiti, a mother and grandmother, and a member of the Waikare community – my tūrangawaewae, my “place to stand,” my people to whom I belong. As an Indigenous researcher, I must be able to claim a firm foundation for myself – “an intellectual tūrangawaewae” based on my cultural and intellectual heritages – from which I “can venture outwards into the multiple and complex fields of research” (George 2010a, p. 55). I struggled to write this chapter about research ethics and Indigenous peoples while replicating the distance between researcher and researched that often enabled researchers to describe the worlds of their “subjects” in ways that dehumanised or denigrated them. For most Indigenous peoples, research is not an objective exercise designed only to add to and advance knowledge. The objective of research is to utilise what is a powerful tool for the benefit of our people; those in our homes and at home, those of our tribes and our nations, and those of other Indigenous peoples to whom we connect through a shared history of colonial imperialism.
The often-desired outcome is transformation which “transcend[s] the basic survival mode through using a resource or capability that every Indigenous community has retained through colonization – the ability to create and be creative” (Smith 2012, p. 159). As noted by Durie (2009) – “The future is not something we enter. The future is something we create” (p. 20). Many Indigenous researchers consider themselves in what may be termed a “positioned subjectivity” (Woodthorpe 2007) that acknowledges where they stand in the research projects they undertake, who they are as a person as well as a researcher, how they connect to the people they are working with, and the relationships between them. This kind of research:

is a craft which recognises and accepts the humanity of the researcher as well as the researched, and which balances this with wider analyses of the social life within which those we research with are embedded, and researchers are privileged to enter. (George 2010b, p. 30)
Nevertheless, “To acknowledge particular and personal locations is to acknowledge the limits of one’s purview from these positions” (Narayan 1998, p. 178).
Closing Borders

There is a plethora of literature to support the contention that research has been conducted on Indigenous peoples by social scientists and other researchers for many years (e.g., Bastien et al. 2003; Deloria 1988; Durie 1998; Mihesuah 1998; Smith 1999, 2012; Walker 1987). Research by non-Indigenous researchers was seemingly more focused on perceived deficits, which created and reinforced negative stereotypes – these “victims of progress” (Bodley 2014) who somehow manifested their own circumstances of marginalization and dis-ease. Research and researchers have long been implicated in the processes of colonisation, displaying the ethnocentrisms and prejudices of the societies from within which they were formed. For example, Bronislaw Malinowski (1884–1942) has been credited with creating the main methods of anthropology: fieldwork and ethnography. Malinowski (1922) stated on the one hand that the social institutions of the “natives” “have a very definite organization. . .they are governed by authority, law and order in their public and personal relations. . .under the control of extremely complex ties of kinship” (pp. 11 and 10). Yet the release of his personal diaries in 1967 caused extreme controversy within anthropology, with comments such as “I see life of the natives as utterly devoid of interest and importance, something as remote from me as the life of a dog” (cited in Kuper 2014, p. 14). In nineteenth century New Zealand, amateur ethnologists such as Tregear, Smith, Best and White were prolific writers on the social and cultural nature of “the Māori.” They generated what Ballara (1998) considered a “grand design” (pp. 97–99), an abbreviated description of “Māori” society suitable for the purposes of the emerging nation state.
Ignoring tribal variations, they charted the historical development of all Māori in a systematic and simplistic manner that assumed an apparently static culture, not taking into consideration post-colonization changes (Webster 1998). Notions such as “the great migration” of Māori explorers from the tenth century, with a “great fleet” in the fourteenth century, were commonly believed into the twentieth century, eventually becoming the “accepted wisdom” (Durie 1998, p. 54) for Māori also; the great migration was solidly debunked by Simmons’ 1976 book, The Great New Zealand Myth. From the late 1960s, many of the early histories written about Māori were investigated and heavily critiqued by Māori (and some non-Māori) scholars. (The Waitangi Tribunal was formed in 1975 to investigate breaches of the Treaty of Waitangi (1840). Originally able to investigate breaches from 1975 only, the Treaty of Waitangi Amendment Act (1985) gave the Tribunal the ability to investigate Treaty breaches from 1840.) While contentious and controversial, the Treaty claims process has enabled the reclaiming and retelling of Māori tribal histories through the perspectives of Māori people. Nevertheless, research was often seen by Indigenous peoples as primarily for the benefit of researchers coming from institutions with little real connection or relationship to the people they researched. Weber-Pillwax (2001) wrote of the experience of reading an academic article and being horrified to realise that the Cree man being described was her grandfather, and she considered their “lives had been assaulted and violated” (p. 166), as her family had no idea of the existence of
the research and the resultant article. Additionally, there were inaccuracies in the English translation of the Cree words her grandfather had used, therefore misrepresenting his intended meaning. An outcome of this experience for Weber-Pillwax was a determination to develop Indigenous ways of research that were culturally safe, respectful to, and of real value for the communities involved. Standing Rock Sioux scholar and activist Vine Deloria was a well-known critic of academia, and of anthropologists in particular; a contention echoed by other Indigenous writers (e.g., Smith 1999, 2012). Deloria’s (1969, 1988) book, Custer died for your sins, articulated his assertion that anthropologists “are the most prominent members of the scholarly community that infests the land of the free, and in the summer time, the home of the braves,” and that “Indians have been cursed above all people in history. Indians have anthropologists” (1988, p. 78). For Deloria, it seemed there was no reconciling the anthropological project with that of Indigenous people. With regard to “Indian anthropologists,” Deloria (cited in Watkins 2006) later wrote:

Some prominent Indian anthros have announced at Indian meetings, ‘I’m an Indian but I’m also an anthro’. There is no question in this announcement that the individual has chosen the profession over the community. Once this happens. . .unless they prove momentarily useful they are never trusted again and people avoid them whenever possible. (p. 507)
As a self-proclaimed “Indigenous anthropologist,” therefore, I must assume a political stance which necessitates knowledge of the associated politics, and the tensions inherent from within that positioning, defining an Indigenous anthropologist as: An Indigenous person who works mainly with his or her own people, who is cognizant of the issues and challenges that Indigenous people share, and approaches research as a reciprocal and collaborative endeavour that privileges Indigenous concerns and Indigenous knowledge. (George 2010b, p. 62)
Paradoxically, it was within the discipline of anthropology in New Zealand that I found it most comfortable to express who I am as a Māori woman, possibly reflecting the sensitivities of the anthropologists I worked with rather than the fullness of the discipline itself. Nevertheless, there have been many criticisms of anthropology and anthropologists by Indigenous writers (e.g., Mihesuah 1998; Smith 1999, 2012). Indeed, there have been intellectual and institutional crises since the 1960s across the spectrum of western academia as challenges from a range of previously marginalized groups were, and continue to be, strongly articulated. These crises have a historicity which necessitated academics and researchers turning an intense gaze into their own disciplines, which were being implicated in contributing to the decimation and denigration of Indigenous and other marginalized groups. As Salzman (in Borofsky 1994) stated – “our project rarely if ever speaks to their [the researched] needs. Almost always, we are doing our research to satisfy ourselves, emotionally and intellectually, and to build up our careers, to make our own lives better” (p. 31). Much of this resulted in the closing of research borders (intellectually and
metaphorically, and sometimes literally), which meant that non-Indigenous researchers no longer had the relatively easy access to Indigenous populations they had taken for granted in the past.
Development of Research Ethics

Smith and Hudson (2012) consider that ethics is about “doing the right thing at the right time for the right reasons” (p. 1). Ethics are a set of guidelines, or principles, or values that inform how you are going to do particular tasks, or how you are going to develop and maintain particular relationships. However, there is a whakapapa (history/genealogy) to the development of research ethics that can be traced back to World War 2 and earlier. The concept of Eugenics was developed by Charles Darwin’s cousin, Francis Galton, in 1883, and was originally seen as a method of improving the human race. Galton said that Eugenics “is the study of the agencies under social control that improve or impair the racial qualities of future generations either physically or mentally” (cited in Miller 2014, p. 388). In extension of the study of such “agencies,” therefore, steps were taken to ensure the destruction of those qualities held to impair racial improvement and the support of those traits held to improve future generations; of course, the desirable traits were those common to the “white” races. Humanity has a long history of such racism; Aristotle had a theory of “natural slavery” whereby “some people are naturally inferior and can therefore properly be enslaved by others,” and this was used as a vindication for the maltreatment of Indians by conquistadors, and later for the slave trade of Africans (Maybury-Lewis 1992, p. 18). “Social Darwinism” with its “survival of the fittest” theories, Eugenics, and the rise of other scientific investigations added to such beliefs, which were to influence Western thought for several centuries. Eugenics was particularly popular in the early twentieth century.
Many countries enacted various Eugenics policies and programs including genetic screening, birth control, promoting differential birth rates, marriage restrictions, segregation according to race or mental acuity, compulsory sterilization and abortions, or forced pregnancies, and genocide. Miller (2014) notes that “History shows us how the demonization of specific populations, medicalization of racial purification, radicalization of racial hygiene, and state-sponsored genocide provided the necessary crucible for the Holocaust” (p. 389). Such ideas enabled the forced confinement of so-called “inferior and degenerate” races such as Jews, with millions forced into concentration camps in Europe, many of whom were tortured, starved, and killed. The Nuremberg trials began in 1945 in Germany, bringing to justice military and medical personnel involved in war crimes during World War 2. Among the most notorious of the Nazi doctors was the “Angel of Death,” Josef Mengele, who came “to represent all of Nazi evil in Auschwitz” (Siedelman 1988, p. 221), although Mengele himself evaded capture and was never brought to trial. A positive outcome from the Trials was the development in 1948 of the Nuremberg Code, which listed 10 ethical principles including voluntary consent; resulting in good outcomes for society; avoidance of unnecessary
suffering, injury, and death; minimisation of harm; and rights of participants to withdraw (Shuster 1997). These were supplemented in 1964 with the Helsinki Declaration, which “addressed deficiencies in the Nuremburg Code, specifically with regard to research in the legally incompetent (e.g., mentally ill, temporarily incapacitated, children) and introduced the concept of therapeutic versus nontherapeutic research” (Macrae 2007, p. 176). Despite the developments noted above, however, unethical research continued. The infamous Tuskegee syphilis experiment was conducted by the US Public Health Service between 1932 and 1972. (From 1946 to 1948, the USPHS was also involved “immorally and unethically – and, arguably, illegally” (Rodriguez and Garcia 2013, p. 2122) in experiments on more than 5000 un-informed and non-consenting Guatemalan people, at least 1308 of whom were intentionally infected with sexually transmitted diseases. This was done with the knowledge and cooperation of Guatemalan authorities.) The non-consenting participants were 600 African-American men who thought they were receiving free health care but who were instead being experimented on. Of these 600, 399 already had syphilis, although they were not informed of the diagnosis. The study became even more unethical in 1947 with the confirmation of penicillin as an effective treatment for syphilis – doctors chose neither to tell the participants of the cure nor to treat them with it. By 1972, at least 28 and possibly more than 100 had died of syphilis complications. Brandt (1978) connected the implementation of the Tuskegee study to Darwinism, which “provided a new rationale for American racism,” with speculation that “in the struggle for survival the Negro in America was doomed” (p. 21). It was only in 1972, with increasing public outcry following media attention, that the “experiment” was finally stopped.
Brandt (1978) noted further that: In retrospect the Tuskegee Study revealed more about the pathology of racism than it did about the pathology of syphilis; more about the nature of scientific inquiry than the nature of the disease process. The injustice committed by the experiment went well beyond the facts outlined in the press and the HEW Final Report. The degree of deception and damages have been seriously underestimated. As this history of the study suggests, the notion that science is a value-free discipline must be rejected. (p. 27)
In New Zealand, a 1987 Commission of Inquiry headed by Silvia Cartwright (the Cartwright Inquiry) investigated what became known as the “unfortunate experiment.” From 1966 to the early 1980s, Associate Professor Herb Green conducted a study of 948 women who had been diagnosed with cervical carcinoma in situ (CIS). This study involved withholding treatment – without their knowledge or consent – from 131 of those women to see what would happen. Women’s health advocates Sandra Coney and Phillida Bunkle wrote an exposé in a 1987 issue of Metro magazine. They called it the “unfortunate experiment” after a 1986 letter to the NZ Medical Journal by Professor David Skegg. Coney and Bunkle (1987) argued that “by watching these women, Green hoped to observe the natural history of the disease and prove his thesis, that untreated CIS rarely, if at all, led to invasion”; in fact, only 5% of the affected women had their cancer disappear without treatment. The Commission did indeed find evidence of major medical malpractice, as well as uncovering other
unethical practices such as taking cervical smears from baby girls without parental knowledge or consent, and hospital practices of teaching vaginal exams and IUD insertions on un-consenting, un-informed, and unconscious women anaesthetised for other operations (Coney and Bunkle 1987). As a result of the Cartwright Inquiry, the Office of the Health and Disability Commissioner was set up, with a Code of Health Consumers’ Rights which strongly highlighted issues of informed consent; Health and Disability Ethics Committees were established nationwide to deal with clinical and other health research (see https://ethics.health.govt.nz/home). Teaching practices in hospitals were changed to meet international standards. Independent ethics committees were also set up in many institutions – including universities – throughout the country. The New Zealand Ethics Committee (see http://www.nzethics.com/), staffed by academic and community volunteers, provides ethical support for non-university and community research. All ethics committees in New Zealand are required to have Māori members on board, reflecting commitment to the principles of the Treaty of Waitangi. (The principles are often cited as participation, partnership, and protection. Having Māori members on ethics committees speaks to all three principles.)
Opening Doors

Possibly the most influential text connecting the development of Indigenous research methodologies and related ethics is Linda Tuhiwai Smith’s Decolonizing methodologies: Research and Indigenous peoples (1999, 2012). Unangax scholar Eve Tuck (2013) writes that:

I was captivated by the layered wisdom in this text for novice, more experienced, and expert researchers. Published nearly 15 years ago, Decolonizing Methodologies has profoundly influenced my generation of critical researchers. It has given us an anti-colonial lexicon of research, and an ethics of making space and showing face. (p. 365, original italics)
Arising from Smith’s text, notions of decolonizing research and people became entrenched in the academic psyche of Indigenous researchers and scholars, and somewhat more reluctantly, in that of non-Indigenous researchers. Smith (2012) states she wrote the book: primarily to disrupt relationships between researchers (mostly non-Indigenous) and researched (Indigenous), between a colonizing institution of knowledge and colonized peoples whose own knowledge was subjugated, between academic theories and academic values, between institutions and communities, and between and within Indigenous communities themselves. (p. x)
Smith included the assertion that one factor precipitating this book was the increasing research activity amongst Indigenous communities and the increase in Indigenous researchers who could bridge the gap between academy and community. The second part of the book asserts our right to be researching ourselves and researching
for ourselves. It lays the groundwork (already developing) for “kaupapa Māori research,” still a radical idea for many in 1999. Smith concludes the 2012 book with:

Research seems such a small and technical aspect of the wider politics of Indigenous peoples. It is often thought of as an activity that only anthropologists do! As Indigenous peoples we have our own research needs and priorities. Our questions are important. Research helps us to answer them. (p. 232)
Today the legitimacy of Indigenous research, and of Indigenous research methodologies and methods, is recognised in many countries, and a plethora of literature addressing the multiple related issues continues to explore, expand, and excite. Indigenous researchers claim the right to develop and enact research ethics based on the premises and cultural mores from within which they research. One of the fundamental tenets of Indigenous research is the legitimacy of Indigenous knowledge. There cannot be Indigenous research without Indigenous knowledge, and there cannot be Indigenous research ethics without drawing values and guidelines from Indigenous knowledge:

Indigenous peoples regard all products of the human mind and heart as interrelated within Indigenous knowledge. They assert that all knowledge flows from the same source: the relationships between global flux that needs to be renewed, the people's kinship with the spirit world. Since the ultimate source of knowledge is the changing ecosystem itself, art and science of a specific people manifest these relationships and can be considered as manifestations of people's knowledge as a whole. (Battiste and Youngblood Henderson, cited in Martin-Hill and Soucy 2006, p. 12)
As noted by Hudson et al. (2010), "Ethics is about values, and ethical behaviour reflects values held by people at large. For Māori, ethics is about 'tikanga' [locally specific practices] – for tikanga reflects our values, our beliefs and the way we view the world" (p. 2). While we collapse a profusion of cultural groups under the term "Indigenous peoples," the heterogeneity amongst Indigenous peoples must not be forgotten. Indigenous peoples may want similar things, and they strive to ensure that ethics is correct for their people and their place, but as Sherwood and Anthony (forthcoming) maintain, our ethics is place-based and there is little universalism in on-the-ground methods, which must take account of the needs and worldviews of the local peoples. The Australian National Health and Medical Research Council (cited in Hudson et al. 2010) maintains that: "In a research context, to ignore the reality of inter-cultural difference is to live with outdated notions of scientific investigation. It is also likely to hamper the conduct of research, and limit the capacity of research to improve human development" (p. 1). Martin-Hill and Soucy (2006) report on discussions regarding the incorporation of cultural principles in biomedical and traditional health research ethics. They noted the increasing recognition of traditional medicine and spiritual healing as integral to the development of systems of care that are culturally relevant. In doing so they also acknowledge the pluralism of Indigenous knowledges amongst the groups they are working with, generally seeing this as a strength rather than a deficit: "With
684
L. George
Indigenous methodologies, we can move past the often-flawed research methods of the ethnocentric past practices to research methods that are beneficial for all" (Martin-Hill and Soucy 2006, p. 17). Additionally, more conventional methodologies that nevertheless draw on the importance of collective endeavour are also useful. Wallerstein and Duran (2010) advocate for Community-Based Participatory Research (CBPR) "as a transformative research paradigm that bridges the gap between science and practice through community engagement and social action to increase health equity" (p. e1). CBPR brings together health professionals, academics, and community members to give "underserved communities a genuine voice in research, and therefore to increase the likelihood of an intervention's success" (p. e1). Projects often arise out of the stated needs of the community, rather than the research objectives of the professionals, and strive to equalise the power differentials that usually exist. Communities are seen as experts on their own lives, working closely and collaboratively with various professionals to achieve self-identified solutions to self-identified local challenges. This serves to ameliorate the issues of trust often associated with research on Indigenous peoples. Efforts must be genuine on the part of the academics and other professionals, however, rather than a surface community-outreach strategy that does little to change internal structures. Wallerstein and Duran (2010) contend that: "Within the university, both structural changes and the cultural humility of academics can redress power imbalances and foster the needed trust within partnerships to enable the most effective translation of research within diverse settings" (p. e5).
Whether using an Indigenous methodology or method, or a more conventional research methodology that incorporates Indigenous methods within it, a fundamental point is to ensure that the research is of use to the community or group within which it is undertaken, that methods are respectful of local cultural worldviews (there can be dynamic differences between sub-groups within larger groups), and that processes are collaborative and power-sharing, rather than continuing the imposition of externally mandated outcomes. Too easily, however, Indigenous methods and methodologies – and researchers – can be "captured" within the very institutions they were designed to disrupt. Jackson (2013) considers that:

If we are to know all that we might want to know, and if we are to produce work that serves our people, then I believe there is an obligation to not only be academically rigorous but also daringly transformative and bravely political. To re-find what we were and are, in order to re-search what we might yet be.
In his 2013 address, Jackson presented "10 ethics of kaupapa Māori research" that are worth repeating here:

1. Ethic of Prior Thought – Drawing on the wisdom of our ancestors; understand what has been in order to understand what is now, and then to build what will be.
2. Ethic of Moral or Right Choice – Research requires a moral focus. Ask: will this be right, moral, tika [correct, just, fair]? Theory does not exist in isolation; there are possible human consequences from what we do.
3. Ethic of Imagination – There is joy to be had in our flights of imagination; it often takes leaps of poetic imagination to lead us to facts/evidence.
4. Ethic of Change – Research should be seeking transformation in the lives of those we research with. Change can be transitioned more easily with effective research.
5. Ethic of Time – ... Don't be pushed into doing things to meet someone else's timetable if it doesn't feel right. There is nothing wrong with stepping back and/or letting go if the project is not right. So, know our time, and what time is right for us.
6. Ethic of Power – "By us, with us, for us." If knowledge is power, then we need to be really clear about whose knowledge we are defining. If it is our knowledge, that gives us power to be who we are.
7. Ethic of Courage – To research well we need to be brave. To do transformative research, we need to be brave.
8. Ethic of Honesty – In our researching, be honest. Acknowledge that our people were/are human, fallible, made/make mistakes. But, be honest about ourselves with a wise and loving heart. There is strength in gentle criticism.
9. Ethic of Modesty – The seduction of academic success can be very alluring and promote notions of hierarchical elitism. Remember always that we are mokopuna [grandchildren] and that we have (or will have) mokopuna, and we must carry our knowledge modestly.
10. Ethic of Celebration – Celebrate our knowledge, our uniqueness, our survival!

These particular ethics draw strongly from Māori cultural knowledge, demonstrating that such knowledge belongs not just to the past, but also to the present and the future. Research can then become "about creating knowledge, seeking understanding, enriching the world around us – including that of the communities in and with which we work" (George 2010b, p. 277).
The Power of Love

Suicide in New Zealand – and amongst many Indigenous populations – is a major health crisis, with Māori youth disproportionately more likely to die from suicide than non-Māori youth. Newstalk ZB and the New Zealand Herald (2019) report that "New Zealand has the highest death rate for teenagers and young people among 19 of the world's developed, wealthy countries." A ten-year peak has been reached, with the rate of Māori suicide deaths at 20.3 per 100,000 of population, compared with 9.5 per 100,000 for non-Māori (Radio New Zealand 2019). Given that Māori constitute only 15% of the New Zealand population, it is clear that Māori youth have the highest rate of suicide in the developed world. Youth suicide statistics in the Northland region of New Zealand have steadily increased over the past decade. During 2012 there was a spike in the number of young people dying by suicide, with an increase from five in 2011 to 19 in 2012 (Penney and Dobbs 2014). As a result of this spike, many organisations in Northland received
funding to devise programmes to address this crisis, with the result that youth suicide decreased slightly during 2013 and 2014 (Penney and Dobbs 2014). However, during 2015 there was an overall 33% increase in suicide completions in Northland (Coronial Services of New Zealand 2015), and the rates have continued to grow there. In order to address this, the Ngātiwai Trust Board undertook a research programme running from 2015 to 2020, in three separate projects. He Ara Toiora (2015–2016) took a positive focus on developing Ngātiwai (Northland tribe) suicide prevention strategies that respond to the specific needs and aspirations of their taitamariki (youth). The project used performative research and kaupapa Māori research methodologies, applying art and drama strategies of youth development through a series of wānanga (learning spaces to meet, discuss and consider) on local marae (traditional gathering places). Kokiritia Te Ora (2016–2017) built on the first project and focused on positive youth development; while retaining a connection to suicide prevention, this project was transformed following consultation hui (meetings) with whānau (extended family) and local professionals in January 2015. An aim of the project was to increase hopefulness and celebrate life and our taitamariki, so that suicide was no longer seen as an option. Kokiritia Te Ora utilised Community-Based Participatory Research (CBPR), building research capability by engaging the group as community researchers, again through marae-based wānanga. They investigated themes of connection, healing and leadership identified in the first project. Whānau were integral to all three projects; while in He Ara Toiora fewer whānau members were involved, by Kokiritia Te Ora more whānau, and multiple members of several whānau, were active in the projects, most of whom remained for Kokiritia Te Aroha.
There has been a deliberate openness to ongoing praxis (i.e., integrated theory and practice; see Zuber-Skerritt 2001) through discussions with the whole group, where activity is followed by assessment, and adjustments to practices are made when necessary. The current project, Kokiritia Te Aroha (2018–2020), draws from the previous projects to develop a "toolbox of resources" for a marae-based youth development programme. Resources produced include a short film (entered into the Maoriland Film Festival), an app (developed in collaboration with Kiwa Digital), and a "Facilitators' Workbook," the purpose of which is to provide a list of activities and resources needed, as well as some information on funding sources, as a kind of smorgasbord from which other communities can create their own youth development programme that fits the needs and resources of their local people. Though still few, the academic publications add to the growing and necessary body of literature on Indigenous suicide and suicide prevention. But the greatest impact of these programmes is that none of the youth involved have died by suicide: our greatest measure of success, and perhaps an expression of the power of love. One of the most significant findings of He Ara Toiora is also the simplest – "you just have to love them!" This quote from one of the research team acknowledges that what taitamariki need most is love, security, belonging, and purpose. Reconnecting taitamariki to whenua (land), marae, kaumātua and kuia (elders) in an atmosphere of aroha, manaaki (care) and safety increases their strength and
belonging. Their integral involvement in the projects has helped them build notions of self-worth, strengthened intergenerational connection, encouraged positive growth, and provided them with a powerful voice regarding development processes which influence their lives (George et al. 2017). However, the simple answer of "just loving them" does not articulate the complexity of the everyday lives of the taitamariki, nor that of previous generations of their whānau. Poverty and lack of education and employment are common features; in some households, multiple whānau members have multiple health issues; most have current or past gang affiliations; incarceration is a common experience; there are drug and alcohol issues (for the young ones as well as their parents and/or grandparents); sexual violence is all too common. These issues do not exist in isolation, however, but arise from within particular contexts. They are part of the fabric that we – collectively – weave as our society. Suicide itself is a symptom of a dis-ease within our society, surfacing from amongst the detritus created by historical traumas that continue to impact on Māori and many other Indigenous groups today. Aroha is defined in www.maoridictionary.co.nz as "affection, sympathy, charity, compassion, love, empathy." Aroha is about our collective responsibility to ourselves, to each other, and to our society. It is about using care to weave our collective worlds. And it is about honouring the basic humanity that exists within each of us, especially in relation to working with young ones who are struggling with many issues, including the normal angst of being a teenager. Schoder (2010) drew from Paulo Freire's notion of a "pedagogy of love" to consider that:

a conscious life – as an ongoing process of education – is an act of love; we need the courage to risk acts of love; we can create a world in which it will be easier to love.
If we do not consciously strive to love, we risk letting other, less noble, ethics dominate our lives. (p. iii)
In their work with "at risk" youth, the urban group Ngā Rangatahi Toa utilise Freire's "pedagogy of love" as an essential concept for their programmes. Such a stance can then address and reverse the harm perpetrated by the universal rangatahi (youth) experience of being "shaped in an image not of their own making" (Longbottom 2016). An outcome of their programme "is a self-confidence which results usually in greater acceptance and participation, and a more meaningful place in the wider world" (Longbottom 2016). He Ara Toiora used art and drama, music and dance to open spaces where the youth could express emotions and challenges, including experiences relating to suicide. In Kokiritia Te Ora we encouraged them to explore beyond the narrow limitations of their social and economic environments, to show them a world with greater opportunities, to help them understand their potential, and to encourage larger dreams. In Kokiritia Te Aroha, the taitamariki glimpse their legacy to the world – of connection, of healing, of leadership. Castellano and Archibald (2013) note that by employing "reclaimed culture as a 'healing tool,' clusters of healthy, revitalized people are fostering community renewal, re-forging their identities, and asserting their place within the
wider...society" (p. 75). Unique perceptions of current conditions can be extracted from encountering past trauma with open hearts and minds. These developments express the freedoms of self-determination as we "rename or reframe our experiences from traditional knowledge – it is a spiritual process and the renaming is part of decolonising ourselves" (Walters 2007, p. 40). It is a way of "restoring whakapapa consciousness" (Lawson-Te Aho 2015) by understanding the richness of our history while taking for granted the effectiveness of our unique cultural knowledges for our healing and growing (George et al. 2017). In the processes of these three projects, there has been a constant commitment to treating those involved, particularly the taitamariki, with love and respect. "He kai kei aku ringa – There is food at the end of my hands": this whakataukī (traditional saying, proverb) expresses the humble assertion of someone who is confident in his or her capacity to use their ability and resources to create success for themselves and their loved ones. It seems an essential goal to ensure that our taitamariki grow to be confident and successful, strong in their connections to their cultures, well educated in all aspects of society, with the knowledge and ability to lead us powerfully into the future. (This whakataukī features in the Māori Economic Development Panel's Strategy to 2040 (Tomoana et al. 2012). Chair Ngāhiwi Tomoana states that He kai kei aku ringa is a "metaphor for our resilience as a people" (p. 2), and becomes "possible when: Māori experience a transformational change in economic performance...socio-economic outcomes; and New Zealand experiences a transformational change in national economic direction" (p. 6).)
Conclusion

Concentrating on "our own projects" (Anzaldua 1990, p. xviii), as well as continuing to critique ongoing domination and marginalization, must include the creative negotiation of relationships between peoples. Sandoval (cited in Anzaldua 1990) writes that "We had each tasted the shards of 'difference' until they had carved up our insides; now we are asking ourselves what shape our healing would take" (p. xxvii). For Indigenous peoples the "shape of our healing" must be self and collectively defined, rather than externally imposed. A result of this healing may then be stronger relationships between, rather than against, each other. Indigenous researchers will continue to critique intellectual systems that do not adequately address the conflicts of past and present, while creatively working towards our own solutions to ameliorating past and present harms. Many western scholars also work towards similar ends, which suggests the possibility of constructing, from the ashes of historical follies, an intellectual environment that meets the needs of researcher and researched, "western" and "other," and indeed all those involved in the production of knowledge. Perhaps, as stated by Lowe et al. (forthcoming), what is needed is a "deeper deep listening" which "is a matter of the senses, of past and present, of listening towards a future with ethical research journeys which are conscious and creative enough to bring light to shadows of the past."
During research journeys, we often weave together the multiple spaces and places within which the processes of research take place. These spaces and places are not always of the present but are often intruded upon by the whisperings of the past, which at times become vociferous calls to take note of who we are as human beings as well as "objects" of research. By taking note in ways that "stir up silence" (Smith 2012, p. 1), we can create research that is meaningful for local communities while also having national and international significance.
References

Anzaldua G (1990) Introduction. In: Anzaldua G (ed) Making face, making soul – Haciendo caras: creative and critical perspectives by women of color. Aunt Lute Foundation Books, San Francisco
Ballara A (1998) The dynamics of Māori tribal organisation from c.1769–1869. Victoria University Press, Wellington
Bastien B, Kremer JW, Kuokkanen R, Vickers P (2003) Healing the impact of colonization, genocide, missionization, and racism on Indigenous populations. In: Krippner S, McIntyre T (eds) The psychological impact of war trauma on civilians. Greenwood Press, New York. Retrieved from https://library.wisn.org/wp-content/uploads/2016/02/130111-Healing-the-Impact.pdf
Bodley JH (2014) Victims of progress, 6th edn. Rowman & Littlefield Publishers, Maryland
Borofsky R (1994) Assessing cultural anthropology. McGraw-Hill, New York
Brandt AM (1978) Racism and research: the case of the Tuskegee Syphilis study. The Hastings Center Report 8(6):21–29. Retrieved from https://dash.harvard.edu/bitstream/handle/1/3372911/Brandt_Racism.pdf
Brave Heart M (2005) Keynote address: from intergenerational trauma to intergenerational healing. Wellbriety – White Bison's Online Magazine 6(6):2–8
Castellano MB, Archibald L (2013) Healing historic trauma: a report from the Aboriginal Healing Foundation. In: White JP, Wingert S, Beavon D (eds) Volume 4: moving forward, making a difference. Aboriginal policy research series. Thompson Educational Publishing, Inc, Canada
Coney S, Bunkle P (1987) An unfortunate experiment at National Women's. Metro. Retrieved from https://www.metromag.co.nz/society/society-etc/an-unfortunate-experiment-at-national-womens
Coronial Services of New Zealand (2015) Annual suicide statistics – Media Release. Retrieved from http://www.justice.govt.nz/courts/coroners-court/suicide-in-new-zealand/suicide-statistics-1/20142015-annual-suicide-stats-media-release
Deloria V (1969) Custer died for your sins: an Indian manifesto.
University of Oklahoma Press, Oklahoma, USA
Deloria V (1988) Custer died for your sins: an Indian manifesto. University of Oklahoma Press, Oklahoma
Durie M (1998) Te mana te kawanatanga: the politics of Māori self-determination. Oxford University Press, Auckland
Durie M (2009) Pae ora: Māori health horizons. Te Mata o Te Tau lecture series 2009: the Paerangi Lectures – Māori horizons 2020 and beyond. Te Mata o Te Tau, Massey University, Albany
Evans-Campbell T (2008) Historical trauma in American Indian/native Alaska communities: a multilevel framework for exploring impacts on individuals, families, and communities. J Interpers Violence 23(3):316–338
Faimon MB (2004) Ties that bind: remembering, mourning, and healing historical trauma. Am Indian Q 28(1/2):238–251. Special issue: empowerment through literature
Fast E, Collin-Vezina D (2010) Historical trauma, race-based trauma and resilience of indigenous peoples: a literature review. First Peoples Child Fam Rev 5(1):126–136
George L (2010a) Articulating the intellectual tūrangawaewae of an Indigenous anthropologist. In: Proceedings of Māori Association of Social Scientists Conference, 2008. Māori Association of Social Scientists, Wellington
George L (2010b) Tradition, invention and innovation: multiple reflections of an urban marae. Unpublished doctoral thesis. School of Social & Cultural Studies, Massey University, Auckland
George L, Dowsett G, Lawson-Te Aho K, Milne M, Pirihi W, Flower L, Ngawaka R (2017) Final report for project He Ara Toiora: suicide prevention for Ngātiwai youth through the arts (LGB-2016–28888). Ngātiwai Trust Board, Whangarei. Available at http://www.communityresearch.org.nz/wp-content/uploads/formidable/LCSRF-Final-Report-He-ara-toiora-Ngatiwai-Trust-Board-George-et-al.pdf
Hudson M, Milne M, Reynolds P, Russell K, Smith B (2010) Te Ara Tika guidelines for Māori research ethics: a framework for researchers and ethics committee members. HRCNZ, Auckland
Jackson M (2013) Ten ethics of Kaupapa Māori research. Presentation to He Manawa Whenua conference. Te Kotahi Research Institute & University of Waikato, Hamilton
Kuper A (2014) Anthropology and anthropologists: the modern British school, 3rd edn. Routledge, London
Lawson-Te Aho K (2015) The healing is in the pain: revisiting and re-narrating trauma histories as a starting point for healing. Psychol Dev Soc 26(2):181–212
Longbottom S (2016) Love lives here: realising rangatahi potential. Ngā Rangatahi Toa, Auckland. Retrieved from http://www.ngarangatahitoa.co.nz/realising-rangatahi-potential/
Lowe S, George L, Deger J (forthcoming) A deeper deep listening: doing pre-ethics fieldwork in Aotearoa New Zealand. In: George L, Tauri J, MacDonald LTAT (eds) Indigenous research ethics: claiming research sovereignty. Emerald Publishing, Bingley
Macrae DJ (2007) The Council for International Organizations of Medical Sciences (CIOMS) guidelines on ethics of clinical trials. Proc Am Thorac Soc 4:176–179. https://doi.org/10.1513/pats.200701-011GC
Malinowski B (1922) Argonauts of the Western Pacific: an account of native enterprise and adventure in the archipelagoes of Melanesian New Guinea.
Routledge and Kegan Paul, London
Martin-Hill D, Soucy D (2006) Ethical guidelines for Aboriginal research elders and healers roundtable: a report by the Indigenous Health Research Development Program to the Interagency Advisory Panel on Research Ethics. Six Nations Polytechnic/Indigenous Studies Programme/McMaster University, Ontario
Maybury-Lewis D (1992) Millennium: tribal wisdom and the modern world. Viking, New York
Mihesuah D (ed) (1998) Natives and academics: researching and writing about American Indians. University of Nebraska Press, Lincoln
Miller S (2014) The role of eugenics in research misconduct. Mo Med 111(5):386–390
Narayan K (1998) How 'native' is a native anthropologist? In: Thapan M (ed) Anthropological journeys. Sangram Books Ltd., London, pp 163–187
Newstalk ZB, the New Zealand Herald (2019) NZ has highest death rate for teenagers in developed world. Retrieved from https://www.newstalkzb.co.nz/on-air/kerre-mcivor-mornings/audio/nathan-wallis-new-zealand-ranks-bottom-of-developed-countries-on-youth-mortality-rates/
Penney L, Dobbs T (2014) Promoting whānau and youth resilience in Te Tai Tokerau: evaluation of the Northland District Health Board Suicide Prevention Project 2013. Kiwikiwi Research and Evaluation Services, Kerikeri
Radio New Zealand (2019) Number of suicides reaches 10-year peak, new data reveals. Retrieved from https://www.rnz.co.nz/news/national/394061/number-of-suicides-reaches-10-year-peak-new-data-reveals
Rodriguez M, Garcia R (2013) First, do no harm: the US sexually transmitted disease experiments in Guatemala. Am J Public Health 103(12):2122–2126. Retrieved from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3828982/
Schoder EM (2010) Paulo Freire's pedagogy of love. Unpublished doctor of education thesis, Rutgers, The State University of New Jersey, New Jersey
Seidelman WE (1988) Mengele Medicus: medicine's Nazi heritage. Milbank Q 66(2):221–239.
Retrieved from https://www.milbank.org/wp-content/uploads/mq/volume-66/issue-02/66-2Mengele-Medicus.pdf
Sherwood J, Anthony T (forthcoming) Ethical conduct in Indigenous research: it's just good manners. In: George L, Tauri J, MacDonald LTAT (eds) Indigenous research ethics: claiming research sovereignty. Emerald Publishing, Bingley
Shuster E (1997) Fifty years later: the significance of the Nuremberg code. N Engl J Med 337:1436–1440. https://doi.org/10.1056/NEJM199711133372006
Simmons DR (1976) The great New Zealand myth: a study of the discovery and origin traditions of the Māori. A.H. and A.W. Reed, Wellington
Smith LT (1999) Decolonizing methodologies: research and indigenous peoples. University of Otago Press/Zed Books, Dunedin/London
Smith LT (2012) Decolonizing methodologies: research and indigenous peoples, 2nd edn. University of Otago Press/Zed Books, Dunedin/London
Smith B, Hudson M (2012) Ethics. Presentation to Hui Whakapiripiri. Health Research Council of New Zealand, Auckland
Sotero M (2006) A conceptual model of historical trauma: implications for public health practice and research. J Health Dispar Res Pract 1(1):93–108
South African San Institute (2017) San code of research ethics. SASI, Kimberley. Available from http://www.globalcodeofconduct.org/affiliated-codes/
Tomoana N, Whittred G, Graham B, Packer D, Stuart G, Rangi G, Tupuhi G, McCabe J, Solomon M (2012) He kai kei aku ringa: the Crown-Māori economic growth partnership – strategy to 2040. Māori Economic Development Panel, Ministry of Business, Innovation & Employment, Wellington
Tuck E (2013) Commentary: decolonizing methodologies – 15 years later. AlterNative 9(4):365–372
Walker R (1987) Ngā tau tohetohe: years of anger. Penguin Books, Auckland
Wallerstein N, Duran B (2010) Community-based participatory research contributions to intervention research: the intersection of science and practice to improve health equity. Am J Public Health. Online version. e1–e6.
https://doi.org/10.2105/AJPH.2009.184036
Walters K (2007) Indigenous perspectives in survey research: conceptualising and measuring historical trauma, microaggressions, and colonial trauma response. In: Te Rito JS (ed) Proceedings of the Mātauranga Taketake: traditional knowledge conference – indigenous indicators of well-being: perspectives, practices, solutions. University of Auckland, Auckland, pp 27–44
Walters K, Simoni J (2002) Reconceptualising native women's health: an "Indigenist" stress-coping model. Am J Public Health 92(4):520–523
Watkins J (2006) Obituary: he forced us into the fray: Vine Deloria, Jr. (1933–2005). Antiquity 80:506–507
Weber-Pillwax C (2001) What is indigenous research? Can J Nativ Educ 25(2):166–174
Webster S (1998) Māori hapū as a whole way of struggle. Oceania 69:4–35
Woodthorpe K (2007) My life after death: connecting the field, the findings and the feelings. Anthropol Matters 9(1). Retrieved from https://www.anthropologymatters.com/index.php/anth_matters/article/view/54/102
Zuber-Skerritt O (2001) Chapter one: action learning and action research: paradigm, praxis and programs. In: Sankara S, Dick B, Passfield R (eds) Effective change management through action research and action learning: concepts, perspectives, processes and applications. Southern Cross University Press, Lismore, pp 1–20
Ethical Research with Hard-to-Reach Groups
39
The Example of Substance Misuse

John Sims
Contents

Introduction: What Is Meant by "Hard to Reach"? ..... 694
Definition of "Hard-to-Reach Group" ..... 696
What Makes Them Hard to Reach? ..... 697
Who Wants/Needs to Reach Them? ..... 698
Researchers and the Hard-to-Reach ..... 698
An Example of How This Can Best Be Achieved ..... 699
Another Perspective ..... 700
How Ethical Is It to "Do" Research "on" This Group? ..... 701
Conclusions ..... 702
The Value in Services Being More Flexible ..... 702
Complex Physical and Mental Pathology ..... 703
Retention and Attrition ..... 703
The Current Model Within Substance Misuse Services ..... 703
Culture Shift Within Agencies and the Dropping of Paternalism ..... 704
References ..... 704
Abstract
Individuals engaging in substance misuse present health and allied professions with a range of complex challenges, often reflecting chaotic lifestyles and high risk-taking behaviors fueled by high levels of impulsivity. By its very nature, substance misuse is antisocial: it frequently takes place out of sight, often as part of the "nighttime economy." People who have substance misuse problems are therefore difficult to reach, both to understand and to help. Illicit drug taking, being both antisocial and illegal, is frequently associated with other illegal activities such as dealing to others, prostitution, and other acquisitive criminal behavior. The "membership" concepts of in-group vs out-group apply to their actions.

J. Sims (*) Substance Misuse Service, Betsi Cadwaladr University Health Board, Caernarfon, Wales, UK
e-mail: [email protected]; [email protected]

© Springer Nature Switzerland AG 2020
R. Iphofen (ed.), Handbook of Research Ethics and Scientific Integrity, https://doi.org/10.1007/978-3-030-16759-2_32
This raises issues of trusting others sufficiently to allow them access, both for research and for assistance. This chapter examines some of these issues in order to develop a deeper understanding of the needs of the group, thereby facilitating positive behavioral change, reducing potential harm, and enabling these individuals to achieve their fullest potential by accessing appropriate treatment and care. Using the example of substance misuse, the chapter illustrates what makes groups and/or individuals "hard to reach," both for research purposes and subsequently for offers of assistance. Ethical research in such cases has to balance issues of "access" with "understanding" and "support."

Keywords
Substance use disorder · Complex behavioral challenges · High-risk behaviors · Ethical research · Hard-to-reach groups
Introduction: What Is Meant by "Hard to Reach"?

This chapter considers how best to engage with a client group who, by their very nature, are often on the fringes of society. Those who present to NHS services in the UK are among the most complex patients, often presenting with a range of interrelated problems that serve as maintaining factors in their addictive behaviors. Substance misuse is a multifaceted problem that often confronts professionals with a range of complex, interrelated issues. This chapter acknowledges some progress made by helping agencies and offers suggestions on how these initial gains can be built upon. It is hoped that this combined approach of accessing, explaining, understanding, and helping offers a model for ethical research with other hard-to-reach groups.

There have always been individuals who, for a variety of reasons, have not been part of society or have not wanted to be part of a larger group. Attempts to define this population have sometimes resulted in tautologies such as: "hard to reach populations are difficult for researchers to access" (Sydor 2013). Lambert and Wiebel (1990) attempted to define hidden populations as "those who are disadvantaged and disenfranchised: the homeless and transient, chronically mentally ill, high school drop-outs, criminal offenders, prostitutes, juvenile delinquents, gang members, runaways and other street people." People with substance misuse problems often have other associated problems such as those listed above, which adds to their complexity both as research populations to be accessed and as people in need of support and assistance. Those who struggle with substance misuse thus present anyone trying to reach out to help them with a raft of complex, interrelated issues.
The assumption behind this chapter is that being "hard to reach" is not just a matter of accessing people to study them: any research motive must be linked with an "action" motive. Merely wishing to "research" such groups without an action outcome might make them even "harder" to reach given their complex needs. Substance misusers are used here to illustrate the nature of these complexities and to aid researchers in designing "good practice" for accessing the hard-to-reach in general.

These compound issues can concern housing, employment, health, relationships, etc. People with substance misuse issues often have problems with housing and may even be homeless. If they do have housing, its stability may well be in question: decisions to spend money on things other than rent or mortgage can put their home under threat through arrears, and finding accommodation can be difficult if it is known that they have an addiction problem. Finding paid employment can be complicated by the level of chaos these individuals often have in their lives, which can make them appear somewhat unreliable; yet not all those with substance misuse problems are unemployed. This raises further issues linked to the availability and accessibility of services. Some substance misuse agencies take a traditional approach to operational hours, being available only from 9 am to 5 pm, Monday to Friday. Where this is the case, an individual with a substance misuse problem who is in employment will often find it very difficult to access services within a normal working day.

Physical health is often a highly complex area of these individuals' lives. Those who use illicit drugs and/or alcohol often have a range of physical health problems associated with their lifestyle choices. Medical professionals refer to these as physical sequelae: health problems arising as a consequence of long-term substance use and misuse. These include infection, damage to the nervous system, inappropriate bleeding such as vomiting blood, weight loss, and, in the longer term, alcohol-related cancers.
Other acquired infections linked to sexual health may also need to be addressed, and unplanned conceptions can be a further physical health factor. Perhaps inevitably, those with substance misuse problems also often have associated mental ill health as a consequence. This can range in severity from common mental health disorders (anxiety-based disorders and depression with associated anxiety) to serious mental illness such as schizophrenia and affective disorders (Edwards 2000, 2013). These compound issues alone demonstrate just how hard to reach, both for research and for assistance, those who misuse substances can be.

But other overarching concerns are who might conduct such research and how they might do it. Substance misuse services are often underfunded. They are frequently viewed as a Cinderella service within a Cinderella service; in other words, they are often managed by mental health services that are themselves under-resourced. Research is an expensive business: it costs a great deal to employ researchers to try to answer the questions relevant to understanding and helping misusers. In practice, research in substance misuse is often undertaken by clinicians who must fit it into a busy schedule of clinical work and responsibilities.

The presence of patient populations attending substance misuse treatment with accompanying underlying common mental health disorder (DSM-5) is well established in previously published research (American Psychiatric Association 2013; Grant et al. 2004; Kessler et al. 2005; Driessen et al. 2008; Boschloo et al. 2011; Sims and Iphofen 2003a, b; Sims et al. 2003a, b; Lancelot and Sims 2001; Weaver et al. 2005; Merkes et al. 2010; Ruglass et al. 2014). The extent of this comorbidity is estimated by some to be as high as 80% of substance misuse service attendees (Weaver et al. 2005). Sims (2019) recently identified this population across North Wales via a service improvement audit, finding a range of common mental health disorders (CMHD) meeting the diagnostic criteria defined in DSM-5: body dysmorphia, depression/anxiety, panic disorder, obsessive-compulsive disorder, blood-injection-injury phobia, and generalized anxiety disorder. The highest proportion met the diagnostic criteria for posttraumatic stress disorder. Comorbidity, and the difficulty patients have in accessing the right "type" of timely intervention, has also been identified as an issue (Sims et al. 2003a), referred to as the "ping-pong effect": those working in mental health services do not always have the right clinical skills (specifically credible CBT training) to deal with comorbid presentations, and equally those working in substance misuse services (SMS) often lack adequate clinical skills in CBT (Stewart and Conrod 2008). The National Institute for Health and Care Excellence (NICE) has issued a general guideline on psychological treatment for common mental health disorders; in addition, each CMHD has its own psychological treatment guideline.
Definition of "Hard-to-Reach Group"

These groups are by definition made up of individuals who feel marginalized by society and apart from the mainstream: people who display behaviors somewhat different from the presumed "majority" of society. Other such groups include those with physical and/or intellectual challenges, those who commit crime, those from ethnic minorities, etc. Included among them are those with substance misuse issues, a group that may well have unique features owing to the frequent complexity of their needs. All of these groups are by definition very difficult to engage with and consequently present challenges to health and social researchers who are trying to establish their health, social, and public healthcare needs.

The problems of how best to approach these groups have been examined in detail by a number of researchers. Svab and Kuhar (2006), Shaghaghi et al. (2011), and Bonevski et al. (2014) have looked at the range of approaches utilized across such diverse groups as LGBT groups in Slovenia, faith-based communities, those with literacy and numeracy issues, migrants, those living in social isolation in rural communities, and those living in vulnerable social and economic situations. Many lessons for researching the hard-to-reach can be drawn from these works. For example, the difficulties in talking about sexual issues, including discussion of sexual preferences and practices, vary between societies and between sectors within them, for cultural and for legal reasons. Once mutual trust, dignity, and respect have been established between the research team and participants, a more qualitative approach to sampling views and opinions appears to yield the best-quality data. Once a rapport based on these basic principles has been established, participants are then able to facilitate access to wider groups and contacts representative of the research population. Again the principles of "in-group/out-group" are evident: establishing quality contact with an initial group enables access to the wider hard-to-reach membership. The initial participants, having had a positive experience with the research team, can share that experience with the wider population and act as recruiters into the research themselves. Recruiting participants to become recruiters is a way of enlarging the target population, and this snowball effect appears to improve not only recruitment but also the reliability of the data collected. A remaining issue is what could be referred to as the "generalizability" of the data collected (Borchmann et al. 2014): in other words, are the data recorded representative of the larger group? This is hard to assess without ongoing contact with the larger group.
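The compounding effect of recruiting participants as recruiters can be illustrated with a toy model. All parameters below (seed count, coupons per participant, coupon uptake rate) are illustrative assumptions, not figures drawn from the studies cited:

```python
# Toy simulation of snowball (chain-referral) recruitment growth.
# Each recruited participant hands out a fixed number of referral
# coupons, and a fixed fraction of coupons yield a new participant.

def snowball_waves(seeds, coupons_per_person, uptake_rate, n_waves):
    """Return the cumulative sample size after each recruitment wave."""
    wave_size = seeds
    total = seeds
    sizes = [total]
    for _ in range(n_waves):
        # New recruits this wave: coupons handed out times uptake rate
        wave_size = round(wave_size * coupons_per_person * uptake_rate)
        total += wave_size
        sizes.append(total)
    return sizes

# Five seed participants, three coupons each, 40% coupon uptake
print(snowball_waves(seeds=5, coupons_per_person=3, uptake_rate=0.4, n_waves=4))
```

When each participant yields more than one new recruit on average, the sample grows geometrically; below that threshold recruitment stalls, which is one reason initial rapport and positive participant experience matter so much in this design.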
What Makes Them Hard to Reach?

Individuals with illicit drug and/or alcohol misuse problems in particular present health and allied professionals with a range of complex challenges. They often live in the shadows and are generally not viewed as part of mainstream society. As previously mentioned, a lack of affordable accommodation and homelessness mean members of this group are often transient, moving from one area to another. Consequently, any research project, and particularly a longitudinal study, faces challenges in following individuals up. By their very nature, substance misuse problems often involve a degree of stigma and possible embarrassment or shame, which can lead individuals to conceal their problem from friends and family. So even where they have accommodation, it cannot be assumed that postal addresses can be used to communicate with them, as participants themselves may not consider this "safe": their mail may, for example, be opened by others who have no knowledge of the individual's problems.

Individuals with substance misuse problems can also be suspicious of authority and institutions. Statutory service providers, such as staff employed in health and social care, may well represent authority figures to this group. If their life history involves negative experiences of these same agencies, this presents further difficulties. For example, a person placed in local authority care in the past who was then mistreated or abused in some way may find it hard to trust such authority figures again. Additionally, by engaging with authorities, their ability to adequately and appropriately parent their own children may well be brought into question. Contact with researchers, allowing access to their day-to-day living, may likewise raise questions about the appropriateness of their decision-making, their ability to "gatekeep" others' contact with their children, and what they spend their money on, as well as a range of other issues pertaining to their living conditions, including the possibility of domestic abuse (Sims and Iphofen 2003b).
Who Wants/Needs to Reach Them?

For interventions in substance misuse services, it would be useful to know the answer to a range of further questions, such as: "How can we get better clinical outcomes in patients?" This is often linked to the behaviors of individuals with substance misuse; one example is the capacity to make appropriate decisions. By its nature, misuse of psychoactive drugs and alcohol can impair the individual's ability to make rational and non-harmful decisions, as part of a general increase in impulsivity. Some impulsive decisions with regard to sexuality, for example, can lead to exposure to sexually transmitted disease, unplanned conceptions, etc. This chaos-driven decision-making can complicate an already complex presentation by creating additional problems, and high levels of impulsivity can expose individuals to further potential harm such as domestic abuse or violence, or periods of psychosis (Saycell et al. 2004). Once again, sustained research engagement may be difficult to achieve in light of such chaotic decision-making: participants may initially appear keen to respond but can rapidly change their minds as other influences derail the research engagement.
Researchers and the Hard-to-Reach

The importance of reaching these individuals is rooted in the attempt by researchers and clinicians to develop a deeper understanding of these behaviors and of how positive cognitive and behavioral change can influence clinical outcomes. A reduction in the number of such patients can only be for the good of the individual, their families and communities, and public health in general. Achieving this, however, requires gathering information that yields a deeper understanding of what is happening within these groups: first establishing the facts of what is actually going on, which then puts clinicians in a stronger position to improve clinical outcomes by supporting patients through a period of positive behavioral and cognitive change, thus reducing harm.

Shaghaghi et al. (2011) concluded that there is a range of factors researchers need to consider to ensure successful access to such groups and better identification of their needs. One is the recruitment technique employed; another is the researchers' knowledge of the group they wish to target. Researchers may not adequately explain the purpose of their research, leaving those with substance misuse problems with little or no understanding of how personal information about them may be used and how it can benefit them and others in the longer term. Some individuals with substance misuse problems also have literacy and numeracy issues and may therefore have significant difficulties answering questions appropriately. Language may also be an issue. The author has long worked in a very rural area of North Wales where the Welsh language is spoken by a majority of the population as their language of choice. Linguistic nuances may contribute to a lack of clarity and appropriateness in research, and it would be fair to say that in some parts of North Wales there is an understandable suspicion of non-Welsh speakers due to recent political history (Davies 2007). Members of this subculture may feel they are betraying others in the in-group by talking to researchers outside their group. Again, accessing the hard-to-reach may be crucially linked to researchers' ability to participate in the linguistic community of the researched.

Good outcomes, measured by high engagement, have been seen with other diverse groups such as those using public transport. Ampt and Hickman (2015) were able to accurately identify the needs of those using public transportation in Queensland, Australia, by utilizing a process of engagement. In one example, groups of teenagers who used the service kept trip diaries and were then invited to participate in a series of workshops; being invited to help redesign services gave them a sense of ownership. Another good example of engaging hard-to-reach populations was the training of women in Africa in how best to interview other female transport users. This captured factors that prevented these women from using transport for a diverse range of reasons, such as fear of snakes, fallen trees, and potholes. Planning health and social care so as to provide a range of appropriate services is a very similar process: it is important that the views of all those who use services are identified, illustrating just how important access to the marginalized group is as a requirement of good research.
An Example of How This Can Best Be Achieved

Rural settings can bring further challenges to researchers. The prevalence of heroin use has been extensively estimated in urban areas; less information is available from rural areas. North West Wales is a predominantly rural area with pockets of high social deprivation and an established heroin-using population. Pulford et al. (2003) used the capture-recapture technique in a survey aiming to estimate heroin use in North West Wales. The project ran capture-recapture analyses on four data sets: police custody data, opiate-related overdose data (admissions to A&E), hepatitis C test request data, and drug treatment service data. Five hundred and forty-four records from a 1-year period (October 2001 to September 2002) were matched using initials, dates of birth, and gender. The results were interesting: after matching, there were 322 observed heroin users (72% male and 60% aged under 30). The best-fitting model from covariate capture-recapture analyses gave a total estimate of approximately 820 heroin users among those aged 15 to 44, a prevalence of approximately 1.2% of 15- to 44-year-olds. This was lower than an estimate of 1300 obtained by multiplying the number of drug-related deaths from North West Wales by standard mortality rates. Longitudinal data collected on admission to the regional Accident and Emergency Department suggest that the frequency of heroin overdose is increasing.
This study concluded that the prevalence of heroin use in North West Wales is comparable to that reported from urban areas of the UK. Data on nonfatal overdose in the region suggest that the prevalence of heroin use may be rising.
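The core idea of capture-recapture estimation can be sketched in a few lines. The study above used four data sources and covariate log-linear models; the simpler two-source (Lincoln-Petersen) form below, with hypothetical record keys built from initials, date of birth, and gender, is intended only to show the principle:

```python
# Two-source capture-recapture (Lincoln-Petersen) sketch using
# Chapman's small-sample correction: N ~ (n1+1)(n2+1)/(m+1) - 1,
# where m is the number of individuals seen in both sources.

def chapman_estimate(source_a, source_b):
    """Estimate total population size from two overlapping samples."""
    a, b = set(source_a), set(source_b)
    n1, n2, m = len(a), len(b), len(a & b)
    return (n1 + 1) * (n2 + 1) / (m + 1) - 1

# Hypothetical matched records keyed as "initials|date of birth|gender"
police = {"JS|1978-03-01|M", "AW|1980-07-12|F", "RT|1975-11-30|M", "KL|1982-01-05|M"}
treatment = {"AW|1980-07-12|F", "RT|1975-11-30|M", "MN|1979-06-21|F"}

estimate = chapman_estimate(police, treatment)
print(f"Estimated heroin-using population: {estimate:.0f}")
```

The intuition is that a large overlap between lists implies most users are already known to services, while a small overlap implies a larger hidden population. Converting an estimate such as the study's 820 users into a prevalence then only requires dividing by the size of the local 15- to 44-year-old population.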
Another Perspective

A further example is research undertaken in 2008, again in North West Wales and again within the difficult-to-reach population of illicit drug users. Craine et al. (2008) examined data on the uptake of dried blood spot testing for hepatitis C. Prompted by poor uptake of diagnostic testing for hepatitis C, a group of researchers and clinicians explored how to improve the engagement of drug users in screening for blood-borne viruses, with the anticipated benefit of widening screening for others such as hepatitis B and HIV. Historically, blood samples had been collected by venepuncture; establishing a new, less intrusive, and less painful way of obtaining blood samples, via skin-prick sampling, resulted in a sixfold increase in uptake.

This last example also offers clinicians a further window of opportunity: the additional research data can be used to redesign and improve the quality of care provided to a vulnerable group of patients. This improvement in service provision is a result of accessing more of these "difficult-to-reach" individuals, which in turn allows clinicians and the wider community to develop a deeper understanding of their needs and their physical and mental health requirements. In addition, when patients have a positive experience of services, they are more likely to make themselves available for further enquiry to establish their needs. In the longer term, this will make a significant contribution to reducing the stigma and mistrust that can limit access to this "difficult-to-reach" group. Ultimately, this improvement in access should contribute to more appropriate, better-quality service provision that goes further to meet their needs. Finally, recent developments within substance misuse services suggest that these individuals prefer to talk to others who have a history of substance misuse themselves.
The Intuitive Recovery program is a good example of how those with addiction histories can use their life experience, and the change they have achieved, as a force for good, modelling recovery to others coming after them (Elm et al. 2016). Individuals with substance misuse problems are a good example of the hard-to-reach because they are highly complex. They do not often present simply with a problem of using alcohol, illicit drugs, or both; they usually also have a range of other problems that present frontline clinicians with a number of challenges, most of which have been highlighted above. These additional challenges are wide-ranging and often interrelated, including issues specifically associated with mental ill health, accommodation, and physical health. In fairness, it is difficult for all of these issues to be addressed by one agency alone. The best clinical outcomes are often achieved by working collaboratively with other agencies and with patients. This is delicate work and involves establishing a working relationship based on mutual respect and dignity. Those with substance misuse issues are often found on the fringes of society and often experience being sent between agencies, as the agencies themselves are confused as to how best to respond. This "ping-pong" effect has been identified previously (Sims et al. 2003a).
How Ethical Is It to "Do" Research "on" This Group?

All of the above highlights the level of vulnerability present within this patient population. Key questions arise concerning vulnerability within hard-to-reach populations and how it could be exploited by researchers curious about this group in itself and/or as representative of the hard-to-reach. The more complex the presentation, the greater the vulnerability. The ethical issues associated with hard-to-reach groups have been explored by others, most recently by Lucy Pickering (2018). The most obvious concern is the increased vulnerability of individuals who are intoxicated and their recruitment into research. How ethical is it to recruit these individuals into any research project when they are likely to be under the influence of drugs or alcohol while being surveyed, and how accurate or valuable are the data collected? Can they give free and valid informed consent while under the influence? One way to address this would be to take a biochemical measurement, such as breath testing or blood screening, prior to recruitment, and then exclude those who are obviously under the influence (Carter and Hall 2013). This in turn raises the question of what constitutes or defines a state of sobriety. Most individuals who use psychoactive drugs, whether illicit drugs and/or alcohol, can quickly become chemically dependent on their drug of choice; they can then continue to appear sober even with large quantities of drugs or alcohol in their systems. There is the story from Alcoholics Anonymous of the transatlantic jumbo jet pilot who first decided to do something about his drinking having arrived in New York without any recollection of flying the plane there. These nuances and anomalies, which clinicians working in substance misuse treatment have to deal with, are discussed at length by Pickering (2018) and Sandberg and Copes (2013).
A popular recruitment method has been the use of cash payment, and past studies have identified payment as a major incentive for taking part in research. A potential problem with this approach is that providing payment or reward for participation can also increase participants' vulnerabilities. Specifically, there have been cases where those being paid have been coerced into surrendering their payment to dealers as a further way for the dealers to generate income (Fry 2005). More broadly, exploitation has been identified historically where vulnerable individuals have been threatened with violence unless they participate in surveys that pay for participation (Mosher et al. 2015). Further work has explored participants' own views of payment: it can be seen as a way of getting easy money, or as an easier way of earning cash than sex work or other acquisitive criminal activity (Bell and Salmon 2011, 2012).
Taking payment for answers to questions, or for providing explanatory dialogue about their lifestyle choices, may not necessarily produce an accurate account of participants' motives and behavior: participants may feel coerced by researchers into giving the answers they think are expected. Researchers have also attempted to change rewards from hard cash to tokens that can be redeemed for cash on completion of research participation. This appears to create an internal economy within the participant population. The respondent-driven sampling model has revealed further coercive behavior on the part of others, leading to exploitation of vulnerable participants; another identified consequence is the selling of coupons to others for cash (Scott 2008). It could be argued that any use of payment for participation in research could contaminate the data, since participants may feel obliged to give the research team the information they think it wants, or even fabricate responses, to ensure continuing engagement and further cash payment.

The discussion of paternalism in research with those who have substance misuse problems raises its own challenges. Society appears largely to buy into the stereotypical image of such individuals: an image of hopelessness and powerlessness, of people living within a society that represents a system they do not understand and have difficulty fitting into, and who consequently live on its edge. Clinical experience suggests that vulnerability is indeed an issue among the majority of patients with substance misuse problems. Where society perceives high levels of vulnerability in a group of individuals it feels has limited, if any, ability to change, it has historically felt obliged to intervene in some way. The debate over causation, such as the biophysical model versus behaviorism, cannot be addressed here.
The general consensus is that anything that smacks of paternalism is unlikely to appeal to hard-to-reach groups and less likely to result in effective research engagements.
Conclusions

It is also clear that one must never assume hard-to-reach groups are naïve about their potential vulnerability. Those who have substance misuse issues, for example, can be far more manipulative and astute in managing social relations than we may assume: they are often highly resourceful individuals, able to use situations and other people to their own benefit.
The Value in Services Being More Flexible

One issue here is that service users in employment must ask their employer for time off to attend for help, which requires them to disclose why the time is needed. This can lead employers to make judgments about the employee, and employers may also feel the absence contributes to lost production. There is a need for services to be more flexible in their availability and to offer out-of-hours services, which would be more convenient for service users and would consequently lead to better compliance with the treatment process. The same flexibility should also apply to researchers looking to engage the hard-to-reach. Routinely involving service users in the development of services is a way of empowering them and facilitating positive behavioral change through an increased sense of ownership. Feeling that you count, that you are listened to, and that you are thereby able to exert influence can only increase a person's sense of well-being and of value, another valuable lesson for research with such groups.
Complex Physical and Mental Pathology
Detailed information on this is provided by the Royal College of Physicians (1987) in their publication A Great and Growing Evil: The Medical Consequences of Alcohol Abuse. Sustained drug and alcohol misuse can have a devastating impact on an individual's physical, mental, and social functioning. Services need to work in a more responsive and collaborative way to improve the quality and appropriateness of what they offer. But the provision of services must not be confused with the discovery of the vital understandings that good-quality research can offer.
Retention and Attrition
Earlier in this chapter, we discussed the ways in which we try to engage individuals in research in order to develop a deeper understanding of their needs, so that services can be developed to respond to those needs more accurately. This process can involve enticement, whether money, tokens, or the promise of more appropriate services. In addition, we "sell" the potential participant the promise of empowerment and a greater role in the running of "things" by sharing our power base as providers. For all of this to happen, it is very useful for service providers to establish through research what those needs actually are. A number of individuals take part in research, but not all of them complete the process of enquiry. Those with drug and/or alcohol problems often lead chaotic lives. This chaos contributes significantly to increased impulsivity, which in turn produces high levels of attrition among this hard-to-reach group. Increased impulsivity also adds to the individual's vulnerability.
The Current Model Within Substance Misuse Services
Currently, there appears to be a model of user-led involvement not just in the development of services for those with substance misuse but also in the day-to-day delivery of services. This means that service users are now involved in decision-making at every level, including the recruitment of staff. Sims et al. (2007) looked
at initializing the involvement of service users within the substance misuse services in North West Wales (see also Sims et al. 2008). This was a sincere move to meet individual service users through collaborative dialogues that brought about real change. Those who used the service were recruited as service consultants and began to advise on a full range of issues affecting the service: its function, role, and development, including the recruitment of staff. Since that time, service users have always been involved in recruiting new staff. This also enabled an increase in research in the area through improved relations between those who provided services and those who accessed them.
Culture Shift Within Agencies and the Dropping of Paternalism
As with all change, some time was required for everyone to adjust. Staff who had worked in the specialism for some time initially found these changes in the perceived power base a threat. The modus operandi of substance misuse services is now based on a partnership and collaborative approach to decision-making. Facilitating this cultural change within the agency has enabled the service to grow collaboratively with the people we, as service providers, serve. The result is a less paternalistic approach: service users are empowered by their involvement at all levels, including research. This negates the need to entice individuals into participating in research; instead, they are encouraged to see themselves as collaborators or even co-researchers in an action-oriented project that could have benefits not only for themselves. Those who misuse substances clearly constitute a distinct category of the hard-to-reach, and, in many respects, it is difficult to draw conclusions about what works ethically with all hard-to-reach groups. However, it does seem clear that collaborative approaches, with researchers working alongside service providers, clinicians, and the users themselves, offer the best hope of research that has integrity, helps in understanding the thoughts and behavior of the group, and, hopefully, results in effective service provision and improved clinical outcomes.
References
American Psychiatric Association (2013) Diagnostic and statistical manual of mental disorders, 5th edn. American Psychiatric Publishing, Washington, DC
Ampt E, Hickman M (2015) Workshop synthesis: survey methods for hard-to-reach groups and modes. Transport Res Proc 11:475–480
Bell K, Salmon A (2011) What women who use drugs have to say about ethical research: findings of an exploratory qualitative study. J Empir Res Hum Res Ethics 6(4):84–89
Bell K, Salmon A (2012) Good intentions and dangerous assumptions: research ethics committees and illicit drug use research. Res Ethics 8(4):191–199
Bonevski B, Randell M, Paul C, Chapman K, Twyman L, Bryant J, Brozek I, Hughes C (2014) Reaching the hard-to-reach: a systematic review of strategies for improving health and medical research with socially disadvantaged groups. BMC Med Res Methodol 14(42):1–29
Borchmann R, Patterson S, Poovendran D, Wilson D, Weaver T (2014) Influences on recruitment to randomised controlled trials in mental health settings in England: a national cross-sectional
survey of researchers working for the mental health research network. BMC Med Res Methodol 14(23):1–11
Boschloo L, Vogelzangs N, Smit JH, Brink W, Veltman DJ, Beekman AT, Penninx BW (2011) Comorbidity and risk indicators for alcohol disorders among persons with anxiety and/or depressive disorders: findings from the Netherlands Study of Depression and Anxiety (NESDA). J Affect Disord 131(1–3):233–242. https://doi.org/10.1016/j.jad.2010.12.014
Carter A, Hall W (2013) Ethical implications of research on craving. Addict Behav 38(2):1593–1599
Crane N, Parry J, O'Toole J, D'Arcy S, Lyons M (2008) Improving blood-borne viral diagnosis: clinical audit of the uptake of dried blood spot testing offered by a substance misuse service. J Viral Hepat 16:219–222. https://doi.org/10.1111/j.1365-2893.2008.01061.x
Davies J (2007) A history of Wales, rev edn. Penguin, London
Driessen M, Schulte S, Luedecke C, Schafer I, Sutmann F, Durisin M, Kemper U, Koesters G, Chodzinski C, Schneider U, Broese T, Dette C, Havermann-Reinecke U (2008) Trauma and PTSD in patients with alcohol, drug or dual dependence: a multi-center study. Alcohol Clin Exp Res 32(3):481–488. https://doi.org/10.1111/j.1530-0277.2007.00591.x
Edwards G (2000) Alcohol: the ambiguous molecule. St Martin's Press, New York
Edwards G (2013) Alcohol: the world's favourite drug. Penguin Books
Elm JHL, Lewis JP, Walters KL, Self JM (2016) "I'm in this world for a reason": resilience and recovery among American Indian and Alaska Native two-spirit women. J Lesbian Stud 20(3–4):352–371. https://doi.org/10.1080/10894160.2016.1152813
Fry CL, Ritter A, Baldwin S, Bowen KL, Gardiner P, Holt T, Jenkinson R, Johnson J (2005) Paying research participants: a study of current practices in Australia. J Med Ethics 31:542–547
Grant BF, Stinson FS, Dawson DA, Chou SP, Dufour M, Compton W, Pickering RP, Kaplan K (2004) Prevalence and co-occurrence of substance use disorders and independent mood and anxiety disorders: results from the National Epidemiologic Survey on Alcohol and Related Conditions. Arch Gen Psychiatry 61(8):807–816
Kessler RC, Chiu WT, Demler O, Merikangas KR, Walters EE (2005) Prevalence, severity, and comorbidity of 12-month DSM-IV disorders in the National Comorbidity Survey Replication. Arch Gen Psychiatry 62(6):617–627
Lambert EY, Wiebel WW (eds) (1990) The collection and interpretation of data from hidden populations. United States National Institute on Drug Abuse, Washington, DC. http://www.drugabuse.gov/pdf/monographs/download98.html
Lancelot A, Sims J (2001) Mental illness and substance abuse. Nurs Times 97(39):37
Merkes M, Lewis V, Canaway R (2010) Supporting good practice in the provision of services to people with comorbid mental health and alcohol and other drug problems in Australia: describing key elements of good service models. BMC Health Serv Res 10:325. http://www.biomedcentral.com/1472-6963/10/325
Mosher HI, Moorthi G, Li J, Weeks MR (2015) A qualitative analysis of peer recruitment pressures in respondent driven sampling: are risks above the ethical limit? Int J Drug Policy 26(9):832–842
Pickering L (2018) Paternalism and ethics of researching with people who use drugs. In: Iphofen R, Tolich M (eds) The SAGE handbook of qualitative research ethics. SAGE Publications Ltd, London, pp 411–426
Pulford M, Craine N, Walker AM, Hope V, Hickman M (2003) Conference paper/poster, substance misuse conference, Caernarfon, Gwynedd
Ruglass LM, Lopez-Castro T, Cheref S, Papini S, Hien DA (2014) At the crossroads: the intersection of substance use disorders, anxiety disorders, and posttraumatic stress. Curr Psychiatry Rep 16:505. https://doi.org/10.1007/s11920-014-0505-5
Sandberg S, Copes H (2013) Speaking with ethnographers: the challenges of researching drug dealers and offenders. J Drug Issues 43(2):176–197
Saycell K, Sims J, Hugheston-Roberts J, Hazeldine K, Lancelot A, Underwood P, Williams H (2004) A community treatment paradigm for serious and enduring mental illness in North West Wales. Ment Health Pract 8(1):20–24
Scott G (2008) 'They got their program, and I got mine': a cautionary tale concerning the ethical implications of using respondent-driven sampling to study injection drug users. Int J Drug Policy 19(1):42–51
Shaghaghi A, Bhopal RS, Sheikh A (2011) Approaches to recruiting 'hard-to-reach' populations into research: a review of the literature. Health Promot Perspect 1(2):86–94
Sims J (2019) Mental health needs of substance misuse patients in Wales. J Public Ment Health
Sims J, Iphofen R (2003a) Primary care assessment of hazardous and harmful drinkers: a literature review. J Subst Use 8(3):176–181
Sims J, Iphofen R (2003b) Parental substance misuse and its effect on children. Drug Alcohol Prof 3(3):33–40
Sims J, Payne K, Iphofen R (2003a) The triangular treatment paradigm in dual diagnosis clients with a mental illness. J Subst Use 8(2):112–118. https://doi.org/10.1080/1465989031000109815
Sims J, Iphofen R, Evans A (2003b) Rapid access treatment model for community-based opiate detoxification. The British Library: Special Acquisitions Unit, Boston Spa, Wetherby. Catalogue number MO3/37596
Sims J, Kristian MR, Pritchard K, Jones L (2007) Paired intervention model for identified alcohol misuse in secondary care. Pract Dev Health Care 6(4):221–231
Sims J, Kristian MR, Pritchard K, Jones L (2008) Practice development in substance misuse services: changing the patient experience of services. Pract Dev Health Care 7(1):4–14
Stewart SH, Conrod P (2008) Anxiety disorder and substance use disorder comorbidity: common themes and future directions. In: Stewart SH, Conrod PJ (eds) Anxiety and substance use disorders. https://doi.org/10.1007/978-0-387-74290-8_13
Svab A, Kuhar R (2006) Researching hard-to-reach social groups: the example of gay and lesbian population in Slovenia. Faculty of Social Sciences, The Peace Institute, Ljubljana
Sydor A (2013) Conducting research into hard-to-reach populations. Nurse Res 20(3):33–37
The Royal College of Physicians (1987) A great and growing evil: the medical consequences of alcohol abuse. Tavistock Publications, New York
Weaver T, Charles V, Madden P, Renton A (2005) Co-morbidity of substance misuse and mental illness collaborative study (COSMIC): the study of the prevalence and management of co-morbidity amongst adult substance misuse and mental health treatment populations. Department of Health – Drug Misuse Research Initiative
40 Queer Literacy: Visibility, Representation, and LGBT+ Research Ethics
Mark Edward and Chris Greenough
Contents
Introduction . . . 708
Researching LGBT+ Lives . . . 709
Queer Literacy . . . 710
Authenticity: Metaphorical Closets and Performativity . . . 712
Representation and Visibility of Trans Lives . . . 713
Self-Disclosures . . . 714
"Undoing Theory" . . . 715
Conclusion . . . 717
References . . . 718
Abstract
Qualitative research embraced a reflexive turn in the 1970s and 1980s, and since that time it has been particularly useful for exploring the experiences of nonnormative individuals. This chapter explores the ethical value of "queer literacy," which allows researchers intending to undertake work with LGBT+ groups to develop a nuanced understanding of various nonheterosexual and noncisgender identifications. Contemporary discussions of gender/sexuality can provoke anxiety, often because of the variety of terms used, as well as concerns for sensitivity when engaging with intimate details of others' experiences. The aim of queer literacy is to equip readers with the ability to form knowledge and develop understanding in order to engage with queer experiences. This allows researchers of LGBT+ lives to discuss issues confidently, accurately, and critically. The themes emerging from this discussion of queer literacy include identification, authenticity, representation, and visibility, all of which are important when engaging with nonnormative lives. For LGBT+ individuals, authenticity is often perceived as achievable through self-representation and visibility: "coming out." Yet, in terms of ethics, visibility is problematized because of the real concern for possible prejudice-based attacks on participants. This chapter takes the heat out of such concerns by shedding light on how to engage effectively with LGBT+ groups. Finally, the chapter argues that queer-literate ethical researchers of/with LGBT+ communities must adhere to Viviane Namaste's three key principles of relevance, equity in partnership, and ownership when conducting ethical ethnographic research.
Keywords
Ethics · LGBT+ · Queer literacy · Nonnormative · Visibility · Representation
M. Edward (*)
Department of Performing Arts, Edge Hill University, Ormskirk, Lancashire, UK
e-mail: [email protected]
C. Greenough
Department of Secondary Education, Edge Hill University, Ormskirk, Lancashire, UK
e-mail: [email protected]
© Springer Nature Switzerland AG 2020
R. Iphofen (ed.), Handbook of Research Ethics and Scientific Integrity, https://doi.org/10.1007/978-3-030-16759-2_57
Introduction
Qualitative research embraced a reflexive turn in the 1970s and 1980s and, since that time, has been particularly useful for exploring the experiences of nonnormative individuals. The acronym LGBTQ (Lesbian, Gay, Bisexual, Transgender, Queer) has often been used to denote a queer grouping, with multiple and often lengthy variations such as LGBTQIAA (Lesbian, Gay, Bisexual, Transgender, Queer, Intersex, Asexual, Ally), more commonly expressed as LGBT+. However, Daniel Warner states that "queer research should stop adding letters to LGBT research, and should instead form a body of knowledge about how these categories come to be, and are lived, on a daily basis" (2004: 335). "Nonnormative" is a useful term that moves away from complex acronyms while adequately describing nonheterosexual and noncisgender individuals (cisgender or "cis" refers to an individual whose gender identity is aligned with the sex they were assigned at birth). For the purposes of this chapter, we use both terms: "nonnormative" and LGBT+. This chapter explores the ethical value of "queer literacy," which allows researchers who intend to undertake work with LGBT+ groups to develop a nuanced understanding of various nonheterosexual and noncisgender identifications. Contemporary discussions of gender/sexuality can provoke anxiety, often because of the variety of terms used, as well as concerns for sensitivity when engaging with intimate details of others' experiences. The aim of queer literacy is to equip readers with the ability to form knowledge and develop understanding in order to engage with nonnormative experiences. This allows researchers of LGBT+ groups to discuss issues confidently, accurately, and critically. The themes emerging from this discussion of queer literacy include identification, authenticity, representation, and visibility: all of which are important when engaging with nonnormative lives.
For LGBT+ individuals, authenticity is often perceived as
achievable through self-representation and visibility, often demarcated through the process of "coming out." Yet, in terms of ethics, visibility is problematized because of the real concern for possible prejudice-based actions. This chapter takes the heat out of such concerns by shedding light on how to engage effectively with LGBT+ groups. Finally, the chapter argues that queer-literate ethical researchers of/with LGBT+ communities must adhere to Viviane Namaste's (2009) three key principles of relevance, equity in partnership, and ownership when conducting empirical research.
Researching LGBT+ Lives
The qualitative turn to explore nonnormative gender and sexuality has resulted in a wealth of publications that span academic disciplines: arts, humanities, social sciences, medicine. The popularity of such research has resulted in the development of numerous handbooks, textbooks, and journals devoted entirely to the study of sexualities and gender. The scholarship in this area reveals the shaky foundations of heteronormativity, negates binary formulations of gender, and erases histories of what was once considered sexual deviancy: nonheterosexuality. Such academic enterprise speaks back to dominant hegemonies of cisgender and "compulsory heterosexuality" (Rich 1980). Historical researchers of nonnormative individuals and communities engage in what Joan Scott terms "enlarging the picture" (1991). Exploring LGBT+ experiences is "to render historical what has hitherto been hidden from history" (1991: 775). Scott writes before the boom in queer theory within the academy, and her words ring true for all researchers of LGBT+ lives:
Numbers – massed bodies – constitute a movement and this, even if subterranean, belies enforced silences about the range and diversity of human sexual practices. Making the movement visible breaks the silence about it, challenges prevailing notions, and opens up possibilities for everyone. (1991: 774)
Despite this burgeoning field of much-needed work, a glimpse back in history reminds us of the brutality of research conducted with LGBT+ individuals: medical and scientific research with sexual minorities was often punctuated with physical and psychological violations of LGBT+ research subjects. Such research often focused on attempting to eradicate any physical cause of deviant sexualities, using a range of brutal treatments, including extreme physical medical procedures: castrations, lobotomies, clitoridectomies, and shock treatments. A further glimpse across the globe beyond the West at the time of writing reveals 73 countries where homosexuality is illegal (data provided by the International Lesbian, Gay, Bisexual, Trans and Intersex Association "ILGA": https://76crimes.com). The death penalty is still in force in eight of these countries; elsewhere, sanctions range from life imprisonment to laws that prohibit homosexuality but do not penalize it. Support groups for LGBT+ people exist in these countries and in more progressive countries
across the globe; we must not lose sight of the activism still needed in and beyond the academy. Beyond legislation and political acceptance, homophobia and transphobia remain real. Legal and political recognition of LGBT+ identities and lives does not lead to the eradication of homophobia or transphobia; activism remains important even in countries where homosexual and transgender equality has been legalized, as prejudice-based rhetoric and violence still exist. Researchers investigating nonnormative sexuality and gender can be subjected to excessive caution from research ethics committees, even in locations where the rights of LGBT+ people are protected in legislation. Tufford et al. (2012) use four case study examples to reveal how ethics review boards "engage in ostensibly protective stances regarding potential risks and informed consent that are unwittingly founded upon negative stereotypes of LGB populations" (2012: 221). Tufford et al.'s work shows how procedures within ethics committees were challenged by applicants, who in turn sought to educate the committee. The considerations of queer literacy in this chapter are significant not only for LGBT+ individuals and communities or researchers of LGBT+ lives, but for all those interested in research ethics within academic settings, especially gatekeepers.
Queer Literacy
As research and knowledge production around LGBT+ people were historically undertaken largely by nonqueer people, queer lives bring forth rich, alternative perspectives on their identities, representations, and language. For Scott, enlarging the picture reveals "a metaphor of visibility as literal transparency" (1991: 775). Sieber and Tolich discuss "human research literacy" (2013: 204), which aims to build understanding and trust within social research design and execution. Issues involving nonnormative gender and sexuality may cause discomfort for some people, whether through their own prejudice-based views, their religious beliefs, or a negative experience of LGBT+ culture; there is also discomfort around the content and language used. The LGBT+ community and its allies have had to work hard to remove an element of taboo around its status. Researchers of such communities have engaged with sites of controversy and intimacy to bring about fruitful human inquiry and public dissemination. In seeking to avoid offence or political incorrectness, some people hesitate, or even stutter, to find the most appropriate terms to describe LGBT+ groups and individuals. Queer literacy aims to provide the knowledge, understanding, and skills to engage with real lives. It is not aimed solely at the LGBT+ community, or at researchers of LGBT+ lives; it matters and needs to be taken seriously by everyone. It enables people in society to understand and discuss nonnormative gender and sexuality, and related issues, with confidence, accuracy, and criticality. Within LGBT+ culture, the concept of literacy is paramount, given the variation in terms individuals use to self-identify. Engaging with such terms means researchers can avoid drowning in acronymic alphabet soup, as there exist multiple and often lengthy variations such as LGBTQIAA, more commonly expressed as LGBT+.
Of course, cultural literacy of the LGBT+ community relies on much more than a knowledge of the terms individuals may use to self-identify. Researchers should not lose sight of the dominant prevailing social hierarchies which have traditionally regulated gender, sex, and sexuality. Nonnormative expressions of gender and sexuality have always existed, but they have been delegitimized and pathologized by norms, laws, and other political and social constructions. Queer literacy avoids the assumption that people are heterosexual or cisgender. It does not presume gender or sexuality but invites individuals to self-identify using their own words, recognizing that such a self-identification may be temporal, fluid, or complex. The use of pronouns is particularly important for trans individuals, so researchers need to exercise respectful sensitivity in eliciting and using the correct pronouns. One way in which allies of communities can support this work is to offer their own pronouns too. Queer literacy thus avoids prioritizing a definitive list of terms of identification, as such a list leaves no room for the creative ambiguity that is at the heart of nonnormative identifications. Gender and sex are unhinged: gender is a construct, revealed as truly flexible and without foundations, and gender identity is creatively unbound and extends beyond what genitalia one has (sex). Moreover, queer literacy does not relate only to sexuality or gender, as it recognizes the inessential nature of human identity. Scott (1991) and Spelman (1988) expose how performativity contests the essentialist nature of identity categories. Whereas identity is often thought of in terms of numerous fixed categories, performativity exposes how identity only seems fixed because it has been repeated (Butler 1990). Fielder and Ezzy (2018) discuss how the "queer self is a continually created and transformed self.
In this sense, 'queer' is anti-identitarian, in which the self is never crystallized" (2018: 15). Therefore, intersections such as location, culture, heritage, language, age, race, religion, spirituality, social class, body type, and dis/ability, to name a few, are not extractable. Elizabeth Spelman (1988) has superbly contended that identities are not like "pop-beads" with variables incoherently strung together, allowing a researcher to "pop off" an identity for analysis or scrutiny:
One's gender identity is not related to one's racial and class identity as the parts of pop-bead necklaces are related, separable and insertable in other "strands" with different racial and class "parts." (1988: 15)
The focus on experience in LGBT+ lives means that much empirical research engages with the messiness of lived experience. Stories of experience explored through life-story narrative research or phenomenological investigation are often considered messy, simply because human lives are messy. Tina Cook describes the "messy turn" as "the interface between the known and the nearly known, between knowledge in use and tacit knowledge as yet to be useful, [. . .] the 'messy area' as a vital element for seeing, disrupting, analysing, learning, knowing and changing" (2009: 277). Life narratives may be told chronologically or in reverse, and episodes may jump from one to another. Equally problematic is the awareness that memory can be an unreliable tool in its attempt to construct and narrate the past.
Individuals engage in a process of editing and redaction as stories are told. Simply, there is no straightforward, direct access to "truth." Queer literacy acknowledges that experience in the lives of LGBT+ people is relational and emotional. Researching others' experiences requires a high degree of emotional intelligence to steer the dialogue and protect partners from vulnerability while they relive distressing aspects of their lives. That said, a competent researcher does not need to function as a counsellor – although there are undoubtedly times when such skills can become ethically relevant – but they should be aware of professional organizations that can help partners deal with trauma or distress.
Authenticity: Metaphorical Closets and Performativity
Within LGBT+ lives, the notion of performativity is captured by the stage direction "coming out" and a metaphorical prop, "the closet." The work for equality and the use of digital technologies in the global West have meant that many people are coming out at a younger age than previously. Heteronormativity renders the experience of coming out unique to LGBT+ individuals. In this context, coming out is a milestone which denotes the beginning of an authentic self. This self-identification is not fixed, of course, but coming out points to an individual who has spent a long time figuring themselves out, finding the right words to express themselves, and mobilizing the courage to share this fragile internal selfhood with others. With such agency comes empowerment. Ken Plummer (1995: 58) cites four critical processes at play when an individual comes out: coming out personally (to oneself), coming out privately (to specific others, family, and friends), coming out publicly (one's self-identification becomes public knowledge), and coming out politically (the individual uses one's story for consciousness raising and social change). Plummer is right to qualify that these processes need not occur in any particular order. In terms of methods, Greenough (2017) has observed how the use of social media allows people to "try out" their identifications before making them public, bridging the gap between coming out personally, privately, and publicly. Coming out is an act which is both transgressive and activist. The notion of authenticity as key to LGBT+ lives has been powerfully articulated by Fielder and Ezzy (2018). Their work reveals how coming out is more than choice, dispelling the assumption that postmodern gender and sexuality are equivalent to surveying a buffet and selecting what takes one's fancy. They state "choice, as an end in itself, does not confer authenticity.
Choice is rather the means to an end, and what is done with the choice matters” (Fielder and Ezzy 2018: 18). Authenticity is more than choosing labels; it is about engaging in a process which renders visible what has been suppressed by normative limitations on gender and sexuality. Quotidian performativity of this nature occupies a social space within which queerness and nonnormativity create dialogue.
There is a danger that the notion of coming out and self-authenticity can be idealized; coming out should not be romanticized. The process of coming out requires an individual to repeatedly come out and to stay out. The transgression lies in rendering visible what has been hidden, per Scott; therefore, the themes of representation and visibility are integral to LGBT+ experiences. The closet represents hiddenness, yet experiences of coming out can often be painful, traumatic, and negative:
As well as having a suffering at the base of these stories, there is also the harbouring of a secret and a silence which may eventually be given a voice, disclosed, brought out of the private world into the public one. Many important issues appear here: of privacy, of lying, of passing, of defences, of exposure, of deceptions, of transparency. (Plummer 1995: 56)
The ethical researcher must be sensitive to their partners' relationship vis-à-vis the metaphorical closet. Individuals who choose not to be out will have valid, well-considered reasons not to speak out about their gender/sexuality. It should be remembered that closets can be safe spaces. Even where this choice signals powerlessness or negative self-esteem, the choice not to be out should be respected. Similarly, in relation to bisexual lives, some note that bisexuals are able to "pass" as heterosexual. This is a nuanced double-edged sword, as judiciously put by Kathleen Jowitt: "in my experience passing as straight is almost as tiring as repeatedly coming out" (2017: 116).
Representation and Visibility of Trans Lives

In terms of coming out as trans, as well as the personal journey of self-exploration shared by LGB people, the process of coming out often denotes a physical and performative transformation, through which self-authenticity is rendered visible. Ethical considerations concerning transgender people arise for healthcare professionals, including the surgeons and psychiatrists who vet the process of transitioning. The World Professional Association for Transgender Health has published protocols for professionals working with those who wish to undergo hormonal or physical transitions to the other gender: Standards of Care for the Health of Transsexual, Transgender, and Gender Nonconforming People. Unlike lesbian, gay, and bisexual people, those wishing to undergo gender reassignment therapy are required to "test out" the visible expression of their gender and regularly report back to professionals as part of this process. In this regard, it is important to observe how cisgender individuals are not required to test out their gender before it is assigned! The notion of presenting and performing a gender visibilizes the process for trans people – a process which is physical, emotional, social, and legal. The process of presenting often results in culturally induced stress: the challenge of being different and visible within what remains a largely heteronormative society. Such visibility demands courage from the individual and support from communities, as fears of reprisals or prejudice-based attacks are real.
714
M. Edward and C. Greenough
Researchers working with transgender people must be aware that requests to discuss past history (i.e., life in the sex assigned at birth) can often be unwelcome and uncomfortable. Equally important is to recognize the process that takes place due to medical intervention. Those undertaking hormone therapies will notice physical changes within themselves, as well as being aware of the medical risks that accompany such therapy. For trans individuals who undergo surgery, this is major surgery and therefore carries its own risks. Of course, not all people who identify as trans choose to engage with hormone therapy or elective surgery. Some may never transition gender in public spaces. The terms other noncisgender individuals use to identify themselves are multiple, including trans, nonbinary, and genderqueer. Some individuals describe themselves as gender-fluid, which can result in a presentation of gender ambiguity or androgyny. Some people feel they have no gender to speak about: agender. Researchers who engage with individuals who identify as noncisgender should ask what terms of identification and what pronouns individuals use. They should equally be aware of whether such metalanguage is constant or fluid. In addition, researchers should be careful to note how an individual may describe their body in different terms to their mind. In terms of representation, many LGBT+ identifying individuals have spent a long time figuring themselves out. Therefore, while ethics panels may insist on anonymity or confidentiality of partners within a study, this can seem an arbitrary measure to those who take pride in narrating who they are. There must be a balance between the requirement for anonymity (in cases where a participant may require such a measure for safety and safeguarding) and allowing hard-won rights and self-presentation to be documented, in cases where there is no risk to participants.
The caution remains, however, that individuals who do not wish to be identified should not be at risk of unwanted disclosures, or forced “outings.”
Self-Disclosures

Empirical research has often been saturated with presumed heterosexuality. Researchers who carry out studies in gender and sexuality are accustomed to self-disclosing their positionality from the beginning of the study, offering perfunctory statements referencing their gender, sexuality, social location, race, ethnicity, and so on. While some would argue that to offer private information about oneself in writing can lead to embarrassment or narcissism, Nancy Miller describes self-disclosure as an "autobiographical pact" (1991: 24):

. . . binding writer to reader in a fabulation of self-truth, that what is at stake matters also to others: somewhere in the self-fiction of the personal voice is a belief that the writing is worth the risk. (Miller 1991: 24)
Miller observes how she enjoys reading such autobiographical disclosures, noting her preference for “the gossipy grain of situated writing to the academic sublime” (1991: xi). Edward (2018a) describes his use of autobiographical writing as
"mesearch," observing how "mesearch, as a methodology, is characterised by a triangulation of autoethnography, research/practice and selfhood" (2018a: 42). Elsewhere, Edward states how "mesearchers engage in journeys which explore the self in position to the research area, and this positioning of emotions leads to self-questioning, doubt, concern, personalization, honesty" (2018b: 171). Edward's work exposes how personalizing the research can be transformative and eliminate boundaries between researcher and researched. To engage in self-disclosure can raise ethical questions of objectivity, vulnerability of the researcher, bias, and influence. Yet, qualitative research relies on relationships more than processes; therefore, the building and maintaining of relationships with individuals and research communities are paramount in conducting effective co-committal work.
"Undoing Theory"

Judith Butler (1990) has emerged as a central figure within queer theory. Building on the work of feminist theory and poststructuralism, Butler exposes how gender is a construct and how binary notions of gender (male/female, masculine/feminine) exist only because of hegemonic heteronormative repetitions through performativity. Butler suggests that once such repetitions are ruptured, gender is exposed as artificial. Queer theory has made some valid and valuable contributions to the notion of gender as performative and unstable, and this has permeated academic disciplines, spreading queerness across the academy within the global West. Yet, one critique leveled against the work of Butler emerges from Viviane Namaste. In her concern with "The Transgender Question," Namaste exposes how work is often produced on an academic, theoretical, and philosophical level. In advocating a move away from theorizing identity politics, Namaste calls for "meaningful social research, and therefore meaningful theory" (2009: 24), by involving marginal people within the knowledge production about themselves, therefore transforming the process of knowledge. She states that "theory would be well served by actually speaking with everyday women about their lives" (2009: 27), criticizing academic investigations which are not rooted in individual experiences. Similarly, in advocating a move away from theoretical ruminations about LGBT+ lives, we suggest the focus must be grounded in experience. Namaste outlines three key principles for working with communities: relevance, equity in partnership, and ownership (2009: 24). Considering the first principle of relevance, Namaste states how this "means being able to demonstrate that the knowledge produced will be useful to the people and communities under investigation" (2009: 25). Therefore, the researcher's position should be one which bears fruit for the community integral to the study.
One of the consequences of relevance, we argue, is that there should be impact and improvement for the community in terms of knowledge generated and disseminated, as well as offering clear information pertaining to the methods used to garner new knowledge. Throughout the investigation, researchers must consider how the
research is not entirely self-serving but serves the individuals who form part of the research process. Namaste calls on researchers to consider the following questions: "what is, in fact, useful knowledge? Who gets to decide? Who has the last word on this? Why?" (2009: 25). Of course, this may not be clear at the beginning of a research study, when researchers submit applications for ethics clearance. Responses to such questions may be obvious from the outset, or they may be formulated intermittently, or only occur at the end of the research process – but it is imperative that a consideration of relevance should drive the consciousness of the project. Namaste's second consideration, equity in partnership, means that "people about whom one writes have an equal say and an equal voice in all aspects of empirical research: defining the question, gathering the data, analysing the results, and presenting the conclusions" (2009: 25). This is obviously problematic, as research committees often require protocols and procedures to be considered by appropriate panels before the commencement of a project. The belief in partnership helps to ensure relevance, but also ensures partners are committed to the project and its outcomes. Within LGBT+ communities, advisory panels within research settings have helped to ensure an equal voice when representing noncisgender or nonheterosexual experiences. While not engaging directly with Namaste, elsewhere Ron Iphofen highlights the tensions of forming alliances with participants, noting how "research goals may have to compete with the action-orientated aims of the subjects" (2011: 126). He reminds us of the complexities when attempting to measure power differentials and/or the democratic value of research to participants. He states "there are many devices for attempting to even out this power differential but it is difficult to demonstrate clearly that a redistribution of power has been accomplished" (2011: 130).
Iphofen continues by offering some cautionary notes in relation to assumptions of benefits to participants, noting how participant involvement in a study is not always beneficial or empowering to an individual or group. He highlights how "a 'taste' of empowerment while collaborating on a project could prove frustrating when they find themselves back in their own culture – their sensed relative deprivation could even be exacerbated by the experience" (2011: 130). Iphofen describes decision-making in research ethics as a dynamic process. In bringing together the work of Namaste and Iphofen, we argue that ethics is both dynamic and dialogical. It is essential that dialogue with participants about the aims, methods, and outcomes of the research is continuous – before, during, and after the research process takes place. Ongoing dialogue helps ensure the voices of participants are not contaminated with presuppositions from the researcher. Therefore, continuous consultations are vital, even if they take time. To ease the tensions between the notion of equity and the practicalities of speaking with/for others, LGBT+ partners should be consulted at each stage of the project about what it will entail and how it will be put into effect. While this constitutes best practice in terms of continuous consent, it also allows research partners an opportunity to reshape the methods to better suit the area of study. A consideration of ownership refers to what will happen with the research once it has been conducted: that each partner has a say in how the research will be used and
disseminated. LGBT+ lives expose personal, sensitive issues which some would prefer not to be part of public knowledge. The use and dissemination of an investigation needs to explore issues of confidentiality and privacy, as well as respecting a participant's right to withdraw. Complementary to Namaste's concerns in relation to the dense, opaque theory that can emerge within the academy, we wish to add a fourth principle to "undoing theory" which may perhaps rub against some in the academic community. For scholarship to be meaningful and have impact for the individuals and communities under investigation, it must be accessible. We advocate a move from pompous prose, which often renders academic texts dense and impenetrable to those outside of academia, to a use of clear language which ensures the research is accessible. In terms of language, we advocate fluency, coherence, accuracy, and accessibility.
Conclusion

Researchers engaging with LGBT+ populations may not be engaging with queer theory at all, just as queer research is not exclusive to LGBT+ individuals. Queer is characterized by its own chaos – it is a disruptive marker. As queer theory has made an impact on the academy and has disciplinary credibility in its own right, it must not lose sight of its mission to disrupt. As a theory, its impact is constrained, if not suffocated, when it is limited to theoretical ruminations in a language that is inaccessible. Remote and distant ruminations in philosophical terms do nothing to engage with LGBT+ lived experiences. The discussion of queer literacy offered within this chapter serves to offer practical considerations for engaging with LGBT+ individuals. Although "queer" has often been used as a point of reference for the LGBT+ grouping, it is not to be used as a synonymic term. Queer literacy frames acts of emotional engagement as an enterprise that allows fruitful, co-produced knowledge and understanding about nonnormative gender and sexuality. Within queer literacy, relationships are prioritized over routines, which are often used as hurdles by ethics committees. Human lives are messy, so any attempt to sanitize a research process with LGBT+ people will be futile. Relationships with LGBT+ people are best negotiated using emotional processes: voice, body language, human interactions, verbal and nonverbal communication, visibility, and sharing, as well as relevance, equity in partnership, and ownership, as advocated by Namaste. We conclude this chapter with one final point for consideration. Individuals who identify as lesbian, gay, bisexual, and transgender have been described as a community and are commonly grouped as nonnormative, or by use of the popular acronym LGBT+, yet there is significant variation between the experiences of each group, as well as between the individuals within the groups themselves.
Treating characteristics of a group as universal leads to essentialism and therefore can be exclusionary to individuals within a group. We note that context-focused differentiation may be needed when working with one particular group within the LGBT+ umbrella. This has been the pitfall of numerous volumes describing LGBT+
experiences, where the experiences of gay men and, to a lesser extent, lesbians far outweigh bisexual and transgender voices. Thankfully, the lacuna in research addressing "B" and "T" issues is recognized, and steps are being taken to generate awareness of what remain minoritized groups. A word of warning: the pursuit of inclusivity can lead to exclusivity.
References

Butler J (1990) Gender trouble. Routledge, London
Cook T (2009) The purpose of mess in action research: building rigour though a messy turn. Educ Action Res 17(2):277–291
Edward M (2018a) Mesearch and the performing body. Palgrave, London
Edward M (2018b) Between dance and detention: ethical considerations of "Mesearch" in performance. In: Iphofen R, Tolich M (eds) The Sage handbook of qualitative research ethics. Sage, London, pp 161–173
Fielder B, Ezzy D (2018) Lesbian, gay, bisexual and transgender Christians. Bloomsbury, London
Greenough C (2017) Queering fieldwork in religion: exploring life stories with non-normative Christians online. Fieldwork Relig 12(1):8–26
Iphofen R (2011) Ethical decision making in social research: a practical guide. Palgrave Macmillan, London
Jowitt K (2017) The amazing invisible bisexual Christian. In: Robertson B (ed) Our witness: the unheard stories of LGBT+ Christians. Darton, Longman and Todd, London, pp 115–120
Miller N (1991) Getting personal: feminist occasions and other autobiographical acts. Routledge, London
Namaste V (2009) Undoing theory: the "transgender question" and the epistemic violence of Anglo-American feminist theory. Hypatia 24(3):11–32
Plummer K (1995) Telling sexual stories: power, change and social worlds. Routledge, London
Rich A (1980) Compulsory heterosexuality and lesbian existence. J Women's Hist 15(3):11–48
Scott JW (1991) The evidence of experience. Crit Inq 17(4):773–797
Sieber JE, Tolich MB (2013) Planning ethically responsible research. Sage, London
Spelman EV (1988) Inessential woman: problems of exclusion in feminist thought. Beacon, Boston
Tufford L, Newman PA, Brennan DJ, Craig SL, Woodford MR (2012) Conducting research with lesbian, gay, and bisexual populations: navigating research ethics board reviews. J Gay Lesbian Soc Serv 24(3):221–240
Warner DN (2004) Towards a queer research methodology. Qual Res Psychol 1(4):321–337
Ethics and Integrity for Research in Disasters and Crises
41
Dónal P. O’Mathúna
Contents

Introduction ............................................................. 720
Background ............................................................... 720
Key Ethical Issues in Disaster Research .................................. 723
  Vulnerability Issues ................................................... 723
  Research Ethics Committee Review Process ............................... 726
Current Debate ........................................................... 729
Horizon Scanning for Future Issues ....................................... 731
Conclusion ............................................................... 733
References ............................................................... 734
Abstract
Disasters strike with increasing frequency, sometimes leading to humanitarian crises. Some are acute, like earthquakes, while others are chronic, as with prolonged conflict. Decision-makers in disaster risk reduction and humanitarian responses are increasingly called upon to make evidence-based decisions. As a result, research in and on disasters and crises occurs frequently. The ethical issues with such research are complex, ranging from detailed questions about how to conduct specific types of research ethically to broader questions about whether research in certain situations should be conducted at all for ethical reasons. This chapter will survey the ethical issues that are distinct or particularly challenging in disaster and crisis research. With acute crises, ethical challenges arise from their sudden onset and the narrow window of opportunity that researchers have to gather data. Participants in disaster and crisis research often have experienced many losses and vulnerabilities, raising concerns about their participation leading to unintended harms. Research in conflict and politically unstable situations may place participants and researchers in difficult circumstances. In some situations, researchers work in isolation, leaving them with little other than their personal integrity to guide them in ethical dilemmas. The chapter will include practical guidelines for addressing research ethics and integrity in disasters and crises.

D. P. O'Mathúna (*)
School of Nursing, Psychotherapy and Community Health, Dublin City University, Dublin, Ireland
College of Nursing, The Ohio State University, Columbus, OH, USA
e-mail: [email protected]

© Springer Nature Switzerland AG 2020
R. Iphofen (ed.), Handbook of Research Ethics and Scientific Integrity, https://doi.org/10.1007/978-3-030-16759-2_33

Keywords
Disaster research · Conflict research · Virtue ethics · Research integrity · Humanitarian imperative
Introduction

The chapter begins with an overview of the development of disaster research and concern for the ethical issues involved. While such research has been conducted for about a century, the particular ethical challenges of disaster research have only received explicit ethical attention in recent decades. Consideration of the ethical issues of research in conflict settings is more recent still. The key ethical issues in disaster and conflict research will be discussed using categories identified in a published systematic review. This review identified two broad themes to facilitate examination of research ethics in these contexts: those related to vulnerability and those surrounding ethics review procedures. Both themes include a number of separate issues which will be addressed in this chapter. Some of the ethical issues in disaster and crisis research continue to be debated, and some examples will be discussed in the next part of the chapter. One debate concerns how best to train researchers in ethical reflection and decision-making. Research integrity seeks to develop researchers' personal ethical decision-making skills and commitment to upholding ethical values and developing virtues. However, the mechanisms by which this can be done effectively remain to be evaluated. Another debate concerns how the urgency to deploy researchers to sudden-onset disasters should be balanced against the importance of ensuring research is designed well and implemented appropriately and rigorously. This is particularly important when research findings are used as the basis for advocacy for vulnerable groups. The chapter reviews approaches to resolving some of the ethical challenges in this research and concludes with an overview of developing issues in this field.
Background

Disaster research has been conducted since early in the twentieth century. One of the first systematic pieces of disaster research was a sociological study of how the people of Halifax, Canada, recovered from a massive explosion resulting from an accidental
collision between two ships in its harbor (O'Mathúna 2018a). Interest in disaster research grew slowly until the 1950s and then accelerated rapidly with the founding of the Disaster Research Center in 1963 at the Ohio State University in the USA (Perry 2007). Events like the Oklahoma City bombing in 1995, the terrorist attacks of 9/11 in 2001, the 2004 Indian Ocean tsunami, Hurricane Katrina in 2005, and an ever-expanding list of disasters continue to add occasions for disaster research. While the field has grown, consensus over the definition of disaster research has not developed. Part of the challenge is that disasters themselves have multiple definitions. The Handbook of Disaster Research identified over three dozen definitions of "disaster," from which three traditions were distilled (Perry 2007). One focuses on the disruption to social order caused by the devastating event, which leads to changes from "normal" behavior. A second tradition focuses more on the environmental events that trigger disasters and their social consequences. The third tradition studies social phenomena, particularly social vulnerabilities and resilience. All three traditions have an interest in the social impact of disasters, and hence qualitative research methods have tended to dominate disaster research. Other definitions emphasize the sudden onset of disasters, the large-scale destruction and suffering, and the overwhelming of local resources. For example, the International Federation of Red Cross and Red Crescent Societies (IFRC n.d.) defines a disaster as "a sudden, calamitous event that seriously disrupts the functioning of a community or society and causes human, material, and economic or environmental losses that exceed the community's or society's ability to cope using its own resources." With the medical and mental health needs that disasters bring, disaster research has come to include projects studying disease patterns, as well as public health and medical interventions.
No longer is “disaster research” a distinct field of sociological studies, but it includes many other fields of inquiry. This has introduced additional complexities over the focus of disaster research, with some including research on long-term situations like famine, poverty, epidemics, and migration crises. The involvement of humanitarian organizations in these settings has now led some to include needs assessments and evidence generating activities within disaster research. The term “complex emergency” was coined to apply to disasters which also involved war, conflict, or violence. Since conflicts and disasters sometimes have similar outcomes, like numerous refugees, traumatic injuries, and food shortages, disaster research now includes studies related to conflict settings. Given all of these overlapping fields, and their growing complexity, a widely accepted concise definition of disaster research is unlikely to develop soon. Hence, the term will be used in its broadest sense in this chapter. Another development contributing to the growth of research in disaster and crisis settings has been the call for better evidence to guide disaster risk reduction and disaster responses. Such evidence often does not exist in humanitarian settings (Wessells 2013). “We don’t really know which interventions are most effective in reducing risk, saving lives and rebuilding livelihoods after crises. . . At present, humanitarian decisions are often based on poor information” (DFID 2012, 5, emphasis original). Even for health and mental health interventions,
where evidence-based practice would be expected to be the norm, evidence is lacking. A systematic review published in 2017 found that there were “major gaps” in the evidence available for public health interventions in humanitarian crises, and the available evidence “was generally quite weak” (Blanchet et al. 2017, 2291–2). Without good evidence, disaster responders find themselves in ethical quandaries over how best to help. This leads to an ethical imperative to conduct research on disasters and crises so that the needs are understood better, effective strategies and interventions developed and deployed, and evidence-based disaster risk reduction plans implemented. This imperative to conduct disaster research can lead to well-intentioned projects, but good intentions are not enough. This truism is reflected in the titles of some recent popular books which have taken a critical look at humanitarian aid, including When Helping Hurts, Toxic Charity, and Good Intentions are Not Enough: Why we Fail at Helping Others. These, and other factors, have led to the examination of the ethics of disaster research. Ethical issues in disasters and conflict settings received relatively little explicit attention until the twenty-first century. Prior to this, ethical issues were addressed informally “on the assumption that [disaster] research has very little potential for injuring the people and organizations that are studied and on the hope that it may ultimately actually do some good” (Tierney 1998, 6). In contrast, reports have identified unethical research practices, including the infamous Trovan study of a meningitis drug in children in Nigeria (Stephens 2000), inappropriate research after the Indian Ocean tsunami (Sumathipala et al. 2010), and qualitative methods used in “unethical and potentially exploitative research” in humanitarian settings (Mackenzie et al. 2007, 300). 
Research in disaster and crisis settings raises many ethical challenges which require careful reflection and discussion. In conflict settings, additional ethical concerns exist, particularly around the potential for harm to participants and researchers. While the ethical issues themselves may not be unique to conflict settings, "the web of power relationships . . . makes some of the challenges more explicit and potentially more fatal" (Brun 2013, 130). As a result, the ethics of conducting research in disasters and crises is receiving attention from researchers, nongovernmental organizations (NGOs), and international organizations. Some ethics guidelines now exist for disaster research. For example, the Council for International Organizations of Medical Sciences (CIOMS) has worked with the World Health Organization (WHO) to produce research ethics guidelines for over 60 years. The 2016 revision included, for the first time, ethics guidelines for health-related disaster research (CIOMS 2016). WHO has published ethics guidelines for some areas of relevant research, such as during epidemics (WHO 2010), and a group of South Asian researchers and ethicists produced research ethics guidelines for post-disaster research (Sumathipala et al. 2010). Guidelines for qualitative disaster research are also starting to become available (O'Mathúna 2018a). Literature on research conducted in conflict settings has tended to make "scant mention of ethical challenges" (Goodhand 2000, 12), although publications are now starting to appear in this area as well (Mazurana et al. 2013).
Key Ethical Issues in Disaster Research

O'Mathúna, the author of this chapter, contributed to a systematic review of ethics guidelines for research in disaster settings which identified 14 documents (Mezinska et al. 2016). Qualitative analysis of these revealed two core themes, each including several categories of ethical issues which were further subdivided into subcategories. This framework will be used to organize the discussion of ethical issues in research during disasters and crises. The categories are summarized in Table 1 and will be discussed in what follows. The interested reader should refer to the original review for further details and an extensive bibliography of original references.
Vulnerability Issues

Vulnerability is frequently raised as an ethical concern for research in disaster and conflict settings. Participants and the communities from which they are drawn are often vulnerable in many different ways, including from injuries, lack of security, poverty, social exclusion, power imbalances, or preexisting injustices. However, the precise meaning of vulnerability is debated, and some have expressed concern that certain definitions have become so broad that any person could be included (O'Mathúna 2010). In addition, overemphasizing vulnerability could promote a paternalistic attitude toward participants and their communities if their own capabilities are overlooked. Remarkable resilience can be seen at times among those experiencing disasters and crises. Even youth who have experienced direct violence "show a degree of resilience that is at odds with most Western psychological analyses" (Wessells 2013, 87). While the concept of vulnerability may provide little specific guidance, it serves as an important reminder of the fragility of human life and of the importance of respecting people and upholding their dignity during research (O'Mathúna 2010). Vulnerability also points to the need to consider a wide range of harms and benefits from participation in research. This is particularly the case in conflict settings, where even association with researchers can be used to link participants with particular sides in a conflict (Wessells 2013). Disaster researchers should take into account that activities that carry little or no burden or harm in ordinary circumstances may be very burdensome or dangerous during crises. When people are searching for lost loved ones, trying to find food and water, or struggling with the loss of everything they ever owned, asking them to participate in research, even "just" to complete a survey, may be overly burdensome and inappropriate.
Thus, key ethical questions for all disaster, crisis, and conflict research include whether the project should be done at this time, in this place, and with this group of people. The answers may reveal that some research may be unethical in a specific instance, even if the methods themselves are not inherently unethical.
724
D. P. O’Mathúna
Table 1 Classification of disaster research ethics issues from a systematic review of guidelines (Mezinska et al. 2016)

Theme 1: Vulnerability
- Vulnerability as a concept: definitions of vulnerability; reasons for vulnerability; gaps in the existing guidelines
- Risks and burdens: physical harm; re-traumatization; manipulation; exploitation; unrealistic expectations; stigmatization
- Risk management: accountability and monitoring of research; avoiding over- or underestimation of risks; need for empirical evidence on risk; providing psychological support to research subjects
- Quality of informed consent: decisional capacity of research subjects; evaluation of power relationships between researchers and subjects; factors diminishing decisional capacity; underestimation of decisional capacity; need for a specific procedure for informed consent

Theme 2: Research ethics committee review process
- Experience and awareness of researchers: cultural sensitivity of researchers; awareness of impact of research; conflicts of interest; training in research ethics; professional competence of researchers
- Interests and rights of research subjects: balancing the need for scientific evidence with possible harm from the research; minimal risk requirement; justice in selection of participants; potential for overburdening research subjects; provisions for confidentiality and privacy protection; regulation of transfer of biological material; application of standard of care
- Social value of research: potential application to future disaster situations; research that cannot be pursued in a non-disaster context; direct or indirect benefit to individuals or community; not draining resources for relief; involvement of local researchers and/or community; post-research obligations
(continued)
41
Ethics and Integrity for Research in Disasters and Crises
725
Table 1 (continued)

Theme 2: Research ethics committee review process (continued)
- Organization of ethics review: centralization of review; conditions for full and expedited review; alternative review mechanisms; “just-in-case protocols”; proportionality of review
- Problems in the review process: risk of bureaucracy in the review process; lack of guidelines for research in disaster settings; distinction between research and non-research
Another ethical issue commonly raised in the systematic review was the concern that research could lead participants to become re-traumatized. In situations where people in crises or survivors of disasters are asked to talk about their experiences, they might re-experience traumatic memories and be harmed as a result. While this concern is raised regularly, little empirical research has been conducted to determine how participation in disaster research affects participants (Gibbs et al. 2018). A small number of studies after 9/11 and Hurricane Katrina found that less than ten percent of participants were disturbed by participating in research which asked about their experiences of disasters (see O’Mathúna 2010 for original citations). People often have emotional reactions when participating in research about disasters they have experienced, but most are not distressed by this, and many report that participation was useful or positive (Gibbs et al. 2018). People’s reactions vary considerably, and some people have very negative experiences, but more research is needed to understand the psychological impact of participation. At the same time, researchers have an ethical responsibility to evaluate participants’ well-being during disaster research and to have psychological and other support available as needed.

Informed consent was another ethical issue categorized under vulnerability. It raises many of the same issues that arise when research is conducted cross-culturally. Information should be communicated to participant communities in language and concepts they understand. The process can be more complicated when cultural differences affect approaches to decision-making in various communities. Sometimes community leaders must give approval before individual informed consent is given, and sometimes male heads of households must give approval. Such factors require researchers to understand the culture of participant communities before they engage in research.
When disaster research accompanies humanitarian aid, people may believe that they are required to participate in order to receive aid. This gives rise to the equivalent of the therapeutic misconception in clinical research. It should be made clear to people that participation will not necessarily bring them personal benefits, but even when this is explicitly stated, people may have raised expectations. This is particularly challenging in cultures where hospitality to strangers is highly valued. For cultural reasons, people may believe it is inappropriate to refuse to help someone, especially an outsider (Wessells 2013). This complicates any notion of informed consent and places an added burden on researchers to take steps to ensure that participation is voluntary.

An important development to assist with these ethical issues is participatory action research (PAR). This approach involves engaging with participant communities throughout the research process, from design to dissemination. During these discussions, the aims and objectives of the research are discussed, and participant communities articulate their hopes and aspirations for the research. These discussions may reveal important differences between the aims of the researchers and those of the community and can help avert inappropriate expectations. The groups involved can discuss how best to recruit participants, translate materials, obtain informed consent, disseminate findings, and handle other aspects of the research. Researchers can find ways to mitigate risks or, if necessary, clarify further what can and cannot be accomplished by the research so that expectations are balanced. The discussions and negotiations involved in PAR can take significant amounts of time. However, they help to address the needs of the participant community and to respect its concerns and values.
Better preparations can be made for conducting the research well, and when unanticipated issues or misunderstandings arise, a mechanism is already in place to help address and resolve the problems and challenges. This method is built on an ethical foundation as it has “an explicit aim of building trust and giving voice to the views of the powerless and voiceless” (Mackenzie et al. 2007, 312). This, at its core, is what addressing vulnerability in research should involve.
Research Ethics Committee Review Process

The second general theme identified in the systematic review of research ethics guidelines involved the review processes of research ethics committees (RECs), also known as institutional review boards (IRBs) in some countries (see Table 1). The first category in this theme included issues around the experience and awareness of researchers. Some of these concepts overlap with ones already addressed under vulnerability, such as researchers’ cultural sensitivity and awareness of the impact of their research, whether intended or not. Additional factors include researchers’ professional competence and training in research ethics. Conducting research in disaster and crisis settings raises many challenges and thus requires extensive training and experience. For research in conflict settings, some go so far
as to hold that “it is unethical to involve researchers who are inexperienced and unfamiliar with working in areas of conflict” (Goodhand 2000, 13).

Concern for the interests and rights of research subjects goes well beyond informed consent. As noted above, evidence is needed to inform practice and policy in disasters and crises. This creates an ethical responsibility to conduct research, but it does not provide carte blanche for all research. When considering potential harms from research in these settings, researchers should consider more than just direct harm from the research intervention itself. Participants may be overwhelmed by their physical or security needs, and adding research participation on top of this may overburden them. Lack of coordination between research teams can lead to “assessment fatigue” when participants are repeatedly asked to complete research protocols (Wessells 2013, 93). Such issues need to be carefully balanced with the desire to conduct research and reassessed during and after research projects.

Maintaining safety is a particularly important ethical responsibility for research in conflict settings. This applies to the safety of both participants and researchers. Creating safe spaces for research can be complicated and time-consuming. Building trust with participant communities requires spending time together in culturally appropriate ways. Sometimes safe spaces can be planned strategically, but more often such spaces develop after drinking tea and sitting under trees. In addition, safety should be continually reevaluated. “The changing context on the ground caused the safe spaces we had established to shrink and become less safe” (Brun 2013, 140). This raises the stakes of ensuring that research does not cause harm, even inadvertently. The challenge is summarized well by an experienced conflict researcher who noted that “my presence in a war zone was a political act even if I had not intended it that way” (Wessells 2013, 103).
Safety concerns should include the safety of researchers. A prominent and highly respected Sri Lankan researcher was abducted while returning from a scientific meeting in 2006, during Sri Lanka’s civil war, and has never been found (Brun 2013). In 2018, two Ethiopian researchers were killed while collecting data in a region involved in intertribal conflict (Endeshaw 2018). Tensions underlying the conflict generated rumors about what the researchers were doing, which led to a mob stoning them. Such incidents provide dramatic examples of the ethical obligation to mitigate harm to researchers. This requires careful reflection and constant reevaluation of the situation on the ground.

Justice should be considered in selecting research participants, but this can be complicated in conflict settings. Sometimes a particular group becomes the focus of many research projects, especially if donors or humanitarian organizations view the group as under-researched and requiring further attention. This happened in Africa with research focused on helping “child soldiers” reintegrate into society after conflicts. Initially, the research primarily involved boys, overlooking the fact that many girls were recruited as soldiers also. The attention given to the boys also “sparked a backlash and a form of reverse discrimination” (Wessells 2013, 97). The communities where the research was being conducted complained that the former boy soldiers received many benefits from participating in the research which were not available to
noncombatants. Such complex social issues again highlight the importance of understanding the local environment before initiating research. More generally, “excessive targeting of research can contribute to the misguiding of humanitarian aid and become part of wider patterns of discrimination and social injustice” (Wessells 2013, 97).

Another ethically important dimension of conflict research is the role of silence. In war, the first casualty is truth; the same has been said of disasters. Research in disasters and crises often aims to shed light on truths that powerful groups would rather keep hidden, particularly around injustices and corruption. In a conflict setting, this can make speaking to researchers dangerous for participants. Many prefer to remain silent as a way to protect themselves, something that has also been noted for victims of abuse and gender-based violence. This places researchers in a difficult ethical position when balancing people’s “strategy of silence” (Brun 2013, 132) against seeking to build trust so that people become willing to tell their stories. On top of this, silence is also a tool in the hands of the powerful. They can use silence to retain power, for example, by controlling the media but also by controlling “knowledge production, [and] information” (Brun 2013, 145), both central to research practice. Again, important ethical values must be balanced, requiring careful reflection and discussion. Such challenging dilemmas do not have clear ethical standards, but instead point “to the lack of consensus on ethical standards for research that fit the fluid circumstances and the contextual diversity of war zones” (Wessells 2013, 82).

The systematic review identified a subcategory of issues focused on the social value of research (Table 1). Researchers should always be mindful that any piece of research may be unethical to conduct at a particular time.
Sometimes a setting may be too dangerous or unstable, or the presence of a researcher may be too likely to cause deterioration in a conflict situation. In such cases, it may be inappropriate to conduct research at that time in that place. As one experienced researcher put it, sometimes “the best safe space we can make is staying away and writing and disseminating from a distance” (Brun 2013, 146). Such decisions require humility on the part of researchers and a commitment to make the safety of participants the overriding ethical value. Research must also be balanced against the other needs and resources in a region. When people are in urgent need of lifesaving equipment and personnel, researchers may need to stay back and allow search and rescue to happen first. Research should not drain the limited resources available for humanitarian relief. This makes it especially important to determine whether a particular research project needs to be conducted in a disaster or crisis setting. For example, if the effectiveness of a nutritional intervention is completely unknown, doing the first study during a crisis is probably not appropriate. On the other hand, if its effectiveness in “normal” circumstances is established, research on its effectiveness or usefulness in a refugee camp may be warranted to identify whether the context changes its effectiveness. Another factor here relates to whether all research in disaster and crisis settings must benefit the participant community. Often the benefits come in the form of advocacy, but some have raised concerns about whether advocacy sometimes inappropriately influences research methods.
As this question remains controversial, it will be addressed in the next section under current debates.

The final area of concern in this theme related to the practical workings of RECs and IRBs. Obtaining ethical approval for humanitarian research has inherent challenges (Blanchet et al. 2017). More broadly, concerns have been voiced about the ethics approval process (Banks 2018). While ethical issues are discussed across the research community, “researchers complain that institutional review boards have lost sight of their original purpose of protecting human subjects, focusing instead on bureaucratic minutiae” (Jones et al. 2016, 2397). Research in humanitarian and conflict settings raises additional challenges for RECs and IRBs when approval is sought quickly to respond to sudden-onset events, when fluid situations call for flexible oversight, or when the research context adds issues that committees are unfamiliar with.

Humanitarian research highlights inadequacies in the current research ethics paradigm. Ethics approval forms that focus on well-known ethical principles and issues, like informed consent or confidentiality, or procedures focused on guidelines and codes, miss some of what is crucial in research ethics. As one critic puts it, what is missing “is how to positively encourage ethical conduct. Developing an understanding of what to do is always a more challenging prospect than issuing edicts about what is not right” (Macfarlane 2009, 3). Part of what is lacking is a means of developing the reflexivity and ethical decision-making skills necessary to make the difficult decisions about what is best to do in the middle of an ethical dilemma. Growing interest in research integrity and in virtue ethics in research points toward some initiatives to address shortcomings in the current system (Banks 2018). At the same time, research ethics guidelines have a role in guiding researchers and setting at least minimum standards for ethical practice.
The systematic review noted a lack of such guidelines for disaster research, but this has started to change. The latest revision of the CIOMS ethics guidelines for health-related research included, for the first time, guidelines on disaster research (CIOMS 2016). These include calls for greater flexibility in how disaster research is reviewed, including proposals for ways to pre-review sudden-onset disaster research which can then receive full review and approval more quickly after a disaster occurs (CIOMS 2016). At the same time, given that research in disasters and conflicts often uses social science methods, discussions have occurred over how best to conduct ethical review procedures for those methodologies (Banks 2018). This has led to a number of debates about how research ethics should develop for projects conducted in disasters, crises, and conflicts.
Current Debate

The flux and uncertainty of research in disasters and crises point to the need for ongoing ethical reflection and decision-making in these contexts. A focus on guidelines, codes, and principles will not be sufficient to ensure that researchers have the skills and dispositions to balance the multitude of ethical issues that
can arise. As noted above, the current system of research ethics approval has its limitations and is particularly unsuited for disaster and conflict settings. Hence, current interest in research integrity and virtue ethics may be particularly well suited for this area of research ethics. Ultimately, this understanding of integrity and virtue can be traced back to Aristotle’s virtue ethics (Macfarlane 2009). In his view, each virtue is a golden mean lying along a sliding scale between two vices. Narratives, not codes, become central in illuminating vices and virtues and how they apply in particular situations (Macfarlane 2009). As researchers become adept at exploring the ethical challenges in their research, they can develop the appropriate virtues.

Sarah Banks has developed one approach based on virtue ethics, in which she defines a virtue as “a moral disposition to feel, think and act in such a way as to promote human and ecological flourishing, entailing both a motivation to act well and, typically, the achievement of good ends. Virtues are often described as excellent traits of character, and entail a reliable disposition to act in certain predictable ways across contexts” (Banks 2018, 25). Such character development is not straightforward, and this approach has its critics. Kwiatkowski (2018) claims that virtue ethics places too much responsibility on individual researchers and not enough on the organizational and societal factors that help shape research environments. He also claims that virtue ethics is an idealistic approach that calls for perfection and will not be satisfied with the “good enough” of the real world of research (Kwiatkowski 2018, 55). Such criticisms appear unfair, as change and development in people’s character are inherent to the approach, and its advocates do acknowledge institutional factors (Banks 2018). In fact, ethical change is not limited to individuals but can apply to organizations as well.
Virtue ethics offers a way of addressing ethical challenges and learning from them in ways that become part of a researcher’s commitment to research. Thus the virtues are like skills that researchers need to learn and develop during their training and commit to ongoing development during their careers. Research integrity is thus “a kind of moral competence in resolving conflicts and priorities, readjusting ideals and compromising principles” (Banks 2018, 29). Banks concludes that: Research integrity, in its thick sense, is about researchers being aware of, and critically committed to, the purpose, values, ethical principles and standards of their discipline and/ or broader research field; making sense of them as a whole; and putting them into practice in their research work, including upholding them in challenging circumstances. (Banks 2018, 29)
Given that researchers in disasters and crises are almost continually facing challenging circumstances, this approach is particularly applicable. However, the virtue approach to research ethics in general is relatively new, as are research integrity initiatives, and so it will be some time before its effectiveness can be assessed conclusively. Another area of debate in disaster and crisis research relates to how the emergency nature of some research impacts its ethics. Urgency often leads to a rush to conduct research in such settings. This is understandable when a
narrow window opens to conduct research, say in the immediate aftermath of an earthquake. However, the rush to research has sometimes led to poor methods (Stephens 2000). “Not uncommonly, this strong press for data encourages the use of shoddy methods that yield data of questionable quality” (Wessells 2013, 82). Even in an emergency, researchers retain an ethical obligation to conduct good quality research.

Research integrity can provide a more helpful approach here. Urgency sets up the need to balance speed and quality. Codes and guidelines typically list a range of principles, values, or ideals that should be maintained, but do not provide a mechanism for resolving tensions or balancing conflicting values. Research integrity and virtue ethics focus on developing skills in moral decision-making and in balancing competing ethical principles. Exactly how such skills and virtues should be taught and developed within humanitarian research remains a point of debate. Narratives and case studies play a central role, as does mentorship, as they do throughout virtue ethics. Series of case studies for this type of research are becoming available and can be incorporated into training (e.g., Schroeder et al. 2018).

One particularly challenging issue involves the place of advocacy in disaster and crisis research. Some areas of research emphasize detachment and objectivity as a way to overcome bias. On the other hand, humanitarian work, including related research, often seeks to impact the world and advocate for those whose voices have been suppressed. This can lead to situations where well-meaning researchers “already know what they want to see and say, and come away from the research having ‘proved’ it” (Jacobsen and Landau 2003, 187).
At the same time, research which has little potential to make a difference in people’s lives, or where researchers fail to disseminate the findings so they impact practice and policy, is also problematic. Many would agree with David Turton’s statement, “I cannot see any justification for conducting research into situations of extreme human suffering if one does not have the alleviation of suffering as an explicit objective of one’s research” (1996, 96). This is particularly the case with conflict research where participants and researchers may be put at increased risk of harm. Thus, “research conducted in war zones that has no direct linkage with action often causes more harm than good” (Wessells 2013, 95). Once again, two important values must be balanced appropriately: the importance of conducting research that is methodologically rigorous and valid and the need to ensure that the findings impact people’s lives through practice and policy. Exactly how this is done is an important aspect of ongoing debate in this area.
Horizon Scanning for Future Issues

Within research ethics, a better focus is needed on promoting ethical research rather than just facilitating ethical approval. Research integrity and virtue ethics will play a prominent role in these initiatives, leading to training that targets
the skills necessary to reflect on the ethical issues in one’s area of research. New training materials will be necessary for this, as well as a move to a culture of ethical reflection throughout the research enterprise. Ethical issues should be considered at all stages of research, from the design phase, to during implementation, and to dissemination (O’Mathúna and Siriwardhana 2017). Other initiatives in research integrity will need to be adapted for humanitarian and crisis settings. At the same time, researchers need support to address the ethical issues they encounter while conducting research. Whether or not IRBs or RECs can provide such support is questionable. These committees focus on approval and compliance, which may leave researchers unwilling to discuss with them the challenges they are having if they believe this could lead to even temporary suspension of the project’s approval. At the same time, researchers could benefit from having ways to discuss challenges as they arise to help them decide what should be done. Protocol changes may or may not be necessary, but researchers can benefit from the reassurance that comes from talking over ethical issues with someone experienced in the area. This could be a supervisor or other ethics “consultant” available for such discussions. Such “help-desk” resources could help researchers conduct their research more ethically. Some of the ethical dilemmas in humanitarian settings arise because people cannot do what they believe is right to do. Such scenarios can trigger moral distress, a term primarily applied to healthcare professionals, especially nurses (Wocial and Weaver 2013). A small number of studies have identified moral distress in humanitarian workers (Gotowiec and Cantor-Graae 2017). Given the intense moral dilemmas faced by researchers in humanitarian and conflict settings, such researchers are likely to experience moral distress also. 
When this reaches higher levels, it can interfere with a person’s ability to carry out their responsibilities and undermine the quality of their work (Wocial and Weaver 2013). Research is needed to identify whether or not moral distress is a problem in humanitarian research, and if so, to develop ways to build resilience against it and interventions to address it. One final issue that will require additional attention relates to spiritual and religious issues. In some cultures, spiritual issues can be a source of both intense distress and important resilience (Wessells 2013). Many people turn to their spiritual beliefs and religious practices during disasters and conflict, bringing both practical resourcefulness and resilience (O’Connell et al. 2017; O’Mathúna 2018b). Western researchers, immersed in a secular mind-set, need to be open to carrying out research on what is important to those in need. An unwillingness to include spiritual concerns in psychological care can overlook important issues and needs. For example, research on a psychological program to help war-affected Angolan youth brought two different worldviews into contact. The program was built on a scientific, secular model, yet the children wanted to discuss spiritual issues and include traditional spiritual healers in their recovery. The program was originally designed in a way that avoided spiritual issues, but this raised “a significant ethical issue – the imposition of outsider approaches that did not fit the local
context” (Wessells 2013, 89). Conducting research in this area will be important to ensure that all needs and resources are identified and that cultural blindness does not lead to important issues being overlooked.
Conclusion

Humanitarian action is fundamentally a moral practice, meaning that its activities are driven by ethical or moral commitments. The Sphere Handbook is the most widely recognized set of common principles and universal minimum standards for humanitarian action (Sphere 2018). At its core are fundamental ethical beliefs, including the principle of humanity, “that all humans are born free and equal in dignity and rights,” and the humanitarian imperative, “that action should be taken to prevent or alleviate human suffering arising out of disaster or conflict, and that nothing should override this principle” (Sphere 2018). Such commitments motivate humanitarian actors to expend themselves greatly to meet the needs of those suffering from disasters or conflict. At the same time, these commitments raise additional questions about how suffering can be prevented or alleviated. Good intentions, as noted earlier, are not enough. Practice should be based on both ethics and evidence. This forms the basis of the ethical imperative to conduct research on humanitarian activities and points to how balance comes up again and again in this context. “[P]ractice without research and reflection tends to leave one ensnared by one’s dogmas and preconceptions, just as research without practice tends to leave one disengaged and trapped in the ivory towers” (Wessells 2013, 102).

One dimension of research ethics is dominated by regulatory approaches to granting prospective ethical approval for research. These play a role in ensuring that planned research meets minimum standards in well-known ethical areas. But in their current structures, ethical approval procedures fail to address many ethical issues in research. Many of these are debated conceptually and philosophically, but they need to be connected more directly with actual research projects.
Ethical decisions are made as research is designed, such as decisions to conduct a project with one group of people and not another or to design a project with or without the involvement of participant communities. During research, ethical decisions must be made, such as what to do when unplanned changes are required, or when researchers discover information that is highly sensitive but unrelated to the research itself. After research is completed, decisions about when and where to disseminate study findings have ethical components, as do decisions about whether to reveal all the details of sensitive information collected. Such decisions require researchers to balance ethical commitments throughout their work. Codes, guidelines, and regulations can inform researchers about ethical principles and issues. Even then, they often focus on a limited set of ethical issues (Macfarlane 2009). They tend to address primarily information, data, harms, and benefits. Less is said about relationships and the complexities these raise within research. Researchers are in relationships with
many others, including other researchers, supervisors, institutions, funders, and their own families and communities. Each of these can pull them in different directions. Researchers can be deeply impacted by their participants, leading to emotional attachments. This can be positive, but it also has the potential to entangle researchers with their participants and introduce bias. “‘Real’ research is about the stuff of human life: hope and disappointment, loyalty and betrayal, triumph and tragedy. This is one reason why following a code of ethics is likely to be of limited help when confronted with ethical issues whilst actually doing research” (Macfarlane 2009, 3).

The challenge is to identify ways to help prepare researchers for such decisions. They need much more than codes and application forms: they need ways to develop their character and integrity. Researchers in disasters, crises, and conflict settings often find themselves alone in the field where they confront ethical dilemmas. There they need personal skills to reflect on how best to balance the various ethical principles. They need the commitment to uphold those principles that should not be violated, such as those that promote the humanity and dignity of individuals and their relationships. Narratives play an important role here in revealing the ethical complexities and relational intricacies of the real world of research in humanitarian and conflict settings. Case studies should be included in training in research ethics and integrity so that researchers are given opportunities to reflect on situations relevant to their areas of research. Research teams could use such cases during meetings to allow continued development of researchers’ ethical decision-making skills. As individuals show themselves to be particularly adept at working through such challenging scenarios, they could become mentors for others, in similar ways to how other skills are mentored.
Ways of supporting researchers in the field also need to be developed. As time allows, researchers facing ethical dilemmas should be able to discuss possible options with those skilled and experienced in ethical reflection. As a paradigm of research integrity and ethical reflexivity is developed and implemented, research will not only have ethical approval, but be approved as being ethical research.
References

Banks S (2018) Cultivating research integrity: virtue-based approaches to research ethics. In: Emmerich N (ed) Virtue ethics in the conduct and governance of social science research. Emerald, Bingley, pp 21–44
Blanchet K, Ramesh A, Frison S et al (2017) Evidence on public health interventions in humanitarian crises. Lancet 390(10109):2287–2296
Brun C (2013) "I love my soldier": developing responsible and ethically sound research strategies in a militarized society. In: Mazurana D, Jacobsen K, Gale LA (eds) Research methods in conflict settings: a view from below. Cambridge University Press, New York, pp 129–148
CIOMS: Council for International Organizations of Medical Sciences (2016) International ethical guidelines for health-related research involving humans. https://cioms.ch/shop/product/international-ethical-guidelines-for-health-related-research-involving-humans/. Accessed 23 Feb 2019
41 Ethics and Integrity for Research in Disasters and Crises
DFID: Department for International Development (2012) Promoting innovation and evidence-based approaches to building resilience and responding to humanitarian crises: a DFID strategy paper. http://www.alnap.org/resource/9823.aspx/. Accessed 23 Feb 2019
Endeshaw D (2018) Mob action costs two researchers' lives. https://www.thereporterethiopia.com/article/mob-action-costs-two-researchers-lives. Accessed 23 Feb 2019
Gibbs L, Molyneaux R, Whiteley S et al (2018) Distress and satisfaction with research participation: impact on retention in longitudinal disaster research. Int J Disaster Risk Reduct 27:68–74
Goodhand J (2000) Research in conflict zones: ethics and accountability. Forced Migr Rev 8:12–15
Gotowiec S, Cantor-Graae E (2017) The burden of choice: a qualitative study of healthcare professionals' reactions to ethical challenges in humanitarian crises. Int J Humanit Action 2:2
IFRC (n.d.) What is a disaster? International Federation of Red Cross and Red Crescent Societies. https://www.ifrc.org/en/what-we-do/disaster-management/about-disasters/what-is-a-disaster/. Accessed 23 Feb 2019
Jacobsen K, Landau LB (2003) The dual imperative in refugee research: some methodological and ethical considerations in social science research on forced migration. Disasters 27(3):185–206
Jones DS, Grady C, Lederer SE (2016) Ethics and clinical research – the 50th anniversary of Beecher's bombshell. N Engl J Med 374:2393–2398
Kwiatkowski R (2018) Questioning the virtue of virtue ethics: slowing the rush to virtue in research ethics. In: Emmerich N (ed) Virtue ethics in the conduct and governance of social science research. Emerald, Bingley, pp 45–64
Macfarlane B (2009) Researching with integrity: the ethics of academic enquiry. Routledge, New York
Mackenzie C, McDowell C, Pittaway E (2007) Beyond 'do no harm': the challenge of constructing ethical relationships in refugee research. J Refug Stud 20(2):299–319
Mazurana D, Jacobsen K, Gale LA (2013) Research methods in conflict settings: a view from below. Cambridge University Press, New York
Mezinska S, Kakuk P, Mijaljica G et al (2016) Research in disaster settings: a systematic qualitative review of ethical guidelines. BMC Med Ethics 17(6):1–11
O'Connell E, Abbott RP, White RS (2017) Emotions and beliefs after a disaster: a comparative analysis of Haiti and Indonesia. Disasters 41(4):803–827
O'Mathúna DP (2010) Conducting research in the aftermath of disasters: ethical considerations. J Evid Based Med 3(2):65–75
O'Mathúna DP (2018a) The dual imperative in disaster research ethics. In: Iphofen R, Tolich M (eds) SAGE handbook of qualitative research ethics. Sage, London, pp 441–454
O'Mathúna DP (2018b) Christian theology and disasters: where is God in all this? In: O'Mathúna DP, Dranseika V, Gordijn B (eds) Disasters: core concepts and ethical theories. Springer, Dordrecht, pp 27–42
O'Mathúna D, Siriwardhana C (2017) Research ethics and evidence for humanitarian health. Lancet 390(10109):2228–2229
Perry RW (2007) What is a disaster? In: Rodríguez H, Quarantelli EL, Dynes RR (eds) Handbook of disaster research. Springer, New York, pp 1–15
Schroeder D, Cook J, Hirsch F et al (2018) Ethics dumping: case studies from North-South research collaborations. Springer Open, Cham. https://www.springer.com/us/book/9783319647302. Accessed 23 Feb 2019
Sphere (2018) The Sphere handbook: humanitarian charter and minimum standards in humanitarian response. https://handbook.spherestandards.org/. Accessed 23 Feb 2019
Stephens J (2000) Where profits and lives hang in balance. The Washington Post. https://www.washingtonpost.com/archive/politics/2000/12/17/where-profits-and-lives-hang-in-balance/90b0c003-99ed-4fed-bb22-4944c1a98443. Accessed 23 Feb 2019
Sumathipala A, Jafarey A, de Castro LD et al (2010) Ethical issues in post-disaster clinical interventions and research: a developing world perspective. Key findings from a drafting and consensus generation meeting of the Working Group on Disaster Research and Ethics (WGDRE) 2007. Asian Bioethics Rev 2:124–142
D. P. O’Mathúna
Tierney KJ (1998) The field turns fifty: social change and the practice of disaster field work. Disaster Research Center. http://udspace.udel.edu/handle/19716/287. Accessed 23 Feb 2019
Turton D (1996) Migrants and refugees: a Mursi case study. In: Allen T (ed) In search of cool ground: war, flight, and homecoming in Northeast Africa. Africa World Press, Trenton, pp 96–110
Wessells M (2013) Reflections on ethical and practical challenges of conducting research with children in war zones: toward a grounded theory. In: Mazurana D, Jacobsen K, Gale LA (eds) Research methods in conflict settings: a view from below. Cambridge University Press, New York, pp 81–105
WHO (2010) Research ethics in international epidemic response. https://www.who.int/ethics/gip_research_ethics_.pdf. Accessed 23 Feb 2019
Wocial LD, Weaver MT (2013) Development and psychometric testing of a new tool for detecting moral distress: the Moral Distress Thermometer. J Adv Nurs 69(1):167–174
Part VI Disciplines and Professions
42 Ethics and Integrity in Research

Ron Iphofen
Contents
Introduction: Generic and Specific Research Ethics ... 740
Striving for Consensus ... 741
The Profession and the Discipline ... 742
An Example: Oral History and the Boston College Affair ... 745
Conclusions ... 748
References ... 749
Abstract
While it can be argued that the values, principles, and standards that underpin ethical research must apply to all disciplines and professions that conduct research, it remains the case that most, if not all, still seek to devise and apply distinctive ethical codes and/or guidelines that they perceive as specific to their own, specialized field. While highly generalized overarching principles can be accepted, their generality may render them so vague that they are impossible to disagree with. Moreover, they fail to assist researchers within the specialist professional domain with practical solutions to the ethical issues they may have to confront in their everyday practice. Such an approach could be regarded as a restrictive practice and domain protectionism, but there are also clear practical advantages in a discipline being able to manage its own professional practices, and it is rarely the case that generic codes provide some form of "off-the-shelf" solution to an ethical problem within the discipline's domain. This chapter addresses issues that form the backcloth to the concerns raised by the authors in this section of the handbook.
R. Iphofen (*) Chatelaillon Plage, France e-mail: [email protected] © Springer Nature Switzerland AG 2020 R. Iphofen (ed.), Handbook of Research Ethics and Scientific Integrity, https://doi.org/10.1007/978-3-030-16759-2_56
Keywords
Ethical research · Professional integrity · Discipline-based ethics · Oral history · Boston College case
Introduction: Generic and Specific Research Ethics

This section of the handbook has proven one of the most difficult to deliver and, even, to argue for its inclusion. Colleagues have expressed doubts. This view from Gabi Lombardo, CEO of the European Alliance for Social Sciences and Humanities (EASSH), is typical: "I am not sure it is possible to do a section for each discipline and what would be the value added of that. In my mind … ethics and integrity is a base line to which any type of research needs to interface, but I am not sure that each discipline interfaces in a different way" (E-mail communication 2018). Similarly, Simon Frith, the eminent musicologist, challenged my attempt to include music in the disciplines section:

I suspect that most of the ethical issues in musicology research are already covered in your outline (i.e. the initial sections). The issue is most debated in ethnomusicology but the arguments here are familiar in ethnography/anthropology generally and, similarly, music psychology and music education are, from this perspective, just sub-categories of their broader disciplines. Looking to the future ethical problems are most likely to involve concepts of originality/authorship in digital research methods as applied to practice-based research (in the development of computer music/music programmes, for example) but again the issues here will be discussed in your section on data ownership and management. (E-mail communication 2018)
This illustrates a conceptual and epistemological problem linked to the practice of disciplines and/or professions. Thus it is the practice of music or art or literature – perhaps the humanities in general – that raises ethical issues, and it is rare that any research conducted in these professions adds to those ethical concerns. To illustrate, Suzy Klein's documentaries for the British Broadcasting Corporation (BBC) on the romantic era in music and its contribution to revolution, and the programs she delivered on the relation between music and the power of tyrants (https://www.theguardian.com/tv-and-radio/2017/oct/03/tunes-for-tyrants-music-and-power-with-suzy-klein-review), demonstrate the effects of practice and performance, but the "research" that produced the music is hidden in the process of cultural production. Moreover, such effects of practice or performance are felt in a particular historical and geographical context. At other times in other places the consequences may be perceived quite differently. Thus ethics and integrity in the humanities may be linked more to their management as professions, or to how they may be subject to societal controls, than to any "scientific" contribution research in these fields could make. Perhaps it does not even make much sense to subject any performative work to conventional research ethics review – rather to some
anticipatory ethical impact assessment: that is, the construction of a work (composing music, constructing a work of art, writing a novel) is not, in itself, of ethical concern, but what is done with it is. While the views of Simon Frith and Gabi Lombardo seem to me eminently sensible, I have for many years had problems in arguing exactly this point of view. My view is that the basic, common values, principles, and standards should be equally applicable to all disciplines. While for research in some sciences (social, behavioral, and physical) ethical considerations should run concurrently with research practices, it makes little sense to do that for the arts and humanities, but it makes a lot of sense to consider the ethical consequences of cultural production – alongside the political and economic consequences. Indeed, such was the approach of the highly influential Centre for Contemporary Cultural Studies (CCCS) founded by Richard Hoggart at the University of Birmingham in 1964 with the intention of conducting research into what was then designated "mass" culture. Under the directorship of first Hoggart and then Stuart Hall and Richard Johnson, the Centre operated at the intersections of literary criticism, sociology, history, and anthropology. Rather than focus on "high" culture, the intention was to carry out group research on areas of popular culture such as popular chart music, television programs, and advertising. Among many other issues, it showed the importance to young people of subcultures based around style and music, the ideological influence of girls' magazines over their young readership, and why a "moral panic" over the presence of black communities had evolved in 1970s Britain (https://www.birmingham.ac.uk/schools/historycultures/departments/history/research/projects/cccs/about.aspx). It may be that it is only in such cross-disciplinary intersections that the ethical impact of popular culture can be fully understood.
Striving for Consensus

One of my earliest experiences of seeking cross-disciplinary consensus on research ethics was the RESPECT project, funded by the European Commission's Information Society Technologies (IST) Fifth Framework Programme (FP5), which drew up professional and ethical guidelines for the conduct of what was classed as "socioeconomic" research but which, in essence, covered the whole range of social sciences (Dench et al. 2004). Both the UK's Economic and Social Research Council (ESRC) and the UK Government Social Research Unit (GSRU) were invited to be represented on the RESPECT project and were encouraged to sign up to the RESPECT "Code" and make use of the support documentation. The ESRC were reluctant to adopt the Code, being concerned not only with what they saw as too limited coverage of anti-discrimination legislation but also with the lack of sanctions. The core concern of the GSRU, who also did not sign up to the Code, was the practicality of applying it within the various government departments with their diverse interests and objectives. This was despite the fact that both agencies were given the opportunity to amend the Code to suit such specific needs.
In a similar vein, between 2013 and 2015 the Academy of Social Sciences ran a project to develop generic ethical principles for social science research. This included a wide variety of stakeholders, including again the ESRC and now also the UK Research Integrity Office (UKRIO), which had not been established during the earlier RESPECT work. It culminated in the formal adoption and promotion of Five Ethical Principles and a book series, Advances in Research Ethics and Integrity, with Emerald Publishing, Volume 1 of which appeared in May 2017. Once more we were seeking a research ethics consensus from the various learned societies that make up the social sciences (Iphofen 2017). But we again encountered fairly strong domain protectionism. Representatives of the various subdisciplines and professions were keen to assert their "difference" – how the basic principles "had to be" amended to take account of their "special" problems and approaches. Hence we ended up with some extremely generic principles which, to be fair, we managed to obtain general agreement on but which were so generic as to be almost impossible to disagree with (https://www.acss.org.uk/developing-generic-ethics-principles-social-science/academy-adopts-five-ethical-principles-for-social-science-research/). So it seems that consensus about ethics in research is not impossible, but only at such a level of generality that the principles, being "open to interpretation," could still cover a multitude of research "sins." Perhaps it is only within the domain of "the discipline" and/or "the profession" that adequate monitoring, licensing, and, if necessary, sanctioning can be performed – if at all.

The Profession and the Discipline
The Profession and the Discipline In my own experience in delivering training about research ethics practice and ethics review to cross-disciplinary seminars in universities and independent research agencies, the range of protectionist remarks I received often surprised me. On several occasions economists stridently argued that research ethics and even research ethics review had “nothing to do with them” since economics simply deals with the “objective facts” of a situation. To make some sense of the issues as economists and financiers see them I sought the advice of Fabian Zuleeg, CEO of the European Policy Centre and an economist himself (in an e-mail dialogue 2019). Fabian pointed out that economists have no problem with the issues usually grouped under the heading of “research integrity” such as plagiarism, falsification, and fabrication since it is in the interests of all that professional integrity is maintained and seen to be maintained. Such an approach should rule out the crudest forms of bias and vested interests. Similarly the subset of experimental economics and/or behavioral economics raises issues, for example, of deception of research subjects, the psychological fall-out from induced behavior in experiments or the absence of fully informed consent for elements of research that have to be, at least in part, covert. The issues raised in such activities resemble social experiments in psychology and so can be dealt with by similar ethical frameworks, implemented through university psychology departments.
Economic or social policy experiments, however, pose special problems, since there will always be an opportunity cost of intervention versus nonintervention (or vice versa). Policy experiments are crucial, but results will rarely be clear cut, with different costs and benefits accruing to different individuals or groups within the population. Arguably, they always do "harm" to some participants – individuals and/or communities. And Fabian does have a recommendation to deal with this: a specific framework for policy experiments along the lines of the special provisions found, for example, in market research and opinion polling. There is more of a problem with academic economists who see no ethical issues in their research work when they are not in direct contact with research subjects, instead in general using anonymized datasets to test models. In essence, most economists maintain that they work using a methodology akin to the natural sciences – statistical or mathematical modeling to test hypotheses. The only way to address this perception is through training, so that researchers can identify ethical issues where, for example, the use of research results and their societal impact are concerned. This needs to be endorsed through a culture of good practice in academic departments and, crucially, in academic journals. Unless the important academic journals assess this routinely (with sanctionable consequences), the profession is unlikely to change, given that "publish or perish" acts as an overriding incentive. Perhaps the more serious ethical problem in the academic economics sector is an abdication of responsibility. Much of their work is less policy relevant, as academics often work in some degree of isolation from immediate matters of policy.
Economics has a crucial role to play in the making of policy, and more support in the form of structural incentives should be available to encourage academic researchers to engage with policy in an ethical manner, transparently and with professional integrity. It is here that independent think tanks can play a crucial intermediary role, acquainting policymakers with the "best" academic research. My challenge to Fabian was the concern that most economists appear to start from an ideologically entrenched position – in other words, they start from a theoretical position which, in an ideal world, should be "tested," but which all too often appears to frame the data they seek in the form of a confirmatory bias. However, Fabian's view was that although adherence to schools of thought or to a particular political direction does create a level of bias, it is possible to see this as a good thing. Through a form of Hegelian thesis, antithesis, and synthesis – through debate and disagreement – the profession could reach new theoretical positions and new findings. In any case, applied economics is inherently political. Evidence will be present that underpins all sides in such arguments; there can be no simple right or wrong. Although some positions will clearly be supported by more and better evidence than others, there is a tendency for some economists to rely entirely on the tenets of economic theory rather than on empirical evidence, for example, over the effects of privatization. Even if one fundamentally disagrees with certain economic schools of thought, they have a right to be present in the debate and dialectic. Unjustified bias exists but is hard to evidence. Transparency is the main way to try to address this, so that fundamental theoretical assumptions and their consequences in application are fully understood and evaluated.
Research in economics highlights the difficulty of divorcing ethics from a specific relationship to the discipline within which the research is framed, since it is so close to politics and has inevitable societal outcomes. But it is not alone in this respect. Those involved in management research often claim a priority for the funder or commissioner over any other "overarching" ethical principles. Pharmacologists often dismiss any possibility of research in "alternative medicine" (such as homeopathy) as necessarily "unethical" since, unlike their discipline, it is not an "objective science." Some artists have insisted that art – as a profession and not really a discipline – is either "immune" to moral pressure or "must" ignore it to fulfill the true requirements of art. For example, an art exhibition that opened in Berlin showcased 20 people who "died for their convictions," including the French jihadist Ismael Omar Mostefai and Mohammed Atta. The exhibition was first shown in Copenhagen and claimed to be addressing what martyrdom actually means: https://www.theguardian.com/world/2017/dec/05/shock-in-france-as-berlin-martyrs-art-show-includes-bataclan-attacker-and-911-pilot. Again, some commentators would argue strongly against calling this "research" at all, and would insist that art must be allowed to explore fully what such concepts mean and what their consequences are. There is no doubt that such work is exploratory and ethically challenging. As arts education moved into university departments and sought research funding from conventional national funding agencies, artists began to encounter the same sorts of difficulties that social scientists who might employ coercive or deceptive practices have had to confront. This has been particularly the case for those working with interactive and performance art. Munro and Dingwall (2009) report on illustrative cases of deceptive practice in arts-related research.
In some forms of interactive research, where responses from participants are part of what is under study, the methodology requires that participants not be fully informed, and in performative research the element of "surprise" is inherently part of what is being investigated. In both cases the participant's engagement and responses can only be authentic if they remain under-informed. In the Munro and Dingwall examples, the funders in one case made no objections, so neither did the artist researcher; in the other case, since ethics review had not been sought, the university authorities withdrew permission for the work to be used in the researcher's PhD thesis – by which time any "damage" had already been done. Munro and Dingwall advocate that artists conducting such work engage with the larger community beyond their own discipline/profession to explore the ethical validation of such research activities. Other professions, such as engineering, have equal difficulties in deciding when practice constitutes research and when the typical ethical issues apply. As Michael Davis points out in this section, much of engineering is effectively "field research," and while the outputs may be relatively easily determined, the outcomes, or full effects, may not. The many and growing subdivisions within engineering that overlay other professions and disciplines further complicate the issues related to ethics (de Winter and Dodou 2017). It is here that the "prioritizing" of risks, harms, and benefits comes to the fore. For example, in biomedical
engineering the research action, per se, may be conducted by a medical researcher, supplemented by the engineer constructing the treatment or therapeutic "device" – the engineer may see "the public interest" as paramount, while the medic must put their patient subjects first. In nuclear engineering it is chastening to be reminded that the Chernobyl nuclear disaster occurred during what was essentially a "field test," in which flaws in design and procedure together with human error delivered a disastrous series of events. Edith West delivered a chapter on nursing research for this section of the handbook. Again, I have always had a problem deciding whether we should treat nursing as a science, a discipline, or a profession – or all three simultaneously – and then puzzling out what that means for nursing research ethics. My concern has been that nurses do, and are encouraged to do, research, but their professional ethics can sometimes be in sharp contradiction to their research ethics. Consider the difficulties of managing the potentially conflicting roles of "nurse, researcher and/or scientist" at the same time (Holloway and Wheeler 2002). In all cases of such complex engagement of professions with "the real world," it is never enough for researchers to operate only within the technical capacities and knowledge of their own field. The consequences of much modern, innovative, and technologically complex action are felt in the physical and social environment, in nature, and in public health. Professionals can no longer remain isolated in the ethical competences deemed necessary in their "own" fields. It is vital they remain alert to the observations and concerns to be found across the range of disciplines that examine the physical and the social worlds.
An Example: Oral History and the Boston College Affair

It has long seemed to me that the range and variety of research within the discipline of history offers fruitful insights into how best to address ethical issues that cross anthropological, ethnographic, legal, and political fields of interest, for example. No field is more complex in this regard than oral history. A comprehensive and especially challenging illustration can be found in the Belfast Project, created in 2001, within Boston College's (Massachusetts) Burns Library, to collate an oral history of The Troubles in Northern Ireland. Members of paramilitary groups on both the Republican and Loyalist sides were interviewed by Ed Moloney (an Irish journalist) and Anthony McIntyre (a former IRA member). The project was intended as an archive of The Troubles, with transcripts and recordings stored at Boston College. It was meant to be kept secret, since being a former member of any paramilitary group was still a crime, even after the Good Friday Agreement. Interviewees were promised that their interviews would be closed and sealed until their death, with no one even knowing that they had participated or that the project existed. However, after the death of one narrator in 2008, when related books and articles were released, some of the contents of the Belfast Project entered the public domain.
It became known that two of the interviewees had mentioned the 1972 abduction and killing of an alleged informant, Jean McConville, a mother of 10 children. Commentators and some of her children began to suspect that the taped interviews could provide knowledge of her death and the site of her burial. In 2011, the US Justice Department, acting on a request from the Police Service of Northern Ireland, subpoenaed Boston College for copies of the tapes. The Northern Ireland police only demanded access to the recordings after Dolours Price (one of the participants) gave an interview to a newspaper suggesting that information about the murder of McConville could be gleaned from the tapes. The College appealed on the grounds that the information available did not cast enough light on the McConville killing. It sought further judicial review to establish whether "…the value of the interviews to the underlying criminal investigation … outweighs the interest in protecting the confidentiality of academic research materials" (Editorial 2012). But the University dropped its challenge after US District Court Judge William Young ruled against it. This left the researchers to continue the appeal themselves. The researchers argued that: "The archive must now be closed down and the interviews be either returned or shredded since Boston College is no longer a safe nor fit and proper place for them to be kept" (Geoghegan 2012). As would be expected, some of the Loyalist participants sought the return of their tapes amid fears for their own safety, and threats were indeed made to the interviewers and their families. The researchers further claimed that they had asked for and received guarantees that all information would remain confidential. Boston College stated that it did not guarantee this, and the head of its library claimed they had warned the researchers that "…the library could not guarantee the confidentiality in the face of a court order." The researchers argued that this was not stated in the written/signed agreements. After many appeals the university released transcripts of Belfast Project tapes to the British government. The researchers claimed they were misled by Boston College – while the latter stated they could only guarantee protection to the extent of US law. It is evident that the complexity and sensitivity of work such as this appear not to have been adequately considered and the potential ethical and legal consequences not fully anticipated – though perhaps even with full and in-depth consideration these problems could not have been avoided. Those who have worked in the field of research ethics for some time read about this case with some incredulity. To the condemnation of Institutional Review Boards (IRBs) as potentially obstructive to research progress, we now have to add the concern that if they are to be more facilitative, they must still act to protect researchers and the institution that houses them from a raft of risks and threats, which in some cases has always included the potential for serious bodily harm. To say it might appear that the researchers and the College acted with some naivety is not to demean the value of such work. But there are some fundamental principles in applied ethics that all researchers and their institutional monitors must recognize. Ethical research outcomes are dependent upon all stakeholders and participants in a project. If any one does not uphold the relevant values and
principles, all are at risk. Often so-called tensions in the balance between truth, knowledge, and the public interest are in reality “conflicts.” By seeking one outcome you are threatening the others. This case illustrates in the extreme that researchers and the institutions that house them are faced with dilemmas, which they must assess along with an assessment of the full range of risks entailed, and their assessments are always likely to be fallible. The problems could have been avoided by not doing the project in the first place, and this may have been the most sensible, even “ethical,” decision. But the knowledge resulting from it would not have been produced. Such dilemmas confront many sensitive and challenging research projects. It is well understood that the law and morality are not coterminous. Some legal actions might be considered immoral, while many moral behaviors may contravene the law. Thus researchers cannot guarantee confidentiality if the law requires disclosure of findings that could help resolve criminal matters (or indeed matters that threaten civil security). The institutional IRB cannot do that and, indeed, has a responsibility to ensure researchers are fully aware of their legal situation even if such caveats are not written into contracts – though they should be. At the same time the researchers should have ensured that they were aware of the law and, in the last analysis, it is their responsibility to make the risks clear to their informants. It is clearly the case that material collected in an oral history project can and might be used as evidence in a criminal investigation. And while the First Amendment to the US Constitution protects journalists from having to surrender their notes, it is not necessarily seen to apply equally to academic researchers. This case is perhaps much more complex than many others within historical research, and even within oral history.

[These materials] are of interest – valid academic interests.
They’re of interest to the historian, sociologist, the student of religion, the student of youth movements, academics who are interested in insurgency and counterinsurgency, in terrorism and counterterrorism. They’re of interest to those who study the history of religions.
– Judge William G. Young (sourced from https://bostoncollegesubpoena.wordpress.com/)
The overlapping interests and concerns of different disciplines, security forces, the legal system, politics, journalism, and the survivors of The Troubles demonstrate the full complexity of cases such as this and counsel caution to oral historians and those with direct interests in the consequences of such research engagements. Intent is crucial to the choice of a research engagement as well as its conduct. Historical truth and knowledge are to be valued – but perhaps not at the expense of current public safety. Perhaps there are times when “the dead should be left to rest” – it is an ethical research decision whether or not to do so. It is true that the law needs to accord some protection to researchers, and it should at least equate with the protections accorded to investigative journalists. In this case there is more than a public interest issue: there are the very personal concerns of Jean McConville’s children. The law and morality both find themselves in “tension” when the public interest (say peace in Northern Ireland) finds itself conflicting with the resolution of a serious crime – something clearly more than personal vengeance. A public reconciliation policy does nothing to reconcile the emotions of a family who lost a mother.
748
R. Iphofen
Oral history can offer tremendous insights into the nature of conflicts such as the Northern Ireland Troubles, but similar work of this nature has been put in jeopardy due to how this case was handled. Part of the problem here is that the criteria for acceptable questioning in oral history interviews are very different from those allowed in police interviews. Oral history interviewers are allowed to “lead” informants in ways that the police may not in order to discover a “truth.” In the same vein respondents may assume, or be left to assume, that the legal consequences of their responses are minimized by a promise of confidentiality which may have to be hard fought for – and this leaves open the possibility that respondents may feel even less of a need to tell “the truth.” This case is far from over. The ramifications from the disclosures that occurred in 2010 were felt in the most recent trial in October 2019, in which the tapes were deemed inadmissible by a court (see https://www.belfasttelegraph.co.uk/news/northern-ireland/second-prosecution-based-on-boston-college-tapes-in-doubt-after-ivor-bell-found-not-guilty-38606203.html). The battle to protect the confidentiality of the archive goes on. But researchers and institutions will have to think twice about the guarantees that can be offered to oral history respondents when such sensitive cases are involved. Lessons have to be learned that could guide similarly complex and sensitive research in the future. (More can be found out about the Belfast Project at https://www.bbc.com/news/uk-northern-ireland-27238797, at http://oralhistory.columbia.edu/blog-posts/lessons-from-the-belfast-project, and at https://bostoncollegesubpoena.wordpress.com/.)
Conclusions

The reason this section of the handbook was included was to allow the “narrower” focus that a specific discipline might claim for its research goals, and the specific issues that emerge from its own dedicated research methods, to be heard. Hence each of these chapters discusses generic ethical issues, but then focuses on those dilemmas in research that could claim to be distinctive within its field of research. Obviously we could not cover all disciplines or research professions, so this section must stand as a sample of the range of available disciplines. Some are more obvious in terms of their routine considerations of ethics and integrity and some are coming relatively fresh to these concerns. In communication with the Editorial Board for this handbook, a key contributor, Martyn Hammersley, made the following points:

. . . in some fields there may be no felt need for any discussion of ethical issues specifically tailored to them, but there are others where this certainly may be required. For example, there are distinctive issues, or at least distinctive ways in which general ethical issues arise, in anthropology and in economics. In the case of the former, issues surrounding neocolonialism may surface much more sharply than in other disciplines; in the latter, the role of many economists as policy advisers, and their considerable influence, raises questions about the limits to what they can legitimately contribute, their culpability for policy failures, and potential conflicts of interest. Furthermore, given that readers may well need help relating
general ethical principles and issues to their own fields, such discussions may be helpful pedagogically even if their field is not that distinctive in terms of ethical issues. (E-mail communications with Martyn Hammersley 2019)
It is to be hoped that the cross-fertilization of ideas, approaches, and solutions offered here will help all readers improve the way they conduct their research. I would encourage readers to take what they seek from the chapter on their “home” discipline or profession and then do some comparative study by having a look at a selection of the problems posed by and in other spheres of interest. Artists should talk to engineers, and both should talk to nurse researchers. Some chapter authors see their area of operations as multi-method (e.g., psychology) while others see it as multidisciplinary (e.g., education). Such an overlap of methods and disciplines only adds further complications to the ethical decisions that need to be made in the field. Several chapters raise the vital question of the balance between professional and personal ethics, together with the balance between both and societal perceptions of morality. The role of the Holocaust and its influence first on medical and, then, research ethics more generally demonstrates this forcefully. It would be hard to imagine any similar justification for the dominance of political ideology over professional/researcher obligations in the contemporary climate. But there have been so many concerning challenges to assumed acceptable moral behavior on the part of political authorities that such a threat may always rest just beneath the surface. We must remain on our guard – personally, but also with the support of strong professional associations.
References

de Winter JCF, Dodou D (2017) Human subjects research for engineers: a practical guide. Springer, Cham
Dench S, Iphofen R, Huws U (2004) An EU code of ethics for socio-economic research. Institute for Employment Studies (IES), Report 412, Brighton. Retrieved from www.respectproject.org
Editorial (2012) Times Higher Education, 1 March 2012, p 12
Geoghegan P (2012) If trust is lost, future promises naught but troubles for research. Times Higher Education, 19 January, p 28
Holloway I, Wheeler S (2002) Qualitative research in nursing, 2nd edn. Blackwell, Oxford
Iphofen R (ed) (2017) Finding common ground: consensus in research ethics across the social sciences. Advances in research ethics and integrity, vol 1. Emerald Publishing Limited, Bingley
Munro N, Dingwall R (2009) Ethical regulation: a challenge to artistic innovation? A-N Magazine, September, pp 5–6
A Professional Ethics for Researchers?
43
Nathan Emmerich
Contents

Introduction
Are Researchers Professionals?
Is Research a Profession?
What Basis for a Professional Ethics of Research?
Rethinking Research Ethics as a Professional Ethics
Conclusion
References
Abstract
Historical research has shown that, at its inception, research ethics was conceived as distinct from existing discourses of professional ethics. Subsequently, this distinction has been maintained and, as a result, the discourse of research ethics appears to be external to and independent of the practices it normatively analyses and comments upon. This chapter challenges these founding preconceptions and considers whether research ethics can be understood as a professional ethics. To this end, it examines the criteria sociological research identifies as constitutive of a profession, and while one might conclude that research is obviously not formally instantiated as a profession, some of the sociological criteria have significant relevance. In this light it is argued that we might rethink the notion of research ethics in terms of a professional ethics. To do so would be to more clearly embed ethical discourse in the practice(s) of research, something that is consistent with the current turn to integrity.
N. Emmerich (*) Australian National University, Canberra, Australia e-mail: [email protected] © Springer Nature Switzerland AG 2020 R. Iphofen (ed.), Handbook of Research Ethics and Scientific Integrity, https://doi.org/10.1007/978-3-030-16759-2_34
Keywords
Research ethics · Professional ethics · Professions · Professionalization · Social institutionalization · Social organization
Introduction

In a previous essay, I have argued against the seemingly common, if more or less implicit, understanding of research ethics and professional ethics as distinct from one another in some reasonably strong sense (Emmerich 2016b). As the accounts of the historical emergence of research ethics offered by Schrag (2010) and Stark (2011) show, the idea of a research ethics emerged at a particular time and place – the National Institutes of Health (NIH) in Bethesda, Maryland, USA, circa the 1950s. At least in part, the notion of a research ethics was designed to circumvent the multiple and potentially conflicting professional ethics of those working in this context. It was also conceived so as to address a new class of patient – the normal or healthy patient – on whom researchers were increasingly reliant. As such individuals were not undergoing any treatment, they could not be seen as entering into a doctor-patient relationship. This was understood to call into question the healthcare professional’s fiduciary duties, and, particularly when understood in the aftermath of unethical research by medical doctors during World War II, this was taken to mean that the norms of professional medical ethics could not be relied upon as the basis for protecting such patients. Tracing the roots of research ethics to the complex challenges faced by researchers and those managing them at this time in the NIH can be taken in a number of ways. One might think that while research, particularly biomedical research, is undertaken by those commonly thought of as professionals, it is distinct from their constitutive professional practices. Therefore, both the practice of research and its associated ethics are rightly conceived of as distinct from the discourse on professional ethics, and a concern external to the profession of medicine as a whole.
(Of course, one does not mean to imply that an internal discourse on professional ethics is or can be fully autonomous or entirely uninfluenced by external factors. This is not the case, and it is right that the discourses surrounding medical ethics and research ethics are open to external inputs. The question is, however, where we consider the locus of debate to be.) However, one might alternatively think that researchers either were, or subsequently became, a de facto class of medical professionals. Consider, for example, medical and healthcare professionals who specialize in biomedical research, who pursue a career in research, and who do so in locations where such work commonly takes place. In this context while research ethics, including its associated administrative bureaucracy, was initially aimed at preempting potential conflicts between existing specialisms and their associated professional ethics, it might now be taken as reflecting the ethics of this new professional class. As such it can be understood as moral discourse that is internal to a particular practice or professional pursuit.
43
A Professional Ethics for Researchers?
753
This distinction may or may not have significance for the way in which (biomedical) research is pursued, how the ethics of research is understood, and, perhaps more significantly, its implementation, administration, and governance. However, the difference between these two perspectives might be thought of as taking on additional significance when it comes to thinking through the ethics of social research. For the most part, social researchers are researchers first and foremost. Thus, the ethics of social research is not required to preemptively mollify conflicts between allegiances to preexisting codes of professional ethics. Indeed, there is no reason to think of the activity of research as being in conflict with the responsibilities of some other (professional) practice. Quite evidently there are examples where this might not be the case: where a management researcher is engaged in a reflexive action research project, say, or when a psychologist or a linguist undertakes research with their clients. However, the point here is to address the ethics of research qua the professional researcher and, on that basis, to reimagine the way we think about research ethics. It is, then, worth more fully thinking through the idea of an ethics of research in terms of a professional ethics, or so one might think. What follows can be considered an attempt to do so, albeit in admittedly broad terms. Presuming that this account is convincing or, at least, provides the reader with further food for thought, the other chapters presented in this section can be viewed as doing the heavy lifting: the more detailed work required when it comes to the more substantive ethical issues encountered by professional researchers working within particular disciplines or utilizing specific methodologies when undertaking research. This chapter proceeds as follows.
First, the suggestion that researchers are professionals is considered before turning to the more substantive question of whether research is a profession. While it is clear that research is not institutionalized in a manner that reflects the true or classic professions of medicine or law, it is argued that there is some reason to construe research as a professional activity. In light of this conclusion, the basis for a professional ethics of research is then examined, namely, whether or not one can think that an (implicit) social contract obtains between researchers and the societies within which they pursue their activities. One final point, or set of related points, is worth making here. For the most part, this chapter is concerned with academics, those who pursue their research within universities. However, just as there are medical professionals who practice medicine privately or who are employed by private corporations other than hospitals or similar institutions, there are researchers who are not employed by universities, who pursue their work independently, or who work for particular nonacademic organizations. In this context, one might think that pursuing research ethics as a professional ethics would more clearly include such individuals than is currently the case, particularly since the administration of research ethics tends to be associated with institutions dedicated to the pursuit of academic research. Nevertheless, questions remain,
and it may well be that the professional ethics of researchers should be understood as sensitive to the context of practice. This is something that applies to other professional practices; the ethical responsibilities that attend patient care differ from those that attend healthcare professionals working for, say, insurance companies or state bodies such as those that administer state benefits. As such, situational variation presents at least as much of a challenge to the discourse of research ethics as it does to the notion of a professional ethics of research.
Are Researchers Professionals?

The notion that the ethics of research should be understood as a professional ethics presumes that conducting research entails being a professional. The claim that researchers are professionals can be defended with relative ease, but it turns out to be the wrong question or, at least, an incorrect place to start an enquiry into the viability of a professional ethics for research. Nevertheless, consider the fact that a great number of the activities that are undertaken in contemporary society can be thought of as being done in a manner that is more or less professional or with greater or lesser degrees of professionalism. Given the present meaning of the term “professional,” it makes sense to say that a plumber or some builders acted in a professional or unprofessional manner or that they acted with professionalism. Here being professional – as opposed to being a professional – does not require one to be a member of a profession. Rather, the notion is coterminous with certain contemporary behavioral standards, norms, or expectations that relate to those acting within the marketplace or the public sphere. Such standards reflect an ethic or an ethos that, at least to some degree, involves the suspension of certain personal, cultural, and political norms and the adoption of the kind of tolerant or nonjudgmental standpoint that might be termed cosmopolitan. While it is significant that normative concerns are central to the contemporary idea of being professional, or acting with professionalism, this does not mean it is tantamount to a (supra)professional ethics. Nevertheless, it is something that can be applied to researchers. Whether or not they are interacting with academic or administrative colleagues, or with research participants, one can and should expect researchers to act professionally.
Nevertheless, affirming that researchers should be professional when conducting research does not get us very far with the matter at hand; it concerns broad standards of behavior applicable to a wide range of persons engaged in a wide range of (largely public) activities. The notion of a professional ethics of research is a more specific idea, and one tied to the practice of research and of being a professional, in the sense of being a member of a profession. Thus, the relevant question is whether or not research can be thought of as constituting a profession.
43
A Professional Ethics for Researchers?
755
Is Research a Profession?

On the face of it, it would seem that research cannot be thought of as a profession. While some argue that research is research, to my mind this is not the case. Following Schrag (2014), one might think that research – or, even, human subjects research which, after all, is what we are primarily concerned with when it comes to research ethics – contains a multitude. It is diverse and should not be reduced to a singular activity or practice; it is not any one thing. As such, one cannot simply maintain that researchers are all members of the same profession on the basis that they are all engaged in the same basic task. Nevertheless, rather than abandon the notion entirely, it is worth pursuing the idea a little further. In what follows, the focus is on what might be called the formal account of professionalization, i.e., one that is focused on the social and historical processes that created the formal or classical professions, such as medicine and law, and which began in the eighteenth century. This can be contrasted with the informal or cultural account, which offers insight into the broader twentieth-century processes that foreground the notion of professionalism noted in the previous section (cf. Evetts 2003, 2013). The reasons for doing so are multiple. First, academic institutions predate the formal professions and, arguably, influenced their development, not least insofar as the terms professor and profession are related to one another. Second, the centrality of ethics, and the need for it to be given a distinct expression, primarily in a codified form, in accounts of the creation of formal professions, are of particular significance to the arguments made here.
Third, while it seems clear that informal processes of professionalization have had an effect on academic institutions and research, if one were to attend to this, then it would unavoidably lead to a focus on developments in the management and administration of such organizations: the professionalization of the managerial class as opposed to the class of “coal face” academics, even if they have also been affected by these developments. In turn, this would result in a critique of research ethics as a bureaucratic behemoth; an administrative and managerial enterprise, masquerading as something else, particularly in the context of social research. Aspects of this point have been and continue to be well made elsewhere (cf. Hedgecoe 2016). The purpose at hand is to try and reclaim research ethics as something that can be reoriented so as to make a more positive contribution to the practice(s) of (social) research and researchers. The formal account of professionalization and the notion of professional ethics as a collective rather than managerial enterprise offer the best opportunity to advance this agenda. For there to be a profession in the formal sense of the term, what is required is the identification of a (set of) practice(s) such that a coherent occupational group can be – or perhaps needs to be – formalized or institutionalized in a sociopolitical sense. Thus, the (historically) “ideal professions” of medicine and law involved a certain set of practices, the standards for which are set and ensured by the relevant professional bodies. These practices also provide or support central social goods; they enable the health of citizens and the rule of law, prerequisites of modern democratic societies and nation states. It is also worth noting that the process
756
N. Emmerich
of becoming a professional – of joining these professional bodies – involves individuals undertaking (extensive) educational programs that provide the relevant credentials, followed by a period of “on-the-job” training that one might consider as a kind of apprenticeship. Such processes can be perceived in a number of modern occupational groups including nursing, accountancy, architecture, and teaching. Indeed, one might also apply such thinking to the police, the military, and the clergy – the latter two also being occupations that were historically considered to be professions. It is in this context that one might consider whether or not researchers are professionals in the relevant sense. Although research is not institutionalized in any singular form, it would be glib to say that research is not a profession on this basis alone. In the first instance, just as there are various kinds of researcher whose practices differ from one another – those who work within the natural sciences, the social sciences, and the humanities, for example – one might compare these to the different specialties that constitute the medical profession as a whole. Indeed, one might draw attention to the differences between various healthcare professionals and note that where we draw the boundaries of a profession, and the line between one profession and another, are not timeless metaphysical truths but a function of the socio-historical developments that produced them. Similar points can be made about the practice of research and researchers, particularly with regard to the emergence of disciplinarity (Wellmon 2015) or, to put it another way, specialization. The social institution of the university emerged prior to the development of disciplinarity, something that continues to be reflected by the fact that the terminal degree of almost all fields and institutions is the PhD – the Doctorate in Philosophy.
Thus, while scholars who work in particular fields or disciplines have forged connections with those in other institutions, including through the formation of societies and associations – examples of which range from historically significant national bodies such as the Royal Society, the Royal Society of Chemistry, the British Academy, and the American Academy of Arts and Sciences, to well-established and influential organizations such as the British Sociological Association, and to smaller groups of significance to particular fields, such as the Socio-Legal Studies Association and so forth – the fact that researchers had a preexisting and stable home or employer has meant the absence of any sociopolitical need for formal recognition, i.e., the formal establishment or institutionalization of research disciplines as professions. Thus, while disciplinarity emerged around the same time as medicine underwent a formal process of professionalization, beginning in the late eighteenth and continuing through most of the nineteenth century, this was predated by the university as a sociopolitical institution. Indeed, the idea of an academic, scholar, or professor, i.e., someone who professes their knowledge, predates the subsequent emergence of the professional. Thus one might take the view that although academics have never been (explicitly) perceived as members of a profession, some facet of the notion has always been present within the culture of academia and the role that they play in society. Furthermore, the development of disciplinarity, specialization, and the processes through which they have become
institutionalized within the universities and across the practices of knowledge production arguably reflects conceptions of professionalization as a social and sociological phenomenon (Freidson 2001). Let us consider each of the points in Wueste’s composite summary of the defining features of professions:

1. the centrality of abstract, generalized, and systematic knowledge in the performance of occupational tasks;
2. the social significance of the tasks performed by professionals; professional activity promotes basic social values;
3. the claim to be better situated/qualified than others to pronounce and act on certain matters;
4. the claim that professional practice is governed by role-specific norms – a professional ethic; and that,
5. most professionals work in bureaucratic organizations/institutions (Wueste 1994, 2013, 3).

Certainly, the production of abstract, generalized, and systematic knowledge is central to research. Indeed, in his comparison of academic sociologists and those who engage in what he presents as related occupational practices (such as journalists, novelists, and dramatists), Strong distinguishes between analysts and practitioners, suggesting this is relevant to the fact that the former “have relatively weak professional organizations” (Strong 1983, 63). It seems, then, that those who make use of knowledge – which is to say practitioners – tend to be members of professional organizations, while the same cannot be said for those who produce knowledge, the defining task of research as an occupation or practice. Nevertheless, one might note that having and using one’s knowledge is central to conducting research; it is central to the production and reproduction of further knowledge. For example, consider the methodology of a discipline or field. This can clearly be considered as a form of abstract, generalized, and systematic knowledge as well as something that is used or put into practice in the course of conducting research.
One might, of course, take a pluralist approach to the methodology of a particular area of enquiry. Psychology, for example, can be pursued in a positivist or non-positivist manner, and it can be both a quantitative and a qualitative endeavor. More broadly, one can note that there may be tensions and disagreements about matters of methodology as well as substantive findings within any one field. Nevertheless, this does not undermine the fact that abstract, generalized, and systematic knowledge is central to the proper performance of research; it is merely to note that attending to and revising such knowledge is, to greater or lesser degrees, an ongoing task of disciplined research. It also seems relatively noncontroversial to think that research has social significance and promotes basic social values. Were this not the case, it would certainly be difficult to imagine that society would fund research and researchers to the extent that it does. Whether scholars of the humanities and social or natural sciences, the vast majority of researchers would see themselves as working toward the betterment of society. This can be understood in terms of
their substantive research but can also be applied to the teaching activities the majority of researchers undertake as part of their broader role. This point is related to Wueste’s next defining feature. Researchers often claim to be better suited and qualified to pronounce on particular issues or matters that are related to their field of expertise. Certainly, they undergo extensive programs of education and, in many fields, periods of (quasi)apprenticeship to other researchers. Indeed, researchers are often called upon to provide expert testimony in political and policy-making fora as well as in public debates. They do not, of course, act in the same way as other professionals, but insofar as they are better situated and qualified to undertake certain actions – including conceiving of and pursuing specific research projects, reviewing the manuscripts of other researchers, examining doctoral theses, externally examining and validating degree programs at other institutions, playing a role on grant boards, and so forth – they occupy a privileged position. To some degree, the fourth of Wueste’s defining features of a profession is what is at stake in this chapter. There are, of course, broader role-specific norms involved in the practice of research – the norms of knowledge production, methodology, and writing style, for example. Nevertheless, both our concern and the notion expressed in this criterion are whether or not the practice of research is governed by a set of role-specific norms and whether there is or can be a professional ethics for research, the questions presently under discussion. The final point is, however, more easily affirmed. It seems clear that researchers tend to work within bureaucratic organizations or institutions. If nothing else, universities are examples of this phenomenon. However, more than this, the task of being a researcher involves one in broader bureaucracies – those associated with distributing research funding, for example.
There is, then, no obvious reason for thinking research should not be considered a profession – or, at least, no reason to reject the idea outright – on the basis of Wueste’s conclusions, drawn from the sociological literature. Nevertheless, it is clearly the case that researchers do not belong to a formally institutionalized profession and are not recognized as such. Furthermore, research is constituted by a great deal of theoretical and methodological diversity, such that one might question whether a unitary profession can, or ought to be, identified. Even if one were to advocate for the institutionalization of research and researchers as a profession, it would seem that the best one could hope for would be the creation of a set of more or less affiliated (multi-)disciplinary groups, perhaps based on the existing model provided by the learned societies. Pursuing such a project is, however, unlikely to bear significant fruit. Fortunately, it is not necessary to promote the formalization of research and researchers within a profession in order to think through the possibilities for a professional ethics of research. Whether as a set of general propositions or, as in the case of the following chapters in this part (VI – Disciplines and Professions), in more specific terms, what is important is the normative basis on which such an ethics can be understood and articulated.
43
A Professional Ethics for Researchers?
759
What Basis for a Professional Ethics of Research?

In their essay “Revisiting the Concept of a Profession,” Tapper and Millett note that “a profession involves: an ideal of service and responsibility to the public good; virtue on the part of professionals; and a special sort of fiduciary obligation” (2015, 4). While it is commonplace to present the moral obligations of professionals in terms of their special responsibilities toward their clients, the social license of a profession is something broader and entails a collective articulation and recognition of the goods offered by a profession. Thus, one can think of health as the broad public good promoted by the healthcare professions, and of promoting the rule of law – which is to say the fundamental basis on which contemporary society is rendered possible – as the public utility served by legal professionals. In this context, one might consider what, if anything, is promoted by research and researchers. The short answer is, of course, knowledge. While one should be reluctant to constrain the pursuit of knowledge by reference to the public good, one might also note that it is not possible to place a prior constraint on the pursuit of knowledge, given that one does not already know what one is endeavoring to find out. One cannot know in advance if or how a research project will serve the public good, at least not with certainty or precision. Indeed, the utility of research may not even be immediately apparent at the conclusion of the project; appreciating the value of knowledge and its creation requires the adoption of a longer-term perspective. Consistent with the diversity identified above, we might acknowledge that the pursuit of knowledge in different domains offers significantly different goods. Broadly speaking, one might distinguish between the natural and the social sciences and the humanities. Each offers its own distinctive form(s) of insight into the world and ourselves.
Equally, we might also think that knowledge is worth pursuing for its own sake – that it is its own good. (While he speculates about the end of knowledge in our global or globalized era, Delanty (1998) avers that knowledge is the end of the university in the context of the Enlightenment. Fuller (2003) also speaks of the university as a social technology for the production of universal knowledge.) Furthermore, one might think that the public good served by researchers is not simply the pursuit of new knowledge. Rather, as scholars and academics, researchers are the custodians of knowledge, something that maps onto ancient and medieval conceptions of the university, conceived as an autonomous institution (Minogue 1973). In this role researchers – academics – have pedagogic responsibilities, something that we might interpret not only in relation to a university’s formally registered students but as a broader social responsibility to profess one’s knowledge in public. Of course, the term professor and the idea of professing one’s knowledge are not unrelated to the notion of a profession or professionals, which is to say, those who use their knowledge in service of their clients. On this view, then, researchers are positioned as being in service to, and having a responsibility toward, knowledge, understood as a public good. In this context the fiduciary obligations of researchers relate to the knowledge they hold in trust
and the sociopolitical role it plays. Consider, for example, the role of scientific knowledge in contemporary society. As Collins (2010) points out, we should not make the mistake of thinking political debates can be fully determined by scientific knowledge. Nevertheless, scientific testimony – or knowledge, knowledge claims, and knowledge production more generally – is not something that should be politicized (Collins et al. 2010). With regard to bioethics, this is something discussed elsewhere (Emmerich 2018a). In this context, however, one can understand the fiduciary responsibilities of researchers as, first, ensuring this does not happen in their own work and, second, offering counter testimony if and when they perceive it happening in broader public debates. While these fiduciary responsibilities can be reflectively, consciously, or conscientiously pursued, they are nevertheless a matter of virtue: the disposition or habitus of researchers. (One could, at the mention of virtue, present an account of the professional ethics of researchers in neo-Aristotelian terms. Certainly, MacIntyre (1981) provides the resources for such a task. However, I am disinclined toward such a project, which strikes me as overly ideal and requiring a top-down imposition of both virtue and the nature of what it is to correctly pursue (scientific) research. As a result, one might prefer to think in terms of the (imperfect) dispositions (or habitus) of researchers, the social structures of a field (or discipline), and notions of immanent (self-)critique. For a related discussion regarding the term phrónēsis and its use in contemporary research ethics, see Emmerich (2018b).) As revealed by historians of science (Shapin 2008; Daston and Galison 2007), the scientist (and, by extension, the researcher) is constituted by a particular set of dispositions, a habitus that encapsulates an epistemological ethos (Emmerich 2016a).
This is, of course, something that is variable both historically and across the social space of intellectual fields or disciplines. The habitus of a scientist and that of an English literature scholar will differ significantly, while those located in other intellectual fields may be more closely aligned, as in the case of a (natural) scientist and an (analytic) philosopher of (the natural) science(s), say. We might also find significant differences within fields that are, on the face of it, more closely related – between qualitative and quantitative social scientists, for example. While such differences are significant when it comes to the specifics of ethical practice in particular disciplines and domains, and therefore present a challenge to the notion of a professional ethics of research as a singular or unitary phenomenon, they need not be taken as ruling out the idea entirely. Rather, consistent with any properly conceived ethics of research, it is simply the case that a professional ethics of research must be understood in pluralist terms. At least in part, this is because the normative dimension of research does not just concern the kinds of issues normally discussed under the rubric of research ethics but is fundamentally linked to the practice or practices of research. These include specific issues, such as publication ethics, and broader notions such as research integrity. This is an important point because, as MacIntyre (1981) has it, there is a strong connection between a profession and its practices. This is because of the way that the (internal) goods of a profession are instantiated and pursued through its constitutive set of practices. However, as Millett (2016) notes, the notion
of practice is insufficient if we are to fully conceptualize the notion of a profession. As discussed above, there needs to be some form of institution that is central to the social existence and organization of the profession and its practices. Millett (2016) also argues that a profession and its host society enter into a social contract, one that enables the goods offered by a profession to be pursued by circumscribing the way(s) in which they can be pursued and by whom. The aforementioned notion that the university holds knowledge in trust and does so on behalf of society might be taken as something that reflects this notion of a social contract. It might also be perceived in the notion, common to American universities in the eighteenth and nineteenth centuries, that a large part of a university’s purpose was the moral education of its students: the development of character (Reuben 1996, 75). However, while Dewey (and others) continued to argue that universities should see themselves as vehicles for moral education well into the twentieth century, and while echoes of this can be perceived in the contemporary notion that attending university should entail undergoing some form of education for (global) citizenship, the notion is not central to the concept of the research university. Wellmon (2015, 5), for example, traces the advent of the research university to early nineteenth-century Germany, with distinctive features of this institutional approach to the organization of enlightenment subsequently being reiterated elsewhere, notably in late nineteenth-century America. Furthermore, according to Reuben (1996), the making of the modern university involved a marginalization of morality.
Nevertheless Wellmon also notes that, as something that was organized around the institutionalization of a particular division of intellectual labor, namely, disciplinarity, the advent of the research university entailed both a process of specialization and one of professionalization (Wellmon 2015, 148–49 & 233). While it is arguably the case that a major imperative in the organization and structure of contemporary research activities, particularly within universities, seeks to go beyond disciplinarity and toward greater interdisciplinarity (Riesch et al. 2018), it seems unarguable that the professionalization of research and researchers is an ongoing project. Of course, the nature of this professionalization is inimical to the image of the lone scholar and, insofar as this is the case, to the autonomous, individual, and self-directed professional. However, as the sociological criteria for professions demonstrate, relying on an overly individualist account of the professional would be misguided. Professionals are not simply individuals, but representative of social organizations, members of institutions to which they are accountable. Similar points can be made with regard to researchers. Consider, for example, processes of peer review for the purposes of publication, the allocation of grant monies and evaluation exercises. The social organization of disciplines or fields of enquiry means research is a team or community effort. Furthermore, while it may not (always) rely on peer review, the practice of research (ethics) governance and its development over the past few decades within universities around the world could be taken as further exemplifying this point. Certainly, the ethical accountability it seeks to engender is a social and collective endeavor. Furthermore, the way researchers are considered to be ethical actors and held responsible for
their activities could be seen as reflecting the way in which other professionals are understood to be responsible for their activities and subject to codes of ethics. However, the ethical governance of research is, in many cases, not undertaken by a researcher’s direct peers, but by researchers with a broader range of disciplinary profiles as well as others, including individuals who do not conduct research. Although nonprofessionals and laypersons are, today, included in the governance structures of formal professions, professional organizations nevertheless exercise a high degree of autonomy over the way they are governed and, in particular, the setting of ethical standards to which they are accountable. This is not the case for researchers. Indeed the notion that researchers should have primacy when it comes to setting the ethical standards of their field is inimical to the way research ethics is presently conceptualized and understood. Nevertheless if there is indeed a sense in which researchers can be understood as professionals, then exploring the ethics of research as a form of professional ethics may have merit and go some way to combatting the problems generated by research ethics in its current guise.
Rethinking Research Ethics as a Professional Ethics

As the foregoing analysis makes clear, the social organization and institutionalization of research is such that it cannot be thought of as a (formal) profession or set of professions. Nevertheless, we have also seen that there is some reason to think that research is a professional activity in the relevant sense. Equally, it is also clear that both the discourse that surrounds and constitutes the ethics of research and its implementation or administration differs from that of professional ethics. The question is, then, whether thinking of researchers as professionals can make a positive contribution to the way we think about both the ethics of research and the way in which we approach its governance. Perhaps the first thing to note is that current developments in research ethics – specifically, concern for the integrity of research and, therefore, the character of researchers – reflect a mode of thought that is clearly more consistent with the kind of thinking found in accounts of professional ethics than with the prior discourse on research ethics. At its inception, part of the purpose of research ethics was to move beyond the domain of professional ethics so as to prevent inter-professional conflict (Stark 2011). What was accomplished was the elimination of the specific individual or professional in the discourse of research ethics, replacing them with a uniform or generic image of the researcher as an everyman. Of course, this coheres with the way the researcher is conceptualized, particularly within the natural sciences. Scientists are understood to be objective, impartial, and neutral observers, whose standpoint is idealized as the view from nowhere. It follows that the “common sense” vision of good science entails the elimination of the researcher at all stages of its conduct.
While similar thoughts can be applied to professionals – what it is to be a professional is to suspend one’s private self and take on a particular social role that entails a certain level of neutrality or impartiality,
even as one acts in the interests of one’s client or patient – this is taken further in the context of the scientist and the researcher, at least as conceptualized within dominant discourses on research ethics. The ideas pursued in this chapter aim at refitting the notion of the researcher as a professional. As such, they are consistent with intellectual developments in our understanding of science since the publication of Kuhn’s (1996) Structure of Scientific Revolutions, particularly more recent work in the history and sociology of science, which recognizes the importance of the social role and the professional or intellectual character scientists and researchers inhabit, and its relationship to the production of (scientific) knowledge. In the context of such post-Kuhnian perspectives, the notion of integrity can be taken as both an ethical and epistemological initiative. As Daston and Galison suggest, the advent of scientific modes of thought entailed that an “[e]thos was explicitly wedded to epistemology in the quest for truth or objectivity or accuracy” (Daston and Galison 2007, 204). Such an ethos is reflected in the way science is done; it is a regulative, which is to say normative, set of values or ideal(s) that guides and informs scientific practices. At least in part, its operation entails shaping the character of scientists and, relative to their own disciplinary and intellectual fields, researchers more generally. Thus, Daston and Galison go on to say that “[f]ar from eliminating the self in the pursuit of scientific knowledge, each of the epistemic virtues depended on the cultivation of certain character traits at the expense of others . . . [resulting in an] ethical and epistemological code imagined as self” (Daston and Galison 2007, 204). As previously pointed out, an ethos – and the character of a researcher – is mutable: it varies across time, place, and discipline.
It is something that can be extended and generalized, as has been suggested in relation to bioethics (Emmerich 2018a). Thus, the emergence of integrity as a regulative ideal in the discourse on research ethics can be seen in terms of changes and developments both within particular disciplines and within research and its associated institutions – particularly universities – more generally. A disciplinary ethics can, then, be seen as wedded to its epistemology, something that is reinforced by the rubric of integrity. As a result, the concept of integrity is something that should be understood as requiring further iteration in specific disciplinary contexts. The notion of integrity is consistent with and furthers the notion that the discourse of research ethics should be – and, perhaps, is being – reformulated as a professional ethics. In this context the views of professionals, i.e., those who inhabit the ethos and character of a particular discipline, take on greater significance than mainstream research ethics would suggest, particularly in the context of the governance and review of research. This should not, however, be mistaken as an argument for simply prioritizing the substantive ethical perspectives of researchers or for insulating their views from criticism. Rather, this present account, and those which emphasize the importance of integrity to ethical research, suggests that researchers should actively engage with the ethical dimension of conducting research within their field(s) and, as part of this, engage with those who have a stake in the debate.
This should, of course, include research participants or those who can be taken as representing their views or interests. It should also include external commentators (notably bioethicists or research ethicists). It might also include a range of other stakeholders, including funders, gatekeepers, and those working within research administration and governance. It might also include the public in a more general sense, depending on the nature of the research at hand. Furthermore, when considering the ethics of their research, researchers should remain conscious of the (implicit) social contract that exists between their discipline and the society they inhabit. This is, of course, not a simple relationship, and it may well be subject to a significant degree of variation. Consider the notion in relation to, say, the biomedical sciences and English literature. Given the respective purposes of these fields, and the potential risks attached to each, researchers have differential responsibilities when it comes to demonstrating that they are acting in the best interests of society as a whole. Equally, consider the social sciences, particularly in their more critical instantiations. Here, what is proper to both research and the researcher may well entail a certain degree of antagonism when it comes to the interests of particular groups or society as a whole. The social responsibilities of social scientists may include telling people what they do not want to hear: to “speak truth to power,” to use a rather grandiloquent phrase. Arguably, it is here that the social contract takes on primary importance. Social scientists do not offer criticism for criticism’s sake but do so with regard to longer-term interests. From one point of view, this could be taken as being tantamount to a rejection of the idea that researchers engage in the pursuit of knowledge for its own sake or that knowledge is its own end and justification.
However, one might also take this as the basis or motivating factor on which a social contract between a society and the academics and researchers it hosts and supports is formed. If something is valued for its own sake, then it is valued by all and not by those engaged in its pursuit alone. Further justifications or motivations for investing in the pursuit of knowledge offer reasons which might shape the way research and researchers are sociologically and politically institutionalized. Consider, for example, the Manhattan Project and the specifics surrounding the institutionalization of nuclear research and physicists at Los Alamos in the USA during World War II, or endeavors such as the Human Genome Project. While knowledge for its own sake might provide a broad context in which to understand the relationship between researchers and society, it also seems clear that a more detailed view of both national and international politics, as well as the scope of current research, can result in more complex forms of arrangement and organization. In this context it would seem clear that a commitment to knowledge and the value of research on the part of both researchers and their institutions, particularly universities, and society, particularly in the form of governments, nevertheless leaves a great deal of scope within which the pursuit of such knowledge can be organized and structured. Arguably, the ethics of a profession is rooted in such broad sociopolitical arrangements, and such arrangements are, of course, themselves open to ethical
reflection or ethico-political debate. Nevertheless they offer a possible foundation for a professional ethics of research. It is, to be certain, a potentially shifting ground and one that offers far less clarity than, say, the four principles approach developed by Beauchamp and Childress (2009). (One should recall at this point that the four principles of biomedical ethics were initially developed in the context of the Belmont Report, something that was primarily concerned with biomedical research and the historical source for current research ethics regulation in the USA (Schrag 2010). That they later became highly influential in the context of medical practice and healthcare as well as biomedical research augurs well for any refusal to disconnect research ethics from professional ethics, as argued in this chapter.) Nevertheless, it is one that can accommodate the kind of thinking the four principles presents, which has been influential in discussions of both research ethics and the ethics of the healthcare professions. Thinking of research ethics as a professional ethics does not mean repudiating what has gone before. Rather it means casting researchers in a different light as well as thinking about the ethics of research in broader terms.
Conclusion

This chapter has suggested that it may be fruitful to rethink research ethics as a form of professional ethics and to think of researchers as professionals. While there is some prima facie reason to reject such ideas – not least the absence of any formal professional institution(s) and the fact that advocating for their development from the current panoply of associations is unlikely to meet with success – there is value in considering the notion in more detail. This is particularly true when one considers some of the current developments in research ethics, notably the turn toward integrity. In this context the notion of a professional ethics of research and researchers has much to offer. In particular, it suggests the ethics of research can be connected to the professional character of the scientist or to the discipline of researchers. In light of the analyses presented by those who study the practice of science, this perspective creates an opportunity to forge a closer connection between ethics and epistemology and, therefore, between ethics and methodology. It also promotes consideration of the broader sociopolitical contexts in which research actually takes place and in which professional researchers are situated. Indeed, it presents an opportunity both to embed ethical discourses within particular disciplines and to promote broader discussion across disciplinary boundaries. If research ethics is a professional ethics, then research professionals ought to take clearer ownership of existing and emerging ethical issues that affect or arise within their particular fields, both in substantive terms and in terms of promoting broader engagement with others who can be considered as having a stake in the debate. This includes those who participate in research as well as those, such as funders, managers, and administrators, whose actions impact the way research is conducted in a broad sense.
If we consider research ethics to be a professional ethics, then it is no longer just about specific research projects, but the enterprise as a whole.
References

Beauchamp TL, Childress JF (2009) Principles of biomedical ethics, 6th edn. Oxford University Press, Oxford
Collins HM (2010) Elective modernism. Unpublished manuscript. http://www.cardiff.ac.uk/socsi/contactsandpeople/harrycollins/expertise-project/elective%20modernism%204.doc
Collins HM, Weinel M, Evans R (2010) The politics and policy of the third wave: new technologies and society. Crit Policy Stud 4(2):185–201. https://doi.org/10.1080/19460171.2010.490642
Daston L, Galison P (2007) Objectivity. Zone Books, New York
Delanty G (1998) The idea of the university in the global era: from knowledge as an end to the end of knowledge? Soc Epistemol 12(1):3–25
Emmerich N (2016a) Ethos, eidos, habitus: a social theoretical contribution to morality and ethics. In: Brand C (ed) Dual process theories in moral psychology. Springer, Dordrecht, pp 275–300. http://link.springer.com/chapter/10.1007/978-3-658-12053-5_13
Emmerich N (2016b) Reframing research ethics: towards a professional ethics for the social sciences. Sociol Res Online 21(4):7
Emmerich N (2018a) Elective modernism and the politics of (bio)ethical expertise. In: Riesch H, Emmerich N, Wainwright S (eds) Philosophies and sociologies of bioethics. Springer, Dordrecht, pp 23–40. https://doi.org/10.1007/978-3-319-92738-1_2
Emmerich N (2018b) From phrónēsis to habitus: synderesis and the practice(s) of ethics and social research. In: Virtue ethics in the conduct and governance of social science research. Advances in research ethics and integrity, vol 3. Emerald Group Publishing, Bingley, pp 197–217. https://doi.org/10.1108/S2398-601820180000003012
Evetts J (2003) The sociological analysis of professionalism: occupational change in the modern world. Int Sociol 18(2):395–415. https://doi.org/10.1177/0268580903018002005
Evetts J (2013) Professionalism: value and ideology. Curr Sociol 61(5–6):778–796
Freidson E (2001) Professionalism, the third logic: on the practice of knowledge. University of Chicago Press, Chicago
Fuller S (2003) The university: a social technology for producing universal knowledge. Technol Soc 25(2):217–234. https://doi.org/10.1016/S0160-791X(03)00023-X
Hedgecoe A (2016) Reputational risk, academic freedom and research ethics review. Sociology 50(3):486–501. https://doi.org/10.1177/0038038515590756
Kuhn TS (1996) The structure of scientific revolutions, 3rd edn. University of Chicago Press, Chicago
MacIntyre A (1981) After virtue: a study in moral theory. Duckworth, London
Tapper A, Millett S (2015) Revisiting the concept of a profession. In: Conscience, leadership and the problem of ‘dirty hands’. Emerald Group Publishing, pp 1–18
Millett S (2016) How should the concept of a profession be understood, and is the notion of a practice helpful in understanding it? In: Contemporary issues in applied and professional ethics, vol 15. Emerald Group Publishing, Bingley, pp 29–40. https://doi.org/10.1108/S1529-209620160000015002
Minogue KR (1973) The concept of a university. Transaction Publishers, New Brunswick
Reuben JA (1996) The making of the modern university: intellectual transformation and the marginalization of morality. University of Chicago Press, Chicago
Riesch H, Emmerich N, Wainwright S (2018) Introduction: crossing the divides. In: Riesch H, Emmerich N, Wainwright S (eds) Philosophies and sociologies of bioethics. Springer, Dordrecht, pp 1–22. https://doi.org/10.1007/978-3-319-92738-1_1
Schrag ZM (2010) Ethical imperialism: institutional review boards and the social sciences, 1965–2009. The Johns Hopkins University Press, Baltimore
Schrag ZM (2014) What is this thing called research? In: Glenn Cohen I, Fernandez Lynch H (eds) Human subjects research regulation: perspectives on the future. MIT Press, Cambridge, MA, p 285
Shapin S (2008) The scientific life: a moral history of a late modern vocation. University of Chicago Press, Chicago
Stark L (2011) Behind closed doors: IRBs and the making of ethical research. University of Chicago Press, Chicago
Strong PM (1983) The rivals: an essay on the sociological trades. In: Dingwall R, Lewis P (eds) The sociology of the professions. Macmillan, London
Wellmon C (2015) Organizing enlightenment: information overload and the invention of the modern research university. Johns Hopkins University Press, Baltimore
Wueste DE (1994) Introduction. In: Professional ethics and social responsibility. Rowman and Littlefield, Lanham
Wueste DE (2013) Trust me, I’m a professional: exploring the conditions and implications of trust for the professions. In: Ethics, values and civil society. Emerald Group Publishing, pp 1–12
Sociology and Ethics: Doing the Right Thing
44
Lisa-Jo K. van den Scott
Contents

Introduction . . . 770
Background . . . 770
Key Issues . . . 772
Power Dynamics and Bias in Interviews, Fieldwork, and Surveys . . . 772
Reflexivity . . . 773
Feminist and Indigenous Methods . . . 773
Community-Based Research . . . 774
Ethics Extends Beyond Research . . . 775
Current Debates . . . 776
Ethics Review . . . 776
Future Issues . . . 778
Covert Research . . . 778
Online Work and Confidentiality . . . 779
Proposed Solutions . . . 779
Conclusion . . . 780
References . . . 780
Abstract
The state of ethics in sociology today hinges, in part, on models that are inappropriate for many forms of social science research and, in part, on a move away from the hegemony of those models, with varying success across countries and regions. Australia and parts of Europe, for example, have made considerable strides in this regard. Considerations of ethics in sociology have expanded dramatically in a turn toward recognizing the agency of those whom we study. Conversations today center on mitigating power dynamics between researchers and participants, reflexivity on the part of the researcher, and an engagement with feminist and Indigenous methods. This has led to a proliferation of work on community-based research and the importance of trust-based relationships in social science work. Concerns around anonymity and confidentiality remain but are often at odds with the movement toward community-based and public policy research. Debates emphasize the challenge of doing ethical research under the current regime of regulatory boards, which operate in a legalistic and often exclusively positivist framework.

L.-J. K. van den Scott (*)
Department of Sociology, Memorial University of Newfoundland, St. John's, NL, Canada
e-mail: [email protected]; [email protected]

© Springer Nature Switzerland AG 2020
R. Iphofen (ed.), Handbook of Research Ethics and Scientific Integrity, https://doi.org/10.1007/978-3-030-16759-2_68

Keywords
Sociology · Reflexivity · Power dynamics · Feminist methods · Indigenous methods · Ethics review
Introduction

Sociology has long been concerned with what Anthony Giddens (1990) calls the "double hermeneutic," that is, the effect of social science research on society (and vice versa), and with how we, as sociologists, should conduct ethical research. In the past this has meant representation of marginalized groups and attention to social stratification as critical sociology replaced functionalist paradigms that had privileged the status quo. Today, considerations of ethics in sociology have expanded dramatically in a turn toward recognizing the agency of those whom we study. Conversations today center on mitigating power dynamics between researchers and participants, reflexivity on the part of the researcher, and an engagement with feminist and Indigenous methods. This has led to a proliferation of work on community-based research and the importance of trust-based relationships in social science work. Concerns around anonymity and confidentiality remain but are often at odds with the movement toward community-based research.

In many ways, community-based research bridges feminist and Indigenous methods with the concerns of ethics review boards. In some places, such as areas of North America, those engaged with community-based research continue to struggle within the biomedical model of ethics review that may still be imposed upon sociologists and other social science researchers. Current debates in North America tend toward the frustration sociologists feel around the ways in which research ethics review boards (such as Institutional Review Boards, or IRBs) increase their control over, and, some would say, mutilation of, strong, qualitative, sociological research. The European Commission, however, has infrastructural ideologies that are more supportive of this kind of community-based, feminist, and Indigenous methodology.
Background

The mid-twentieth century in sociology was predominantly characterized by structural functionalist approaches. This theoretical approach considered society from a structural standpoint and elucidated how the various structures in society work together so that the whole can function. Critics accused structural functionalists of supporting the status quo and of not taking a sufficiently critical stance on issues such as social stratification and inequality in society.

By the 1970s and 1980s, critical perspectives in sociology enjoyed a resurgence. The work of Karl Marx and Marxist theorists was embraced as offering the kind of critique and analysis which could move society forward. Other forms of critical sociology came to the forefront, such as symbolic interactionism and feminist standpoint theory. This meant a shift toward sociology as a discipline particularly well-suited to studying social problems with a critical eye. Research on and around marginalized groups, inequality, systemic racism, and other aspects of social stratification became of great importance to sociology. Howard Becker (1967) heralded this shift by encouraging sociologists to attend to hierarchies of credibility, that is, not simply to turn to specialists and experts to tell us about the experiences of marginalized groups, but to turn to people as experts in their own lives. Becker recognized that institutional hierarchies privilege the everyday experiences and interpretations of higher-status groups over those of lower-status groups. The representation of the oppressed and downtrodden rose to the fore. Ethical research meant research that gave a voice to the voiceless and addressed power imbalances in society, as well as the everyday experiences of various kinds of people.

While this was happening in sociology, there was growing attention to ethics in biomedical research. After the horrors of Nazi research performed on Jewish prisoners, other shocking stories of deeply unethical medical research began to surface. The Tuskegee syphilis study, which ran from 1932 to 1972, involved withholding effective treatment for syphilis from black men in the study suffering from this disease.
By offering free burial, the researchers assured themselves of the opportunity to perform autopsies on the deceased subjects. In the 1960s and 1970s, psychology, too, offered up some harrowing examples of unethical research: the Milgram obedience experiment was conducted in the 1960s, and Zimbardo's prison study occurred in the 1970s. Such work is discussed elsewhere in this handbook. A regime of ethics boards was established to police medical research.

In sociology, ethical concerns evolved to encompass reflexivity, and the field encouraged a shift to community-based research, research which involves the participants in the research process itself. Despite this robust and reflective approach within sociology, the ethics boards established themselves in a position to vet all research in the medical and social sciences. To compound matters, ethics boards commandeered the social sciences at a time when North American culture was becoming more and more litigious, and so framed many of their protocols (procedural, formulaic sets of steps for conducting research, best suited to positivistic science) in legalistic terms.

Current challenges for sociology include ethics boards imposing a vetting process designed for medical research, which is quite different and often involves treatment and potential life-and-death consequences. Indeed, some have argued that this biomedical approach has colonized the social sciences, and sociologists continue to struggle to perform work that is actually ethical under a model that does not apply to sociological research (van den Hoonaard 2011). As an
example, it is not ethical to present an illiterate person with a lengthy consent form detailing harms in medical and legalistic terms. Doing so can make participants feel bad about themselves, rather than honoring the time and effort they are taking to aid the researcher by sharing important, expert information about the experiences of people in their social group. Yet the biomedical model, in an age of litigiousness, demands lengthy consent forms. It is worth noting that ethics boards today comprise a multibillion-dollar industry in North America (van den Hoonaard 2011).
Key Issues

Power Dynamics and Bias in Interviews, Fieldwork, and Surveys

When entering the field, sociologists conducting qualitative or quantitative work must take stock of the social location from which, or within which, they conduct observations. That is, researchers must understand what biases they bring into the field and their own areas of privilege. Whatever the background of the sociologist, there will be blind spots as well as areas of considerable insight (Rosaldo 1993). To compensate for their "observational location," sociologists attend to their own identity in the field, to the importance of learning from those in the field as experts in their own lives, and to minimizing harm (Grills 2018).

Beyond social location, as Scott Grills (2018) discusses, there are always issues of domination to take into account. No matter how much a professor wants to minimize the power imbalance in the classroom, Grills explains, the professor is still in a position of influence and continues to engage in domination activities, such as setting expectations for the students in the course. The same holds true in the field. Sociologists cannot erase their role as researchers entirely, no matter how much they might wish to. Recognizing this contributes to important ethical considerations. In addition, participants also hold power in the relationship. They determine what kind of information they are willing to share, into which spaces they will welcome a researcher, and how they will present themselves. All of these choices shape a sociologist's ability to produce good work.

Inherent to social location and power dynamics are biases. Acknowledging one's own biases involves understanding one's own position in society and its accompanying socialization.
While the challenge of recognizing bias presents itself quite obviously in qualitative work, it is a fundamental ethical issue in quantitative work as well, perhaps even more so: participants can correct an ethnographer or interviewer in the field, but it is much more difficult to do so during a survey gathering large-scale quantitative data. Researchers use past literature, personal experience, and their best intellectual guesses and imaginations to assemble the questions which, in turn, generate quantitative survey data. Once this data is generated, any biases in the survey instrument are inadvertently concealed from view. These ethical concerns around bias have given rise to a literature on reflexivity in sociological research.
Reflexivity

Reflexivity has emerged as a key issue when considering ethics in sociological research. Ethical practice requires that researchers reflect on their role in the field. Researchers must question not only their biases but also the impact of their presence in the field, or of the questions they might ask on a survey. Research on reflexivity, however, often centers on qualitative work. Researchers attend to the trust-based relationships they form with participants, relationships that are both in good faith and reciprocal (Fine 1996). Transitioning in the field from friend to researcher, for example, when one takes on a research project as an insider in a community or group, is one instance where careful thought about ethics is important. When writing about my own experience of moving to an Inuit community in the far North of Canada and transitioning into the role of researcher (van den Scott 2018), I stressed the importance of consulting with the community and approaching with an attitude of learning, as do others (e.g., Spradley 1979; van den Hoonaard 2012). One cannot overstate the importance of this, as qualitative researchers consider participants to be experts in their own lives (Becker 1967; van den Hoonaard 2012).

It can be challenging to come to terms with how to embark on ethical research when, from the outset, the role of researcher is, in a sense, that of "fink" (Fine 1993) toward one's participants. That is, while participants are often generous with researchers and eager to participate (Jacobs 1980), researchers can suffer the "agony of betrayal" (Lofland 1971). In essence, the job of a researcher is to tattle, i.e., to tell tales on their participants. When the work is done ethically, however, trust-based relationships are not breached, and data need not be gathered in ways the participants would experience as betrayal.
By practicing reflexivity, researchers can balance the fact of their role as researchers with the reality of the trust-based relationships they have formed. Since the 1970s, sociologists have taken the importance of reflexivity to ethical research more and more seriously (Puddephatt et al. 2009). At its core, reflexivity involves being "self-aware and honest" (Puddephatt et al. 2009:6). When practicing reflexivity, sociologists ask themselves what their social location (sometimes called their "positionality") is and how it might shape their biases, what the impact of their actions might be, and how to honor their trust-based relationships (Thurairajah 2018). Reflexivity can also include thinking about the impact of doing research on yourself and your loved ones (Christensen 2018). Sociological research can be a long and demanding process full of unexpected twists and turns.
Feminist and Indigenous Methods

Reflexivity is a fundamental aspect of feminist and Indigenous methods. Trust-based relationships form the bedrock of these methods, particularly Indigenous methods (Castleden et al. 2012). Both of these approaches stress, much as the Chicago School
did in the nascent stages of symbolic interactionism, everyday experiences, and ground their research in the organization of everyday life (Tummons 2017).

Dorothy Smith (2010) is at the forefront of feminist methodologies. She encourages sociologists to embrace institutional ethnography, an examination not only of everyday life and experiences but also of the institutions which structure how we move through our everyday worlds. Feminist methodologies argue that overarching generalizations do not offer a complete analysis of our social world. Instead, they focus on the local and how that local is linked to, affected by, and affects social and administrative institutions (Tummons 2017). They term this the "translocal." For feminist approaches, this is one avenue of ethical research, as it turns attention to those who are often overlooked and who make up the bedrock of how things are accomplished in everyday life.

Likewise, Indigenous methodologies focus on everyday life and encourage researchers to study social worlds in situ, rather than develop generalizations from afar. When working with Indigenous groups, trust-based relationships are the foundation of ethical research. Social scientists have historically had damaging relationships with Indigenous communities, such as the attempts of salvage anthropology to remove artifacts from local Indigenous populations for study, for museums, and for personal mementos. To rectify this and to establish protections for themselves, many Indigenous groups have established their own research approval processes, such as the Nunavut Research Institute in Nunavut, Canada, which vets all forms of research on the Inuit people of that region, from geology, to prospecting, to social science research (van den Scott 2012). In addition, Indigenous populations are pushing back against the infantilizing approach of academic ethics boards and against their past treatment at the hands of medical researchers.
They offer sociologists and other social scientists a research model which treats Indigenous populations as partners, particularly around knowledge creation. Indigenous peoples are more and more successful in advocating for Indigenous methodologies and in educating social scientists about the nature of trust-based relationships and the potential benefits of including Indigenous scholars and community members in the research process. Documents such as the San Code of Research Ethics (South African San Institute 2017), created in South Africa, and the guide Negotiating Research Relationships with Inuit Communities: A Guide for Researchers (ITK and NRI 2006), developed in Nunavut, Canada, offer useful examples of how Indigenous groups and researchers can work together in mutually beneficial ways. Underpinning all of this is a basic ethical approach which recognizes our shared humanity.
Community-Based Research

In conjunction with the rise of feminist and Indigenous methodologies, community-based research (CBR) has become more popular not only in sociology but also in other fields such as nursing and anthropology. CBR is "a collaborative research approach equitably involving all partners in the understanding of a given phenomenon and the translation of the knowledge gained into policies and interventions
that improve the lives of community members" (Israel et al. 1998). CBR combines the importance of the everyday lives of participants with the social activist bent common in sociology. It is no longer considered ethical to enter a community and decree with a heavy hand what sort of solutions are required for whatever social problems the community may face. Instead, an attitude of learning is fundamental (van den Scott 2013). Researcher and participants work hand in hand toward determining and implementing possible solutions.

Community-based research comes with its own set of ethical quandaries. Researchers, for example, may find themselves working with leaders in a community only to discover a power imbalance within the community which makes their work problematic. There are also questions as to who has the authority to agree, and on whose behalf, that a particular project be carried out. Often there is a disconnect between folks at the grassroots level and those involved in the governance of a group or community. Sociologists tend to prefer not to align themselves with those in power when attempting meaningful work aimed at social discovery and change, once again raising Becker's question of "whose side we are on." On the whole, despite these new ethical challenges, with a proper dose of reflexivity, sociologists engaged in CBR have found it a successful and rewarding experience for both themselves and their participants. It can give voice to those who previously had none and open up a world of innovative, novel approaches to social science research.
Ethics Extends Beyond Research

Whether a sociologist is engaged in CBR or not, trust-based relationships are mutual and sustaining. This means that our ethical commitment to our participants extends beyond face-to-face interactions (Bergum 1991). While sociologists are in many ways finks, we also strive to treat our participants ethically from start to finish. Ethics must be considered, of course, in the nascent stages of a project and throughout its execution. Afterward, during the writing stage, the dissemination stage, and beyond, sociologists continue to attend to ethical considerations, which means, in part, honoring those trust-based relationships. To accomplish this, many sociologists and social scientists make their findings available to their participants, even sending copies of publications directly to participants who express interest. Some sociologists go so far as to meet again with participants after sharing their findings so that the participants have an opportunity to debrief. This can be an emotionally challenging process for both the researcher and the participant when findings are not what the participant expected or wanted. It is the researcher's job, however, to balance academic integrity with these trust-based relationships. Annette Lareau (2014), for example, balanced the two by allowing her participants to confront her face-to-face about her negative findings and to write an afterword to be included in her book, should they choose to do so.
Current Debates

Ethics Review

Current debates around ethics in sociology echo the trend toward feminist and Indigenous methodologies, but eclipsing most conversations about actual ethics are discussions about research ethics review boards (Katz 2007; Loftus 2008): how to navigate them; the ways in which they can cut social science off at the knees and mutilate potentially good work, even causing researchers to abandon many marginal groups that need study; and the impact of positivist, biomedical thinking on how we approach our work. Michael Lynch (2000) expresses deep concern at the lack of conversation around actual ethics due to this overshadowing by the boards and stresses that some institutionalized ethics review regimes threaten the privileged knowledge gained through reflexivity.

This is a multifaceted problem. There is a danger of raising a generation of scholars who lack the ability to question ethics review boards and their history (van den Scott 2016). In some regions many sociologists are passive with regard to the ethics regime and accept the solidification and permanence of the boards without question (Cohen 2007; Dougherty and Kramer 2005). Sociologists concerned with ethics view this development with great concern because the boards affect how we conceptualize our research at its most fundamental levels. Focusing so single-mindedly on the ethics review process, which has little to do with actual social science ethics (Iphofen 2011), turns sociologists' thoughts inward. The question shifts from what comprises ethical research to how to get one's research project through the ethics boards. Thoughts remain trained on "their" research, process, and career (van den Scott 2016). The ethics review boards establish a new hierarchy of credibility with themselves at the top, then the researcher, as a privileged expert, and then the participants.
Research participants find themselves at the bottom of the hierarchy of credibility even when the research topic is their own lives. Researchers must make assumptions about what comprises ethical conduct in the opinion of the participant and then fit that into a biomedical model. This most often reduces ethical conduct to a cost-benefit analysis, which is completely inappropriate for the social sciences (Becker-Blease and Freyd 2006). Researchers are forced into a superior position in relation to their participants, a strictly unethical practice. In addition, this undermines the practice of reflexivity, and one cannot overstate the problems that result from undermining reflexivity in the field.

Social scientists are treated as experts above their participants but are no longer trusted as professionals. Ethics boards have successfully framed sociologists as untrustworthy (Tinker and Coomber 2004; Wiles et al. 2006), which essentially entrenches the boards further into the social sciences, legitimating and feeding a multibillion-dollar industry. Evidence of this imposition of biomedical thinking on the social sciences is found in the terminology required by ethics boards. Sociologists might be inclined to adopt terms such as "protocols" and "minimal risk" to appear to conform to the desired process.
"Subject" has emerged as one of the more problematic terms. Using the term "subject," rather than the more appropriate social science term "participant," emphasizes a power dynamic which privileges the researcher. This completely undermines whole areas of sociological research, such as participatory research and CBR (Boser 2007). Terminology also influences how researchers perceive their own role in the research and their position vis-à-vis the participants. This further turns researchers' thinking inward, onto their own importance, rather than outward. Again, the researcher becomes the expert when the participants are the real experts in their own lives. Researchers do bring specialized, systematic analytical skills to bear, which shed new light on the lives of participants and on society more broadly, but the researcher must be in the position of learner first, particularly when gathering data and interacting with participants.

On the other hand, scholars also argue that the terms "subject" and "participant" describe different relations between researcher and researched (Iphofen 2009/2011). Some research requires the participation of those being researched, such as interviews. Other research may not require participation, and those being researched may indeed be subjects. Covert research is one example where this might be true. If I were simply to observe people in public spaces, they would not actually be "participating" in the research. It can be useful, then, to use the term "subject" in those instances; indeed, it may be misleading to use the term "participant." The real unit of analysis in ethnographic covert research is the scene, not the individuals. This is an ongoing debate, but beyond the semantics, in either case, all agree that the ethical treatment of participants and subjects is the most important element.
Consent practices conventional in the social sciences have also often been compromised by ethics review boards. Having a participant sign a consent form is not equivalent to a participant giving consent (Bell 2014). Practices around consent cannot be standardized across heterogeneous field sites, populations, and cultures (Bhattacharya 2007). The meanings "of risk and harm are symbolically constructed and culturally defined" (Bruner 2004:10). Sometimes the historical backgrounds of marginalized groups mean that it would be unethical to bring consent forms into the field at all, such as in the Guajira region of Colombia and Venezuela (Hannah Bradshaw, personal communication 2012). Other groups may feel intimidated and may sign only to manage how they appear to the "expert" who has come into their midst, or because of the power imbalance the consent form has created. It is obvious to sociologists that we should not be intimidating our participants, nor introducing an artifact that creates or exacerbates a power imbalance. In addition, physical consent forms requiring a signature may be disrespectful toward participants from a predominantly oral culture. A signature also leaves a trail which could make a participant uncomfortable. There are many ways to obtain consent beyond over-formalized informed consent forms.

Consent forms also violate trust-based relationships. Trust-based relationships, as discussed above, are vitally important not only to the research process but also to the experience of the participant. Despite the fact that many guidelines created for social science ethics boards, such as the Canadian Tri-Council Policy Statement: Ethical
Conduct for Research Involving Humans (TCPS 2), emphasize the importance of trust-based relationships, many ethics review boards do not consider how to maintain those relationships. A sociologist works hard to develop mutual trust in their research relationships. Suddenly the researcher is presenting the participant with a document which, essentially, protects the researcher and their university or research institution from being sued. This demonstrates a fundamental lack of trust and breaches the trust-based relationship. On the flip side, when sociologists study professionals, members of elite groups may deem it inappropriate for a member of a less elite group to make such a demand of them (Conn 2008). Doctors, for example, can then be insulted by consent forms. "Consent forms equal neither permission nor consent" (van den Scott 2016: 239) and may cause the participant emotional harm (even in ethics review board terms), whether by stressing an imbalance of power relations or by causing embarrassment or offense. I have argued for a move away from consent forms because these varying pressures and differing situations may play into the signing of a form: one may sign simply to alleviate embarrassment, even without wanting to consent, and one may refuse to sign out of insult or fear despite giving consent and desiring to be part of the research.

Another problem created by research ethics review boards in sociology involves what and whom we study. Despite findings which indicate that many groups have positive and beneficial experiences when they participate in sociological research (Childress 2006), ethics boards make it quite difficult to study certain groups, such as children. Ethics boards sometimes lack an understanding of many of the complexities inherent to social science research, particularly field research (Bosk and De Vries 2004).
Faculty and students alike (Adler and Adler 1993; van den Scott 2016) often avoid studying such protected populations, yet these can be the groups that most need attention and help. Those who do study these populations may find their research itself marginalized (Irvine 2012). Innovative methods also suffer (Bledsoe et al. 2007). In addition, when ethics boards assume more knowledge of social science research practices than they actually have, they begin dispensing inappropriate methodological advice, another alarming practice.
Future Issues

Covert Research

Concerns for the future include the dramatic decline of covert research. Covert medical research has proved deeply problematic in the past. As a result, ethics boards, even those overseeing sociological research, condemn most covert research, to the extent that it is all but lost (Spicker 2011). Covert research in the social sciences, however, can teach us a great deal about society. Sociologists traditionally go out into the world, write down what they see, ask a few questions, and systematically analyze what they have found. Typically this does not involve disrupting the regular flow of society, nor does it necessarily report on individual people. In fact, this approach represents one of the few ways to maintain anonymity in sociological
research (another being anonymous surveys). Indeed, it may help to avoid many of the ethical problems the ethics boards create and then are concerned with fixing through elaborate means (such as the complex storage requirements they often impose for the already-problematic consent forms). Patricia Adler (1993) was able to make significant contributions to the understanding of drug culture by hanging out in that world and analyzing the data she collected (field notes and informal interviews). This covert approach was the best way to keep both herself and everyone in that scene anonymous and safe. In addition, she was able to treat her unwitting participants with respect and humanity and in authentically ethical ways.

Many times, covert research is the only way to see and understand the social world while it is at ease, so to speak. If a sociologist attends private fraternity parties and covertly observes dating rituals, recording no specific locations, names, or identifying features of individuals, he or she may learn a great deal about gender equality (or the lack thereof), how date rape occurs, the social processes involved in flirting, or the drinking habits of undergraduate students. If, however, the sociologist must first obtain permission from the frat house, or must announce themselves to the entire party and request permission to stay as a researcher, many of these important elements of social life will be beyond our reach for study. Thus, restricting covert research hinders intellectual and societal progress. Challenges around covert research continue.
Online Work and Confidentiality

As the online world has become a more pronounced component of social life, sociologists face a new set of ethical dilemmas concerning online research. Social media, in particular, is a fount of data, but how can it be accessed and mined ethically (Woodfield and Iphofen 2018)? Issues around informed consent include whether the online space is public and whether to treat these "spaces" as one would physical, geographic spaces, mapping dated beliefs about public and private spaces onto them (Lynch and Mah 2018). Confidentiality is also harder to protect: quotes sociologists take from an interview come from a confidential source, yet once online quotes become part of a research publication, someone could perform a simple online search to find the original quote, even if the researcher removes personal identifiers (Lynch and Mah 2018). In addition, all data acquired online is, by its very nature, mediated (Iphofen 2017). As media worlds and technology continue to expand, sociologists can only expect more ethical quandaries to arise.
Proposed Solutions

There are several proposed solutions to the ongoing struggles in sociology regarding ethics and, in particular, ethics boards. Some advocate dissolving ethics review for the social sciences, while others suggest departmental-level review processes which
780
L.-J. K. van den Scott
attend to the ethical needs of the field in a manner which avoids standardization (van den Hoonaard and Hamilton 2016). In 2012, an ethics summit was convened involving social scientists who study ethics and ethics boards. The resulting volume, The Ethics Rupture (van den Hoonaard and Hamilton 2016), contains a declaration, The New Brunswick Declaration, arrived at through consensus-based consultation. This brief declaration lays out concerns about, and alternatives to, current ethics regimes, suggesting among other things that they "encourage regulators and administrators to nurture a regulatory culture that grants researchers the same level of respect that researchers should offer research participants" (van den Hoonaard and Hamilton 2016:432). Ethics, and conversations around ethics, even among researchers and regulators, should always begin with trust-based relationships.
Conclusion

The state of ethics in sociology today, particularly in North America, hinges in part on positivist models which are inappropriate for some kinds of social science research. Despite this stumbling block, conversations across the global field of sociology continue to develop around an awareness of power and bias, reflexivity, feminist and Indigenous methodologies, and community-based research as ethical concerns. Debates emphasize the challenge of doing ethical research under the current regime of regulatory boards, which operate in a legalistic and often exclusively positivist framework. Consequences of these ethics boards include inflicting standardized informed consent forms on participants as if they were subjects, undermining attempts by sociologists to attend to power imbalances between researchers and participants, and causing researchers to avoid important areas of study altogether. The future of covert research is questionable, although some forms of unobtrusive research are still permitted. The most interesting ethical question ahead for sociologists, however, is how to conduct good, ethical research into social media and the online world more broadly.
References

Adler PA (1993) Wheeling and dealing: an ethnography of an upper-level drug dealing and smuggling community. Columbia University Press, New York
Adler PA, Adler P (1993) Ethical issues in self-censorship: ethnographic research on sensitive topics. In: Renzetti CM, Lee RM (eds) Researching sensitive topics. Sage, Newbury Park, pp 249–266
Becker HS (1967) Whose side are we on? Soc Probl 14(3):239–247
Becker-Blease K, Freyd JJ (2006) Research participants telling the truth about their lives: the ethics of asking and not asking about abuse. Am Psychol 61(3):218–226
Bell K (2014) Resisting commensurability: against informed consent as an anthropological virtue. Am Anthropol 116(3):1–12
Bergum V (1991) Being a phenomenological researcher. In: Morse J (ed) Qualitative nursing research: a contemporary dialogue. Sage, London, pp 55–71
Bhattacharya K (2007) Consenting to the consent form: what are the fixed and fluid understandings between the researcher and the researched? Qual Inq 13(8):1095–1115
Bledsoe CH et al (2007) Regulating creativity: research and survival in the IRB iron cage. Northwestern Univ Law Rev 101(2):593–641
Boser S (2007) Power, ethics, and the IRB: dissonance over human participant review of participatory research. Qual Inq 13(8):1060–1074
Bosk CL, De Vries RG (2004) Bureaucracies of mass deception: institutional review boards and the ethics of ethnographic research. Ann Am Acad Polit Soc Sci 595(1):249–263
Bruner E (2004) Ethnographic practice and human subjects review. Anthropol Newsl 45(1):10
Castleden H, Sloan Morgan V, Lamb C (2012) 'I spent the first year drinking tea': exploring Canadian university researchers' perspectives on community-based participatory research involving indigenous peoples. Can Geogr 56(2):160–179
Childress H (2006) The anthropologist and the crayons: changing our focus from avoiding harm to doing good. J Empir Res Hum Res Ethics 1(2):79–88
Christensen T (2018) Collateral damage. In: Kleinknecht S, van den Scott L-JK, Sanders CB (eds) The craft of qualitative research. Canadian Scholars Press, Toronto, pp 25–31
Cohen P (2007) As ethics panels expand grip, no field is off limits. New York Times, 28 Feb
Conn LG (2008) Ethics policy as audit in Canadian clinical settings: exiling the ethnographic method. Qual Res 8(4):499–514
Dougherty DS, Kramer MW (2005) Organizational power and the institutional review board. J Appl Commun Res 33(3):277–284
Fine GA (1993) Ten lies of ethnography: moral dilemmas of field research. J Contemp Ethnogr 22(3):267–294
Fine GA (1996) Kitchens: the culture of restaurant work. University of California Press, Berkeley/London
Giddens A (1990) The consequences of modernity. Stanford University Press, Stanford
Grills S (2018) Reconsidering relations in the field: attending to dominance processes in the ethnographic encounter. In: Kleinknecht S, van den Scott L-JK, Sanders CB (eds) The craft of qualitative research. Canadian Scholars Press, Toronto, pp 152–159
Iphofen R (2009/2011) Ethical decision making in social research: a practical guide. Palgrave Macmillan, London
Iphofen R (2011) Ethical decision making in qualitative research. Qual Res 11(4):443–446
Iphofen R (2017) Conclusion: guiding the ethics of online social media research – adaption or renovation? Adv Res Ethics Integr 2:235–240
Irvine JM (2012) Can't ask, can't tell: how institutional review boards keep sex in the closet. Contexts 11(2):28–33
Israel BA, Schulz AJ, Parker EA, Becker AB (1998) Review of community-based research: assessing partnership approaches to improve public health. Annu Rev Public Health 19:173–202
ITK and NRI (2006) Nickels S, Shirley J, Laidler G (eds) Negotiating research relationships with Inuit communities: a guide for researchers. Inuit Tapiriit Kanatami and Nunavut Research Institute, Ottawa and Iqaluit. https://www.nri.nu.ca/sites/default/files/public/files/06-068_itk_nrr_booklet.pdf
Jacobs S (1980) Where have we come? Soc Probl 27:371–378
Katz J (2007) Toward a natural history of ethical censorship. Law Soc Rev 41(4):797–810
Lareau A (2014) Unequal childhoods: class, race, and family life. University of California Press, Berkeley
Lofland JF (1971) Analyzing social settings. Wadsworth, New York
Loftus EF (2008) Perils of provocative scholarship. APS Obs 21(5). https://www.psychologicalscience.org/observer/perils-of-provocative-scholarship
Lynch M (2000) Against reflexivity as an academic virtue and source of privileged knowledge. Theory Cult Soc 17(3):26–54
Lynch M, Mah C (2018) Collecting social media data in qualitative research. In: Kleinknecht S, van den Scott L-JK, Sanders CB (eds) The craft of qualitative research. Canadian Scholars Press, Toronto, pp 245–253
Puddephatt AJ, Shaffir W, Kleinknecht SW (2009) Ethnographies revisited: constructing theory in the field. Routledge, London
Rosaldo R (1993) Culture and truth: the remaking of social analysis. Beacon, Boston
Smith DE (2010) Institutional ethnography as practice. Rowman & Littlefield, Lanham
South African San Institute (2017) San code of research ethics. Trust: equitable research partnerships. http://trust-project.eu/wp-content/uploads/2017/03/San-Code-of-RESEARCH-Ethics-Booklet-final.pdf
Spicker P (2011) Ethical covert research. Sociology 45(1):118–133
Spradley JP (1979) The ethnographic interview. Holt, Rinehart and Winston, New York
Thurairajah K (2018) 'The person behind the research': reflexivity and the qualitative process. In: Kleinknecht S, van den Scott L-JK, Sanders CB (eds) The craft of qualitative research. Canadian Scholars Press, Toronto, pp 10–16
Tinker A, Coomber V (2004) University research ethics committees: their role, remit and conduct. King's College, London
Tummons J (2017) Institutional ethnography, theory, methodology, and research: some concerns and some comments. In: Reid J, Russell L (eds) Perspectives on and from institutional ethnography. Studies in qualitative methodology, vol 15. Emerald Publishing Limited, Bingley, pp 147–162
van den Hoonaard WC (2011) The seduction of ethics: transforming the social sciences. University of Toronto Press, Toronto
van den Hoonaard DK (2012) Qualitative research in action. Oxford University Press, Don Mills
van den Hoonaard WC, Hamilton A (2016) The ethics rupture: exploring alternatives to formal research ethics review. University of Toronto Press, Toronto
van den Scott L-JK (2012) Science, politics, and identity in northern research ethics licensing. J Empir Res Hum Res Ethics 7(1):28–36
van den Scott L-JK (2013) Working with aboriginal populations: an attitude of learning. In: Sieber J, Tolich M (eds) Planning ethically responsible research, 2nd edn. Sage, Los Angeles, pp 128–129
van den Scott L-JK (2016) The socialization of contemporary students by ethics boards: malaise and ethics for graduate students. In: van den Hoonaard WC, Hamilton A (eds) The ethics rupture: exploring alternatives to formal research ethics review. University of Toronto Press, Toronto, pp 230–247
van den Scott L-JK (2018) Role transitions in the field and reflexivity: from friend to researcher. Stud Qual Methodol (special issue: Emotion and the researcher: sites, subjectivities and relationships) 16:19–32
Wiles R, Charles V, Crow G, Heath S (2006) Researching researchers: lessons for research ethics. Qual Res 6(3):283–299
Woodfield K, Iphofen R (2018) Introduction to volume 2: the ethics of online research. Adv Res Ethics Integr 2:1–12
Further Reading

Smith LT (2012) Decolonizing methodologies: research and indigenous peoples. Zed Books Ltd., New York
van den Hoonaard WC, van den Hoonaard DK (2016) Essentials of thinking ethically in qualitative research. Left Coast Press, Walnut Creek
Ethical Considerations in Psychology Research
45
John Oates
Contents
Introduction
Background
Key Issues
  Psychology and Society
  Beneficence and Non-maleficence
  Withholding Versus Deceiving
  Performance Fears
  Incidental Findings
  Disclosures
Current Debate
  Reproducibility, Publication Bias, and the Pursuit of