Reconstructing Research Integrity: Beyond Denial

This book exposes significant threats to research integrity and identifies policies and practices that can reverse these threats.


Table of contents :
Preface
Reference
Contents
Abbreviations
Chapter 1: Research Integrity as a System Characteristic: Coordinated, Harmonized, with Incentives/Compliance in Alignment
Introduction to Research Integrity
Current Regulation and Directions for Reform
Issue Unaddressed in Regulation but Important to Research Integrity
How Widely Have Research Integrity Issues Affected Research Sectors?
Industry Influence
Government Agency Support for Science Integrity
Standard Setting Groups Assuring Valid Science
National Academy of Sciences (NAS) 2017 Study, “Fostering Integrity in Research”
Studies from Social Psychology and Organizational Science Suggest a more Complex Integrity System
Methods
Conclusion
References
Chapter 2: Blind Spots in Research Integrity Policy: How to Identify and Resolve Them
Current Blind Spots Affecting Research Integrity in Biomedical Science
How Are Blind Spots Detected?
Examples of Blind Spots in Research Practice/Governance; Impact on Research Integrity
Continued Use of the Metaphor “Bad Apple” to Describe Research Integrity Deviations
Hyperindividualization of Research Integrity Policy; Group Influences
Organizational Support of Ethical Behavior or Neutralization of Unethical Behavior?
Limited Consideration of Justice in Research Integrity
Institutional Trust in Science: Underdeveloped Element of Research Integrity, Prone to Being a Blind Spot
Challenges Necessary to Upset “Cognitive Locks” in Order to Reach Research Integrity
Additional Areas of Theory from Related Disciplines that Would Support Move to Research Integrity
Policy Sciences
Examples from Sociocultural Disciplines
Conclusion
References
Chapter 3: Evidence-Based Research Integrity Policy
Lessons from Translational Science, Regulatory Science, Meta-Science, and Science Policy
Translational Science
Regulatory Science
Meta-Science
Science Policy
Precision of Scientific Standards
Evidence of Scientific Quality Needing Attention
Clarity of Scientific Standards
Standards of Evidence in Research Integrity Decisions
Measurement Instruments and Methods for Use in Research Integrity
Research Syntheses, their Social Production and Methods Illuminative of Ethics
Limitations to an Evidence-Based Approach to Research Integrity and Especially to the Current State of Evidence Base
Conclusion
References
Chapter 4: Responsible Conduct of Research (RCR) Instruction Supporting Research Integrity
Lines of Investigation Evaluating RCR Training
Lines of Basic Research with Strong Implications for Robust RCR Training
Responsible Conduct of Research Training and Quality of Science
Strengthened Approaches to RCR Training
Recommendations
Relevance of Higher Education Research to Traditional RCR
Conclusion
References
Chapter 5: Emerging, Evolving Self-Regulation by the Scientific Community
Introduction
Examples of Scientific Self-Correction/Self-Governance and Suggestions for Improvement
Prominent Case Examples of the Evolution of Science Self-Governance
Preclinical Research
Human Experiments with Hepatitis (Halpern)
When External Regulation Was Deemed Necessary
Dual Use
Research Integrity and Research Misconduct Regulations
Institutional Corruption Framework
Characterizing the Task Ahead, with Guidance from the Institutional Corruption Framework (IC)
Core Institutions as Conduits for Science’s Core Goals
Reproducibility Project Cancer Biology as Example
Reforms Supporting Science’s Core Goals; Lessons from East Asia
Discussion and Conclusion
References
Chapter 6: Conflict of Interest and Commitment and Research Integrity
Background
State of COI as an Issue
Case Studies
COI Management and Regulation
Regulation
Scientific Self-Regulation and Other Prudent Practices
COI Management Supporting Translational Science
Institutional Conflicts of Interest (ICOI)
Necessary Conditions/Consequences
More Radical Approaches
Important New Approaches
An Area where Standards for COI Are Well Described: Data Safety Monitoring Boards
COI and Conflict of Commitment as a National Security Issue
Summary
References
Chapter 7: Institutional Responsibilities for Research Integrity
Social Framework and Trustworthiness
Responsibility for Coordinated Quality of Research Produced Under the Network of Institutions
Examples of Research Integrity Innovation, and Needed Reform in Research Misconduct Management
Research Institutions in the Industrial Sector
Research Integrity Guardrails from Regulators, Funders, Journals, and Research Infrastructure
Regulators
Funders
Journals
Research Infrastructure
Navigating Institutional Responsibilities for Research Integrity; Among Competing Institutional Logics
Assuring Research Competence and Responsibility to Conduct Trustworthy Research
Navigating Competing Institutional Logics: Market Logic; Research Integrity Logic
Lessons From the Theory of Institutional Corruption; A Lens for Return to Basic Scientific Values
Science Institutions and Public Policy
Conclusion
References
Chapter 8: Science Evaluation: Peer Review, Bibliometrics, and Research Impact Assessment
Peer Review
Peer Review in Funding Decisions
Issues In Peer Review as a Practice, Consequences and Solutions, With Most Focus on Publication Peer Review
Consequences and Solutions
Bibliometrics or Infometrics to Evaluate Research Performance
Toward Fairer Use of Bibliometrics to Support Research Integrity
Citation Metrics Embedded in Scientific Structures
Research Impact Evaluation
Ethical Imperatives to Test Reforms
Summary: State of Science Evaluation and Ethics
Conclusion
References
Chapter 9: Research Integrity in Emerging Technologies: Gene Editing and Artificial Intelligence (AI) Research in Medicine
Research Integrity in Emerging Technologies
Human Gene Therapy/Editing Research
Better Practices
AI Research in the Biomedical Sciences
Accountability for Improved AI Research and Development
Other Ethical Views
Governance/Regulatory Oversight/Ethical Debates in Research in Emerging Technologies
Innovation Ethics as a Distinct Field
Conclusion
References
Chapter 10: Research Integrity as Moral Reform: Constitutional Recalibration
Moral Reform; Constitutional Recalibration
Other Frames By Which to Understand Research Integrity
There Is No Crisis (See Fanelli Quote at Chapter Beginning)
Addressing Current Research Integrity Problems Including Those Currently Neglected/Conflicted
Strengthening Science Self-Regulation
Data Ownership/Sharing in Health Research
Conflict of Interest in Research Policy and Practice
Steps to Research Integrity
Imperative to Move to Research Integrity
Constructing the Policy Base to Support Research Integrity
Potential Lines of Empirical Research/Normative Analysis
Arguably The Most Problematic Issues
The Bottom Line
Limitations
References
Appendix
Case: Open Access—Plan S and OSTP Policy
Case: Cochrane
Case: Commercial Determinants of Health and Research Integrity
Case: Scientific Integrity: DHHS Agencies Reporting and Addressing Political Interference
Case: Predatory Journals and Conferences (IAP, 2022)
Case: Retraction Watch
Case: ClinicalTrials.gov
References
Index

Reconstructing Research Integrity: Beyond Denial

Barbara Redman
Division of Medical Ethics, NYU Grossman School of Medicine, New York, NY, USA

ISBN 978-3-031-27110-6    ISBN 978-3-031-27111-3 (eBook)
https://doi.org/10.1007/978-3-031-27111-3

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2023

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG.
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland.

Preface

While both social and ethical norms evolve over time, those relevant to research integrity should now undergo managed change, in concert, harmonized and coordinated, with incentives aligned with the values of research integrity. Leaders have called for "fundamentally shifting the culture of science to honor rigor" (McNutt, 2020). This call to action comes in the face of: evidence of poor quality and some fraudulent science, an incomplete and inaccessible scientific record hijacked by the commercial interests of journals, an unwillingness to control biasing effects of conflicts of interest, lack of a sufficiently clear notion of accountability among the various institutions of science, virtually no evidence base necessary to understand deviations from research integrity, and evidence of a growing sense of alienation and revolt among scientists trying to practice with integrity in a system that assigns blame entirely to individuals who cannot control perverse incentives in the institutions and system in which they must operate. But it also comes with many examples of innovative change which align with goals of research integrity.

Research integrity is a broader term than is the traditional notion of research ethics; it invites examination of the landscape through new lenses. Here we probe issues/perspectives/frameworks/policy supportive of research integrity or not, and where not, innovations that do support this set of values. We examine blind spots hidden in the current mix of scientific self-regulation and minimal governmental regulation; institutional responsibilities often excused as accountability largely falling on individuals; and tools needed to support research integrity such as effective responsible conduct of research training and metrics/evaluation tools. Emerging technologies such as gene therapy and AI in medicine require a rigorous research base with ethics embedded from concept through the long journey to maturity in order to harvest their potential benefit. In the midst of this journey, gene therapy has benefited from vigorous ethical debate; medical AI has not, resulting in embedded harms spread widely that now must be addressed.

This book is focused on human research and on US policy. It can be read as an integrated monograph, chapter by chapter, that builds a set of interrelated arguments. After an initial examination of research integrity as a construct and as a goal, the chapters that follow illustrate the leverage points noted above to support a deliberate move toward research integrity. It is important to note that in every instance there is evidence of innovators demonstrating better paths; the challenge is to quickly test them and adopt those that better support research integrity. Arguably, such corrective processes are not yet sufficiently regularized and robust to assure the promise of science.

New York, NY

Barbara Redman

Reference

McNutt, M. (2020). Self-correction by design. Harvard Data Science Review, 2(4).


Abbreviations

AI: Artificial Intelligence
ASPR: Office of the Assistant Secretary for Preparedness and Response
BMJ: British Medical Journal
CDC: Centers for Disease Control and Prevention
CDoH: Commercial Determinants of Health
CEO: Chief Executive Officer
CFR: Code of Federal Regulations
ClinicalTrials.gov: A trial registry managed by the National Library of Medicine
COI: Conflict of Interest
COM: Conflict of Commitment
CONSORT: Consolidated Standards of Reporting Trials
COPE: Committee on Publication Ethics
COVID-19: Coronavirus Disease 2019
CTSA: Clinical and Translational Science Award
DHHS: Department of Health and Human Services
DNA: Deoxyribonucleic acid
DSMB: Data Safety Monitoring Board
ELSI: Ethical, Legal, and Social Implications
EMA: European Medicines Agency
ET: Emerging Technologies
EU: European Union
FCOI: Financial Conflict of Interest
FDA: Food and Drug Administration
FDAAA: Food and Drug Administration Amendments Act
FFP: Fabrication, Falsification, and Plagiarism
G20: An intergovernmental forum of 19 countries and the European Union, which addresses major issues related to the global economy
GAO: Government Accountability Office
H3Africa: Human Heredity and Health in Africa
HIV/AIDS: Human Immunodeficiency Virus/Acquired Immunodeficiency Syndrome
IC: Institutional Corruption Framework
ICMJE: International Committee of Medical Journal Editors
ICOI: Institutional Conflict of Interest
ILSI: International Life Science Institute
IRB: Institutional Review Board
JIF: Journal Impact Factor
LGBT: Lesbian, Gay, Bisexual, and Transgender
LMIC: Low or Middle Income Country
MRI: Magnetic Resonance Imaging
NAS: National Academy of Sciences
NASEM: National Academies of Sciences, Engineering, and Medicine
NCBI: National Center for Biotechnology Information
NHLBI: National Heart, Lung, and Blood Institute, part of NIH
NIH: National Institutes of Health
NINDS: National Institute of Neurological Disorders and Stroke
NPM: New Public Management
NSF: National Science Foundation
OA: Open Access publishing
OASPA: Open Access Scholarly Publishers Association
OHRP: Office of Human Research Protection
ORI: Office of Research Integrity
OSTP: Office of Science Technology Policy
Paywall: Closed access to publications in some journals unless a subscription or other fee is paid
PEERS: Platform for the Exchange of Experimental Research Standards
PHS: Public Health Service
PI: Principal Investigator
PREA: Performance-based Research Evaluation Assessments
QRP: Questionable Research Practice
RCR: Responsible Conduct of Research training
RCT: Randomized Controlled Trial
REC: Research Ethics Committee
RFO: Research Funding Organization
RI: Research Integrity
RM: Research Misconduct
RPO: Research Performing Organization
RR: Registered Report
UK: United Kingdom
US: United States of America
VA: Veterans Administration
WHO: World Health Organization

Chapter 1

Research Integrity as a System Characteristic: Coordinated, Harmonized, with Incentives/Compliance in Alignment

Science has always been self-correcting…we need to adopt an enterprise-wide approach to self-correction that is built into the design of science…creating tools to make self-correction easy and natural, and fundamentally shifting the culture of science to honor rigor (McNutt, 2020, abstract, p. 1).

This is a bold and welcome, but partial, view for reconstructing research integrity. Here I address how integrity is expected to function, the partiality of current regulation, obvious trouble spots, and the strengths and limitations of the 2017 National Academy of Sciences study of research integrity. Throughout this book, I describe not only the breadth and depth of problems but also suggested solutions, and raise the question of whether the system is beginning (or has the potential) to correct itself. This book is focused on the biomedical sciences in the United States, although examples from other disciplines and localities are occasionally used.

Introduction to Research Integrity

Integrity has been defined as "the quality of acting in accordance or harmony with relevant moral values, norms, and rules. Integrity management is consistent and systematic efforts to promote integrity" (Huberts, 2018, p. 20). Some see integrity as an umbrella concept, incorporating corruption, conflict of interest, and fraud (Huberts, 2019). Integrity norms may be fluid, vague, and to some degree context dependent. Integrity scandals offer an opportunity to discuss, affirm, or change integrity norms (Kerkhoff & Overseem, 2021). In research, integrity involves producing valid and reliable science while protecting and supporting research participants.

There are a number of entities in which integrity can be expressed that are important in practicing research integrity. Personal integrity, professional integrity, and intellectual integrity are all relevant, and products of research such as databases should reflect integrity, uncorrupted by error or falsification/fabrication. Social and political structures have features essential to having or practicing integrity in producing and distributing research. Integrity as a moral purpose requires asking about moral characteristics of the society in which research integrity is to be practiced (Integrity, Stanford, 2021).

It is important to note that while the notion is old, conceptualization of research integrity matched to current conditions is only beginning to be developed and systematized. The movement to do so has become significant only in the past two decades, in part because it is challenging the notion that science self-regulation is sufficient. Initial development of the concept has addressed infringements on the duty of integrity. Thus, much of the current literature is dominated by biomedical cases that largely focus on fabrication, falsification, and patient safety issues, potentially diverting attention from relevant but less visible issues (Armond et al., 2021), such as defining and measuring harms from research misconduct and uncontrolled conflict of interest. A positive and more complete definition is still evolving, not yet yielding agreement in the scientific community (Futures Neves, 2018).

Science is thought to function in communities, which establish moral boundaries and control them through training, peer review, and replication of research results. Relevant norms include measurement accuracy, error correction, repeatability and refinement of methodologies, and truth claims based on accumulated evidence. The core value is the credibility of the knowledge produced. Since theoretical and research practice errors and biases in data collection, analysis, interpretation, and dissemination can lead to false conclusions, self-correction by the community should control them (Michalski, 2022).

McNutt's plea (see above) is for the culture of science to fundamentally shift to honor rigor. Clearly, this must be a collective effort, coordinated across all the institutions of science, including funders, who should be accountable for the quality of studies they support; research-producing institutions (e.g., universities, companies, and governments), responsible for overseeing the quality of the science produced; journals, which in the aggregate have currently deviated mightily from the norms of science; policy makers who produce regulations and those responsible for assuring compliance with them; and others.

Interestingly, a summary of research on research integrity found that 45% of the papers located misconduct in problems with the system; only 16% found that problems of awareness and compliance on the part of individual researchers were central. Studies on policy makers and institutions were sparse, as were studies on IRBs and peer review in integrity violations (Bonn & Pinxten, 2019). This evidence confirms the current view that science institutions are not seen as responsible parties in the protection of research integrity in the production and dissemination of research.

Here, I review the reach of current research regulations and find them very partial and insufficient to support the full range of research integrity. Scientists and science institutions display both distress over current deficits and confusion over pressure for reform.
This chapter then addresses past efforts at research integrity policy/practice in the US context, and finally reviews findings from social science research for how a system supporting research integrity can be strengthened. An explanation of the methods used to obtain the examples examined in this book is provided.

Current Regulation and Directions for Reform

The most obvious place to look for formalized support for research integrity is in government regulations. One would expect regulations to be coherent and comprehensive, providing a consistent message of what practices are consistent with the goal of research integrity. The two dominant US federal regulations focus on protection of research subjects and on the management of research misconduct; they are neither coordinated nor harmonized.

Likely the most common research regulatory structure around the world is established to protect research subjects, variously called a research ethics board or an institutional review board (IRB). Although no evidence measuring the effects of these oversight committees on the ethical conduct of research with human subjects could be located, they are largely considered the bedrock structure (Friesen et al., 2019). Here we consider, in the US context, the lack of coordination and harmonization among IRBs and other regulatory bodies which have grown up over time. These include: conflict of interest committees, data access committees, data safety monitoring boards, dual use research of concern committees, embryonic stem cell research committees, institutional biosafety committees, privacy boards, scientific review committees, and radioactive drug research committees (for details see Friesen). As a group, they lack common definitions, have uncertain relationships with IRBs, and leave primary authority unsettled, especially in instances of conflicting views. Such issues could be clarified, harmonized, and coordinated by restructuring their relationships, assuring common definitions of relevant terms, and clarifying which group has regulatory authority when a disagreement occurs (Friesen et al., 2019).

While many cases involve suspected violations of both protection of human subjects (45 CFR 46) and research misconduct (42 CFR 93) regulations, they have "different foci of enforcement, priorities of protection, oversight officials, oversight procedures, seizure of evidence, standards of proof, expectations of privacy, and appeal procedures for researchers who are subject to adverse findings and penalties…These differences are significant and fundamental" (Bierer & Barnes, p. S2). Yet another set of regulations is applicable to research supporting FDA applications. Bierer et al. (2014) provide excellent guidance for how institutions can navigate these differences, but the relevant point is: why were they not coordinated as the regulations were defined? In a further example of lack of coordination, Ennever and colleagues (Ennever et al., 2019) note that the confidentiality section of consent forms must adhere to requirements from at least six different federal sources, which are inconsistent with each other. This, in addition to state laws, unnecessarily complicates compliance.


Other complaints address IRB arbitrariness and lack of an appeal mechanism. Rigorous IRB follow-up during and after completion of an approved study appears to be uncommon, limiting subjects' protection to pre-implementation stages. Elliott (2017) notes the lack of a requirement to interview research subjects in the event of a mishap or complaint.

These examples point to a series of problems, all undermining the end goal of research integrity. As research regulations have emerged over time, they have addressed a singular problem—a portion of the research integrity landscape—but neglected to fit the new regulation into the broader whole that would support research integrity. Also, new regulations (again aimed at a specific problem) have not been harmonized with existing regulations, leaving them contradictory to or inconsistent with older ones, and thus making it difficult to know how to comply with them. In addition, some areas of central concern are under-regulated, in particular conflict of interest. Thus, it could be said that research regulations lack coordination, which is the purposeful alignment of units, roles, and efforts to achieve a predefined goal, in this case research integrity.

Some recent efforts do address practices that would improve research integrity, in part through better coordination. For example, the EU Clinical Trials Regulation, expected to be applicable in 2021–2022, finally addresses a significant source of waste and unjustifiable exposure of research subjects to risks—control of redundant clinical trials, i.e., those that intend to investigate a question that can be answered with existing evidence. Research ethics committees and drug regulatory authorities would require synthesis of earlier research to justify a new study. Studies addressing replicability and generalizability should be supported. Improving the reliability of data from earlier studies and the quality of systematic reviews will be necessary to support control of redundant clinical trials (Kim & Hasford, 2020).

Prevention, detection, and management of research misconduct are less uniformly regulated around the world, even as some countries belatedly adopt such rules in response to scandals. US law defines research misconduct as "fabrication, falsification or plagiarism in proposing, performing or reviewing research, or in reporting research results." The regulations go on to explain that fabrication is making up data or results and recording or reporting them…Falsification is manipulating research materials, equipment or processes, or changing or omitting data or results such that the research is not accurately represented in the research record…Research misconduct does not include honest error or differences of opinion (42 CFR 93.103). Because the research misconduct regulations were promulgated without an underlying ethical framework, harmonizing them with the framework underlying human subjects' protection (based on the Belmont Report) would require attention to the effects of research misconduct on research subjects and subsequent patients. Such effects are currently neither addressed nor measured during investigation of allegations of research misconduct, thus not protecting subjects who may have been harmed as a result of fabrication/falsification (Redman & Caplan, 2021).

In the absence of an explicit public policy addressing it, do current regulations assure research integrity? We lack standards and concrete mechanisms for simultaneously evaluating research integrity at all levels of the science-producing system and for correcting errors. Each institution in this collective system must understand its responsibility and how to coordinate with others. Experiences in European countries are instructive. Analysis of research integrity policy in Denmark suggests it is diverse and unstable, not yet operating as a system with common values and coherence (Davies & Lindvig, 2021). The European Code of Conduct for Research Integrity stands in a kind of contradiction to country codes, with research misconduct (fabrication, falsification, plagiarism) being the only commonality. Such a situation raises questions of fairness for researchers across countries and of the credibility of the regulatory function of the Code (Desmond & Dierickx, 2021).

Issue Unaddressed in Regulation but Important to Research Integrity

Perhaps because authorship is currency for scientific success, with high stakes both for individuals and for groups of researchers, it is an area of significant turmoil and resistance, with a direct impact on perceptions of research integrity. Since formal regulatory agencies decline to address authorship, the International Committee of Medical Journal Editors' (ICMJE) guidelines have been presumed to constitute a standard, although their legitimacy and authority seem weak. Disciplines or fields of study often have their own guidelines, especially for author order, some of which (by alphabetical order) are systematically unfair. ICMJE and discipline-specific authorship guidelines are an important form of self-regulation but have failed to quell the turmoil. Honorary and guest authorships are common, further inflaming feelings of injustice on the part of legitimate authors, especially in the face of publication-based performance metrics.

A study of authorship policies at highly research-oriented universities found only a quarter with publicly available policies, and those offered little guidance for handling disputes or managing faculty–student authorship teams (Rasmussen et al., 2020). Thus, little guidance exists anywhere, and there is no attention to how this void affects research integrity and widespread feelings of unfairness. Some suggest that this lack of clarity and general unwillingness to resolve authorship issues reflect unjust practices and incentives hardwired into science's rules, guidelines, and conventions, justifying rule breaking as a form of legitimate disobedience (Penders & Shaw, 2020).

A more specific but underexamined concern is the gatekeeper role of journal editors who, it has been suggested, are not required to disclose the scientific or business basis for their decisions and how they choose reviewers or use their advice. Shaw and Penders (2018) suggest that editors' gatekeeper roles, which have great consequences for scientific careers, can approach censorship, with no transparency required. No authority has successfully addressed authorship issues; they continue to fester.


This is one example of an issue widely perceived to be unfair. Some suggest that the performance indicators addressed in Chap. 8 have weakened traditional collective control and that the incentives contained therein encourage individuals to take risks to capture visibility and its benefits. In fact, such responses by scientists could be considered adaptive to the current system (Paradeise & Filliatreau, 2021).

Here, I present three examples of research integrity issues researchers are trying to address. Some reflect structural unfairness built into the current system of science. Others bring into view ongoing needs for building integrity as science evolves. Still others represent solutions in the spirit of science self-regulation but which are underutilized, with no clear path to scale. In the aggregate, these three examples could contribute to integrity but are languishing because there is no integrity-enhancing authority pushing them into the mainstream.

Buckley (2022) describes an understudied issue of suppression by research funders. Because public funding is limited, many researchers are routinely under pressure to seek funding from a variety of sources, many with commercial or political interests, which incorporate restrictions into the funding. These restrictions are a form of suppression to support the funders' interests. They include: control over raw data; not allowing publication or requiring funder permission to do so; revision of the narrative; addition, removal, or substitution of authors; demands to control any press releases; and refusal to allow data to be posted in a repository. Buckley suggests reporting such practices to a national registry to alert others to the seriousness of this structural problem, which directly undermines conditions necessary for research integrity.

Auer and colleagues have developed a diagnostic system and training program for reproducibility of scientific results, Reproducibility for Everyone. The diagnostic system includes: technical factors (such as contaminated cell lines and batch effects); study design and statistics (such as misunderstanding statistics and selective reporting); human factors (such as insufficient detail about methods and materials); and external factors (such as rewards for significant results and lack of incentives for responsible research practice). They note that problems with reproducibility have been known for decades. Within the scientific community, solutions like theirs are being developed but remain heavily underutilized, again lacking an infrastructure to promote research practices that will improve integrity (Auer et al., 2021). (A schematic sketch of these diagnostic categories appears after these three examples.)

The final example here describes shifting notions of integrity as HIV research moves from prevention and treatment to cure. Of most concern are analytical treatment interruptions and placebos, fair selection of participants, and the community perspectives of people living with HIV. Treatment interruptions induced as part of the research can be seriously problematic because they represent an unknown risk of transmission to sexual partners. What risks are acceptable in early phase HIV cure research, and what level of benefit is necessary to be considered curative? A self-administered point-of-care rapid test to detect and measure viral rebound, with clearly delineated antiretroviral therapy restart criteria, would help to mitigate risk (Dube et al., 2021). Many areas of research can be expected to face such new issues as the science evolves.
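To make the taxonomy above concrete, the four diagnostic categories can be held in a simple checklist structure. The short Python sketch below is purely illustrative: the dictionary and the flag_issues helper are assumptions for demonstration and are not part of the Reproducibility for Everyone materials; the category names and example factors are those described above.

# Hypothetical checklist structure for the four diagnostic categories of
# reproducibility problems described above (Auer et al., 2021). The data
# structure and helper function are illustrative assumptions only.

reproducibility_diagnostics = {
    "technical": [
        "contaminated cell lines",
        "batch effects",
    ],
    "study design and statistics": [
        "misunderstanding statistics",
        "selective reporting",
    ],
    "human factors": [
        "insufficient detail about methods and materials",
    ],
    "external factors": [
        "rewards for significant results",
        "lack of incentives for responsible research practice",
    ],
}

def flag_issues(reported_problems):
    """Map reported problem descriptions onto the diagnostic categories above."""
    flagged = {}
    for category, examples in reproducibility_diagnostics.items():
        hits = [p for p in reported_problems if p in examples]
        if hits:
            flagged[category] = hits
    return flagged

print(flag_issues(["batch effects", "selective reporting"]))
# {'technical': ['batch effects'], 'study design and statistics': ['selective reporting']}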


While most of the above conversation has occurred within the biomedical or life sciences, the issues are not limited to these fields. Analysis of the field of economics shows a lack of replication and data sharing, widespread uncontrolled conflict of interest, and a lack of concern for the accuracy of findings that carry heavy influence over the lives of others (Yalcintas & Kosel, 2021). It can be concluded that the current system of formal research regulations is inconsistent and partial.

How Widely Have Research Integrity Issues Affected Research Sectors?

Scientific integrity has not been comprehensively and robustly regulated either by the scientific community or by the government, but it has widely been assumed to be uniformly present. This false assumption has been appropriated by entities trying to hide conflicts of interest or to assure that the highest level of evidence on which important decisions are based is indeed true. Examples include industries setting scientific standards consistent with their business interests, government stifling research in its own agencies that is not in accordance with the beliefs/goals of the political party in power, and standard-setting groups assuring that their systematic reviews are free from fraud.

Industry Influence

A case example may be found in the work of the International Life Science Institute (ILSI), founded and funded by food companies. It has built a literature on conflict of interest in food science and nutrition and on public–private partnerships, and is now defining principles and standards of scientific integrity, including compulsory certification or accreditation using its principles. Of note, these principles ignore the risks of accepting industry research funding and fail to provide guidelines to protect from these risks (Mialon et al., 2021). In a report describing these principles, some ILSI-affiliated authors did not report a conflict of interest (Kretser et al., 2020). To counter ILSI and other similar efforts to influence scientific integrity in research that will be used in public health policy, Nakkash and colleagues report the work of Governance, Ethics, and Conflicts of Interest in Public Health, a network with objectives including: sharing knowledge of undue influence from private sector actors…, documenting the governance, ethical, and conflict of interest issues that arise in the interactions between public health and the private sector, and actively demanding limits on the engagement with industry actors in public health (Nakkash et al., 2021).

Saltelli et al. (2022) have described strategies of industry-based lobbyists to capture methodological and ethical aspects of science for policymaking. Since books such as Merchants of Doubt (Conway & Oreskes, 2010) have exposed practices of obfuscation to keep the truth about tobacco smoke and other issues from emerging, industry representatives have moved to intervene directly in methodological and ethical aspects of science. In comparison with academic research, industry is protected by differential transparency, by largely being free from public oversight and norms, and uses "ethics washing" by distracting attention into ethical debates to buy time against effective regulation (Saltelli et al., 2022).

It also seems that earlier settlements with portions of the tobacco industry are not deterring it from obfuscations similar to those used previously. Litigation in the United States forced dissolution of several tobacco industry-funded third parties because of their role in spreading scientific misinformation about tobacco. But in 2017, a new scientific organization—The Foundation for a Smoke-Free World, funded by a tobacco corporation—showed the same obfuscations as practiced earlier. A comprehensive database of researchers' financial interests should aid editors in identifying and excluding research funded by the tobacco industry and its affiliates. The database should also contain information on the financial interests of journal editors and editorial board members, which is often not required (Legg et al., 2021). (A minimal sketch of how such a database might be queried at submission appears at the end of this subsection.)

These problems are apparently global and extend to interference by funders (industry, government, charity) with addiction research, particularly censorship of research outputs. While scientific research is essential to effective addiction treatment and drug policy, such interference has been shown to involve: predetermination of research designs by the funding body, a government department doing the bidding of industry to have certain recommendations removed from study findings, controlling access to data, and making future funding contingent on certain kinds of results (Miller et al., 2017). Current oversight has not controlled these influences.
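A minimal sketch of the kind of editorial screen such a financial-interests database could support follows. It is hypothetical throughout: the registry contents, record format, and function names are assumptions for illustration, not an existing database or API.

# Hypothetical screen of a manuscript's declared funders against a registry of
# tobacco-industry-affiliated entities. The registry contents and record format
# are illustrative assumptions, not an existing database.

TOBACCO_AFFILIATED = {
    "foundation for a smoke-free world",   # example named in the text above
}

def flag_tobacco_funding(declared_funders):
    """Return the declared funders that match the registry (case-insensitive)."""
    return [f for f in declared_funders if f.strip().lower() in TOBACCO_AFFILIATED]

submission = {
    "title": "Example manuscript",
    "declared_funders": ["Foundation for a Smoke-Free World", "University grant"],
}

flags = flag_tobacco_funding(submission["declared_funders"])
if flags:
    print("Editorial review required; industry-affiliated funding declared:", flags)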

Government Agency Support for Science Integrity

Several criteria should be used to support scientific integrity in federal agencies: decisions should be based on rigorous evaluation of the best available science; decisions should be reached and documented…with the bases of final decisions and the process well documented; and decisions should be made without inappropriate or external interference (Sharfstein, 2020). Federal policy on research was issued in 2010 to help agencies guard against political interference in science-based decision-making, including censorship of federal scientists and manipulation of scientific assessments. While 28 federal agencies and departments now have scientific integrity policies, only some have robust scientific integrity infrastructures, including a scientific integrity official, a board to help adjudicate potential violations, and liaisons to help employees understand the policy. Gaps in these policies remain, including holding political appointees accountable for adhering to integrity policies. Some agencies have Dissenting Scientific Opinion policies that allow scientists involved in a policy decision to write a dissenting scientific opinion for the public record. Agency scientific advisory committees should be managed to assure competency and control of conflict of interest (Goldman et al., 2019).

A survey of government scientists showed perceived losses of scientific integrity during the Trump administration, uneven across agencies. Political interference in scientific work and adverse work environments were less problematic at CDC and FDA than at agencies dealing with the environment. The Biden administration's January 27, 2021, memo establishes a working group to review scientific integrity policies and their implementation (Carter et al., 2021). The COVID-19 pandemic exposed the weakness of scientist communication rights at the CDC. Whistleblower protections should explicitly apply to scientific integrity complaints. Anonymized information about scientific integrity complaints and their resolutions at every agency should be publicly released (Kurtz & Goldman, 2020).

Standard Setting Groups Assuring Valid Science

The Cochrane group does systematic reviews and updates to provide best evidence on which to base clinical practice and to inform future research. Since such reviews are widely used to establish guidelines and policies, they must be free of fraud. Those involved in such reviews have published several case studies describing their efforts to detect fraudulent data/studies (Bordewijk et al., 2020, 2021a) and methods by which such studies might be detected (Bordewijk et al., 2021b). This set of authors concludes that when a paper is retracted due to data integrity issues, other publications by the same author should be assessed to identify similar problems. In addition, prospective registration of randomized controlled trials does not prevent studies with questionable data integrity from appearing (Bordewijk et al., 2021a). Continued work by the Bordewijk group found that methods to assess research misconduct in health-related research do not reference the strengths and limitations of these methods. Definition of a minimal set of tests required to optimize detection of misconduct constitutes a significant research gap. While it always helps to ask for the raw datasets and to apply statistical checks, such attempts are often hampered by poor accessibility and stewardship of research data. Assuring evidence of ethics approval and documentation of study medications will help to either engender trust or mistrust of the data. Automation and wide availability of methods to detect fraud are essential (Bordewijk et al., 2021b). These findings do not engender confidence in the ability to assure that systematic reviews are free of fraud; at the same time, the effort of these reviewers to solve the problem is admirable. It is perhaps best to assume that all studies/data are potentially fraudulent rather than to assume the opposite, which has been current policy. The initial development of a Research Integrity Assessment (RIA) tool (Weibel et al., 2022) and its application to a Cochrane review of Ivermectin for preventing and treating COVID-19 (Popp et al., 2022) is very promising. Evidence synthesis depends on the assumption that included studies are valid; routine use of risk of bias
tools does not fully address research integrity. Including problematic studies can lead to misleading conclusions, affecting clinical practice and research priorities that rely on the reviews. Although systematic reviews aim to identify all studies that meet the eligibility criteria for the review, use of the RIA tool in the Ivermectin review for COVID-19 excluded more than 40% of studies. Each study considered for inclusion in the review should be independently assessed by two review authors on six criteria: retracted studies or studies with a published expression of concern; prospective trial registration; adequate ethics approval; author group; sufficient reporting and plausibility of methods; and plausibility of results. Correspondence with study authors may be necessary. Thus, research integrity in evidence synthesis should be part of the eligibility screening for inclusion in systematic reviews (Weibel et al., 2022).
In summary, in significant sectors in which science is produced and used (industry, government, and standard setting organizations), research integrity problems have been documented. Some vigorous efforts are beginning to address these issues. What have past efforts in the US context taught us?

National Academy of Sciences (NAS) 2017 Study, “Fostering Integrity in Research”

The NAS 2017 study is the most recent authoritative US examination of research integrity issues. Here we note the findings from this report.

Integrity in research means that the organizations in which research is conducted encourage those involved to exemplify these values in every step of the research process…including training the next generation of researchers and maintaining effective stewardship of the scholarly record. p. 1.
The research enterprise is a complex system that includes universities and other research institutions…the federal, foundation and industrial sponsors of research…journal and book publishers and scientific societies. These organizations can act in ways that either support or undermine the integrity of research. p. 1.
Fields and disciplines may take on as a community the task of defining and upholding necessary standards… p. 2.
Failing to define and respond strongly to research misconduct and detrimental research practices constitutes a significant threat to the research enterprise. p. 2.
…need to rethink and reconsider the strategies used to support integrity in research environments… p. 2.
…this report does not conclude that the research enterprise is broken…but faces serious challenges in creating the appropriate conditions to foster and sustain the highest standards of integrity. p. 3.

Recommendations: “…all stakeholders in the research enterprise—researchers, research institutions, research sponsors, journals, and societies—should significantly improve and update their practices and policies to respond to the threats to research integrity identified in this report.” p. 4.


No permanent organizational focus for efforts to foster research integrity at a national level currently exists. The Research Integrity Advisory Board recommended by the committee would bring a united focus to understanding and addressing challenges across all disciplines and sectors. p. 6.
Deviation from good science can cause great damage to the research enterprise—both in the practice of science and in how that science is perceived in the broader society…sustainability of the scientific enterprise…depends on putting best practices to work across the entire system. p. 15.

Fostering Integrity in Research demonstrates clear understanding that science is a system of institutions, all of which must be operating with integrity. This report concludes that the research enterprise faces serious challenges but is not broken, without clarification as to the standard that would declare it broken. To my knowledge, the Research Integrity Advisory Board, which would have “no direct role in investigations, regulation or accreditation…will serve as a neutral resource based in the research enterprise that helps the enterprise respond to ongoing and future changes” (NAS, 2017, p. 6), has not been revisited. Recent citations to “Fostering Integrity in Research” show very little activity, suggesting that its recommendations may be lying dormant. Clearly, monitoring the behavior of individual scientists to see whether they adhere to ideal research and publishing standards is inadequate for ensuring integrity (Roberts et al., 2020); yet, “Fostering…” does not take the necessary next steps to assure that the institutions in the system are held accountable for their roles. It should be noted that this 2017 NAS study shows some progress from the previous several decades in thinking about research integrity. A 1992 survey of American Association for the Advancement of Science members found little confidence that their universities would properly investigate suspicions of research misconduct, and respondents believed such concerns would likely never be completely resolved (Hamilton, 1992). At that time, scientific integrity appears to have been equated with the absence of research misconduct. A 2002 Institute of Medicine study, Integrity in Scientific Research, made several noteworthy observations/conclusions: “No established measures for assessing integrity in the research environment exist.” (p. 3), “There is a lack of evidence to definitively support any one way to approach the problem of promoting and evaluating research integrity.” (p. 3), “At this time, neither the executive nor legislative branches of government has established a regulatory framework to foster integrity in scientific research, and the committee does not believe such a framework would be desirable or comparable to the system that has been put into place to address misconduct in science or the use of institutional review boards” (Institute of Medicine, 2002, p. 11). What is still missing: a government framework for research integrity; a model, documented in its effectiveness, for responsible conduct of research training; effective oversight for quality of scientific research and for a valid and complete scientific record; evaluative metrics aligned with the goals of science. Among the accomplishments in the past two decades: acknowledgment that research integrity is a system issue and requires all parts of that system to operate consistently with the goals of science; the vigorous establishment of meta-science, which provides evidence and judgment about how/if/when science is operating with integrity; a
number of initiatives, often by institutions, to invest in pilot programs to improve research integrity. Many of these initiatives are noted throughout the following chapters.

Studies from Social Psychology and Organizational Science Suggest a More Complex Integrity System

While honesty and conscientiousness are important in expressing the terms of integrity, assumptions about how to instill and maintain these elements do not seem to reflect what is known about them. Such assumptions are reflected in self- and governmental regulatory policy: that the scientific literature is largely valid and reliable and therefore can be used as a basis for medical and other practice; that “bad apples” are rare and, when they do occur, will quickly be identified and dealt with by other members of the scientific community; and that, where relevant, their damaged work will be retracted and not used further by the scientific community. All of these assumptions are false and will be addressed in subsequent chapters of this book. Lying, dishonesty, and deception are all corrosive to research integrity. Research has shown that many people are honest most of the time; the majority of lies are told by a few prolific liars, who often feel little guilt (Serota & Levine, 2015). Recent summary reviews across hundreds of studies have clearly confirmed that dishonesty is too prevalent to count as a rare exception (Hilbig, 2022). Individuals with trait dishonesty and feelings of superiority and deservingness adopt beliefs that justify norm violation. Situations of ambiguity feed such aversive behavior (Hilbig et al., 2022). These findings suggest that standards be made clear and that all research settings have vigorous oversight to identify and stop such individuals. All past and future publications of those found to have committed research misconduct must be scrutinized. Some scientists have amassed as many as 183 retractions; in a well-functioning research integrity system they would have been stopped long ago, avoiding further contamination of the literature. Openness and transparency enable self-governance; forms of deception undermine it. A regulatory institution that does not investigate and punish deceptive behavior in effect enables it. Does this system exist (Sarkadi et al., 2021)? It is quite possible that the infrastructure for research integrity is incomplete. Although not undergirded by rigorous empirical evidence, work in the organizational sciences, widely supported by theory, suggests that an integrity program, a formal organizational control system with internal coherence, is necessary to embed and advance integrity. While rule compliance is part of this work, building a culture of integrity is thought to be necessary to promote positive behavior and impede integrity violations. An integrity program is dynamic, periodically analyzed, and adapted. Without such a plan, integrity activities are destined to remain incident-driven, prompted by scandals or erratic political or financial decisions (Hoekstra & Kaptein, 2021).

Such a program consists of an entire configuration of coherent elements, jointly responsible and including: codes of conduct, an integrity training program, institutions for reporting and investigation, and ethical leadership. Importantly, external elements such as audit institutions, the media, and an external watchdog organization complete the system (Huberts & van Montfort, 2021). In research integrity, the research-producing institution should have such a program, and it should extend to all institutional activities. Should the entire system of institutions involved in producing and disseminating research likewise have such a program/system of research integrity? Weaknesses in this overarching set of institutions are addressed throughout the book but include: a sense of science exceptionalism that it will always self-correct, greed-distorted journals, Congressionally created regulatory agencies starved of resources and inappropriate use of funding organizations as regulators, inadequate regulation of commercially produced and often privately funded research, wide use of perverse metrics, lack of assurance of positive mentorship and practice of research integrity skills, unknown quality and portfolios of research integrity officers, long-term lack of resolution of methodological guidance on issues such as questionable research practices, and no overarching watchdog agency to diagnose integrity and public benefit and get the system back on track. Is there an internally coherent, properly functioning research integrity program in research-relevant institutions? Are these programs effectively managed, reducing integrity risks? A related construct, stewardship, plays an important role in securing a program of integrity. Stewardship is defined as an ongoing prudent vigilance that carefully monitors and mitigates harm over time. Regulatory stewardship supports actors toward regulation of a public good. Such stewardship is said to be currently underappreciated and not part of formal regulatory structures, but rather part of a community of common and collective interest (Laurie et al., 2018).

Methods

This is a conceptual book, drawing on existing literature that provides a variety of examples in order to shed new light on research integrity as an ethical and political construct. It is written in the mode of a narrative critical review, retrieving publications from a variety of databases (Web of Science, Scopus, PubMed) by means of the search terms noted at the beginning of each chapter and a snowball technique searching references. Systematic reviews and meta-analyses were sought, both for their summary of empirical research on a construct relevant to research integrity and to identify additional individual publications. Conference proceedings and blogs were not included; relevant books were included. Examples (which are non-exhaustive) yielded from these searches should not be interpreted as necessarily representative of actual practice but might be
considered against a standard: should examples that do not support research integrity occur at all, or at most minimally, and what relevance do they have for policy? How are positive, innovative approaches to support research integrity, which are documented throughout the book, encouraged, evaluated, and taken to scale? What coherent approaches to assessing and optimizing research integrity are in place and active? The underlying normative question is: is the current system operating at a level appropriate to its responsibilities? Within the framework of research integrity as an umbrella concept, the several institutions involved in production and dissemination of research, and the constructs that direct their behavior (e.g., conflict of interest and science self-regulation), are examined. Since many may accept the current system of research practice/integrity and thus do not question it in print, it is plausible that the literature more predominantly reflects those who find fault with it. Alternatively, the presence of empirical findings and normative analyses casting doubt on the functioning of the current system may represent movement toward a new norm more aligned with the purposes of science, including social impact. Thus, the current literature cited in the following chapters cannot be assumed to accurately reflect the state of science; a much larger project would be necessary to meet that goal. But it is also clear that the constant comparison of current with past practice—to justify that it is not deteriorating—is flawed because it assumes that the past was ethically satisfactory. Norms (both ethical and social) change over time. I (and others) interpret current criticism as reflecting a desire to align with justified norms of science and to correct a current system whose embedded incentives have pulled the production and dissemination of science away from its basic purposes. Greenhalgh et al. (2018) note that narrative reviews such as this provide interpretation and critique, clarification, and insight—their key contribution being deepening understanding. Particular focus is placed on unstated assumptions and on intriguing questions that have remained under-explored. Narrative reviewers select evidence judiciously and purposively with an eye to what is relevant for key policy questions, focusing on identifying where the intriguing unanswered questions lie (Greenhalgh et al., 2018). A discussion of those unanswered questions will be provided in Chap. 10.

Conclusion

Research integrity norms, aimed at producing valid science with subject protection and support, should be complete and should be revised over time, in part to reflect new technologies; meeting them requires collective, not just individual, behavior change. Such norms remain incomplete, and there is no compelling evidence that authoritative US examinations of research integrity issues have spurred active movement toward addressing the significant challenges in assuring it.

References

Auer, S., Haeltermann, N., Weissberger, T., Erlich, J., Susilaradeya, D., Julkowska, M., Gazda, J., Schwessinger, B., Jadavji, N., & Reproducibility for Everyone Team. (2021). A community-led initiative for training in reproducible research. eLife, 10, e64719. https://doi.org/10.7554/eLife.64719
Armond, A., Gordin, B., Lewis, J., Hosseini, M., Bodnar, J., Holm, S., & Kakuk, P. (2021). A scoping review of the literature featuring research ethics and research integrity cases. BMC Medical Ethics, 22, 50. https://doi.org/10.1186/s12910-021-00620-8
Bierer, B. E., Barnes, M., & IRB/RIO/IO Working Group. (2014). Research misconduct involving noncompliance in human subjects research supported by the Public Health Service: Reconciling separate regulatory systems. Hastings Center Report, 44(4), S2–S26. https://doi.org/10.1002/hast.336
Bonn, N. A., & Pinxten, W. (2019). A decade of empirical research on research integrity: What have we (not) looked at? Journal of Empirical Research on Human Research Ethics, 14(4), 338–352. https://doi.org/10.1177/1556264619858534
Bordewijk, E. M., Wang, R., Askie, L. M., Gurrin, L. C., Thornton, J. C., van Wely, M., Li, W., & Mol, B. W. (2020). Data integrity of 35 randomised controlled trials in women’s health. European Journal of Obstetrics & Gynecology and Reproductive Biology, 249, 72–83. https://doi.org/10.1016/j.ejogrb.2020.04.016
Bordewijk, E. M., Li, W., Gurrin, L. C., Thornton, J. G., van Wely, M., & Mol, B. W. (2021a). An investigation of seven other publications by the first author of a retracted paper due to doubts about data integrity. European Journal of Obstetrics & Gynecology and Reproductive Biology, 261, 236–224. https://doi.org/10.1016/j.ejogrb.2021.04.018
Bordewijk, E. M., Li, W., van Eekelen, W., Showell, M., Mol, B. W., & van Wely, M. (2021b). Methods to assess research misconduct in health-related research: A scoping review. Journal of Clinical Epidemiology, 136, 189–202. https://doi.org/10.1016/j.jclinepi.2021.05.012
Buckley, R. C. (2022). Stakeholder controls and conflicts in research funding and publication. PLoS One, 17(3), e0264865. https://doi.org/10.1371/journal.pone.0264865
Carter, J. M., Goldman, G. T., Rosenberg, A. A., Reed, G., Desikan, A., & MacKinney, T. (2021). Strengthen scientific integrity under the Biden administration. Science, 371(6530), 12. https://doi.org/10.1126/science.abg0533
Conway, E. M., & Oreskes, N. (2010). Merchants of doubt. Bloomsbury Press.
Davies, S. R., & Lindvig, K. (2021). Assembling research integrity: Negotiating a policy object in scientific governance. Critical Policy Studies, 15(4), 444–461. https://doi.org/10.1080/19460171.2021.1879660
Desmond, H., & Dierickx, K. (2021). Research integrity codes of conduct in Europe: Understanding the divergences. Bioethics, 35(5), 414–428. https://doi.org/10.1111/bioe.12851
Dube, K., Kanazawa, J., Taylor, J., Dee, L., Jones, N., Roebuck, C., Sylla, L., Louella, M., Kosmyna, J., Kelly, D., Clanton, O., Palm, D., Campbell, D. M., Morenike, G. O., Patel, H., Ndukwe, S., Henley, L., Johnson, M. O., Saberi, P., et al. (2021). Ethics of HIV cure research: An unfinished agenda. BMC Medical Ethics, 22(1), 83. https://doi.org/10.1186/s12910-021-00651-1
Elliott, C. (2017). Why research oversight bodies should interview research subjects. IRB: Ethics & Human Research, 39(2), 8–13.
Ennever, F. K., Nabi, S., Bass, P. A., Huang, L. O., & Fogler, E. C. (2019). Developing language to communicate privacy and confidentiality protections to potential clinical trial subjects: Meshing requirements under six applicable regulations, laws, guidelines and funding policies. Journal of Research Administration, 50(1), 20–44.
Friesen, P., Redman, B., & Caplan, A. (2019). Of straws, camels, research regulation and IRBs. Therapeutic Innovation & Regulatory Science, 53(4), 526–534. https://doi.org/10.1177/2168479018783740
Neves, M. (2018). On (scientific) integrity: Conceptual clarification. Medicine, Health Care and Philosophy, 21(2), 181–187. https://doi.org/10.1007/s11019-017-9796-8
Goldman, G., Carter, J. M., Wang, Y., & Larson, J. M. (2019). Perceived losses of scientific integrity under the Trump administration: A survey of federal scientists. PLoS One, 15(4), e0231929. https://doi.org/10.1371/journal.pone.0231929
Greenhalgh, T., Thorne, S., & Malterud, K. (2018). Time to challenge the spurious hierarchy of systematic over narrative reviews. European Journal of Clinical Investigation, 48(6), e12931. https://doi.org/10.1111/eci.12931
Hamilton, D. P. (1992). In the trenches, doubts about scientific integrity. Science, 255(5052), 1636. https://doi.org/10.1126/science.11642983
Hilbig, B. E. (2022). Personality and behavioral dishonesty. Current Opinion in Psychology, 47, 101378.
Hilbig, B. E., Moshagen, M., Thielmann, I., & Zettler, I. (2022). Making rights from wrongs: The crucial role of beliefs and justifications for the expression of aversive personality. Journal of Experimental Psychology: General, 151(11), 2730–2755. https://doi.org/10.1037/xge0001232
Hoekstra, A., & Kaptein, M. (2021). The integrity of integrity programs: Toward a normative framework. Public Integrity, 23, 129–141. https://doi.org/10.1080/10999922.2020.1776077
Huberts, L. W. (2018). Integrity: What it is and why it is important. Public Integrity, 20, S18–S32. https://doi.org/10.1080/10999922.2018.1477404
Huberts, L. (2019). Integrity and quality in different governance phases. In H. Paanakker et al. (Eds.), Quality of governance: Values and violations. Springer.
Huberts, L., & van Montfort, A. (2021, July 26). Integrity of governance: Toward a system approach. In Integrity, Stanford Encyclopedia of Philosophy. https://plato.stanford.edu/entries/integrity/
Institute of Medicine. (2002). Integrity in scientific research. National Academies Press.
Integrity. (2021). Stanford Encyclopedia of Philosophy. Accessed July 26, 2021.
Kerkhoff, T., & Overseem, P. (2021). The fluidity of integrity: Lessons from Dutch scandals. Public Integrity, 23, 82–94. https://doi.org/10.1080/10999922.2020.1826139
Kim, D., & Hasford, J. (2020). Redundant trials can be prevented, if the EU clinical trial regulation is applied duly. BMC Medical Ethics, 21, 107. https://doi.org/10.1186/s12910-020-00536-9
Kretser, A., Murphy, D., Bertuzzi, S., Abraham, T., Allison, D. B., Boor, K. J., Dwyer, J., Grantham, A., Harris, L. J., Hollander, R., Jacobs-Young, C., Rovito, S., Vafiadis, D., Woteki, C., Wyndham, J., & Yada, R. (2020). Scientific integrity principles and best practices: Recommendations from a scientific integrity consortium. Science & Engineering Ethics, 25(2), 327–355. https://doi.org/10.1007/s11948-019-00094-3
Kurtz, L., & Goldman, G. (2020, November 16). 10 steps that can restore scientific integrity in government. Scientific American.
Laurie, G. T., Dove, E. S., Ganguli-Mitra, A., Fletcher, I., McMillan, C., Sethi, N., & Sorbie, A. (2018). Charting regulatory stewardship in health research. Cambridge Quarterly of Healthcare Ethics, 27, 333–347. https://doi.org/10.1017/S0963180117000664
Legg, T., Legendre, M., & Gilmore, A. B. (2021). Paying lip service to publication ethics: Scientific publishing practices and the Foundation for a Smoke-Free World. Tobacco Control, 30(e1), e65–e72. https://doi.org/10.1136/tobaccocontrol-2020-056003
McNutt, M. (2020). Self-correction by design. Harvard Data Science Review, 2(4), 1–11. https://doi.org/10.1162/99608f92.32432837
Mialon, M., Ho, M., Carriedo, A., Ruskin, G., & Crosbie, E. (2021). Beyond nutrition and physical activity: Food industry shaping of the very principles of scientific integrity. Globalization and Health, 17(1), 37. https://doi.org/10.1186/s12992-021-00689-1
Michalski, J. H. (2022). The sociological determinants of scientific bias. Journal of Moral Education, 51(1), 47–60. https://doi.org/10.1080/03057240.2020.1787962
Miller, P., Martino, F., Gross, S., Curtis, A., Mayshak, R., Droste, N., & Kypri, K. (2017). Funder interference in addiction research: An international survey of authors. Addictive Behaviors, 72, 100–105. https://doi.org/10.1016/j.addbeh.2017.03.026
Nakkash, R., Mialon, M., Makhoul, J., Arora, M., Afifi, R., Halabi, A., & London, L. (2021). A call to advance and translate research into policy on governance, ethics, and conflicts of interest in public health: The GECI-PH network. Globalization and Health, 17(1), 16. https://doi.org/10.1186/s12992-021-00660-0
National Academy of Sciences. (2017). Fostering integrity in research. National Academies Press.
Paradeise, C., & Filliatreau, G. (2021). Scientific integrity matters. Minerva, 59, 289–309. https://doi.org/10.1007/s11024-021-09440-x
Penders, B., & Shaw, D. M. (2020). Civil disobedience in scientific authorship: Resistance and insubordination in science. Accountability in Research, 27(6), 347–371. https://doi.org/10.1080/08989621.2020.1756787
Popp, M., Reis, S., Schieber, S., Hausinger, R. I., Stegemann, M., Metzendorf, M., Kranke, P., Meybohm, P., Skoetz, N., & Weibel, S. (2022). Ivermectin for preventing and treating COVID-19. Cochrane Database of Systematic Reviews, 6(6), CD015017. https://doi.org/10.1002/14651858.CD015017.pub3
Rasmussen, L. M., Williams, C. E., Hausfeld, M. M., Banks, G. C., & Davis, B. C. (2020). Authorship policies at U.S. doctoral universities: A review and recommendations for future policies. Science & Engineering Ethics, 26(6), 3393–3413. https://doi.org/10.1007/s11948-020-00273-7
Redman, B. K., & Caplan, A. L. (2021). Should the regulation of research misconduct be integrated with the ethics framework promulgated in the Belmont report? Ethics & Human Research, 43(1), 37–41. https://doi.org/10.1002/eahr.500078
Roberts, L. L., Sibum, H. O., & Cyrus, C. M. (2020). Integrating the history of science into broader discussions of research integrity and fraud. History of Science, 58(4), 354–368. https://doi.org/10.1177/0073275320952268
Saltelli, A., Dankel, D. J., DiFiore, M., Holland, N., & Pigeon, M. (2022). Science, the endless frontier of regulatory capture. Futures, 135, 102860.
Sarkadi, S., Rutherford, A., McBurney, P., Parsons, S., & Rahwan, I. (2021). The evolution of deception. Royal Society Open Science, 8(9), 201032. https://doi.org/10.1098/rsos.201032
Serota, K. B., & Levine, T. R. (2015). A few prolific liars: Variation in the prevalence of lying. Journal of Language and Social Psychology, 34(2), 138–157.
Sharfstein, J. (2020). How the FDA should protect its integrity from politics. Nature, 585(7824), 161. https://doi.org/10.1038/d41586-020-02542-8
Shaw, D. M., & Penders, B. (2018). Gatekeepers of reward: A pilot study on the ethics of editing and competing evaluations of value. Journal of Academic Ethics, 16, 211–223. https://doi.org/10.1007/s10805-018-9305-6
Weibel, S., Popp, M., Reis, S., Skoetz, N., Garner, P., & Sydenham, E. (2022, August 29, ahead of print). Identifying and managing problematic trials: A research integrity assessment tool (RIA) for randomized controlled trials in evidence synthesis. Research Synthesis Methods. https://doi.org/10.1002/jrsm.1599
Yalcintas, A., & Kosel, E. S. (2021). Research ethics in economics: What if economists and their subjects are not rational? In P. Rona & L. Zsolnai (Eds.), Words, objects and events in economics (pp. 103–115). Springer International Publishing.

Chapter 2

Blind Spots in Research Integrity Policy: How to Identify and Resolve Them

Policies reflect a diagnosis of what will correct a perceived problem measured against what is politically possible, thus excluding other diagnoses, perspectives, and causes. Some important relevant options may be blind spots, which are invisible under current thinking. Both problem definition and political interest change over time, allowing unmasking of blind spots and consideration of alternative causes and solutions. Blind spots are inevitable, flowing from taking a position on an issue. They are also preventable, acknowledging that current views/policy institutionalize specific ways of framing an issue, which must regularly be examined against alternative frames. Blind spots do have costs to those whose concerns are not acknowledged. An example of a blind spot is the possibility of researchers’ collective responsibility for the ethics of research on ethnic groups, research that has played a role in determining those groups’ identity. Fossheim (2019) notes there is no clear conception of the nature of such responsibility beyond an individual researcher’s actions. Both the subject matter and the level of responsibility have not previously been considered relevant and therefore were a blind spot. Other examples include using patient-centered insights to reveal blind spots in the quality and safety of patient care (Gillespie & Reader, 2018). Another example is the relationship between racial inequalities and environmental injustices; a discipline can be blind to those voices not represented within it (Mirti et al., 2021). A further example notes that the science of safety has not produced a robust reduction in preventable adverse events, in part because it has ignored the role of human thinking and acting (Bosk & Pedersen, 2019). Blind spots are thus a type of biased attention/perception. They are empirical and conceptual, with both analytical and normative implications. They occur in all organizational forms and professions, especially in hierarchical settings which by their nature discourage reporting failure, as a set of frames and risk perceptions and
limits on information processing. Blind spots filter out information that destabilizes existing belief systems and power bases. They may present as: lack of action despite available information, absence of information due to lack of measurement tools, or inability to identify causes of problematic behavior (Lodge, 2019). Examination of blind spots is a form of housekeeping to help correct entrenched biases and solve problems (LeBaron et al., 2022). They can be made visible by providing multiple perspectives (Lodge, 2019), by mandatory impact assessment, and by extending and clarifying the broader concept of research integrity. There are a variety of ways to identify blind spots. We review those affecting research integrity in the biomedical sciences and how they have been uncovered and managed, or not. This section is followed by more in-depth analysis of a set of implicit assumptions/aspirations in current policy/practice—improper use of metaphors, two problematic organizational practices/policies, a broader consideration of justice, and the need for institutional trust in science. All affect the ability to support research integrity. Upsetting “cognitive locks” preventing research integrity may be necessary. Finally, we consider the value of importing research findings from relevant disciplines, which will provide alternative frames undergirded by empirical evidence.

Current Blind Spots Affecting Research Integrity in Biomedical Science

Where does one look for and detect blind spots? In biomedical science, examples of blind spots in need of examination fall within objectives such as:

–– Defining and assuring compliance with shared evidential standards and their normative implications. Why is so much attention placed on estimating the incidence of research misconduct in lieu of publicly providing evidence of research practiced with integrity?
–– Attaining and maintaining an accurate, cumulative scientific record, protected from weaponization by corporate entities with harmful social impacts. Why is there no stomach for effective regulation of conflict of interest, which fuels this distortion?
–– Assuring effective governance mechanisms for research integrity, with special attention to hyperindividualization of responsibility and appropriate institutional accountability. Why are there no enforceable standards for universities to assure the competence of the research occurring under their aegis, just as they receive and are accountable for the management of research funds?

The chapters that follow consider means by which these blind spots, central to research governance, might be addressed. Why is such examination necessary? Williams (2015) notes that history shows unknowing complicit involvement with practices that, in retrospect, were blind
spots. The history of research ethics surely displays many instances in which research subjects were denied moral standing, or in which previously undiscovered data fabrication/falsification contaminated fields of study, with untrue findings widely applied. Despite progress, both problems continue today, providing a compelling rationale for examining current blind spots. As evidence that concern about blind spots in research practice/governance is very active today, some see the emergence of a still-fragmented transnational constitution for science, challenging scientists’ professional autonomy in standards of evidence and proof and in the societal implications of research. A growing array of norm-producing initiatives, including ethics bodies, are part of this recalibration of old agreements about the autonomy of science (Verschraegen, 2018). Such a new constitution for science is emerging because of concern about unjustifiable blind spots in the current model of science governance. Some blind spots have had a major impact on the ability of science to fulfill its social and moral mission. Neff (2020) identifies as a largely unacknowledged blind spot the surrender of control over scientific priorities and access to the profit model of the scientific publishing industry. This has led to pursuit of knowledge of interest to journals, unrelated to consideration of societal importance and to proper accumulation of scientific knowledge. It has also led to the dominant metrics of science evaluation—citations from these journals (Neff, 2020)—which are not an indicator of the quality of the science. Concerns about metrics are addressed in a later chapter.

How Are Blind Spots Detected?

Blind spots can be detected when trying to translate theory into practice, from complaints by users of a system, or from clear descriptions in the literature. Some are conceptual, such as lack of protection of communities, particularly those which are vulnerable, from blind spots in normative documents and their use in regulation of social science research (Cragoe, 2019). Others are empirical, such as the finding that articles requiring retraction as a result of documented research misconduct were not, in fact, retracted to correct the scientific literature. Blind spots can be hidden behind myths and symbols and inherent in incentive structures. Some blind spots are yet to be fully described, lying hidden behind the perspectives and assumptions embedded in regulations, which are considered to be the authority. Research misconduct (RM) regulations (42 CFR 93) provide an example. Allegations of fabrication/falsification/plagiarism (FFP) must be made to the institution hosting the research by a whistleblower who, experience shows, will often face retribution. No regular monitoring to detect FFP is required. A survey in the USA showed many RM incidents go unreported to the responsible federal agency, the Office of Research Integrity (ORI) (Titus et al., 2008). Allegations of FFP must first be reported to the institution, which may not have much experience with such allegations or may be unaware of its own research climate supporting RI deviations. RM regulations do not require documentation of the full
amount of FFP from each case, nor is there any requirement to follow up on the amount of harm to research subjects or to subsequent patients whose treatment depends on the falsified data. Those subjects/patients are not entitled to disclosure of the RM that affected them nor to any compensation or correction of the ways in which they may have been harmed. All of those missing elements are blind spots with consequences for those involved, including a lack of trust that this process is fair. Blind spots may become evident with the use of new technologies (see Chap. 9), at the intersection of two academic fields, by questioning exclusions such as by gender or age, or by examination of patient or professional complaints. Many are related to data scarcity, certainly an issue in research integrity management, as institutions either do not systematically collect data according to common standards or refuse to make it publicly available in aggregate form. Likely for purposes of reputation protection, universities have been resistant to providing data to build an evidence base on the management of research misconduct allegations (Byrn et al., 2016) or on possible contributing causes such as institutional research climate. And some serious blind spots with direct links to research integrity and to patient welfare have been known for a long time and not corrected, even after regulatory intervention. The NIH Revitalization Act of 1993 required for the first time adequate inclusion of women in NIH-sponsored clinical research, although subsequent law narrowed this requirement to phase III trials only. FDA nonbinding guidance documents suggest analysis by sex in FDA-regulated drug and device clinical trials. Pregnant and lactating women continue to be excluded from most such trials, women are underrepresented in cardiovascular trials, and post-marketing surveillance studies often have insufficient stratification and analysis by sex (Spagnolo et al., 2022). While the proportion of women in clinical trials leading to drug approval in the USA has improved, current cardiovascular trials consistently show underrepresentation of women, especially for heart failure, ischemic heart disease, and stroke in comparison with the prevalence of these diseases in the population, a rate that has not changed for 30 years (Bierer & Meloney, 2022). Cardiovascular disease is the leading cause of death in American women, accounting for half of deaths; yet, a review of seven recent cardiovascular trials showed female enrollments at 30% (Iribarren et al., 2022), potentially explaining higher risk of adverse events among women. Flaws in clinical trials include sex-based differences in clinical presentation of ischemic heart disease, which often exclude women from trials for not meeting inclusion criteria. These may include lack of angina or of angiographically significant coronary lesions and, with the wide use of troponin as a marker for myocardial infarction, lack of sex-specific reference values, with consequent under-diagnosis of women (Iribarren et al., 2022). Bierer and Meloney (2022) suggest that journals should require a table on the representativeness of study participants and specifically compare participation to known prevalence rates. Mandatory analysis of trial data by sex should be included.

This blind spot and others like it, in which a whole section of the population is routinely disadvantaged, a finding known for a long time with no correction, constitute a significant failure of the regulatory system, yielding a persistent injustice to an already marginalized group. In summary, detecting important blind spots requires regular examination of a variety of frames through which to view an issue, accompanied by an active search for different perspectives. Alternative frames can be exposed by examining proposed governing models, inputs from users of a system, previously unrecognized defects in current regulations, or defects revealed upon introduction of a new technology. Blind spots may be recognized but uncorrected for long periods of time. When finally recognized, many can be interpreted as deterring science from its social mission and as unethical. Examples of issues important to research integrity that contain significant blind spots are discussed in the following section.

Examples of Blind Spots in Research Practice/Governance; Impact on Research Integrity

Here, I consider a number of overarching themes in current research practice/governance which are problematic for research integrity policy, including the necessary institutional trust in science. These are non-exhaustive examples.

Continued Use of the Metaphor “Bad Apple” to Describe Research Integrity Deviations

Metaphors persuade; they suppress and highlight particular features of an issue, and they make visible certain options for problem solving. Numerous controlled studies show metaphors can change how people represent concepts and reason about them (Hauser & Schwarz, 2016). Since metaphors also affect discourse about morality, it is ethically imperative to examine them. One of the most prominent research integrity deviations—research misconduct (RM)—is commonly explained with the metaphor “bad apple.” Such a metaphor may never have been useful but has persisted, serving a protective purpose for science. It certainly ought to be replaced. “Bad apple” has contributed to the persistent effort to document how many there are so that (consistent with self-regulation) they can be ejected from the scientific community. It ignores why RM occurs, how to avoid it, whether the “rot” can spread, or whether “bad apples” can and should be rehabilitated, presumably unlikely since “rotten” is a one-way road to the garbage. It assumes that people can be sorted into “good apples” and “bad apples,” ignoring much research which demonstrates that “good” people in some situations do bad
things, sometimes subconsciously. It also assumes that there are only a few “bad apples” in the scientific community and that they are usually discovered, investigated, and sanctioned so that the integrity of science remains relatively untouched (Armond et al., 2022). The metaphor “bad apple” should compel attention to “bad barrels” (institutions) and “bad orchards” (the broader scientific community) but so far has not done so. Several other pieces of evidence would be useful but have not been considered relevant. What are the experiences of those accused of RM—what metaphor would they use about themselves, their situation, and the punishment they will face? How would research subjects impacted by RM describe their experience; whom would they consider responsible for harms they may have experienced from fabrication/falsification of data/findings? This and information from other stakeholders would inform development of a replacement metaphor. Huistra and Paul (2022) have asserted that two research misconduct cases in the Dutch context, 14 years apart, depict a shift also evident in other literature: a move from emphasizing personal factors in research misconduct to a focus on structural factors such as publication and grant application pressures prevalent in the science system. If this suggested pattern is confirmed more broadly, where is the corresponding shift in research misconduct regulations, which at least in the USA continue to focus entirely on individuals? An alternative view that aims to expose blind spots in the system of RM suggests that it should be regarded as a symptom that the collective scaffolding is failing (Zwart, 2017) to direct individual behavior toward achievement of science’s primary purpose. Perhaps “integrity scaffold” would serve as an improved metaphor since it directs attention to all elements of the research production system that play a role in supporting research integrity. Another option is the metaphor “deception faucet,” useful because violations can include both deliberate (RM) and less deliberate processes such as the use of questionable research practices or failing to report null effects in a paper (Markowitz, 2020). The latter is a violation of research integrity. Deception is a moral term and, importantly, it can be fueled by an organizational climate (see discussion below). Criteria for a replacement metaphor might include: Metaphors are normative; as such, they should not be deceptive. Metaphors should be based on a proper scientific source rather than on a folk or underdeveloped source. Metaphors should open new avenues for understanding relevant behaviors of all parties involved and how integrity can be supported.

It is important to note that while metaphors are commonly used in the life sciences, Reynolds suggests that they should be strictly examined. Putting a metaphor into use provides an opportunity to determine how adequate it is; if determined to be inadequate and misleading, it should be dropped. In other words, metaphors are like hypotheses—provisional, partial. Be vigilant about how they are used in public policy to set the research agenda and frame questions (Reynolds, 2022).

Hyperindividualization of Research Integrity Policy; Group Influences

Research misconduct regulations speak only of individuals who have fabricated/falsified or plagiarized in proposing, performing, or reviewing research or in reporting research results, intentionally, knowingly, or recklessly (42 CFR 93.103-104). An allegation of RM is made by an individual (complainant) about a respondent. All findings of RM have been taken against individuals (ORI website). Yet, groups (a discipline, a working group, or an academic department) are receptacles of norms, transmissible ideas/practices that surely have affected these individuals. Social network theory and literature on ethics in organizations provide multiple insights into such influences and how RM can be prevented. Network theory suggests that distribution of complex, high-risk, or less familiar behavior depends on multiple sources of reinforcement across social networks (Centola, 2018). Studies of surgical practice document that a surgeon’s behavior is associated with that of the cancer surgeons in their network, suggesting a potential explanation for the dissemination of MRI use despite its benefits being uncertain (Pollack et al., 2017; Shi et al., 2019). In another study, cheaters strategically situated themselves in different cognitive social networks when preparing to behave unethically (sparse networks that would not monitor or detect their behavior) or when recovering from unethical behavior (dense networks that affirm their moral identities so as to attenuate negative self-evaluation) (Shea et al., 2019). Since individual behavior always occurs within networks, what accountability should be placed on the wider group? This question is a blind spot in current thought and regulation.

Organizational Support of Ethical Behavior or Neutralization of Unethical Behavior?

As noted above, research integrity, like other moral behaviors, is affected by social behavior in organizations and groups. The central insight is that ethical failures occur regularly in organizations. This occurs in part because of members’ blind spots in being unaware of their unethical behavior, and in part because research shows that individuals are ethically malleable in various work situations. Strong incentives, high cognitive load, and perception of performance pressure can lead to moral disengagement—a decoupling of one’s actions from moral standards. Moral emotions can also trigger disengagement—anxiety as a reaction to a perceived threat, envy to repair a perceived disadvantage. Gratitude (a positive moral emotion) can support ethical behavior. Promoting an organizational ethical infrastructure involves monitoring ethical behavior and rewarding it, placing ethical champions on each work team, and understanding when and why individuals may act unethically (De Cremer & Moore, 2020). Those who commit corruption rationalize that: they did not intend this action
and have no responsibility, there are no consequences from the corrupt action, others do it and get away with it, they are entitled to such action, or they were appealing to a higher purpose (De Klerk, 2017). Regular violation of standards, tolerated by the organization, will make such behavior seem normal. A specific set of rationalizations, both individual and collective, allows people to persuade themselves they are acting morally even as they commit criminal behavior, neutralizing moral standards and feeling no guilt. There are multiple such techniques: distorting the facts, negating the norm, appealing to another norm, blaming the circumstances or the limited options, relativizing the norm violation by comparing it to the behavior of others, shifting the blame to those who started the wrongdoing, and hiding behind imperfect knowledge. These techniques can be clustered into: denial of responsibility; denial of injury or consequences; denial of the victim (the party does not deserve protection); condemnation of the condemners; and appeal to a higher loyalty, commonly one’s employer (Kaptein & van Helvoort, 2019). Through the process of moral neutralization, initial moral dissonance can give way to acceptance at all levels, even industry-wide. The reward and punishment structure can affect such behavior and should be examined. Be alert to neutralization logic/behaviors and be prepared to challenge and question them, as they frequently are unexamined blind spots.

Limited Consideration of Justice in Research Integrity

Justice in research ethics has frequently been associated with participant (subject) access to clinical trials, trial inclusion criteria not being representative of those affected, and concerns that research does not match well with population disease burden. But other blind spots related to justice and research integrity are being explored. Here I suggest several. Population-level biomedical research now includes large-scale biobanks, genetic data repositories, and digital networks. Individual consent is no longer sufficient. A focus on the population collective, on ethical issues around social justice, and on how such research should be governed is required, but such a focus is currently poorly defined and lacks a normative framework. Commercial entities may also be involved, raising questions of data access, of the purposes to which data are put, and of what should count as public interest. Development of digital research technologies gradually exposed this blind spot, with initial thoughts described by a group of ethicists (Erikainen et al., 2020). A second example involves a new construct, institutional betrayal, in which individuals are systematically traumatized by institutional omission of preventive or responsive actions to protect those who trust or depend on that institution (Smith & Freyd, 2014). Research misconduct regulations require someone to make an allegation of fabrication/falsification or plagiarism; those who do so are often other employees or students in that institution, and such whistleblowers routinely are fired or demeaned and have no way to achieve justice. Some professional codes suggest that those who see evidence of FFP have a duty to report it. Doing so and then receiving
institutional condemnation is a form of institutional betrayal. While institutions receiving federal funds must name a research integrity officer, that person operates in a conflicted position: should he follow policies that support research integrity, thereby carrying the risk of reputational or financial damage to the institution, or should he become a “lap dog” (Bramstedt, 2021), since otherwise he himself may be fired? The root problem is a policy that allows/requires institutions to investigate their own allegations and evidence of research misconduct—clearly problematic as a conflict of interest but very protective of institutions’ interests. The fate of those who uphold research integrity in such situations has clearly been a blind spot that now has a name (institutional betrayal) but no solution. A different kind of injustice is embedded in research designs that do not support individual decision-making in areas of social importance. Typical research designs focus on effects at the group level but are not able to assess (are blind to) the extent to which these effects characterize each participant in the study. Even if a trial manipulation appears to have an effect at the group level, it may have no effect on many participants or may have an opposite effect and constitute a harm; conversely, a manipulation with no effect at the group level may still be effective for an individual. Zayas et al. (2019) suggest the highly repeated within-person trial design as an alternative. Goldenberg (2021) believes this blind spot has contributed to vaccine hesitancy. Public health recommendations are largely based on group efficacy, but what parents want to know is how a vaccine will affect their child. Based on trial evidence, clinicians often cannot answer this concern, and Goldenberg believes that refusal (or inability) to engage with individual concerns has led to a loss of trust in scientific and medical institutions. Vaccine hesitators may see this situation as a failure of scientific integrity, or alternatively, as serving corporate interests.

I nstitutional Trust in Science: Underdeveloped Element of Research Integrity, Prone to Being a Blind Spot In many ways, science as an institution depends heavily on its trustworthiness. Trustworthiness can insulate science when expert testimony is proven wrong, or a falsified theory or findings are discovered, understood as a necessary part of scientific progress rather than as evidence of malfeasance or incompetence (Klincewicz, 2022). Instead of spending time validating previous work, scientists trust those on whose work they build, just as they depend on team members with expertise different from their own. The most important consequence of conflict of interest is the proliferation of unreliable publications, worrisome to scientists because a growing body of incorrect or unreliable findings from COI or sloppy/fraudulent studies is disorienting and slows the progress of science. Indeed, trust is strained not only by unclear but suspected effects of COI but also by concerns about replicability and increasing rates of retractions. As currently constituted, peer review is not designed to audit every stage of the research process and frequently neither is the IRB, again depending on trust.


Once institutional trust is gained, it takes effort to sustain it; such trust is central to public funding of science. The scientific community has to organize itself to facilitate trust-generating conduct. Well-designed social practices and institutions of science incorporate incentives for scientists to behave in a trustworthy way. In turn, scientific institutions should be able to ensure that scientists are evaluated in a fair and reliable way (Rolin, 2020). Trust in the results of scientific research inevitably has to be directed in part at collective bodies rather than at single researchers. Such collective bodies are often disciplines defined by shared methodological standards, which differ in degrees of explicitness, generality, and binding force and can govern many steps in the research process, from design to data analysis. Methodological standards in a community may come from practices passed on in professional training, from peer review or editorial stances, or from widely used research tools such as statistics software packages (Wilholt, 2016). Practices in some of these communities are being challenged by meta-research (See Chap. 3). It is important to note that the role of disciplines in assuring scientific quality is rarely externally regulated, instead depending almost entirely on self-regulation by that collective. Is it not a serious blind spot that external regulation focuses on individual scientists and on research-performing institutions using federal money, and not on disciplines, which are largely not under the control of institutions?

Challenges Necessary to Upset "Cognitive Locks" in Order to Reach Research Integrity

"Cognitive locks" are assumptions, often unexamined, built into science governance and practice. They are good places to look for blind spots that may be blocking a move to a higher level of research integrity. Here we consider science governance/practice, roles played by expert commissions, examples of different disciplines studying the same problem, overlooked measurement standards, and the necessity to monitor the transparency and fairness of infrastructure changes. Science governance under the current notion of research ethics has incorporated important biases and omissions. First, it has been oriented to scientists and very rarely to effects on research subjects or on patients on whom science is used (a blind spot). Second, while research misconduct has been defined as the most serious deviation from scientific norms, use of questionable research practices may in the aggregate cause more harm and yet is not regulated. Third, a long-term assumption is that research is competently done. The frame of research integrity requires verifying that assumption and setting normative standards for the level of integrity. The framework of "research ethics" has not explicitly encompassed all relevant players in the system of production and dissemination of research. For each of the institutions so involved, what combination of training, institutional oversight, incentives, and environments is necessary to support research integrity?


It is noteworthy that many of the studies examining research integrity are carried out by national academies; in the USA, this is the National Academies of Sciences, Engineering, and Medicine. Such studies often provide a careful analysis of the available literature and are guided, within a charge, by a committee representing those considered experts in the issue. But such committees are often inherently conservative and biased toward sustaining the status quo under which members achieved their expert standing. This is a blind spot. Few other independent organizations, such as think tanks, address issues related to research integrity, operating instead in fields highly relevant to government, such as economics or governance. Ethics expertise is a core part of the governance and regulation of biomedical development. In the USA, bioethics commissions have been used to clarify positions, to illuminate issues beyond what is explicitly present, to sort moral judgments and provide justification for those judgments, and to examine different possible moral positions, all for the public and democratically accountable decision makers to deliberate and decide upon (Hedlund, 2014). Capron (2017) reviews the history of US federal bioethics bodies and their major publications, a number of which have addressed issues relevant to research integrity. One important example (the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research) was directly authorized by Congress in 1974 in the wake of the Tuskegee Syphilis Study. That commission addressed what the federal government should do to avoid another Tuskegee, reporting directly back not just to the executive branch but also to Congress. While commissions can achieve consensus, or serve as a crucible for hammering out conclusions on controversial issues (Capron, 2017), they operate intermittently, dependent for their existence on a supportive political climate, and thus cannot conduct regular monitoring to expose and correct blind spots. Holman (2021) describes the blind spots of two different disciplines studying the same problem—in this case, the effects of research partnerships between academics and industry. Work on this issue from the policy sciences and from philosophy of science does not overlap at all; yet each provides a partial view of the issue, leaving a blind spot that could be addressed by the other discipline. Policy sciences indicate that such partnerships are beneficial and focus on the quantity and quality of subsequent publications. Philosophy of science finds such partnerships to be distorting and corrupting, with evidence of commercially favorable outcomes. Where their perspectives might merge is in successful cases where the ideals and norms of academic research were shielded from negative effects of industry engagement (Holman, 2021). Such literature addressing blind spots from both disciplines is slim. The lesson from this example is that any issue should be studied from multiple perspectives, as each is likely to offer only a partial frame of the issue. A methodological blind spot, dubbed Questionable Measurement Practices (QMP), draws attention to widespread lack of transparency and rigor about construct validation of measurement instruments used in research. Reforms to problems of replicability have focused on statistics and data analysis because an abundance of


analytic flexibility has allowed researchers to produce desired results. But the current state of measurement instrument construct validity is a blind spot that will continue to undermine replicability. Construct validation of an instrument is a process by which researchers gather evidence to support the use and interpretation of scores for a given context, population, or time. It is common practice for measures to be developed and used in research while still lacking evidence of strong validity, sometimes constructed on the fly with no such evidence collected; instruments are also modified without rechecking evidence of validity. Such practices affect the validity of study conclusions as well as the ability to replicate (Flake, 2021). The construct being studied must be defined, with instruments built to measure it accurately, supported by evidence that is described in a study manuscript. Rigorous research design, appropriate sample size, and advanced statistics will not solve problems resulting from lack of construct validation of instruments used in research (Flake & Fried, 2020). Construct validity is a necessary but insufficient condition of replication; some studies with fatal internal validity or construct validity flaws do not replicate. Currently, there is little guidance for navigating measurement challenges that threaten construct validity in replication research (Flake et al., 2022). This blind spot can be illuminated by accessing strong scholarship from psychology/educational psychology and by routinely including a measurement specialist on the research team, just as a biostatistician is used. In a final example, infrastructure changes such as publication charges, implemented as part of the open access/open science initiative, require filling in blind spots to determine whether they are reaching their target and are fair. In the traditional subscription-based model, details of licenses for journals are usually kept secret through nondisclosure agreements. The new model of article processing charges, presumably eventually to replace subscriptions, offers transparency about financial flows in the publication market. Such information must be well monitored and complete, and it offers a basis for judging the fairness of charges levied by publishers for their services (Bruns & Taubert, 2021). Current practices by publishers are strongly contested, not only for access to journal articles that have been supported by public funds and peer-reviewed gratis (See Appendix) but also for the level of profits accruing to publishers. Thus, established ways of structuring science governance and practice may in fact be "cognitive locks," unexamined for better ways of supporting research integrity.

Additional Areas of Theory from Related Disciplines that Would Support Move to Research Integrity

How issues are framed can shape differences in priority—frames are ways of assigning meaning and significance to public issues. Frames are simplifications of reality and are usually contested, with groups using various forms of power to secure attention


and resources for the issues that concern them. Moral questions are ever present—for example, why should HIV/AIDS be selected for high-priority research and universal access while other diseases of equal or greater import are neglected? Research integrity offers an alternative, inclusive, system-wide frame to what has loosely been called research ethics. As will be seen in examples throughout this book, problems that have traditionally been seen as the responsibility of science self-governance have been documented for decades but not resolved; a research integrity frame would demand attention and action. Framing rarely unfolds smoothly but rather is a struggle through iterative processes, likely with counter-mobilization of those opposing the integrity frame. Theory and research findings from relevant disciplines are key to uncovering new options that would not have become obvious in current frameworks. Here, we examine work from the policy and sociocultural sciences, both of which provide relevant direction for better research policy.

Policy Sciences

Theory from the policy sciences provides many insights about how research policy is structured, based in part on the form of government in each country. Policies can be overreactions or underreactions to a problem, the former exacting high costs for the expected benefits, the latter providing lower benefits than are needed to solve the problem, in part because of a lack of state capacity (Maor, 2020). Information quality is essential to an accurate decision on an appropriate and effective policy approach as well as to knowing when adaptation of the original policy is necessary because conditions have changed (Greer & Trump, 2019). Governments like that of the USA feature multiple actors (executive, legislative, judicial) which can strengthen and/or impede timely updating of policies. Regulated aspects of policy regarding federally funded research (subjects protection, research misconduct) show evidence of this decision structure. There have been long periods between regulatory revisions even as changes in research practice have multiplied, and revisions have stayed narrowly within the framework embodied in the original law which established these regulations. Certainly in the case of research misconduct regulations (last issued in 2005), subsequent studies have challenged assumptions made at the time of passage, showing clearly that RM regulations are underreactions to a problem now acknowledged to be larger than originally thought. In addition, information about the operation of these regulations and the problem they were to solve has been nearly absent, clearly of insufficient quality to direct adequate and timely updates. Agencies administering these regulations have been starved for resources, perhaps deliberately, in order to sustain the notion that science could adequately self-regulate its own practices. Yet, proper investment in information and revised policies could have improved the science delivered to the public.


Examples from Sociocultural Disciplines

Research from a sociocultural perspective identifies ethical problems traditional research ethics frameworks fail to see (De Vries, 2022). For example, women remain underrepresented in phase 1 clinical trials, curtailing their participation, subjecting them to additional surveillance, and in the end leaving women's health depending on therapies tested primarily on male bodies. As noted earlier in this chapter, since the late 1990s, NIH has demanded that researchers incorporate more women into clinical trials, but today their inclusion in phase 1 trials is largely at the discretion of the pharmaceutical company sponsoring the trial (Cottingham & Fisher, 2022). In a body of work that illuminates sociocultural limitations of research ethics frameworks, Jill A. Fisher's research on participants in phase 1 drug trials raises questions about whether structural aspects of participation in these trials challenge traditional research ethics and make participants more vulnerable (Fisher, 2020). Phase 1 trials are first-in-human tests of new pharmaceuticals; they use human volunteers and are necessary for eventual consideration of the product by the FDA. The invisibility of these clinical trials protects the interests of the companies from outside scrutiny. Metaphors of these research participants as human guinea pigs or lab rats highlight not only differences from later-stage clinical trials of affected patients, but also the confined settings in which phase 1 trials occur and the predominance of minority populations that enroll (Fisher, 2020). More generally, commonly used research ethics principles are misaligned with the realities of the mostly poor participants in phase 1 trials, who try to earn a living by enrolling serially in such trials. Phase 1 clinical trials test for drug toxicity levels and side effects. They do not benefit participants and often require staying in a confinement facility for some period of time, but they do compensate participants. Traditional research ethics concerns focus on undue inducement and coercion undermining voluntary participation. But many of the real participants in these trials face structurally diminished voluntariness because they lack other options for earning money. IRBs are typically left to come up with their own norms about ethically appropriate compensation rates, creating wide variation across research institutions. Restricting payments has spurred a different kind of bioethical worry—exploitation of the economically vulnerable among individuals who have little power to set conditions for phase 1 participation (Walker et al., 2018). A related concern is that payment creates incentives for problematic participant behavior that may undermine data validity, such as not adhering to washout periods between trials, not taking study drugs, or providing false medical history. Such violations are reported to occur, sometimes to avoid being banned without explanation from a trial or a company after reporting symptoms. An alternative that would support more honest participant behavior is to treat and monitor subjects experiencing adverse reactions in the facility and to pay them the full study compensation even if they no longer receive doses of the experimental drug and are effectively dropped from the study. There is a lack of information about the health impact of repeat trial


participation and no centralized database to track it. Informed consent is focused on risks presented by each individual study, not on interactions among, and cumulative effects of, multiple investigational drugs. With such information, participants would be in a position to make decisions based on a more realistic appraisal of the potential harms of serial participation. Thus, larger structural and system-oriented problems pervade this area of phase 1 research—in this situation, the research enterprise is positioned against a background of social inequities, calling for a re-examination of traditional research ethics. Possible policy reforms include: "transparency about why clinics ban individuals and limitations on appropriate banning practices, modifying payment structures to incentivize the reporting of side effects, assessing and communicating the risk factors relevant to serial participants, and extending worker-like protections to phase 1 participants. The perspectives and experiences of serial healthy volunteers illuminate the ethics of phase 1 practices beyond the current bioethical focus on discrete exchanges, behaviors and events" (Walker et al., 2018, p. 110), remedying a set of invisible and problematic research practices. Another racial blind spot—a gap in knowledge that may be brought about by white normativity—could explain why racial disparities in pain treatment and racial differences in pain experience have not been linked. Research programs concerned with racial differences in pain experience have failed by focusing on explanations at the individual level—genetic and psychological differences. The possibility that these differences result from how health care professionals treat individuals of different ancestral backgrounds, or from structural differences, has not been taken up in the literature. Such an error in defining a relevant research question can be described as collective unknowing, false beliefs, and misunderstanding, and it raises the question of how institutions are set up to reinforce such biases (Friesen & Gligorov, 2022). These are but a few examples of the application of findings from related sciences which have the potential to support research integrity.

Conclusion

Blind spots occur in all organizations and are embedded in policy. Some are inevitable as a consequence of taking a stand on an issue. Others are sustained by hewing for long periods of time to a particular frame without being forced to determine the costs of doing so or to examine other frames (disciplines, ideologies). These blind spots can have profound individual and system consequences. For example, inconsistent and incomplete monitoring of research misconduct and conflict of interest leaves identified violators bearing consequences, while those not identified by this leaky system are free of any consequences. Another example: the voices of research participants, and of the patients on whom research knowledge is eventually used, are totally absent from discussions regarding research integrity even though they are directly affected. Current governance is almost entirely science-facing, ignoring


many of its consequences to participants and the public. Another example: there is little concern about the total lack of transparency in the research ethics practices of companies that run the majority of clinical trials. Required challenges to these "cognitive locks" should occur regularly and be overseen by an outside, independent body. Here, I have examined a number of examples of "high level" blind spots embedded in current research ethics policy/practice: attachment to the defective "bad apple" metaphor to describe research misconduct, which currently forms the structure for the regulation of research misconduct; a set of policies that always blame the individual (hyperindividualization), excusing other parties from responsibility/accountability; organizational support, sometimes industry-wide, to neutralize/rationalize unethical behavior so that it is guilt-free; limited attention to the full range of issues of research justice; and lack of attention to trial designs that would answer questions patients believe are ethically central to their decisions. Recognition and deep examination of these blind spots, and of others yet to be carefully defined, are necessary, requiring expansion of views beyond those typically called upon, usually representatives of the scientific community. We do have a responsibility to identify and rectify blind spots, to allow fields to move on to more productive performance. The same may be said of the constant need to nurture institutional trust in science.

References

Armond, A. C., Gordijn, B., Lewis, J., Hosseini, M., Bodnar, J., Holm, S., & Kakuk, P. (2022). A scoping review of the literature featuring research ethics and research integrity cases. BMC Medical Ethics, 22(1), 50. https://doi.org/10.1186/s12910-021-00620-8
Bierer, B. E., & Meloney, L. G. (2022). Strategies to optimize inclusion of women in multi-national trials. Contemporary Clinical Trials, 117, 106770. https://doi.org/10.1016/j.cct.2022.106770
Bosk, C. L., & Pedersen, K. Z. (2019). Blind spots in the science of safety. The Lancet, 393(10175), 978–979. https://doi.org/10.1016/S0140-6736(19)30441-6
Bramstedt, K. A. (2021). Integrity watchdogs, lap dogs, and dead dogs. Accountability in Research, 28(3), 191–195. https://doi.org/10.1080/08989621.2020.1821370
Bruns, A., & Taubert, N. (2021). Investigating the blind spot of a monitoring system for article processing charges. Publications, 9(3), 41. https://doi.org/10.3390/publications9030041
Byrn, M. J., Redman, B. K., & Merz, J. F. (2016). A pilot study of universities' willingness to solicit whistleblowers for participation in a study. American Journal of Bioethics: Empirical Bioethics, 4(4), 64–67.
Capron, A. M. (2017). Building the next bioethics commission. Hastings Center Report, 47(3), S4–S9. https://doi.org/10.1002/hast.710
Centola, D. (2018). How behavior spreads. Princeton University Press.
Cottingham, M. D., & Fisher, J. A. (2022). Gendered logics of biomedical research: Women in U.S. Phase 1 clinical trials. Social Problems, 69(2), 492–509. https://doi.org/10.1093/socpro/spaa035
Cragoe, N. G. (2019). Oversight: Community vulnerabilities in the blind spot of research ethics. Research Ethics, 15(2), 1–15. https://doi.org/10.1177/1747016117739936
De Cremer, D., & Moore, C. (2020). Toward a better understanding of behavioral ethics in the workplace. Annual Review of Organizational Psychology and Organizational Behavior, 7, 369–393. https://doi.org/10.1146/annurev-orgpsych-012218-015151


De Klerk, J. J. (2017). "The devil made me do it!" An inquiry into the unconscious "devils within" of rationalized corruption. Journal of Management Inquiry, 26(3), 254–269. https://doi.org/10.1177/1056492617692101
De Vries, R. (2022). A tale of two bioethics. Perspectives in Biology and Medicine, 65(1), 133–142. https://doi.org/10.1353/pbm.2022.0008
Erikainen, S., Friesen, P., Rand, L., Jongsma, K., Dunn, M., Sorbie, A., McCoy, M., Bell, J., Burgess, M., Chen, H., Chico, V., Cunningham-Burley, S., Darbyshire, J., Dawson, R., Evans, A., Fahy, F., Finlay, T., Frith, L., Goldenbert, A., et al. (2020). Public involvement in the governance of population-level biomedical research: Unresolved questions and future directions. Journal of Medical Ethics, 47. Online ahead of print, October 6. https://doi.org/10.1136/medethics-2020-106530
Fisher, J. A. (2020). Adverse events. New York University Press.
Flake, J. K. (2021). Strengthening the foundation of educational psychology by integrating construct validation into open science reform. Educational Psychologist, 56(2), 132–141. https://doi.org/10.1080/00461520.2021.1898962
Flake, J. K., Davidson, I. J., Wong, O., & Pek, J. (2022). Construct validity and the validity of replication studies: A systematic review. American Psychologist, 77(4), 576–588. https://doi.org/10.1037/amp0001006
Flake, J. K., & Fried, E. I. (2020). Measurement schmeasurement: Questionable measurement practices and how to avoid them. Advances in Methods and Practices in Psychological Science, 3(4), 456–465.
Fossheim, H. J. (2019). Past responsibility: History and the ethics of research on ethnic groups. Studies in History and Philosophy of Biology and Biomedical Science, 73, 35–43. https://doi.org/10.1016/j.shpsc.2018.11.003
Friesen, P., & Gligorov, N. (2022). White ignorance in pain research: Racial differences and racial disparities. Kennedy Institute of Ethics Journal, 32(2), 205–235. https://doi.org/10.1353/ken.2022.0012
Gillespie, A., & Reader, T. E. (2018). Patient-centered insights: Using health care complaints to reveal hot spots and blind spots in quality and safety. Milbank Quarterly, 96(3), 530–567. https://doi.org/10.1111/1468-0009.12338
Goldenberg, M. J. (2021). Vaccine hesitancy: Public trust. University of Pittsburgh Press.
Greer, S. L., & Trump, B. (2019). Regulation and regime: The comparative politics of adaptive regulation in synthetic biology. Policy Sciences, 52, 505–524. https://doi.org/10.1007/s11077-019-09356-0
Hauser, D. J., & Schwarz, N. (2016). Medical metaphors matter: Experiments can determine the impact of metaphors on bioethical issues. American Journal of Bioethics, 16(10), 18–19. https://doi.org/10.1080/15265161.2016.1214311
Hedlund, M. (2014). Ethics expertise in political regulation of biomedicine: The need for democratic justification. Critical Policy Studies, 8(3), 282–299. https://doi.org/10.1080/19460171.2014.901174
Holman, B. (2021). What, me worry? Research policy and the open embrace of industry-academic relations. Frontiers in Research Metrics and Analytics, 6, 600706. https://doi.org/10.3389/frma.2021.600706
Huistra, P., & Paul, H. (2022). Systemic explanations of scientific misconduct: Provoked by spectacular cases of norm violation? Journal of Academic Ethics, 20, 51–65. https://doi.org/10.1007/s10805-020-09389-8
Iribarren, A., Diniz, M. A., Merz, C. N., Shufelt, C., & Wei, J. (2022). Are we any WISER yet? Progress and contemporary need for smart trials to include women in coronary artery disease trials. Contemporary Clinical Trials, 117, 106762. https://doi.org/10.1016/j.cct.2022.106762
Kaptein, M., & van Helvoort, M. (2019). A model of neutralization techniques. Deviant Behavior, 40(10), 1260–1285. https://doi.org/10.1080/01639625.2018.1491696
Klincewicz, M. (2022). Institutional trust in medicine in the age of artificial intelligence. research.tilburguniversity.edu


LeBaron, G., Mugge, D., Best, J., & Hay, C. (2022). Blind spots in IPE: Marginalized trends in contemporary capitalism. Review of International Political Economy, 28(2), 283–294. https://doi.org/10.1080/09692290.2020.1830835
Lodge, M. (2019). Accounting for blind spots. In T. Bach & K. Wegrich (Eds.), The blind spots of public bureaucracy and the politics of non-coordination (pp. 29–48). Springer International Publishing.
Maor, M. (2020). Policy over- and under-design: An information quality perspective. Policy Sciences, 53, 395–411. https://doi.org/10.1007/s11077-020-09388-x
Markowitz, D. M. (2020). The deception faucet: A metaphor to conceptualize deception and its detection. New Ideas in Psychology, 59, 100816. https://doi.org/10.1016/j.newideapsych.2020.100816
Mirti, M. N., Bowser, G., Cid, C. R., & Harris, N. C. (2021). Overcoming blind spots to promote environmental justice research. Trends in Ecology & Evolution, 36(4), 269–273. https://doi.org/10.1016/j.tree.2020.12.011
Neff, M. W. (2020). How academic science gave its soul to the publishing industry. Issues in Science & Technology, 36(2), 35–43.
Pollack, C. E., Soulos, P. R., Herrin, J., Xu, X., Christakis, N. A., Forman, H. P., Hu, J. B., Killelea, B. K., Wang, S., & Gross, C. P. (2017). The impact of social contagion on physician adoption of advanced imaging tests in breast cancer. JNCI: Journal of the National Cancer Institute, 109(8), djw330. https://doi.org/10.1093/jnci/djw330
Reynolds, A. S. (2022). Understanding metaphors in the life sciences. Cambridge University Press.
Rolin, K. (2020). Trust in science. In J. Simon (Ed.), The Routledge handbook of trust and philosophy.
Shea, C. T., Lee, J., Menon, T., & Dong-Kyun, I. (2019). Cheaters hide and seek: Strategic cognitive network activation during ethical decision making. Social Networks, 58, 143–155. https://doi.org/10.1016/j.socnet.2019.03.005
Shi, Y., Pollack, C. E., Soulos, P. R., Herrin, J., Christakis, N. A., Xu, X., & Gross, C. P. (2019). Association between degrees of separation in physician networks and surgeons' use of perioperative breast magnetic resonance imaging. Medical Care, 57(6), 460–467. https://doi.org/10.1097/MLR.0000000000001123
Smith, C. P., & Freyd, J. J. (2014). Institutional betrayal. American Psychologist, 69(6), 575–586. https://doi.org/10.1037/a0037564
Spagnolo, P. A., Lorell, B. H., & Joffe, H. (2022). Preface to theme issue on women's health and clinical trials. Contemporary Clinical Trials, 119, 106837.
Titus, S. L., Wells, J. A., & Rhoades, L. J. (2008). Repairing research integrity. Nature, 453(7198), 980–982.
Verschraegen, G. (2018). Regulating scientific research: A constitutional moment? Journal of Law & Society, 45(S1), S163–S184.
Walker, R. L., Cottingham, M. D., & Fisher, J. A. (2018). Serial participation and the ethics of phase 1 healthy volunteer research. Journal of Medicine and Philosophy, 43, 83–114. https://doi.org/10.1093/jmp/jhx033
Wilholt, T. (2016). Collaborative research, scientific communities, and the social diffusion of trustworthiness. In M. S. Brady & M. Fricker (Eds.), The epistemic life of groups: Essays in the epistemology of collectives. Oxford University Press.
Williams, E. G. (2015). The possibility of an ongoing moral catastrophe. Ethical Theory and Moral Practice, 18(5), 971–982.
Zayas, V., Sridharan, V., Lee, R. T., & Shoda, Y. (2019). Addressing two blind spots of commonly used experimental designs: The highly-repeated within-person approach. Social and Personality Psychology Compass, 13(9), e12487. https://doi.org/10.1111/spc3.12487
Zwart, H. (2017). Tales of research misconduct. Library of Ethics and Applied Philosophy.

Chapter 3

Evidence-Based Research Integrity Policy

In the absence of an overall metric that captures the integrity of research, what issues in the biomedical sciences do we need to address to assure research integrity, and is there an adequate evidence base to do so? There are scattered threads of evidence suggesting that we have been overconfident in our current system of governance. How uniformly is it addressing areas of poor-quality science; an incomplete and inaccessible scientific record; and the prevalence of bias, stemming in part from uncontrolled conflicts of interest and from perverse system-wide incentives that fall directly on individual scientists and make it difficult for them to practice with integrity? Alternatively, could current research integrity be judged robust, and if there are gaps, how could they be closed? An example is useful. While the connection between diet and health risks is of obvious importance both to individuals and to society, the evidence base surrounding much nutritional advice is disputed. Researchers at the Institute for Health Metrics and Evaluation at the University of Washington have created a star-based metric that rates the quality of the evidence for a link between a given behavior and a particular health outcome. The links between smoking or high systolic blood pressure and health outcomes are strong; the links between diets, such as eating red meat or vegetables, and health outcomes are notably weaker. The lower-rated findings indicate that studies in these areas need to get better if they are to yield convincing results that the public can trust and use to make informed choices. There is evidence that underpowered clinical studies without controls are contributing to this confusion. Funders need to invest in high-quality research in areas currently lacking convincing evidence (Studies, 2022). Several limitations of evidence-based policy should be acknowledged: scientific findings can remain conjectural and are often contested; successful evidence-based interventions may not generalize well; and philosophical reasoning and judgment are necessary, as is recognition that politics and governance institutions will play a role in policy decisions. Yet, large swaths of the infrastructure supporting the integrity of research practice require empirical information that is currently not only missing but


actively resisted, presumably to "protect" private interests, whether of commercial entities, including journals, or of universities. Concern about these deficits is described in all chapters of this book. Here, I examine evidence-oriented, largely still emerging developments that support research integrity. Multiple newly developed sciences offer important frames: translational science (from basic science to clinical application), regulatory science (the regulatory process should be evidence based), meta-science (empirical studies of the practice of science), and relevant studies from the policy sciences. Research integrity also requires tools: scientific standards for which consensus exists, measurement instruments that provide sufficient evidence of validity and reliability, and research syntheses which summarize the evidence and expose gaps important for research integrity standards and policies. Limitations of an evidence-based approach to research integrity are noted; the approach is especially hampered by the current lack of evidence.

Lessons from Translational Science, Regulatory Science, Meta-Science, and Science Policy

While most of these sciences are relatively new as disciplines, each plays an essential role in meeting ethical requirements to produce and apply accurate science.

Translational Science

Arising near the time of the mapping of the human genome, translational science addresses the gap between basic science and its clinical and commercial applications. The logic of translation is that the capacity of science and the market to produce innovations such as new cures requires a science focused on removing whatever barriers exist to reaching beneficial outcomes/products. Such translation will not occur spontaneously; it requires removal of regulatory obstacles and incentivizing academic, commercial, and governmental collaboration, each of which has traditionally operated with different versions of research integrity/ethics. Translational science also is framed as a responsibility flowing from public investment in science (Aarden et al., 2021), to reap benefits from such investment. Others have concluded that an underlying motivation for the emergence of, and significant government funding for, translational science, including 60 university-based translational centers (CTSAs), is a shift in the source of financial investment. The program is said to "de-risk" drug development for pharmaceutical companies by having academics do the early science using federal money. Industry partners can retain rights to monetize university research projects. This analysis coexists with the more publicly visible explanation for translational research, which is to move science and product development more quickly so that they can reach patients.


Ethically, "the market" becomes a central value for investment of public funds in this endeavor, sidelining other potential investments (Robinson, 2019a, 2019b). Thus, translational science has become a subdiscipline within biomedicine, complete with a vast publicly funded infrastructure, believed to be necessary in order to include the partners that can bring science to market. This arrangement brings significant potential threats to research integrity, rendering existing conflict of interest governance, largely through disclosure, inadequate. Conflicting interests are no longer seen as a problem (Jeste, 2020). Against a history of industry-sponsored research supporting the sponsor's interests, the only appropriate approach in translational research is for all parties, especially industry, to be held to high research production and reporting standards, transparently monitored by an independent body. The public should understand how, and through which entities, its investment in translational science is producing beneficial knowledge/products.

Regulatory Science

Regulatory science is most extensively used by agencies such as the FDA and by those preparing data for product approval, which requires meeting standards of quality, safety, and efficacy. In this space, it sets standards that define what counts as valid evidence for regulatory decisions, and thus heavily affects the kind of research undertaken in support of product approval. An example is statistical practices of safety monitoring in clinical trials. Several ethical tensions around the source of evidence in regulatory science have arisen. A science-based definition of regulatory science indicates that it uses applied versions of scientific disciplines in the regulatory process. Its focus is on risk and benefit assessment, sometimes using methods more appropriate for a decision-oriented science than for a knowledge-oriented (academic) science, sometimes yielding methodological controversy (Todt & Lujan, 2022). Given the impact of regulatory decisions on people's lives, there is much pressure to open these scientific processes to public engagement. Maintaining an image of independent, objective, reliable scientific processes is, in some respects, a different logic from opening the process to disparate social demands from the general public (Dendler & Bol, 2021). What is research integrity in this influential realm? Demortain (2017) suggests that regulatory science ends up including the part of regulatory knowledge which is agreed by all parties to be relevant, which is, of course, a matter of judgment and likely excludes important points of view. Real-world evidence has been accepted for some regulatory decisions such as post-approval safety surveillance, but its integration in early product development is not yet clearly defined. A second set of tensions arises in the case of emerging technologies and their unknown risks (see Chap. 9 for greater detail). How these technologies are tested, assessed, and monitored; how their effects are determined; and who sets standards, grants permission to enter the market, or requires additional research and development are important questions. Has an emerging technology been thoroughly evaluated to determine


if it is ready for regulatory application (Anklam et al., 2022)? The science is often of industrial origin and may be evaluated in closed settings without external peer review, leading to a perception of a bias toward supporting innovation. A further regulatory example shows an important blind spot. Research on commercial determinants of health has almost exclusively focused on markets that harm health (tobacco, guns), very rarely on markets with positive effects on health (vaccines), and almost never on how to shape markets to provide a more positive impact, consistent with the public good (Liber, 2022).

Meta-Science

Meta-science involves studying science itself and how it is produced, both ethically and methodologically. Meta-science could be seen as a social movement that uses the tools of science to diagnose strengths and weaknesses in research practice. Those writing in this field have delivered scathing analyses of multiple areas of failure in scientific standards and accuracy, outlined in a later section on precision of scientific standards. A current concern of meta-science is replication of scientific studies, essential to scientific self-correction but uncommon. In fields with high task uncertainty, replication is challenging and failures are more ambiguous, but the ideal of replication is held to be important in supporting research integrity. In general, meta-science seeks reforms above the level of disciplinary science, which has traditionally been the site of negotiation regarding standards. One particular example of a meta-science-based reform is Evidence-Based Research, a project born in acknowledgment of a meta-research finding that half of all clinical studies may be redundant and do not add value. Such waste will continue unless researchers systematically and transparently identify and access the existing evidence, both before planning a trial and after one is completed (Lund et al., 2021). Several studies clearly show that researchers tend to cite a small, unrepresentative set of earlier studies, often preferentially choosing the newest or the biggest studies or those that concur with the author's opinion (Robinson, 2020). These practices do not support stewardship of scientific resources or a clear tracing of the cumulative scientific record. The digital revolution puts the scientific ideal of cumulative research within reach (Lund et al., 2021), but that ideal is largely not being practiced. Some very useful kinds of meta-science studies link directly to clinical practice. Medical reversals are practices in use that are later shown by randomized controlled trials to be ineffective. Studies that could be located showed 396 such reversals over a 15-year period in three highly cited medical journals (Herrera-Perez et al., 2019) and 64 reversals over a 10-year period in high-impact oncology journals (Haslam et al., 2021). These studies look at a very small part of specialized chunks of the literature; what would be the impact of such meta-studies if they were routinely done as part of quality assurance for the evidence base of all fields of scientific practice? Some reversals never had a prior RCT while others did, but accumulated evidence shows them to be of low effectiveness (Haslam et al., 2021). Unfortunately, there is a lack


of established methods to identify these low-value practices open to reversal (Herrera-Perez et al., 2019). It is noteworthy that a majority of reversal studies were funded by non-industry sources even in fields in which 35–49% of trials had been industry funded (Herrera-Perez et al., 2019). Outside the meta-science movement/approach, there is an alternative view of the state of science. This view strongly supports the notion that there is no methodological crisis, suggesting that there is no evidence supporting such a framing and that it is harmful to science. Proponents of this view note several points in support of their position. Surveys suggest FFP is uncommon, with unknown effects on the scientific literature. There is no clear evidence that questionable research practices are increasing. While a certain amount of irreproducibility ought to be expected in most research fields, we lack consensus about what rate ought to be considered acceptable. Such a standard would need to take into account that science is tackling phenomena of increasing complexity and that some fields lack theoretical strength. Fanelli (2022) notes that there may be localized "crises" in specific fields of research. While Fanelli (2009, 2022) accurately draws attention to missing pieces of information/standards that should be corrected, his conclusion that there is no crisis ignores two important perspectives. First, arguing that indicators of science quality are not getting worse ignores the question of whether both present and past standards were fit for purpose and a good value for a large expenditure of public funds. Second, the fact that there is an insufficient evidence base to judge, monitor, and assure the quality of science constitutes irresponsibility on the part of the scientific community and those institutions that oversee and regulate it.

Science Policy

Evidence-informed science policy lags decades behind evidence-based medicine, which is serious since policy usually affects broad swaths of science. Some areas of social policy have been evaluated by randomized controlled trials. It is important to note that the political environment may encourage unfair manipulations of studies, and conflicts of interest can occur with those funding the research. What counts as evidence should be broadly inclusive. Within these bounds, information as verifiable as possible should inform decisions about how to reach goals that have been politically set (Newman, 2017). Sometimes these goals are very general—"Research misconduct involving PHS support is contrary to the interests of the PHS and the Federal government and to the health and safety of the public, to the integrity of research, and to the conservation of public funds" (42 CFR 93.100). The policy sciences also instruct us to gain a full understanding of strategic ignorance, bias, and denial that may be inconvenient to some groups affected by a policy (Paul). Again, US research misconduct policy is entirely silent on determining why someone fabricated or falsified data; it does not require examination of the extent of fabrication/falsification across the full range of the individual's publications or of its effects on research subjects in those trials, nor does it require informing subjects when fabrication/falsification has affected the trial or them personally. These are ethically serious areas of ignorance not sanctioned


under current policy, perhaps a form of neglect serving the political purposes of protecting science and avoiding liability. A more well-established area of protected ignorance is the ways in which clinical trials can hide bias and selectivity. Clinical trials that might expand a drug market are preferred over those that might narrow it, and very few trials deal with the rationale for discontinuing a medication (Paul & Haddad, 2019). Evidence-based policy studies address the questions: does the program/policy work, and does it produce its intended results? Some suggest that such evaluations should be replicated in at least two settings before the policy can be called evidence-based. If these questions cannot be answered, resources may be spent on programs of unproven effectiveness (Haskins, 2018).

Precision of Scientific Standards

As noted above, meta-research is the study of research itself, including its methods, reporting, evaluation, incentives, and overall quality, and is a necessary first step to improvement. Science is facing an important transformation, necessary to overcome the limitations noted throughout this text. New software tools; retraining of the scientific workforce; and revision of journal, funder, and university policies are necessary. But any single reform will address only a subset of this complex set of issues.

Evidence of Scientific Quality Needing Attention

Examples of specific deficiencies in particular areas of research are legion. Only a few are described here. The underlying ethical logic is that excellent research methods/practices are necessary to justify the use of research resources, including research participants, and to meet commitments to their protection. Appropriate statistical practice has long been an area of concern. Low statistical power has long been recognized as common to many studies but has not been reformed and continues to be a problem. In neuroimaging research, one study found that even in top journals only 4 of 131 papers in 2017 and 5 of 142 papers in 2018 reported pre-study power calculations. Low power increases the probability that statistically significant findings are in fact false, and many such exaggerated published effects from small studies will distort the literature. Both publishers and funders could require appropriately powered studies (Szucs & Ioannidis, 2020), a seemingly straightforward step. Yet, a survey of journal editors found that a third rarely or never used specialized statistical reviewers and another third used such expertise for only a few manuscripts. This pattern has not changed since 1998, despite the fact that scientific claims in biomedical research are usually derived through statistical analysis (Hardwicke & Goodman, 2020).
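To make the preceding points concrete, the sketch below shows what a routine pre-study power calculation looks like and why low power inflates the share of "significant" findings that are false. It is a minimal illustration, not drawn from any of the studies cited above; the effect size, alpha, and prior probability of a true hypothesis are hypothetical values chosen only for demonstration.

```python
# Minimal sketch: pre-study power calculation and the cost of low power.
# Assumes Python with numpy and statsmodels installed; all parameter values
# below (effect size, alpha, power, prior) are illustrative assumptions.
import numpy as np
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Sample size per group needed to detect a "medium" effect (Cohen's d = 0.5)
# with 80% power at alpha = 0.05 in a two-sample comparison.
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(f"n per group for 80% power: {np.ceil(n_per_group):.0f}")  # about 64

def ppv(power, alpha=0.05, prior=0.1):
    """Positive predictive value: the share of statistically significant
    findings that reflect true effects, given the pre-study probability
    (prior) that a tested hypothesis is true."""
    return (power * prior) / (power * prior + alpha * (1 - prior))

print(f"PPV at 80% power: {ppv(0.8):.2f}")  # ~0.64
print(f"PPV at 20% power: {ppv(0.2):.2f}")  # ~0.31: most 'findings' are false
```

Under these assumed values, dropping power from 80% to 20% roughly halves the probability that a statistically significant result is real, which is the mechanism behind the claim that underpowered studies distort the literature.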


A survey of consulting biostatisticians found a high frequency of inappropriate researcher requests: proposing a study with a flawed design, including insufficient power; setting aside data values when the outcome turns on a few outliers; reporting results of data analysis from only subsets of the data; overstating the statistical findings well beyond what the data support; interpreting statistical findings on the basis of expectation rather than actual results; and not reporting the presence of key missing data that might bias the results (Wang et al., 2019a, 2019b). A second example of poor scientific standards exists in research on treatments for mental disorders. The true efficacy of both psychotherapy and pharmacotherapy remains contested; their effects are overestimated due to publication bias, researcher allegiance, and financial conflicts from industry funding, consistent with low rates of study replication. Some relevant areas are understudied: long-term treatment effects, high dropout rates, and lack of data on side effects. Some suggest that mental disorders are not effectively addressed by available treatments and research strategies. Methodological standards must be significantly upgraded and studies run without industry control (Leichsenring et al., 2019). It should be noted that the EMA and FDA produce guidelines for the design of psychiatric drug trials to be used in applications for drug approvals. Short trial duration, restricted trial populations, allowing previous exposure to the drug, and efficacy measured through rating scales are to be avoided. Appropriate research designs, generalizability, and independent/non-conflicted stakeholders are important (Boesen et al., 2021). In a third example, a meaningful, ethical phase I/II trial can only be conducted based on a supportive prospective risk/benefit assessment, which depends largely on preclinical animal studies. These should be reported in Investigator's Brochures to inform IRB review and regulatory authorities. A review of 46 Brochures found fewer than 1% reporting blinded outcome assessment, randomization, and sample size calculations, with only 5% of the preclinical safety studies providing a reference to published data. Only a fraction of animal studies are published, which overestimates effect size. Adverse drug reactions not anticipated by preclinical safety studies are a cause of failure in drug development and harm subjects in Phase I trials. This translational failure may be due to flawed methodology of preclinical research and/or general failure of animal models (Sievers et al., 2021). A comprehensive study of life sciences research found quality deficiencies exceeding 20% in sample size power calculation, p-values reported without a statistical test, incomplete eligibility criteria, randomization deficiency, and lacking description of the study population, among others. No evidence of quality improvement over time was found. General improvement in quality should come from systematic and regular measurements of deficiencies and organized efforts to manage quality—initiatives that are uncommon in the biomedical research enterprise (Mansour et al., 2020). Others (Vinkers et al., 2021), in an analysis of 176,620 full-text publications of RCTs between 1966 and 2018, found that the likelihood of bias has decreased and an increasing percentage of RCTs are registered in public trial registries; yet, the average risk of bias remains high, especially in journals with lower impact factors.
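One practice reported in the biostatistician survey above, reporting analyses of only favorable subsets of the data, can be illustrated with a small simulation. This is a hypothetical toy example, not an analysis from any cited study; the number of subgroups, the sample sizes, and the subgroup labels are assumptions chosen only to show the mechanism.

```python
# Toy simulation: in a "null world" with no true treatment effect, testing many
# subgroups and reporting only the significant one routinely produces spurious
# findings. All parameters below are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_simulated_studies = 2000
n_per_arm = 200
n_subgroups = 10   # e.g., splits by age band, sex, site, baseline severity...
alpha = 0.05

hits = 0
for _ in range(n_simulated_studies):
    # Treatment and control outcomes drawn from the same distribution (no effect).
    treatment = rng.normal(0, 1, n_per_arm)
    control = rng.normal(0, 1, n_per_arm)
    sub_t = rng.integers(0, n_subgroups, n_per_arm)
    sub_c = rng.integers(0, n_subgroups, n_per_arm)
    p_values = [
        stats.ttest_ind(treatment[sub_t == g], control[sub_c == g]).pvalue
        for g in range(n_subgroups)
    ]
    if min(p_values) < alpha:
        hits += 1

print(f"Studies with at least one 'significant' subgroup: {hits / n_simulated_studies:.0%}")
# Roughly 1 - (1 - 0.05)**10, about 40%, despite there being no real effect anywhere.
```

The point is not that subgroup analyses are illegitimate, but that selectively reporting them without disclosure converts ordinary sampling noise into apparent findings, which is why the survey classifies such requests as inappropriate.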


Of concern is that RECs apparently have approved these trials (Yarborough, 2021). Further, Zarin et al. (2019) remind us that no oversight mechanism assesses all five conditions necessary for an informative trial: "the study (1) hypothesis must address an important and unresolved…question, (2) must be designed to provide meaningful evidence related to this question, (3) must be demonstrably feasible, (4) must be conducted and analyzed in a scientifically valid manner, (5) must report methods and results accurately, completely and promptly." RECs do not typically assess scientific merit beyond that needed to justify risk, and there are no counterincentives for uninformative trials (Zarin et al., 2019, p. 813). Relatedly, a systematic review of instruments to evaluate RECs found an emphasis on process and structure with little attention to participant outcomes (Lynch et al., 2020). The extent to which these examples can be generalized is not known, but they seem to represent a pattern of sloppy work.

Clarity of Scientific Standards

Are the standards clear? Maybe not. Langfeldt et al. (2020) draw attention to the fact that research quality was originally defined within knowledge communities (research fields), but policy, funding, and commercial initiatives have now migrated to more general and global quality criteria. While both sets of quality criteria often coexist, this situation could partially explain why peer review is often so incongruent (see discussion in Chap. 8), leading to ethical issues regarding fairness. In addition, quality standards are renegotiated over time (Langfeldt et al., 2020). A very limited set of interviews with scientists at elite universities in the USA, UK, and Germany (and therefore not to be generalized to scientists at large) found that they often struggled to evaluate the quality of their data, to decide whether available evidence confirmed their hypothesis, whether a replication was successful, and the extent to which they could rely on peer-reviewed publications. General ideas about scientific norms did not adequately translate into these decisions. Their response was simply not to communicate about what did not agree with the norm, or to blame themselves. In the scientific community, open discussion of failed or ambiguous research projects is said to have been suppressed (Schickore & Hangel, 2019). How could these scientists reach informed and reasonable decisions? This surely cannot be the responsibility of individual scientists; rather, it is the responsibility of the scientific community to actively describe methodological standards (including their interpretation) and assure compliance with them. But, again, such a reform requires coordinated involvement of the whole system (funders, institutions, and journals) to assure that the changed incentives align with values of research integrity. Smaldino and colleagues (Smaldino & McElreath, 2016) suggest that the persistence of poor methods results not just from misunderstanding but from incentives that favor them, leading to the natural selection of bad science and a scientific literature filled with false discoveries. As noted above, statistical power has not improved despite repeated demonstrations of the necessity to do so. Selection for research


from high-output laboratories (which often yields career advancement) was found to lead to poorer methods and high false discovery rates. Replication slows but does not stop methodological deterioration; because it is not done universally and multiple times, those doing poor-quality science can avoid detection. Reversing incentives so that they support the integrity of science is necessary (Smaldino & McElreath, 2016). In theory, the examples of poor methodology recounted above are entirely preventable. A directed emphasis on methodological rigor in funding and hiring decisions, adoption of open science by the institutions of science (Smaldino et al., 2019), revision of metrics (see Chap. 8), and other initiatives would be useful. There are some examples of moves toward high quality. Biobanks are central to reproducibility and translational research. Biobank science has now progressed well beyond biospecimen management and distribution to include biospecimen characterization and analysis methods, with standards to ensure quality, robustness, and reproducibility. Still, room for improvement exists, for example, in using population metrics to avoid biased sample selection (Dolle & Bekaert, 2019). In another example of progress that still falls short of meeting standards, clinical trial registries are important to enhance enrollment of participants in trials and to reduce the possibility of bias in subsequent reporting of trial results. After a protracted battle to ensure that trials were registered, registries have proliferated in the past two decades. The World Health Organization recognizes 17 public registries and has introduced a set of minimum standards in the International Standards for Clinical Trial Registries. A recent study shows substantial variation in compliance with the recommended minimal standards (Venugopal & Saberwal, 2021). And even despite registration, redundant trials are still being registered where the effect of an intervention on a primary outcome already has a high "certainty of evidence" (Vergara-Merino et al., 2021); thus, registries are not yet adequately serving the function of avoiding waste of resources, including risks to research participants.

Standards of Evidence in Research Integrity Decisions

Here, it is easiest to focus on areas of research integrity that are regulated and thus have clear statements of purpose, scope, and level of evidence. Research misconduct regulations do describe the evidentiary standard of proof to be preponderance of the evidence, with the institution having the burden of proof for making a finding of research misconduct and the respondent bearing the same level of proof for affirmative defenses (42 CFR 93.106). Little is known about how consistently this standard is used. Federal regulations for institutional review boards do not contain a stated standard (type and amount of evidence) that IRB members should use in deciding whether a study meets the approval criteria, which are delineated. This absence potentially contributes to variations in IRB judgments, which have been well documented. Resnik (2021) suggests that IRB decision-making should be more empirically based than it currently is, with evidentiary standards, understanding that other

Resnik (2021) suggests that IRB decision-making should be more empirically based than it currently is, with evidentiary standards, understanding that other sources such as emotion and intuition will also play a role. He suggests a clear and convincing evidence standard for making approval decisions, varying when a study involves significant risks to healthy volunteers. Friesen et al. (2019) argue for accepting some inconsistency in IRB judgments on the grounds that "there is no ground truth that can provide a foundation for consistent moral decision making," and that context and multiple variables make a difference (Friesen et al., 2019). This stance does not necessarily rule out the use of evidentiary standards. Researchers presumably are already required to provide supportive evidence for their protocols. Smith and Anderson (2022) suggest that an evidential standard for IRBs could also support participant safety, autonomy, and fair subject selection. Such a change would place further accountability on IRBs, support calls for IRB accountability, and help to meet a basic ethical criterion of treating likes alike.

Measurement Instruments and Methods for Use in Research Integrity

An evidence base for research integrity policy requires tools such as measurement instruments. To what extent do they exist, and is information about their validity and reliability available? Here are some examples.

How I Think About Research (HIT-Res) measures compliance with research regulations and integrity. Its psychometric qualities were tested with researchers and trainees funded by NIH, showing excellent reliability and construct validity, particularly against measures of moral disengagement. Moral disengagement theory involves protecting one's self-identity by blaming others, minimizing the harmfulness of one's actions, and other cognitive distortions ("the pressure to get grants almost forces people to take liberties with their data"), thus supporting compliance disengagement. The instrument appears in the appendix of this publication (DuBois et al., 2016). Further work establishing its psychometric qualities remains to be accomplished.

The Perceived Publication Pressure questionnaire, revised (PPQr), measures how academic researchers experience pressure driven by publication quantity and citation record. Publication pressure is studied for its effects on research integrity; it has been linked to cutting corners and burnout and is associated with research misconduct and poor-quality research. Reliability was found to be adequate for the three scales (stress, attitude, resources); construct validity is supported by a strong relationship between burnout and emotional exhaustion. The instrument can be found in appendix material in Haven et al. (2019b) and items in subscales in Haven et al. (2019a). It was developed and studied with more than 1000 researchers in Amsterdam.

The Survey of Organizational Research Climate (SOuRCe) is designed to measure the organizational research climate in academic research settings and how it can strengthen or erode research integrity (Haven et al., 2019c).

Organizational climate is the shared meaning members obtain from the events, policies, practices, and procedures they experience and the behaviors they see being rewarded, supported, and expected. The instrument has been used in several large academic and VA settings, both as a tool for institutional self-regulation and to study how research integrity is fostered within an institution. SOuRCe yields scores on seven scales that can be compared with each other, usually at the department level: RCR resources, regulatory quality, integrity norms, integrity socialization, advisor–advisee relations, integrity inhibitors, and departmental expectations. Cumulative psychometric data from prior studies are summarized in Wells et al. (2014), showing high reliability and construct, discriminant, and predictive validity; scores are correlated with misconduct and detrimental research practices. Interventions at the unit level can address areas with low scores. A User's Manual can also be obtained.

Lab Climate for Research Ethics is a new measure that assesses perceptions of the climate at the lab level, rather than at the department or institutional level. The lab is where personnel and trainees learn standards and carry out the practice of responsible research, and they are therefore more likely to commit to its norms and behaviors. This tool is brief, has adequate internal consistency, is correlated with an existing measure of climate for research ethics, and is not correlated with social desirability. It was tested on NIH-funded postdocs (Solomon et al., 2022).

Despite the availability of SOuRCe, a scoping review found that studies of interventions to change organizational climate in academic or research settings suffer from low methodological quality. This leaves institutions that wish to meet their responsibility to provide an environment that strengthens integrity lacking evidence-supported ways to do so, perhaps creating an important disincentive for meeting this important responsibility (Vidak et al., 2021).

The Perceived IRB Violation Scale, developed to capture both perceived prevalence and perceived seriousness, shows a single underlying factor. The items may be found in Table 3 of Reisig and colleagues (2022). The purpose of these items is to shed light on researchers' views at a particular university, which could suggest needs, for example, for data storage services and maintenance of project records. Allegations of research misconduct require excellent records for resolution. And although not previously well studied, IRB infractions can be harmful to research participants (Reisig et al., 2022).

The Research Misconduct Scale for Social Science University Students was developed and tested on a convenience sample of students in Pakistan. The instrument may be found in the citation. Factor analysis confirmed one factor, and Cronbach's alpha (a measure of internal consistency) was high. The scale has significant positive correlations with academic stress and procrastination and a significant negative relationship with academic achievement (Ghayas et al., 2022).

Other measurements important to research integrity but judged to be incompletely developed include a uniform and solid theoretical approach to risk assessment by IRBs, necessary for subject protection. None of the current approaches is judged by Rudra and Lenk (2021) to be acceptable, in part because they do not address reasonable risk. A steady and systematic approach by IRBs/RECs is needed (Rudra & Lenk, 2021).
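Several of the instruments above report internal consistency using Cronbach's alpha. As a purely generic illustration (not the analysis code or data of any study cited here; the item responses below are invented), the statistic can be computed as follows:

```python
# Illustrative only: a generic Cronbach's alpha calculation of the kind used to
# report internal consistency for survey scales such as those described above.
# The item responses are invented; they do not come from any cited study.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scale responses."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses: 6 respondents answering a 4-item Likert subscale.
responses = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 5, 5, 4],
    [3, 3, 2, 3],
    [4, 4, 4, 5],
    [1, 2, 1, 2],
])
print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")
```

Values of roughly 0.7 to 0.8 or higher are conventionally read as adequate internal consistency for research use.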

Research Syntheses, their Social Production and Methods Illuminative of Ethics

Research syntheses, which sit atop classifications of evidence as being of the highest value, are largely considered from a technical view (how to statistically merge relevant studies) while their social function is largely ignored. Some meta-analyses stimulate ethically important suggestions. Berthelsen et al. (2021), in a systematic review of rheumatology drug trials, examined reporting of harms and harm severity and concluded that these trials did not provide a framework for reporting harms from the patient's perspective. This review led to the suggestion to develop uniform criteria for reporting severity level and a framework for patient-reported harms in these randomized controlled trials. Since harms are likely underreported in the published trial literature of healthcare interventions, such standards and frameworks could be adopted more widely (Berthelsen et al., 2021).

Systematic reviews and meta-analyses (statistical aggregation of all relevant prior studies) form a large body of research, but one marked by redundancy, important gaps, reviews that are quickly out of date, incomplete search strategies, risk of bias and selective reporting carried over from the primary studies, conflicts of interest, and other issues (Boutron et al., 2020). But the vision of a reformed evidence ecosystem is ethically compelling. Evidence synthesis would be a continuous process built around a clinical question, using the full range of research data (not just RCTs), available to clinicians and patients, to trialists as they plan their research, to funders as they decide where to place their resources, and to IRBs in their assessments, providing a reliable source of information independent from lobbying bodies. Rigorous standards and internationally coordinated infrastructure would be necessary to support this important endeavor (Ravaud et al., 2020).

Many relatively new review types have evolved to aid in knowledge translation and in changing professional and organizational practice. The full panoply is described in Sutton et al. (2019) and includes scoping reviews (clarifying concepts, identifying knowledge gaps), realist reviews (insights into what programs and interventions work, why, and in what contexts), and many others. Reviews can be used to describe prevalence/incidence, effectiveness, experiences, and psychometrics. Most compelling from an ethics point of view is the critical interpretive synthesis, an alternative to statistical aggregation of studies. It requires questioning the taken-for-granted, epistemological, and normative assumptions of the literature (Dixon-Woods et al., 2006). Here are examples of reviews addressing important elements of research ethics relevant to research integrity.

Policy Gaps: H3Africa is a pan-continental genomic research enterprise, requiring ethics guidelines in all nations that are as interoperable as possible. Particularly important is regulatory guidance on the collection and use of human biological specimens in research. In the absence of national guidelines, difficulties in REC review will slow full participation in genomic research by all African countries and delay potential research benefits to their populations (Barchi & Little, 2016).

A subsequent H3Africa working group developed a framework, policy documents, and guidelines to address this issue, importantly working directly with stakeholder groups (Tindana et al., 2019).

Prevalence of Scientific Misconduct: Prevalence has largely been studied with surveys asking scientists whether they have committed research misconduct or know of a colleague who has done so. These are sensitive questions, likely to yield a conservative estimate of the true prevalence of research misconduct. Meta-analysis of these surveys found that, on average, about 2% of scientists admitted to having fabricated, falsified, or modified data or results at least once; up to one third admitted to questionable research practices such as dropping data points based on a gut feeling; and 14% of colleagues were thought to have fabricated or falsified data. Because a reputation for honesty is fundamental to a scientific career, and because of the way the data were collected, it is likely that these results represent a significant underestimate (Fanelli, 2009).

Clinical Trial Design: Use of stepped wedge cluster randomized trials, commonly used to evaluate health policy and service delivery interventions, has increased rapidly. In this design, clusters of groups or individuals gradually cross over from control to intervention according to a randomized schedule, with all clusters eventually receiving the intervention, presumably increasing its ethicality. Interventions may involve individuals at different organizational levels, raising the question of from whom consent should be obtained, and it may be difficult to avoid the study intervention in a workplace or site of care. A systematic review of such trials reported up to 2014 showed that only one in three was compliant with IRB review and informed consent (or waiver) requirements (Taljaard et al., 2017). Subsequently, an extension of the CONSORT statement for reporting stepped wedge cluster randomized trials was published (Hemming et al., 2018).

Research from the Perspective of Participating Children and Adolescents: There have been significant shifts in how participation of children and adolescents in research is viewed, balancing their inherent vulnerability with the necessity of studying their needs. It is unclear what voice children have had in the development of current research guidelines, surely an ethical requirement. A systematic review found that most children preferred shared decision-making, actively supported by parents, doctors, and researchers; they are most willing to participate in research when they feel safe. Assent processes and instruments should be piloted with children before being used in a research study. A key gap in the findings of this review is whether children are able to follow through on a desire to stop participating in a research study, or to dissent from participating (Crane & Broome, 2017). This review addresses an ethically important question and provides direction for further research.

No example of a critical interpretive synthesis, a form highly useful to support research integrity, could be located.

Fixing flaws underlying the classification and retrieval of evidence could dramatically improve research syntheses of all kinds. Gough and colleagues note that systems for indexing research are not fit for purpose in that they do not ensure that research can be located reliably. Research publication has traditionally been disconnected from the subsequent utilization of that research.

Journals were historically a means for researchers to communicate with each other, rather than the means through which research evidence moves through the pipeline of subsequent research and informs policy and practice (Gough et al., 2019). Fixing these flaws should improve the accuracy and safety of research use as well as clarify evidence gaps important to patient well-being.

Also a move in the right direction is a consensus checklist for when to replicate systematic reviews. Replication is an essential part of the scientific method. Systematic reviews can be subject to errors or uncertainty arising from inaccurate data collection or analysis, and to potential bias in how data are collected, analyzed, and interpreted or in the criteria for inclusion. Replication (complete or partial) might be prioritized if it will resolve uncertainties about the magnitude of the benefit or harm of implementing a review's findings. Details of the consensus criteria for replicating meta-analyses of interventions may be found in Tugwell and colleagues; further work is ongoing to test usability with specific user groups (Tugwell et al., 2020).
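To make the idea of statistical aggregation concrete, here is a minimal sketch of pooling proportions across surveys, of the kind a misconduct-prevalence meta-analysis performs. The survey counts are invented and this is not the method or data of any study cited here; it uses logit-transformed proportions with DerSimonian-Laird random effects.

```python
# A minimal sketch (not any cited study's code or data) of pooling proportions
# across surveys, as a meta-analysis of misconduct-admission rates might do.
# Logit-transformed proportions, DerSimonian-Laird random-effects weighting.
import math

# Hypothetical surveys: (number admitting misconduct, number of respondents)
surveys = [(8, 400), (3, 250), (12, 900), (5, 310)]

def logit(p):
    return math.log(p / (1 - p))

# Effect (logit proportion) and within-study variance for each survey
effects, variances = [], []
for events, n in surveys:
    p = events / n
    effects.append(logit(p))
    variances.append(1 / events + 1 / (n - events))  # variance of logit(p)

# Fixed-effect weights and Q statistic, then DerSimonian-Laird tau^2
w = [1 / v for v in variances]
fixed = sum(wi * ei for wi, ei in zip(w, effects)) / sum(w)
q = sum(wi * (ei - fixed) ** 2 for wi, ei in zip(w, effects))
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - (len(surveys) - 1)) / c)

# Random-effects pooled estimate, back-transformed to a proportion
w_re = [1 / (v + tau2) for v in variances]
pooled_logit = sum(wi * ei for wi, ei in zip(w_re, effects)) / sum(w_re)
pooled = 1 / (1 + math.exp(-pooled_logit))
print(f"Pooled admission rate ~ {pooled:.1%} (tau^2 = {tau2:.3f})")
```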

Limitations to an Evidence-Based Approach to Research Integrity and Especially to the Current State of the Evidence Base

Ethically sensitive limitations and biases in research evidence are well recognized, although rarely well described, in evidence-based health care policy and guidance. These include exclusion of certain groups, favoring the commercial and political interests of commissioners and funders of research, privileging experts whose views are infused with unexamined values, privileging randomized controlled trials of short duration over other research methods and long-term studies, and lack of examination of those affected by these practices (Michaels, 2021). Limitations of evidence for research integrity practice and policy are far less examined but surely include publication and many other kinds of bias, as well as locked-in frames and perspectives that direct attention to certain issues, such as research misconduct by individuals, while ignoring misaligned incentives that contribute to that behavior. On balance, understanding of how to foster research integrity suffers from the significant evidence gaps outlined throughout this book.

Brown et al. (2022) helpfully characterize the present state of evidence-based policy with a state called "broad medical uncertainty." Flaws in medical research methodologies, bias in publication, and unmanaged conflicts of interest have yielded persistent and widespread uncertainty. For example, it is often not clear whether trials have been conducted but not reported. Data from trials are unavailable for scrutiny and their integrity is not known. Many trials are not sufficiently powered to reliably measure negative side effects, and those observed are often omitted from trial reports. While not all medical research is flawed, there is a perception that a sufficient amount of it may be, and it is difficult to judge when flaws are present that cast doubt on effectiveness estimates. There are some medical interventions where benefits are so large that uncertainty is unimportant, but that is not usually the case (Brown et al., 2022).
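A simple worked example, using invented rates and the standard normal-approximation sample-size formula for comparing two proportions, illustrates why trials sized for efficacy endpoints are usually far too small to detect rare harms:

```python
# Illustration (invented rates, not from any cited trial) of why typical trial
# sizes cannot reliably detect rare harms: approximate sample size per arm for
# a two-sided comparison of two proportions, standard library only.
from statistics import NormalDist

def n_per_arm(p1, p2, alpha=0.05, power=0.80):
    """Normal-approximation sample size per arm for a two-proportion test."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    p_bar = (p1 + p2) / 2
    num = (z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
           + z_b * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return num / (p1 - p2) ** 2

# Detecting a hypothetical efficacy difference (40% vs 55% response)
print(f"Efficacy: ~{n_per_arm(0.40, 0.55):.0f} participants per arm")
# Detecting a doubling of a rare harm (0.5% vs 1.0% serious adverse events)
print(f"Rare harm: ~{n_per_arm(0.005, 0.010):.0f} participants per arm")
```

Under these assumed rates, the efficacy comparison needs on the order of a few hundred participants per arm, while detecting a doubling of a 0.5% adverse event rate needs several thousand.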

In summary, there is uncertainty about the level of uncertainty, but there is also an ethical obligation to better inform those involved about the pervasiveness of "broad medical uncertainty" and to mount a corrective response. Meta-analysis of individual-level patient data rather than use of summary statistics is an example of a corrective (Brown et al., 2022).

Conclusion

Research ethics, which preceded the broader notion of research integrity, developed largely without an evidence base and has been slow to incorporate one. New disciplines have formed to support the translation and regulation of science, exposing areas of scientific practice in need of improvement. Moving to an evidence base adequate to support research integrity likely requires changes throughout the institutions of science, in incentives, metrics, and other practices. This chapter has described multiple opportunities to improve the evidence base for research integrity: standards of evidence, measurement tools, research syntheses, and, most importantly, systematic and regular monitoring of strengths and deficiencies in scientific practice together with organized efforts to manage quality.

References Aarden, E., Marelli, L., & Blasimme, A. (2021). The translational lag narrative in policy discourse in the United States and the European Union: A comparative study. Humanities & Social Sciences Communication, 8, 107. https://doi.org/10.1057/s41599-­21-­00777-­y Anklam, E., Bahl, M., Ball, R., Beger, R.  D., Cohen, J., Fitzpatrick, S., Girard, P., Halamoda-­ Kenzaoui, B., Hinton, D., Hirose, A., Hoeveler, A., Honma, M., Hugas, M., Ishida, S., Kass, G., Kojima, H., Krefting, I., Liachenko, S., Liu, Y., et al. (2022). Emerging technologies and their impact on regulatory science. Experimental Biology and Medicine, 247, 1–75. https://doi. org/10.1177/15353702211052280 Barchi, F., & Little, M. T. (2016). National ethics guidance in sub-Saharan Africa on the collection and use of human biological specimens: A systematic review. BMC Medical Ethics, 17(1), 64. https://doi.org/10.1186/s12910-­016-­0146-­9 Berthelsen, D.  B., Woodworth, T.  G., Goel, N., Ioannidis, J.  P. A., Tugwell, P., Devoe, D., Williamson, P., Terwee, C. B., Suarez-Almazor, M. E., Strand, V., Leong, A. L., Conaghan, P.  G., Boers, M., Sjea, B.  J., Books, P.  M., Simon, L.  S., Furst, D.  E., Christensen, R., & OMERACT Safety Workng Group. (2021). Harms reported by patients in rheumatology drug trials: A systematic review of randomized trials in the Cochrane library from an OMERACT working group. Seminars in Arthritis and Rheumatism, 51(3), 607–617. https:// doi.org/10.1016/j.semarthrit.2020.09.023 Boesen, K., Gotzsche, P.  C., & Ioannidis, J.  P. A. (2021). EMA and FDA psychiatric drug trial guidelines: Assessment of guideline development and trial design recommendations. Epidemiology and Psychiatric Sciences, 30, e35. https://doi.org/10.1017/S2045796021000147 Boutron, I., Crequit, P., Williams, H., Meerpohl, J., Craid, J. C., & Ravaud, P. (2020). Future of evidence ecosystem series: 1. Introduction evidence synthesis ecosystem needs dramatic change. Journal of Clinical Epidemiology, 123, 135–142. https://doi.org/10.1016/j.jclinepi.2020.01.024

Brown, R. C. J., de Barra, M., & Earp, B. D. (2022). Broad medical uncertainty and the ethical obligation for openness. Synthese, 200(2), 121. https://doi.org/10.1007/s11229-­022-­03666-­2 Crane, S., & Broome, M. E. (2017). Understanding ethical issues of research participation from the perspective of participating children and adolescents: A systematic review. Worldviews Evidence Based Nursing, 14(3), 200–209. https://doi.org/10.1111/wvn.12209 Demortain, D. (2017). Expertise, regulatory science and the evaluation of technology and risk: Introduction to the special issue. Minerva, 55, 139–159. Dendler, L., & Bol, G. (2021). Increasing engagement in regulatory science: Reflections from the field of risk assessment. Science, Technology & Human Values, 46(4), 719–754. https://doi. org/10.1177/0162243920944499 Dixon-Woods, M., Cavers, D., Agarwal, S., Annandale, E., Arthur, A., Harvey, J., Hsu, R., Katbamna, S., Olsen, R., Smith, L., Riley, R., & Sutton, A. J. (2006). Conducting a critical interpretive synthesis of the literature on access to healthcare by vulnerable groups. BMC Medical Research Methodology, 6, 35. https://doi.org/10.1186/1471-­2288-­6-­35 Dolle, L., & Bekaert, S. (2019). High-quality biobanks: Pivotal assets for reproducibility of OMICS-data in biomedical translational research. Proteomics, 19(21–22), e1800485. https:// doi.org/10.1002/pmic.201800485 DuBois, J.  M., Chibnalt, J.  T., & Gibbs, J. (2016). Compliance disengagement in research: Development and validation of a new measure. Science & Engineering Ethics, 22(4), 965–988. https://doi.org/10.1007/s11948-­015-­9681-­x Fanelli, D. (2009). How many scientists fabricate and falsify research? A systematic review and meta-analysis of survey data. PLoS One, 4(5), e5738. https://doi.org/10.1371/journal. pone.0005738 Fanelli, D. (2022). Is science in crisis? In L. J. Jussim, J. A. Krosnick, & T. Stevens Sean (Eds.), eds. Oxford University Press. Friesen, P., Nadia, A.  N. M.  Y., & Sheehan, M. (2019). Should the decisions of institutional review boards be consistent? Ethics & Human Research, 41(4), 2–14. https://doi.org/10.1002/ eahr.500022 Ghayas, S., Hassan, Z., Kayan, S., & Biasutti, M. (2022). Construction and validation of the research misconduct scale for social science university students. Frontiers in Psychology, 13, 859466. https://doi.org/10.3389/fpsyg.2022.859466 Gough, D., Thomas, J., & Oliver, S. (2019). Clarifying differences between reviews within evidence ecosystems. Systematic Reviews, 8(1), 170. https://doi.org/10.1186/s13643-­019-­1089-­2 Hardwicke, T. E., & Goodman, S. N. (2020). How often do leading biomedical journals use statistical experts to evaluate statistical methods? The results of a survey. PLoS One, 15(10), e0239598. https://doi.org/10.1371/journal.pone.0239598 Haskins, R. (2018). Evidence-based policy: The movement, the goals, the issues, the promise. Annals of the American Academy of Political and Social Science, 678, 8–37. Haslam, A., Gil, J., Crain, T., Herrera-Perez, D., Chen, E.  Y., Hilal, T., Kim, M.  W., & Prasad, V. (2021). The frequency of medical reversals in a cross-sectional analysis of high impact oncology journals, 2009-2018. BMC Cancer, 21(1), 889. https://doi.org/10.1186/ s12885-­021-­08632-­8 Haven, T. L., de Goede, J. E. E., Tijdink, J. K., & Oort, F. J. (2019a). Personally perceived publication pressure: Revising the publication pressure questionnaire (PPQ) by using work stress models. Research Integrity & Peer Review, 4, 7. 
https://doi.org/10.1186/s41073-­019-­0066-­6 Haven, T.  L., Bouter, L.  M., Smulders, Y.  M., & Tijdink, J.  K. (2019b). Perceived publication pressure in Amsterdam: Survey of all disciplinary fields and academic ranks. PLoS One, 14(6), e02117931. https://doi.org/10.1371/journal.pone.0217931 Haven, T. L., Tijdink, J. K., Martinson, B. C., & Bouter, L. M. (2019c). Perceptions of research integrity climate differ between academic ranks and disciplinary fields: Results from a survey among academic researchers in Amsterdam. PLoS One, 14(1), e0210599. Hemming, K., Taljaard, M., McKenzie, J.  E., Hopper, R., Copas, A., Thompson, J.  A., Dixon-­ Woods, M., Aldcroft, A., Doussau, A., Grayling, M., Kristunas, C., Goldstein, C. E., Campbell, M. K., Girling, A., Eldridge, S., Campbell, M. J., Lilford, R. J., Weijer, C., Forbes, A. B., &

Grimshaw, J. M. (2018). Reporting of stepped wedge cluster randomized trials: Extension of the CONSORT 2010 statement with explanation and elaboration. BMJ, 363, k1614. https://doi. org/10.1136/bmj.k1614 Herrera-Perez, D., Haslam, A., Crain, T., Gill, J., Livingston, C., Kaestner, V., Hayes, M., Morgan, D., Cifu, A. S., & Prasad, V. (2019). A comprehensive review of randomized clinical trials in three medical journals reveals 396 medical reversals. eLife, 8, e45183. https://doi.org/10.7554/ eLife.45183 Jeste, M. (2020). “Conflict of interest” or simply “interest”? Shifting values in translational medicine. In B.  Hauray, H.  Boullier, & J.  M. Gaudilliere (Eds.), Helene, Conflict of interest in medicine. Routledge. Langfeldt, L., Nedeva, M., Sorlin, S., & Ducan, A.  T. (2020). Co-existing notions of research quality: A framework to study context-specific understandings of good research. Minerva, 58, 115–137. Leichsenring, F., Steinert, C., & Ioannidis, J. P. A. (2019). Toward a paradigm shift in treatment and research of mental disorders. Psychological Medicine, 49(13), 2111–2117. https://doi. org/10.1017/S0033291719002265 Liber, A. C. (2022). Using regulatory stances to see all the commercial determinants of health. Milbank Quarterly, 100(3), 918–961. https://doi.org/10.1111/1468-­0009.12570 Lund, H., Bala, M., Blaine, C., Brunnhuber, K., & Robinson, K. A. (2021). How to improve the study design of clinical trials in internal medicine: Recent advances in the evidence-based methodology. Polish Archives of Internal Medicine, 131(9), 848–853. https://doi.org/10.20452/ pamw.16076 Lynch, H.  F., Abdirisak, M., Bogia, M., & Clapp, J. (2020). Evaluating the quality of research ethics review and oversight: A systematic analysis of quality assessment instruments. AJOB Empirical Bioethics, 11(4), 208–222. https://doi.org/10.1080/23294515.2020.1798563 Mansour, N. N., Balas, E. A., Yang, F. M., & Vernon, M. M. (2020). Prevalence and prevention of reproducibility deficiencies in life sciences research: Large-scale meta-analyses. Medical Science Monitor, 26, e922016. https://doi.org/10.12659/MSM.922016 Michaels, J. A. (2021). Potential for epistemic injustice in evidence-based healthcare policy and guidance. Journal of Medical Ethics, 47, 417–422. https://doi.org/10.1136/medethics-­2020-­106171 Newman, J. (2017). Deconstructing the debate over evidence-based policy. Critical Policy Studies, 11(2), 211–226. https://doi.org/10.1080/19460171.2016.1224724 Paul, K.  T., & Haddad, C. (2019). Beyond evidence versus truthiness: Toward a symmetrical approach to knowledge and ignorance in policy studies. Policy Sciences, 52, 299–314. https:// doi.org/10.1007/s11077-­019-­09352-­4 Ravaud, P., Crequit, P., Williams, H. C., Meerpohl, J., Craig, J. C., & Boutron, I. (2020). Future of evidence ecosystem series: 3. From an evidence synthesis ecosystem to an evidence ecosystem. Journal of Clinical Epidemiology, 123, 153–161. https://doi.org/10.1016/j. jclinepi.2020.01.027 Reisig, M. D., Flippin, M., & Holtfreter, K. (2022). Toward the development of a perceived IRB violation scale. Accountability in Research, 29(5), 309–323. https://doi.org/10.1080/0898962 1.2021.1920408 Resnik, D.  B. (2021). Standards of evidence for institutional review board decision-making. Accountability in Research, 28(7), 428–455. https://doi.org/10.1080/0898962 1.2020.1855149 Robinson, K. A., Brunnhuber, K., Cilska, D., Juhl, C. B., Christensen, R., Lund, H., & Evidence-­ Based Research Network. (2020). 
Evidence-based research series–Paper 1: What evidence-­ based research is and why is it important? Journal of Clinical Epidemiology, 129, 151–157. https://doi.org/10.1016/j.jclinepi.2020.07.020 Robinson, M. D. (2019a). The market in mind. MIT Press. Robinson, M. D. (2019b). Financializing epistemic norms in contemporary biomedical innovation. Synthese, 196, 4391–4407. https://doi.org/10.1007/s11229-­018-­1704-­0

Rudra, P., & Lenk, C. (2021). Process of risk assessment by research ethics committees: Foundations, shortcomings and open questions. Journal of Medical Ethics, 47, 343–349. https://doi.org/10.1136/medethics-­2019-­105595 Schickore, J., & Hangel, N. (2019). “It might be this, it should be that…” uncertainty and doubt in day-to-day research practice. European Journal for Philosophy of Science, 9, 31. Sievers, S., Wieschowski, S., & Strech, D. (2021). Investigator brochures for phase I/II trials lack information on the robustness of preclinical safety studies. British Journal of Clinical Pharmacology, 87(7), 2723–2731. https://doi.org/10.1111/bcp.14615 Smaldino, P. E., & McElreath, R. (2016). The natural selection of bad science. Royal Society Open Science, 3(9), 160384. https://doi.org/10.1098/rsos.160384 Smaldino, P. E., Turner, M. A., & Kallens, P. A. C. (2019). Open science and modified funding lotteries can impede the natural selection of bad science. Royal Society Open Science, 6(7), 190194. https://doi.org/10.1098/rsos.190194 Smith, E., & Anderson, E.  E. (2022). Reimagining IRB review to incorporate a clear and convincing standard of evidence. Accountability in Research, 29(1), 55–62. https://doi.org/10.108 0/08989621.2021.1880902 Solomon, E. D., English, T., Wroblewski, M., DuBois, J. M., & Antes, A. L. (2022). Assessing the climate for research ethics in labs: Development and validation of a brief measure. Accountability in Research, 29(1), 2–17. https://doi.org/10.1080/08989621.2021.1881891 Studies linking diet with health must get a whole lot better. (2022). Nature, 610(7931), 231. https:// doi.org/10.1038/d41586-­022-­03199-­1 Sutton, A., Clowes, M., Preston, L., & Booth, A. (2019). Meeting the review family: Exploring review types and associated information retrieval requirements. Health Information and Libraries Journal, 36, 202–222. https://doi.org/10.1111/hir.12276 Szucs, D., & Ioannidis, J. P. A. (2020). Sample size evolution in neuroimaging research: An evaluation of highly-cited studies (1990-2012) and of latest practices (2017-2018) in high-impact journals. NeuroImage, 221, 117164. https://doi.org/10.1016/j.neuroimage.2020.117164 Taljaard, M., Hemming, K., Shah, L., Giraudeau, B., Grimshaw, J.  M., & Weijer, C. (2017). Inadequacy of ethical conduct and reporting of stepped wedge cluster randomized trials: Results from a systematic review. Clinical Trials, 14(4), 333–341. https://doi. org/10.1177/1740774517703057 Tindana, P., Yakubu, A., Staunton, C., Matimba, A., Littler, K., Madden, E., Munung, N. S., de Vries, J., & as members of the H3Africa Consortium. (2019). Engaging research ethics committees to develop an ethics and governance framework for best practices in genomic research and biobanking in Africa: The H3Africa model. BMC Medical Ethics, 20, 69. https://doi. org/10.1186/s12910-­019-­0398-­2 Todt, O., & Lujan, J. L. (2022). Rationality in context: Regulatory science and the best scientific method. Science, Technology & Human Values, 47(5), 1086–1108. Tugwell, P., Welch, V.  A., Karunananthan, S., Maxwell, L.  J., Akl, E.  A., Avey, M.  T., Bhutta, Z.  A., Brouwers, M.  C., Clark, J.  P., Cook, S., Cuervo, L.  G., Curran, J., Ghogomu, E.  T., Graham, I. G., Grimshaw, J. M., Hutton, B., Ioannidis, J. P. A., Jordan, Z., Jull, J. E., et al. (2020). When to replicate systematic reviews of interventions: Consensus checklist. British Medical Journal, 370, m2864. https://doi.org/10.1136/bmj.m2864 Venugopal, N., & Saberwal, G. (2021). 
A comparative analysis of important public clinical trial registries, and a proposal for an interim ideal one. PLoS One, 16(5), e0251191. https://doi. org/10.1371/journal.pone.0251191 Vergara-Merino, L., Verdejo, C., Franco, J. V. A., Liquitay, C. E., Urrutia, G., Klabunde, R., Perez, P., Sanchez, L., & Madrid, E. (2021). Registered trials address questions already answered with high-certainty evidence: A sample of current redundant research. Journal of Clinical Epidemiology, 134, 89–94. https://doi.org/10.1016/j.jclinepi.2021.01.024 Vidak, M., Barac, L., Tokalic, R., Bujan, I., & Marusic, A. (2021). Interventions for organizational climate and culture in academia: A scoping review. Science & Engineering Ethics, 27(2), 24. https://doi.org/10.1007/s11948-­021-­00298-­6

Vinkers, C.  H., Lamberink, H.  J., Tijdink, J.  K., Heus, P., Bouter, L., Glasziou, P., Moher, D., Damen, J. A., Hooft, L., & Otte, W. M. (2021). The methodological quality of 176,620 randomized controlled trials published between 1966 and 2018 reveals a positive trend but also an urgent need for improvement. PLoS Biology, 19(4), e3001162. https://doi.org/10.1371/journal. pbio.3001162 Wang, M. Q., Fan, A. Y., & Katz, R. V. (2019a). Researcher requests for inappropriate analysis and reporting: A US survey of consulting biostatisticians. Annals of Internal Medicine, 169(8), 554–558. https://doi.org/10.7326/M18-­1230 Wang, M. Q., Fan, A. Y., & Katz, R. V. (2019b). Bioethical issues in biostatistical consulting study: Additional findings and concerns. JDR Clinical & Translational Research, 4(3), 271–275. https://doi.org/10.1177/2380084419837294 Wells, J.  A., Thrush, C.  R., Martinson, B.  C., May, T.  A., Stickler, M., Callahan, E.  C., & Klomparens, K. L. (2014). Survey of organizational research climates in three research intensive, doctoral granting universities. Journal of Empirical Research on Human Research Ethics, 9(5), 72–88. https://doi.org/10.1177/1556264614552798 Yarborough, M. (2021). Do we really know how many clinical trials are conducted ethically? Why research ethics committee review practices need to be strengthened and initial steps we could take to strengthen them. Journal of Medical Ethics, 47(8), 562–579. https://doi.org/10.1136/ medethics-­2019-­106014 Zarin, D. A., Goodman, S. N., & Kimmelman, J. (2019). Harms from uninformative clinical trials. JAMA, 322(9), 813–814. https://doi.org/10.1001/jama.2019.9892

Chapter 4

Responsible Conduct of Research (RCR) Instruction Supporting Research Integrity

A recent editorial in Nature sounds the call to research integrity, indicating that such practice requires "creating systems that boost the quality, relevance and reliability of all research" through sustained improvements in day-to-day practice (Integrity, 2019, p. 5). This hunger for improved quality of research will, in part, require a new vision of instruction for the responsible conduct of research (RCR). Here, I examine bodies of research and initiatives supporting a new RCR and raise basic questions central to assuring that it is effective. I argue that it should be clear of purpose, prepare students to address ethical issues in science as it is currently practiced, utilize the best available evidence, and require commitments from the institutions in which science is practiced. Finally, I review developments from the field of higher education research, providing a new evidence base and a few measurement instruments, largely outside the biomedical sciences but likely applicable to them. Unanswered questions are located throughout the text.

While the end goal of research integrity (RI) does not yet have a systematized and consensual definition, it largely embodies values such as honesty, accountability, and reliability, infused in both scientists and institutions as virtues and duties which must be stimulated to develop (do Ceu Patrao Neves, 2018). RCR training is recognized as basic to research integrity, although in its current state it is incomplete. Presented as a necessary-to-understand factual base, sometimes accompanied by skill development, RCR education has largely been apportioned to the initial training of students, in a classroom or online, accompanied by the full recognition that mentorship and conditions in the lab are where students really learn how research should be practiced responsibly.

Lines of Investigation Evaluating RCR Training

Limited empirical work testing real-world approaches to research integrity through RCR training bumps up against frustration that the field has not moved forward since such training began to be required for new trainees several decades ago. While effective mentoring is a necessary component of learning to practice with integrity, it is both understudied and apparently not consistently monitored or required.

Two important examples of research are directly instructive about how to better link robust RCR training to actual research behaviors. Plemmons and collaborators (2020) report a randomized trial of a lab-embedded intervention addressing authorship and data management, specifically focused on reason giving (justification) for decisions and the interpersonal communication necessary for ethical practice. Lab members perceived improvement in the quality of research ethics discourse compared with control laboratories (Plemmons et al., 2020). While partial in terms of RCR content covered, needing a stronger outcome measure, and leaving questions about the persistence of behavior change (past the 4.5 months measured), this work provides a welcome paradigm change. A second example, reviewed in more detail in Chap. 3, is the Survey of Organizational Research Climate (SOuRCe) and the evidence of its validity in various settings. Defined as "the shared meaning organizational members attach to the events, policies, practices and procedures they experience and the behaviors they see being rewarded, supported, and expected," this scale includes a subscale on RCR resources, perception of effective educational opportunities about RCR, and leaders who support them (Haven et al., 2019). These two studies support the importance of the lived environment.

A measurement instrument at an early stage of development takes a diagnostic approach, testing first students' judgment of the ethical acceptability of RCR scenarios and, in a second step, students' ability to justify their judgments. The Revised RCR Reasoning Test has been developed to support these diagnostic steps. This instrument was administered to a limited number of Taiwanese and American students, finding that many lacked the knowledge to support their own judgments. While further psychometric work on the RCR Reasoning Test is ongoing, a diagnostic approach supports customization of RCR instruction to target inaccurate or incomplete beliefs, including across cultures (Pan, 2021).

Against this innovative work, the broader literature expresses frustration that basic RCR training assumptions are not yet clarified, provides insights into how RCR has been practiced by exemplar mentors, and offers distressing evidence about how it can fail. Kalichman and colleagues (2018) lament that after 25 years of RCR training requirements in the USA, agreement on its desired impact does not yet exist. Surely, without a full understanding of what happens in the research environment, brief, largely didactic courses could not be expected to decrease research misconduct or result in more reliable, reproducible research.

Yet, NIH RCR training policy notes that in spite of years of such training, the incidence of reported cases of research misconduct has been increasing. Goals such as improving the ability to make ethical and legal choices in the face of conflicts involving scientific research and developing an appreciation for the range of accepted scientific practices for conducting research (NIH) are supportive of research integrity, although not necessarily effective in preventing research misconduct.

The research on the effectiveness of RCR training that does exist addresses much more limited questions. It shows, not surprisingly, that courses involving active learning, including case analysis, are more effective (Todd et al., 2017). Courses targeting more specific fields demonstrated the largest effects (Watts et al., 2017), likely because they provide context and subject matter that can be directly applied.

Mentoring during the conduct of research is significantly understudied, which is astonishing since it would be expected to be key to the responsible conduct of research. In two studies of exemplar mentors, Antes et al. (2019a, 2019b) found behaviors such as holding regular meetings where emerging concerns could be addressed and data and findings reviewed, providing supervision and guidance, and deliberately cultivating a positive team environment with shared ownership and appreciation. Team members could bring up concerns or questions at any time. Compliance protocols were designed as a team, with a point person regularly checking that they were being followed and examining informed consent forms. Data and findings were checked transparently (Antes et al., 2019a, 2019b). The quality of science and level of research integrity produced by such a pattern was not noted but should be measured, including in more hierarchical mentoring relationships. Nakamura suggests that scientists so mentored absorb their mentors' ethical commitments and learn how, for example, honesty is enacted in research settings (Nakamura & Condren, 2018; Nakamura, 2020). Evidence of the impact of these practices on end goals could not be located.

In contrast to these descriptions of exemplary behavior, a study of US Office of Research Integrity (ORI) cases in which trainees had been found to have committed research misconduct (RM) found that three-fourths of their mentors had not reviewed source data and two-thirds had not set standards, leading to the question of whether the mentors contributed to trainee research misconduct. There do not appear to be agreed-upon standards in the research community for these practices (Wright et al., 2008). Likewise, Iwasaki (2020) describes a deluge of students and postdocs indicating that they were suffering from toxic principal investigators and finding no guidance from their institutions. Explicit evaluation of PIs' mentorship should be required, with remediation or other consequences if poor mentor behavior is validated (Iwasaki, 2020).

In general, this body of work provides only very limited insights into how RCR training should be structured and is particularly limited on its impact on practicing science with integrity.

Questions: What is the expected end goal of RCR instruction? Is the quality of mentoring controlled within a rigorous system of quality governance for research?

Lines of Basic Research with Strong Implications for Robust RCR Training

Specialized literature on moral behavior and decision-making, relevant to but not necessarily specifically addressing RCR, was reviewed broadly. It is most available in the cognitive sciences and social psychology but is also emerging in sociology, social theory, and economics. Findings from these disciplines suggest factors important in RCR instruction and mentoring.

It is common for individuals to fail to see the unethical meaning and consequences of their behavior, given a near-universal desire to see oneself as moral. Especially for graduate students, it is important to understand how to follow up when someone has behaved immorally (breached RCR). Potentially conflicting research results indicate that after such an event: (A) individuals may increase moral behavior in an effort to retain their humanity, or (B) feelings of decreased humanity can lead to downward spirals of immorality. After a failure to avoid the temptation to behave dishonestly, some may feel they lack the agency and experience to attain subsequent ethicality, or may excuse themselves of responsibility (Kouchaki et al., 2018), especially if standards are vague and not understandable to them. Real or perceived negative status change can precipitate cheating (Pettit et al., 2016).

Also relevant are decision biases common in ethical problems, which are themselves ill-defined and impactful. Common biases (simplifications or distortions in judgments and reasoning) relevant to ethical problems include simplification bias (deciding before considering all evidence), verification bias (such as agreeing with the group to protect harmony), and regulation bias (such as blaming the situation or the victim rather than oneself). Such distortions can be measured by the Basic Attitudes Scale, which assesses an individual's propensity to express these biases that can inhibit ethical decision-making (Watts et al., 2020). RCR is always a developmental process, and ethical lapses should be reviewed and interpreted by a competent mentor, who can help sort out the incident and the student's interpretation.

RCR training is directed at individuals, with the aim of providing knowledge and the will to act responsibly. But individuals are strongly affected by group membership, with a tendency to conform to group norms in order to avoid exclusion. The specific behavior that is seen as moral can shift, depending on the social context (Ellemers et al., 2019). Similar dynamics can affect an individual's ability or interest in managing evidence of research misconduct. Those highly identified with a group are predicted to be less likely to report such an allegation (Anvari et al., 2019). Relevant groups are specialty communities, labs, and/or research groups. Many studies support the notion that being tied to others engaged in unethical behavior increases the propensity to do so oneself; from these others one can learn how to cheat and the benefits and risks of doing so (Palmer & Moore, 2016, p. 211). Such behavior can spread through contagion and social learning and become concentrated because uninvolved honest peers leave.

Abbott (2019) reminds us that morality, like all social behavior, is heavily determined by social forces rather than by individual will.

Even virtues, thought to be stable traits, are developed and practiced in social settings and depend to some extent on suitable social environments over time.

The basic research reviewed above notes several factors important for RCR education. Overwhelming evidence shows that errors in ethical behavior are inevitable and that even honest people are situationally malleable. Individuals will also encounter teams, groups, and situations that are morally ambiguous, as well as the possibility that scientific communities and institutions may be unresponsive to misbehavior and/or poor scientific practices.

Responsible Conduct of Research Training and Quality of Science

Quality deficits described in some areas of science, sometimes against standards set decades ago, undermine the responsible conduct of research. An appreciation for the range of accepted scientific practices, as indicated in NIH guidance, is insufficient because it does not help students deal with the quality deficits they will likely encounter. Here are some examples of the kinds of deficits being documented:

– "It has become clear in the past decade, that many if not most preclinical study findings are not reproducible" (Bespalov et al., 2020).
– It is alleged that academic research institutions and laboratories…are largely operating without defined standards (Bongiovanni et al., 2020).
– Currently published neurology research has been found not to consistently provide the information needed for reproducibility, including access to materials, raw data, and analysis scripts; much is not preregistered; and a third lacks conflict of interest statements (Raub et al., 2020).
– Fields and disciplines may take on the task of defining and upholding necessary standards, or may fail to do so and tolerate detrimental research practices (NASEM, 2017).
– No metric exists for scientific quality (Begley, 2020).

Quality governance requires institutions to monitor research practices and to detect and deal with signals of drift; quality expectations in the practice of science are made clear and enforced. But some suggest that in the current practice of science, incentives, evaluation systems, and rewards are so pervasively distorted and unfair, and so impossible for individuals to reform, that rules and norms are purposely broken as a form of civil disobedience (Penders & Shaw, 2020). Others have suggested that the incentive infrastructure in science has become so distorted that it is now harmful to the research endeavor (Munafo et al., 2020). Tourish and Craig (2020) cite a number of studies reporting trainees, many of them biomedical, witnessing research misconduct and deceptive research practices.

Question: How can RCR training, which advocates the ideal, prepare a student to deal with poor quality scientific practice and/or governance, should they encounter it?

Strengthened Approaches to RCR Training

A number of approaches can be taken to refresh RCR training:

1) Consensus on RCR Training Goal

Two myths persist: (1) that RCR training can be used to contain RM and (2) that institutions should have discretion (beyond guidelines for federally sponsored trainees) on mandatory RCR education. Likely, the first confusion occurred because RCR training was historically introduced at a time when policy makers were looking for measures to contain research misconduct, although the hypothesized effect has never been documented. An Office of Research Integrity (ORI) proposed regulation that would have made RCR education mandatory for all research staff was vigorously challenged by universities and withdrawn (Steneck & Bulger, 2007).

Two recent cases illustrate continuing confusion and concern about outcomes from RCR training. As a response to two research scandals, Duke University required all research staff to complete RCR training, including discussion of one of the scandals that occurred on its campus. Although other problematic factors such as gaps in institutional oversight were acknowledged (Simon et al., 2019), the implicit notion that RCR education would contain future RM scandals is questionable. In an analysis of the He Jiankui case of germline gene editing, Yarborough (2019) suggests that He's ethical education should have assured an ability to recognize and navigate the moral values involved in his research, the many ways they can be in conflict, and the caution this should induce.

Question: Who has the responsibility to produce and examine evidence of effectiveness of current RCR training and to institute any necessary reforms?

Question: Shouldn't universities, and especially those experiencing research ethics scandals, be expected to address the full range of root causes, including the documented quality of the RCR education they supply and institutional issues, and to measure the effectiveness of each portion of the intervention to correct shortcomings?

2) Mentoring and Peer Support

Mentoring can be studied as a form of adult development, constructing psychosocial resources (agency) in mentees and assuring instructional contexts supportive of learning. Nakamura (2020) notes that in the sciences, students are embedded in a community of practice and immersed in a laboratory for long periods of time.

Studies of lineage mentors in such contexts show strengths in four areas: science ethics (honesty, integrity) and how these values are lived, interpersonal equality and fairness, balance between intellectual freedom and guidance, and a facilitative laboratory structured as a learning environment. The contribution of good mentoring to the well-being of science and to those using the products of science must be recognized (Nakamura, 2020).

A small study found that peer pressure serves as an enabler to carry out research with integrity, providing peer counseling in case of a breach (Huybers et al., 2020). A more generalizable approach is to assure that work teams include ethical champions. At the earliest stage of an issue discussion, champions frame the issue in ethical terms (decrease harm, support fairness, honesty), which increases team awareness and decreases moral disengagement, resulting in more ethical decisions (Chen et al., 2020). Established in work teams, ethical champions become an ongoing source of education in ethics through peer support. While the overall quality of mentoring and peer pressure is unknown, both can serve to efficiently teach motivations, rationalizations, and behaviors that can either support or undermine research integrity.

Question: Universities have incentives to train large numbers of graduate students; can good mentorship be incentivized and assured in such conditions?

3) Learning RCR in Context

Contextual RCR training specific to a discipline is thought to afford much more direct application, especially if accompanied by well-known researchers in that field sharing their personal experiences in handling ethical issues. Such issues might include reliable data reporting, conducting double-blind experiments, and managing conflicts of interest. Novices will also need personalized support and adaptive guidance. While collaborative, case-based, contextualized learning specific to a discipline would seem to be a useful model (Barak & Green, 2020), examination of RCR training plans for very high research activity universities found little evidence of such a model, and fewer than half had incorporated at least half of best RCR training practices (Phillips et al., 2018).

Question: For purposes of accreditation, universities are required to show that instructional outcomes are met. Shouldn't such evidence be required for application of RCR skills in general and in a major area of study?

4) Teaching Tools for Self-Monitoring and System-Monitoring

It would be useful in RCR training to teach the use of tools to detect deviation from responsible science. Such an approach would not only bolster scientific self-governance but also improve student skills. Frequently, students are unclear about what actions constitute a lack of responsible conduct of research. For example, Eaton and others used text-matching software to examine student writing samples, then provided an educational intervention to teach them how to paraphrase and cite sources, thus avoiding a judgment of plagiarism (Eaton et al., 2020). After such education, students showed better text-matching scores, suggesting learning of skills in support of research integrity. And for those with statistical skills, screening methods to detect academic fraud can be used (Horton et al., 2020). Methods to assess data integrity in RCTs may be found in Wentao et al. (2020). A toy sketch of the text-matching idea follows.
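Eaton and colleagues used text-matching software; the sketch below, using only Python's standard library with invented texts and an arbitrary threshold, illustrates the underlying idea of flagging high overlap as a prompt for an educational conversation rather than a misconduct judgment. It is not the tool used in any cited study.

```python
# Toy illustration of the idea behind text-matching tools (not the software
# used by Eaton et al.): flag a student draft that closely matches a source.
# Texts and the 0.8 threshold are invented for demonstration only.
from difflib import SequenceMatcher

source = ("Research integrity requires honesty in proposing, performing, "
          "and reporting research.")
draft = ("Research integrity demands honesty in proposing, performing, "
         "and reporting research, as many codes state.")

def similarity(a: str, b: str) -> float:
    """Ratio of matching characters between two strings (0.0 to 1.0)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

score = similarity(source, draft)
if score > 0.8:  # arbitrary threshold for this sketch
    print(f"High overlap ({score:.2f}): discuss paraphrasing and citation")
else:
    print(f"Overlap {score:.2f}: below the flagging threshold")
```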

In addition to the Survey of Organizational Research Climate cited above, other measures can be used by institutions to monitor and, if necessary, correct perverse incentives. One example is the Publication Pressure Questionnaire, which relates strongly to burnout and can alert officials to the need for resources to support healthy publication incentives (Haven et al., 2019). So also, should RCR training make explicit the responsible performance expected from other institutions, such as journals, which play a major role in the production and dissemination of scientific knowledge?

Question: Shouldn't a learning approach to research integrity always be paired with a compliance approach in support of RCR? Shouldn't the system of scientific institutions also be held to justifiable standards? Shouldn't each allegation of RM require the institutions involved to examine how their research integrity systems may have contributed to the misconduct and can be improved?

5) Blind Spots in Research Ethics

Research ethics is not a comprehensive, coherent system. First, IRB oversight is largely limited to the period before a study is implemented. Second, there is no oversight body at the level of a research program or a complete field of research (van Delden, 2020). Third, conflict of interest is especially poorly regulated. Fourth, US research regulations that impinge on human subjects' protection have grown over time to meet concerns du jour, establishing multiple oversight bodies without resolving conflicts, assuring communication among them, or harmonizing them (Friesen et al., 2019), creating confusion. Fifth, prevention of harms is very important in research subjects protection policy but is not even considered in research misconduct policy.

A particularly opaque blind spot is the possibility of moral suffering, defined as anguish in response to moral adversity, harms, wrongs or failures, or unrelieved moral stress. While most studies of this phenomenon are limited to health care providers or the military, it is quite conceivable that researchers could experience it when they are unable to fulfill their own standards of research integrity, perhaps exacerbated by a toxic or unsupportive organizational culture or by incentives perverse to integrity (Braxton et al., 2021).

Question: What entity carries responsibility for the coherence of research ethics policy? Could the Belmont principles (perhaps expanded) serve as the core for all research regulation? Who is responsible for monitoring harms such as moral suffering to students and scientists who are thwarted in their high moral standards by perverse incentives embedded in the institutions for producing and disseminating research?

6) Internationals and RCR Training

There is modest evidence that researchers born internationally, even if they have completed scientific training in the USA, perceive research ethics differently from those born in the USA and are over-represented in US cases of research misconduct.

RCR education should be very explicit about the content and consequences of rules (Antes et al., 2018) and should attend closely to cultural discrepancies. Given the prevalence of international research teams, it is likely that many could benefit from such explicit training, although the most efficacious approach is still to be determined. Justice for these researchers and the teams in which they function requires a solid understanding of how to bridge cultural differences, especially important because of the large numbers of such individuals staffing US biomedical research labs, including those at the NIH (Diaz-Briquets & Cheney, 2017). Question: What is sufficient support for international students/scientists to practice research with integrity in an adopted country?

7) Institutional Responsibility in Transforming Research

Research institutions must actively teach, monitor, and refresh the quality of scientific practice as a necessary element of continued RCR training. Strech et al. (2020) describe the QUEST Center for Transforming Biomedical Research at the Berlin Institute of Health. The program offers training and tools to faculty at all levels on topics such as study design, methods to decrease bias, and how to publish "nonstandard" results. It is too early for a meaningful assessment of this effort (Strech et al., 2020). While one might expect that scientists would have learned such skills in their basic training, others have documented this need. Koroshetz et al. (2020) found that, among institutions with NINDS training grants, only 5 of 37 reported enhanced training in the fundamental principles of rigorous research necessary for reliable and robust science. The QUEST Center model also provides an opportunity to check whether an assumption from an earlier time is true—that standards for responsible practice are passed to new researchers by current research training processes (Steneck & Bulger, 2007). Initiatives such as the QUEST Center may be common but not described in the literature. Such examples would document how institutions and their products are held to standards of competence. Since US federal policy on research misconduct (and other such codes) does not require such competence (Desmond, 2020), it is important for institutions to do so. Question: Where in the course of science training and practice are methods and ethics competence tested, assured, and, if necessary, remediated?

8) Regulators Regulating?

Policies meant to support research integrity should be enforced as the legislative body intended, in order to send the message that RCR is expected to be practiced. For example, compliance with legal requirements to report clinical trial results in ClinicalTrials.gov is poor and is not enforced with the monetary penalties allowed by law (Zarin & Califf, 2021). The problem of bias from nonpublication of clinical trials has been understood since the 1980s. Such bias leads to a distorted evidence base, undermining clinical practice and research, hence the move to require a comprehensive ClinicalTrials.gov repository, which over the past 20 years has added requirements addressing documented deficiencies (Zarin & Califf, 2021). If regulators won't
enforce, open public audit by a sponsor is a desirable option (DeVito et al., 2020). Two public purposes should be served: public access to information that was paid for by public funds and is essential for those seeking trials, and support for the scientific community in making full use of prior work. Selective non-enforcement can occur because of the costs of enforcement (personnel, relationships), but it also signals a competitive advantage to those who choose not to comply and experience no repercussions. Question: Does lack of enforcement by regulators display a conflict of interest on their part, and/or passivity on the part of regulatory institutions dominated by scientists, which thwarts the goals of public policy?
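As one concrete illustration of what an open public audit of results reporting might look like, in the spirit of DeVito et al. (2020), the sketch below flags trials whose results appear overdue under the general expectation that results be posted within a year of primary completion; the record structure, field names, and example entries are assumptions made for this example, not the format of any actual registry export.

from datetime import date, timedelta

# Results are generally expected within 12 months of the primary completion date.
REPORTING_WINDOW = timedelta(days=365)

# Illustrative records only; in practice these would come from a registry export.
trials = [
    {"id": "trial-A", "primary_completion": date(2022, 3, 1), "results_posted": False},
    {"id": "trial-B", "primary_completion": date(2023, 6, 15), "results_posted": True},
    {"id": "trial-C", "primary_completion": date(2021, 11, 30), "results_posted": False},
]

def overdue(trial, today=None):
    # A trial is overdue if no results are posted and the reporting window has passed.
    today = today or date.today()
    return (not trial["results_posted"]) and today > trial["primary_completion"] + REPORTING_WINDOW

flagged = [t["id"] for t in trials if overdue(t)]
print(f"{len(flagged)} of {len(trials)} trials appear overdue: {flagged}")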

Recommendations

From the view of optimizing research integrity, reframing RCR training is useful.

A. RCR training should refer to the entire educational experience, say in a degree program, in which students learn the factual basis for research regulation, skills to practice good science and detect deviations from it, ethical analysis and judgment skills, and how to practice ethical science in a learning community of practice.

B. Educational institutions have a responsibility to assure development of such skills, to monitor that development, and to take corrective action for the individual and the environment when necessary.

C. The system of scientific institutions (publishers, funders, regulators) should be called out when their practices create incentives perverse to the ethical practice of science.

D. The myth that a disjointed system of education and practice, inevitably populated by individuals of goodwill, can assure graduates who responsibly practice research needs to be replaced by an evidence-based, constantly introspective system that assures the skill set (both ethical and scientific) and the environment that supports its development.

E. The field of RCR is evolving. It should now be expected that the research-performing institution is fully responsible for research integrity, including preparation to practice science with integrity. Research integrity education should be continuous for all, at least in part with discipline-specific content and reflecting work situations, fairness in supervision, and access to people who can provide face-to-face advice about doubts that arise in research (Labib et al., 2022). These conditions suggest a broader move from punitive to educational approaches aimed at preventing research integrity violations. NIH has recently provided updated guidance for RCR education to include conflict of commitment as well as conflict of interest, provision of psychologically and socially safe research environments, and a significant focus on data acquisition and analysis, including use of electronic laboratory notebooks (NIH, 2022).

Relevance of Higher Education Research to Traditional RCR


Viewing RCR education in the broader context of socialization and skill development in graduate education in universities provides further insight and empirical research. One segment of this work clusters around mentoring and socialization into the scientific role. The traditional model in doctoral education is one of socialization—the process of internalizing the expectations, standards, and norms of a given discipline, including learning relevant skills (Blaney et al., 2020, 2022). An alternative is a cognitive apprenticeship model, aimed at developing the skills to be learned and the conditions necessary for the advanced problem solving common in research. Steps in this model may be found in Table 1 of Minshew and colleagues (2021) and include assuring domain knowledge, teaching strategies to reach a solution, and providing scaffolding in which a professor supports students in performing a task, among others. The cognitive apprenticeship model provides structures by which to assure that students learn the content and approaches needed to be successful in research. It is thought to be an improvement on "socialization," an amorphous term that is not specific about learning and thus evades accountability (Minshew et al., 2021).

A related meta-analysis of RCR scientific publications from 1990 to 2020, specifically oriented to effective training strategies for different learning outcomes, found that experiential learning approaches, in which learners were emotionally involved in thinking about how to deal with problems, were most effective. Primarily intellectual deliberation about ethical problems, often considered the gold standard of ethics education, was significantly less effective (Katarov et al., 2022).

Current RCR ignores the actions of scientists who are ignorant, negligent, or careless (Anderson, 2021), suggesting that RCR training should incorporate tests of science knowledge and competency. This blind spot is noted elsewhere in this book—the assumption that scientists attain and retain competence in the knowledge and skills relevant to their fields of investigation, without that assumption ever being tested. Other professions require retesting, more or less adequate, of their knowledge and skill base through licensing and relicensing exams; there is no reason to assume that scientists would be any different.

A study of doctoral students documented significant negative experiences accompanied by a high rate of attrition, problems with student well-being, sometimes exploitation, and sometimes poor mentorship used to "weed out" those thought unlikely to be successful. This study noted little accountability in mentoring, the notion that interfering with mentoring is a violation of academic freedom, and a sense that the current system is infallible (Tuma et al., 2021).

Three measurement instruments for mentoring and for research self-efficacy have been well developed and studied psychometrically. The Mentor Role Instrument consists of nine items; the Clinical Research Appraisal Instrument consists of 19 items testing student self-efficacy for tasks essential to quality research. In their newly revised and shortened forms, these instruments have been used to
evaluate research training programs, particularly diversity-focused educational and research training programs at academic institutions (Jeffe et al., 2017). A third instrument, the Mentoring Competency Assessment (MCA-21), which evaluates the skills of research mentors, has emerged from the clinical and translational science community to evaluate mentoring programs, especially in those centers. A second round of testing (sample of 1626) provides further evidence of validity and resulted in 21 items clustered around: aligning expectations, promoting professional development, maintaining effective communication, assessing understanding, mentee self-efficacy, addressing diversity, fostering independence, and navigating mentoring networks (Hyun et al., 2022).

Continued research and analysis published in higher education journals but relevant to RCR in the biomedical context is a positive development. This additional body of scholarship should be helpful in a biomedical context in which research mentorship has been characterized as largely an ad hoc activity delegated to graduate training programs and the PIs of individual research groups, with little oversight or accountability (Montgomery et al., 2022).

Conclusion

RCR training is incomplete, not evidence-based, and somewhat deceptive. It assumes that the research workplace is producing competent science and is well governed, that regulations supporting research integrity are enforced, and that the lessons taught in RCR education can be applied in a fair and coherent system. This may or may not be the case. Like other areas of research regulation, RCR is required of individuals but not of the institutions in which research is carried out or of those responsible for assuring proper archiving of results (journals). Allocating responsibility for RCR largely to individuals is unfair because individuals cannot control the incentives and conditions of practice. Whether current RCR training is fit for purpose is difficult to ascertain if the purpose is not clear. There are, however, a number of avenues for improvement. Findings from the basic sciences should be applied and tested. Most improvement will require a network of guidelines, regulations, and standards that are not only clearly promulgated but also enforced; there is little evidence that either is adequate today. Refreshed RCR training should provide not only skills for ethically navigating science as it is practiced but also contribute to reforms now underway. Because RCR training was established during a time of resistance by universities to what they believed was a federal incursion into their affairs, it has remained "highly decentralized, disorganized and is without foundational theories or paradigms," viewed as an unfunded mandate (Tamot et al., 2013). It has too many consequences to be allowed to remain a remnant of a political altercation and should be reformed for the ethical purposes it can fulfill.

References Abbott, A. (2019). Living one’s theories: Moral consistency in the life of Emile Durkheim. The Sociological Review, 37(1), 1–34. https://doi.org/10.1177/0735275119830444 Anderson, H. (2021). Philosophy of scientific malpractice. SATS, 22(2), 135–148. Antes, A. L., Kuykendall, A., & DuBois, J. M. (2019a). Leading for research excellence and integrity: A qualitative investigation of the relationship-building practices of exemplary principal investigators. Accountability in Research, 26(3), 198–226. https://doi.org/10.1080/0898962 1.2019.1611429 Antes, A. L., Kuykendall, A., & DuBois, J. M. (2019b). The lab management practices of “research exemplars” that foster research rigor and regulatory compliance: A qualitative study of successful principal investigators. PLoS One, 14(4), e0214595. https://doi.org/10.1371/journal. pone.0214595 Antes, A. L., English, T., Baldwin, K. A., & DuBois, J. N. (2018). The role of culture: Acculturation in researchers’ perceptions of rules in science. Science & Engineering Ethics, 24(2), 361–391. https://doi.org/10.1007/s11948-­017-­9876-­4 Anvari, F., Wenzel, M., Woodyatt, L., & Haslam, S. A. (2019). The social psychology of whistleblowing: An integrated model. Organizational Psychology Review, 9(1), 41–67. https://doi. org/10.1177/2041386619849085 Barak, M., & Green, G. (2020). Novice researchers’ views about online ethics education and the instructional design components that may favor ethical practice. Science & Engineering Ethics, 26, 1403–1421. https://doi.org/10.1007/s11948-­019-­00169-­1 Begley, C. G. (2020 July 13). Better (publishing) background checks; a way toward better integrity in science, Retraction Watch. Bespalov, A., Michel, M.  C., & Steckler, T. (2020). Preface. In A.  Bespalov, M.  C. Michel, & T.  Steckler (Eds.), Good research practice in non-clinical pharmacology and biomedicine. Springer. Blaney, J. M., Kang, J., Wofford, A. M., & Feldon, D. F. (2020). Mentoring relationships between doctoral students and postdocs in the lab sciences. Studies in Graduate and Postdoctoral Education, 11(3), 263–279. https://doi.org/10.1108/SGPE-­08-­2019-­0071 Blaney, J. M., Wofford, A. M., Jeong, S., Kang, J., & Feldon, D. F. (2022). Autonomy and privilege in doctoral education: An analysis of STEM students’ academic and professional trajectories. The Journal of Higher Education, 93(7), 1037–1063. https://doi.org/10.1080/0022154 6.2022.2082761 Bongiovanni, S., Purdue, R., Kornienko, O., & Bernard, R. (2020). Quality in non-GxP research environment. In A. Bespalov, M. C. Michel, & T. Steckler (Eds.), Good research practice in non-clinical pharmacology and biomedicine. Springer. Braxton, J. N., Busse, E. M., & Rushton, C. H. (2021). Mapping the terrain of moral suffering. Perspectives in Biology and Medicine, 64(2), 235–245. https://doi.org/10.1353/pbm.2021.0020 Chen, A., Treveno, L.  K., & Humphrey, S. (2020). Ethical champions, emotions, framing and team ethical decision making. Journal of Applied Psychology, 105(3), 245–273. https://doi. org/10.1037/apl0000437 do Ceu Patrao Neves, M. (2018). On (scientific) integrity: Conceptual clarification. Medicine, Health Care and Philosophy, 21(2), 181–187. https://doi.org/10.1007/s11019-­017-­9796-­8 van Delden, J. (2020). The future of research ethics, pp  468-474. In U.  Schmidt, A.  Freuer, & C. Sprumont (Eds.), Ethical research. Oxford University Press. Desmond, H. (2020). Professionalism in science: Competence, autonomy and service. Science & Engineering Ethics, 26(3), 1287–1313. 
https://doi.org/10.1007/s11948-­019-­00143-­x DeVito, N., Bacon, S., & Goldacre, B. (2020). Compliance with legal requirements to report clinical trial results on ClinicalTrials.gov: A cohort study. Lancet, 395, 361–369. https://doi. org/10.1016/S0140-­6736(19)33220-­9 Diaz-Briquets, S., & Cheney, C. C. (2017). Biomedical globalization. Routledge.

Eaton, S. E., Crossman, K., Behjat, L., Yates, R. M., Fear, E., & Trifkovic, M. (2020). An institutional self-study of text-matching software in a Canadian graduate-level engineering program. Journal of Academic Ethics, 18, 263–282. https://doi.org/10.1007/s10805-­020-­09367-­0 Ellemers, N., van der Toorn, J., Paunov, Y., & van Leeuwen, T. (2019). The psychology of morality: A review and analysis of empirical studies published from 1940 through 2017. Personality and Social Psychology Review, 23(4), 332–366. https://doi.org/10.1177/1088868318811759 Friesen, P., Redman, B., & Caplan, A. (2019). Of straws, camels, research regulation and IRBs. Therapeutic Innovation and Regulatory Science, 53(4), 526–534. https://doi. org/10.1177/2168479018783740 Haven, T. L., Tijdink, J. K., Martonson, B. C., & Bouter, L. M. (2019). Perceptions of research integrity climate differ between academic ranks and disciplinary fields: Results from a survey among academic researchers in Amsterdam. PLoS One, 14(1), e0210599. https://doi. org/10.1371/journal.pone.0210599 Horton, J., Jumar, D., & Wood, A. (2020). Detecting academic fraud using Benford law: The case of professor James Hunton. Research Policy, 49(8), 104084. Huybers, T., Greene, B., & Rohr, D. (2020). Academic research integrity: Exploring researchers’ perceptions of responsibilities and enablers. Accountability in Research, 27(3), 146–177. https://doi.org/10.1080/08989621.2020.1732824 Hyun, S. H., Rogers, J. G., House, S. C., Sorkness, C. A., & Pfund, C. (2022). Revalidation of the mentoring competency assessment to evaluate skills of research mentors: The MXA-21. Journal of Clinical and Translational Science, 6(1), e46. https://doi.org/10.1017/cts.2022.381 Integrity for all. (2019, June 6). Nature 570:5. Iwasaki, A. (2020). Antidote to toxic principal investigators. Nature Medicine, 26(4), 457. https:// doi.org/10.1038/s41591-­020-­0831-­6 Jeffe, D. B., Rice, T. K., Boyington, J. E. A., Rao, D. C., Jean-Louis, G., Davila-Roman, V. G., Taylor, A. L., Pace, B. S., & Boutjdir, M. (2017). Development and evaluation of two abbreviated questionnaires for mentoring and research self-efficacy. Ethnicity & Disease, 27(2), 179–188. https://doi.org/10.18865/ed.27.2.179 Kalichman, M. W., & Plemmons, D. K. (2018). Intervention to promote responsible conduct of research mentoring. Science & Engineering Ethics, 24(2), 699–725. https://doi.org/10.1007/ s11948-­017-­9929-­8 Katarov, J., Andorno, R., Krom, A., & van den Hoven, M. (2022). Effective strategies for research integrity training–A meta-analysis. Educational Psychology Review, 34, 935–995. https://doi. org/10.1007/s10648-­021-­09630-­9 Koroshetz, W.  J., Behrman, S., Brame, C.  J., Branchaw, J.  L., Brown, E.  N., Clark, E.  A., Dockterman, D., Elm, J. J., Gay, P. L., Green, K. M., Hsi, S., Kaplitt, M. G., Kolber, B. J., Kolodkin, A. L., Lipscombe, D., MacLeod, M. R., McKinney, C. C., Munafo, M. R., Oakley, B., et  al. (2020). Framework for advancing rigorous research. eLife, 9, e55915. https://doi. org/10.7554/eLife.55915 Kouchaki, M., Dobson, K.  S. H., Waytz, A., & Kteily, N.  S. (2018). The link between self-­ dehumanization and immoral behavior. Psychological Science, 29(8), 1234–01246. https://doi. org/10.1177/0956797618760784 Labib, K., Evans, N., Roje, R., Kavouras, P., Elizondo, A. R., Kaltenbrunner, W., Buljan, I., Tavn, T., Widdershoven, G., Bouter, L., Charitidis, C., Srensen, M. P., & Tijdink, J. (2022). Education and training policies for research integrity: Insights from a focus group study. 
Science and Public Policy, 49, 246–266. https://doi.org/10.1093/scipol/scab977 Minshew, L. M., Olsen, A. A., & McLaughlin, J. E. (2021). Cognitive apprenticeship in STEM graduate education: A qualitative review of the literature. AERA Open, 7(1), 1–16. https://doi. org/10.1177/23328584211052044 Montgomery, B. L., Sancheznieto, F., & Dahlberg, M. L. (2022). Academic mentorship needs a more scientific approach. Issues in Science and Technology, 38(4), 84–87.

Munafo, M.  R., Chambers, C.  D., Collins, A.  M., Fortunato, L., & Macleod, M.  R. (2020). Research culture and reproducibility. Trends in Cognitive Science, 24(2), 91–93. https://doi. org/10.1016/j.tics.2019.12.002 Nakamura, J., & Condren, M. (2018). A systems perspective on the role mentors play in the cultivation of virtue. Journal of Moral Education, 47(3), 3160–3332. Nakamura, J. (2020). Contexts of positive adult development: Mentoring as an example. In S. I. Donaldson, M. Csikszentmihalyi, & J. Nakamura (Eds.), Positive psychological science (2nd ed.). Routledge. NASEM. (2017). Fostering integrity in research. National Academies Press. NIH. (2022). Responsible conduct of research training, Updated guidance FY 2022, NOT-OD-055. Palmer, D., & Moore, C. (2016). Social networks and organizational wrongdoing in context. In D.  In Palmer, K.  Smith-Crowe, & R.  Greenwood (Eds.), Organizational wrongdoing (pp. 203–234). Cambridge University Press. Pan, S. (2021). Taiwanese and American graduate students’ misconceptions regarding responsible conduct of research: A cross-national comparison using a two-tier test approach. Science & Engineering Ethics, 27(2), 20. https://doi.org/10.1007/s11948-­021-­00297-­7 Penders, B., & Shaw, D. (2020). Civil disobedience in scientific authorship: Resistance and insubordination in science. Accountability in Research, 27(6), 347–371. https://doi.org/10.108 0/08989621.2020.1756787 Pettit, N. C., Doyle, S. P., Lount, R. B., & To, C. (2016). Cheating to get ahead or to avoid falling behind? The effect of potential negative versus positive status change on unethical behavior. Organizational Behavior and Human Decision Processes, 137, 172–183. https://doi. org/10.1016/j.obhdp.2016.09.005 Phillips, T., Nestor, F., Beach, G., & Heitman, E. (2018). America COMPETES at 5 years: An analysis of research intensive universities’ RCR training plans. Science & Engineering Ethics, 24(1), 227–249. https://doi.org/10.1007/s11948-­017-­9883-­ Plemmons, D. K., Baranski, E. N., Harp, K., Lo, D. D., Soderberg, C. K., Errington, T. M., Nosek, B. A., & Esterline, K. M. (2020). A randomized trial of a lab-embedded discourse intervention to improve research ethics. Proceedings of the National Academy of Sciences, 117(3), 1389–1394. https://doi.org/10.1073/pnas.1917848117 Raub, S., Torgerson, T., Johnson, A. L., Pollard, J., Tritz, D., & Vassar, M. (2020). Reproducible and transparent research practices in published neurology research. Research Integrity and Peer Review, 5, 5. https://doi.org/10.1186/s41073-­020-­0091-­5 Simon, C., Beerman, R. W., Ariansen, J. L., Kessler, D., Sanchez, A. M., King, K., Sarzotti-Kelsoe, M., Sasa, G., Bradley, A., Torres, L., Califf, R., & Swamy, G. K. (2019). Implementation of a responsible conduct of research education program at Duke University School of Medicine. Accountability in Research, 26(5), 288–310. https://doi.org/10.1080/08989621.2019.1621755 Steneck, N.  H., & Bulger, R.  E. (2007). The history, purpose and future of instruction in the responsible conduct of research. Academic Medicine, 82(9), 829–834. https://doi.org/10.1097/ ACM.0b013e31812f7d4d Strech, D., Weissgerber, T., Dirnagl, U., & QUEST group. (2020). Improving the trustworthiness, usefulness and ethics of biomedical research through an innovative and comprehensive institutional initiative. PLoS Biology, 18(2), e3000576. https://doi.org/10.1371/journal.pbio.3000576 Tamot, R., Arsenieva, D., & Wright, D. E. (2013). 
The emergence of the responsible conduct of research (RCR) in PHS policy and practice. Accountability in Research, 20, 349–368. https:// doi.org/10.1080/08989621.2013.822258 Todd, E. M., Torrence, B. S., Watts, L. L., Mulhearn, T. J., Connelly, S., & Mumford, M. D. (2017). Effective practices in the delivery of research ethics education: Qualitative review of ­instructional methods. Accountability in Research, 24(5), 297–321. https://doi.org/10.108 0/08989621.2017.1301210 Tourish, D., & Craig, R. (2020). Research misconduct in business and management studies: Caiuses, consequences and possible remedies. Journal of Management Inquiry, 29(2), 174–187.

Tuma, T. T., Adams, J. D., Hultquist, B. C., & Dolan, E. L. (2021). The dark side of development: A systems characterization of the negative mentoring experiences of doctoral students. CBE-­ Life Sciences Education, 20(2), ar16. https://doi.org/10.1187/cbe.20-­10-­0231 Watts, L.  L., Medeiros, K.  E., Mulhearn, T.  J., Steele, L.  M., Connelly, S., & Mumford, M.  D. (2017). Are ethics training programs improving? A meta-analytic review of past and present ethics instruction in the sciences. Ethics & Behavior, 27(5), 351–384. https://doi.org/1 0.1080/10508422.2016.1182025 Watts, L. L., Medeiros, K. E., McIntosh, T. J., & Mulhearn, T. L. (2020). Decision biases in the context of ethics: Initial scale development and validation. Personality and Individual Differences, 153, 109609. https://doi.org/10.1016/j.paid.2019.109609 Wentao, L., van Wely, M., Gurrin, L., & Mol, B. W. (2020). Integrity of randomized controlled trials: Challenges and solutions. Fertility and Sterility, 113(6), 1113–1119. https://doi. org/10.1016/j.fertnstert.2020.04.018 Wright, D. E., Titus, S. L., & Cornelison, J. B. (2008). Mentoring and research misconduct: An analysis of research misconduct in closed ORI cases. Science & Engineering Ethics, 14(3), 323–336. https://doi.org/10.1007/s11948-­008-­9074-­5 Yarborough, M. (2019). 20th century science education and 21st century genetic engineering technologies: A toxic mix. Accountability in Research, 26(4), 271–275. https://doi.org/10.108 0/08989621.2019.1596031 Zarin, D.  A., & Califf, R.  M. (2021). Trial reporting and the clinical trials enterprise. Journal of the American Medical Association, 181(8), 1131–1132. https://doi.org/10.1001/ jamainternmed.2021.2041

Chapter 5

Emerging, Evolving Self-Regulation by the Scientific Community

“The scientific community generally believes that the violation of research integrity is rare. Built upon this belief, the scientific system makes little effort to examine the trustworthiness of research….Emerging evidence has suggested that research misconduct is far more common than we normally perceive….Research misconduct…is further facilitated by poor research governance…The current strategy that tackles potential research misconduct focuses on protecting the reputation of authors and their institutions but neglects the interest of patients, clinicians and honest researchers…” (Li et al., 2022).

Introduction

Self-regulation has long been guarded by the scientific community and has been assumed to control the quality of research practices and products. Limitations to this authority emerged in the 1970s and have continued through several new phases, moving toward further democratization of science governance. Does science have the resources and perspectives to uphold its stewardship responsibilities? Examples from the current literature describe strengths in science self-governance but also limitations and multiple suggested improvements. Two historical case examples document delayed recognition of methodological and moral limitations of scientific practice. The Institutional Corruption framework can provide guidance in assuring that the institutions of science support science's purpose rather than being infected by conflicts of interest and diverted to financial goals.

The old standard insisted on a strict division of labor between the public and scientists. But recent trends toward democratization of science involve non-scientists in decisions about research priorities, allocation of resources, and setting of evidentiary standards before a scientific finding can become the basis of public action. Influencing science allows the public to protect and promote its interests, fully
benefit from science, and build trust in it. Kurtulmus (2021) charges that a number of sciences have been complicit in neglecting the interests of certain social groups (women, LGBT individuals), a practice that strong science self-regulation has not prevented. Autonomous science can become an unaccountable source of power that undermines collective self-government. In addition, hijacking of public participation in science by other actors is a persistent danger (Kurtulmus). Thus, self-regulation in science is a long-standing logic, but it is now recognized that it must be balanced by an imperative of quality control and, emerging in the twenty-first century, by social responsibility.

In recent times, challenges to scientific self-governance have occurred in three waves. The first emerged in the 1970s when, amid occurrences of blatantly unethical medical research, the authority of scientists to decide on their own practices was contested and contained, in part through legislation that established IRBs and, eventually, research misconduct regulations. The desire for scientific autonomy has fueled and continues to fuel self-regulation (Littoz-Monnet, 2020).

A less well-understood second wave addresses the ability of science to assure, through self-regulation, the quality of the research it produces. In a widely cited 2012 article about why science is not necessarily self-correcting, Ioannidis calls out the following practices: bias of all kinds; fabricated results; questionable research practices; lack of replications and editorial bias against them; and lack of publicly available protocols, data, and analyses (Ioannidis, 2012). None of these shortcomings has been adequately addressed to date. Some would suggest that the original conception of science self-regulation, which protected a private domain in which individual scientists were granted authority to package and present their work and to control interpretation of their findings by keeping data and methods sequestered and inaccessible, with few consequences for lack of replicability, should no longer be acceptable (Freese & Peterson, 2018). Expectations of responsibility to meet a much more rigorous set of methodological standards can be found in a recent draft document related to the free and responsible practice of science (CFRS). It states that scientists have a responsibility to meet globally recognized standards of reliability and validity; failure to do so infringes on the human right to enjoy the benefits of scientific progress (CFRS, 2021). Notably, the NIH Strategic Plan for 2021–2025 acknowledges that the agency must enhance reproducibility, provide training modules for the research community in good experimental design, improve stewardship of trials, and increase transparency through data sharing (NIH, 2021), an acknowledgment that standards are not reliably being met.

A third wave challenging the quality of scientific self-governance constitutes a shift from the predominant view in the second half of the twentieth century. This most recent view requires scientific freedom to be linked to social responsibility not only in the setting of a program of research (Douglas, 2021) but in all aspects of the practice of science. Yet, in a recent case of experimental human germline editing, the scientific community could not self-regulate to prevent a violation of a widely supported consensus (Nelson et al., 2021). It is also worth noting that both the regulations regarding protection of human research subjects and those of
research misconduct reflect the now old logic. Neither requires accounting for the impacts of research on human subjects, on subsequent users of research, or, more broadly, on society, as required in the third-wave logic. These regulations must be upgraded to address these issues.

Writing from a European perspective, Landeweerd et al. (2015) note that in the late 1970s and 1980s there was a shift from science self-governance to external regulation. These authors describe several sequential phases over the past 30 years of governance: first, a technocratic style focused on assessors of acceptable risk rather than on moral and other questions; second, an applied ethics style to provide input on the moral delimitations of science and technology; and third, a public participation style of governance, which aims to drive political and governmental choices. While each had its limitations, "each of the styles emerged as a response to the increasing democratization of science governance" (Landeweerd et al., 2015, p. 18).

It is worth noting that a bedrock ethical value undergirding science regulation is stewardship. Stewards are those who are charged with taking good care of that with which they have been entrusted, toward collective betterment. Stewardship can also be built into regulatory structures (Laurie et al., 2018), as well as into the general understanding of the responsibilities of individual scientists and of science communities.

This chapter presents a limited analysis and interpretation of perspectives on whether science has the resources—conceptual, capacity building, social/political support, and other tools—to self-regulate to modern methodological and social standards and to uphold its stewardship responsibilities. Examples were selected for the variety of lessons they provide; they are neither exhaustive nor necessarily representative of all lessons. They are used to address the question: what are current views and examples of how self-governance is practiced, and how might it evolve to meet emerging standards? Two historical cases that document self-governance efforts add insights, as do two additional examples where external regulation was deemed necessary. Options for system-wide institutional reforms are considered through the Institutional Corruption (IC) framework. IC is useful for diagnosing when actions or beliefs signal a loss of alignment between current practices and policies and an institution's (in this case science's) purpose or goal (Lessig, 2013). The IC framework also invites attention to the costs of not examining, undertaking, or delaying relevant reforms. Finally, we visit the latest case du jour and reflect on it against expert conclusions about self-governance in science.

Examples of Scientific Self-Correction/Self-Governance and Suggestions for Improvement

Seven examples were located in the current literature by explorative methods, in part by searching "science self-correction/governance," and are excerpted here. Further relevant examples could be added as they are identified or available.

The first example, Paul Offit's book You Bet Your Life (Offit, 2021), discusses how errors with emerging treatments—X-rays, blood transfusions, anesthesia, antibiotics, and vaccines—were eventually corrected. Perhaps the most revealing recent example described is that of gene therapy, and especially the death of research subject Jesse Gelsinger. With further research, investigators were able to determine the likely cause(s) of failure, information that proved essential to gene therapy's subsequent development.

The second example describes the effects of the lack of a science self-correction system on the research underlying the practice of medicine and patient safety, which is alleged to fall well behind the practices of other fields (Becker, 2020). Correcting this situation would require tested, continually updated, widely promulgated, and uniformly implemented standards of practice for research. The current error control system in the research undergirding medical practice is too fragmented and sporadic (Becker, 2020) and beset by perverse incentives. Upshur and Goldenberg (2020) note that the current system of research and its regulation is not generally effective in producing medical interventions with low harm profiles or in being honest about the degree of uncertainty in the research base for medical practice. A self-corrective effort may be found in PEERS, an open science Platform for the Exchange of Experimental Research Standards, a curated open platform to aid scientists in determining which experimental factors and variables are most likely to affect the outcome of a specific test, model, or assay (Sil et al., 2021).

The third example examines whether the IRB system in the USA is fit for purpose. This system has, from the beginning, represented a political compromise at the behest of the scientific community, to protect its professional autonomy, by establishing a weak regulatory system with few resources and little authority. There is still no support for strengthening it. Even worse, the IRB system is procedural, with no real examination of whether its societal goal (protection of research subjects) is met, an example of goal displacement. Confusing and contradictory rules from an agency (the Office for Human Research Protections, OHRP) lacking authority to issue formal precedents have left unclear what it means to comply with the regulations. The IRB system has consisted of thousands of local boards, each developing its own policies but answering to an agency (OHRP) that lacks sufficient resources to provide oversight and clarify regulations. The result has been costly hyper-compliance with each institution's interpretation of unclear rules and the emergence of private accreditors (the Association for the Accreditation of Human Research Protection Programs) who set "best practices" and provide audits (Babb, 2020). One might label this situation forced self-regulation. The question that must be asked is: through this arrangement, whose goals have been met? Whether social goals for subject protection have been met cannot be answered for lack of evidence. In protecting scientific autonomy, the price has been confusion, delays, and high institutional costs.
The fourth example, meta-research (the study of research itself—its methods, reporting, reproducibility, evaluation, and incentives), is a new field that provides an evidence base, noting useful practices and exposing improper practices that need to be corrected expeditiously instead of depending on what has been a haphazard
practice of self-correction (Ioannidis, 2018). Still missing are regularized monitoring of these practices and enforcement and correction when problems are found. Meta-science studies have clearly shown that assurance of quality and correction of the scientific literature are at best unreliable, with no single entity having the jurisdiction or incentive to solve the problem. (See Chap. 3 for further discussion.) As a social movement, meta-science has become a player in debates about research integrity, using the tools of science to diagnose problems in research practice and to improve research efficiency on a macro scale and, as a reform movement, treating the current practice of science as a scandal. But the ethics of meta-science and its current practice also need attention. It is ethically problematic that meta-science studies dip into problem areas in specific disciplines without providing a summative (across disciplines) base for change, in other words, without showing how their findings generalize. There is rarely a response from the fields studied, which is necessary to assure that the scientific community knows what to do to self-correct and fix the problem. Meta-science studies rarely address all the institutions that are part of science production and dissemination, which is usually necessary to create change. Under these conditions, meta-science is sub-optimizing its benefit for reform.

The fifth example advocates incorporating negligence into regulatory language. Negligence, defined as a professional's failure to live up to the standards expected of a professional of similar qualifications, is a category of misconduct that is missing from codes of conduct for scientists. It is also missing from government regulations, including those addressing research misconduct. Desmond and Dierickx (2021) argue that negligence provisions are crucial for justified trust in the scientific community, which, in turn, is essential for scientists to maintain their autonomy. To judge negligence, the scientific community needs clear standards of competence expected of all, a means to assess them, and a means to hold individuals, communities of scientists, and others accountable.

The sixth example concerns assuring an integrated, accurate scientific record. Several revised practices are suggested. First, PubMed would include a data-driven assessment of papers' technical quality and generalizability, targeting those scientists with consistently low scores for additional training. Second, preregistration of all experimental designs would assure disclosure of important elements of rigor, which scientists currently can choose to disclose or not. Third, all related publications would be assembled into a sub-network so that validation of findings could be tracked. PubMed currently does not automatically associate related studies and therefore does not deter useless duplication or help to document the accumulated state of the science. In this proposed revision of PubMed, publications could be transparently modified, with the history of changes recorded. Along with reform of peer review to make it accountable, and use of data sharing, Bielekova and Brownlee (2021) advocate for properly managing and translating the vast stores of biomedical research data.

A seventh example is the initiation of registered reports (RR), first adopted in 2013 and, as of early 2021, used by more than 300 journals (Chambers & Tzavella, 2021), largely in psychology but also to some degree in medicine. It is an important example of science self-correction.
RRs represent a new publication workflow in
which the decision about journal acceptance occurs on the basis of a detailed proposal submitted before data are collected or analyzed. Stage 1 review evaluates the proposal; an in-principle acceptance is a commitment to publish the results regardless of the outcome, as long as the proposal is adhered to (verified in Stage 2 review). Acceptance is not contingent on results and instead focuses on the methods and the importance of the research questions, thus consistent with scientific norms. RRs are meant to eliminate publication bias and various forms of reporting bias and, because of high standards and detailed methodological description, to make efforts at reproducibility easier. Journals accepting RRs may also practice open data and open materials. This approach differs from preregistration, in which researchers submit hypotheses and a data analysis plan to a registry, rather than a journal, before data collection (Montoya et al., 2021). A comparison of RR trials with standard trials in psychology found 44% positive results in RRs but 96% positive results in standard trials (Scheel et al., 2021). Some believe that all clinical trials should be conducted and reported exclusively as RRs, to assure that trial results are reported free from bias and that results are published at all (Chambers & Tzavella, 2021).

In an eighth example, behavioral research scientists have suggested a much more basic reform. They note that the current practice of "drawing inferences about an intervention's likely effect at a population scale is based on findings in haphazard convenience samples that cannot support such generalizations" (Bryan et al., 2021, p. 980). This lack of attention to heterogeneity is especially problematic in artificial intelligence, which can and has led to policies that perpetuate or exacerbate inequalities. A solution requires investment in shared infrastructure for high-quality, generalizable samples so that heterogeneous effects can be studied. Intervention effects should be expected to vary across contexts and populations, and decline of effects in later replications should not automatically be attributed to questionable research practices. This stance suggests that adding larger samples to current studies and requiring study preregistration are insufficient to deal with inconsistency of intervention effects (Bryan et al., 2021). The heterogeneity revolution is an example of scientists self-regulating on the basis of basic knowledge about how research practices should be improved.

While not depicted as a comprehensive landscape, these examples reveal various meanings of science self-correction. The first example notes that determined self-correction is necessary to bring emerging technology to fruition. The second example describes the failure to meet an important societal goal: to undergird the safe and effective practice of medicine. The third example, of IRB regulations, notes that government regulation and science self-regulation are always interdependent; it remains unclear whether this co-regulated IRB system actually fulfills the original goal of protecting human research subjects. The other examples describe new and evolving tools to meet methodological standards and expectations of social responsibility.
These include incorporation of negligence into legal standards governing science; development of an evidence base essential to governance (meta-science); overhauling a resource like PubMed into an integrated, cumulative scientific record that also discloses important elements of rigor; and use of registered reports to reduce biased research practice. The last example, investment in infrastructure to improve generalization of scientific
findings, is the boldest, aimed at a basic methodological limitation of the current practice and funding of science.

Prominent Case Examples of the Evolution of Science Self-Governance

Here, I address two prominent case examples in the evolution of science self-governance. In the first case, progress was apparently constrained by inertia in the scientific community and in the institutions that produce and disseminate research. The second case describes the evolution of what now would be viewed as egregious research practices, eventually overtaken by changes in the broader culture.

Preclinical Research

In 2010, the National Institute of Neurological Disorders and Stroke (NINDS) of the NIH called a meeting of stakeholders to discuss how to improve methodological reporting of animal studies in grant applications and publications. Few animal studies reported on randomization, blinding, or sample size determination. Poor reporting, often associated with poor experimental design, rendered studies difficult to reproduce and likely explained why multiple clinical trials, based on positive results in animal studies, failed (Landis et al., 2012). The ARRIVE (Animal Research: Reporting of In Vivo Experiments) guidelines were developed and endorsed by a thousand journals. But 10 years later, many preclinical studies still exhibit these problems (du Sert et al., 2020). This case raises questions relevant to science self-correction. Obviously, the quality of study methodology had not been monitored by the scientific community, nor apparently by funders, and a decade later the issue had still not been resolved.
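As a reminder of what one of the missing reporting elements involves, the sketch below computes an approximate per-group sample size for a two-group comparison using the standard normal-approximation formula; the effect size, significance level, and power shown are illustrative assumptions, not values drawn from the studies discussed.

import math
from scipy.stats import norm

def n_per_group(effect_size, alpha=0.05, power=0.80):
    # Normal-approximation sample size per group for a two-sided, two-group
    # comparison of means: n ~ 2 * ((z_{1-alpha/2} + z_{1-beta}) / d)^2.
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    return math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# Detecting a large standardized effect (d = 0.8) with 80% power and alpha = 0.05:
print(n_per_group(0.8))  # roughly 25 animals per group under these assumptions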

Human Experiments with Hepatitis (Halpern)

Research abuse narratives are embedded in our collective memory of late twentieth-century biomedical research; they serve to affirm the moral priorities we proudly hold. But the accounts have serious shortcomings as reflections of the past. By focusing exclusively on the individual scientist as the responsible party, they obscure the web of institutional and cultural support that gave momentum to the now-condemned research. Missing from the picture are the federal sponsorship of virtually all of the repudiated studies, the central role of a military-biomedical elite with its links to defense agencies, the widespread acceptance of scientists' justificatory claims, the deference of the press, the buy-in from managers of multiple institutions providing access to subjects; and the lived experience of some participants… (Halpern, 2021, p. 179).

Halpern’s documentation of the story behind the 30 years (1942–1972) of human experiments with hepatitis carefully track the ability of the American biomedical elite and the military in sustaining the original narrative of research aiming to control the disease in support of the war effort and national security. Enrolling marginalized and often institutionalized groups—disabled children, individuals with mental illness, prisoners, and conscientious objectors, kept the research out of the public eye. The social, military, and scientific context fostered the maintenance of a moral framework that enabled dangerous experiments, without attention to serious longterm effects, with marginalized people and ignoring the Nuremberg Code. These hepatitis trials in this time period also further supported a policy of not assuring longterm follow-up or compensation for research injury, a policy that continues in the US today. With cultural changes, including the emergence of bioethics, involved biomedical researchers lost control of the wartime narrative in the 1960s (Halpern, 2021). This pattern of neglect of medical research ethics in the first two decades after World War II, a period of vast increase in human experimentation, has been seen by analysts as characterized by lack of moral leadership among physicians and a focus on promoting their own professional interests. Scandals emerging in the mid-1960s created a need for public action (Jacobs, 2020). One can only conclude that science self-governance and self-correction were eventually, slowly and painfully, corrected, partly by voices inside the scientific community but largely by forces outside science. Some contemporary examples consider when external regulation was deemed necessary even when uniformly resisted by the scientific community.

When External Regulation Was Deemed Necessary

As noted earlier, the position of the scientific community has generally been that there should be no external regulation of its activities. Examination of instances in which limitations have been imposed provides insight into the balance between external regulation and science self-regulation, and into the effectiveness of the compromises reached. Two examples in the US context are instructive: dual-use policies and research misconduct regulations. It is important to note that such regulations largely apply to publicly funded research. It should also be noted that how to regulate privately funded research, in general, remains an important unresolved issue in science (Evans et al., 2021).

Dual Use

Dual-use research can be used both intentionally and unintentionally to create harm. Production of biological weapons is relatively easy and inexpensive, details can be obtained from published research, and policies supporting open data are ascendant. The stakes are high and the list of stakeholders is long: not only researchers
but also universities and research institutes, funding bodies, research integrity committees, and the public, not only in the USA but around the world, leading to the question of who is responsible (Kavouras & Charitidis, 2020). Most of these parties are not knowledgeable about the security issues that arise from dual use. Drew and colleagues (2017) note that neither funders nor publishers provide training for these responsibilities. Increasing use of preprints, posted before journal submission and peer review, and a significant increase in "predatory journals," which lack quality standards or peer review, add to the concern.

Imperiale and Casadevall (2018) note that all technologies are potentially dual use. After more than a decade of intensive discussion and controversy about biological experiments with dual-use potential, little consensus has been reached within the scientific community about when the value of a particular experiment justifies the risks involved. These authors suggest focusing on the importance of the scientific question being addressed (Imperiale & Casadevall, 2018). Evans and colleagues (2022) examine regulation of dual-use research from a philosophical perspective, noting that there is little principled guidance regarding when dual-use research in the life sciences can be forcibly censored. While autonomy in scientific practice is important, sometimes the risk of disseminating dual-use research is too grave, in which case the government should have to satisfy an exacting justificatory standard to serve a compelling social interest (Evans et al., 2022). In the context of a pandemic (most recently, COVID-19), it is likely that much related research will be carried out and published without adequate dual-use oversight (Musunuri et al., 2021).

In the absence of agreement on dual-use monitoring standards, construction of tools to help scientists and other stakeholders determine what is dual use has likely stalled. Nevertheless, a web-based tool, the Dual-Use Quickscan, is available, focusing on: characteristics of the biological agent, such as virulence, production rate, transmission rate, and availability of medical countermeasures; knowledge and technology regarding the biological agent; and consequences of misuse. This instrument has been reviewed by an expert committee, presumably supporting content validity. The authors (Vennis et al., 2021) recommend use prior to the commencement of research and repeatedly as the research progresses.

In summary, while regulation of dual-use research requires knowledge of the science and of the research context, knowledge that resides with scientists, over a prolonged period of time the community has not been able to achieve a working standard or widespread skills to manage dual-use research. Societal risks do exist, and access to published results and open data is expanding exponentially.
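The sketch below is a purely hypothetical illustration of how a screening checklist organized around the categories attributed to the Dual-Use Quickscan (agent characteristics, available knowledge and technology, and consequences of misuse) might be encoded for repeated use during a project; the questions, scoring rule, and referral threshold are invented for illustration and do not reproduce the actual tool.

# Hypothetical dual-use screening checklist, loosely organized around the
# categories described for the Dual-Use Quickscan (Vennis et al., 2021).
# The questions, scoring rule, and threshold are invented for illustration.
CHECKLIST = {
    "agent characteristics": [
        "Could the work increase virulence or transmissibility?",
        "Could it reduce the effectiveness of medical countermeasures?",
    ],
    "knowledge and technology": [
        "Would publication lower the technical barrier to recreating the agent?",
    ],
    "consequences of misuse": [
        "Could misuse plausibly cause widespread harm to people, animals, or crops?",
    ],
}

def screen(answers, threshold=2):
    # answers maps each question to True/False; any score at or above the
    # threshold triggers referral for a fuller dual-use review.
    score = sum(1 for answered_yes in answers.values() if answered_yes)
    return score, score >= threshold

answers = {q: False for questions in CHECKLIST.values() for q in questions}
answers["Could the work increase virulence or transmissibility?"] = True
score, refer = screen(answers)
print(f"score = {score}; refer for dual-use review = {refer}")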

Research Integrity and Research Misconduct Regulations

Management of research integrity (of which research misconduct is a part) has followed a clear sequence over time. Prior to 1975, science was seen as being in pursuit of truth. Since values were assumed to be internalized by scientists through
socialization into a professional ethics of self-regulation, institutional oversight was thought not to be needed. During the 1975–1990 period, potential risks to human research subjects became of concern, yielding the National Research Act of 1974, the Belmont Report, and subsequent installation of institutional review boards (IRBs) through the Common Rule. IRBs were meant to operate as a form of peer review with the intent to preserve scientific autonomy. Concern about research fraud surfaced in the 1980s, resulting in federal legislation requiring agencies and universities receiving federal research funds to have policies for responding to allegations of research misconduct. The third historical period (1990–present) has been marked by elements not present during the period when regulations were imposed and thus not addressed by them. These include: a shift to research funding by private, large commercial entities and attention to the effects of conflicts of interest (Montgomery & Oliver, 2009). In summary, past norms were described as tacit and unarticulated, contributing to weak socialization and inadequate self-regulation by the scientific community. In subsequent periods, logics have become more comprehensive and explicit, adding coercive pressure with penalties. Voluntary standards such as those of the International Council of Medical Journal Editors, rise of accreditation for human subjects protection programs, and institutional conflict of interest committees, may well prove to be inadequate, suggesting continuation or expansion of government oversight (Montgomery & Oliver, 2009). A summary of the literature on misconduct in research may be found in a chapter by David Resnik (2018), noting especially that regulations flow from federal agencies, related to research they fund; very little is known about research misconduct in privately funded research. Here we focus on activities of the scientific community to manage RM in the context of a clear goal of research integrity. Individual members of the scientific community have taken initiative to address this problem. Bolland et al. (2016) tracked down a large number of publications about bone fracture and osteoporosis, suspected of RM, authored by a single individual with colleagues. They used statistical methods, identified a significant number of improbable events and noted a lack of congruence with meta-analyses on the research question. Bordewijk et al. (2021) summarized the literature on methods to assess RM in health-related research, noting that although tools to assess textual plagiarism are well established and ORI has an image analysis tool, tools used to investigate other forms of RM are rudimentary and labor intensive. Only a few are validated and their strengths and weaknesses are not known (Bordewijk et al., 2021). Reflecting on their most recent RM case, university research integrity officers noted that 9 of 10 good practices of research were deficient. These include: transparency, good understanding of statistical methods, feeling empowered to speak up, having a group leader who is a good manager of people and of data, and availability of research records sufficient for others to reconstruct what was done. It is notable that nearly two-thirds of individuals who had committed research misconduct in these cases had taken a course in the responsible conduct of research (Kalichman, 2020). Finally, Xie and colleagues (2021) note that there is no universal definition


Reflecting on their most recent RM case, university research integrity officers reported that 9 of 10 good research practices were deficient. These include transparency, a good understanding of statistical methods, feeling empowered to speak up, having a group leader who is a good manager of people and of data, and the availability of research records sufficient for others to reconstruct what was done. It is notable that nearly two-thirds of the individuals who had committed research misconduct in these cases had taken a course in the responsible conduct of research (Kalichman, 2020). Finally, Xie and colleagues (2021) note that there is no universal definition of questionable research practices; such a definition, even though these practices are not classed as RM, is urgently needed to promote responsible research practice.

The latest RM regulations covering federally funded research were published in 2005. Experience over nearly two decades has exposed the naïve compromises reached with the scientific community at that time, yet no reconceptualization of the bases for these regulations has been brought forward. Several areas in the current RM regulations have proven especially problematic. For example, depending on complainants to make allegations of RM may have been thought part of the scientific community's responsibility to monitor itself; in reality, the negative impacts on whistleblowers are such that this system very likely under-reports RM significantly. Making institutions that receive the federal research monies responsible for managing allegations of RM against their own faculty, staff, and students would also seem conflictual. The original rationale was likely that these institutions would have good knowledge of local circumstances; in reality, such an arrangement is a gigantic conflict of interest. Indicative of the potential for reputational and financial damage to these institutions, there is virtually no public record of how research institutions have managed these responsibilities. Also problematic is the requirement that allegations be made "in good faith," which lacks a rigorous definition. This situation has left open the opportunity for entities wishing to derail an investigator or a line of research, such as those seeking to protect commercial interests, to allege RM. Could direction from an overall framework serve to detect when science is losing its direction, whether from lack of self-governance, poor government regulations, and/or from intrusion of perverse incentives into its basic institutions (Redman, 2015)?

Institutional Corruption Framework

"Institutional corruption is manifest when there is a systemic and strategic influence which is legal, or even currently ethical, that undermines the institution's effectiveness by diverting it from its purpose…" (Lessig, 2013, p. 553). Systemic means regular and predictable; strategic means used by others to achieve the deviation, most commonly through money but sometimes through ideology. Such an influence may be permitted under current regulation, but if documented it should signal the need to change that regulation. One possible consequence of such corruption is that the institution becomes incapable of achieving its purpose or finds it more difficult to do so (Lessig, 2013).

The purpose of science is to discover and disseminate knowledge, within the bounds of protection of research subjects and subsequent users of that knowledge and of access to those opportunities. Each of the institutions of science can contribute to diversion from science's purpose: journals by publishing only novel and hyped content in pursuit of financial rewards, research-producing institutions by overlooking the inherent conflict of interest in their oversight role, funders by neglecting to declare and enforce rigorous rules for research quality as a condition of support, and regulators by delegating too much oversight to conflicted entities.


A large part of combatting institutional corruption is formulating rules and procedures that determine what counts as corruption, rather than simply accepting current rules and procedures as legitimate (Thompson, 2013). Central tasks are identifying the issues and the individuals responsible for monitoring and reversing problematic behaviors, and looking for alternate (non-corrupting) means of meeting institutional needs. Reforms often change structure and incentives. In the example of the quality of preclinical research data outlined above, analysis by NINDS, while belated, aimed to set standards that would be expected for research funding and to offer remedial instruction in how to meet those standards. This reform apparently failed, perhaps because it required rigorous compliance by funders, research-producing institutions, and journals, which was not forthcoming.

Characterizing the Task Ahead, with Guidance from the Institutional Corruption Framework (IC)

The core lesson from the IC framework is that science and the many institutions involved in producing and disseminating research must remain true to science's core purpose and not be diverted from it. Tasks ahead include examining the accuracy of traditional core beliefs and assumptions, attending to emerging empirical findings, and attending to core institutions. Some (Miedema, 2022) suggest that a central concern is a continuing belief, including at the institutional and political level, in a no-longer-accurate legend of inevitable scientific self-correction. This legend invokes the uniqueness and ethical superiority of science compared with any other social activity, but ignores growing evidence of areas of shoddy science, a closed scientific social system that determines rewards, displacement of robust and impactful results by internal credits for academic advancement (flawed use of metrics), and a widely felt lack of alignment with the core values of science. It is not at all clear that science can self-correct for these flaws, in part because they require system-wide reform. Miedema (2022) suggests that institutional accreditation for research may help institutions establish and uphold policies supportive of research integrity. A central question is who will establish and uphold accreditation standards that support a reform agenda.

Others conclude that major self-governance mechanisms simply do not work to support the core goals of science. (1) Heesen and Bright (2021) assert that the available empirical evidence does not support the purported purposes of pre-publication peer review as a method of quality and research fraud control, and that it leaves journal editors as gatekeepers with outsized control. These authors suggest that post-publication review could serve well at less cost. (2) Many have noted that science rewards rapid and numerous publications, with little incentive to publish high-quality studies. There is little indication that this reward structure will reverse, although all the institutions of science (funders, institutions hosting science, journals) should have a duty to change it and to strongly reward quality.


(3) The probability that scientific fraud will be detected, with a subsequent level of punishment that would serve as a deterrent, is far too low. Teodorescu et al. (2021) note that frequency of enforcement is more important than severity of punishment in reducing violation behaviors. Punishment for research misconduct relies largely on shaming, presumably harming reputation, and on temporary restriction of some scientific activities such as serving on review committees, but not on detecting and righting the wrongs done to research subjects, other scientists, and the public trust. (4) The retraction system is totally broken. Bibliographic database, journal publisher, and guideline developer structures that permit systematic identification and correction of research with compromised integrity have simply not been established (Avenell et al., 2019). (5) Finally, assurance of valid scientific methods as a crucial requirement for research to be ethical has faltered and may not be monitored by IRBs.

The International Science Council (2021a, 2021b) concludes that the academic community and its institutions are complicit in processes that ultimately inhibit access to and the accuracy of the scientific record, and that they have not exercised their market power to effect change, including in a dysfunctional science publishing market. Proposals for reform of some of the institutions closely involved in the production, dissemination, and regulation of research evidence aim to return them to the support of science's purpose. Most such proposals require structural reform. It is important to note that traditional research ethics has generally not addressed the social structure of science; a goal of research integrity requires that ethically problematic issues with this structure be addressed.

Core Institutions as Conduits for Science's Core Goals

Universities and other science-producing institutions, regulators, journals, and other managers of the scientific record are core institutions necessary to support the core goals of science, reached with integrity. Do the current practices of each assure that these goals are met?

Franzen (2021) argues that a central flaw in the current system is that the responsibility universities have been given, by regulation, to adjudicate research misconduct and poor scientific practice among their own faculty creates a serious conflict of interest. Universities wish to protect the flow of research money and their reputations, and they do so by invoking the confidentiality of misconduct and quality issues and by blocking independent research that would make the process evidence based. Regulatory agencies usually accept the findings that research institutions reach in judging their own faculty and oversight. It is important to note that giving universities this responsibility was thought to support the self-governance of science. Insufficient attention has been paid to the costs of maintaining the current arrangement, absent checks and balances that make these institutions accountable to the core goals of science.

Historically, sponsors and regulatory agencies have deemed data generated to support approval of drugs and other products to be confidential.


In a few cases where such data have been revealed, improper research practices and selective reporting of trial results have been found, subverting a core goal of science: producing research that is transparently tested for accuracy. Regulatory data can also be used for secondary research; they have value beyond regulatory uses. Over the past decade, the European Medicines Agency and Health Canada have greatly expanded public availability of regulatory data, while the US Food and Drug Administration has lagged behind (Egilman et al., 2021). The commercial science sector has been exempt from practices that allow scientific self-correction, including challenges not only to the original studies supporting product approval but also to regulatory decisions.

Journals, and specifically editors, are in a powerful position to assure the quality of published science and to serve as custodians of the cumulative research record, both core goals of science. Evidence in some scientific fields shows otherwise, requiring reform. Discussing the experience of psychology, in terms likely highly relevant to other fields, Scheel et al. (2021) note that cumulative science is meant to be a tool for inferring what is true about the world. Its benefits should hold when aggregating across many findings, even with random error, but when science is systematically biased, its accumulation cannot be trusted. Publication bias, in which novel, positive findings are hugely over-represented in a body of literature, appears to be widespread, and the absence of negative results is a serious threat to cumulative science (Scheel et al., 2021). A publication decision made on the basis of a registered report (a protocol of hypotheses, methods, and analysis plan), before research results are known, is thought to correct for such bias and to ensure high-quality study methodology. Indeed, a systematic comparison found a much lower rate of positive results in registered reports than in the standard literature (Scheel et al., 2021). Others note that publicly posting research plans prior to data collection is largely a practice of academic science, not of consultancy-based research, which has been characterized by a low level of transparency and accountability (Evans et al., 2021). It is difficult to imagine this level of transparency being adopted by industry-based research, which severely limits its usefulness as a system for scientific self-oversight. In this sector, registered reports are also not required for clinical trials, even though accuracy is essential for subject and patient protection (Chambers & Tzavella, 2021).
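To make the publication-bias point concrete, the toy simulation below compares the average published effect when every study is published with the average when only statistically significant positive results reach the literature. It is an illustrative sketch only; the sample sizes, effect size, and selection rule are assumptions, not figures from Scheel et al. (2021).

    import numpy as np

    rng = np.random.default_rng(0)

    def mean_published_effect(true_effect, n_per_group, n_studies, selective):
        """Simulate many small two-group studies and return the mean published
        difference in means, under full publication or selective publication."""
        published = []
        for _ in range(n_studies):
            treated = rng.normal(true_effect, 1.0, n_per_group)
            control = rng.normal(0.0, 1.0, n_per_group)
            diff = treated.mean() - control.mean()
            se = np.sqrt(treated.var(ddof=1) / n_per_group +
                         control.var(ddof=1) / n_per_group)
            # keep only positive results significant at two-sided p < .05
            if not selective or diff / se > 1.96:
                published.append(diff)
        return np.mean(published), len(published)

    # A weak true effect, studied in many small trials
    print(mean_published_effect(0.1, 30, 5000, selective=False))  # aggregate near 0.1
    print(mean_published_effect(0.1, 30, 5000, selective=True))   # markedly inflated aggregate

The selective regime publishes only a small fraction of studies, yet the effect it reports is several times the true one; aggregating more of the same biased literature does not correct the error, which is precisely why a registered-report decision made before results are known matters.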


Also problematic, the retraction system has not kept pace with technological change. Frampton et al. (2021) note that there are no guidelines for retracting articles from preprint servers (86% of top-ranked journals allow authors to submit first to preprint servers), that unmarked versions of retracted articles can be shared by online sites and downloaded directly by users, and that meta-analyses and guidelines relying on retracted work must be corrected. Using the COVID-19 literature as an example, 59% of retracted articles remained available as original, unmarked electronic documents after retraction. The fact that these errors have consequences for patient and research subject welfare is ignored (Frampton et al., 2021).

Where they exist, self-corrective initiatives to assure that science meets the core value of validity appear to be far from universal, and it is unclear how widely they can be implemented. Addressing the range of institutions and practices in the research production/dissemination/utilization/regulation system requires an encompassing framework. Institutional Corruption offers one such framework, requiring that each set of organizations adhere to the basic purpose of science, which is to produce and disseminate valid science within the boundaries of subject protection and access. But any framework requires examination of the ways in which the culture of science contributes to concerns about the adequacy of self-governance. Many of its mechanisms, such as slow, inconsistent, and sometimes flawed peer review and the use of metrics that reflect neither the quality of science nor its public purpose, at the same time maintain a social stratification structure that benefits influential members of the scientific community. The problematic practices of scientific institutions noted above have been allowed, by the scientific community, to persist and flourish (Schneider et al., 2021). Recall that a large part of combatting institutional corruption is determining what counts as corruption rather than simply accepting current rules and procedures as legitimate (Thompson, 2013). Two further examples of science self-regulation are instructive.

Reproducibility Project: Cancer Biology as an Example

A small percentage of cancer drugs that enter clinical trials are eventually approved, raising questions about what is optimal or possible. Assurance of trial quality and appropriate tests of reproducibility are key; can the scientific community self-govern this process efficiently? This project set out to repeat 193 experiments from 53 high-impact papers but was finally able to repeat only 50 experiments from 23 papers. Many original papers failed to report key descriptive and inferential statistics, and none described the experiments in sufficient detail to allow protocols for repeating them to be designed. About two-thirds of original authors were not helpful. The authors of this project describe the evidence for self-corrective processes in the scientific literature as underwhelming and conclude that there is still a long way to go before strong reporting is normative (Errington et al., 2021).

Addressing the long haul of self-governance in science, Stephen Maurer (2017) notes relevant general trends. Governance mechanisms need to be fair, transparent, accountable, and open; those that are not are subject to capture, and there is very little research addressing the degree to which the current system meets these ethical requirements. Government leaves detailed resource allocation to scientists themselves, who may be making such decisions based on merit and/or on "old boys" networks. Suitable monitoring mechanisms must be in place for public policy makers to observe whether the self-governance system is sufficiently aligned with the public interest; government officials frequently do not take this step. Science policies are frequently narrow in scope and underfunded, as is observable for both the US human subjects protection and research misconduct regulatory infrastructures; this is an additional way of preserving science self-governance, whether or not it is functioning properly.


Reforms Supporting Science's Core Goals; Lessons from East Asia

While this book is focused on US policy and practice, analysis of a cluster of research misconduct cases in East Asia reveals causal patterns and policies that are not well considered but are very likely present elsewhere. A significant role of the state and a strong nationalist culture that promotes scientific activity toward economic and social ends, rather than straightforwardly scientific ones, are evident in these countries. This strong influence cuts both ways. After the Hwang scandal in South Korea, that nation's government made sweeping regulatory changes to support research integrity, with little resistance from the scientific community; such would likely not have been the case in the USA (Bak, 2018).

Mikami's (2018) analysis of the STAP cell scandal in Japan outlines a potential cultural vulnerability in science and government thought not to be specific to that country. A Japanese scientist (Yamanaka) had earlier developed a novel technique of cell reprogramming that brought great fame to the country, leading to a sense that further novel developments in the field were doable in service of Japan's leadership in regenerative medicine. In retrospect, limited institutional oversight of the science in the STAP project before publication, and an arrangement in which scientists contributed parts of the project while no one scientist was responsible for all of it, appear to have played a role in the finding of research misconduct against the lead scientist (Obokata). The lesson is that the socio-institutional culture invited the scientists involved to overstate how strongly the work aligned, institutionally and societally, with goals for their country. In this case, the global scientific community was efficient in showing that the scientific findings could not be replicated. But the underlying message is that resource management at the country level requires as much scrutiny as does the regulatory environment governing individual scientists and institutions, who are eventually held responsible (Mikami, 2018).

Discussion and Conclusion

The entire system of research governance is built on the assumptions that published research is overwhelmingly accurate and produced with methodological and ethical integrity, and that the scientific community can self-govern and self-correct when necessary to reach science's core goals. Recent evidence requires questioning those assumptions. An institutional corruption framework clarifies these issues and suggests interventions to steer science back to its purpose. The framework requires a scientific community that examines its own culture for practices that contribute to diversion from the purpose of science. Such analysis and intervention are necessary to set the conditions for scientific self-governance.

A number of approaches and tools are available.


While meta-science has exposed significant chinks in the accuracy of scientific knowledge, it should now be used to systematically evaluate areas and fields of research to identify their level of integrity and force corrections where necessary. Delineation and enforcement of clear scientific standards should decrease the need for the USA's dominant funding agency to offer remedial instruction. Incorporating negligence into codes of conduct and regulation should support standards and create consequences when they are not met, and regulations should be revised to be rigorous, fair, and evidence based, with adequate oversight. Perhaps most important of all, examples of effective self-correction and self-governance supporting science's core goals must be identified, championed, and tested for broader adoption. While this chapter provides analysis and interpretation of relevant examples, others with important lessons should be added to this analysis.

References

Avenell, A., Stewart, F., Grey, A., Gamble, G., & Bolland, M. (2019). An investigation into the impact and implications of published papers from retracted research: Systematic search of affected literature. BMJ Open, 9(10), e031909. https://doi.org/10.1136/bmjopen-2019-031909
Babb, S. (2020). Regulating human research. Stanford University Press.
Bak, H. (2018). Research misconduct in East Asia's research environments. East Asian Science, Technology and Society, 12(2), 117–122. https://doi.org/10.1215/18752160-6577620
Becker, R. E. (2020). Two cultures in modern science and technology: For safety and validity does medicine have to update? Journal of Patient Safety, 16(1), e46–e50. https://doi.org/10.1097/PTS.0000000000000260
Bielekova, B., & Brownlee, S. (2021). The imperative to find the courage to redesign the biomedical research enterprise. F1000, 10, 641.
Bolland, M. J., Avenell, A., Gamble, G. D., & Grey, A. (2016). Systematic review and statistical analysis of the integrity of 33 randomized controlled trials. Neurology, 87(23), 2391–2402. https://doi.org/10.1212/WNL.0000000000003387
Bordewijk, E. M., Li, W., van Eekelen, R., Want, R., Showell, M., Mol, B. W., & van Wely, M. (2021). Methods to assess research misconduct in health-related research: A scoping review. Journal of Clinical Epidemiology, 136, 189–202. https://doi.org/10.1016/j.jclinepi.2021.05.012
Bryan, C., Tipton, E., & Yeager, D. S. (2021). Behavioural science is unlikely to change the world without a heterogeneity revolution. Nature Human Behaviour, 5(8), 980–989. https://doi.org/10.1038/s41562-021-01143-3
CFRS. (2021). Freedom and responsibility in the 21st century: A contemporary perspective on the free and responsible practice of research. Draft Discussion Paper.
Chambers, C. D., & Tzavella, L. (2021). The past, present and future of registered reports. Nature Human Behaviour, 6, 29–42. https://doi.org/10.1038/s41562-021-01193-7
Desmond, H., & Dierickx, K. (2021). Trust and professionalism in science: Medical codes as a model for scientific negligence? BMC Medical Ethics, 22(1), 45. https://doi.org/10.1186/s12910-021-00610-w
Douglas, H. (2021). Scientific freedom and social responsibility. In P. Hartl & A. T. Tuboly (Eds.), Science, freedom. Routledge.
Drew, T. W., & Mueller-Doblies, U. U. (2017). Dual use issues in research: A subject of increasing concern? Vaccine, 35(44), 5990–5994. https://doi.org/10.1016/j.vaccine.2017.07.109
Du Sert, N. P., Hurst, V., Ahluwallia, A., Alam, S., Avey, M. T., Baker, M., Browne, W. J., Clark, A., Cuthill, I. C., Dirnagl, U., Emerson, M., Garner, P., Holgate, S. T., Howells, D. W., Karp, N. A., Lazic, S. E., Lidster, K., MacCallum, C. J., Macleod, M., et al. (2020). The ARRIVE guidelines 2.0: Updated guidelines for reporting animal research. British Journal of Pharmacology, 177(16), 3617–3624. https://doi.org/10.1111/bph.15193


Egilman, A. C., Kapczynski, A., McCarthy, M. E., Luxkaranayagam, A. T., Morten, C. J., Herder, M., Wallach, J. D., & Ross, J. S. (2021). Transparency of regulatory data across the European Medicines Agency, Health Canada, and the US Food and Drug Administration. Journal of Law, Medicine and Ethics, 49(3), 456–485. https://doi.org/10.1017/jme.2021.67
Errington, T. M., Denis, A., Perfito, N., Iorns, E., & Nosek, B. A. (2021). Challenges for assessing replicability in preclinical cancer biology. eLife, 10, e67995. https://doi.org/10.7554/eLife.67995
Evans, N. G., Selgelid, M. J., & Simpson, M. R. (2022). Reconciling regulation with scientific autonomy in dual-use research. The Journal of Medicine and Philosophy, 47(1), 72–94. https://doi.org/10.1093/jmp/jhab041
Evans, T. R., Branney, P., Clements, A., & Hatton, E. (2021). Improving evidence-based practice through preregistration of applied research: Barriers and recommendations. Accountability in Research, 30, 88. https://doi.org/10.1080/08989621.2021.1969233
Frampton, G., Woods, L., & Scott, D. A. (2021). Inconsistent and incomplete retraction of published research: A cross-sectional study on Covid-19 retractions and recommendations to mitigate risks for research, policy and practice. PLoS One, 16(10), e0258935. https://doi.org/10.1371/journal.pone.0258935
Franzen, S. (2021). University responsibility for the adjudication of research misconduct: The science bubble. Springer.
Freese, J., & Peterson, D. (2018). The emergence of statistical objectivity: Changing ideas of epistemic vice and virtue in science. Sociological Theory, 36(3), 289–313.
Halpern, S. A. (2021). Dangerous medicine. Yale University Press.
Heesen, R., & Bright, L. K. (2021). Is peer review a good idea? The British Journal for the Philosophy of Science, 72(3), 635–663. https://orcid.org/0000-0003-3823-944X
Imperiale, M. J., & Casadevall, A. (2018). A new approach to evaluating the risk-benefit equation for dual-use and gain-of-function research of concern. Frontiers in Bioengineering and Biotechnology, 6, 2. https://doi.org/10.3389/fbioe.2018.00021
International Science Council. (2021a). Opening the record of science.
International Science Council. (2021b). Strengthening research integrity.
Ioannidis, J. P. A. (2018). Meta-research: Why research on research matters. PLoS Biology, 16(3), e2005468. https://doi.org/10.1371/journal.pbio.2005468
Ioannidis, J. P. A. (2012). Why science is not necessarily self-correcting. Perspectives in Psychological Science, 7(6), 645–654. https://doi.org/10.1177/1745691612464056
Jacobs, N. (2020). A moral obligation to proper experimentation: Research ethics as epistemic filter in the aftermath of World War II. Isis, 111(4), 759–780.
Kalichman, M. (2020). Survey study of research integrity officers' perceptions of research practices associated with instances of research misconduct. Research Integrity & Peer Review, 5(1), 17. https://doi.org/10.1186/s41073-020-00103-1
Kavouras, P., & Charitidis, C. A. (2020). Dual use in modern research. In R. Iphofen (Ed.), Handbook of research ethics and scientific integrity. Springer/Nature.
Kurtulmus, F. (2021). The democratization of science. In D. Ludwig, I. Koskinen, L. Mncube, L. Poliseli, & R. Reyes-Garcia (Eds.), Global epistemologies and philosophies of science.
Landeweerd, L., Townend, D., Mesman, J., & Van Hoyweghen, I. (2015). Reflections on different governance styles in regulating science: A contribution to 'responsible research and innovation'. Life Sciences, Society and Policy, 11, 8. https://doi.org/10.1186/s40504-015-0026-y
Landis, S. C., Amara, S. G., Asadullah, K., Austin, C. P., Blumenstein, R., Bradley, E. W., Crystal, R. G., Darnell, R. B., Ferrante, R. J., Fillit, H., Finkelstein, R., Fisher, M., Gendelman, H. E., Golub, R. M., Goudreau, J. I., Gross, R. A., Gubitz, A. K., Hesterlee, S. E., Howells, D. W., et al. (2012). A call for transparent reporting to optimize the predictive value of preclinical research. Nature, 490(7419), 187–191. https://doi.org/10.1038/nature11556
Laurie, G. T., Dove, E. S., Ganguli-Mitra, A., Fletcher, I., McMillan, C., Sethi, N., & Sorbie, A. (2018). Charting regulatory stewardship in health research: Making the invisible visible. Cambridge Quarterly of Healthcare Ethics, 27(2), 333–347. https://doi.org/10.1017/S0963180117000664


Lessig, L. (2013). "Institutional corruption" defined. Journal of Law, Medicine and Ethics, 41(3), 553–555. https://doi.org/10.1111/jlme.12063
Li, W., Gurrin, L. C., & Mol, B. W. (2022). Violation of research integrity principles occurs more often than we think. Reproductive Biomedicine Online, 44(2), 207–209. https://doi.org/10.1016/j.rbmo.2021.11.022
Littoz-Monnet, A. (2020). Governing through expertise: The politics of bioethics. Cambridge University Press.
Maurer, S. M. (2017). Self-governance in science. Cambridge University Press.
Miedema, F. (2022). Open science: The very idea. Springer.
Mikami, K. (2018). The case of inferred doability: An analysis of the socio-institutional background of the STAP cell scandal. East Asian Science, Technology and Society, 12(2), 123–143. https://doi.org/10.1215/18752160-4202323
Montgomery, K., & Oliver, A. L. (2009). Shifts in guidelines for ethical scientific conduct: How public and private organizations create and change norms for research integrity. Social Studies of Science, 39(1), 137–155. https://doi.org/10.1177/0306312708097659
Montoya, A. K., Krenzer, W. L. D., & Fossum, J. L. (2021). Opening the door to registered reports: Census of journals publishing registered reports (2013–2020). Collabra: Psychology, 7(1), 24404. https://doi.org/10.1525/collabra.24404
Musunuri, S., Sandbrink, J. B., Monrad, J. T., Palmer, M. J., & Koblentz, G. D. (2021). Rapid proliferation of pandemic research: Implications for dual-use risks. mBio, 12(5), e0186421. https://doi.org/10.1128/mBio.01864-21
Nelson, J. P., Selin, C. L., & Scott, C. T. (2021). Toward anticipatory governance of human genome editing: A critical review of scholarly governance discourse. Journal of Responsible Innovation, 8(3), 382–420. https://doi.org/10.1080/23299460.2021.1957579
NIH. (2021). Strategic plan, 2021–2025.
Offit, P. A. (2021). You bet your life. Basic Books.
Redman, B. K. (2015, March 25). Are the biomedical sciences sliding toward institutional corruption? Harvard University, Edmond J. Safra Working Paper No. 59.
Resnik, D. B. (2018). Research integrity. Springer.
Scheel, A. M., Schijen, M. R. M. J., & Lakens, D. (2021). An excess of positive results: Comparing the standard psychology literature with registered reports. Advances in Methods and Practices in Psychological Science, 4(2), 1–12. https://doi.org/10.1177/25152459211007467
Schneider, J. W., Horbach, S. P. J. M., & Aagaard, K. (2021). Stop blaming external factors: A historical-sociological argument. Social Sciences Information, 60(3), 329–337.
Sil, A., Bespalov, A., Dalla, C., Ferland-Beckham, C., Herremans, A., Karantzalos, K., Kas, M. J., Kokras, N., Parnham, M. J., Pavlidi, P., Pristouris, K., Steckler, S., Riedel, G., & Emmerich, C. H. (2021). PEERS: An open science "platform for the exchange of experimental research standards" in biomedicine. Frontiers in Behavioral Neuroscience, 15, 755812. https://doi.org/10.3389/fnbeh.2021.755812
Teodorescu, K., Plonsky, O., Ayal, S., & Barkan, R. (2021). Frequency of enforcement is more important than the severity of punishment in reducing violation behaviors. Proceedings of the National Academy of Sciences, 118(42), e2108507118. https://doi.org/10.1073/pnas.2108507118
Thompson, D. F. (2013, August 1). Two concepts of corruption. Harvard University, Edmond J. Safra Center for Ethics Working Paper No. 16.
Upshur, R., & Goldenberg, M. (2020). Countering medical nihilism by reconnecting facts and values. Studies in History and Philosophy of Science, 84, 75–83. https://doi.org/10.1016/j.shpsa.2020.08.005
Vennis, I. M., Schaap, M. M., Hogervorst, P. A. M., deBruin, A., Schulpen, S., Boot, M. A., van Passel, M. W. J., Rutjes, S. A., & Bleijs, D. A. (2021). Dual-use Quickscan: A web-based tool to assess the dual-use potential of life science research. Frontiers in Bioengineering and Biotechnology, 9, 797076. https://doi.org/10.3389/fbioe.2021.797076
Xie, Y., Wang, K., & Kong, Y. (2021). Prevalence of research misconduct and questionable research practices: A systematic review and meta-analysis. Science & Engineering Ethics, 27(4), 41. https://doi.org/10.1007/s11948-021-00314-9

Chapter 6

Conflict of Interest and Commitment and Research Integrity

A long-standing scientific norm is that integrity requires disinterestedness rather than personal gain. Conflict of interest (COI) can violate that norm. Proper management of COI is essential to research integrity: it protects research subjects against harm and deception, and it protects the scientific record against often unrecognized bias in the design, conduct, analysis, and reporting of research. Conflicts of interest may affect researchers, ethics committees, research institutions and their governing boards, journals, and potentially all institutions involved in producing and disseminating science. Attention to institutional conflicts lags well behind attention to those of individuals. No studies addressing the impact of COI policies on research output could be located (Fabbri et al., 2021).

Background

The history of COI has been well described. Emerging in the 1960s and more broadly in the 1980s as a cycle of denial, scandal, reform, and more denial, it has played out particularly in relation to corporate influence over areas of science in which the goal should be to advance the public interest. Policies to control these influences have largely been ineffective, partial, and highly inequitable among the players (Thacker, 2020).

Conflict of interest and how to manage it are well described conceptually, and Dennis Thompson provides such a roadmap. "A conflict of interest is a set of conditions in which professional judgment concerning a primary interest …tends to be unduly influenced by a secondary interest… Conflict of interest rules, informal and formal, regulate the disclosure and avoidance of these conditions" (p. 290).


"It is important to note that the secondary interests are often not illegitimate in themselves and may be a part of professional practice. The goal is to prevent secondary factors from dominating or appearing to dominate the relevant primary interest in the making of professional decisions" (p. 291) (Thompson, 2004). Note that the existence of interests is not in itself an ethical issue.

COI usually focuses on financial matters, largely because they are more objective. They are, however, not necessarily more problematic than other kinds of interests, such as pressure to favor the research findings of those in one's own group. Nonfinancial COI range from intellectual interests to personal beliefs, advocacy, theoretical commitments, and "researcher allegiance" to the superiority of an intervention (Grundy, 2021). Nonfinancial COIs clearly extend to peer review, which is widely used to select manuscripts, allocate research funding, and decide academic hiring and promotion; in many of these decisions, who the peer reviewers are and what their conflicts are is not known to the person being judged or to the institution requesting the review. Note that Thompson's definition fits both financial and nonfinancial COI. A common attitude is that COIs are so ubiquitous that they cannot be avoided (Thompson, 2004).

The purpose of COI rules is to maintain the integrity of professional judgment by removing factors that can distract from concentration on primary goals. A second purpose is to maintain confidence in professional judgment, which means that a failure to avoid COI is wrong (unethical) even if one is not actually influenced by secondary interests. Standards for assessing COI include severity (the likelihood of influence or of the perception of influence) and the seriousness of the harm likely to result. Likelihood can be assessed by considering the value of the secondary interest (the greater the value, the more probable its influence), the scope of the conflict (longer and closer relationships increase risk), and the extent of discretion in a particular case, including accountability to and review by colleagues (Thompson, 2004). Incentives or conflicting loyalties may encourage breaking obligations (Rodwin, 2019).

Remedies range from less to more extensive COI control. Regulation by the profession has generally not been satisfactory; indeed, a profession has an inherent conflict when it regulates itself. While disclosure of a COI is the most commonly used remedy, it merely signals a potential problem without giving a clear notion of its nature or size. It can also lead to moral license, a sense that once disclosed, one is morally free. More stringent methods include abstention from a particular case or abstention/withdrawal from an area in which researchers or institutions have substantial secondary interests (Thompson, 2004). Thompson's work thus outlines the problem, how to judge its seriousness, and remedies. Appraisal tools that address COI typically do so superficially (noting only whether COI information was available) and rarely address how adequately the conflict was managed or may have influenced studies. Fewer than half of top medical journals have explicit policies on managing COI (Lundh et al., 2020).

This chapter addresses the current state of COI as an issue affecting integrity in biomedical research, including case examples of how COI can affect whole areas of research with direct impact on patients. Current management and regulation of COI are then addressed, noting that more radical approaches are in order. In general, more effective management has been stunted by a lack of empirical research probing the true nature, causes, and effects of COI in research.
It should be noted that COI policies for data and safety monitoring boards (DSMBs) provide an excellent, well-defined example (Ellenberg et al., 2019) (see discussion later in this chapter).


Finally, the re-emergence of COI and conflict of commitment as an issue of national security is being vigorously pursued.

State of COI as an Issue

A 2009 Institute of Medicine report, Conflicts of Interest in Medical Research, Practice and Education, provided a comprehensive review of empirical research on this topic. A 10-year follow-up (Torgerson et al., 2022) documented the establishment in 2013 of the US Physician Payments Sunshine Act, which mandates public disclosure of payments from pharmaceutical, medical device, and medical supply companies to health care providers and academic hospitals, including research grants. The quality of data in this database is enhanced by internal audits, corroboration with other sources, and retraction of data found to be inaccurate. The 10-year update goes on to note that recent research has identified a high prevalence of COIs among authors of clinical trials, frequently under-reported and undisclosed, which presents a risk of bias. COIs in clinical trials can result from the trial being industry funded and from investigators having personal COIs (Torgerson et al., 2022). Especially relevant emerging areas of COI investigation include the lack of COI policies addressing journal editors, editorial boards, and peer reviewers, and the effects of large reprint sales from journals to commercial entities. Journal policies suggest that authors should always disclose COI; at the same time, this policy is variably enforced and, perhaps more importantly, has not been seen as relevant for editors themselves to declare. Concern has also been raised about the effects of industry financial support to patient advocacy groups. All of these practices currently rest on the honor system and are poorly disclosed, though they could carry penalties. Even at that, disclosure does not eliminate the risk of bias (Torgerson et al., 2022).

More recent summaries in specialized areas found that the medical product industry maintains an extensive network of financial and nonfinancial ties with all parties in the health care network, including research. This network is largely unregulated and opaque. Without effective oversight, these practices might threaten the integrity of research, health care systems, and patient outcomes. The cumulative effects of these conflicts are unexamined, and public data sources seldom describe or quantify these ties; thus, patient care is likely to be affected without the public's knowledge (Chimonas et al., 2021). Two relevant Cochrane reviews have shown that systematic reviews with financial conflicts of interest (FCOI) more often have favorable conclusions and tend to be of lower methodological quality than reviews without FCOI. Only those without FCOI should be used; if there are none, users should read the review conclusions with skepticism, critically review the methods used, and interpret the results with caution (Hansen et al., 2019). Both industry funding and investigator ties to the associated industry are associated with more favorable trial outcomes (Lundh et al., 2017).


There are some indications of improvement in COI management and in the evidence base to support it. A large study (Serghiou et al., 2021) of open-access articles from 1959 to 2020 found substantial improvement in COI disclosure, especially funding disclosures. The study also recommends that the National Library of Medicine standardize its reporting of COIs. Others (Shaw, 2022) suggest developing a registry that could share COI statements with all journals.

Case studies in particular areas of health science have documented potential effects of pervasive undisclosed conflicts of interest. It should be noted that these cases are examples, should not be generalized, and are certainly not exhaustive.
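Large-scale disclosure studies of the kind Serghiou et al. (2021) report depend on automated scanning of article full text. The sketch below shows, under assumed and deliberately crude heuristics, how such a scan might classify disclosure statements; it is illustrative only and is not the pipeline those authors used.

    import re

    # Illustrative patterns only; real pipelines parse article sections and use
    # much richer heuristics (and usually manual validation of a sample).
    COI_PATTERNS = [
        r"conflicts? of interest",
        r"competing interests?",
        r"disclosure statement",
    ]
    NEGATION = re.compile(r"\b(no|none)\b[^.]{0,40}\b(conflict|competing|declare)",
                          re.IGNORECASE)

    def classify_coi_reporting(full_text: str) -> str:
        """Rough three-way classification of an article's COI reporting."""
        text = full_text.lower()
        if not any(re.search(p, text) for p in COI_PATTERNS):
            return "no disclosure statement found"
        if NEGATION.search(text):
            return "statement present: declares no conflicts"
        return "statement present: declares one or more conflicts"

    print(classify_coi_reporting("Competing interests: none declared."))
    print(classify_coi_reporting(
        "Conflict of interest: the author reports consulting fees from a device maker."))

Even a crude classifier like this makes clear why the mere presence of a statement says nothing about whether the declared interests are complete or accurate, which is the concern raised by the concordance studies discussed below.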

Case Studies

For example, persons with an autism diagnosis are often treated with applied behavior analysis. The prevalence of conflicts of interest among autism intervention researchers is high, in part because these same individuals offer (and sometimes advertise) the behavior-analysis services they study. An analysis of eight journals publishing this research over a 1-year period found that 84% of studies had at least one author with this type of COI; it was disclosed in only 2% of cases, and most authors (87%) claimed they had no COIs. Pervasive, undisclosed COIs of this nature likely lead to researcher bias if appropriate precautions are not taken. In intervention research, COI occurs when a researcher can potentially benefit from demonstrating that the interventions they provide are effective. This is an employment-related COI; it can be joined by detection bias (the evaluator knowing which treatment a child is receiving). The authors of this analysis posit that these conditions could contribute to the persistently large percentage of applied behavior analysis studies with low-quality designs. At a minimum, interventionists on such a research team should remain independent of data collection and analysis (Bottema-Beutel & Crowley, 2021). It is important to note that developers of innovative programs addressing pressing global mental health and social problems must disseminate those programs at a scale beyond research demonstration projects. But once a scientific COI has been declared, there is no guiding framework for how to continually manage it over further program development, evaluation, and dissemination. Initial proof-of-concept research is often led by the program developer but must then pivot to independent evaluation. For such programs, Sanders et al. (2020) suggest an institutionally approved COI management plan and external oversight from a COI management compliance officer.

A second case involves the impact of author financial conflicts on robotic-assisted joint arthroplasty research. Nearly all studies comparing conventional to robotic joint arthroplasty involve financially conflicted authors and show results favorable to the robotic option. Further high-quality, nonconflicted investigations will be necessary to determine whether robotic technology leads to lower surgical revision rates and increased implant longevity (DeFrance et al., 2021).


A third case is British, although similar action occurred earlier in the USA. Vaginal mesh was introduced in 1998 as a novel surgical treatment for stress urinary incontinence and was later also used to repair pelvic organ prolapse in women. In the USA, the mesh was approved by "substantial equivalence" with a previously approved product that was itself recalled (a regulatory failure). Twenty years later, tens of thousands of compensation claims had been filed in Britain against this product for pain and other serious side effects (Gornall, 2018a). It was later revealed that the investigator who developed the product was paid by a commercial entity contingent on a second trial showing the same positive results as the first. This blatant conflict of interest was not acknowledged in the published paper, although COI declarations were not common at that time (Gornall, 2018b). Informed consent requires that a treatment be reasonable, as well as disclosure of information and the patient's permission. Over 10 years, the company sponsored much of the research on the use of this mesh, producing a body of evidence that created a false narrative of vaginal mesh efficacy (O'Neill, 2021). Eventually, patients' complaints of serious adverse effects, and the suits that they filed, drew attention to the issue and to the blatant conflict of interest. The average patient has no way of knowing whether her surgeon is involved with the company whose product is the implant she is receiving.

As noted earlier, the US Sunshine Act, passed into law in 2013, requires pharmaceutical and device companies to publicly record all financial relations with physicians, and the resulting database is available online (Gornall, 2018c). However, a summary of 27 journal articles studying the concordance of authors' declared financial conflicts of interest with payment databases found a large percentage of authors whose COI disclosures did not reflect what was reported in the database; the authors conclude that under-reporting of health science researchers' financial conflicts of interest is pervasive (El-Rayess et al., 2020). Mialon et al. (2022) tracked conflicts of interest among members of the advisory committee for the US Dietary Guidelines for Americans (2020–2025) and found that 95% had COI involving the food and pharmaceutical industries; public disclosure does not appear to have taken place.

A fourth case considers a question: has deep public mistrust of the pharmaceutical industry, and the extensive links between academic researchers and industry, played heavily into vaccine hesitancy? Industry possesses much of the world's expertise in vaccine research, which makes its closed, non-transparent practices ripe for suspicion. Yet these partnerships can greatly facilitate scientifically robust and timely vaccine development. Moves toward mitigation might include explaining why the partnership is important, making public the membership of industry committees involved in vaccine decision-making, and assuring a lack of personal monetary benefit (McClymont, 2021); in other words, controlling both real and perceived COI.

These cases provide examples of researchers with real or perceived financial interests in a product they evaluate. But COI is just as relevant for all bodies and institutions involved in the production, regulation, and dissemination of research. IRB members should not review proposals in which they have a COI.


Shouldn't conflicts be disclosed to research subjects, since they are relevant to both an individual's and an IRB's estimation of risk? And because, in the USA, IRBs are located in research-producing institutions and their members are employed by those institutions, is there an incentive for an IRB to act in ways that protect the institution? These lessons must be extended from voluntary, sometimes-enforced rules about COI to transparent, forthcoming declarations for whole sectors, such as vaccine research and development organizations. Some fairly easy steps could be taken to support COI transparency, such as requiring links to the Open Payments database for physician authors. In addition, WHO's International Clinical Trials Registry and ClinicalTrials.gov lack sufficient information about COI, potentially undermining interpretation of their entries by providers or patients (Buruk et al., 2021); such information should be required.

COI Management and Regulation

There are a number of nodes, structures, and practices in research production and dissemination at which COI can be managed. These include formal regulations, self-regulation by the scientific community, and institutional arrangements to support translational research and, in general, to control institutions' own conflicts of interest (ICOI). While some believe more radical approaches are necessary, simple steps could be introduced immediately. Why haven't those steps been taken, and why are the biomedical sciences stuck on disclosure, which is a very partial, incomplete approach?

Regulation

A thorough review of COI standards in human subjects research, authored by Rodwin in 2019, reaches harsh conclusions. There is no single or consistent set of COI rules that applies to all, or even most, human subjects research, which allows researchers and institutions to follow as few standards as possible. Existing COI standards lack teeth, primarily because their oversight is delegated to the institutions that receive federal funds. US governmental standards typically require only disclosure of financial interests and rarely prohibit conflicted researchers and institutions from participating in research or from overseeing the research creating the conflict of interest. Studies evaluating whether such supervision works are lacking. Disclosure is necessary to identify COI but does not resolve or mitigate the conflict. While COI can always be resolved, institutions rarely want to take the steps to do so; they are more likely to try to mitigate a conflict by assigning an individual to oversee the conflicted researcher's work (Rodwin, 2019).

Some recent regulatory changes offer opportunities for re-calibrating COI practices.


Effective January 2018, NIH policy and the Common Rule require all multisite studies at US institutions to use a single IRB, sometimes called a central IRB, which removes some of the concerns about IRB members' loyalty to their employing institution. An example of a government review board is the National Cancer Institute Central IRB, which reviews research funded by that agency. Commercial IRBs are organizationally separate from the sponsor and the research site and are supported on a fee-for-service basis. This arrangement relieves some potential COIs with an employing institution but introduces potential incentives to "IRB shop." Commercial and government central IRBs can use firewalls between review and financial matters and can use nonaffiliated reviewers. No standard system seems to exist for academic central IRBs, although theoretically they could be influenced by the additional fees brought in by serving in this role. A preliminary study could not address how these protective strategies worked but noted that academic central IRBs did not report any specific policies to manage their COIs (Pivovarova et al., 2019).

Other areas of COI regulation remain unresolved. Grundy et al. (2020) analyzed COI policies from prominent organizations in health-related research. While nonfinancial COI (commonly termed "intellectual" COI) might be addressed in these policies, there was little consensus about what should be controlled: particular schools of thought, social relationships, or professional experiences. Management strategies favored disclosure or restriction of participation, but none described consequences for non-adherence to the management plan. Alarms were raised that "intellectual" or personal-relationship COI might include sensitive information, that disclosure might violate privacy legislation, and that exclusion might be a form of discrimination. Current COI policies may violate proportionality, by failing to address how the severity of the policy response corresponds to the likelihood or seriousness of harm, and fairness, through perceived arbitrariness of decisions about whom to exclude (Grundy, 2021). Others suggest that there is no universal standard by which COIs can be identified. How do we know whether COIs actually represent a large enough risk of bias to justify mandatory disclosure? The information in COI disclosures is too sparse to allow a full risk-of-bias determination, and under such uncertain conditions an honest author required to disclose COI may suffer an undeserved questioning of his or her scientific integrity (Tresker, 2022). Yet Wiersma and colleagues note that relationships, desire for prestige, friendships or rivalry, the need to reciprocate and to avoid social disapproval, and religious or cultural beliefs can all distort research, policymaking, and patient care. Indeed, the management of COIs is very unsettled, and the effects of current and improved policies are generally not known (Wiersma et al., 2018).

Scientific Self-Regulation and Other Prudent Practices

Because significant expertise resides in the scientific community, practices involving self-regulation of COI by that community are important. For example, the Cochrane Collaboration represents a group of scientists and methodologists who have developed a rigorous methodological manual for summarizing (through meta-analysis) the state of knowledge in a particular field.


That work is a form of self-regulation by the community of scientists. One instrument produced by the Collaboration is a risk-of-bias assessment tool (RoB 2) that addresses flaws in study design, conduct, and analysis that affect study results. Evaluating the risk of bias in each study included in a systematic review documents potential flaws in the evidence summarized and contributes to the certainty of the overall evidence. Bias is defined as a systematic deviation from the effect of intervention that would be observed in a large randomized trial without any flaws. Five bias domains, the important mechanisms by which bias can be introduced into a trial, are considered for each outcome or endpoint: the randomization process, deviations from intended interventions, missing outcome data, measurement of the outcome, and selection of the reported result; these domain judgments are then combined into an overall risk-of-bias judgment (Sterne et al., 2019).

Sometimes bias assessment and conflict of interest are examined in the same set of studies (or reviews) to test whether either or both affect the validity of the research and, therefore, research integrity. For example, Mandrioli et al. (2016) studied reviews of the effects of artificially sweetened beverages on weight. Review sponsorship and authors' financial conflicts of interest introduced bias affecting the outcomes of these reviews that could not be explained by other sources of bias. Industry sponsorship and author COI (many of which were not reported) were associated with favorable findings on the effects of artificially sweetened beverages on weight outcomes (Mandrioli et al., 2016). While procedures for adjusting bodies of evidence infested with COI are not standardized or precise, some suggestions have been offered. Moving beyond the evaluation of individual studies, especially early ones, which can suffer from inflated effect sizes, it is possible to down-adjust effect sizes by estimates of industry bias, publication bias, or inaccuracy in the evidence base (Fuller, 2018).

Asking experienced clinical trialists with methodological or statistical expertise about practices they have encountered that might unduly influence trials yielded important insights. In the design stage: inferior comparators, a suboptimal primary outcome, and choice of the research agenda; in the conduct stage: manipulation of the randomization process or prematurely stopping the trial; in the analysis phase: blocking data access, multiple unplanned analyses, or fabrication of data; in the reporting stage: spin, premature release of results, prevention of publication, or contractual restraints. To counter these practices, management of conflicts of interest in clinical trials should include preplanned methods, exclusion of collaborators with COI, adequate randomization and blinding procedures, independent data monitoring and trial steering committees, analysis by an independent outside researcher, and detailed reporting if the protocol was not adhered to. These clinical trial experts also noted that funding of trials by government agencies may mask a political agenda, itself a COI (Ostergaard et al., 2020).
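To make the domain structure of RoB 2 concrete, the sketch below records per-domain judgments for a single hypothetical trial and rolls them up into an overall judgment. The aggregation rule is a deliberate simplification of the published RoB 2 guidance, which leaves room for reviewer judgment; nothing here is taken from the tool's official implementation.

    from dataclasses import dataclass

    DOMAINS = (
        "randomization process",
        "deviations from intended interventions",
        "missing outcome data",
        "measurement of the outcome",
        "selection of the reported result",
    )

    @dataclass
    class RoB2Assessment:
        study_id: str
        judgments: dict  # domain -> "low" | "some concerns" | "high"

        def overall(self) -> str:
            """Simplified roll-up: 'high' if any domain is high risk or several
            domains raise some concerns; 'some concerns' if any domain does;
            otherwise 'low'."""
            values = [self.judgments[d] for d in DOMAINS]
            if "high" in values or values.count("some concerns") >= 3:
                return "high"
            if "some concerns" in values:
                return "some concerns"
            return "low"

    # Hypothetical assessment of one trial included in a systematic review
    example = RoB2Assessment(
        study_id="Trial-001",
        judgments={
            "randomization process": "low",
            "deviations from intended interventions": "some concerns",
            "missing outcome data": "low",
            "measurement of the outcome": "low",
            "selection of the reported result": "some concerns",
        },
    )
    print(example.overall())  # -> "some concerns"

Recording judgments in a structured form like this, per outcome and per study, is what allows a review team to report how much of its evidence base rests on studies at high risk of bias and to examine whether sponsorship or author COI tracks those judgments, as in the Mandrioli et al. (2016) analysis.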

COI Management Supporting Translational Science

There has been considerable pressure to join academic and commercial entities in order to speed the translation of research into societal benefits and products (see further discussion in Chap. 3).

COI Management and Regulation

101

institutional logics as well as degrees and levels of oversight. Ideal academic logic focuses on search for fundamental knowledge, research freedom, rewards from peer recognition, and open disclosure of research results. Commercial logic uses limited disclosure and private appropriation of financial returns from research. How could these different cultures work together to achieve moving basic science to product? Perkmann et al. (2019) provide an example of how university–industry research centers established and managed hybrid spaces—organizational units characterized by a logic different from the dominant logic of the organization (academic). A structural hybrid allowed engagement with a minority logic (commercialization) so that both logics could coexist, which is different from trying to blend the two logics and their associated notions of research integrity. The structured hybrid required, for example, working together in scientific areas where industry partners could support open dissemination of research results or dealing with publication delays to allow for patent application. The hybrid spaces allow heterogeneous groups to interact productively, each adhering to its logic, and doing so on an ongoing basis (Perkmann et al., 2019).

Institutional Conflicts of Interest (ICOI)

Most universities have COI policies for individuals involved in research but not for the institution itself, in part because the former is required by federal research policy and the latter is not. This is the case even though ICOI can affect patients, multiple investigators, and the entire institution. Cigarroa et al. (2018) note that ICOI disclosures should be publicly available, and that if an ICOI cannot be managed appropriately, the involved parties should not allow the research to proceed. ICOI may arise: "(1) when an institution licenses intellectual property to an outside entity and holds substantial royalty or equity in the entity, which may be affected by ongoing institutional research or other institutional activities, (2) when substantial gifts to the institution appear to be connected to any decision related to the institution's primary mission in ways that may not be appropriate, (3) when an institution holds a substantial investment or equity interest in an outside entity that has a financial or business relationship with the institution, (4) when a significant outside financial interest affects or appears to affect the decisions of the institution, or (5) when an institution enters into a transaction that compromises or appears to compromise the institution's research mission" (Cigarroa et al., 2018, pp. 2305–2306). Such ICOI frequently involve research funding from private sources. Kahr and Hollingsworth (2019) describe a case they believe showed a failure of research accountability after a faculty member made massive gifts to a public university. An institutional conflict of interest committee composed of expert faculty members, external members, and perhaps general counsel and an ethicist should establish an appropriate ICOI assessment and management process, with a reporting relationship to the institution's president and board of trustees (Cigarroa et al., 2018).


Necessary Conditions/Consequences

Regulatory capture by industry and sloppy COI prevention, disclosure, and management practices are not inevitable, but efforts to correct them have certainly been thwarted by parties protecting their products and interests. Guard rails for protecting science, peer review, and regulatory decision-making have suffered sustained attack; government efforts to deal with these attacks have been inadequate. The tobacco, lead, sugar, oil, and gas industries have used the same playbook of controlling research to support their products, both by funding research friendly to their products and through concealment. Reed et al. (2021) suggest steps necessary to right this injustice: require separation of industry funding from research evaluation of a product's safety or harms; expose corporately funded and directed front groups that circulate patently false information; fully implement and enforce scientific integrity policies at government science agencies; make sponsorship of clinical trials publicly available, as on ClinicalTrials.gov; and have public regulators assign independent teams to evaluate drug products (Reed et al., 2021). Endemic financial entanglement is distorting the production and use of research evidence, causing harm to individuals (Moynihan et al., 2019), very little of which is ever tracked. Conflict of interest can also create a serious risk of harmful distrust in science and can impair the ability of the scientific community to self-correct. Contessa (2022) notes that it is far from obvious that science as currently practiced is trustworthy. In most cases of societal distrust, economic interests have played a crucial role in the production and uptake of the science of interest, and since commercial funding of research is now dominant and growing, scientists may be operating under more or less open forms of COI (Contessa, 2022). Public policies have not controlled these forces, and those relevant to research are often not enforced. More radical approaches may be necessary.

More Radical Approaches

A recent summary of a model used by eight corporate sectors (some of which are health-related) showed repeated engagement in activities to influence science, including manipulation of scientific methods, reshaping criteria for establishing scientific "proof," making threats against scientists, and promotion of policy reforms that increase reliance on industry. Legg et al. (2021) carefully describe these strategies as used by the alcohol, chemicals, food and drink, pharmaceutical, and medical technology sectors, yielding a Science for Profit Typology. These authors conclude that addressing the underlying drivers of corporate influence on science has to be achieved by structural changes in the way science is funded and its processes and outcomes are disclosed.


Others concur. Elliott (2018) suggests several options: industry-funded safety study data should be made public or, better yet, safety studies should be taken out of the hands of industry altogether. Brown (2017) also suggests taking medical research completely out of private hands, making it publicly funded, with all testing done by public agencies and all research results publicly owned. Others (Biddle, 2013) suggest that, at the least, traditional norms of science should be supported by designing a system that institutionalizes criticism and dissent in pharmaceutical research. Debate in an adversarial setting such as a science court, which would have the power to compel disclosure of scientific information and whose findings would be considered in the FDA's regulatory process (Biddle, 2013), is an additional way to "let the sunshine in." While there is no direct way to quantify the magnitude of interests following this Science for Profit ideology, it is important to pause and reflect on the damage these practices have done. They have required public resources to address health challenges these companies have created or exacerbated. As corporate influences become pervasive, it may not be possible to locate unconflicted individuals, including patients, who are available for peer and other review functions (Marks, 2020a). COI policies were totally ineffectual in the opioid crisis, which overwhelming evidence shows was created or exacerbated by webs of influence from several pharmaceutical companies (Marks, 2020b). Counterstrategies of influence as ambitious as those of the involved corporations must be developed and put in place to deal with the cumulative effects of corporate strategies that have caused this harm (Marks, 2020a). There is little movement to control COI in research, perhaps in part because of a belief that doing so would disrupt current practices of the scientific community as well as those of the institutions in which its members are housed. In fact, NIH's first draft of COI policies sparked fierce opposition from the scientific community, was withdrawn, and was reintroduced a few years later as less strict guidelines (Hauray et al., 2021). In a further example, Christian (2022) raises relevant concerns about scientific experts, in this case regarding CRISPR/Cas-based genome editing, serving as public advocates for regulatory frameworks. These experts issue moral judgments on the risks and benefits of a new technology, describing a responsible pathway and making judgments about which groups and individuals are credible on the issue. At the same time, these scientists are active in pursuing research in the field, serve on scientific advisory boards, and often have ownership in related companies; yet, their obvious COI are not recognized and frequently not disclosed. Still, some simple steps would help to control the under-reporting and biasing effects of COI. After describing significant undisclosed industry payments to physician authors publishing in the New England Journal of Medicine and the Journal of the American Medical Association, Baraldi et al. (2022) suggest simply requiring a link to the Open Payments database with every paper submission. As it stands, physician researchers who receive industry payments are more likely to report results favorable to the company funding them. It is also important to note that both of these journals, and others, have confronted the resignation or dismissal of their own editors-in-chief over COI (Baraldi et al., 2022).


Paludan-Muller and colleagues (2021) documented industry partners' control of research data, including its review during trials, and of the content of publications (Paludin-Muller et al., 2021). Academic institutions could easily ban such agreements and practices, and surely patients and research subjects should be notified of them. It is also important to note that the construct of COI is incomplete. Much of the corruption of medical science via the pharmaceutical industry is not captured well by COI. Companies can do their own research, decline to make it public when it suits them to protect "trade secrets," and smoothly integrate this work with medical science, taking advantage of the legitimacy of the latter. As noted above, they do so through choices of comparators, doses, surrogate endpoints, study populations, trial durations, and definitions (Sismondo, 2021). Until comparable government regulations, standards, and expectations of scientific self-regulation govern all research, such practices will continue. A weak regulatory system supports such practices; entirely absent are empirical studies that would document their effects on subsequent research, on research participants, and on subsequent patients. There are, however, a few innovative approaches to COI that promise an eventual breakthrough.

Important New Approaches

Authors in management science have suggested integrated approaches including prevention, confrontation, and punishment, going well beyond disclosure, which is the common approach to managing COI in biomedical research. For prevention, training and experience can help individuals and organizations to recognize COIs in particular situations. Structuring roles so that COI is unlikely is an additional preventive strategy. Punishment can include reprimand and eventually termination of employment. It is necessary to take the full range of actions to manage COI (Nia et al., 2022). But perhaps the most useful empirical and conceptual breakthroughs have been published by S. Scott Graham and collaborators (2022a, 2022b). These authors begin with a summary of the evidence to date: "Substantial evidence indicates that industry funding of biomedical research and author COI arising from financial relationships with medically related industry can bias research results. Associations between industry funding or COI and positive outcomes such as results favourable to the sponsor, are the most well documented. Available evidence indicates that industry-funded trials can be up to 5.4 times more likely to return positive results than trials not sponsored by industry, and trials with author COI may be as much as 8.4 times more likely to return favorable results when compared with those without author COI. Additional research has demonstrated that industry funding and COI may be associated with reduced drug and device safety and can have adverse effects on the methodological quality of clinical trials. Recent studies also suggest that industry sponsorship may be associated with premature trial termination and non-reporting of trial results. Calls for more evidence documenting that industry funding and COI can measurably bias biomedical research persist even though these findings have been repeatedly replicated" (Graham et al., 2022a, 2022b, p. 1, and see citations supporting these findings).


Graham et al. (2022a) note that COI policies and guidelines routinely make distinctions based on the method of remuneration or the monetary value of the disbursement but do not agree on the risk presented by different types or magnitudes of COI. Recommended disclosure thresholds vary widely, and it is not clear that different types of COI carry different levels of risk for biomedical research. Graham and co-authors could not locate any studies assessing the effects of the magnitude of industry funding or author COI, a significant gap in the evidence base needed for COI policy that safeguards the integrity of biomedical research. In other words, favorability of results is still the overwhelmingly dominant target outcome; current evidence cannot guide stratification by type or magnitude of COI (Graham et al., 2022a). In addition, policies that focus on individual researchers alone cannot control risks without attending to institutional COI as well as to the aggregation of influence across decision-making systems. COI is relational. Bias induced by financial relationships among a small minority of researchers may compromise the entire system, and such bias is more difficult to mitigate when attention is paid only to individual COIs. Thus, a new intellectual framework is required to adequately study COI (Graham et al., 2022a). Some authors suggest that the consequences of not studying and managing COIs effectively can be profound. For example, clinical practice guidelines for opioid prescribing from 2007 to 2013 were at risk of bias because of pervasive conflicts of interest with the pharmaceutical industry and a paucity of mechanisms to mitigate bias (Spithoff et al., 2020).
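Figures of the "5.4 times more likely" kind quoted above are typically reported as odds ratios. The toy calculation below shows how such a ratio is derived from a two-by-two table; the counts are purely illustrative inventions, not data from Graham et al. (2022a) or any other study.

    def odds_ratio(favorable_exposed, total_exposed,
                   favorable_unexposed, total_unexposed):
        """Odds ratio of a favorable result for 'exposed' studies (e.g.,
        trials with author COI) versus 'unexposed' studies (trials without)."""
        a = favorable_exposed
        b = total_exposed - favorable_exposed
        c = favorable_unexposed
        d = total_unexposed - favorable_unexposed
        return (a / b) / (c / d)

    # Purely illustrative counts:
    # 80 of 100 trials with author COI report favorable results,
    # 40 of 100 trials without author COI report favorable results.
    print(round(odds_ratio(80, 100, 40, 100), 1))  # 6.0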

An Area where Standards for COI Are Well Described: Data Safety Monitoring Boards

The purposes of data safety monitoring boards (DSMBs) are to identify any serious emerging trial safety concerns as rapidly as possible, in order to minimize the time in which research participants may be at excess risk, and to identify problems with the conduct of the trial that could be corrected. Because financial or scientific conflicts of interest might cause investigators or the trial sponsor to take actions that would diminish the integrity or credibility of a trial, sole responsibility for interim monitoring of data on safety and efficacy is assigned to the DSMB, whose members have no involvement in the trial and no vested or regulatory interests in trial results, including in another related trial or in competing products or institutions. Regulators increasingly expect companies running clinical trials to have DSMBs, blunting the companies' financial and reputational conflicts of interest. The DSMB charter should include procedures for ensuring lack of conflict of interest (Ellenberg et al., 2019).


Evidence about whether all DSMBs adhere to these COI and other protections could not be located. While all trials require careful monitoring, those most in need of a DSMB are randomized trials that may provide definitive data about treatments intended to save lives or prevent serious disease, or that test novel or high-risk treatments with the potential to change medical practice. Some have suggested that all gene therapy trials should have DSMBs. DSMBs are only minimally addressed in government regulations; rather, they are stipulated by funding agencies including the National Cancer Institute and its cooperative groups (Ellenberg et al., 2019). A single 11-member DSMB monitors all government-funded trials of COVID-19 vaccines, to ensure coordinated oversight and to allow safety insights to be shared across trials. Members are free of financial relationships with companies developing vaccines for COVID-19. Consistent with the discussion above, the primary responsibilities of this DSMB are the safety of study participants and the integrity and scientific validity of the trials it is tasked to oversee (Joffe et al., 2021).

COI and Conflict of Commitment as a National Security Issue

The National Security Presidential Memorandum 33 (2022) "directs a national response to safeguard security and integrity of Federally-funded research and development in the United States. It takes steps to protect intellectual capital, prevent research misappropriation and ensure responsible management of US taxpayer dollars while maintaining an open environment to foster research discoveries and innovation that benefit our Nation and the world. It prohibits Federal personnel from participating in foreign government-sponsored talent recruitment programs, directs departments and agencies to control access to and utilization of Federal Government research facilities…" (NSPM-33). "The integrity of the research enterprise rests on foundational principles and values, which are also consistent with American values: Openness and transparency…Accountability and honesty…Impartiality and objectivity…Freedom of inquiry…. Behaviors that violate these foundational principles and values jeopardize the integrity of the research enterprise. Behaviors that threaten the integrity of the research enterprise often also pose risks to the security of the research enterprise, which we term research security…" (NSPM-33).

Guidance for implementing NSPM-33 addresses COI and conflict of commitment in Federally funded research. It responds to efforts by foreign governments to induce American scientists to secretly conduct research programs on their behalf and to inappropriately disclose non-public results from research funded by the US government, efforts that must be prevented. Disclosure forms are provided; false representations may be subject to prosecution and liability. Institutions receiving federal research funds must certify that all affected individuals and circumstances have been duly reported (NSPM-33, Guidance for implementing).


Research security involves safeguarding the research enterprise against misappropriation of research and development to the detriment of national economic security, related violations of research integrity and foreign government interference (NSPM-33 Guidance for implementing, 2022).

Conflict of interest in the context of national security has been an issue of emerging importance and is now significantly regulated in federally funded research in the USA. A subsequent report from the DHHS Office of Inspector General noted that more than two-thirds of grantee institutions failed to meet one or more requirements for investigators' disclosure of all foreign financial interests and support, especially non-publicly traded equity interests from foreign entities, in-kind resources, professional affiliations, or participation in a "foreign" talents program. Some grantee institutions also did not comply with Federal requirements to train investigators regarding disclosure of foreign financial interests or to perform the required reviews to determine whether such interests were conflicts that could bias their research. Types of financial interests that investigators must disclose to grantee institutions include (in addition to the equity interest noted above): income from intellectual property rights and interests exceeding $5000/year, salary and payments for services not otherwise identified as salary, equity interest in a publicly traded entity exceeding $5000, and sponsored or reimbursed travel (42 CFR Sec. 50.603). Types of support that investigators must disclose to grantee institutions include: resources and financial support from contracts, cooperative agreements, other awards, and active and pending grants; professional affiliations; in-kind resources such as space, equipment, supplies, employees, or visiting scholars; selection to a "talents" or similar-type program; and items or services given with the expectation of an associated time commitment or other condition (NIH Grants Policy Statement and NOT-OD-19-114). The National Science Foundation will use large databases to identify those who have failed to declare ties to foreign institutions in their grant applications (Mervis, 2022).
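The thresholds just listed lend themselves to a simple screening rule. A minimal sketch follows; the function and field names are invented for illustration, and a real significant-financial-interest determination under 42 CFR 50.603 and NIH policy involves more conditions and institutional review than this captures.

    def requires_disclosure(interest):
        """Simplified check against the thresholds summarized above.

        `interest` is a dict with invented field names; an actual FCOI
        determination involves many more conditions and institutional judgment.
        """
        kind = interest["kind"]
        amount = interest.get("amount_usd", 0)
        if kind == "publicly_traded_equity_and_remuneration":
            return amount > 5000          # aggregate value over $5,000
        if kind == "intellectual_property_income":
            return amount > 5000          # IP income over $5,000/year
        if kind in ("non_publicly_traded_equity", "foreign_talent_program",
                    "sponsored_travel", "in_kind_resources"):
            return True                   # disclosable regardless of dollar value
        return False

    # Illustrative use:
    print(requires_disclosure({"kind": "intellectual_property_income",
                               "amount_usd": 7500}))  # True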

Summary

This is a discouraging chapter. COI is clearly defined, as are remedies. Although there is significant evidence that COI bias is widespread in biomedical research, quantification of the harms caused is not precise. Nor is there clarity about the benefits of well-managed multiple roles for individuals and institutions, even as funding institutions promote collaboration across sectors with very different COI traditions. Despite this pervasiveness, very little is being done to address the issue, with the exception of security issues in the USA. Broader reform would face serious headwinds from interests that have been proclaimed economically necessary and therefore dominant. COI is thus best seen as a structural issue that can profoundly affect both research and patient care. Some forward-looking scholars have suggested a research agenda that would provide a better empirical base for COI policies.


References Baraldi, J. H., Picozzo, S. A., Arnold, J. C., Volarich, K., Gionfriddo, M. R., & Piper, B. J. (2022). A cross-sectional examination of conflict-of-interest disclosures of physician-authors publishing in high impact US medical journals. BMJ Open, 12(4), e057598. https://doi.org/10.1136/ bmjopen-­2021-­05759 Biddle, J. (2013). Institutionalizing dissent: A proposal for an adversarial system of pharmaceutical research. Kennedy Institute of Ethics Journal, 23(4), 325–353. https://doi.org/10.1353/ ken.2013.0013 Bottema-Beutel, K., & Crowley, S. (2021). Pervasive undisclosed conflict of interest in applied behavior analysis autism literature. Frontiers in Psychology, 12, 676303. https://doi. org/10.3389/fpsyg.2021.676303 Brown, J. R. (2017). Socializing medical research. In K. Elliott & D. Steel (Eds.), Current controversies in values and Science. Routledge. Buruk, B., Guner, M. D., Ekmekci, P. E., & Celebi, A. S. (2021). Comparison of COVID-19 studies registered in the clinical trial platforms: A research ethics analysis perspective. Developing World Bioethics, 22, 217. https://doi.org/10.1111/dewb.12333 Chimonas, S., Mamoor, M., Zimbalist, S. A., Barrow, B., Bach, P. B., & Korenstein, D. (2021). Mapping conflict of interests: scoping review. BMJ, 375, e066576. https://doi.org/10.1136/ bmj-­2021-­066576 Cigarroa, F. G., Masters, B. S., & Sharphorn, D. (2018). Institutional conflicts of interest and public trust. JAMA, 320(22), 2305–2306. https://doi.org/10.1001/jama.2018.18482 Christian, A. (2022). Addressing conflicts of interest and conflicts of commitment in public advocacy and policy making on CRISPR/Cas-based human genome editing. Frontiers in Research Metrics and Analytics, 7, 775336. https://doi.org/10.3389/frma.2022.775336 Contessa, G. (2022). It takes a village to trust science: Towards a (thoroughly) social approach to public trust in science. Erkenntnis, online ahead of print, 1. https://doi.org/10.1007/ s10670-­021-­00485-­8 DeFrance, M. J., Yayac, M. F., Courtney, P. M., & Squire, M. W. (2021). The impact of author financial conflicts on robotic-assisted joint arthroplasty research. The Journal of Arthroplasty, 36(4), 1462–1469. https://doi.org/10.1016/j.arth.2020.10.033 Ellenberg, S. S., Fleming, T. R., & DeMets, D. L. (2019). Data monitoring committees in clinical trials: A practical perspective (2nd ed.). Wiley. Elliott, K. C. (2018). Addressing industry-funded research with criteria for objectivity. Philosophy of Science, 85, 8576–8868. https://doi.org/10.1086/699718 El-Rayess, H., Khamis, A. M., Haddad, S., Ghaddara, H. A., Hakoum, M., Ichkhanian, Y., Bejjani, M., & Aki, E. A. (2020). Assessing concordance of financial conflicts of interest disclosures with payments’ databases: A systematic survey of the health literature. Journal of Clinical Epidemiology, 127, 19–28. https://doi.org/10.1016/j.jclinepi.2020.06.040 Fabbri, A., Hone, K.  R., Hrobjartsson, A., & Lundh, A. (2021). Conflict of interest policies at medical schools and teaching hospitals: A systematic review of cross-sectional studies. International Journal of Health Policy and Management. Online ahead of print, 11, 1274. https://doi.org/10.34172/ijhpm.2021.12 Fuller, J. (2018). Meta-research evidence for evaluating therapies. Philosophy of Science, 85, 767–780. https://doi.org/10.1086/6996 Gornall, J. (2018a, October 10). How mesh became a four letter word. British Medical Journal, 363, k4137. https://doi.org/10.1136/gmj.k4137 Gornall, J. (2018b). 
The trial that launched millions of mesh implant procedures: Did money compromise the outcome? British Medical Journal, 363, k4155. https://doi.org/10.1136/bmj.k4155 Gornall, J. (2018c). Vaginal mesh implants: Putting the relations between UK doctors and industry in plain sight. British Medical Journal, 363, k4164. https://doi.org/10.1136/bmj.k4164


Graham, S. S., Karnes, M. S., Jensen, J. T., Sharma, N., Barbour, J. B., Majdik, Z. P., & Rousseau, J. F. (2022a). Evidence for stratified conflicts of interest policies in research contexts: A methodological review. BMJ Open, 12(9), e063501. https://doi.org/10.1136/bmjopen-­2022-­063501 Graham, S.  S., Majdik, Z.  P., Barbour, J.  B., & Rousseau, J.  F. (2022b). Associations between aggregate NLP-extracted conflicts of interest and adverse events to drug product. In P. Otero (Ed.), One world. One health- global partnership for digital innovation. IOS Press. Grundy, Q. (2021). A politics of objectivity: Biomedicine’s attempt to grapple with “non-­financial” conflicts of interest. Science & Engineering Ethics, 27(3), 37. https://doi.org/10.1007/ s11948-­021-­00315-­8 Grundy, Q., Mazzarerello, S., & Bero, L. (2020). A comparison of policy provisions for managing “financial” and “non-financial” interests across health-related research organizations: A qualitative content analysis. Accountability in Research, 27(4), 212–237. https://doi.org/10.108 0/08989621.2020.1748015 Hansen, C., Lundh, A., Rasmussen, K., & Hrobjartsson, A. (2019). Financial conflicts of interest in systematic reviews: Associations with results, conclusions, and methodological quality. Cochrane Database Systematic Review, 8(8), MR000047. https://doi.org/10.1002/14651858. MR000047.pub2 Hauray, B., Boullier, H., Gaudilliere, J., & Michel, H. (2021). Introduction: Conflict of interest and the politics of biomedicine. In B. In Hauray, H. Boullier, J. Gaudilliere, & H. Michel (Eds.), Conflict of interest and biomedicine. Routledge. Joffe, S., Babiker, A., Ellenberg, S. S., Fix, A., Griffin, M. R., Hunsberger, S., Kalil, J., Levine, M. M., Makgoba, M. W., Moore, R. H., Tsiatis, A. A., & Whitley, R. (2021). Data and safety monitoring of COVID-19 vaccine clinical trials. The Journal of Infectious Diseases, 224(12), 1995–2000. https://doi.org/10.1093/infdis/jiab263 Kahr, B., & Hollingsworth, M. D. (2019). Massive faculty donations and institutional conflicts of interest. Journal of Scientific Practice and Interests, 1(1). Legg, T., Hatchard, J., & Gilmore, A.  B. (2021). The science for profit model–How and why corporations influence science and the use of science in policy and practice. PLoS One, 16(6), e0253272. https://doi.org/10.1371/journal.pone.0253272 Lundh, A., Lexchin, J., Mintzes, B., Schroll, J. B., & Bero, L. (2017). Industry sponsorship and research outcome. Cochrane Database Systematic Reviews, 2(2), MR000033. https://doi. org/10.1002/14651858.MR000033.pub3 Lundh, A., Rasmussen, K., Ostengaard, L., Boutron, I., Stewart, L. A., & Hrobjartsson, A. (2020). Systematic review finds that appraisal tools for medical research studies address conflicts of interest superficially. Journal of Clinical Epidemiology, 120, 104–115. https://doi.org/10.1016/j. jclinepi.2019.12.005 Mandrioli, D., Kearns, C. E., & Bero, L. A. (2016). Relationship between research outcomes and risk of bias, study sponsorship, and author financial conflict of interest in reviews of the effects of artificially sweetened beverages on weight outcomes: A systematic review of reviews. PLoS One, 11(9), e0162198. https://doi.org/10.1371/journal.pone.0162198 Marks, J.  H. (2020a). Beyond disclosure: Developing law and policy to tackle corporate influence. American Journal of Law & Medicine, 46(2–3), 275–296. https://doi. org/10.1177/0098858820933499 Marks, J.  H. (2020b). Lessons from corporate influence in the opioid epidemic: Toward a norm of separation. 
Journal of Bioethical Inquiry, 17(2), 173–189. https://doi.org/10.1007/ s11673-­020-­09982-­x McClymont, E. (2021). Is ‘conflict of interest’ a misnomer? Managing interests in immunization research and evaluation. Human Vaccines & Immunotherapeutics, 18(1), 1879580. https://doi. org/10.1080/21645515.2021.1879580 Mervis, J. (2022). NSF turns to big data to check if grantees have foreign ties. Science, 378(6615), 16. https://doi.org/10.1126/science.adf1849


Mialon, M., Serodio, P., Crosbie, E., Teicholz, N., Naik, A., & Carriedo, A. (2022). Conflict of interest for members of the U. S. 2020 dietary guidelines advisory committee. Public Health Nutrition, 21, 1–28, online ahead of print. https://doi.org/10.1017/S1368980022000672 Moynihan, R., Bero, L., Hill, S., Johansson, M., Lexchin, J., Macdonald, H., Mintzes, B., Pearson, C., Rodwin, M.  A., Stavdal, A., Stegenga, J., Thombs, B.  D., Thornton, H., Vandik, P.  O., Wieseler, B., & Godlee, F. (2019). Pathways to independence: Towards producing and using trustworthy evidence. British Medical Journal, 367, 16576. https://doi.org/10.1136/bmj.l6576 National Science and Technology Council, Guidance for implementing National Security Memorandum 33 (NSPM-33) on (2022). National Security Strategy for United States government-­supported Research and Development, A report by the Subcommittee on Research Security, Joint Committee on the Research Environment. Nia, S. J., Jafari, H. A., Vakili, Y., & Kabutarkhani, M. R. (2022). Systematic review of conflict of interest studies in public administration. Public Integrity. https://doi.org/10.1080/1099992 2.2022.2068901 O’Neill, J. (2021). Lessons from the vaginal mesh scandal: Enhancing the patient-centric approach to informed consent for medical device implantation. International Journal of Technology Assessment in Health Care, 37(1), e53. https://doi.org/10.1017/S0266462321000258 Ostergaard, L., Lundh, A., Tjornhoj-Thomsen, T., Abdi, S., Gelle, M.  H. A., Stewart, L.  A., Boutron, I., & Hrobjartsson, A. (2020). Influence and management of conflicts of interest in randomized clinical trials: Qualitative interview study. BMJ, 371, m3764. https://doi. org/10.1136/bmj.m3764 Paludin-Muller, A. S., Ogden, M. C., Marquardsen, M., Jorgensen, K. J., & Gotzsche, P. C. (2021). Are investigators’ access to trial data and rights to publish restricted and are potential trial participants informed about this? A comparison of trial protocols and informed consent materials. BMC Medical Ethics, 22(1), 115. https://doi.org/10.1186/s12910-­021-­00681-­9 Perkmann, M., McKelvey, M., & Phillips, N. (2019). Protecting scientists from Gordon Gekko: How organizations use hybrid spaces to engage with multiple institutional logics. Organizational Science, 30(2), 298–318. https://doi.org/10.1287/orsc.2018.1228 Pivovarova, E., Klitzman, R. L., Murray, A., Stiles, D. F., Apelbaum, P. S., & Lidz, C. W. (2019). How single institutional review boards manage their own conflicts of interest: Findings from a national interview study. Academic Medicine, 94(10), 1554–1560. https://doi.org/10.1097/ ACM.0000000000002762 Reed, G., Hendlin, Y., Desikan, A., MacKinney, T., Berman, E., & Goldman, G. T. (2021). The disinformation playbook: How industry manipulates the science-policy process–And how to restore scientific integrity. Journal of Public Health Policy, 42(4), 622–6343. https://doi. org/10.1057/s41271-­021-­00318-­6 Rodwin, M.  A. (2019). Conflicts of interest in human subject research: The insufficiency of U. S. and international standards. American Journal of Law and Medicine, 45(4), 303–330. https://doi.org/10.1177/0098858819892743 Sanders, M.  R., Kirby, J.  N., Toumbourou, J.  W., Carey, T.  A., & Havighurst, S.  S. (2020). Innovation, research integrity, and change: A conflict of interest management framework for program developers. Australian Psychologist, 55(2), 91–101. https://doi.org/10.1111/ap.12404 Serghiou, S., Contopoulos-Ioannidis, D. G., Boyack, K. W., Riedel, N., Wallach, J. 
D., & Ioannidis, J. P. A. (2021). Assessment of transparency indicators across the biomedical literature: How open is open? PLoS Biology, 19(3), e3001107. https://doi.org/10.1371/journal.pbio.3001107 Shaw, D. (2022). Withholding conflicts of interest: The many flaws of the new ICMJE disclosure form. Journal of Medical Ethics, 48(1), 19–21. https://doi.org/10.1136/medethics-­2020-­106136 Sismondo, S. (2021). Epistemic corruption, the pharmaceutical industry, and the body of medical science. Frontiers in Research Metrics and Analytics, 6, 614013. https://doi.org/10.3389/ frma.2021.614013 Spithoff, S., Leece, P., Sullivan, F., Persaud, N., Belesiotis, P., & Steiner, L. (2020). Drivers of the opioid crisis: An appraisal of financial conflicts of interest in clinical practice guideline panels


at the peak of opioid prescribing. PLoS One, 15(1), e0227045. https://doi.org/10.1371/journal. pone.0227045 Sterne, J. A. C., Savovic, J., Page, M. J., Elbers, R. G., Blencowe, N. S., Boutron, I., Cates, C. J., Cheng, H., Corbett, M.  S., Eldridge, S.  M., Emberson, J.  R., Hernan, M.  A., Hopewell, S., Hrobjartsson, A., Junqueira, D. R., Juni, P., Kirkham, J. J., Lasserson, T., Li, T., et al. (2019). RoB 2: A revised tool for assessing risk of bias in randomized trials. British Medical Journal, 366, 14898. https://doi.org/10.1136/bmj.l4898 Thacker, P. D. (2020). Transparency and conflicts in science: History of influence, scandal, and denial. In K. Alyurt (Ed.), Integrity, transparency and corruption in healthcare and research on health (Vol. I). Springer. Thompson, D. F. (2004). Conflicts of interest. In Restoring responsibility: Ethics in government, business, and healthcare (pp. 290–299). Cambridge University Press. Torgerson, T., Wayant, C., Cosgrove, L., Aki, W. A., Checketts, J., Dal Re, R., Gill, J., Grover, S. C., Khan, N., Kahan, R., Marusic, A., McCoy, M. S., Mitchell, A., Prasad, V., & Vassar, M. (2022). Ten years later: A review of the US 2009 institute of medicine report on conflicts of interest and solutions for further reform. BMJ Evidence-Based Medicine, 27(1), 46–54. https:// doi.org/10.1136/bmjebm-­2020-­111503 Tresker, S. (2022). Unreliable threats: Conflicts of interest disclosure and the safeguarding of biomedical knowledge. Kennedy Institute of Ethics Journal, 32(1), 103–126. https://doi. org/10.1353/ken.2022.0004 Wiersma, M., Kerridge, I., & Lipworth, W. (2018). The dangers of neglecting non-financial conflicts of interest in health and medicine. Journal of Medical Ethics, 44(5), 319–322. https://doi. org/10.1136/medethics-­2017-­104530

Chapter 7

Institutional Responsibilities for Research Integrity

But in order for the responsibility to be collectivized, there must be clear institutional mechanisms and guidance provided to individuals to ensure that science indeed does benefit society. It is unclear whether such institutions for ensuring that science benefits society currently exist (Douglas, 2014, p. 974).

Are important institutions of science available and trustworthy? A number of institutions contribute to the development and distribution of scientific knowledge and to the ongoing development of scholars. Universities, commercial entities, journals, scientific associations, funders, and regulatory agencies form a loosely connected network called an organizational field. In general, their responsibility for research integrity (RI) is under-examined, perhaps under the assumption that these institutions are themselves of sound moral character, upright and honest, that they share the same goals for science, and that self-regulation of individuals and organizations will flourish within their walls and in their interactions with other institutions in the network. But Douglas reminds us that mechanisms and guidance are unclear and that individuals (on whom most responsibility for RI has fallen) are heavily dependent on incentives, conditions of work, and the clarity and enforcement of guidelines and standards, which flow from institutions and heavily affect scientists' ability to practice with integrity. The essential work of the institutions of science is best aligned with the notion of stewardship, a term not widely used within science but still relevant. Such a view sees science as a public good, requiring an ethic of responsibility. Here, I begin the discussion by acknowledging that RI requires placing the institutions of science in a social framework. Next, we turn to the responsibility of these institutions for the quality of research produced and disseminated under their aegis. Not only are there positive innovations supporting RI, but there are also problematic examples, such as the current management of research misconduct (RM) and the practices of research institutions in the industrial sector. Next, important guardrails from regulators, funders, and journals in supporting RI are noted, as is the importance of research infrastructure. With these resources, institutions must navigate their responsibilities.


Institutions in the network operate with different and often incompatible logics. Examining logics and practices through the Theory of Institutional Corruption, first introduced in Chap. 5, can help to reorient institutions to their basic purposes. Science policy, ultimately accountable to the public, is an important but insufficient source of guidance; all science institutions must, individually and in the aggregate, ensure societal benefit.

Social Framework and Trustworthiness

The framework set out by Alex John London in his book, "For the Common Good" (2022), is supportive of research integrity. London notes that research should be recognized as a collaborative social activity for the purpose of producing an important social good. Since research is the evidence base that influences the capacity of institutions to safeguard the health, welfare, and rights of individuals (p. 6), it is a moral imperative that this work be done effectively, efficiently, and equitably, with few wasted resources (p. 150). London asserts that orthodox research ethics has been overwhelmingly concerned with the ethics of the researcher–participant relationship rather than with how the institutions of science ought to be designed and regulated (p. 111). Research ethics has also been narrowly focused on IRBs, lacking the ability to hold other stakeholders accountable for their influence on the scientific enterprise (p. 158). For the advance of reliable and trustworthy knowledge, research integrity requires an orientation broader than that of traditional research ethics. Trustworthiness can be supported by adhering to methodological standards in research, producing science honestly and reliably, and disseminating it accurately. A set of methodological practices not uncommon today, including use of inferior comparators, overinterpretation of results, inappropriate trial durations, poor surrogate endpoints, and publication bias, will not yield reliable science (Pinto, 2020). Yet, their use is often neither challenged nor corrected. Other considerations can also undermine trustworthiness. Jukola (2021) notes that even if individual studies pass tests of methodological rigor, research can be biased at the macro level when the body of evidence is out of step with the problems we believe to be crucial. The body of evidence available for any such problem may contain huge blind spots (questions not addressed), often flowing from narrow or biased funding decisions. A commonly cited example is the research agenda on mental disorders, overly shaped by the commercial interests of pharmaceutical companies following their incentives to obtain patents. Blind spots in the research portfolio for this issue include far less research on behavioral treatments or on environmental or nutritional causes of mental health problems (Jukola, 2021). The institutions of science are responsible for the quality of research funded, produced, and disseminated under their aegis or sponsorship. There are increasing calls for active involvement and oversight both by and over these institutions and for collaboration among them to assure that their interrelated responsibilities support the goal of research integrity.


Responsibility for Coordinated Quality of Research Produced Under the Network of Institutions

Institutions propagate and enforce norms and rules, evaluate and certify credentials, set agendas, and enforce accountability (Rauch, 2021). They also have cultures, which are rarely defined by written rules and established structures (Schocker et al., 2021). Some may be deliberately changing their cultures to support RI. But there is evidence that many are managing an important violation (RM) in ways that reward emergent and persistent dishonesty and suppress the whistleblowing that would expose wrongdoing (Houdek, 2020).

Examples of Research Integrity Innovation, and Needed Reform in Research Misconduct Management

There are a few publicly reported examples of institutions describing their efforts to ensure scientific integrity. Inserm notes that for over a decade it has invested considerable effort in developing structures and processes aimed at ensuring scientific integrity. Quality managers are present in a third of Inserm research units, conducting quality audits. They note weaknesses in scientific methodology, data filing, and traceability, and requests from scientists for training and help (Estienne et al., 2020). In a second example, the Research Center Borstel took up the challenge of instigating cultural change after a worst-case instance of research misconduct. Like Inserm, it adopted a person-centered approach with frequent interactions between scientists and research integrity scouts, which has now become part of the center's culture. Research integrity scouts meet regularly with coordinators for research integrity to understand their experiences and discuss any issues, and they meet annually with the Center's CEO. Most research groups have an annual retreat that includes discussion of specific needs. The Center also introduced a central archiving system for all primary data in publications so that the data are protected and can be inspected in case of an allegation. Archiving is now mandatory to receive credit for publications in the internal merit-based reward system (Schocker et al., 2021). In contrast to these laudatory examples, current institutional handling of research misconduct (a violation of research integrity) has drawn considerable angst as well as suggestions for reform. One element of concern is institutional management of investigations of alleged fabrication or falsification; concerns relate to lack of transparency and poor quality. Gunsalus and colleagues (2018) note that institutional reports of RM lack standardization, quality control, and oversight; release little relevant information to the public; often lack supporting evidence; and include little investigation of long-term patterns of RM, both by individuals and within research-performing institutions. Overriding these flaws is the inherent conflict of interest of an institution investigating itself and its faculty, a flaw that should require independent outside review.


In support of this concern, Loikith and Bauchwitz (2016) note that 90% of allegations of biomedical RM in the USA are dismissed by the responsible institutions without any faculty assessment or auditable record, because of insufficient recorded data. In an investigation of a long-term record of suspected RM, Grey et al. (2019) reported little response from universities. Keeping RM invisible is likely aimed at protecting the authority of science, leaving intact the assumption, in the cases that do become public scandals, that science is self-regulating. Invisibility is also essential to limit damage to institutional reputation and to avoid disallowance of research funds. Titus and Kornfeld (2021) suggest a reasonable remedy: a post hoc inquiry as a measure of institutional integrity in managing RM allegations. The purpose of such a review should be to determine whether the sponsoring institution, actively or passively, played a contributing role (e.g., flawed institutional policy, an ineffective research integrity officer), and whether corrective action was taken. Such action might include retraining the research integrity officer and assessing whether mentors were effective or need to be retrained or excluded.

To provide another perspective, Research Integrity Officers in highly research-oriented universities were asked about relevant aspects of their most recent findings of research misconduct. Many of these cases occurred in circumstances in which good research practices were absent. These good practices, which one would presume should be typical, included being open and transparent, feeling empowered to speak up if needed, being part of a group in which the leader is a good manager of people, designing research studies to protect against bias, keeping research records sufficient for others to reconstruct what had or had not been done, and having a good understanding of statistics or seeking that expertise. This survey approached violations of research integrity from the perspective that where good organizational practices are in place, it is much more difficult for research misconduct to occur (Kalichman, 2020).

In research misconduct cases under active investigation or completed, research-performing institutions, those disseminating the results of research, and funders must play a coordinated role. Wager and colleagues (2021) provide positive and specific recommendations for these parties to work together, in ways not now common, to assure the integrity of reported research. Briefly, research institutions have a responsibility to assess the integrity of reported research and not just deal with misconduct (FFP), and journals have the responsibility to pass on research integrity concerns to institutions. More specifically, research institutions should notify journals directly and release relevant sections of RM investigations to journals that have published research that was the subject of investigation, allow journals to quote from misconduct investigation reports or cite them in retractions, and take responsibility for all research performed under their auspices regardless of whether the researcher still works at the institution. Academic journals should pass on RI concerns to institutions regardless of whether they intend to accept the work for publication, and should have criteria for the information and evidence that should be passed on to the institution. Funders should include in their funding agreements a requirement for research data to be retained for at least 10 years and for a named primary institution that will coordinate the response to any RI concerns (Wager et al., 2021).


It should be noted again that, in addition to how allegations of RM are managed by institutions, the current research misconduct regulations (42 CFR Part 93) are ethically problematic in further ways that affect RI. First, these regulations pay no attention to the effects of RM on subjects or on the patients in whom the knowledge produced is subsequently used. Second, allegations of RM flow from whistleblowers who often experience significant repercussions, thereby ensuring under-reporting. Third, allegations must be made in good faith; it is unclear that allegations made in bad faith are effectively screened out.

Research Institutions in the Industrial Sector

The overwhelming challenge regarding research from the industrial sector has been the dominance of financial goals over transparent evidence of research integrity, often hidden behind the protection of "commercial secrets." This stance makes such institutions highly distrusted by the public, and for good reason. Companies commonly invoke trade secret protections to prevent or limit disclosure of data to outside researchers or the public, and without such access researchers cannot investigate a product's claimed benefits or risks. It is noteworthy that ClinicalTrials.gov contains information that would otherwise be held in secret (Durkin et al., 2021); perhaps that explains poor compliance with its reporting requirements. Industry-funded research findings favor industry interests more often than those associated with other funding, yielding well-documented bias from pharmaceutical, medical device, tobacco, alcohol, and food companies. Bias can be introduced at almost any stage in the research process. In an effort to establish partnerships in which research integrity standards can be met, Plotter and colleagues suggest an independent advisory board, perhaps spanning institutions, to assess whether the research has been influenced in any way; details may be found in Plotter and colleagues (2020). In advanced economies, more than two-thirds of research money comes from industry. These organizations have a dual responsibility to contribute to society's common good as well as to make a profit. They often operate in partnership with academic institutions, academic scientists, and professional bodies, and ethical issues in these relationships are common. Moynihan and colleagues (2020) note that almost three-quarters of the leaders of 10 influential professional medical associations in the USA had financial relationships with pharmaceutical and device manufacturers, including payments for research. As noted in Chap. 6, if this conflict of interest is not examined, disclosed, and otherwise managed, such arrangements can be a direct challenge to integrity; there is little evidence that they are regularly and successfully managed. Empirical research has shown that industry intervenes through coordinated strategies aimed at delaying, preventing, or weakening attempts to regulate its products, many times through deliberate efforts to obfuscate science, mislead the public, and manipulate political actors.


In the case of chemical products containing lead, benzene, and PCBs, whose effects are frequently not immediately evident, companies had prior knowledge of the adverse health effects of their products (Aho, 2020). Similar tactics have been used by the tobacco and pharmaceutical industries. Consistent strategies include manipulating scientific methods to show evidence in their favor, suggesting alternate causes for resulting problems, reshaping criteria for establishing scientific "proof," championing legislation that gains commercial entities access to publicly funded research data so that it can be reanalyzed to fit their agenda, and promoting policy reform that increases reliance on industry evidence. Industry strategy is to promote industry-favored policies as solutions and to legitimize industry's role as a scientific stakeholder, saturating the scientific record. These patterns run deep and spread to other industries (Legg et al., 2021). There have been important, although incomplete, efforts at reining in industry bias through public reporting requirements. In 2007, the Food and Drug Administration Amendments Act (FDAAA) was passed by Congress, requiring registration and reporting of clinical trials for new drugs in a publicly accessible database. Following finalization of this rule, the proportion of neuropsychiatric drug trials registered and reporting results in ClinicalTrials.gov was found to be significantly higher and publication bias significantly lower, suggesting that FDAAA likely contributed to improved research reporting, including outcome reporting (Zou et al., 2018). But others note that compliance with FDAAA is still poor, likely reflecting lack of enforcement by regulators. Open public audit of compliance for each sponsor (usually a pharmaceutical company) would be helpful. Failure to report the results of a clinical trial distorts the evidence base and breaches researchers' obligations to trial participants (DeVito et al., 2020). A more radical approach would require industry to fund trials conducted by an independent entity (Legg et al., 2021). It is also important to consider issues of regulatory injustice: stark differences between the regulatory requirements imposed on institutions (largely academic) receiving federal research funds and the less rigorous requirements imposed on industry-produced research. Spector-Bagdady (2021) outlines this issue in more detail, noting that self-regulation by industry has apparently not yet been attempted and would require government trust in such a private process.

Research Integrity Guardrails from Regulators, Funders, Journals, and Research Infrastructure

Misbehavior is encouraged when strong institutional safeguards are absent, when individual rewards from corruption are high, and when prospects of detection and punishment are low. Multiple examples of these conditions exist within biomedical science. These include tolerating lax practice by regulators, funders ignoring regulatory functions assigned to them, implicitly supporting scientific publishing focused on market interests rather than on scientific functions, and not attending to an infrastructure that will support research integrity.


Regulators

As noted above, a 2007 US law (FDAAA) requires that many clinical trials be posted publicly in a federal database so physicians and patients can see what trials are ongoing and view trial results to judge whether new treatments are safe and effective. Yet, despite a 2017 final rule implementing the law, many sponsors still ignore the requirement. Federal officials (NIH and FDA) have done little to enforce the law, which allows penalties to be assessed and federal research grants to be withheld. While reporting rates by most large pharmaceutical companies and some universities have improved, habitual violators continue to ignore these requirements. The history of the federal database, ClinicalTrials.gov, shows multiple rejections by staff of submissions lacking very basic information, such as unclear outcome measures (Piller, 2020). A substantial number of trials had not been published 2–4 years after trial completion; it should be noted that ClinicalTrials.gov provides the only public reporting of their results (Zarin et al., 2019). In April 2022, FDA issued its first notice of noncompliance to a company for failure to submit required summary results information on ClinicalTrials.gov, an action taken long after the regulation was in place. NIH can also withhold grant funding for noncompliance but has declined to take such enforcement action on its own (Ramachandran et al., 2021). Contrast this scenario with the UK, where in 2018 only 29% of universities had results available on the European Clinical Trials Register; because of direct political pressure, by June 2021, 91% were posted. Clinical trial transparency is at the foundation of evidence-based medicine and a clear ethical obligation (Keestra et al., 2022). In another example, a study of FDA trial site inspections found significant evidence of falsification or submission of false information, problems with adverse event reporting, protocol violations, inaccurate or inadequate record keeping, and other violations. Only 4% of publications resulting from these trials mentioned the inspection findings, and no corrections, retractions, or expressions of concern acknowledging these issues were subsequently published (Seife, 2015). The FDA warns researchers when falsified data are discovered, but notice of these infractions does not appear in the medical literature. Garmendia et al. (2019) found that about half of all meta-analyses would have had altered conclusions if these data had been excluded. Because FDA does not make trial inspection reports public, it is difficult or impossible to determine from the literature and from ClinicalTrials.gov postings the integrity of study conclusions (Dal-Re et al., 2000). Since the scientific community has insisted that scientists should head the agencies regulating science, it is important to consider the undermining of regulatory purpose by mechanisms of cognitive capture. In such situations, regulators come to adopt the perspective of those they are supposed to regulate and fail to recognize problems, weakening regulatory oversight and enforcement. Regulators are more likely to listen to those they feel are like them or who are in their social networks. Rule avoidance becomes prevalent, raising the question of why regulators fail to adjust regulatory frameworks to deal with it (Rilinger, 2021).
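The kind of sponsor-level compliance audit called for above can be sketched very simply once trial records have been exported locally (for example, from the public registry). In the sketch below, the record fields 'sponsor', 'status', and 'results_posted' are illustrative names chosen for this example, not the registry's actual schema.

    from collections import defaultdict

    def reporting_rates(trial_records):
        """Share of completed trials with results posted, per sponsor.

        `trial_records` is assumed to be a locally prepared export: a list of
        dicts with illustrative keys 'sponsor', 'status', and 'results_posted'.
        """
        completed = defaultdict(int)
        reported = defaultdict(int)
        for rec in trial_records:
            if rec["status"] != "Completed":
                continue
            completed[rec["sponsor"]] += 1
            if rec["results_posted"]:
                reported[rec["sponsor"]] += 1
        return {s: reported[s] / completed[s] for s in completed}

    # Illustrative records only:
    sample = [
        {"sponsor": "Sponsor A", "status": "Completed", "results_posted": True},
        {"sponsor": "Sponsor A", "status": "Completed", "results_posted": False},
        {"sponsor": "Sponsor B", "status": "Recruiting", "results_posted": False},
    ]
    print(reporting_rates(sample))  # {'Sponsor A': 0.5}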


Funders Although the research integrity practices and standards of funders affect what research topics get addressed, what research outputs are produced, and which impacts are likely, guidance documents are underdeveloped. Likewise, there is limited knowledge about how funding instruments are designed and how they attempt to affect desired societal outcomes. While most focus has been on public science funders, private foundations, patient organizations, and many others also fund research. Both research-performing organizations (RPOs) and research funding organizations (RFOs) affect how research comes to be performed. RPOs are expected to create an environment of integrity and to control incentives to researchers. RFOs are in a position to impose safeguards on RPOs if the latter fail to protect the integrity of the research they produce (Scepanovic et al., 2021). Despite this leverage, a study by Labib et al. (2021) found little consensus among RFOs on which topics are important to address in research integrity policies. So far, their concerns have centered on dealing with breaches of research integrity, managing conflicts of interest, and setting expectations for RPOs. Perhaps this reflects, in part, limitations in knowledge about how institutional and system-of-science factors affect research integrity (Labib et al., 2021).

Journals The journal community has been unable to control journals that deviate from best editorial practices, publish false or misleading information, or lack transparency. A study by Hayden et al. (2021), examining a Cochrane review of low back pain, found that 12% of included studies came from such journals. These studies were characterized by lack of evidence of registration or protocol publication (75%), insufficient sample size (84%), frequently missing conflict of interest statements, incomplete study methods, and high risk of bias. Articles from these journals are nevertheless listed in trusted electronic databases (Hayden et al., 2021).

The journal community has also taken little responsibility for cleansing the literature of retracted work. Systematic reviews frequently include retracted studies; yet, only 5% were found to be corrected. An automated alert system would still require action on the part of editors and publishers (Kataoka et al., 2022); a minimal sketch of such a check appears at the end of this subsection. Grey and colleagues (2022) document in detail efforts to retract 292 publications by a single research group for which publication integrity concerns had been identified, and in the process suggest much more rigorous standards to which journals should be held. There is no recommended time frame for dealing with such investigations; transparent reporting of investigations and resolution of cases, including reasons for delay, should be the norm. In the spirit of self-regulation, COPE (the Committee on Publication Ethics) should audit the practices of member journals (Grey et al., 2022). Ethically, journals should also be fair in their selection of manuscripts to publish; yet, Scanff et al. (2021) identified a subset of journals with an unusual percentage of publications by a single author and/or by


authors with links to the editor or editorial board. These might be called "nepotistic" journals (Scanff et al., 2021).
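
As an illustration of the automated retraction alert idea noted above, the following minimal sketch cross-checks the studies included in a review against a locally downloaded retraction list (for example, the openly available Retraction Watch data). File and column names are hypothetical; real data would need normalization and manual confirmation.

import pandas as pd

# Hypothetical inputs; column names are illustrative only.
cited = pd.read_csv("review_included_studies.csv")    # expects a "doi" column
retracted = pd.read_csv("retraction_list.csv")        # expects "doi", "retraction_date"

cited["doi_norm"] = cited["doi"].str.strip().str.lower()
retracted["doi_norm"] = retracted["doi"].str.strip().str.lower()

flagged = cited.merge(
    retracted[["doi_norm", "retraction_date"]], on="doi_norm", how="inner"
)
print(f"{len(flagged)} of {len(cited)} included studies appear on the retraction list")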

Research Infrastructure The digital era offers significant opportunities for tools that can support quality in science. Research infrastructure is a collective scientific enterprise supporting the creation of knowledge and other services for a range of users. Some notable sources of research infrastructure are already in place or under development. The database resources of the National Center for Biotechnology Information (NCBI) are a prime example: created in 1988 at the National Library of Medicine (part of NIH) to develop information systems for molecular biology, NCBI maintains 35 databases including the well-known GenBank (Sayers et al., 2022). Others have created a Research Commons to merge siloed sources of information from biobanks and clinical and genomic data sources, to support standardization, resource sharing, and computing infrastructure and tools, and to harmonize ethics agreements from different governance systems (Asiimwe et al., 2021).

Despite these fine resources, other digital support tools are neglected, often developed and sustained by individuals who lack the resources to do so. Creation, development, and maintenance of research software are still ad hoc and improvised, making the infrastructure fragile and vulnerable to failure. There are few incentives for doing this work and a lack of standards and platforms for categorizing software. Few respondents in a recent survey (27%) system-tested their products, and only about half of research software developers have had formal training for this work (Carver et al., 2022). The academic funding model was not conceived to support software-based tools or to maintain a digital infrastructure, which is necessary for quality science (Knowles et al., 2021) and for collecting, managing, processing, analyzing, and archiving data (Ribeiro, 2022). Properly structured, such infrastructure could play a role in harmonizing (and where necessary upgrading) the quality and transparency of science across commercial and academic settings. Regulators, research-performing and research funding organizations, scientific journals, and digital infrastructure all share responsibility for research integrity, with both successes and areas of needed development noted.
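
To make concrete what even minimal system testing of research software can look like, here is a brief sketch; the analysis function, file name, and expected values are hypothetical, and real pipelines would test far more.

# test_analysis.py -- run with: pytest test_analysis.py
import numpy as np

def normalize_counts(counts):
    """Hypothetical analysis step: convert raw counts to proportions."""
    counts = np.asarray(counts, dtype=float)
    return counts / counts.sum()

def test_proportions_sum_to_one():
    assert np.isclose(normalize_counts([2, 3, 5]).sum(), 1.0)

def test_known_values_are_pinned():
    # Pinning expected output catches silent changes in the pipeline.
    assert np.allclose(normalize_counts([1, 1, 2]), [0.25, 0.25, 0.5])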

Navigating Institutional Responsibilities for Research Integrity; Among Competing Institutional Logics

Institutions that house, approve, and disseminate research bear at least two responsibilities. As noted in the previous section, they must assure that research within their aegis is competently and responsibly conducted and disseminated. They must also assure that trainees learn to conduct trustworthy research. To meet these


responsibilities, institutions need to navigate a market-oriented logic, often required as a condition of societal financial support, which frequently conflicts with research integrity logic.

Assuring Research Competence and Responsibility to Conduct Trustworthy Research

Recall the discussion in previous chapters noting that individuals often engage in unethical behavior without awareness that they are doing so, or believing that social or situational forces are pushing them to cross ethical boundaries. Incentives and decisions should therefore be designed so that the unethical option is less tempting. Individuals commit wrongful acts when, judging the situation they face, the benefits of wrongdoing outweigh the costs and others are doing the same without consequence; they then justify their actions in ways that allow them to maintain their self-image as a good person (Zhang et al., 2014).

The workplace is a moral laboratory that can be used for ethical learning and character development. Doing so requires creating an ethical culture, developing psychological safety, and internalizing moral reflection over the long term. These conditions are necessary to protect individuals from the strong situational and social pressures that can lead well-intentioned people to do immoral things (see the preceding paragraph). Barriers to a moral learning environment include defensiveness focusing on self-validation, moral disengagement, and framing decisions exclusively in economic terms (Smith & Kouchaki, 2020). Such a moral laboratory should utilize well-known strategies. Stop unethical behavior early, as it will escalate over time and individuals will try to justify their missteps. Training programs are necessary to develop bystander skills that bring attention to missteps, which can change the organizational culture if supported by superiors and colleagues, with assurance that such reporting will have an impact (Sanderson, 2020). It is essential for organizations to foster a culture in which ethical considerations are a regular part of functioning (as in the examples described earlier in this chapter): a shared purpose and set of values that provide a shared mindset for all members and that are credible both inside and outside the organization. Three conditions have been found important for ethics to take root in organizations: a sense of responsibility to society, conditions for ethical deliberation and learning, and respect for moral autonomy in which individuals can identify issues without fear of retaliation (Martinez et al., 2021). Most of the work on ethical organizations has been done in business settings; it is very scarce in academic organizations, and what has been done shows a disconcerting missing of the mark.

Studies done in Denmark illuminate fundamental problems in currently accepted versions of institutional responsibility for research integrity. Students are trying to establish a research career in an environment in which gaining a foothold may


legitimize minor compromises to one's integrity and in which one cannot "buck the system of incentives and power." Early career researchers should not be held responsible for changing the research culture, and certainly not through integrity courses; the institution and senior researchers must take this responsibility (Sarauw, 2021). Individualization and responsibilization of the most junior actors in the research and higher education system is not an adequate response to fostering a culture of research integrity (Sarauw et al., 2019). This response is likely one explanation of the "failure" of research integrity education and, since RCR training is widely mandated, contributes to preserving the status quo and victimizing trainees.

The IRB system, largely anchored in individual RPOs, is a major gatekeeper for the quality of research and the protection of research subjects, and there are growing calls for its upgrade. Prospective review of research is important but insufficient to address all aspects of research integrity; without extension, the central objective of ensuring that research is conducted in the most ethical way cannot be met. Addition of retrospective review would be helpful but is likely to be resisted (Dawson et al., 2019), as it may expose poor practice. Others question whether IRBs are fit for purpose for big data research, citing a number of concerns: if data are considered non-identifiable and researchers did not engage directly with subjects, the study can be exempt under US law; it is increasingly difficult to anonymize data; and the nature of the risks changes to include privacy breaches and algorithmic discrimination, among others (Ferretti et al., 2021). Finally, Lynch and colleagues have reaffirmed the long-standing problem, backed by an extensive body of literature, that there are no accepted metrics for assessing IRBs and thus no way to know whether they provide adequate protection to research participants (Lynch et al., 2022).

Navigating Competing Institutional Logics: Market Logic; Research Integrity Logic

Working across the network of institutions that fund, produce, and disseminate research requires dealing with their quite different institutional logics. In this section, two famous cases illustrate the difficulties that arise when the various logics in conflict in a particular situation are not examined. Logics are distinct constellations of beliefs that define who we are and what we do. Many institutions central to science have been heavily influenced by market logic. A study of senior research-intensive universities documents this transition from a focus on supporting social goals (the public good) to either integrating market logic or converting heavily to it, prioritizing serving the economy, including developing deeper ties with industry (Gumport, 2019).

Journals are the most prominent example of market logic run amok. Publishers not only obtain intellectual property rights to academic work but also exert major control over academic research quality by selecting what is to be published (Knoche & Fuchs, 2020). Much has been made of the dramatic fiscal inequities in


current arrangements: the fees that authors pay to publishers come from universities or funders; yet reviewers work for free, and large commercial publishers post profits of 30%, which does not suggest market competition. Some describe this as a massive transfer of public funds into private hands, which the law allows. Current plans for journal reform (Plan S; see Appendix for further details) are largely aimed at broadening access to the scientific literature, which may have already been paid for with public funds. Financial rearrangements under Plan S simply require costs to be paid by funders, universities, or authors, making such arrangements impossible for many countries in the world and ignoring the inequity of subsidizing private entities with public funds. Restructuring the scientific publishing system into nonprofit organizations with low fees has been suggested (Knoche & Fuchs, 2020). More problematic for our discussion of research integrity are two practices: (1) non-transparent acceptance of submitted manuscripts to support journal branding, with the disastrous effect of accepting only those with positive, "showy" results, and (2) the conflation of journal quality with proxy indicators of research impact (more on that issue in Chap. 8).

Compounding these unfair (but not illegal) business practices is evidence of conflict of interest among editors and editorial boards, providing an additional incentive to adhere to market logic instead of scientific logic and values. Documentation from two medical specialties is illustrative. In dermatology, 87% of editors received payments from pharmaceutical companies, and four journals had more than a quarter of their editorial staff receiving more than $100,000 (Updyke et al., 2018). Among the editorial boards of the top five emergency medicine journals, a third had such a financial conflict of interest, although only one of the journals disclosed the presence of payments (Niforatos et al., 2020). A similar situation is found in public health journals, a field with widespread industry involvement: a third of such journals had disclosure forms for financial conflicts of interest for authors but not for editors or reviewers. The way toward policy improvement is clear: disclosure of COI to and by journals, enforced editorial recusal, and transparent policies and practices publicized to readers (Ralph et al., 2020). Editorial objectivity should be based on avoidance of economic and political biases, a role compared to that of a judge in the judiciary (Updyke et al., 2018). Currently, undeclared conflict of interest is a privilege that scientists, editors, editorial boards, and others keep for themselves, against clear evidence that it is harmful to the institutions of science. They and norm-setting scientific institutions have failed to self-regulate. What logic supports the justification and continuation of these practices?

Two historical European cases of the management of research misconduct offer lessons in conflicts of logics. Recall that institutional logics are belief systems that influence the behavior of institutions; they may conflict or coexist. The first example involves the British Psychological Society investigating some of the scholarship of a deceased member, Hans Eysenck, whose work focused on links between personality traits and the causes, prevention, and treatment of cancer and heart disease and is now thought to be "unsafe." Classical medical logic (do no harm to patients), academic or scientific logic (truth claims should prevail over the reputation of an individual),


and market-oriented logic (brand and image) all existed in the Eysenck case. While professional bodies, universities, and publishers are all institutions of science, their roles in dealing with research misconduct are often unclear, each assuming that the other should act (diffused responsibility). The British society had previously declined to investigate an earlier member (Cyril Burt), apparently fearing reputational damage, and was guided by market logic, as were journal editors, leaving the integrity of the evidence base unresolved (Craig et al., 2021).

A modern-day case involved Paolo Macchiarini, a surgeon at the Karolinska Institute and its associated hospital, who was eventually found to have committed research misconduct. Complete case details and an excellent analysis, describing conflict among medical logic (the well-being of the patients involved), market logic (institutional reputation, brand, citations), and scientific logic, may be found in Berggren and Karabag (2019). In this case, the three institutional logics competed for long periods of time, based in part on the relative power of the parties involved, yielding a fragmented and ineffective response. Lessons learned: universities increasingly outsource scientific quality control (scientific logic) to journals, which in this case showed variable competency and are themselves often driven by market logic; institutional officials and co-authors were not held responsible; and the whistleblowers were physicians, displaying medical logic to prevent harm to patients, but faced serious backlash. The entire response was so fragmented across institutions and their competing logics that resolution was slow and twice required outside media exposure (Berggren & Karabag, 2019).

The institutional logics approach provides a valuable framework for analysis beyond the usual consideration of ethics training and compliance focusing on individuals, with its hit-and-miss intraorganizational coordination. Through comparison of the Macchiarini case with a case at Duke University, which eventually found researcher Anil Potti to have committed research misconduct, the authors draw attention to limitations of the US approach, which charged only a single individual and not the institution (Berggren & Karabag, 2019). "Misconduct is seldom committed only by an individual or a single organization but must be understood in a wider context" (Berggren, p. 437).

It is fair to say that market logic has usurped research integrity logic in journal practices and has largely done so in industry-funded research trials in many areas of science. Confusion can also be seen in authorship practices: is scientific publishing to be seen as a gift economy, or is it described more appropriately in the logic of a market model (Hesselmann et al., 2021)? Conflict of interest, prominent in both the Macchiarini and Duke cases, is very inadequately controlled under the current institutional logic (Montgomery & Oliver, 2009), which is confused between scientific logic and market logic. Resolving the overreach of market logic in the case of journals is simple: restore non-profit academic publishing that accurately documents the scientific record; require strict compliance with conflict of interest policies heavily favoring recusal or banning of journal editors, editorial board members, and reviewers with externally validated conflicts of interest; and test the ability of mandatory independent infrastructures to assure lack of conflict in industry-funded trials.


A higher-level look at institutional logics questions whether the widely adopted business/economic framework of New Public Management (NPM) is suited to allowing science and research to work well. NPM logic has to some extent been imposed on academia with the expectation that it should work there. This shift in management philosophy prompts several reflections. First, prior to NPM the academy was not perfectly organized, but the imposition of NPM raises the question of whether it is a good thing for science to manage itself and develop independently. Second, certain practices inherent in NPM help to explain why the allocation of research resources has moved toward criteria that are easy to measure in quantitative terms; while conditions for the success of a practice (in this case research) can originate from the "outside," "invasions" of a different logic such as NPM can lead to severe problems. Third, Kruse (2022) suggests that motivational pressure on young scientists to produce, consistent with NPM, can yield suffering and likely hinders scientific excellence, in part because it destabilizes the logics of education and research. Clearly, while the institution of a free science requires some sort of quality control, logics such as NPM might better be used selectively for challenges to which they are better suited, such as using economics to stabilize the researcher job market (Kruse, 2022).

Lessons From the Theory of Institutional Corruption; A Lens for Return to Basic Scientific Values

Introduced in Chap. 5, the theory of institutional corruption suggests that a polity can be corrupted by extraneous influences that distort its decision-making process and thereby impair its capacity to function in accordance with its fundamental values. In the case of science, the relevant values are the honesty, transparency, objectivity, and stewardship necessary to create and sustain valid knowledge. Generally, people in these institutions do not have corrupt motives but practice in institutions that have failed to uphold values basic to research integrity. Because such corrupting practices may be widespread, individuals regard them as part of a legitimate though flawed process. Surely, practices documented in other chapters of this book can be thought of as institutional corruption, that is, deviation from the values of science (Thompson, 2018). These include widespread questionable research practices; journals that choose novel manuscripts with positive findings instead of documenting the cumulative progress of science, including negative findings; "predatory" journals that make no effort at quality control; near-complete disregard of conflicts of interest throughout the scientific endeavor; widespread use of metrics that bear little relationship to the quality of the scientific work; and others. While none of these practices is illegal, they surely undermine the basic purpose of science.

Other scholars of institutional corruption reassert that officials, who have a duty to act in a way that is coherent with the basic purpose and integrity of their institution, can identify those who are blameworthy in undermining that purpose and must accept responsibility for restoring what corruption has disrupted. In other instances,


deviation from basic purpose is due to complex interactions of multiple agents and practices so entrenched in the organization that it is impossible to trace blame (Ferretti, 2019). Reforms for a system of institutions thought to exhibit institutional corruption should include finding alternatives for the functions the corruption serves (changing the rules and incentives to align with the values). Reform usually comes from outside established leaders and existing institutions, since leaders "in the system" profit from the status quo. Reform efforts need to aim at the several institutions in the system (in this case, the system of producing and distributing scientific knowledge), since their incentives are intertwined (Thompson, 2018).

Science Institutions and Public Policy

What role does science policy play as a source of guidance to assure that research is produced and disseminated with integrity, even amid conflicting logics and the economic focus of NPM? Since a significant portion of academic research is funded from public sources, policy about the use of those funds is adopted by institutions as a condition of receiving funds and should support its public purpose. Some institutions and associations of universities contribute substantially to assuring that research integrity, enforced by scientific self-regulation, supports and furthers federal policies. An example is the Association of Biomolecular Resource Facilities and its Committee on Core Rigor and Reproducibility, which supports NIH efforts to enhance the reproducibility of research findings through increased scientific rigor. It notes that reproducible research requires rigorously controlled and documented experiments using validated reagents. Core research facilities at research-intensive universities provide training, equipment, consultation, and formal quality management (Knudson et al., 2019). This is a strong example of effective self-regulation by science institutions, conforming to core science values.

Some impressive self-regulation work has also been done in the area of authorship. Long contentious, because scientific authorship is a central currency and is neither covered by government regulation nor well defined by the institutions of science, authorship has been addressed by the scientific community through not only a vocabulary for controversial authorship practices but also a taxonomy offering finer recognition of, and responsibility for, forms of contributorship. This work serves the dual purpose of more fairly acknowledging author contributions while also describing the types of labor critical to scientific research. Fair authorship is critical to the reward structure of science (Lariviere et al., 2020).

Given the nature and history of science, governance of research integrity is expected to be managed with a combination of scientific self-regulation and public policy. The issues, the range of institutions involved, and the necessity of changing important behaviors, norms, and incentives make research integrity policy complex. Public policy makers generally focus on only a small number of dimensions of a


much larger issue and leave the other parts untouched, not attending to the magnitude of the issue. Political attention may evolve in fits and starts and suffer from a lack of timely data about any or all parts of the system of institutions that produce and disseminate research (Epp, 2018).

While much has been left to self-correction by the scientific community, serious concerns have for some time surfaced about significant underinvestment by the institutions of science in replication of scientific findings; these concerns have not yet led to sustainable change. Replicability is widely understood to ground the authority of science, and lack of this evidence affects all consumers of science (Romero, 2019). Without independent replications, error cannot be ruled out and stalemates cannot be settled. Statistical tools can help to detect publication bias and errors in reporting (one such tool is sketched at the end of this section) but are clearly insufficient. Romero (2018) suggests that the social structure of scientific work undermines replication and leads to nonreplicable findings in many fields. No one is responsible for doing replication work, what little is done is not systematic, and the lack of economic incentives ensures that it is not a desirable career track. If such incentives were reversed, with funders investing in important replications and employers recognizing this work in promotion and tenure criteria, scientists might change their practices to replicate research before it is published. Adequate self-regulation requires cultural and social change in the scientific community (Romero, 2018) and across the institutions that create incentives and conditions of work. But underlying institutional responsibility in general is a willingness to participate in empirical work that will describe the state of science practice. In The Netherlands, such a survey was actively resisted, in part because of concerns about negative publicity (de Vrieze, 2020).

So, the answer to the question at the beginning of this section is that, given the nature of science and how it is produced and disseminated, science policy will always have to be paired with self-regulation by the scientific community. Each should be expected to contribute to the goal of science with integrity (see Chap. 5 for an expanded discussion of science self-regulation), with constant adaptation toward improvement.
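
As an illustration of the statistical tools for detecting publication bias mentioned above, the following minimal sketch applies Egger's regression test for funnel-plot asymmetry, a standard method chosen here for illustration rather than one named by the author; the effect sizes and standard errors are invented.

import numpy as np
import statsmodels.api as sm

# Invented study-level data: estimated effects and their standard errors.
effects = np.array([0.42, 0.35, 0.51, 0.20, 0.61, 0.15, 0.48, 0.30])
ses = np.array([0.21, 0.15, 0.25, 0.08, 0.30, 0.05, 0.22, 0.12])

# Egger's test: regress the standardized effect on precision;
# an intercept far from zero suggests small-study or publication bias.
y = effects / ses
X = sm.add_constant(1.0 / ses)
fit = sm.OLS(y, X).fit()
print("intercept, slope:", fit.params)
print("intercept p-value (asymmetry test):", fit.pvalues[0])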

Conclusion

Institutions are joint activities governed by social norms in accordance with collective ends that define their purpose. For science, those collective ends are the production and dissemination of valid knowledge, produced with integrity. Evaluation of the performance of institutions as sites of collectivized responsibility is crucial. Here, we have considered those functions as responsibility for the quality of research produced and disseminated, compliance with the letter and spirit of regulations, and assurance of the skills needed to produce trustworthy science with integrity. In fact, the movement to data-intensive science will require an even higher level of trust in scientific institutions because of the distributed nature of data dissemination systems and the limited


ability of individuals to see the system as a whole (Gabrielson, 2020). A framework of institutional logics provides a way to analyze conflicts both within institutions and across the organizations central to science: universities, commercial entities, journals, quasi-professional bodies, funders, and regulatory agencies. The framework of institutional corruption requires returning to science's core values. The record examined here supports Douglas's (2014) proposition (see the preface to this chapter) that current institutions in science do not adequately ensure that science benefits society. There are, however, important efforts toward reaching this goal.

References Aho, B. (2020). Violence and the chemicals industry: Reframing regulatory obstructionism. Public Health Ethics, 13(1), 50–61. https://doi.org/10.1093/phe/phaa004 Asiimwe, R., Lam, S., Leung, S., Wang, S., Wan, R., Tinker, A., McAlpine, J. N., Woo, M. M. M., Huntsman, D. G., & Talhouk, A. (2021). From biobank and data silos into a data commons: convergence to support translational medicine. Journal of Translational Medicine, 19(1), 493. https://doi.org/10.1186/s12967-­021-­03147-­z Berggren, C., & Karabag, S.  F. (2019). Scientific misconduct at an elite medical institute: The role of competing institutional logics and fragmented control. Research Policy, 48, 428–443. https://doi.org/10.1016/j.respol.2018.03.020 Carver, J. C., Weber, N., Ram, K., Gesing, S., & Katz, D. S. (2022). A survey of the state of the practice for research software in the United States. PeerJ Computer Science, 8, e963. https:// doi.org/10.7717/peerj-­cs.963 Craig, R., Pelosi, A., & Tourish, D. (2021). Research misconduct complaints and institutional logics: The case of Hans Eysenck and the British Psychological Society. Journal of Health Psychology, 26(2), 296–311. https://doi.org/10.1177/1359105320963542 Dal-Re, R., Kesselheim, A. S., & Bourgeois, F. T. (2000). Increasing access to FDA inspection reports on irregularities and misconduct in clinical trials. JAMA, 323(19), 1903–1904. https:// doi.org/10.1001/jama.2020.1631 Dawson, A., Lignou, S., Siriwardhana, C., & O’Mathuna, D.  P. (2019). Why research ethics should add retrospective review. BMC Medical Ethics, 20(1), 68. https://doi.org/10.1186/ s12910-­019-­0399-­1 DeVito, N., Bacon, S., & Goldacre, B. (2020). Compliance with legal requirements to report clinical trial results on ClinicalTrials.gov: A cohort study. Lancet, 395(10221), 361–369. https://doi. org/10.1016/S0140-­6736(19)33220-­9 de Vrieze, I. (2020, November 25). Largest ever research integrity survey flounders as universities refuse to cooperate. Science. Douglas, H. (2014). The moral terrain of science. Erkenntnis, 79, 961–979. https://doi.org/10.1007/ s10670-­013-­9538-­0 Durkin, A., Sta Maria, P. A., Willmore, B., & Kapczynski, A. (2021). Addressing the risks that trade secret protections pose for health and rights. Health and Human Rights Journal, 23(1), 129–144. Epp, D. A. (2018). The structure of policy change. University of Chicago Press. Estienne, M., Chevalier, C., Fagard, C., Letondal, P., & Giesen, E. (2020). Responsible scientific research at Inserm: A field study. International Journal of Metrology and Quality Engineering, 11, 1. https://doi.org/10.1051/ijmqe/2019016 Ferretti, A., Ienca, M., Sheehan, M., Blasimme, A., Dove, E. S., Farsides, B., Friesen, P., Kahn, J., Karlen, W., Kleist, P., Liao, S. M., Nebeker, C., Samuel, G., Shabani, M., Velarde, M. R.,


& Vayena, E. (2021). Ethics review of big data research: What should stay and what should be reformed? BMC Medical Ethics, 22, 51. https://doi.org/10.1186/a12910-­021-­00616-­4 Ferretti, M. P. (2019). A taxonomy of institutional corruption. Social Philosophy & Policy, 35(2), 242–263. https://doi.org/10.1017/S0265052519000086 Gabrielson, A.  M. (2020). Openness and trust in data-intensive science: The case of biocuration. Medicine Health Care and Philosophy, 23(3), 497–504. https://doi.org/10.1007/ s11019-­020-­09960-­5 Garmendia, C.  A., Gorra, L.  N., Rodriguez, A.  L., Trepka, M.  J., Veledar, E., & Madhivanan, P. (2019). Evaluation of the inclusion of studies identified by the FDA as having falsified data in the results of meta-analyses: The example of the Apixaban trials. JAMA Internal Medicine, 179(4), 582–584. https://doi.org/10.1001/jamainternmed.2018.7661 Grey, A., Bolland, M., Gamble, G., & Avenell, A. (2019). Quality of reports of investigations of research integrity by academic institutions. Research Integrity & Peer Review, 4, 3. https://doi. org/10.1186/s41073-­019-­0062-­x Grey, A., Avenell, A., & Bolland, M. (2022). Timeliness and content of retraction notices for publications by a single research group. Accountability in Research, 29(6), 347–378. https://doi. org/10.1080/08989621.2021.1920409 Gumport, P.  J. (2019). Academic Fault Lines; The Rise of Industry Logic in Public Higher Education. Johns Hopkins University Press. Gunsalus, C. K., Marcus, A. R., & Oransky, I. (2018). Institutional research misconduct reports need more credibility, JAMA, 319(13):1315, 1316. https://doi.org/10.1001/jama.2018.0358 Hayden, J. A., Ellis, J., Ogilvie, R., Boulos, L., & Stanojevic, S. (2021). Meta-epidemiological study of publication integrity, and quality of conduct and reporting of randomized trials included in a systematic review of low back pain. Journal of Clinical Epidemiology, 134, 65–78. https://doi.org/10.1016/j.jclinepi.2021.01.020 Hesselmann, F., Schendzielorz, C., & Sorgatz, N. (2021). Say my name, say my name: Academic authorship conventions between editorial policies and disciplinary practices. Research Evaluation, 30(3), 382–392. https://doi.org/10.1093/reseval/rvab003 Houdek, P. (2020). Fraud and understanding the moral mind: Need for implementation of organizational characteristics into behavioral ethics. Science & Engineering Ethics, 26(2), 691–707. https://doi.org/10.1007/s11948-­019-­00117-­z Jukola, S. (2021). Commercial interests, agenda setting, and the epistemic trustworthiness of nutrition science. Synthese, 198(Suppl 10), 2629–2646. Kalichman, M. (2020). Survey study of research integrity officers’ perceptions of research practices associated with instances of research misconduct. Research Integrity and Peer Review, 5(1), 17. https://doi.org/10.1186/s41073-­020-­00103-­1 Kataoka, Y., Banno, M., Tsujimoto, Y., Ariie, T., Taito, S., Suzuki, T., Oide, S., & Furukawa, T. A. (2022). Retracted randomized controlled trials were cited and not corrected in systematic reviews and clinical practice guidelines, Journal of Clinical Epidemiology 150:90–87. https:// doi.org/10.1016/j.jclinepi.2022.06.015 Keestra, S.  M., Rodgers, F., Gepp, S., Grabitz, P., & Bruckner, T. (2022). Improving clinical trial transparency at UK universities: Evaluating 3 years of policies and reporting performance on the European clinical trial register. Clinical Trials, 19(2), 217–223. https://doi. org/10.1177/17407745211071015 Knoche, M., & Fuchs, C. (2020). 
Science communication and open access: The critique of the political economy of capitalist academic publishers as ideology critique. TripleC, 8, 508–534. https://doi.org/10.31269/triplec.v18i2.1183 Knowles, R., Mateen, B. A., & Yehudi, Y. (2021). We need to talk about the lack of investment in digital research infrastructure. Nature Computational Science, 1, 169–171. https://doi. org/10.1038/s43588-­021.00048-­5 Knudson, K. L., Carnahan, R. H., Hegstad-Davies, R. L., Fisher, N. C., Hicks, B., Lopez, P. A., Meyn, S. M., Mische, S. M., Weis-Garcia, F., White, L. D., & Sol-Church, K. (2019). Survey on scientific shared resource rigor and reproducibility. Journal of Biomolecular Techniques, 30(3), 36–44. https://doi.org/10.7171/jbt.19-­3003-­001

131

Kruse, J. (2022). How can science and research work well? Toward a critique of new public management practices in academia from a socio-philosophical perspective. Frontiers in Research Metrics and Analytics, 7, 791114. https://doi.org/10.3389/frma.2022.791114 Labib, K., Roje, R., Bouter, L., Widdershoven, G., Evans, N., Marusic, A., Mokkink, L., & Tijdink, J. (2021). Important topics for fostering research integrity by research performing and research funding organizations: A Delphi consensus study. Science & Engineering Ethics, 27(4), 47. https://doi.org/10.1007/s11948-­021-­00322-­9 Lariviere, V., Pontille, D., & Sugimoto, C. R. (2020). Investigating the division of scientific labor using the contributor roles taxonomy (CRediT). Quantitative Science Studies, 2(1), 111–128. https://doi.org/10.1162/qss_a_00097 Legg, T., Hatchard, J., & Gilmore, A. B. (2021). The science for profit model – How and why corporations influence science and the use of science in policy and practice. PLoS One, 16(6), e0253272. https://doi.org/10.1371/journal.pone.0253272 Loikith, L., & Bauchwitz, R. (2016). The essential need for research misconduct allegation audit. Science & Engineering Ethics, 22(4), 1027–1049. https://doi.org/10.1007/s11948-­016-­9798-­6 London, A. J. (2022). For the common good. Oxford University Press. Lynch, H. F., Eriksen, W., & Clapp, J. T. (2022). “We measure what we can measure”: Struggles in defining and evaluating institutional review board quality. Social Science & Medicine, 292, 114614. https://doi.org/10.1016/j.socscimed.2021.114614 Martinez, C., Skeet, A.  G., & Sasia, P.  M. (2021). Managing organizational ethics: How ethics becomes pervasive within organizations. Business Horizons, 64(1), 83–92. https://doi. org/10.1016/j.bushor.2020.09.008 Montgomery, K., & Oliver, A. L. (2009). Shifts in guidelines for ethical scientific conduct: How public and private organizations create and change norms of research integrity. Social Studies of Science, 39(1), 137–155. https://doi.org/10.1177/0306312708097659 Moynihan, R., Albarqouni, L., Nangla, C., Dunn, A. G., Lexchin, L., & Bero, L. (2020). Financial ties between leaders of influential US professional medical associations and industry: Cross sectional study. BMJ, 369, m1505. https://doi.org/10.1136/bmj.m1505 Niforatos, J. D., Narang, J., & Trueger, N. S. (2020). Financial conflicts of interest among emergency medicine journals’ editorial boards. Annals of Emergency Medicine, 75(3), 418–422. https://doi.org/10.1016/j.annemergmed.2019.02.020 Piller, C. (2020). Transparency on trial. Science, 367(6475), 240–243. https://doi.org/10.1126/ science.367.6475.240 Pinto, M. F. (2020). Commercial interests and the erosion of trust in science. Philosophy of Science, 87, 1003–1013. https://doi.org/10.1086/710521 Plottel, G.  S., Adler, R., Jenter, C., & Block, J.  P. (2020). Managing conflicts and maximizing transparency in industry-funded research. AJOB Empirical Bioethics, 11(4), 223–232. https:// doi.org/10.1080/23294515.2020.1798562 Ralph, A., Petticrew, M., & Hutchings, A. (2020). Editor and peer review financial conflict of interest policies in public health journals. European Journal of Public Health, 30(6), 1230–1232. https://doi.org/10.1093/eurpub/ckaa183 Ramachandran, R., Morten, C.  J., & Ross, J.  S. (2021). Strengthening the FDA’s enforcement of ClinicalTrials.gov reporting requirements. JAMA, 326(21), 2131–2132. https://doi. org/10.1001/jama.2021.19773 Rauch, J. (2021). The constitution of knowledge; A defense of truth. 
Brookings Institution Press. Ribeiro, M. (2022). Towards a sustainable European research infrastructure ecosystem. In H. P. Beck & P. Charitos (Eds.), The economics of big science. Springer. Rilinger, G. (2021). Who captures whom? Regulatory misperceptions and the timing of cognitive capture. Regulation and Governance, 15, 43. https://doi.org/10.1111/rego.12438 Romero, F. (2019). Philosophy of science and the replicability crisis. Philosophy Compass, 14(11), e12633. https://doi.org/10.1111/phc3.12633 Romero, F. (2018). Who should do replication labor. Advances in Methods and Practices in Psychological Science, 1(4), 516–537. https://doi.org/10.1177/2515245918803619


Sanderson, C. A. (2020). Why we act. Harvard University Press. Sarauw, L. L. (2021). The reversed causalities of doctoral training on research integrity: A case study from a medical faculty in Denmark. Journal of Academic Ethics, 19, 71–93. https://doi. org/10.1007/s10805-­020-­09388-­9 Sarauw, L. L., Degn, L., & Orberg, J. W. (2019). Research development through doctoral training in research integrity. International Journal for Academic Development, 24(2), 178–191. https:// doi.org/10.1080/1360144X.2019.1595626 Sayers, E. W., Beck, J., Bolton, E. E., Bourexis, D., Brister, J. R., Canese, K., Comeau, D. C., Funk, K., Kim, S., Klimke, W., Marchler-Bauer, A., Landrum, M., Lathrop, S., Lu, Z., Madden, T. L., O’Leary, N., Phan, L., Rangwala, S. J., Schneider, V. A., et al. (2022). Database resources of the National Center for Biotechnology Information. Nucleic Acids Research, 49(D1), D10– D17. https://doi.org/10.1093/nar/gkaa892 Scanff, A., Naudet, F., Cristea, I. A., Moher, D., Bishop, D. V. M., & Locher, C. (2021). A survey of biomedical journals to detect editorial bias and nepotistic behavior. PLoS Biology, 19(11), e3001133. https://doi.org/10.1371/journal.pbio.3001133 Scepanovic, R., Labib, K., Buljan, I., & Tijdink, J. (2021). Practices for research integrity promotion in research performing organisations and research funding organisations: A scoping review. Science & Engineering Ethics, 27(1), 4. https://doi.org/10.1007/s11948-­021-­00281-­1 Schocker, F., Fehrenbach, H., & Schromm, A. (2021). Mission impossible? EMBO Reports, 22(7), e52334. https://doi.org/10.15252/embr.202052334 Seife, C. (2015). Research misconduct identified by the US Food and Drug Administration; out of sight, out of mind, out of the peer-reviewed literature. JAMA Internal Medicine, 175(4), 567–577. https://doi.org/10.1001/jamainternmed.2014.7774 Smith, I.  H., & Kouchaki, M. (2020). Ethical learning: The workplace as a moral laboratory for character development. Social Issues and Policy Review, 15(1), 277–322. https://doi. org/10.1111/sipr.12073 Spector-Bagdady, K. (2021). Governing secondary research use of health data and specimens: The inequitable distribution of regulatory burden between federally funded and industry research. Journal of Law and the Biosciences, 8(1), Isab008. https://doi.org/10.1093/jlb/lsab008 Thompson, D. F. (2018). Theories of institutional corruption. Annual Review of Political Science, 21, 495–513. https://doi.org/10.1146/annurev-­polisci-­120117-­110316h Titus, S., & Kornfeld, D. S. (2021). The research misconduct post hoc inquiry as a measure of institutional integrity (DR). Accountability in Research, 28(1), 54–57. https://doi.org/10.108 0/08989621.2020.1801431 Updyke, K.  M., Niu, W., St Clare, C., Schlager, E., Knabel, M., Leader, N.  F., Sacotte, R.  M., Dunnick, C.  A., & Dellavalle, R.  P. (2018). Editorial boards of dermatology journals and their potential financial conflicts of interest. Dermatology Online Journal, 24(8), 13030/ qt198587m9. Wager, E., Kleinert, S., & CLUE Working Group. (2021). Cooperation & liaison between universities & editors (CLUE): Recommendations on best practice. Research Integrity & Peer Review, 6(1), 6. https://doi.org/10.1186/s41073-­021-­00109-­3 Zarin, D.  A., Fain, K.  M., Dobbins, H.  D., Tse, T., & Williams, R.  J. (2019). 10-year update on study results submitted to ClinicalTrials.gov. New England Journal of Medicine, 381(20), 1966–1974. https://doi.org/10.1056/NEJMsr1907644 Zhang, T., Gino, F., & Bazerman, M.  H. (2014). 
Morality rebooted: Exploring simple fixes to our moral bugs. Research in Organizational Behavior, 34, 63–79. https://doi.org/10.1016/j. riob.2014.10.002 Zou, C.  X., Becker, J.  E., Phillips, A.  T., Garritano, J.  M., Krumholz, H.  M., Miller, J.  E., & Ross, J. S. (2018). Registration, results reporting, and publication bias of clinical trials supporting FDA approval of neuropsychiatric drugs before and after FDAAA: A retrospective cohort study. Trials, 19(1), 581. https://doi.org/10.1186/s13063-­018-­2957-­0

Chapter 8

Science Evaluation: Peer Review, Bibliometrics, and Research Impact Assessment

Evaluative systems are part of the ethos of science: a community judging each other's work and validating, challenging, discrediting, and/or correcting it. This responsibility is currently fulfilled through three mechanisms: peer review, bibliometrics, and research impact assessment (assessment of the science produced in an institution or in a program of research, judged by its societal impact). Each in its own way demonstrates strengths and significant flaws, leaving the goal of research integrity insufficiently supported. Some conceptual, methodological, and normative breakthroughs can be noted. Here, we examine the implications of these several evaluative forms for the goal of research integrity and suggest an ethical analysis of the current system.

Peer Review

Peer review is currently heavily embedded in knowledge generation systems, perceived as a basic method for establishing quality (Tennant & Ross-Hellauer, 2020). It is widely used in critical decision areas: academic hiring and promotion, allocation of research monies, and publication of manuscripts. But its effectiveness remains largely assumed rather than demonstrated, leaving its fairness, reliability, transparency, and sustainability repeatedly questioned. Potential biases from various parties abound. Reviewers are typically protected by anonymity and are neither rewarded for an accurate or fair review nor held accountable for the opposite, with few formal incentives to act for the benefit of science as opposed to their own self-interest. In its current form, it is unclear whether the incentives embedded in peer review are consistently aligned with the goals of science or with research integrity. For example, in publication peer review (the most studied form), some worrisome practices have been noted, but without evidence of how widespread they are. Cheating



reviewers have been shown to lower the average quality of published literature and can cause good papers to be rejected (D'Andrea & O'Dwyer, 2017). Editorial decisions are opaque, using peer reviewers' suggestions or not. Editors often fail to disclose their conflicts of interest, whether personal or in support of their journal's financial interests (Tennant & Ross-Hellauer, 2020). Cronyism is always a possibility. The developmental functions of peer review (helping authors and editors to improve manuscripts) are greatly understudied.

The history of peer review provides perspective on when and why it was adopted, appearing in the late twentieth century in response to pressure for science to be more accountable. Peer review within science was put forward as ensuring its quality and trustworthiness, rewarding good science, and correcting bad science. Available evidence raises questions about whether it does any of these things well (Baldwin, 2020).

Peer review has historically been conceptualized as a gift economy running on perpetually renewed experiences of mutual indebtedness among members of an intellectual community with shared norms. It has conventionally been presented as self-regulating. But broader shifts in scholarly publishing reflecting commodification have undermined gift exchanges. Editors are stewards of journals that are now judged by metrics such as journal impact factor (JIF) scores, and high JIF scores (the yearly mean number of citations to articles a journal published in the previous 2 years) increase manuscript submissions, overwhelming the system. Article fees paid by authors mean that authors feel they have already paid and are not indebted to the scientific community. The growing dominance of a handful of commercial publishers, and their significant documented revenues, have raised a chorus of voices about the unfairness of volunteered peer review work. An implicit assumption of peer review is a stable community of exchange, now disrupted by the economy of scholarly publishing as a business (Kaltenbrunner et al., 2022).

Recent discussion of peer review in academic hiring and promotion decisions has largely focused on the widespread use of bibliometrics as a major evaluative criterion, with less attention to judgments made by peers. Peer review in funding decisions is the most recently studied, with a more limited evidence base than the most commonly addressed publication peer review. Consequences of current review practices, and efforts to improve or replace them, are also addressed.
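
For reference, the parenthetical definition of the JIF above corresponds to the conventional two-year impact factor, which can be written as

\[
\mathrm{JIF}_{y} = \frac{C_{y}(y-1) + C_{y}(y-2)}{N_{y-1} + N_{y-2}},
\]

where \(C_{y}(t)\) is the number of citations received in year \(y\) to items the journal published in year \(t\), and \(N_{t}\) is the number of citable items the journal published in year \(t\).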

Peer Review in Funding Decisions

Peer review decisions award more than 95% of academic medical research funding. Partial evidence shows that ratings vary considerably among reviewers, with some evidence of the bias of cronyism. While varying, well-argued perspectives about the quality and value of a study are useful, such ratings are also used in highly competitive decisions where variation that occurs among reviews has the potential to be


unfair. The basic assumption that there is consensus among peers on evaluation criteria, necessary for reliability, is not supported by the evidence, although this says nothing about the validity of such judgments. There is conflicting evidence about whether more standardized peer review can reach a level of consistency (Guthrie et al., 2017) important for fair research funding decisions. For these reasons, reliability, fairness, and predictive validity in peer review of grant applications (Hug & Aeschbach, 2020) are important. Examples are explored here.

In the social sciences, Jerrim and de Vries (2020) document a low level of consistency among reviewers (correlation 0.2), noting that a single negative peer review decreases funding chances from 55% to 25%, even when the proposal was otherwise rated highly. The outcome of the review can depend on the assigned reviewer. The number of weaknesses cited likely does not track with the numeric rating, leaving a large subjective element perhaps related to biases against outgroup members (Pier et al., 2018). And the use of peer review for grant funding recommendations has shown, over several studies, a bias against novel, breakthrough research (Ayoubi et al., 2021). Forscher and colleagues note that for NIH grant review the average reliability with three reviewers is 0.2, requiring 10 reviewers to reach an acceptable reliability of 0.5; little attention has been paid to the effects of low reliability on harms to scientific progress and researcher career trajectories (Forscher et al., 2019). Nearly 40% of NIH grant applicants found reviewer feedback to be not useful, suffering not only from bias against innovative ideas but also from a lack of inter-panel reliability; required on resubmission to address reviewer judgments, applicants found that the concerns of the original review panel were not those of the panel that judged their resubmission (Gallo et al., 2021).

Resolution of grant review issues could begin with standardized evaluative criteria, replacing what now appear to be differing, ambiguous, or vague criteria. Instructions that explicitly define how the quality and magnitude of weaknesses align with a particular rating should be tested. Some suggest that, above a certain quality level, a lottery could yield a fairer outcome (Pier et al., 2018). Vinkenburg and colleagues (2021) provide details on process optimization, discretion, and bias elimination in peer review. Suggestions include defining vague criteria such as excellence or research potential, evaluating per criterion rather than per candidate, not allowing initial comments by senior panel members to serve as anchors for the rest of the discussion, and holding all members, especially panel chairs, accountable for fairness in process.
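
How panel reliability scales with the number of reviewers, as in the Forscher et al. figures above, is commonly projected with the Spearman-Brown prophecy formula. The sketch below is a generic illustration under that assumption rather than the authors' exact model, and the single-reviewer reliability value is invented.

def panel_reliability(single_reviewer_r, k):
    """Spearman-Brown projection: reliability of the mean of k independent ratings."""
    return k * single_reviewer_r / (1 + (k - 1) * single_reviewer_r)

# Illustrative only: assume a single-reviewer reliability of 0.08.
for k in (1, 3, 5, 10, 15):
    print(k, round(panel_reliability(0.08, k), 2))
# With this assumed value, three reviewers give roughly 0.2 and ten give just under 0.5.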

Issues in Peer Review as a Practice, Consequences and Solutions, with Most Focus on Publication Peer Review

Peer review is still largely a closed and secretive process. There is a lack of consensus about how to define review quality and reviewers' responsibilities, including their ability to filter out flawed research and their likely inability to address reproducibility. The relationship between pre-publication evaluation and post-publication assessment has received virtually no attention (Tennant & Ross-Hellauer, 2020).


A small amount of experimental work (24 trials over a 30-year period) demonstrates low interest in solving well-documented problems in peer review. From these trials, only the addition of a statistical reviewer, blinding of reviewers to authors' identities, and editorial pre-screening have been supported. The COVID pandemic should have raised interest in improving peer review; the experience demonstrated clearly that the current journal peer review system was stressed to its limits and sometimes failed (Gaudino et al., 2021). A systematic review of tools used to assess the quality of peer review found their development and validation processes to be questionable (Superchi et al., 2019). Text similarity scanners (a minimal sketch appears at the end of this subsection), registered reports in health and psychology journals, image manipulation scanners in biomedical journals, and some examples of open review are in evidence (Horbach & Halffman, 2020). And some innovative tools are emerging. Sumner et al. (2021) developed a trust-in-reproducibility score, documenting elements of a paper that may help a future researcher replicate it accurately. The score involves scanning the paper for thoroughness in explaining the methods, analysis process, and analysis software, and for availability of data or code (Sumner et al., 2021).

Tennant and Ross-Hellauer (2020) note issues involving peer review in several contested and consequential areas and outline an agenda for improvement; many of their suggestions involve publication peer review. Since the decisions of editors and their relationships to reviewers are unknown, as are the conflicts of interest of both parties, the obvious resolution is to require disclosure and transparency of decisions. Because peer reports are usually kept secret, it is difficult under current practice to assess whether peer review can adequately assess research quality. Development of tools for measuring the quality of peer reviews would be an important next step. A deeper understanding of peer review would help to answer the question of how adequate it could be as a primary form of research evaluation, a status that it currently enjoys.

Eve et al. (2021) lay out a number of basic normative questions that must be answered about peer review. What is a peer, and who decides? What does it mean when a peer approves someone's work? How many peers are required before a manuscript can be properly vetted? What happens if peers disagree with each other? Should authors (or the readership) be told who has reviewed the manuscript? What are reasonable timelines for peer review, given that in some instances, if a correct manuscript is significantly delayed, lives may be lost (Eve et al., 2021)? While the majority of researchers consider peer review critical to contemporary science, surprisingly little evidence exists to support the claim that it is the best way to pre-audit work. Another concern: peer review reports are often owned and guarded by organizations that wish to protect not only the anonymity of reviewers but also the system of review that brings them operational advantage (in competition with each other). Absent a clear notion of ownership, such peer-review reports are unavailable for research. Such research should be done openly, not opaquely by those with market or private corporate agendas (Eve et al., 2021).

In summary, "there are still dangerously large gaps in our knowledge of this essential component of scholarly communication" (Tennant, 2018, p. 9). A proposal for a future roadmap of peer-review research is well delineated in Table 1 of a previous


A proposal for a future roadmap for peer-review research is well delineated in Table 1 of an earlier article by the senior author, outlining a sustained and strategic program of research (Tennant & Ross-Hellauer, 2020). This should include a serious examination of outsourcing peer review to entities that operate outside research communities.
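
To make the checklist logic behind a reproducibility-trust score concrete, here is a minimal sketch in Python. It assumes a paper has already been reduced to a few boolean indicators; the field names and equal weighting are illustrative assumptions, not the actual criteria or weights of the score described by Sumner et al. (2021).

```python
from dataclasses import dataclass

@dataclass
class PaperChecklist:
    """Boolean indicators extracted (by hand or by text mining) from one paper."""
    methods_described: bool
    analysis_process_described: bool
    analysis_software_named: bool
    data_available: bool
    code_available: bool

def trust_score(paper: PaperChecklist) -> float:
    """Fraction of reproducibility-relevant items the paper satisfies (0.0 to 1.0)."""
    items = [
        paper.methods_described,
        paper.analysis_process_described,
        paper.analysis_software_named,
        paper.data_available,
        paper.code_available,
    ]
    return sum(items) / len(items)

# Example: a paper that shares code but not data scores 0.8 on this toy scale.
example = PaperChecklist(True, True, True, False, True)
print(f"trust-in-reproducibility score: {trust_score(example):.2f}")
```
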

Consequences and Solutions

Consequences of poor peer review, and of poor use of it, are sobering. Teixeira da Silva et al. (2021) note that documented failures of peer review in COVID-19 manuscripts and multiple retractions might constitute a public health risk. Peer review is a critical element in the disbursement of trillions of dollars of research money and plays a central role in defining the hierarchical structure of higher education; it should therefore avoid potential inequities in the diffusion of ideas. Robust peer review is central to helping the scientific community self-regulate. Biases (confirmation, publication) are not fully understood at the system level but have downstream consequences on the composition of the scholarly record (Tennant & Ross-Hellauer, 2020). A recent study found strong empirical evidence of the “Matthew effect,” in which famous authors were evaluated more favorably than less famous ones (Brainard, 2022). It is also important to note that editorial decisions can have profound effects on equity in the diffusion of scientific ideas. Teplitskiy et al. (2018) note that removing bias from peer review cannot be accomplished simply by recusing closely connected reviewers, as is the practice today, but rather by recruiting reviewers embedded in diverse professional networks. Distinct schools of thought were found to account for some, if not most, of the variance in reviewer outcomes; diversifying and balancing reviewers should help to control bias. To an unknown degree, traditional peer review has likely contributed to a scientific literature with non-reproducible results and incomplete or poorly described methodologies, while editors, perhaps to avoid reputational damage to their journals, do not reliably issue correction or retraction notices, leaving much poor science undetected and uncorrected. In short, the system is not self-correcting as advertised. Despite breathtaking breakthroughs in some areas of science, the accountability, transparency, and reliability of the journal, editor, and peer-review system are seen as below standard, and the system is neither self-correcting nor reforming itself to reflect and support integrity. Given the enormous amount of time involved in the current system of peer review, and the lack of empirical research supporting its two main purposes, quality control and detection of fraudulent work, Heesen and Bright (2021) argue for elimination of pre-publication peer review. Exchanges between scientists would remain but would be informal. Several relatively new forms of peer review are becoming more common. Registered reports, submitted and reviewed before the research is done, assure publication if the research is carried out as planned. Post-publication peer review, requested or not, is also expanding, although still with limited platforms in place. Pre-print servers with subsequent peer review are common, especially during a pandemic, when access to the results of studies is urgent.


Teixeira da Silva et al. (2017) describe mechanisms through which post-publication review is emerging. Hug (2021) notes that while peer review reflects a guiding principle of science, meritocratic legitimacy, it should also be held to standards of efficiency, reliability, predictive validity, and fairness. Peer review faces two questions: (1) is it an appropriate measure of merit, and (2) do its biases compromise equality of opportunity? In an analysis of the field, Hug notes that while low reliability in peer review is now accepted, few studies have investigated why it occurs and how it might be improved (Hug & Aeschbach, 2020). While the debate over whether the peer review system is irrevocably broken has not been settled, it is clear that significant changes are needed to improve its efficacy (reliable, insightful, and accurate assessment of research quality) as a gatekeeping mechanism for high-quality science. It is also important to note that peer review regulates the flow of ideas through an academic discipline and so has power to shape what a research community knows, investigates, and passes along to policy makers and to the public (Marcoci et al., 2022). This makes its integrity and breadth of view especially important.

Bibliometrics or Infometrics to Evaluate Research Performance

Evaluation is a systematic determination of merit, worth, and significance; its criteria should be governed by a set of standards. Infometrics is the quantitative empirical study of information. Bibliometrics studies patterns of publication and citation. Altmetrics uses documents and entries from social media platforms as sources of information. Metrics should be used in the evaluation of research performance only within the context of a well-articulated evaluation framework. In fact, growth of the various metrics has not led to established standards. These metrics have, however, challenged the monopoly of peer review, which has seemed self-serving, as those who did the research also evaluated it (Petersohn & Heinze, 2018). Research units needed “objective” ways to document value for research expenditures. Number of publications, citations, and journal impact factor (JIF) are widely used to evaluate individual researchers, journals, institutions, and nations and to distribute public research investments from governments (a simplified sketch of the two-year JIF calculation appears at the end of this introductory discussion). They are, however, not valid indicators of science quality or impact. Several lines of evidence suggest that methodological quality does not increase with increasing journal rank, a remarkable finding (Brembs, 2018). Nevertheless, these factors are widely used in academic hiring and review and in promotion and tenure evaluations. This use distorts academic incentives (McKiernan et al., 2019) toward quantity of scientific production, neglecting quality, which is central to the mission of science. Citations are a function of many variables, many unrelated to scientific merit. Teplitskiy and colleagues (2020) note that fewer than half of citations denote meaningful influence. The most highly cited papers have led researchers to change the problems they pursue and the methods they use.


By being counted equally with less influential papers, the highly influential are undervalued. The current use of metrics also undervalues and discourages riskier research that is likely to shift the knowledge frontier, especially if judged in the short term. This kind of work often shows its value 15 years after publication and may be published in journals with lower impact factors (Stephan et al., 2017). Dominant, almost exclusive use of citations and journal impact factors is also alleged to perpetuate networks whose citation patterns are exclusionary by gender and race (Davies et al., 2021). The social identity of a researcher can affect her position in a community as well as the uptake of that person's ideas. In many fields, members of underrepresented groups are less likely to be cited, leading to citation gaps, which are likely due at least in part to the structure of scientific communities. People tend to cite papers written by authors sharing their own social identity. This pattern not only disadvantages these individual scientists but also serves to overlook the ideas and findings they are studying (Rubin, 2022). A basic ethical flaw in the use of citation metrics is the assumption that the literature being cited meets standards of research integrity and will therefore not cause undue harm to research subjects and subsequent patients or overpromise benefits. Such is not the case. In addition, bias of many forms (publication bias, outcome reporting bias, citation bias, reframing bias which ignores previous relevant work, and others) is thought to be widespread, providing deceptive results and threatening the integrity of science (Ekmekci, 2017). Incorporating research with these biases undermines the usefulness and believability of citation metrics (MacRoberts & MacRoberts, 2018). While deeply entrenched in academic evaluation, including university rankings, the use of metrics in these ways is a form of deception. Such use may be another indication that commercialization has overpowered science's control of quality and value. It also challenges the notion that the move to metrics as an evaluative tool is an improvement over the challenges of peer review. Currently, metrics seem to be winning, not because they have been shown to be superior to peer review or to any other method for evaluating science but because they are easy to aggregate by individuals and groups external to scholarly fields without explicitly challenging experts with domain knowledge in the science. Decisions of great import for science are made with these data, entirely separate from their scientific claims but alleging transparency and framing evidence from scholarly fields; such a pattern provides little accountability to the broader public (Biagioli, 2018). The sections that follow consider uses of bibliometrics that better support research integrity and examine how citation metrics embedded in the structures of scientific institutions perpetuate their influence.
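
For reference, the following Python sketch shows a simplified version of the two-year journal impact factor calculation referred to throughout this section: citations received in a given year to a journal's items published in the two preceding years, divided by the number of citable items it published in those years. The toy numbers are illustrative assumptions, and the actual computation involves additional editorial decisions about what counts as a citable item.

```python
def journal_impact_factor(cites_in_year_to_pub_year: dict[int, int],
                          citable_items: dict[int, int],
                          year: int) -> float:
    """Simplified two-year JIF for `year`.

    cites_in_year_to_pub_year: citations made during `year` to the journal's
        items, keyed by the cited item's publication year.
    citable_items: number of citable items the journal published, keyed by year.
    """
    numerator = (cites_in_year_to_pub_year.get(year - 1, 0)
                 + cites_in_year_to_pub_year.get(year - 2, 0))
    denominator = citable_items.get(year - 1, 0) + citable_items.get(year - 2, 0)
    return numerator / denominator if denominator else 0.0

# Example: 150 citations in 2022 to 2021 items and 120 to 2020 items,
# over 60 + 70 citable items published in those two years -> JIF of about 2.08.
print(round(journal_impact_factor({2021: 150, 2020: 120}, {2021: 60, 2020: 70}, 2022), 2))
```
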

Toward Fairer Use of Bibliometrics to Support Research Integrity

Other promising approaches to measurement of research quality and productivity are not currently part of mainstream practice but provide different perspectives.


Daraio (2019) suggests that we are in a middle stage of research evaluation, in transition from bibliometric indicators and citations to a modern evaluation characterized by a number of distinct, complementary dimensions, a step that will incorporate artificial intelligence and machine learning. This broader set of mechanisms should seek answers to the central question of the effect of organizational setting on individual productivity (resources, time, social incentives) and operate with a set of standards for research assessment. Others suggest use of multiple factors: productivity, quality, impact, and influence (Braithwaite et al., 2019). It is important to get this right, as the way scientific success is measured will affect the way research is performed. There is some evidence of self-correction by the broader scientific community against unfair gaming of bibliometrics. Clarivate, the company behind the Impact Factor, denied an impact factor to 10 journals and delivered an expression of concern to an additional 11 because of excessive self-citation, which inflates the Impact Factor; this practice is sometimes called “citation stacking,” “citation cartels,” or “citation rings” (Retraction Watch). Both evaluative approaches (peer review and bibliometrics) should be significantly upgraded. For impact factors, Wouters et al. (2019) suggest expanding indicators to cover all functions of scholarly journals, including registration, curation, critical reviews, dissemination, and archiving; articulating a set of principles to govern their use; and creating a governing body to maintain such standards. Impact factors must be shown to be valid and fit for purpose, as opposed to the broad use and rampant misuse common today. Others have developed technical approaches that more accurately link citations to the quality of science. Machine learning and natural language processing systems are being developed to provide automated methods for evaluating citation context (Kunnath et al., 2022); a toy version of such a classifier is sketched below. Nelson et al. (2022) found that deep learning models of biomedical paper content (title and abstract) can predict inclusion in a patent, guideline, or policy document with far greater fidelity than citation metrics alone. In other words, models of paper content should be preferred over citation counts, which by comparison are difficult to defend. Modeling of the full body of a paper (currently infeasible for copyright reasons) is likely to yield still higher fidelity (Nelson et al., 2022).
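
The sketch below illustrates, in miniature, the kind of automated citation-context classification surveyed by Kunnath et al. (2022): a model is trained on labelled citation sentences and then predicts whether a new citation reflects substantive influence. The example sentences, labels, and the simple bag-of-words pipeline are illustrative assumptions; the systems reviewed in that literature rely on much larger corpora and more sophisticated language models.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled citation contexts: 1 = substantive influence, 0 = perfunctory.
contexts = [
    "We build directly on the method of the cited work to derive our estimator.",
    "Our experimental design follows the protocol introduced in this paper.",
    "Related studies exist on this general topic.",
    "See prior reviews for additional background.",
]
labels = [1, 1, 0, 0]

# Bag-of-words features plus a linear classifier; a deliberately small baseline.
classifier = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
classifier.fit(contexts, labels)

new_context = ["We adapt their algorithm as the core of our analysis pipeline."]
print(classifier.predict(new_context))  # a prediction of 1 would suggest substantive influence
```
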

Citation Metrics Embedded in Scientific Structures

In addition to judging the research performance of individual scientists, citation metrics are embedded in systems documenting the scientific record, in scientific institutions and research infrastructure within and across countries, and in the establishment of the new field of translational science. While scientific self-regulation and the usefulness of machine learning are welcome, recent research is revealing the nature of other evaluative issues and their potential resolution. For example, continued citations to retracted articles reveal serious problems with record-keeping.


While continued citation long after retraction can be problematic, it is important to note how and why these studies are cited: as part of the development of a particular research topic, as an example of problematic science, or as a reason for excluding papers from a meta-analysis. These are acceptable reasons for citation (Hsiao & Schneider, 2021). But a case study of an article retracted for falsified clinical trial data starkly displays how the system of retraction notices, and of assurance that subsequent citations are appropriate, has failed. Eleven years after retraction, this study continues to be cited positively (96% of citations) to support a medical nutrition intervention. This falsified trial remains the only paper at or above its level in the medical evidence hierarchy. If it continues to be positively cited, are others deterred from initiating a new study on this topic? Why didn't the scientific community acknowledge the retraction? Medical journal editors' guidelines say retracted work should not be cited as science. But reference management software packages do not note retraction status, and journals do not systematically check bibliographies for retracted articles. Surely these tools and practices could be changed (Schneider et al., 2020); a minimal sketch of such an automated bibliography check appears at the end of this discussion.

Citation metrics are also heavily embedded in academic policy. For example, a study of institutional policies for academic degrees and appointments in German universities showed a strong continuance of traditional metrics (number of publications, journal impact factor). Such policies do not support robust and transparent science practice. Instead, study registration and reporting of results; sharing of research data, code, and protocols; open access; and measures to increase robustness would better serve the overall goals of transparency and rigor (Holst et al., 2022). These findings are in alignment with an international study of the same issues (Rice et al., 2020). Holst and colleagues (2022) emphasize the essential role and influence of research institutions in supporting proper incentives upholding research integrity; accreditation as a research institution should require such evidence. Instead of the current strategy of assessing individual researchers heavily through citation metrics, emphasis could be shifted to assessing institutional support for a healthy research environment and oversight of sound research practices.

Other relevant factors deserve note. JIF serves as a major source of revenue for the companies that promote it, leaving them little incentive to discourage metrics gaming. Such gaming can infringe on the professional autonomy of scholars and shape the scientific record in ways that do not reflect scientific validity. Some suggest that metrics gaming might be justified in order to counteract entrenched status hierarchies (Siler & Lariviere, 2022). But a solution more consistent with research integrity would be to address a root cause of citation practice: what Van Calster and colleagues see as a long-term failure by the scientific community to address overall poor methodological quality in research. It is unethical to expose research subjects to risk in methodologically unsound studies (Van Calster et al., 2021), and by extension unethical to perpetuate the assumption that citations adequately reflect the methodological soundness of cited work.
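
As a small illustration of how mechanical the missing safeguard could be, the sketch below checks a manuscript's cited DOIs against a locally maintained list of retracted DOIs (for example, one exported from the Retraction Watch database). The file name, its one-column format, and the example DOI are illustrative assumptions rather than the interface of any existing tool.

```python
import csv

def load_retracted_dois(path: str) -> set[str]:
    """Read a one-column CSV of retracted DOIs into a normalized set."""
    with open(path, newline="", encoding="utf-8") as f:
        return {row[0].strip().lower() for row in csv.reader(f) if row}

def flag_retracted_references(cited_dois: list[str], retracted: set[str]) -> list[str]:
    """Return the cited DOIs that appear in the retracted set."""
    return [doi for doi in cited_dois if doi.strip().lower() in retracted]

retracted = load_retracted_dois("retracted_dois.csv")                   # hypothetical export
flags = flag_retracted_references(["10.1000/example.123"], retracted)   # hypothetical DOI
if flags:
    print("Warning: manuscript cites retracted work:", flags)
```
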
Evaluation of the scientific impact of investment in scientific infrastructure (facilities, systems, and services) needed by scientists to carry out large-scale research in cutting-edge fields is also important (Fabre et al., 2021).


Because of their strategic importance and need for long-term funding, assessing the research outcomes from these resources is essential to document and assess public benefit, and it requires evidence that investments will actually translate into such benefits. A review of indicators in the context of biobanking notes that these institutions intend to serve as high-performing and stable elements of an essential research infrastructure, which requires regular measurement and evaluation using appropriately set indicators. But a survey of 36 biobanks showed that data quality was monitored by only 45% and that regular monitoring of predetermined measurable indicators is not the norm (Slusna & Balog, 2022).

Finally, consider the newly defined field of translational science (introduced in Chap. 3). This field encourages efficient movement of biomedical research from “bench to bedside” to meet societal needs and has been a priority of funding agencies. Both Kim et al. (2020) and Hutchins et al. (2019) have developed metrics to detect whether a particular type of paper (not the number of papers or citations) is likely to translate knowledge into clinical research. This work, undertaken for a societally important purpose, belies the adage that scientific advances are generally unpredictable (Hutchins et al., 2019). CTSA (Clinical and Translational Science Award) hubs are meant to improve research infrastructure and have as a group mounted a Common Metrics Initiative, aimed at improving both processes and outcomes monitored by metrics including duration from IRB submission to approval, meeting subject accrual goals in clinical trials, and, in the research training area, sustainability of research careers for those trained in translational science (Soergel & Helfer, 2016). The 60 CTSAs across the USA might operate as a laboratory for metric development, thus making a major impact on the clinical research management enterprise across institutions (Rubio et al., 2015).

Is the current metrics approach to science evaluation still dominant across the world? A recent study of criteria for academic promotion and tenure in biomedical sciences faculties across 92 universities worldwide found 95% using traditional criteria: peer-reviewed publications, authorship order, journal impact factor, and grant funding, with citations mentioned by a quarter of institutions (Rice et al., 2020). In contrast, German university medical centers developed a dashboard based on metrics of responsible research practices, including registration and reporting of clinical trials, robustness in animal research, and open science practices (open access, data, and code). Stakeholders judging the dashboard noted that while these metrics could serve science as a whole, making such information public might put the university medical centers in a bad light. While such a dashboard signals institutional commitment to supporting responsible conduct of research, the stakeholder responses signal the perceived risks of migrating to metrics much more congruent with research integrity (Haven et al., 2022). These examples describe uses of metrics that are, or are not, fit for purpose, and the need for emerging scientific structures to develop metrics that ensure the societal contribution of science. On a broader structural level, research impact evaluation has become common, challenging earlier assumptions about the societal value of science, although with unclear metrics.


Research Impact Evaluation

The 1950s and 1960s brought a common assumption that society would derive benefits from high-quality science. The 1970s and 1980s saw the rise of a market-oriented logic emphasizing patenting, spinoffs, and entrepreneurship as indicators of the capacity of universities to generate economic returns. A new narrative aims to correct narrow bibliometric or market-oriented paradigms (Dotti & Walczyk, 2022). Used to evaluate the societal impact of large research programs and expenditures, research impact has been defined as “demonstrable and/or perceptible benefits (or harms) to individuals, groups, organizations and society, in present and/or future, that are causally linked to research…Research impact evaluation is the process of assessing the significance and reach of both positive and negative effects of research” (Reed et al., 2021). The significance of an impact refers to its magnitude or intensity; reach refers to the extent or diversity of the individuals or groups that benefit from the research or have been harmed by it. Impact may be evaluated over different time horizons and at different social scales (individuals to society). Methods of studying research impact may be found in Reed et al. (2021).

European countries commonly use research assessment exercises to judge the quality of resource investment and perhaps to distribute research funds. Sometimes the outcomes are based on peer review, and sometimes on bibliometric indicators. At the level of individual article assessment, the degree of agreement between these two measures has been found to be weak (Baccini et al., 2020); they are thus not substitutable, suggesting both are useful and neither should be used alone. Others, such as small research grantmaking organizations, use a more tailored approach to impact assessment. The Society of Family Planning and its research foundation, seeking an answer to the question of the value of their research investment over a 10-year period, provide an example. They found the current focus on research outputs such as numbers of publications and citations to be very limiting and so chose to measure building research capacity, advancing knowledge, improving research methods, disseminating knowledge, conducting interventions, and influencing community and public policy (Dennis et al., 2020). All of these goals are relevant to research integrity.

Several insightful analyses of the research impact system provide perspective. Langfeldt et al. (2020) note tension between a classical understanding of research quality defined by fields of study, often tacit, without codified articulation, and managed by peer review, and a more recent notion of quality aimed at research policy and the allocation of resources. These two notions co-exist, particularly in funding organizations; neither is fully developed as an evaluative metric. Thomas et al. (2020) describe Performance-Based Research Evaluation Arrangements (PREAs), which are becoming the predominant means worldwide to allocate research and infrastructure funds and to signal university reputation. Analysis of PREAs and their use shows a lack of frameworks by which they can be studied and little knowledge about causal relationships between PREAs and their apparent effects.


Most importantly, there has been little investigation of whether PREAs improve or harm research performance or societal impacts, or of how global research fields are affected (Thomas et al., 2020). Woodson and Boutilier (2022) describe another ethically important impact not commonly included in impact assessment. Assessing National Science Foundation (NSF) grants, the authors concluded that advantaged groups in society benefited from this work more often than marginalized groups. They argue that within a broader impacts framework, criteria of inclusion and immediacy of impact should be included, perhaps with earmarked funds to assure attention (Woodson & Boutilier, 2022). Although the USA does not use a performance-based research funding system, researchers everywhere are increasingly held accountable for the public goods arising from the use of public funds. This means that the academic community, and some individual researchers, are judged by four sometimes incompatible institutional logics: science, commercial, training, and impact, whether economic or social (Llopis et al., 2022). Research impact assessment is logically incomplete, as it fails to consider research infrastructure, which requires investment over many years (Zakaria et al., 2021). It is also normatively incomplete. Scientific and economic impacts have been privileged over environmental, political, cultural, organizational, and health impacts. A 35-year commitment to economic growth and competitive objectives means that impacts in these other areas have significantly lagged. A more normatively oriented social impact presupposes a democratically agreed-upon hierarchy or relative weighting of societal objectives to set research priorities; Feller (2022) notes that no such hierarchy exists.

Ethical Imperatives to Test Reforms

Practices described in this chapter can put academics at odds with core ethical beliefs of science. To stay in the game, scientists and authors are judged by numbers of citations and journal impact factors, which do not directly support scientific values of quality; they must also be constantly judged positively in peer reviews of their proposals and work. A number of suggested reforms could yield improvement; there is an ethical imperative to test them.

–– Some advocate elimination of self-citations from the impact factor, the h-index, and other metrics (a minimal sketch of such a recalculation follows this list). Limiting this reform is the fact that these calculations are made by third parties with little concern for the ethics of the issue (Wilhite et al., 2019).

–– Number of citations ignores the evolution and organization of follow-up studies inspired by the seed article. This impact can be measured by the depth (influence on the research field) and breadth (influence on related fields) of the follow-on citations. A highly influential paper tends to have citations spread in both directions, depth and breadth (Chakraborty et al., 2021).


–– Achieving systematic deposition of datasets in public repositories requires a scientific reward system in addition to that for publications. Several metrics are suggested to quantify the importance of this work: reanalysis (how many times a dataset has been reused), the number and quality of direct citations in publications, and a connections metric (how a database contributes to a specific knowledge base) (Perez-Riverol et al., 2019).

–– Manipulations are not just part of the publishing process but can serve as a legal cause of action under statutes such as the False Statements Act. This liability may extend to authors who add coerced citations. The government has a public interest in preventing the distortion of research; “something that seems as harmless as inflating impact factors to bolster a journal's image may be illegal” (Hickman et al., 2019, p. 6).

–– SciScore is an automated tool that uses natural language processing and machine learning to evaluate the materials and methods sections of scientific papers for adherence to key rigor criteria; the average score for a journal in a given year, the Rigor and Transparency Index, is a new journal quality metric. The authors found no correlation with the Journal Impact Factor, and an initial study found fewer than half of articles addressed such issues as blinding or power analysis (Menke et al., 2020).

–– A review of 50 interventions in different areas of peer review found some of particular value. For example, reviewer training on assessment criteria has shown a potential decrease in bias and an increase in inter-rater reliability, and there is evidence that when an application is likely to divide reviewers, a larger number of them is needed. But the broader impact of these interventions has not been tested (Recio-Saucedo et al., 2022).

–– Fong and Wilhite (2017) note that research misconduct should include not just FFP (fabrication, falsification, and plagiarism) but should also address the widespread misattribution found in publications and research proposals, through coercive citation, honorary authorship, and padded citations, all of which are frequently used by the same individuals. Two suggested reforms could be helpful: double-blind grant reviews, and tracking citation requests made by editors (coercive citation), which are usually contained in communications to authors.

A broader approach to reform involves better development of an ethics of quantification, in part informed by sociology. In a review of the study of quantification, Mennicken and Espeland (2009) note that enthusiasm for it is often driven by a desire to hold to account, to make uncertainty manageable, to extend market rationality to domains previously viewed as nonmarket, and to exert control, particularly in areas in which there are no data. Basic normative questions have to be addressed: who gets to decide what gets quantified and how it is done, when and how do numbers matter, and how is quantification governed (Berman & Hirschman, 2018).
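
The first bullet above proposes removing self-citations from metrics such as the h-index; the sketch below shows how simple the recalculation is once citation records identify citing authors. The toy data structure is an illustrative assumption, not the schema of any existing bibliometric database.

```python
from dataclasses import dataclass

@dataclass
class Paper:
    authors: set[str]
    citing_author_sets: list[set[str]]  # one set of author names per citation received

def h_index(citation_counts: list[int]) -> int:
    """Largest h such that at least h papers have at least h citations each."""
    counts = sorted(citation_counts, reverse=True)
    return sum(1 for rank, c in enumerate(counts, start=1) if c >= rank)

def h_index_without_self_citations(papers: list[Paper], author: str) -> int:
    """Recompute the author's h-index after dropping citations that include the author."""
    counts = [sum(1 for citers in p.citing_author_sets if author not in citers)
              for p in papers]
    return h_index(counts)

papers = [
    Paper({"doe"}, [{"doe"}, {"lee"}, {"kim"}]),  # 3 citations, 1 of them a self-citation
    Paper({"doe"}, [{"doe"}, {"lee"}]),           # 2 citations, 1 of them a self-citation
]
print(h_index([3, 2]))                                 # 2 with self-citations included
print(h_index_without_self_citations(papers, "doe"))   # 1 after removing self-citations
```
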


An ethics of quantification, covering metrics, statistics, mathematical modeling (which currently lacks quality standards), and artificial intelligence, is taking shape, defined largely by the issues it raises. These include the need for an ethics that provides defense against abuse, avoids an exclusive focus on consequentialism, requires answers to questions of who is responsible, and recognizes that these techniques are not neutral and are likely to be used in ways that ignore political conflicts and power asymmetries by framing the issue as a purely technical one. Quantification can be used silently to make decisions that would otherwise have been legislated or addressed through administrative rulemaking, thus excluding the public's views and will (Saltelli, 2019).

Summary: State of Science Evaluation and Ethics

Science's internal evaluative mechanism, peer review, remains essential but has not sufficiently evolved to meet the standards of a new era. Lacking a robust evidence base about the effects of the current system on research integrity, the scientific community does not appear to feel an imperative to repair it. The rise of industrial logic in universities laid the base for a quantitative metrics industry to evaluate individuals, groups, disciplines, departments, and universities, but with metrics incapable of reflecting valid scientific knowledge production. Indeed, the logic of industrialization holds that science must be made profitable, efficient, monetized, and accountable, and that universities are incapable of bringing about these changes. A whole industry has arisen to provide this “evaluation” and these rankings. Limits on science's self-governance were seen as necessary, even though the logic of science as an institution, centered on self-governance including peer review, is essential to its functioning. The rise of the metrics industry may be ethically misdirected, incapable of capturing the full impact of science and operating as a means of putting professional work under bureaucratic control. The metrics movement has corrupted science's ability to self-regulate and to support integrity. It is based on the misconception that what appears to be transparency (e.g., the counting of citations) is sufficient to evaluate quality, a false idea that is potentially attractive in light of increasing distrust of authority. The metrics movement also serves as a diversion from dealing with issues of scientific quality such as methodological standards and reproducibility. On the other hand, while the ubiquity of science metrics reflects wider societal trends toward quantification, its ascendance was likely aided both by the lack of effort to develop a public-facing translation of science quality and by the failure to repair weaknesses in peer review. One root cause of this weakness is uncontrolled conflict of interest, well documented throughout the scientific enterprise (see Chap. 6 for further discussion). Methods of controlling institutional COI are also well known but not currently adopted: accurate disclosure backed by sanctions for nondisclosure; policies by funders, employers, and journals that are scrupulously enforced, including in their own internal affairs (editors and editorial boards); clear requirements for recusal in which institutions bear the risk; and clear, mutually adopted rules for academics working with industry, with all of these safeguards adopted universally.


Easing the harmful effects of conflict of interest on the core mission of science requires a political will that does not seem to exist within either the scientific or the broader community. This reticence likely reflects a broader political frame of marketization; absent pushback and the development of rigorous ethical standards, it will continue to erode science's ability to fulfill its mission. Science metrics, like much of science regulation and the system that produces and distributes knowledge, are focused on scientists and not on what the public has reason to expect from its investment in science. The public should expect to receive valid knowledge, eventually usable, which requires control of research misconduct and poor-quality science, even under shocks and stress from emerging technologies (see Chap. 9). It also requires governance that is not siloed and incomplete, as is currently the case, but connected and consistent across levels and sectors, supporting norms consistent with research integrity. What general analyses and strategies should be addressed? First, the goals against which the system should be judged are fairness, integrity, prudence, and responsibility; standards and infrastructure should support these values and be rid of perverse incentives that undermine them. Conflation of ends and means is common in systems (Carney, 2021). In some ways, the current use of metrics represents such a confusion, as citations are largely means devoid of a clear relationship to end goals and values. Worse, science has not established an evaluative measure that better connects to important ends. Finally, while self-regulation is important, in complex systems its effectiveness varies and is rarely sufficient. Unchecked bad behavior can proliferate and, if not held to account, eventually becomes the norm.

Conclusion

Because research is considered a vital investment all over the world, it is important to assure that the investment is justified. Science evaluation is at an unfortunate stage; it is unclear whether it will evolve toward valid measures, in peer review or in metrics, in support of research integrity. At the very least, it should not be allowed to remain in its current state, infused with conflict of interest, lacking standards, and sustaining perverse incentives endured by scientists who want to practice with integrity. But confronting the convenience of current science metrics, which preferentially support economic opportunity over scientific integrity, will be formidable. Necessary change will be thwarted by the advantages many parties obtain from the current arrangement.

References

Ayoubi, C., Pezzoni, M., & Visentin, F. (2021). Does it pay to do novel science? The selectivity patterns in science funding. Science & Public Policy, 48(5), 635–648. https://doi.org/10.1093/scipol/scab031
Baccini, A., Barabesi, L., & De Nicolao, G. (2020). On the agreement between bibliometrics and peer review: Evidence from the Italian research assessment exercises. PLoS One, 15(11), e0242520. https://doi.org/10.1371/journal.pone.0242520


Baldwin, M. (2020). Peer review. In Encyclopedia of the History of Science. https://doi. org/10.34758/srde-­jw27 Berman, E.  P., & Hirschman, D. (2018). The sociology of quantification: Where are we now? Contemporary Sociology, 47(3), 257–266. https://doi.org/10.1177/0094306118767649 Biagioli, M. (2018). Quality to impact, text to metadata: Publication and evaluation in the age of metrics. KNOW, 2(2). https://escholarship.org/uc/item/1pm2s9pg Brainard, J. (2022). Reviewers award higher marks when a paper’s author is famous. Science, 377(6613), 1251. https://doi.org/10.1126/science.ade8714 Braithwaite, J., Herkes, J., Churruca, K., Long, J.  C., Pomare, C., Boyling, C., Mierbaum, M., Clay-Williams, R., Rapport, F., Shin, P., Hogden, A., Ellis, L.  A., Ludlow, K., Austin, E., Seah, R., McPherson, E., Hibbert, P. D., & Westbrook, J. (2019). Comprehensive researcher achievement model (CRAM): A framework for measuring researcher achievement, impact and influence derived from a systematic literature review of metrics and models. BMJ Open, 9(3), e025320. https://doi.org/10.1136/bmjopen-­2018-­025320 Brembs, B. (2018). Prestigious science journals struggle to reach even average reliability. Frontiers in Human Neuroscience, 12, 37. https://doi.org/10.3389/fnhum.2018.00037 Carney, M. (2021). Value(s), public affairs. Chakraborty, T., Bhatia, S., Joshi, A., & Paul, P.  S. (2021). Wider, or deeper! On predicting future of scientific articles by influence dispersion tree. In Y.  Manolopoulos & T.  Vergoulis (Eds.), Predicting the dynamics of research impact. Springer. https://doi. org/10.1007/978-­3-­030-­86668-­6_7 D’Andrea, R., & O’Dwyer, J. P. (2017). Can editors save peer review from peer reviewers? PLoS One, 12(10), e0186111. https://doi.org/10.1371/journal.pone.0186111 Daraio, C. (2019). Econometric approaches to the measurement of research productivity. In W. Glanzel, H. F. Moed, U. Schmoch, & M. Thelwall (Eds.), Springer handbook of science and technology indicators (pp. 633–660). Davies, S. W., Putnam, H. M., Ainsworth, T., Baum, J. K., Bove, C. B., Crosby, S. C., Cote, I. M., Duplouy, A., Fulweiler, R. W., Griffin, A. J., Hanley, T. C., Hill, T., Humanes, A., Mangubhai, S., Metaxas, A., Parker, L. M., Rivera, H. E., Silbiger, N. J., Smith, N. S., & Bates, A. M. (2021). Promoting inclusive metrics of success and impact to dismantle a discriminatory reward system in science. PLoS Biology, 19(6), e3001282. https://doi.org/10.1371/journal.pbio.3001282 Dennis, A., Manski, R., & O’Donnell, J. (2020). Assessing research impact: A framework and an evaluation of the Society of Family Planning Research Fund’s grantmaking (2007-2017). Contraception, 101(4), 213–219. https://doi.org/10.1016/j.contraception.2019.11.007 Dotti, N. F., & Walczyk, J. (2022). What is the societal impact of university research? A policy-­ oriented review to map approaches, identify monitoring methods and success factors. Evaluation and Program Planning, 95, 102157. https://doi.org/10.1016/j.evalprogplan.2022.102157 Ekmekci, P.  E. (2017). An increasing problem in publication ethics: Publication bias and editors’ role in avoiding it. Medicine Health Care and Philosophy, 20(2), 171–178. https://doi. org/10.1007/s11019-­017-­9767-­0 Eve, M. P., Neylon, C., O’Donnell, D. P., Moore, S., Gadie, R., Odeniyi, V., & Parvin, S. (2021). Reaching peer review. PLOS ONE and Institutional Change in Academia. Cambridge University Press. https://doi.org/10.1017/9781108783521 Fabre, R., Egret, D., Schopfel, J., & Azeroual, O. (2021). 
Evaluating the scientific impact of research infrastructures: The role of current research information systems. Quantitative Science Studies, 2(1), 42–64. https://doi.org/10.1162/qss_a_00111 Feller, I. (2022). Assessing the societal impact of publicly funded research. Journal of Technology Transfer, 47, 632–650. https://doi.org/10.1007/s10961-­017-­9602-­z Fong, E. A., & Wilhite, A. W. (2017). Authorship and citation manipulation in academic research. PLoS One, 12(12), e0187394. https://doi.org/10.1371/journal.pone.0187394 Forscher, P.  S., Cox, W.  T. L., Devine, P.  G., & Brauer, M. (2019). How many reviewers are required to obtain reliable evaluations of NIH R01 grant proposals? Psyarxiv.com


Gallo, S. A., Schmaling, K. B., Thompson, L. A., & Glisson, S. R. (2021). Grant review feedback: Appropriateness and usefulness. Science & Engineering Ethics, 27(2), 18. https://doi. org/10.1007/s11948-­021-­00295-­9 Gaudino, M., Robinson, N. B., Di Franco, A., Hameed, I., Naik, A., Demeres, M., Giardi, L. N., Frati, G., Fremes, S. E., & Biondi-Zoccai, G. (2021). Effects of experimental interventions to improve the biomedical peer-review process: A systematic review and meta-analysis. Journal of the American Heart Association, 10(15), e019903. https://doi.org/10.1161/JAHA.120.019903 Guthrie, S., Ghiga, I., & Wooding, S. (2017). What do we know about grant peer review in the health sciences? F1000 Research, 6, 1335. https://doi.org/10.12688/f1000research.11917.2 Haven, T. L., Holst, M. R., & Strech, D. (2022). Stakeholders’ views on an institutional dashboard with metrics for responsible research. PLoS One, 17(6), e0269492. https://doi.org/10.1371/ journal.pone.0269492 Heesen, R., & Bright, L. K. (2021). Is peer review a good idea? British Journal for the Philosophy of Science, 72(3), 635–663. Hickman, C. F., Fong, E. A., Wilhite, A. W., & Lee, Y. (2019). Academic misconduct and criminal liability: Manipulating academic journal impact factors. Science & Public Policy, 46(5), 661–667. https://doi.org/10.1093/scipol/scz019 Holst, M. R., Faust, A., & Strech, D. (2022). Do German university medical centres promote robust and transparent research? A cross-sectional study of institutional policies. Health Research Policy and Systems, 20(1), 39. https://doi.org/10.1186/s12961-­022-­00841-­2 Horbach, S. P. J. M., & Halffman, W. (2020). Journal peer review and editorial evaluation: Cautious innovator or sleepy giant? Minerva, 58, 139–161. https://doi.org/10.1007/s11024-­019-­09388-­z Hsiao, T., & Schneider, J. (2021). Continued use of retracted papers: Temporal trends in citations and (lack of) awareness of retractions shown in citation contexts in biomedicine. Quantitative Science Studies, 2(4), 1144–1169. https://doi.org/10.1162/qss_a_00155 Hug, S. E. (2021). Towards theorizing peer review. Quantitative Science Studies, 1–17: (advance publication). https://doi.org/10.1162/qss_a_00195 Hug, S. E., & Aeschbach, M. (2020). Criteria for assessing grant applications: A systematic review. Palgrave Communications, 6, 37. https://doi.org/10.1057/s41599-­020-­0412-­9 Hutchins, B. I., Davis, M. T., Meseroll, R. A., & Santangelo, G. M. (2019). Predicting translational progress in biomedical research. PLoS Biology, 17(10), e3000416. https://doi.org/10.1371/ journal.pbio.3000416 Jerrrim, J., & de Vries, R. (2020). Are peer-reviews of grant proposals reliable? An analysis of Economic and Social Research Council (ESRC) funding applications. The Social Science Journal. https://doi.org/10.1080/03623319.2020.1728506 Kaltenbrunner, W., Birch, K., & Amuchastegul, M. (2022). Editorial work and the peer review economy of STS journals. Science, Technology & Human Values, 47(4), 670–697. https://doi. org/10.1177/01622439211068798 Kim, Y.  H., Levine, A.  D., Nehi, E.  J., & Walsh, J.  P. (2020). A bibliometric measure of translational science. Scientometrics, 125(3), 2349–2382. https://doi.org/10.1007/ s11192-­020-­03668-­2 Kunnath, S. N., Herrmannova, D., Pride, D., & Knoth, P. (2022). A meta-analysis of semantic classification of citations. Quantitative Science Studies, 2(4), 1170–1215. https://doi.org/10.1162/ qss_a_00159 Langfeldt, L., Nedeva, M., Sorlin, S., & Thomas, D. A. (2020). 
Co-existing notions of research quality: A framework to study context-specific understandings of good research. Minerva, 58, 115–137. https://doi.org/10.1007/s11024-­019-­09385-­2 Llopis, O., D’Este, P., McKelvey, M., & Yegros, A. (2022). Navigating multiple logics: Legitimacy and the quest for societal impact. Technovation, 110, 102367. https://doi.org/10.1016/j. technovation.2021.102367


MacRoberts, M. H., & MacRoberts, B. R. (2018). The mismeasure of science: Citation analysis. Journal of the Association for Information Science and Technology, 69(3), 474–482. https:// doi.org/10.1002/asi.23970 Marcoci, A., Vercammen, A., Bush, M., Hamilton, D. G., Hanea, A., Hemming, V., Wintle, B. C., Burgman, M., & Fidler, F. (2022). Reimagining peer review as an expert elicitation process. BMC Research Notes, 15, 127. https://doi.org/10.1186/s13104-­022-­06016-­0 McKiernan, E.  C., Schimanski, L.  A., Nieves, C.  M., Mattias, L., Niles, M.  T., & Alplerin, J. P. (2019). Use of the journal impact factor in academic review, promotion, and tenure evaluations. eLife, 8, e47338. https://doi.org/10.7554/eLife.47338 Menke, J., Roelandse, M., Ozyurt, B., Martone, M., & Bandrowski, A. (2020). The rigor and transparency index quality metric for assessing biological and medical science methods. Science, 23(11), 101698. https://doi.org/10.1016/j.isci.2020.101698 Mennicken, A., & Espeland, W. N. (2009). What’s new with numbers? Sociological approaches to the study of quantification. Annual Review of Sociology, 45, 223–245. https://doi.org/10.1146/ annurev-­soc-­073117-­041343 Nelson, A. P. K., Gray, R. J., Ruffle, J. K., Watkins, H. C., Herron, D., Sorros, N., Mikhailov, D., Cardoso, M.  J., Ourselin, S., McNally, N., Williams, B., Rees, G.  E., & Nachev, P. (2022). Deep forecasting of translational impact in medical research. Patterns, 3(5), 100483. https:// doi.org/10.1016/j.patter.2022.100483 Perez-Riverol, Y., Zorin, A., Dass, G., Vu, M., Xu, P., Glont, M., Vizcaino, J., Jarnczak, A.  F., Petryszak, R., Ping, P., & Hermjakob, H. (2019). Quantifying the impact of public omics data. Nature Communications, 10(1), 3512. https://doi.org/10.1038/s41467-­019-­11461-­w Petersohn, S., & Heinze, T. (2018). Professionalization of bibliometric research assessment. Insights from the history of the Leiden Centre for Science and Technology Studies (CWTS). Science & Public Policy, 45(4), 565–578. https://doi.org/10.1093/scipol/scx084 Pier, E.  L., Brauer, M., Filut, A., Kaatz, A., Raclaw, J., Nathan, M.  J., Ford, C.  E., & Carnes, M. (2018). Low agreement among reviewers evaluating the same NIH grant applications. PNAS, 115(12), 2952–2957. https://doi.org/10.1073/pnas.1714379115 Recio-Saucedo, A., Crane, K., Meadmore, K., Fackrell, K., Church, H., Fraser, S., & Blatch-­ Jones, A. (2022). What works for peer review and decision-making in research funding: A realist synthesis. Research Integrity and Peer Review, 7(1), 2. https://doi.org/10.1186/ s41073-­022-­00120-­2 Reed, M.  S., et  al. (2021). Evaluating impact from research: A methodological framework. Research Policy, 50(4), 104147. https://doi.org/10.1016/j.respol.2020.104147 Retraction Watch. Ten journals denied 2020 impact factors because of excessive self-citation or “citation stacking”. Accessed 6/30/2021. Rice, D. B., Raffoul, H., Ioannidis, J. P. A., & Moher, D. (2020). Academic criteria for promotion and tenure in biomedical sciences faculties: Cross sectional analysis of international sample of universities. BMJ, 369, m2081. https://doi.org/10.1136/bmj.m2081 Rubin, H. (2022). Structural causes of citation gaps. Philosophical Studies, 179, 2323–2345. https://doi.org/10.1007/s11098-­021-­01765-­3 Rubio, D. M., Blank, A. E., Dozier, A., Hites, L., Gilliam, V. A., Hunt, J., Rainwater, J., & Trochim, W. M. (2015). Developing common metrics for the clinical and translational science awards (CTSAs): Lessons learned. 
Clinical and Translational Science Journal, 8(5), 451–459. https:// doi.org/10.1111/cts.12296 Saltelli, A. (2019). Ethics of quantification or quantification of ethics? Futures, 116, 102509. Schneider, J., Ye, D., Hill, A. M., & Whitehorn, A. H. (2020). Continued post-retraction of a fraudulent clinical trial report, 11 years after it was retracted for falsifying data. Scientometrics, 125, 2877–2913. https://doi.org/10.1007/s11192-­020-­03631-­1 Siler, K., & Lariviere, V. (2022). Who games metrics and rankings? Institutional niches and journal impact factor inflation. Research Policy, 51, 104608. https://doi.org/10.1016/j. respol.2022.104608


Slusna, L.  K., & Balog, M. (2022). Review of indicators in the context of biobanking, Biopreservation and Biobanking, online ahead of print. https://doi.org/10.1089/bio.2022.0073 Soergel, D., & Helfer, O. (2016). A metrics ontology. An intellectual infrastructure for defining, managing, and applying metrics. Knowl Organ Sustain World Chall Perspect Cult Sci Technol Shar Connect Soc, 15, 333–341. Stephan, P., Veugelers, R., & Want, J. (2017). Reviewers are blinkered by bibliometrics. Nature, 544(7651), 411–412. https://doi.org/10.1038/544411a Sumner, J. Q., Vitale, C. H., & McIntosh, L. D. (2021). Ripeta score: Measuring the quality, transparency and trustworthiness of a scientific work. Frontiers in Research Metrics & Analytics, 6, 751734. https://doi.org/10.3389/frma.2021.751734 Superchi, C., Gonzalez, J. A., Sola, I., Cobo, E., Hren, D., & Boutron, I. (2019). Tools used to assess the quality of peer review reports: A methodological systematic review. BMC Medical Research Methodology, 19(1), 48. https://doi.org/10.1186/s12874-­019-­0688-­x Teixeira da Silva, J. A., Bornemann-Cimenti, H., & Tsigaris, P. (2021). Optimizing peer review to minimize the risk of retracting COVID-19-related literature. Medicine Health Care and Philosophy, 24(1), 21–26. https://doi.org/10.1007/s11019-­020-­09990-­z Teixera da Silva, J. A., Al-Khatib, A., & Dobranski, J. (2017). Fortifying the corrective nature of post-publication peer review: Identifying weaknesses, use of journal clubs, and rewarding conscientious behavior. Science & Engineering Ethics, 23(4), 1213–1226. https://doi.org/10.1007/ s11948-­016-­9854-­2 Tennant, J. P. (2018). The state of the art in peer review. FEMS Microbiology Letters, 365(19), fny204. https://doi.org/10.1093/femsle/fny204 Tennant, J. P., & Ross-Hellauer, T. (2020). The limitations to our understanding of peer review. Research Integrity and Peer Review, 5, 6. https://doi.org/10.1186/s41073-­020-­00092-­1 Teplitskiy, M., Acuna, D., Elamrani-Raoult, A., Kording, K., & Evans, J. (2018). The sociology of scientific validity: How professional networks shape judgment in peer review. Research Policy, 47, 1825–1841. https://doi.org/10.1016/j.respol.2018.06.014 Teplitskiy, M., Duede, E., Menietti, M., & Lakhani, K.  R. (2020). Status drives how we cite: Evidence from thousands of authors. arXiv. Thomas, D. A., Nedeva, M., Tirado, M., & Jacob, M. (2020). Changing research on research evaluation: A critical literature review to revisit the agenda. Research Evaluation, 29(3), 275–288. Van Calster, B., Wynants, L., Riley, R. D., van Smeden, M., & Collins, G. S. (2021). Methodology over metrics: Current scientific standards are a disservice to patients and society. Journal of Clinical Epidemiology, 138, 219–226. https://doi.org/10.1016/j.jclinepi.2021.05.018 Vinkenburg, C.  J., Ossenkop, C., & Schiffbaenker, H. (2021). Selling science: optimizing the research funding evaluation and decision process. Equality, Diversity and Inclusion: An International Journal, 41(2), 1–14. https://doi.org/10.1108/EDI-­01-­2021-­0028 Wilhite, A., Fong, E. A., & Wilhite, S. (2019). The influence of editorial decisions and the academic network on self-citations and journal impact factors. Research Policy, 48, 1513–1522. https://doi.org/10.1016/j.respol.2019.03003 Woodson, T., & Boutilier, S. (2022). Impacts for whom? Assessing inequalities in NSF-funded broader impacts using the inclusion-immediacy criterion. Science and Public Policy, 49(2), 168–178. https://doi.org/10.1093/scipol/scab072 Wouters, P., et al. 
(2019). Rethink impact factors: find new ways to judge a journal. Nature, 569, 621–623. Zakaria, S., Grant, J., & Luff, J. (2021). Fundamental challenges in assessing the impact of research infrastructure. Health Res Policy Syst, 19(1), 119. https://doi.org/10.1186/s12961-­021-­00769-­z

Chapter 9

Research Integrity in Emerging Technologies: Gene Editing and Artificial Intelligence (AI) Research in Medicine

Emerging technologies pose special problems and offer opportunities for research integrity at several stages: planning the trajectory, doing the research, using the research, and assuring incorporation of evolving and negotiated notions of research integrity throughout the technology's emergence. Sorting out the seemingly new issues raised by a technology, especially those that are morally relevant, takes time and is often expressed in a series of metaphors that help experts and the public understand it (Droog et al., 2020). As Gordijn and Ten Have (2017) describe the process, reasonable and balanced ethical analysis is difficult early in a technology's evolution, when information about its effects is still emerging; the dilemma is that by the time clarity arrives, the technology is firmly rooted in society. Here, I consider several ways of viewing the evolution of new technologies as used in research and their ethical issues. This discussion is followed by examination of research integrity issues in two emerging technologies: human gene therapy/editing research and AI research in health care. Questions of appropriate research governance in these emerging fields follow. Finally, we consider the development of the field of innovation ethics and the insights it offers regarding research integrity in emerging technologies.

Research Integrity in Emerging Technologies

Frames through which a technology is viewed may include the science itself, relevance for social progress, risks of types and levels different from those currently understood and questions of their control, ethical questions, economic questions, and issues of governance. Risks viewed at a particular time as uncontrollable are often deemed morally unacceptable. Related is the social question: are we allowed to do what we are able to do (Bauer & Bogner, 2020)?


Others (Chan, 2018) reflect on basic purpose, seeing scientific knowledge as a public good to be shared for public benefit. This lens requires attention to issues of justice, as emerging technologies develop at different rates across countries, and the availability of resulting treatments may not correlate with a population's need or with its contribution to the research necessary to bring the technology to fruition. A critical theory of decolonization requires attention to the parties empowered to impose normative values and standards and to decide ownership and the distribution of risks and economic benefits. These arguments are especially well developed in the context of AI: algorithmic coloniality and the exploitation and extraction of data require fairness, accountability, and transparency in algorithmic systems. Countries with still-developing regulatory systems and deficits in local expertise are vulnerable to continued colonization in the guise of new technologies (Mohamed et al., 2020).

Governance and ethics of emerging technologies (ET) are now recognized areas of study. Emerging technologies often quickly and fundamentally change economies and societies in many domains and are therefore ethically important. Often, the beneficiaries of these technologies, such as producers and investors, do not bear the costs of ET risks, transferring them to society at large or to particular groups disproportionately affected by harms or excluded from benefits. Current regulatory agencies often respond slowly to ETs (Taeihagh et al., 2021). Uncertainty about cause-and-effect relationships, about which applications should cause concern, and about how to anticipate the technology's future leads to a notion of tentative governance, open to learning and revision, and eventually to adaptive governance that provides some stability (Asquer & Krachkovskaya, 2020). Ethical, legal, and social implications should be (but often are not) built into this learning framework from the start (Kuhlmann et al., 2019) and should include not only safety but also justice and fairness. Emerging technologies diffuse across boundaries, with no single agency having a full picture of the technology or complete jurisdiction over its regulation. In addition, regulation and governance can vary with the domain in which the technology is used: research, clinical care, or direct to consumer (Mathews et al., 2022a). A National Academy of Medicine committee, Emerging Science, Technology and Innovation in Health and Medicine, is further developing a framework that takes account of the issues noted above. The push and pull of markets, which can shape development and diffusion of a technology even in the absence of efficacy data, and regulatory gaps, especially those governing the private sector, have to be brought together within a comprehensive regulatory system focused on social benefit (Mathews et al., 2022b).

Previous chapters have described ways in which current, formal research regulation could benefit from becoming more adaptive, to reflect new issues not considered in its original formulation. So how can research integrity be addressed in vulnerable periods when the harms, benefits, and justice of an emerging technology are still unclear and ethical standards are in constant development? Here, we consider research in two relevant technologies: human gene editing and AI in the broader context of data science. Each turns traditional versions of research ethics on its head.


Both human gene therapy/editing research and AI research can profit from attention to guiding ethical principles outlined for engineering biology research: “seek to create products or processes that benefit people, society or the environment; consider and weigh the benefits of research against potential harms; incorporate equity and justice in the selection and implementation of…research, development, policy and commercialization; seek to openly distribute the results of early stage research and development…support open communication between …researchers and the stakeholders who might be affected by research development, and the deployment of new technologies” (Mackelprang et al., 2021, p. 908).

Human Gene Therapy/Editing Research Gene therapy adds a correct copy of the involved gene into the genome of the cells in the target organ or tissue; gene editing alters the genome at a particular location to correct or alter the genetic sequence (Delhove et al., 2020). Clinical trials underway involve gene editing in cancer immunotherapy, viral infection, hematological disorders, metabolic disorders, and the eye, among other areas. An overview of the science and progress to date may be found in Ernst et al. (2020). Ethical challenges in this research include the efficiency of the technology in terms of on-target and off-target effects and the lack of clarity about their permanence, which inhibits risk/benefit analysis (Brokowski & Adil, 2019), and exacerbation of social inequality, as trials and subsequent treatment facilities are unlikely to be located in low-resource settings.

In human gene therapy research, a basic challenge to the status quo has been mounted, questioning the dominant idea that ethics necessarily follows development of the science. Hurlbut (2020) notes that discussion has focused on technical criteria for proceeding with human genome editing research, assuming that science has the authority to move forward with a trajectory it determines. Ethical inquiry is delayed until the technology is largely developed and therefore can only respond to it. Priorities so determined then become embedded in our institutions, positioning science as able to self-govern and drive technological progress and requiring that authorization be given once society "catches up." Hurlbut describes this as an illegitimate exercise of power; rules for what should or should not be done should not be set by science alone. A community of scholars declares there has not yet been an appropriate effort to sponsor inclusive public discussion of key issues concerning genome editing and the values of integrity, rights, and dignity, which are fundamental and go beyond questions that science alone can settle. They argue that we lack infrastructures to host such deliberation and advocate for a global observatory to serve as a center for international reflection on these issues (Saha et al., 2018). The discussion would not originate from scientific research agendas but instead from determining how the science "can be better steered by the values and priorities of society" (Jasanoff & Hurlbut, 2018, p. 436).


While ethical views about gene editing had been evolving for some time, a scandal often precipitates a serious reckoning. In November 2018, two babies were born in China with DNA edited while they were embryos (germline editing), even though less risky technologies could have achieved the same result. Many observers pronounced the scientist involved, He Jiankui, reckless, but analysis of the case also showed imperfect regulation and prompted a castigation of the institutions of science similar to that raised by Hurlbut and Jasanoff (see above). Greely and others noted that He was acting in line with a scientific culture that places a premium on provocative research and on being first in international and national scientific competitions. Science also has to make it clearer that it will not pursue research not acceptable to its society and must internalize the view that science cannot make the final decision (Greely, 2021). He Jiankui was criminally convicted, and sweeping regulatory reforms indicate that the scandal forced Chinese policies to be aligned with international bioethical principles. Song and Joly (2021) note the present lack of empirical data on enforcement of these policies, leaving unanswered whether they are being used to address problems of research integrity.

The resulting dialogue shows how progress can be made, with governance/ethics experts and leaders from the scientific community responding to each other's arguments. Jasanoff et al. (2019) continue to make their point by noting that "public permission is a prerequisite to disrupting fundamental elements of the social order," reflecting human integrity and autonomy. Senior scientists (Lander et al., 2019) suggest a global moratorium on all clinical uses of human germline editing (but not on research uses that do not involve transfer of an embryo into a uterus). Appropriately, these scientists describe the lack of safety and effectiveness data and the availability of alternatives, but they also engage with questions that need broad societal consensus from diverse perspectives. They acknowledge the criticism that the process should not be too strongly controlled by scientists and physicians and argue that avoiding harm to patients and erosion of public trust justifies the cost of a moratorium. This stance by some members of the scientific community is different from that in similar earlier situations, in which science did, in essence, believe it had the authority to establish what would now be considered public policy.

Some suggest that ethical concerns for emerging technology within gene therapy continue to unfold with almost no public deliberation. Polygenic embryo screening (use of polygenic risk scores as a component of preimplantation genetic testing to estimate risk of common diseases) is now commercially available. Yet few data are available to support the utility of these risk scores, and they raise important issues. Polygenic risk scores may mean only a small increase in absolute risk. Parents will need to decide which embryo, with varying risk scores for common diseases, to implant. This movement could open the door to screening for psychiatric disorders, with all the accompanying stigma. Limited regulations address this emerging technology (Lazaro-Munoz et al., 2021).

Clearly, some believe that a system for anticipating emerging technologies within the gene therapy space has failed. Others assess the situation otherwise. Martin et al. (2020) see ethical debates on germline genetic engineering as running ahead of the science as a way to establish new norms, thus playing a key role in moving this emerging technology forward; its early development was therefore not seen as very disruptive in many places (Martin et al., 2020). Helpfully, the dialogue from such a situation has identified areas requiring additional normative and empirical work but has also raised important questions and observations relevant to emerging technologies in general and to how they may be studied with integrity. Evans (2021) tracks how morally relevant elements creating a barrier between somatic and germline gene editing were eroded by new empirical developments as well as by ethical reasoning. Is treating susceptibility to disease an example of enhancement? Isn't germline human genome editing closely linked to the ethics of procreative choice? If germline editing becomes safe, doesn't justice allow or even require its use to level the playing field? (Evans, 2021). Farrelly (2021) raises another justice question: why has so little work in genome editing addressed prevention of disease and disability across the complete lifespan, extending it and decreasing the time of frailty? Doesn't promoting health in late life through genome editing deserve much more attention than it is receiving?

Aside from these discussions about decision authority during the evolution of gene therapy as an emerging technology, others have drawn attention to issues that harken back to discussions in earlier chapters of this book. Through the lens of medical anthropology, Addison and Lassen (2017) highlight the limitations of the dominant ethics frame in IRB regulations (a logic of oversight), with a huge blind spot in the absence of participant/patient voices on the ethics of living with and after gene therapy trials. Yeoman et al. (2017) define patient centricity in product development to include understanding patient experiences, equipping patients to make informed choices about treatment options, and partnering with patients to measure patient-important outcomes.

Others reflect on methodological threats to RI that are common across biomedical research and that also occur in gene therapy research. Bittlinger et al. (2021) applied robust clinical trial methodology and the conditions necessary for translational research to 46 preclinical somatic cell genome editing trials. These trials, largely publicly funded, did not report trial elements key for robust and confirmatory research: randomization, blinding, sample size calculation, pre-registration, and independent confirmation. These are well-established measures to reduce validity threats and should especially be assured in gene therapy trials, where a robust benefit-risk estimation must be established on the basis of preclinical evidence. NIH funding requirements for such research require independent validation of relevant preclinical studies before translation to Phase I/II trials is warranted (Bittlinger et al., 2021), a step in the right direction.

Following this criticism, Apaydin and colleagues (2020) provide an evidence map giving a broad overview of genetic therapies that have been evaluated in randomized controlled trials (RCTs) for efficacy and safety. Evidence of this sort is still sparse; the genetic therapy literature is still largely based on single-arm phase 2 trials. This lack of evidence has implications for ethics, from IRB deliberations to informed consent, as the benefit/risk ratio is essential to both. Further, 119 gene therapy RCTs, largely in cardiovascular disease and cancer, were located and analyzed for gaps and quality of evidence. Studies followed patients for only an average of 15 months after treatment, insufficient to determine long-term effectiveness or "cure," and reported only changes in symptoms. Very few long-term trials have been conducted, although safety and efficacy studies with follow-up of up to 15 years are underway for approved genetic therapy interventions. Thus, much about gene therapy safety and effectiveness is still unknown (Apaydin et al., 2020), consistent with the pattern of research in emerging technologies.

Concerns to date in gene therapy/editing research have focused on an appropriate balance between public deliberation and scientific authority, on the inability to control scandal and its aftermath and to learn from it, and on the quality and completeness of research supporting further development of the field. Consistent with self-regulation by the scientific community, the Gene Curation Coalition has undertaken a key methodological responsibility of harmonizing and standardizing the validity of gene–disease relationships used in genomic medicine and research. Assessment of the evidence that variants in a gene are linked to a particular monogenic disease is critical for variant interpretation and determines the content of clinical gene panel tests; unless a gene is convincingly linked to a disease, the pathogenicity of a variant cannot be interpreted. An international database of gene–disease validity assertions is necessary to facilitate data sharing (DiStefano et al., 2022).

Better Practices A number of better practices (reforms) have been suggested. First, potential trial participants need much more public reporting of trial elements in order to make an informed decision about whether to find and join a gene therapy trial. An assessment of registration completeness (in the WHO Human Genome Editing Database) and of published data in journal articles for trials testing genome editing therapies found that only half reported data safety monitoring boards, only 16% provided the number and date of IRB approval, and 80% did not register the vector used; in addition, posting of informed consent forms is not required by this database (Juric et al., 2022). Second, the interests of scientists and research funders dominate, shaping the aims and forms of innovation, while public engagement is left to clean up the consequences. No group, including the National Academies of Sciences, Engineering, and Medicine, has offered unambiguous recommendations on how to assure that gene therapy research serves the public good. Instead, scientists have been left to self-govern, guided largely by the assumption that basic research should proceed, with public involvement saved for clinical uses. Technical advancement and expansion of private capital become the goals. Proper authority over the values which research should aim to respect or promote, over the institutional structures that will determine distribution of the risks, costs, and benefits of gene editing research, and over its relevance compared with other public health or societal priorities such as social determinants of health is not addressed (Nelson et al., 2021).
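The completeness assessment Juric et al. describe is, at bottom, a tabulation of which required elements each registry record actually reports. The sketch below illustrates that tabulation only; the field names and sample records are hypothetical stand-ins invented for this illustration, not the schema or contents of the WHO Human Genome Editing Database.

```python
# Minimal sketch of a registration-completeness tabulation.
# Field names and sample records are hypothetical, not the WHO database schema.
from typing import Dict, List

REQUIRED_ELEMENTS = [
    "data_safety_monitoring_board",   # was a DSMB reported?
    "irb_approval_number_and_date",   # was IRB approval documented?
    "vector_registered",              # was the vector used registered?
]

def completeness_report(records: List[Dict[str, bool]]) -> Dict[str, float]:
    """Return the share of trial records reporting each required element."""
    n = len(records)
    return {
        element: sum(1 for r in records if r.get(element, False)) / n
        for element in REQUIRED_ELEMENTS
    }

if __name__ == "__main__":
    sample = [  # three made-up registry records
        {"data_safety_monitoring_board": True, "irb_approval_number_and_date": False,
         "vector_registered": False},
        {"data_safety_monitoring_board": False, "irb_approval_number_and_date": False,
         "vector_registered": True},
        {"data_safety_monitoring_board": True, "irb_approval_number_and_date": True,
         "vector_registered": False},
    ]
    for element, share in completeness_report(sample).items():
        print(f"{element}: reported in {share:.0%} of records")
```

The point of the sketch is only that "completeness" here is an auditable, countable property of registration records rather than a matter of judgment.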


In both the Asilomar conference of the 1970s (on use of recombinant DNA techniques) and the CRISPR conference in 2015, scientists focused on freedom of research while taking some responsibility for consequences. The scientist-produced Asilomar rules excluded a range of ethical and social issues, focusing instead on technical issues such as biological risk assessment, which gives scientists priority in the decision-making process. The public's role should not be limited to judging the acceptability of applications which science has delivered but should extend to the direction of research itself. Both Asilomar and the recent CRISPR conference overstepped the authority of science (Rufo & Ficorilli, 2019). The human genome is not the property of science alone. Genome editing, and the research that develops it, currently leaves public questions unengaged, including: who decides what is a disease, an anomaly, or a variation; which diseases are worth being corrected or treated and which are not; what is discrimination; and what kinds of social implications will gene editing bring about when it becomes widely available, or available to some and not to others (Sandor, 2022)?

AI Research in the Biomedical Sciences As a second example of an emerging technology, I examine a subfield of data ethics, artificial intelligence research ethics, which offers a different set of lessons from those in gene therapy. AI is a disruptive technology, but also one with a deep rift between the technical culture that controls its production and marketing and ethics/research integrity. The harms too often built into its products (perpetuation of biases, inadequate safety testing) can greatly overwhelm benefits when these products are used to influence human decisions. "AI refers to a subfield of computer science that aims at creating computer programs that can perform tasks that under usual circumstances would require human intelligence." "Algorithm is a procedure for solving a particular problem by following a set of instructions step by step." "Machine learning enables a software/algorithm to discriminate patterns in data and/or make predictions by learning distinctive features from these data that are not part of the original set of programming instructions" (Kellmeyer, 2019, pp. 247–248). AI is used in basic and translational (clinical) research; examples include neuroscience and nutrition science. In neuroscience, AI can be used to classify histopathological images, neuroimaging, or electroencephalograms. An AI-based brain–computer interface integrates large amounts of brain data. AI can be used for diagnostics, prediction, and drug development. It raises questions of protection of mental privacy and personal identity but currently does not enjoy extra protection in guidelines and regulation. The fields of neuroethics and neurolaw are addressing ELSI challenges arising from human–AI interactions (Kellmeyer, 2019). AI is used in nutrients research to study relationships between nutrients and the functioning of the human body and in the study of gut microbiota. In these and other areas of research, AI allows analysis of large datasets that could not be analyzed using traditional statistical methods (Sak & Suchodolska, 2021). Research also documents bias in certain data uses and suggests how to debias them. For example, large pretrained language models have been demonstrated to associate Muslims with violence, a bias that should be remedied (Abid et al., 2021).

Current oversight mechanisms for AI research are inadequate and conflicted. Some worry that traditional IRBs do not have the expertise to attend to risks of informational harm, such as privacy breaches or algorithmic discrimination, or to follow the research into deployment of machine learning algorithms. If data are considered non-identifiable and researchers do not engage directly with research subjects, a study could be considered exempt from IRB review. Ferretti et al. (2021) argue that because it is increasingly hard to anonymize data, studies using such data should not be oversight free. In addition to data access committees, some institutions are creating a distinct specialized ethics committee, the data review board, to operate alongside IRBs (Ferretti et al., 2021). It should be noted that such a stance, in preference to revising IRBs, simply adds to the myriad specialized research review bodies with inconsistent definitions and unclear lines of authority reviewed by Friesen and colleagues (Friesen et al., 2019). It is also important to note that IRBs do not take into consideration potential harms to individuals or to society from application of research findings (Pamuk, 2021). Preference for a data-trust model with participation of those affected is building.

A large number of ethical guidelines for AI research and development exist, often defined by private interests, in a field in which much research and many conferences are industry funded. Consequently, very few papers are published documenting AI misuse. No ethics enforcement mechanisms exist other than self-interested self-regulation. Hagendorff (2020) prescribes a legal framework, imposed from outside the AI community, involving independent audit and compensation for individuals harmed by AI. Brown et al. (2021) define ethical algorithm audits as "assessments of the algorithm's negative impact on the rights and interests of stakeholders, with a corresponding identification of situations and/or features of the algorithm that give rise to these negative impacts" (p. 2). Metrics of AI ethical governance systems should include: societal bias and statistical bias; accuracy and transparency of architecture; data collection and use; potential for misuse, abuse, and infringements of legal rights; and data security and access (Brown et al., 2021) (an illustrative sketch of one narrow slice of such an audit appears below).

Insights from the philosophy of innovation raise basic questions. While a decision not to innovate may leave real problems unsolved, AI should not be thought of as inherently good or as primarily technologically or economically driven. Risks can be incidental, continuous, or manifest in the future. Such technologies emerge in stages: a concept is introduced and elaborated, critical review yields evaluation, and finally viewpoints consolidate and society accommodates to the concept. With AI we are nowhere near the last stage. Prominent ethical questions remain unaddressed and unresolved: what is the responsibility of innovators in dealing with risks (Bennick, 2020), and how are issues of justice, in both benefits and harms, measured and addressed? Ethical guidelines are present but clearly insufficient in their current constellation.
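Parts of the statistical-bias portion of such an audit reduce to computing selection rates and error rates within stakeholder subgroups and comparing them. The sketch below, which assumes binary model decisions and a single protected attribute, illustrates that narrow slice only; a full ethical algorithm audit as Brown et al. define it also covers data provenance, misuse potential, legal rights, and security, none of which reduce to a few lines of code. The data are invented for illustration.

```python
# Illustrative fragment of the statistical-bias portion of an algorithm audit:
# per-group selection rate and accuracy, plus the gap in selection rates.
# Data below are made up; a real audit would use the deployed model's outputs.
import numpy as np

def subgroup_report(y_true, y_pred, group):
    """Per-group selection rate and accuracy, and the demographic parity
    difference (largest minus smallest selection rate across groups)."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    stats = {}
    for g in np.unique(group):
        mask = group == g
        stats[str(g)] = {
            "selection_rate": float(y_pred[mask].mean()),
            "accuracy": float((y_pred[mask] == y_true[mask]).mean()),
        }
    rates = [s["selection_rate"] for s in stats.values()]
    return stats, max(rates) - min(rates)

y_true = [1, 0, 1, 1, 0, 0, 1, 0]                   # hypothetical outcomes
y_pred = [1, 0, 1, 0, 0, 1, 1, 1]                   # hypothetical model decisions
group = ["a", "a", "a", "a", "b", "b", "b", "b"]    # hypothetical protected attribute

per_group, parity_gap = subgroup_report(y_true, y_pred, group)
print(per_group)
print("demographic parity difference:", round(parity_gap, 2))
```

Even this fragment makes one of Brown et al.'s points concrete: the audit result is only as meaningful as the choice of groups, outcomes, and decisions fed into it, which is itself an ethical judgment.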


Difficulty in applying ethical principles to AI production and use (accountability, fairness, privacy, understandability) has led to suggestions to focus on micro-decisions, those made in the multiple daily technical decisions of research and development. Such an approach suggests embedding ethicists in research teams from the very beginning of development, addressing the question of whether the algorithm should be developed at all and how it will affect the full range of stakeholders. Explaining this approach of moving ethics closer to the actual practice of data science, Bezuidenhout and Ratti (2021) suggest these micro-decisions can take context into account, asking questions, for example, about a training algorithm: how will it affect recipients, will it repeat past injustices, who are we leaving out? The ethicist-embedded approach is meant both as a learning opportunity for identifying where ethical decisions emerge and as protection in the absence of a legally enforceable framework. In a highly competitive and fast-moving industry, AI lacks common aims and fiduciary duties, professional history and norms, and accountability (McLennan et al., 2020). Indeed, compared with other fields of study, AI is seen as driven by hype rather than by self-criticism and assurance that its claims are correct. Integrity is how science proceeds when it is not distorted by the values of other institutions. AI is one of the most important sciences in the world; in order to play the role it needs to play, it has to become a strongly self-critical science (Collins, 2021). Whether the field takes responsibility for basic scientific standards such as reproducibility has recently been questioned, with alarm about a brewing reproducibility crisis in machine learning-based sciences. The hope is to avoid the kind of crisis in confidence that followed the replication crisis in psychology (Gibney, 2022).

Accountability for Improved AI Research and Development Accountability is the number one problem in this field, exacerbated by an "AI race" among countries for economic purposes. The USA largely expects the market to regulate industry; evidence is largely held secretly by developers, who favor self-regulation. Research in important medical applications such as radiology does not involve testing in real-world settings (Aristidou et al., 2022) or adopt the outcome measure of how the product performs in the hands of its intended users (Babic et al., 2021), a high bar most companies are not willing to pursue (Aristidou et al., 2022). Most studies on prediction models developed using machine learning show poor methodological quality and are at high risk of bias through exclusion of participants, small sample size, poor handling of missing data, and failure to address overfitting (Navarro et al., 2021). Data used to train algorithms are seldom obtained according to any specific experimental design and are used even though they may be inaccurate, systematically biased, or a poor representation of the population under study (Tsamados et al., 2022).

There are many questions about this lack of accountability. Big Tech is now the primary employer and funder of most AI research (Hao), raising issues of conflict of interest. It is important to note that reporting guidelines for clinical trials of artificial intelligence interventions have been published, as a first international standard to support complete and transparent reporting of protocols and interventions (Ibrahim et al., 2021). Will these standards be followed? AI standards should start with clearly articulated goals to improve fairness and accountability, consideration of how individuals enter the population studied and involvement of those affected, and constant auditing and impact assessment by an independent group. Ethics-based auditing by an external group would allow validation of claims made about automated decision-making systems, with both developer and auditor at risk. The audit would include functionality, code, and impact. Such a governance tool is outlined by Mokander et al. (2021), although a clear institutional structure for providing this service is lacking.

A body of empirical work demonstrates standards that should have been enforced early in the development of AI, especially AI used in health, where the consequences of error can be detrimental, sometimes to thousands of patients. A recent meta-analysis casts doubt on the external validity of many medical AI studies, finding that reporting standards which have been commonly known for decades have not been followed and that testing in settings indicative of real clinical environments is uncommon. The view that these systems must be studied in randomized clinical trials is now urged in order to develop a higher quality and more transparently reported evidence base (Nagendran et al., 2020). Study of machine learning for health has also been found not to meet adequate standards of reproducibility (McDermott et al., 2021). Yet, at the same time, financial investment is pouring in and some algorithms are already at the marketing and public adoption stage (Nagendran et al., 2020).

Topol (2020) describes and critiques approaches that have been followed in AI clinical research. Hundreds of retrospective reports of in silico assessments of datasets, determining how well a deep neural network performs a clinical task compared with a small number of physicians, have been published. Most of the algorithm regulatory approvals required by the FDA rely on this evidence, even though it should be considered preliminary. In addition, the datasets used by private companies to develop their algorithms are rarely published and thus are not transparent to clinicians, who have a responsibility to assure their validity (Topol, 2020).

AI governance documents gathered from around the world provide multiple suggestions for accountability: (1) consider governance arrangements which bring together governments and diverse non-state groups in a balanced and transparent way; (2) the state should go beyond its traditional role of market correction to address mitigating risks and enabling participation of diverse groups; (3) go beyond traditional goals of supporting economic growth to address societal challenges (Ulnicane). Current governance of AI is said to be characterized by an oligopoly of a small number of large companies, which is one of the reasons for lack of consideration of societal interests and needs (Ulnicane et al., 2021a).

What can be done to raise this field of research to long-accepted standards? Topol notes that new guidelines for AI protocols (the SPIRIT-AI extension) and publication (the CONSORT-AI extension) have been published; they should help raise the bar for AI medical research (Topol, 2020). Others suggest that, even in the absence of regulation forcing requirements for Trustworthy AI, funders of AI research grants must be held accountable for the quality of research they are funding. "There are many examples of AI systems being funded, developed and deployed that are not fit for purpose, unethical, unfair, unsafe and further embedding discrimination in society" (Gardner et al., 2022, p. 282). Statutory legislation and professional requirements are arriving more slowly than the rate of technological development in this field (Gardner et al., 2022). Some advocate for training datasets, architectural algorithms, and models being made available to outside research communities for criticism, while others suggest that state regulation of AI is the preferred option for the future (de Laat, 2021). This is the classic problem for emerging technologies, and clearly it has been neither recognized nor anticipated in medical AI research and development. This field has, in the aggregate, proceeded without research integrity, with no scientific institution (regulatory, professional, funder, journals, scientific community) taking adequate responsibility for a field that must now be reined in after the fact.
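The recurring methodological complaints in this literature (overfitting, evaluation only on the data used to develop the model, and no testing under conditions resembling real clinical environments) can be made concrete with a small simulation. In the hedged sketch below, using scikit-learn and synthetic data, a model scored on its own development data looks far better than it does on held-out data, and worse again on an "external site" where the informative measurements are noisier; the cohorts, features, and numbers are invented for illustration and do not come from any study cited in this chapter.

```python
# Synthetic illustration of optimistic internal performance estimates versus
# held-out and external-site performance (overfitting / external validity).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def make_cohort(n, signal_strength=1.0, n_features=100):
    """Simulate a cohort with many uninformative features and a weak signal in
    two of them; lower signal_strength stands in for an external site where
    those measurements are captured less reliably."""
    X = rng.normal(size=(n, n_features))
    logits = signal_strength * (0.9 * X[:, 0] + 0.7 * X[:, 1])
    y = (logits + rng.normal(scale=1.0, size=n)) > 0
    return X, y.astype(int)

X, y = make_cohort(300)
X_dev, X_hold, y_dev, y_hold = train_test_split(X, y, test_size=0.5, random_state=0)
model = LogisticRegression(max_iter=2000).fit(X_dev, y_dev)

X_ext, y_ext = make_cohort(300, signal_strength=0.4)   # hypothetical external site

for label, Xs, ys in [("development data", X_dev, y_dev),
                      ("held-out data", X_hold, y_hold),
                      ("external 'site'", X_ext, y_ext)]:
    auc = roc_auc_score(ys, model.predict_proba(Xs)[:, 1])
    print(f"AUC on {label}: {auc:.2f}")
```

Multi-site assessment and prospective evaluation, as urged by Nagendran et al. and Wu et al., are in effect institutionalized versions of the last comparison rather than the first.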

Other Ethical Views Some commentators offer a much more ethically intense analysis of AI, revealing much deeper issues of justice. Floridi (2021) suggests that "the real challenge is not innovation within the digital world, but the governance of the digital ecosystem as a whole" (p. 218). Kapur (2021) notes that regulatory standards are wrong: AI models are deemed successful if they closely emulate clinical data, without acknowledging that those data include bias, including racial bias. Research comparing an AI model's performance to clinical data is insufficient to ensure objective accuracy (Kapur, 2021). Kate Crawford's book Atlas of AI analyzes issues of power in the system of developing and managing AI. Commercial large-scale data capture is privatized, often without any oversight; such practice has become normalized. AI is a social and political intervention masquerading as purely technical, portrayed as inevitable, and claiming scientific neutrality. AI has no formal professional governance structure; it does have multiple ethical frameworks (128 in Europe alone) but no consequence if these frameworks are violated, which could be seen as a way of avoiding regulation. States, which would be the regulators protecting the public interest, profit from AI's commercial value (Crawford, 2021) and thus are caught in a conflict of interest. No wonder the focus on ethics has overshadowed interest in adopting regulation, especially considering the reality of corporations driving research and development in the field. But the state has an undeniable responsibility to control negative effects and unintended consequences of this technology, especially since states have often funded basic research in this field with public money (Radu, 2021).


Others suggest a critical theory approach to ethical analysis of AI, aimed at diagnosing and changing society for emancipatory purposes, empowering people and protecting them against systems of power. Critical theory can help to pinpoint ethically relevant issues related to the use of power and domination. Ethical principles of transparency, privacy, freedom, and autonomy are valued because they empower individuals to manage their own lives and, more specifically, to know what happens to their data. Trust, justice, responsibility, and non-maleficence protect individuals against power that could be exercised by means of AI and raise the question of what are considered acceptable power relations and systems in society (Waelen, 2022). Some believe that a meta-governance mechanism is necessary, perhaps through the G20, supported by many polycentric governance arrangements. Economically, global competition over AI can lead to an unjust distribution of its benefits and harms, although the extent to which AI benefits outweigh its harms remains unclear (Jelinek et al., 2021). Despite this potential, AI research and development seem not to have been open to public participation or to incorporating public values into design. Finally, attention should be drawn to the sense of inevitability that AI is necessary for international competitiveness. The promise is to open new markets early and quickly and to allow a country to set its own standards. Narratives elevate AI to a core demand of society, a good of which nobody should be deprived. Inevitability and hype mask side effects and power structures (Bareis & Katzenbach, 2022).

Governance/Regulatory Oversight/Ethical Debates in Research in Emerging Technologies Unless regularly adapted, current research regulatory structures are inadequate for emerging technologies. Instead, we should ask: how can we improve the process of developing new technologies, and how can the pace of progress be matched to the pace at which its contribution to human welfare can be evaluated (Jayaram, 2022)? There is much evidence that answering these questions has not been top of mind, and many concerns remain. An example of concern about regulatory oversight may be found in analysis of medical AI devices, which require FDA approval undergirded by research. Wu et al. (2021) note "that there are no established best practices for evaluating commercially available algorithms to ensure their reliability and safety" (p. 582). These authors' study of FDA-approved medical AI devices found that most did not involve a side-by-side comparison of clinician performance with and without AI, most did not have publicly reported multi-site assessment to assure that the algorithms performed well across representative populations, and demographic subgroup performance was rarely considered. Safe and robust clinical AI, and the research supporting FDA approval, require clear answers to common problems with AI: is there overfitting to training data, vulnerability to data shifts, or bias against underrepresented patient subgroups (Wu et al., 2021)?


Philosopher Pamuk (2021) considers whether it might be permissible to restrict research under empirical and normative uncertainty and concludes that it may be, if use of the technology is expected to be broad, with a potentially irreversible societal impact not limited to those who choose to use it. Once the technology is developed, the change that it brings may be irreversible (Pamuk, 2021). Jongsma and Bredenoord (2020) note that researchers as well as others have called for guidance about how to ethically guide emerging biomedical innovations into society. These authors suggest an ethics parallel research framework to place the ethicist in an anticipatory and constructively guiding role, developing best practices and providing input for the normative evaluation of such technologies. As defined by these authors, ethics parallel research incorporates six ingredients: (1) disentangling wicked problems, clarifying the arguments and interests of the different stakeholders; (2) using upstream or midstream ethical analysis, identifying underlying assumptions, implicit goals, and possible effects of the technology, and shifting the ethical emphasis from initial consent to ongoing governance obligations; (3) embedding ethicists in the project to help identify safety mechanisms and challenges that demand institutional oversight; (4) using empirical research to explore users' moral beliefs and reasoning; (5) engaging members of the public in participatory design; and (6) at midstream development, when the technology is first performed in practice, focusing not only on traditional impacts of safety and cost effectiveness but also on soft impacts: the influence of the new technology on such variables as our values, human flourishing, experiences, perceptions, and identity (Jongsma & Bredenoord, 2020).

Regulatory oversight must also include institutions and their readiness to effectively implement an emerging technology in their particular context. Capacities to be developed may include a data infrastructure to bring diverse streams of information together and governance processes such as quality and safety standards (Webster & Terzic, 2021). The Macchiarini case, discussed in a previous chapter, suggests that one ought always to consider what structures and policies might have allowed or promoted unethical conduct. Recall from this case that the institution (Karolinska Institute and its affiliated hospital) had a strong incentive to show good performance from the research funds supporting this stem cell research, which in turn was good for Swedish business. Existing policy called for Swedish universities, not a national board, to monitor the conduct of their researchers. Such policy positions create strong conditions for risk taking by multiple actors (McKelvey et al., 2018).

In summary, innovation governance aims to yield innovation without undue risk. The Macchiarini case involved interaction among medical research, clinical practice, and technological development, each with disparate bodies of knowledge that operate in different regulatory systems but that needed to be aligned in this (and other) case(s). A gray zone of uncertainty between regulations for nonconfirmed clinical practice and research regulation can be detected in this case. Was Macchiarini's work research (there was no IRB approval), or was it clinical practice for a desperate clinical situation (in which case it should not have proceeded to a second and third patient)? What regulations addressed development and use of the tracheal implants seeded with stem cells? The clinical/research gap was not acknowledged in regulation. Such gray zones will recur throughout development and deployment of an innovative technology. They require constant monitoring to assure not only that innovation can proceed with its potential benefits but also that risk is maximally controlled, including digging deeply into structures and policies (McKelvey et al., 2018; McKelvey & Saemundsson, 2021). Helgesson (2020) suggests that the widely used research governance document, the Declaration of Helsinki, also does not address this gap.

Innovation Ethics as a Distinct Field Questions raised by technological research and innovation have particular ethical characteristics. Normative boundaries are drawn and shifted, requiring new norms, regulations, and cognitive frames; this should be anticipated and responded to in an inclusive manner. Overpromising is common. The technologies are novel, have impact, and may have features that are uncontrollable, unpredictable, irreversible, and difficult but important to trace in causal chains. Research and development for innovative technologies should be the topic of public debate. The degree of responsibility to be placed on the innovators is frequently not addressed, or is dodged; for example, the field of AI is marked by a search for suitable governance frameworks that is still far from settled (Bourban & Rochel, 2021). Innovation ethics is broader than consideration of specific emerging technologies. As an emerging field itself, it suggests a move from governance of risk to governance of innovation and of the research necessary to support claims made. Generally, it requires attention to anticipation (what political, economic, and social forces have determined a technology's development), to inclusiveness, and to responsiveness, including who will monitor the ethics of its ongoing research, development, and emergence. The construct of socially disruptive technologies is especially relevant for ethics. These emerging technologies can disrupt social relations, institutions, and foundational concepts and values, and they make us lose our normative, theoretical, and conceptual bearings. The degree of disruptiveness can be judged by the depth and breadth of their impact, by the pace of change, and by their reversibility, but also by ethical salience. Do they create ethical dilemmas and moral confusion that current ethics systems are not well equipped to handle (Hopster, 2021), without leading to a new set of norms? Such a situation blocks understanding of our own and others' moral obligations and our sense of moral agency. In such a situation, moral inquiry, a guided search process that may eventually lead to moral change, is in order (Nickel, 2020).

Conclusion Emerging technologies share common characteristics: radical novelty, fast growth, prominent impact, and uncertainty and ambiguity (Ulnicane et al., 2021a, b). While the fields of research and development in both gene editing and AI share these characteristics, the response to them by the ethics community differed. With gene editing, strong pushback declared that the scientific community did not have the authority it had assumed to determine the direction of the field absent strong public deliberation. In contrast, AI has accumulated vast numbers of ethics statements absent regulation to enforce standards, could be said to be captured by five major industry players who fund much of the research in the field (a conflict of interest), and is struggling to establish successful interaction between its technical core and ethicists. Embedding ethics experts in AI development from the start, attending to ethical implications in daily decisions, may help to develop ethical standards, although it will not overcome the desire for AI to be entirely self-regulating.

References Abid, A., Farooqi, M., & Zou, J. (2021). Large language models associate Muslims with violence. Nature Machine Intelligence, 3, 461–463. https://doi.org/10.1038/s42256-­021-­00359-­2 Addison, C., & Lassen, J. (2017). “My whole life is ethics!” Ordinary ethics and gene therapy clinical trials. Medical Anthropology, 36(7), 672–684. https://doi.org/10.1080/0145974 0.2017.1329832 Apaydin, E. A., Richardson, A. S., Baxi, A., Vockleu, K., Akinniranye, O., Ross, R., Larkin, J., Motala, A., Azhar, G., & Hempel, S. (2020). An evidence map of randomised controlled trials evaluating genetic therapies. BMJ Evidence-Based Medicine, 26, 194. https://doi.org/10.1136/ bmjebm-­2020-­111448 Aristidou, A., Jena, R., & Topol, E.  J. (2022). Digital medicine: Bridging the chasm between AI and clinical implementation. Lancet, 399(10325), 620. https://doi.org/10.1016/ S0140-­6736(22)00235-­5 Asquer, A., & Krqchkovskaya, I. (2020). Uncertainty, institutions and regulatory responses to emerging technologies: CRISPR gene editing in the US and the EU (2012-2019). Regulation & Governance, 15(4), 1111–1127. https://doi.org/10.1111/rego.12335 Babic, B., Gerke, S., Evgeniou, T., & Cohen, I. G. (2021). Beware explanations from AI in health care. Science, 373(6552), 284–286. https://doi.org/10.1126/science.abg1834 Bareis, J., & Katzenbach, C. (2022). Taking AI into being: The narratives and imaginaries of national AI strategies and their performative politics. Science, Technology and Human Values, 47(5), 855–881. https://doi.org/10.1177/01622439211030007 Bauer, A., & Bogner, A. (2020). Let’s (not) talk about synthetic biology: Framing an emerging technology in public and stakeholder dialogues. Public Understanding of Science, 29(5), 492–507. https://doi.org/10.1177/0963662520907255 Bennick, H. (2020). Understanding and managing responsible innovation. Philosophy of Management, 19, 317–348. https://doi.org/10.1007/s40926-­020-­00130-­4 Bezuidenhout, L., & Ratti, E. (2021). What does it mean to embed ethics in data science? An integrative approach based on the microethics and virtues. AI & Society, 36, 939–953. https://doi. org/10.1007/s00146-­020-­01112-­w Bittlinger, M., Schwietering, J., & Strech, D. (2021). Robust preclinical evidence in somatic cell genome editing: A key driver of responsible and efficient therapeutic innovations. Drug Discovery Today, 26(10), 2238–2243. https://doi.org/10.1016/j.drudis.2021.06.007 Bourban, M., & Rochel, J. (2021). Synergies in innovation: Lessons learnt from innovation ethics for responsible innovation. Philosophy & Technology, 34, 373–394. https://doi.org/10.1007/ s13347-­020-­00392-­w Brokowski, C., & Adil, M. (2019). CRISPR ethics: Moral considerations for applications of a powerful tool. Journal of Molecular Biology, 431, 88–101. https://doi.org/10.1016/j. jmb.2018.05.044


Brown, S., Davidovic, J., & Hasan, A. (2021). The algorithmic audit: Scoring the algorithms that score us. Big Data & Society, 8(1), 1–10. https://doi.org/10.1177/2053951720983865 Chan, S. (2018). Research translation and emerging health technologies: Synthetic biology and beyond. Health Care Analysis, 26(4), 310–325. https://doi.org/10.1007/s10728-­016-­0334-­2 Collins, H. (2021). The science of artificial intelligence and its critics. Interdisciplinary Science Reviews, 46(1–2), 53–70. https://doi.org/10.1080/03080188.2020.1840821 Crawford, K. (2021). Atlas of AI. Yale University Press. de Laat, P. B. (2021). Companies committed to responsible AI: From principles towards implementation and regulation? Philosophy & Technology, 34, 1135–1193. https://doi.org/10.1007/ s13347-­021-­00474-­3 Delhove, J., Osenk, I., Prichard, I., & Donnelley, M. (2020). Public acceptability of gene therapy and gene editing for human use: A systematic review. Human Gene Therapy, 31(1–2), 20–49. https://doi.org/10.1089/hum.2019.197 DiStefano, M. T., Goehringer, S., Babb, L., Alkuraya, F. S., Amberger, J., Amin, M., Austin-Tse, C., Balzotti, M., Berg, J.  S., Birney, E., Boccini, C., Bruford, E.  A., Coffey, A.  J., Collins, H., Cunningham, F., Daugherty, L. C., Einhorn, Y., Firth, H. V., Fitzpatrick, D. R., & Rehm, H. L. (2022). The gene curation coalition: A global effort to harmonize gene-disease evidence resources. Genetics in Medicine, 24(8), 1732–1742. https://doi.org/10.1016/j.gim.2022.04.017 Droog, E., Burgers, C., & Kee, K. F. (2020). How journalists and experts metaphorically frame emerging information technologies: The case of cyberinfrastructure for big data. Public Understanding of Science, 29(8), 819–834. https://doi.org/10.1177/0963662520952542 Ernst, M. P. T., Broeders, M., Herrero-Hernandez, P., Oussoren, E., van der Ploeg, A., & Pijnappel, W. W. M. (2020). Ready for repair? Gene editing enters the clinic for the treatment of human disease. Molecular Therapy Methods & Clinical Development, 18, 532–557. https://doi. org/10.1016/j.omtm.2020.06.022 Evans, J. H. (2021). Setting ethical limits on human gene editing after the fall of the somatic/germline barrier. Proceedings of the National Academy of Sciences U S A, 118(22), e2004837117. https://doi.org/10.1073/pnas.2004837117 Farrelly, C. (2021). How should we theorize about justice in the genomic era? Politics and the Life Sciences, 40(1), 106–125. https://doi.org/10.1017/pls.2021.3 Ferretti, A., Ienca, M., Sheehan, M., Blasimme, A., Dove, E. S., Farsides, B., Friesen, P., Kahn, J., Karlen, W., Kleist, P., Liao, S. M., Nebeker, C., Samuel, G., Shabani, M., Velarde, M. R., & Vayena, E. (2021). Ethics review of big data research: What should stay and what should be reformed? BMC Medical Ethics, 22(1), 51. https://doi.org/10.1186/s12910-­021-­00616-­4 Floridi, L. (2021). Digital ethics online and off. American Scientist, 109, 218–222. Friesen, P., Redman, B., & Caplan, A. (2019). Of straws, camels, research regulation and IRBs. Therapeutic Innovation and Regulatory Science, 53(4), 526–534. https://doi. org/10.1177/2168479018783740 Gardner, A., Smith, A. L., Steventon, A., Coughlan, E., & Oldfield, M. (2022). Ethical funding for trustworthy AI: Proposals to address the responsibilities of funders to ensure that projects adhere to trustworthy AI practice. AI and Ethics, 2(2), 277–291. https://doi.org/10.1007/ s43681-­021-­00069-­w Gibney, E. (2022). Is AI fueling a reproducibility crisis in science? Nature, 608(7922), 250–251. Gordijn, B., & Ten Have, H. 
(2017). Emerging technologies and the voice of reason. Medicine, Health Care and Philosophy, 20(1), 1–2. https://doi.org/10.1007/s11019-017-9756-3 Greely, H. T. (2021). CRISPR people. MIT Press. Hagendorff, T. (2020). The ethics of AI ethics: An evaluation of guidelines. Minds and Machines, 30, 99–120. Hao, K. The fight to reclaim AI, change. Helgesson, G. (2020). What is a reasonable framework for new non-validated treatments? Theoretical Medicine and Bioethics, 41(4–5), 239–245. https://doi.org/10.1007/s11017-020-09537-6 Hopster, J. (2021). What are socially disruptive technologies? Technology in Society, 67, 101750. https://doi.org/10.1016/j.techsoc.2021.101750


Hurlbut, J.  B. (2020). Imperatives of governance: Human genome editing and the problem of progress. Perspectives in Biology and Medicine, 63(1), 177–194. https://doi.org/10.1353/ pbm.2020.0013 Ibrahim, H., Liu, X., Rivera, S. C., Moher, D., Chan, A., Sydes, M. R., Calvert, M. J., & Denniston, A.  K. (2021). Reporting guidelines for clinical trials of artificial intelligence interventions: The SPIR IT-AI and CONSORT-AI guidelines. Trials, 22(1), 11. https://doi.org/10.1186/ s13063-­020-­04951-­6 Jasanoff, S., & Hurlbut, J. B. (2018). A global observatory for gene editing. Nature, 555(7697), 435–437. https://doi.org/10.1038/d41586-­018-­03270-­w Jasanoff, S., Hurlbut, J. B., & Saha, K. (2019). Democratic governance of human germline genome editing. The CRISPR Journal, 2(5), 266–271. https://doi.org/10.1089/crispr.2019.0047 Jayaram, A. (2022). Thinking about moral progress. The Hastings Center Report, 5(5), 1. Jelinek, T., Wallach, W., & Kerimi, D. (2021). Policy brief: The creation of a G20 coordinating committee for the governance of artificial intelligence. AI and Ethics, 1, 141–150. https://doi. org/10.1007/s43681-­020-­00019-­y Jongsma, K.  R., & Bredenoord, A.  L. (2020). Ethics parallel research: An approach for (early) ethical guidance of biomedical innovation. BMC Medical Ethics, 21(1), 81. https://doi. org/10.1186/s12910-­020-­00524-­z Juric, D., Zlatin, M., & Marusic, A. (2022). Inadequate reporting quality of registered genome editing trials: An observational study. BMC Medical Research Methodology, 22(1), 131. https:// doi.org/10.1186/s12874-­022-­01574-­0 Kapur, S. (2021). Reducing racial bias in AI models for clinical use requires a top-down intervention. Nature Machine Intelligence, 3, 460. https://doi.org/10.1038/s42256-­021-­00362-­7 Kellmeyer, P. (2019). Artificial intelligence in basic and clinical neuroscience: Opportunities and ethical challenges. e-Neuroforum, 25(4), 241–250. https://doi.org/10.1515/nf-­2019-­0018 Kuhlmann, S., Stegmaier, P., & Konrad, K. (2019). The tentative governance of emerging science and technology – A conceptual introduction. Research Policy, 48(5), 1091–1097. https://doi. org/10.1016/j.respol.2019.01.006 Lander, E. S., Baylis, F., Zhang, F., Charpentier, E., Berg, P., Bourgain, C., Friedrich, B., Joung, J.  K., Li, J., Liu, D., Naldini, L., Nie, J., Qiu, R., Schoene-Seifert, B., Shao, F., Terry, S., Wei, W., & Winnacker, E. (2019). Adopt a moratorium on heritable genome editing. Nature, 567(7747), 165–168. https://doi.org/10.1038/d41586-­019-­00726-­5 Lazaro-Munoz, G., Pereira, S., Carmi, S., & Lencz, T. (2021). Screening embryos for polygenic conditions and traits: Ethical considerations for an emerging technology. Genetics in Medicine, 23(3), 432–434. https://doi.org/10.1038/s41436-­020-­01019-­3 Mackelprang, R., Aurand, E. R., Bovenberg, R. A. L., Brink, K. R., Charo, R. A., Delborne, J. A., Diggans, J., Ellington, A. D., Fortman, J. L. C., Isaacs, F. J., Medford, J. I., Murray, R. M., Noireaux, V., Palmer, M. J., Zoloth, L., & Friedman, D. C. (2021). Guiding ethical principles in engineering biology research. ACS Synthetic Biology, 10(5), 907–910. https://doi.org/10.1021/ acssynbio.1c00129 Martin, P., Morrison, M., Turkmendag, I., Nerlich, B., McMahon, A., de Saille, S., & Bartlett, A. (2020). Genome editing: The dynamics of continuity, convergence, and change in the engineering of life. New Genetics and Society, 39(2), 219–242. https://doi.org/10.108 ­ 0/14636778.2020.1730166 Mathews, D. J. H., Balatbat, C. A., & Dzau, V. J. (2022a). 
Governance of emerging technologies in health and medicine – Creating a new framework. New England Journal of Medicine, 386(23), 2239–2242. https://doi.org/10.1056/NEJMms2200907 Mathews, D. J. H., Fabi, R. R., & Offodile, A. C. (2022b). Imagining governance for emerging technologies. Issues in Science and Technology, 38(3), 40–46. McDermott, M.  B. A., Wang, S., Marinsek, N., Ranganath, R., Foschini, L., & Ghassemi, M. (2021). Reproducibility in machine learning for health research: Still a ways to go. Science Translational Medicine, 13(586), eabb1655. https://doi.org/10.1126/scitranslmed.abb1655


McKelvey, M., & Saemundsson, R.  J. (2021). Developing innovation governance readiness in regenerative medicine: Lessons learned from the Macchiarini crisis. Regenerative Medicine, 16(3), 283–294. https://doi.org/10.2217/rme-­2020-­0173 McKelvey, M., Saemundsson, R. J., & Zaring, O. (2018). A recent crisis in regenerative medicine: Analyzing governance in order to identify public policy issues. Science and Public Policy, 45(5), 608–620. https://doi.org/10.1093/scipol/scx085 McLennan, S., Fiske, A., Celi, L.  A., Muller, R., Harder, J., Ritt, K., Haddadin, S., & Buyx, A. (2020). An embedded ethics approach for AI development. Nature Machine Intelligence, 2, 488–490. https://doi.org/10.1038/s42256-­020-­0214-­1 Mohamed, S., Png, M., & Isaac, W. (2020). Decolonial AI: Decolonial theory as sociotechnical foresight in artificial intelligence. Philosophy & Technology, 33, 659–684. https://doi. org/10.1007/s13347-­020-­00405-­8 Mokander, J., Morley, J., Taddeo, M., & Floridi, L. (2021). Ethics-based auditing of automated decision-making systems: Nature, scope, and limitations. Science & Engineering Ethics, 27(4), 44. https://doi.org/10.1007/s11948-­021-­00319-­4 Nagendran, M., Chen, Y., Lovejoy, C.  A., Gordon, A.  C., Komorowski, M., Harvey, H., Topol, E. J., Ioannidis, J. P. A., Collins, G. S., & Maruthappu, M. (2020). Artificial intelligence versus clinicians: Systematic review of design, reporting standards, and claims of deep learning studies. BMJ, 368, m689. https://doi.org/10.1136/bmj.m689 Navarro, C. L. A., Damen, J. A. A., Takada, T., Nijman, S. W. J., Dhiman, P., Ma, J., Collins, G. S., Bajpapi, R., Riley, R. D., Moons, K. G. M., & Hooft, L. (2021). Risk of bias in studies on prediction models developed using supervised machine learning techniques: Systematic review. BMJ, 375, n2381. https://doi.org/10.1136/bmj.n2281 Nelson, J.  P., Selin, C.  L., & Scott, C.  T. (2021). Toward anticipatory governance of human genome editing: A critical review of scholarly governance discourse. Journal of Responsible Innovations, 8(3), 382–420. https://doi.org/10.1080/23299460.2021.1957579 Nickel, P.  J. (2020). Disruptive innovation and moral uncertainty. NanoEthics, 14, 259–269. https://doi.org/10.1007/s11569-­020-­00375-­3 Pamuk, Z. (2021). Risk and fear: Restricting science under uncertainty. Journal of Applied Philosophy, 38(3), 444–460. https://doi.org/10.1111/japp.12484 Radu, R. (2021). Steering the governance of artificial intelligence: National strategies in perspective. Policy & Society, 49(2), 178–193. https://doi.org/10.1080/14494035.2021.1929728 Rufo, F., & Ficorilli, A. (2019). From Asilomar to genome editing: Research ethics and models of decision. NanoEthics, 13(3), 223–232. https://doi.org/10.1007/s11569-­019-­00356-­1 Saha, K., Hurlbut, J. B., Jasanoff, S., Ahmed, A., Appiah, A., Bartholet, E., Baylis, F., Bennett, G., Church, G., Cohen, I. G., Daley, G., Finneran, K., Hurlbut, W., Jaenisch, R., Lwoff, L., Kimes, J. P., Mills, P., Moses, J., Park, B., & Woopen, C. (2018). Building capacity for a global genome editing observatory: Institutional design. Trends in Biotechnology, 36(8), 741–743. https://doi. org/10.1016/j.tibtech.2018.04.008 Sak, J., & Suchodolska, M. (2021). Artificial intelligence in nutrients science research: A review. Nutrients, 13(2), 322. https://doi.org/10.3390/nu13020322 Sandor, J. (2022). Genome editing: Learning from its past and envisioning its future. European Journal of Health Law, 29, 341–358. https://doi.org/10.1163/15718093-­BJA10081 Song, L., & Joly, Y. 
(2021). After he Jiankui: China reforms it’s biotechnology regulations. Medical Law International, 21(2), 174–191. https://doi.org/10.1177/0968533221993504 Taeihagh, A., Ramesh, M., & Howlett, M. (2021). Assessing the regulatory challenges of emerging disruptive technologies. Regulation & Governance, 15, 1009–1019. https://doi.org/10.1111/ rego.12392 Topol, E. J. (2020). Welcoming new guidelines for AI clinical research. Nature Medicine, 26(9), 1318–1320. https://doi.org/10.1038/s41591-­020-­1042-­x Tsamados, A., Aggarwal, N., Cowls, J., Morley, J., Roberts, H., Taddeo, M., & Floridi, L. (2022). The ethics of algorithms: Key problems and solutions. AI & SOCIETY, 37, 215–230. https:// doi.org/10.1007/s00146-­021-­01154-­8


Ulnicane, I., Knight, W., Leach, T., Stahl, B. C., & Wanjiku, W. (2021a). Framing governance for a contested emerging technology: Insights from AI policy. Policy and Society, 49(2), 158–177. https://doi.org/10.1080/14494035.2020.1855800 Ulnicane, I., Eke, D. O., Knight, W., Ogoh, G., & Stahl, B. C. (2021b). Good governance as a response to discontents? Déjà vu or lessons for AI from other emerging technologies. Interdisciplinary Science Reviews, 46(1–2), 71–95. https://doi.org/10.1080/03080188.2020.1840220 Waelen, R. (2022). Why AI ethics is a critical theory. Philosophy & Technology, 35, 9. https://doi. org/10.1007/s13347-­022-­00507-­5 Webster, A., & Terzic, A. (2021). Regenerative readiness: Innovation meets sociology. Regenerative Medicine, 16(3), 189–195. https://doi.org/10.2217/rme-­2021-­0034 Wu, E., Wu, K., Daneshjou, R., Ouyang, D., Ho, D. E., & Zou, J. (2021). How medical AI devices are evaluated: Limitations and recommendations from an analysis of FDA approvals. Nature Medicine, 27(4), 582–584. https://doi.org/10.1038/s41591-­021-­01312-­x Yeoman, G., Furlong, P., Seres, M., Binder, H., Chung, H., Garzya, V., & Jones, R.  R. (2017). Defining patient centricity with patients for patients and caregivers: A collaborative endeavor. BMJ Innovation, 3(2), 76–83. https://doi.org/10.1136/bmjinnov-­2016-­000157

Chapter 10

Research Integrity as Moral Reform: Constitutional Recalibration

Consider these perspectives:

…the system provides incentives to publish fraudulent research and does not have adequate regulatory processes. Researchers progress by publishing research, and because the publication system is built on trust and peer review is not designed to detect fraud it is easy to publish fraudulent research. The business model of journals and publishers depends on publishing, preferably lots of studies as cheaply as possible. They have little incentive to check for fraud and a positive disincentive to experience reputational damage—and possibly legal risk—from retracting studies. Funders, universities and other research institutions similarly have incentives to fund and publish studies and disincentives to make a fuss about fraudulent research they may have funded or had undertaken in their institution—perhaps by one of their star researchers. Regulators often lack the legal standing and the resources to respond to what is clearly extensive fraud, recognizing that proving a study to be fraudulent…is a skilled, complex, and time consuming process…Everybody gains from the publication game…apart from the patients who suffer from being given treatments based on fraudulent data…

Stephen Lock, my predecessor as editor of the BMJ, became worried about research fraud in the 1980s, but people thought his concerns eccentric. Research authorities insisted that fraud was rare, didn’t matter because science was self-correcting, and that no patients had suffered because of scientific fraud. All those reasons for not taking research fraud seriously have proved to be false, and, 40 years on from Lock’s concerns, we are realizing that the problem is huge, the system encourages fraud, and we have no adequate way to respond. It may be time to move from assuming that research has been honestly conducted and reported to assuming it to be untrustworthy until there is some evidence to the contrary. (Richard Smith, 2021)

While Smith’s analysis focuses on fraudulent research, his diagnosis of the system problems that impair research integrity is revealing. His views also reflect arguments made throughout this book. Yet, it is essential to attend to ways in which science production and distribution are doing well and to how problematic parts of the current situation can be reversed or at least ameliorated.

The science community generally believes that the violation of research integrity is rare. Built upon this belief, the scientific system makes little effort to examine the trustworthiness of research…Emerging evidence has suggested that research misconduct is far more common than we normally perceive…The current strategy that tackles potential research misconduct focuses on protecting the reputation of authors and their institutions but neglects the interests of patients, clinicians and honest researchers… (Li et al., 2022).

The health and medical research community generally believes that, since science is supposedly self-correcting, the methods and findings of each other’s work can be trusted. Most of the scientific system, from funding of research to translation and implementation of findings, is built upon this assumption. “Is there any chance this assumption could be wrong?” (Li et al., 2022). Such expressions of deep concern from insiders in the scientific community raise alarm, but there is another perspective:

…To any extent that available evidence has spoken, it does not suggest that fabricated, falsified, biased, and irreproducible results are overwhelming the scientific literature, nor does the evidence suggest that these are increasing in prevalence. For this reason, I have argued elsewhere that the crisis narrative ought to be abandoned, because it is not only not supported empirically but is also potentially and unfairly damaging to the reputation of science and the morale of researchers. However, if by saying that science is in “crisis” we mean that science is in a critical period because it is facing new challenges that require important decisions to be made, then the answer may be “yes”…. (Fanelli, 2022).

In this chapter, the analysis of research integrity strengths and weaknesses, and the suggestions for improvement outlined in earlier chapters, are brought together under broader frameworks of moral change and constitutional recalibration. Research integrity has to solve problems hidden from view, not understood or addressed under the current system of research ethics, which has lightly regulated subjects’ protection and research misconduct and left much of the rest to self-regulation. The current system has assumed that science is accurate until proven otherwise and has not fully documented the impact of the quality of science on research participants or patients. The challenge is to secure the cooperation necessary to assure the production and distribution of valid research with full protection of the humans who participate in its development and use.

Following consideration of broad change frameworks, we consider the roots that led to our current conceptualization of appropriate ethics in research and the steps needed to assure scientific accountability. One among several alternate frames—the view that there is no crisis—is also well articulated. Addressing current research integrity strengths and weaknesses and moving to a research integrity framework will require examining new assumptions, completing empirical research to describe current conditions, testing alignment with the values of research integrity, and much deliberation. We end with potential limitations in the topics and perspectives covered in this book and the methods by which they were retrieved. We also conclude with questions flowing from the title of this book: should research integrity be reconstructed, how might this be accomplished, and are the scientific community and the public emerging from denial that this transition must occur?

Moral Reform; Constitutional Recalibration

Here, we consider two frameworks providing perspective and guidance on how adoption of a research integrity framework might be accomplished.

Baker describes several forms of change: moral revolutions, in which established moral paradigms are displaced by an incompatible alternative paradigm; moral reform, which aims to mitigate deleterious effects of established paradigms; and moral drift, in which external forces have altered the community’s sense of morality (Baker, 2019, 2022). Moral reform fits the recommendations made in this book. It might be addressed in part by expanding the role of IRBs or some other oversight body to explicitly address research quality throughout the research process. Toward that end, the charge to such a group should include, among other things, adequately managing conflict of interest, assuring that responsible conduct of research instruction is effective in scientific practice, and insisting that current laws and regulations are enforced.

Regulatory changes will be necessary. One such change would merge the ethical tenets described in the Belmont Report with research misconduct regulations. Such a regulatory change would require research misconduct investigations to document harm to research participants and even to subsequent patients treated with flawed research (Redman & Caplan, 2021). In addition, Kimmelman (2020) argues that a fourth principle—research integrity—be added to the Belmont Report, in part because without it human subjects are not protected and in part because research has frequently failed to assure that studies incorporate the technical elements necessary for integrity. The failures include underpowered trials, failure to control various forms of bias, nonpublication of trial results, “spinning” of equivocal results, and selective citation and uptake of findings (Kimmelman, 2020).

Moral reform involves an intentional attempt at moral change. The depth of distress with current practice is unclear. Surely there is evidence of it among some young scientists forced to operate in a system they consider to be unethical; concern among the public is likely muted by a serious lack of transparency throughout the system. A number of intentional calls for change in research governance have been cited throughout this book and likely will continue to grow. But uncontrolled commercial incentives driving journals, and pervasive conflicts of interest that bias the literature, will incentivize these and other entities to fight back to retain their privileged status.

What should the reform yield? If implemented, it should help to prevent or resolve future scandals and clusters of research practices that we know do not yield valid and reliable scientific knowledge. Associations of researchers will adopt research integrity in their codes of ethics and enforce them; journals, funders, and universities will without exception provide excellent research environments and require good research practice, with serious consequences if those standards are not followed. The absence of consequences for violation of research integrity is a major problem needing reform. Scientists and the public will come to see poor or unethical scientific practice as wrong, and evidence of replication and translation to effective interventions will be broadly available.

In an alternative conceptualization, Verschraegen (2018) describes an emerging, though still fragmented, recalibration of old agreements to become a transnational social constitution for science.
The emergence can be seen in the increasing importance of ethics bodies and public engagement to counteract “the unleashed dynamism of scientific…innovation.” Specific norms and rules that protect the autonomy of science now require attention to the limits of the operations of science and their interaction with other social systems. Professional autonomy of science over standards of evidence and proof, over the certification and reliability of knowledge, and over how scientific knowledge making should be calibrated with broader societal concerns should be on the table for consideration by relevant parties. Self-regulation by the scientific community is, of course, necessary but will require significant reform of institutions in the science production and dissemination sphere, especially journals and funders. The entire system should be subject to measurement by an independent oversight body, which can authorize and assure stringent formal regulations should the reformed system fail to meet acceptable standards. While an unknown amount of science is done well, the descriptions of poor practices and their consequences noted throughout the text, some of long standing, require reform. Based on examination of self-regulation in professions, some note increasing requirements from oversight bodies and believe pure self-regulation may become extinct. There is little critical assessment of potential regulatory alternatives (Adams, 2017).

The public’s assessment of medical researchers’ scientific integrity should sound an alarm bell and prompt a search for better regulatory alternatives. As of November 2020, while 39% of Americans reported a great deal of confidence in scientists to act in the public interest, only 15% said medical researchers are transparent about potential conflicts of interest due to industry ties, or would admit or take responsibility for mistakes they had made, all or most of the time. Only 2 in 10 believed that if professional or research misconduct occurs, there would be serious consequences for such misdeeds. Thus, survey data suggest Americans are cautious or suspicious around issues of ethics and scientific integrity in medical research even while overall public trust in science is high (Funk, 2021).

As noted above, the concept “research integrity” would replace “research ethics,” an imprecise term that does not describe the end goal of the production and dissemination of science (meeting the quality standards necessary to reap benefits and avoid harms) and is insufficiently clear conceptually to support the goals of science. Other terms would also change: “bad apple” should become “system malfunction,” and “questionable research practices” would be replaced by assessment of the quality of the research. The autonomy of research “subjects” would be further honored by first assuring that the science in which they are participating has the best chance of providing answers important to them. A small example of that change of focus would be informing research participants of research misconduct in trials in which they participated, assuring a complete and thorough examination of all relevant literature and of effects on their health and on the health of subsequent patients whose treatment relied on fabricated/falsified data. And Chap. 9 makes the case that much more attention must be paid to effective management of research integrity in emerging technologies. Oduro et al. (2022) note recently proposed legislation requiring much stronger documentation of the consequences of AI and machine learning technologies before or during their development, including in the real world.
These reforms would be especially focused on disparate impacts on protected classes and on public transparency (Oduro et al., 2022).


Thus, both the overall frameworks of moral reform and of constitutional recalibration are useful ways of conceptualizing how the change required to reach research integrity might be addressed.

Other Frames By Which to Understand Research Integrity

Historical analyses, and attention to underdeveloped perspectives including the view (previously introduced) that there is no crisis, deserve attention. Technology/innovation policy in a historical context offers an additional frame for understanding the evolution of research integrity, as it operates and evolves within understandings of appropriate research policy. Schot and Steinmueller (2018) describe science policy frameworks since World War II; elements of each continue to exist to this day. Frame I, the postwar institutionalization of government support for science, expected research and development to contribute to economic growth and to address market failure in the private provision of new knowledge. Frame II, in place since 1980, emphasized global competition shaped by national systems of innovation for knowledge creation and commercialization, with emphasis on enabling entrepreneurship. Frame III arose at the turn of the twenty-first century, emphasizing sustainable development goals and evolving into a focus on how to use science and technology policy to meet social needs. Such framings evolve over time, changing when they are perceived as inadequate to current circumstances (Schot & Steinmueller, 2018).

In their current iteration, each of these frames leaves important policy residue that affects the goal of reaching research integrity. From Frame I, we inherited commercialization supports through trade secrecy and intellectual property protections to remain competitive, leading to disparate standards and degrees of transparency between commercial and academic (nonprofit) research. In Frame I, regulation was usually applied after the research process was complete, which explains how AI became so far advanced, funded largely by companies, before its ethical deficiencies began to be understood. This leaves a very messy process of discovering both the benefits and the harms produced, and of controlling the latter. Frame I also encouraged science self-regulation, referring problems to members of the scientific community for evaluation and solutions (Schot & Steinmueller, 2018). Frame II called for knowledge to increasingly be produced across disciplines, in the context of application. In this frame, quality control becomes complex because it has to be coordinated not only through multiple disciplines but also across sectors (government, industry, and academia) and thus across different logics. Frame III is dealing with the need to better align social and environmental challenges with innovation objectives. Both Frames I and II assume that stimulating innovation is positive, leaving an ethical problem for Frame III unsolved (Schot & Steinmueller, 2018). Addressing issues in Frame III will likely require revisable governance and destabilization of currently locked-in sociotechnical/economic conditions and goals. Strong resistance to changing prevailing regulatory, cognitive, and normative collective rules (Schot & Steinmueller, 2018) is on full display. It should be noted that all three frames should have required evidence that the underlying science was of high quality and as reproducible as possible—an issue outlined in previous chapters that is still not adequately addressed.

Two other perspectives are instructive as they address undeveloped areas. Douglas notes that science has become so important and so powerful a force in society, with so many crucial issues hanging on scientific assessments, that it is important to assure lines of accountability. She suggests simultaneous lines of accountability: (1) to scientific evidence and to the scientific community that debates what that evidence means, which evidence is most reliable, and when supportive evidence suffices to count something as “proven”; (2) to decision makers; and (3) to the general public (Douglas, 2021). Adorno (2021) reminds us of the right to science and the evolution of scientific integrity, including both the freedom to do science and the right to enjoy the benefits of science. Legal development of a right to science is still rudimentary.

There Is No Crisis (See Fanelli Quote at Chapter Beginning)

A further view was introduced earlier in this chapter and in Chap. 3. Fanelli has provided a useful analysis of existing evidence about practices assumed by some to be poor, and of their effect on the quality of science. Surveys suggest there is a non-zero but typically small percentage (below 5%) of respondents who admit having engaged in FFP at least once and a much higher percentage who report engaging in questionable behaviors that may represent improper conduct. This leaves unanswered the accuracy of data obtained by survey, the actual number of publications affected by these behaviors, the severity of the misbehavior, and the effect these practices have on research publications (Fanelli, 2022).

Questionable research practices (QRPs) are a heterogeneous category of behaviors that may or may not represent good scientific practice in a particular situation. Fanelli suggests that claims that one of those behaviors—p-hacking—is widespread in science, or even in a specific discipline, are not supported by evidence. He also claims that there is no clear evidence that QRPs in general have increased. Multiple studies have found statistical power in the literature to be low, which likely affects reproducibility. But because a certain amount of irreproducibility ought to be expected in most research fields, the question should be what rate of irreproducibility is problematic. Although there is no consensus on such a criterion, Fanelli suggests 50% and notes that many investigations of reproducibility meet that standard. Retractions are increasing, likely because more journals now have policies about them. In light of this evidence, there appears to be little reason to conclude that misconduct, bias, or irreproducibility is pervasive in science, although it may still be the case that these problems are increasing in prevalence and impact. There is evidence that the risk of scientific misconduct may be higher in developing countries, particularly China and India. Yet, a crisis narrative is not supported by evidence, although there might be localized “crises” in specific fields of research (Fanelli, 2022).
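As a rough, hypothetical illustration of why chronically low statistical power bears on reproducibility, the short sketch below computes the power of a two-sample t-test at several per-group sample sizes. The assumed effect size and the sample sizes are invented for illustration; they are not drawn from Fanelli’s analysis or from any study cited here.

# Illustrative only: power of a two-sample t-test for an assumed true
# standardized effect (Cohen's d) at several per-group sample sizes.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
effect_size = 0.3   # hypothetical true effect
alpha = 0.05        # conventional significance threshold

for n_per_group in (20, 50, 200):
    power = analysis.power(effect_size=effect_size, nobs1=n_per_group,
                           alpha=alpha, ratio=1.0)
    print(f"n = {n_per_group} per group -> power ~ {power:.2f}")

# With n = 20 per group, power is roughly 0.15: an exact replication of a
# genuinely true effect would itself usually "fail," before any misconduct
# or questionable practice enters the picture.

Under these assumptions, only the largest of the three designs approaches the conventional 80% power benchmark, which is one reason a given rate of irreproducibility cannot be read directly as a rate of misconduct.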


Fanelli (2022) also notes that science is facing radical transformations that could threaten its integrity and that we are still learning how to deal with them. From an ethical perspective, it is important to note that the real crisis is that we do not have accurate estimates of these behaviors and their impact. It is unacceptable not to be able to assess, correct, and monitor scientific behaviors that could have a deleterious (and perhaps widespread) impact on research subjects and subsequent patient well-being, on stewardship of research resources, and on public trust in science.

Addressing Current Research Integrity Problems Including Those Currently Neglected/Conflicted

Research integrity violations have been characterized by blind spots. For example, the experiences of the accused and of affected research participants are almost never heard or considered relevant. A second unfortunate blind spot follows from the current regulatory focus largely on individuals, which renders invisible and irrelevant the contributions to violations of research integrity from the system of scientific institutions, including employers, journals, funders, universities, and commercial entities (Kubin et al., 2021). Empirical data have largely been limited to surveys, which yield very imprecise estimates of how widespread individual deviations from research integrity are, though such estimates are needed to judge whether or not the problem is serious. Answers to both questions (how widespread such deviations are, and how serious they are) are highly relevant and are a way to increase moral understanding. Here, we review several research integrity issues, among the many highlighted throughout this book, that have been important blind spots and should concurrently receive community, group, and broader governance attention: strengthening science self-regulation, developing policy about emerging data ownership/sharing practices and norms, and, finally, addressing conflict of interest management.

Strengthening Science Self-Regulation

How can self-regulation of science be strengthened? Two examples are instructive: one in which the scientific community came together to analyze what turned out to be an unproductive line of research, and another in which there is little will to clean up widespread practices detrimental to research integrity. Both examples depict themes related to improvement in research integrity standards and practices.

1. Cardiac Regeneration Research

In the fall of 2018, more than 30 foundational papers that described cardiomyocyte plasticity and regenerative therapy were retracted, a major research institution was fined $10 million related to this work, and an NHLBI clinical trial designed to evaluate these cells in patients with ischemic heart failure was paused, causing leaders in the heart failure research community to call for a reset. How could another such incident be prevented (Buttrick, 2019)? Cardiac regeneration was an idea that had surfaced 20 years previously, been studied in small trials with “unimpressive” results, and then moved swiftly into clinical trials. Buttrick (2019) has raised questions to be answered as part of the reset: why did the initial data prevail in the face of significant negative work and inability to replicate the original studies, why was this research community anxious to move into clinical trials prematurely, and why was the possibility of data fabrication/falsification not considered? Soul searching suggests multiple ways in which scientific rigor and integrity could have been improved. A journal editor admitted a strong desire to publish high-impact, paradigm-shifting work; study registration with a prespecified analysis plan and open access to materials and data might well have resolved the issue much earlier (Buttrick, 2019). A large potential market for a therapy was a powerful motivator, with institutions beginning to advertise these cell therapies (Epstein, 2019). An ability to separate the financial implications of a project from its scientific integrity apparently eluded this research community. Will the heart failure research community use the tools now becoming normative that support higher quality research? Will journal editors and those commercializing much-needed therapies be able to resist financial and reputational incentives? A similar fork in the road occurred in the field of anesthesia, faced with massive numbers of retractions by four authors, leading to cooperation among journals in detecting research misconduct (Klein, 2017).

2. Social Value Requirement for Health-Related Research

Despite its widespread recognition in policy, social value remains relatively unexplained in the research ethics literature, both as a concept and as an ethical requirement, having been discussed only briefly in the Belmont Report. Social value means that the research generates knowledge which can be used to benefit society, sufficient to justify the associated risks, burdens, and costs. Because health-related research has long-lasting and unavoidable effects on the life chances of all members of a society, social value is a necessary requirement for it. Of great relevance to research integrity, such value requires methodological rigor (Rid, 2020), which previous chapters have shown to be in question. Lack of social value makes informed consent questionable, involves deception, and violates stewardship of public resources, including through commercial uses of publicly funded research and infrastructure. It also undermines public trust and denies people access to health interventions from previous trials that met research integrity standards. It is not clear how a requirement for social value should be implemented and enforced (Wendler & Rid, 2017).

Despite this lack of clarity and commitment, there are breakthrough examples of social value oversight mechanisms being piloted. At Stanford University, an Ethics and Society Review Board fills this moral gap by facilitating ethical and societal reflection as a requirement to access grant funding. Researchers describe their proposals’ research risks to society, to subgroups within society, and globally, and commit to mitigation strategies for these risks.


An interdisciplinary faculty panel works with the researcher to refine and mitigate these risks. This program was piloted in a large AI grant program at Stanford. The AI field of research is known to carry risks of dual use and problems with fair representation, and it is often outside the scope of IRB review. Adding to this gap/blind spot is the fact that the Common Rule specifically disavows review of consequences to human society (Bernstein et al., 2021).

A second example supporting social value involves the Promoting Maternal Infant Survival Everywhere (PROMISE) trial, which compared the relative efficacy and safety of interventions to prevent mother-to-child transmission of HIV. This was an area in which other research findings and policies were rapidly evolving. The trial sponsor engaged an ad hoc independent international ethics panel to address controversy regarding the study’s standard of care and relevance as national and international guidelines changed, advising about whether the study should be continued. The trial had a DSMB that regularly reviewed interim data and was required to undergo an annual IRB review. In cases where public communication and transparency are critically important and the study team feels more expertise is warranted, the PROMISE independent ethics panel can serve as an important model (Shah et al., 2021). Both of these examples show efforts toward ongoing self-governance by the scientific community, essential to strengthen RI practices.

Data Ownership/Sharing in Health Research

Despite multiple initiatives to encourage health data sharing and strong open access, the data-sharing revolution thought to be essential in biomedical research is lagging. Analysis by Sorbie et al. (2021) notes that competing logics are being ignored. Formal legal property-type appeals to ownership have far less power among scientists than do ethical and social concerns for privacy and confidentiality, including scientists’ obligations to data sources. While there is no single source of law that governs data ownership/sharing in health research, law rarely requires sharing, and there is often a lack of clarity about the conditions under which data can be shared. Scientists do have concerns about the effort required to create and curate a data set and about the sharing of benefits such as scientific recognition. They also have concerns that someone else may publish first, or without credit to them, on the basis of a data set that they have generated, or that shared data might be questioned, misused, or misinterpreted. Ownership can be characterized in multiple and competing ways. Many scientists have a sense of stewardship which requires securing the dataset, guaranteeing its integrity, and establishing sufficient anonymity. It is increasingly difficult to assess the quality of data and to know where they come from (Sorbie et al., 2021). While there is a legitimate public interest in assuring efficient use of data, there are also legitimate barriers to data sharing. Governance structures created to develop these practices need to recognize tensions between these logics if the hoped-for benefits of data sharing are to be realized (Sorbie et al., 2021).

Conflict of Interest in Research Policy and Practice

Conflict of interest in scientific work is an area of policy and practice that has largely failed in both formal regulation and self-monitoring by the scientific community. Most scientific institutions are themselves conflicted, including editorial board members of selected journals (Janssen et al., 2015), National Academies’ panels (Krimsky & Schwab, 2017), public speakers at meetings of an FDA drug products advisory committee (McCoy et al., 2018), and patient advocacy organizations, many of which are funded by industry and associated with pro-industry bias (McCoy, 2018). No single or consistent set of conflict of interest rules applies to most human research, and all standards lack teeth (Rodwin, 2019). And there is no unconflicted source of authority that deals with the quality of science, which allows effects of conflict of interest to be hidden. Regulations often make an implicit assumption that the science is of acceptable quality. There is some evidence that it isn’t; for example, the Rigor and Transparency Index shows that journals met only half of the rigor and reproducibility criteria described in the Index (Software, 2020).

Indeed, conflicts of interest are built into certain federal regulations, such as those for research misconduct (42 CFR 93). A complainant may or may not be lodging an allegation of research misconduct in good faith but may be acting, based on a conflict of interest, to stop a body of research that threatens his or her personal or commercial interests (as in the cases of lead, tobacco, and other industries). Research misconduct regulations allow institutions in which the alleged fraudulent work was done to investigate their own faculty and staff, raising conflicts of interest among collaborators and incentives for the institution to avoid reputational or financial harms. Proper conflict of interest management would require an independent body to receive and investigate allegations of research misconduct. For a fuller discussion of COI/COM, see Chap. 6.

Steps to Research Integrity

The argument to this point in the book has been that because traditional research ethics is incomplete, siloed, and uncoordinated, and does not address the system-wide character of knowledge development and distribution, it has been insufficient to guide the ethical governance of research. Research integrity makes explicit a necessary quality of research and applies to all institutions in the research production system. Here we first consider why a move to research integrity is essential. Second, science policy will be the underpinning that supports this transition; there are lessons to be learned about how to manage it. Next, new lines of empirical research supporting this goal are described. Finally, we consider a cluster of arguably the most fundamental issues to be solved on the way to research integrity.

Imperative to Move to Research Integrity

Longtime issues have to be reconsidered: the source of accountability for science, the definition of boundaries between acceptable and unacceptable research practice, the management of emerging technologies in support of valid science, and the pull of commercialization. Drahos (2020) traces the roots of the discordance underlying research integrity back to the way the institution of science has been bent toward processes of capital accumulation—the public good of knowledge discovery contradicts “the very essence” of capitalism’s commodity nature. Maintaining integrity requires an institution (in this case, science) to stick to its basic and defining purposes. Seeing science largely as a commodity undermines science’s ability to self-govern (Hallonsten, 2022): weak social control mechanisms allow opportunities for fraud, while methods for detection remain seriously underdeveloped. It also undermines efforts to return to science’s basic values. For example, the logic of commercialization undermines the long-standing and newly emerging notion of open science, with its transparency and accountability. Openness is not required of the large portion of research now funded by the private sector. This leaves unaddressed a number of well-documented red flags in privately conducted or funded research: coverups of financial conflicts of interest through undisclosed manipulation of science in the tobacco, chemical, and other industries, and the effects of design bias and publication bias in this work. As currently implemented, open science inadvertently supports corporate interests in harvesting “open” work without themselves committing resources to share the science they produce (Pinto, 2020).

There are multiple urgent reasons for strengthening scientific accountability in support of research integrity. Biagioli et al. (2019) note the power of technological change, including metrics of evaluation, to open up new opportunities for misconduct, misrepresentation, and gaming of the system, but also to expose disagreement within the scientific community about which behaviors are appropriate. Meanwhile, significant damage to the goals of science and to patients can result from this ambiguity: selectively reporting research results can be seen as falsification and contributes to failure to replicate; peer review as currently practiced offers opportunities for gaming and fraud, which, when it occurs, is not punished; and total dependence on whistleblowers, largely from within the scientific community, to cleanse the system puts them at risk of being sued for libel or defamation or being dismissed from employment. There is little agreement on solutions, and misconduct continues to evolve. If the public understood the level of unclarity, it could potentially lead to erosion of public trust.


Some note that to whom research scientists are accountable remains an open question. This concern hinges on the analysis that science lacks the status and structure of a profession that would support a test of whether its self-regulation is competent. The argument is supported by the absence of any notion of scientific negligence recognized in regulation or in policy. Negligence is a category of misconduct in which a professional does not live up to the expected quality of practice and should have known better—this can include scientific methodology, interpreting findings, and competence in carrying a project from inception to publication and beyond. While Desmond believes a professional model is best suited for science, its collective community does not carry formal responsibility for standards of competence (Desmond, 2020; Desmond & Dierickx, 2021).

It is worth noting that the widespread “rotten apple” metaphor discussed in Chap. 2 supports the notion that every scientist except the rotten apple is competent. In fact, it has become clear that there are structural causes of poor practice and outright fraud, in part supported by incentives perverse to science’s basic goals, and that the current state of its self-regulatory mechanisms may be insufficient. Science cannot seem to escape incentives put in place by others for their (not science’s or its users’) benefit. Indeed, reports of research scandals in biomedicine fit the “rotten apple” metaphor: they are overwhelmingly about research misconduct (fabrication/falsification), and because they are usually portrayed as due to bad individuals, they have not so far precipitated a moral crisis or a rethinking of norms and social controls. To some degree, scandals are a necessary and normal component of social order and represent those transgressions that reach attention (Fine, 2019), usually of the scientific community and only infrequently of the general public. Reframing the reporting of scandals to include the structural and institutional factors contributing to them would draw attention to necessary changes in specific norms.

In summary, a switch to research integrity as an integrating concept puts the focus on creating systems that assure the quality, relevance, and reliability of all research. Achieving it requires regular practices of record keeping, assurance of quality experimental designs, use of techniques to detect and reduce bias, and rewards for rigorous work rather than narrow efforts to punish a few bad actors. All institutions in the research production and dissemination system are responsible for rigorous standards of research integrity.

Constructing the Policy Base to Support Research Integrity

The policy sciences provide much expertise that has not been visible in the literature on attaining and managing research integrity. Here, options are considered through a framing of policy learning, policy design with built-in monitoring and increasing attention to noncompliance, and a merging of long-established social science knowledge into organizational management of research regulation and integrity.

Policy learning is the updating of beliefs based on lived and witnessed experience, analysis (empirical evidence), or social interaction, such as bargaining. Continuous learning can be boosted by situational crises, perhaps leading to a period of intense learning when old ways of thinking begin to crack. Or learning can be stymied, leading to repeated failures. Learners need to possess absorptive, administrative, and analytic capacities (Dunlop & Radaelli, 2022). Policy failures are often assumed to be unintentional, ignoring the tendency to sweep an issue under the rug, do nothing, or address serious issues in a purely symbolic way (Leong & Howlett, 2022). Or the agency assigned to manage the issue fails to do so or is co-opted by the group being regulated. Alternatively, policies that fail may have had poor design and/or management. Implicitly or explicitly relying on goodwill and engaged, compliant behavior is naïve. Constant monitoring, regular policy reviews, and increasing levels of coercion for noncompliance are recommended (Howlett, 2022). Such elements have not been built into important research regulations such as those for research misconduct. Policy effectiveness has not been rigorously assessed and improved, suggesting a passive assumption that the scientific community will self-regulate, an assumption likely supported by that community, which deflects what it considers to be interference.

Van Rooij and Fine (2021) have described the following steps for applying behavioral science to issues of regulation and law, thus providing a playbook for how research misconduct might be addressed. Step 1: It is vital to know what variation there is in the behavior so it can be analyzed separately. Step 2: How does the behavior work (how is it done)? Step 3: What do people need in order to refrain from this behavior—technology, education, help with self-control? Step 4: Do people think the rules and their enforcement are legitimate? If not, they will feel less obligation to obey; procedural fairness is essential. Step 5: What are the existing morals and norms in the working situation? Step 6: How do incentives factor in? (van Rooij & Fine, 2021). Punishment only deters if it is certain and inevitable. Some offenders can be rehabilitated, sometimes because they did not know the law, or because enablers of bad behavior are replaced (van Rooij & Fine, 2021). Toxic institutional cultures, which involve long-term and systemic rule-breaking behavior, are rarely addressed. People may not be helped to learn from errors, which are a normal part of work. Moral climates in such organizations involve neglect, inaction, and/or justification; rule breaking may be normally condoned. Neutralization techniques, introduced earlier in this book, include denying responsibility, denying injury, and indicating that the victim deserves the harm that occurred (van Rooij & Fine, 2018).

In the policy sciences, some problems are called “wicked,” in part because divergent framings generate conflict about their nature and scope and thus about how to address them. Wicked problems are likely to be ongoing and recurrent with no common root cause, and no single policy level can “fix” the problem. It is common for such problems to be analyzed and addressed in small pieces rather than in their totality. Attempts to reframe the problem are ongoing, sometimes reaching thresholds with more rapid change (Head, 2022).


It is also important to draw out policy successes, which are often understudied. These successes are evaluated by their performance, legitimacy, process, and endurance (Compton & ’t Hart, 2019).

Potential Lines of Empirical Research/Normative Analysis

In a move toward research integrity, there are many central questions to be investigated, informed by empirical research and normative analysis. What can be done to strengthen professional-level self-regulation? When will science replace current norms that are detrimental in practice, such as the apparently widespread use of questionable research practices? What is the tipping point at which disaffection with norms based on perverse incentives is overturned in favor of norms better aligned with science’s goal of producing reliable knowledge and supportive of participant, patient, and social welfare? Interventions facilitating a common understanding of the benefits of abandoning detrimental norms will be necessary. Change may be gradual as individuals and groups weigh the costs of changing behaviors thought to be supportive of their current status in science. If a critical number of individuals abandon the detrimental norms, rapid progress toward an altered state can be attained, even though some will persist. Alternatively, societies and groups can fail to abandon norms that have become inefficient (in producing reliable science) if policy intervention is lacking (Andreoni et al., 2021).

A line of empirical research central to ethical science should assess the level and quality of avoidable harm embedded in the current system of scientific practice and regulation, an assessment essential for IRB core functions. As an example, Junqueira et al. (2021) document only limited improvement in the reporting of harms in RCTs since the CONSORT Harms checklist was adopted; most trials report less than half of the CONSORT Harms items. Why is this intervention so marginally effective in improving practices and norms? Previous chapters have noted evidence of flaws in how science is practiced, supported, and regulated, although none of this work is clear about the frequency of these issues or their consequences. Nevertheless, operating in such a system can harm individuals, undermine trust in scientific institutions, and rob society of the full benefits of science practiced with integrity. Comparing the actual state with an ideal standard can make a viable alternative clear and strengthen the desire to improve. Sometimes such a reckoning with unfavorable practice is belated and insufficient, especially given the costs of allowing poor standards to continue. As an example, for the past decade biomedical researchers have worried that too many animal experiments could not be repeated in other labs or that the potential treatment did not work when tested in humans. Yet NIH, in conjunction with a panel from the scientific community finally considering the issue, suggested education, particularly in statistics, since low power has been a problem (Kaiser, 2021), instead of a more forceful response that would have expected any funded project to assure adequate statistical expertise.
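Returning to the tipping-point question above, the following minimal sketch illustrates the kind of critical-mass dynamics at issue, loosely in the spirit of the norm-change experiments of Andreoni et al. (2021). The population size, threshold distribution, and sizes of the early-moving group are invented for illustration and are not estimates of anything in the scientific workforce.

# Illustrative only: a threshold ("critical mass") model of norm abandonment.
# Each researcher abandons a detrimental norm once the observed share of
# peers who have already abandoned it exceeds a personal threshold.
import random

def simulate(n=1000, seed_fraction=0.10, threshold_low=0.05,
             threshold_high=0.40, rounds=50):
    rng = random.Random(0)  # fixed seed so runs are repeatable
    thresholds = [rng.uniform(threshold_low, threshold_high) for _ in range(n)]
    abandoned = [i < int(seed_fraction * n) for i in range(n)]  # early movers
    for _ in range(rounds):
        share = sum(abandoned) / n
        abandoned = [a or share >= t for a, t in zip(abandoned, thresholds)]
    return sum(abandoned) / n

for seed in (0.02, 0.10, 0.25):
    print(f"early movers {seed:.0%} -> final share abandoning the norm: "
          f"{simulate(seed_fraction=seed):.0%}")

Under these invented assumptions, a 2% group of early movers stalls while a 10% group cascades through the whole simulated population, which is one way to picture why isolated appeals to individuals may accomplish little until a critical mass is reached.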


Evaluation of scientific work, from perspectives much more diverse than those currently prominent, is needed to capture the full range of science’s contributions, such as truth, democracy, well-being, and justice, including from the perspectives of broader publics. Some advise that the scientific community should stop blaming external incentives and take responsibility—examining internal processes as well as external incentives (Schneider et al., 2021). Such attention should help to restore public trust in science (Knaapen, 2021).

While empirical research on these issues can be helpful, it will not automatically be applied to improving research integrity. Problematic methodological issues in science were documented decades ago, but this has not spurred correction. Peer review and other mechanisms have “limited capacity to filter low quality and problematic research,” even as they preserve well-established social stratifications in science. Science self-correction cannot simply be assumed to happen inevitably, even with a better empirical base describing problems and solutions. Such correction needs explicit support from oversight and review.

On that issue, a last line of suggested empirical investigation and normative analysis might build on system justification theory. People are often motivated to see their institutions and norms as legitimate and can rationalize away faults of a system on which they heavily depend, even if they are harmed or disadvantaged by this system. Many see the situation as inescapable. System justification theory predicts that when consequences are self-relevant, the risks of a problematic system can become dire. Change is most likely to be spurred by those with moderate levels of confidence in the current system, as they can see its problematic elements but also believe they can make a difference (Friesen et al., 2019). Tolerating unjust elements of a system may take a psychological toll over time (Osborne et al., 2019).

Arguably The Most Problematic Issues

Evidence from all chapters in this book suggests the following issues to be the most problematic.

188

10  Research Integrity as Moral Reform: Constitutional Recalibration

and sufficiently robust lies with the sponsor, with few checks by regulatory and ethics gatekeepers. Some suggest that sponsors construct the investigator’s brochure with the end goal of obtaining product approval; studies that do not support that goal may be left out. Clearly, all stakeholders in the review should have access to the complete evidence, both that in the public domain and studies conducted in house by the sponsor. Preregistration of preclinical studies offers one reform (Haslberger et al., 2022). –– Attainment of quality of the research record, including documenting its cumulativeness. –– Reversing a near total absence of an evaluation system that represents science’s core goals (see Chap. 8 for a full discussion). This problematic issue is perplexing because well-done science has the highest probability to be translatable to products, thus serving an economic goal. Until consumers and funders of research demand such quality, there will be little incentive to produce it.

The Bottom Line Moralities can change by moral, political, or economic revolutions. They are usually marked by introduction of new concepts and terminologies, typically accompanied by obsolescence or reinterpretation of traditional moral concepts and terminologies. Some moral changes are successful, others fail (Baker, 2019). The refreshed concept is research integrity, which requires a system approach, in which each element of research production and dissemination has clear and enforced responsibilities and accountability supporting the goals of science; this includes universities, corporate entities, funders, journals, regulators. Such is not currently the case. Despite stunning scientific successes, much evidence cited in earlier chapters suggests that management of the quality and impact of scientific practice should be more thoroughly examined. Blind spots in science policy, ubiquitous undeclared conflicts of interest, lack of an adequate evidence base from which to assure or improve quality, and indications that some of its major institutions have been usurped by market forces, have been documented. The revelations of meta-science (based largely on empirical research) and now coming to light should have been known all along and these insights, where relevant, used to correct the system. The emerging sciences/technologies are struggling to build quality and ethical practice, in part because they contain new elements but also because a strong template for research integrity, in general, is lacking. Responses to both examples described in Chap. 9 (research in gene editing, AI) show potential normative corrections. For AI these include embedding ethicists and public deliberation early in the research/ development process, regular audits and impact assessments by independent bodies, building toward the understanding that not only can algorithms correct bias but the prior question should be asked: should this area be automated in the first place?


Challenges to research integrity emerge regularly and unexpectedly, often in the form of a blind spot, leaving the impression that there may be many more. A recent example is signs of fabrication in scores of Alzheimer’s articles, which may have misdirected research in this field for many years and cost millions of public research dollars. Investigation of this concern showed that journals had not analyzed scientific images with widely available tools. The investigation is ongoing (Piller, 2022) but raises the question of whether funding agencies should require institutions receiving research funds to verify the findings, and/or require independent verification of research fields in which they are heavily investing. Traditional research ethics not only is incomplete, it is also complicit in creating this bottom line, because it has allowed displacement (through political means) of responsibilities onto individuals and has protected those whose accountability has been excused. Practices and institutions of science have been fundamentally shaped by their engagement with the external world; working out how to practice science with integrity when the incentives of other societal institutions do not support norms of research integrity is clearly incomplete. We do not know the cost of not acting to improve research integrity in scientific practice.

It should be noted that no institution has claimed leadership for the move to research integrity. While all research institutions (the scientific community, universities, commercial entities doing research, journals, funders) will need to rid themselves of perverse incentives, some organizations will need to make changes that will begin to tip the balance. Alternatively, abrupt and nonlinear change can occur, triggered by technological innovation or natural disasters (Smith et al., 2019). In summary, research integrity is an integrated concept, with a goal of valid scientific knowledge. Unlike traditional research ethics, it requires assurance that science is competently done. There is evidence that denial of the potential of this transition has impeded it.
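On the image-screening point above, one simple family of checks (flagging reused images across figure panels) can be sketched with widely available open-source libraries. The directory name and file layout here are hypothetical, and dedicated screening tools do considerably more (rotation and crop detection, within-image duplication, manipulation forensics).

# Illustrative sketch only: flag near-duplicate images among extracted figure
# panels using a perceptual hash. A small Hamming distance suggests reuse of
# the same image, possibly lightly altered; it is a prompt for human review,
# not proof of misconduct.
from itertools import combinations
from pathlib import Path

import imagehash          # pip install imagehash
from PIL import Image     # pip install pillow

def flag_near_duplicates(image_dir, max_distance=5):
    paths = sorted(Path(image_dir).glob("*.png"))
    hashes = {p.name: imagehash.phash(Image.open(p)) for p in paths}
    return [(a, b, hashes[a] - hashes[b])
            for a, b in combinations(hashes, 2)
            if hashes[a] - hashes[b] <= max_distance]

for a, b, d in flag_near_duplicates("figure_panels"):
    print(f"review: {a} vs {b} (hash distance {d})")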

Limitations

As an emerging, umbrella field, research integrity is still under construction. As noted throughout the book, there is disagreement about what level of research practice is sufficient to qualify as integrity. As noted in Chap. 1, this book draws from the available literature, which may well over-report problems rather than solutions or successes. Nevertheless, the central question remains—is the current system operating at a level consistent with its responsibilities?

References

Adams, T. L. (2017). Self-regulating professions: Past, present, future. Journal of Professions and Organization, 4, 70–87. https://doi.org/10.1093/jpo/jow004


Adorno, R. (2021). The right to science and the evolution of scientific integrity. In H. Porsdam (Ed.), The right to science: Then and now. Cambridge University Press. Andreoni, J., Nikiforakis, N., & Siegenthaler, S. (2021). Predicting social tipping and norm change in controlled experiments. PNAS, 118(16), e2014893118. https://doi.org/10.1073/ pnas.2014893118 Baker, R. (2019). The structure of moral revolutions. MIT Press. Baker, R. (2022). Principles and duties: A critique of common morality theory. Cambridge Quarterly of Healthcare Ethics, 31(2), 199–211. https://doi.org/10.1017/ S0963180121000608 Bernstein, M.  S., Levi, M., Magnus, D., Rajala, B.  A., Satz, D., & Waeiss, C. (2021). Ethics and society review: Ethics reflection as a precondition to research funding. PNAS, 118(52), e2117261118. https://doi.org/10.1073/pnas.2117261118 Biagioli, M., Kenney, M., Martin, B. R., & Walsh, J. P. (2019). Academic misconduct, misrepresentation and gaming: A reassessment. Research Policy, 48, 401–413. https://doi.org/10.1016/j. respol.2018.10.025 Buttrick, P. (2019). Research integrity. Journal of Cardiac Failure, 25(5), 401–402. https://doi. org/10.1016/j.carddfail.2019.04.005 Compton, M. E., & ‘t Hart, P. (2019). How to ‘see’ great policy successes: A field guide to spotting policy successes in the wild, in great policy successes. Oxford University Press. Desmond, H. (2020). Professionalism in science: Competence, autonomy, and service. Science & Engineering Ethics, 26(3), 1287–1313. https://doi.org/10.1007/s11948-­019-­00143-­x Desmond, H., & Dierickx, K. (2021). Trust and professionalism in science: Medical codes as a model for scientific negligence? BMC Medical Ethics, 22(1), 45. https://doi.org/10.1186/ s12910-­021-­00610-­w Douglas, H. (2021). The role of scientific expertise in democracy. In M. Hannon & J. deRidder (Eds.), The Routledge handbook of political epistemology. London. Drahos, P. (2020). Responsive science. Annual Review of Law and Social Science, 16, 327–342. https://doi.org/10.1146/annurev-­lawsosci-­040220-­065454 Dunlop, C. A., & Radaelli, C. (2022). Policy learning in comparative policy analysis. Journal of Comparative Policy Analysis, 24(1), 51–72. https://doi.org/10.1080/13876988.2020.1762077 Epstein, J.  A. (2019). A time to press reset and regenerate cardiac stem cell biology. JAMA Cardiology, 4(2), 95–96. https://doi.org/10.1001/jamacardio.2018.4435 Fanelli, D. (2022). Is science in crisis? In L. J. Jussim, J. A. Krosnick, & S. T. Stevens (Eds.), Research integrity. Oxford University Press. Fine, G. A. (2019). Moral cultures, reputation work, and the politics of scandal. Annual Review of Sociology, 45, 247–264. https://doi.org/10.1146/annurev-­soc-­073018-­022649 Friesen, J.  P., Laurin, K., Sheperd, S., Gaucher, D., & Kay, A.  C. (2019). System justification: Experimental evidence, its contextual nature, and implications for social change. British Journal of Social Psychology, 58(2), 315–339. Funk, C. (2021). What the public really thinks about scientists: Surveys show a desire or greater transparency and accountability in research. American Scientist, 109(4), 196–197. Hallonsten, O. (2022). On the essential role of organized skepticism in science’s “internal and lawful autonomy” (Eigengesetzlichkeit). Journal of Classical Sociology, 22(3), 282–303. https:// doi.org/10.1177/1468795X211000247 Haslberger, M., Schorr, S. G., Strech, D., & Haven, T. (2022). 
Preclinical efficacy in investigator’s brochures: Stakeholders’ views on measures to improve completeness and robustness. British Journal of Clinical Pharmacology, 89, 340. https://doi.org/10.1111/bcp.15503 Head, B. W. (2022). Wicked problems in public policy. Palgrave Macmillan. van der Heijden, J. (2022). The value of systems thinking for and in regulatory governance: An evidence synthesis. Sage Open, 2022, 1–12. https://doi.org/10.1177/21582440221106172 Howlett, M. (2022). Avoiding a panglossian policy science: The need to deal with the darkside of policy-maker and policy-taker behavior. Public Integrity, 24(3), 306–318. https://doi.org/1 0.1080/10999922.2021.1935560

References

191

Janssen, S. J., Bredenoord, A. L., Dhert, W., de Kleuver, M., Oner, F. C., & Verlaan, J. (2015). Potential conflicts of interest of editorial board members from five leading spine journals. PLoS One, 10(6), e0127362. https://doi.org/10.1371/journal.pone.0127362 Junqueira, D. R., Phillips, R., Zorzela, L., Golder, S., Loke, Y., Moher, D., Ioannidis, J. P. A., & Vohra, S. (2021). Time to improve the reporting of harms in randomized controlled trials. Journal of Clinical Epidemiology, 136, 216–220. https://doi.org/10.1016/j.jclinepi.2021.04.020 Kaiser, J. (2021). NIH should boost rigor of animal studies with stronger statistics, pilot studies, experts say. Science. Kimmelman, J. (2020). What is human research for? Perspectives on the omission of scientific integrity from the Belmont report. Perspectives in Biology and Medicine, 63(2), 251–261. https://doi.org/10.1353/pbm.2020.0017 Klein, A. A. (2017). What Anaesthesia is doing to combat scientific misconduct and investigate fabrication and falsification. Anaesthesia, 72(1), 3–4. https://doi.org/10.1111/anae.13731 Knaapen, L. (2021). Science needs more external evaluation, not less. Social Science Information, 60(3), 338–344. https://doi.org/10.1177/05390184211-­19161 Krimsky, S., & Schwab, T. (2017). Conflicts of interest among committee members of the National Academies’ genetically engineered crop study. PLoS One, 12(2), e0172317. https://doi. org/10.1371/journal.pone.0172317 Kubin, E., Puryear, C., Schein, C., & Gray, K. (2021). Personal experiences bridge moral and political divides better than facts. PNAS, 118(6), e2008389118. https://doi.org/10.1073/ pnas.2008389118 Leong, C., & Howlett, M. (2022). Policy learning, policy failure, and the mitigation of policy risks: Re-thinking the lessons of policy success and failure. Administration & Society, 54(7), 1379–1401. https://doi.org/10.1177/00953997211065344 Li, W., Gurrin, L.  C., & Mol, B.  W. (2022). Violation of research integrity principles occurs more often than we think. Reproductive Medicine, 44(2), 207–209. https://doi.org/10.1016/j. rbmo.2021.11.022 McCoy, M. S., Pagan, O., Donohoe, G., Kanter, G. P., & Litman, R. S. (2018). Conflicts of interest of public speakers at meetings of the anesthetic and analgesic drug products advisory committee. JAMA Internal Medicine, 178(7), 996–997. https://doi.org/10.1001/jamainternmed.2018.1325 McCoy, M. D. (2018). Industry support of patient advocacy organizations: The case for an extension of the sunshine act provisions of the affordable care act. American Journal of Public Health, 108(8), 1026–1029. https://doi.org/10.2105/AJPH.2018.304467 Oduro, S., Moss, E., & Metcalf. (2022). Obligations to assess: Recent trends in AI accountability regulations. Patterns, 3(11), 100608. https://doi.org/10.1016/j.patter.2-­22.100608 Osborne, D., Sengupta, N. K., & Sibley, C. G. (2019). System justification theory at 25: Evaluating a paradigm shift in psychology and looking towards the future. British Journal of Social Psychology, 58(2), 340–361. https://doi.org/10.1111/bjso.12302 Piller, C. (2022). Blots on a field? Science, 377(6604), 358–363. https://doi.org/10.1126/science.add9993 Pinto, M. F. (2020). Open science for private interests? How the logic of open science contributes to the commercialization of research. Frontiers in Research Metrics and Analytics, 5, 588331, 2020. https://doi.org/10.3389/frma.2020.588331 Redman, B. K., & Caplan, A. L. (2021). 
Should the regulation of research misconduct be integrated with the ethics framework promulgated in the Belmont report? Ethics & Human Research, 43(1), 37–41. https://doi.org/10.1002/eahr.500078 Rid, A. (2020). Judging the social value of health-related research: Current debate and open questions. Perspectives in Biology and Medicine, 63(2), 293–312. https://doi.org/10.1353/ pbm.2020.0020 Rodwin, M. A. (2019). Conflicts of interest in human subject research: The insufficiency of US and international standards. American Journal of Law and Medicine, 45(4), 303–330. https:// doi.org/10.1177/0098858819892743

192

10  Research Integrity as Moral Reform: Constitutional Recalibration

Schneider, J. W., Horbach, S. P. J. M., & Aagaard, K. (2021). Stop blaming external factors: A historical-sociological argument. Social Science Information, 60(3), 329–337. https://doi. org/10.1177/05390184211018123 Schot, J., & Steinmueller, W.  E. (2018). Three frames for innovation policy: R&D, systems of innovation and transformative change. Research Policy, 47(9), 1554–1567. https://doi. org/10.1016/j.respol.2018.08.011 Shah, S.  K., London, A.  J., Mofenson, L., Lavery, J.  V., John-Stewart, G., Flynn, P., Theron, G., Bangdiwala, S. I., Moodley, D., Chinula, L., Firlie, L., Sekoto, T., Kakhu, T. J., Violari, A., Dadabhai, S., McCarthy, K., & Fowler, M.  G. (2021). Ethically designing research to inform multidimensional, rapidly evolving policy decisions: Lessons learned from the PROMISE HIV Perinatal Prevention Trial. Clinical Trials, 18(6), 681–689. https://doi. org/10.1177/17407745211045734 Smith, L.  G. E., Livingston A.  G., & Thomas, E.  F. (2019). Advancing the social psychology of rapid societal change. British Journal of Social Psychology, 58, 33–44. https://hdl.handle. net/10871/35987 Smith, R.. Time to assume that health research is fraudulent until proven otherwise. 7/5/2021. https://blogs.bmj.com/bmj Software tracks rigour of scientific papers over time. Nature, 577, 602, 30 January, 2020. Sorbie, A., Gueddana, W., Laurie, G., & Townend, D. (2021). Examining the power of the social imaginary through competing narratives of data ownership in health research. Journal of Law and the Biosciences, 8(2), Isaa068. https://doi.org/10.1093/jlb/lsaa068 Van Rooij, B., & Fine, A. (2021). The behavioral code: The hidden ways the law makes us better…or worse. Beacon Press. Van Rooij, B., & Fine, A. (2018). Toxic corporate culture: Assessing organizational processes of deviancy. Administrative Sciences, 8, 23. https://doi.org/10.3390/admsci8030023 Verschraegen, G. (2018). Regulating scientific research: A constitutional moment? Journal of Law and Society, 45(S1), S163–S184. Wendler, D., & Rid, A. (2017). In defense of a social value requirement for clinical research. Bioethics, 31(2), 77–86. https://doi.org/10.1111/bioe.12325

Appendix

These cases provide further real-world examples both of problems in research integrity and of how they are being addressed, often drawing together content from multiple chapters of this book. The cases largely concern structural elements meant to support research integrity and show how those structures have evolved over time. Each case is ongoing.

Case: Open Access—Plan S and OSTP Policy

The fight to end paywalls in science is long standing but gained momentum in 2019 with Plan S, an initiative launched largely in Europe and now supported by major funders who require that the research they fund be made open access immediately upon publication. Such a move has consequences for current science incentive and assessment systems, including the elimination of journal-based metrics (Van Noorden, 2022). This case, narrated in the book Plan S for Shock (Smits & Pells, 2022), integrates many of the topics argued in the chapters of this book, Reconstructing Research Integrity, and therefore illustrates how change in some parts of the science system inevitably requires reconsideration of others. Here we consider issues from this movement with strong ethical implications (Smits & Pells, 2022).

–– Patients with a serious disease could not access research their taxes had paid for, because about 75% of relevant research was behind paywalls. This denied them an understanding of the cutting-edge research and treatments being studied.
–– While global North countries are working with LMICs to increase those countries' research capacity, many LMICs can afford neither journal subscription rates nor author fees, locking their scientists out of the flow of research. There is currently no regulation of how high such author fees can be.
–– Scientists who cannot afford access to the full range of journals may unwittingly retread ground already covered by other scientists. This slows scientific progress; open access may accelerate it.
–– Publishers were established to distribute the scientific literature but morphed into setting the rules of the game in pricing, copyright, and access, some with 30–40% profit margins. The availability of the Internet should have made this business model extinct but has not. The free release of COVID research during the pandemic exposed the absurdity of current publisher business models and raises the question of how many lives might have been saved during prior epidemics had the literature been fully available.
–– The movement to open access, as championed by Plan S, has exposed flaws in self-regulation by the scientific community, which let itself become trapped by meaningless metrics such as the journal impact factor, by which many academics are still judged. Hiring and promotion guidelines still focus on where one publishes (high-impact journals) rather than on the quality of what is published. In addition, flaws in the peer-review system (noted in previous chapters of this book) have not been studied and repaired. The open access movement provides an opportunity for the community to check each other's work and better regulate reproducibility and quality.
–– Research funders have played the major role in forcing change, by adopting policies requiring immediate and full access to the research they fund, assuring scientists' choice of journals in which to publish, and monitoring compliance with their policies.
–– With some exceptions (the University of California), universities have not played a leadership role in forcing publishers into transparent and reasonable subscription agreements or in the movement to OA. Indeed, some university leaders serve on editorial boards of journals captured by publishers with unfair policies, itself a conflict of interest against academic values. The system does not have to work this way, violating scientific norms of openness, transparency, and collegiality and excluding significant parts of the scientific community, as well as the public, from access to research. The movement is advancing slowly; as of January 2021, 160 Elsevier journals had not registered as Plan S aligned (p. 191) (Smits & Pells, 2022).

Recent research has further characterized the movement to OA, revealing significant differences by academic discipline. In some disciplines, OA is a natural continuation of existing publishing cultures; in others, its implementation faces major barriers. The foundation for OA was laid in physics. The medical and life sciences generally have convenient open repositories and sufficient funds for publication fees in journals that require them. Chemistry, engineering, the humanities, and law feature low OA levels (Severin et al., 2022).

On August 25, 2022, the US Office of Science and Technology Policy (OSTP) announced a policy change to take effect in 2025: results of federally funded research are to be free to read as soon as they are published. Current policy permits a delay of up to a year before papers must be posted outside paywalls. It is unclear whether US funding agencies will help researchers cover up-front per-paper fees for open-access publishing. Coalition S requires publishers to give up copyright; the new US policy does not (Brainard & Kaiser, 2022).

Case: Cochrane

This case involves self-regulation by the scientific community, the accuracy of research methods and metrics, and conflict of interest, all introduced in earlier chapters and important to research integrity. Established in 1993 as the Cochrane Collaboration, Cochrane now includes 127 groups around the world that produce systematic reviews of research on the effects of health care, to inform decisions. From an organizational point of view, Cochrane can be seen as: (1) a movement (part of the evidence-based medicine movement) aiming to change existing institutions toward higher standards within a collaborative ideology and structure, (2) a network, and (3) an institution. Cochrane has mixed these organizational logics, relatively well aligned for some time (Gleave, 2019), even though tensions have become apparent.

In 2018, the Cochrane board expelled one of its founders, Peter Gotzsche, who had said it was totally unacceptable that up to half of the authors on a Cochrane paper could have a direct conflict of interest with the companies whose products they were reviewing (Burki, 2018). Particularly at issue was the Cochrane review of the human papillomavirus (HPV) vaccine, which Gotzsche and others argued had not incorporated all available trials; some trials that were included were funded by HPV vaccine manufacturers, and members of the review team had significant conflicts of interest (Jorgensen et al., 2018).

Other concerns reflect assumptions built into the original organizational design. Review quality is said to be variable, with little systematic surveillance of it or feedback to groups about the quality of their work. As in many other areas of science self-governance, an original assumption may have been that Cochrane's network of review groups produced the highest-quality systematic reviews, leaving no need for such surveillance (Heywood et al., 2018). Although little public information has been available about review quality, it has now become an area of research by outside scholars, who are identifying gaps in Cochrane methodology and suggesting improvements. Jia and colleagues note that many meta-analyses of rare events in the Cochrane Database of Systematic Reviews were underpowered, so their results should be treated with caution. Just as individual studies may suffer from insufficient statistical power, so may systematic reviews, because of limited numbers of studies or variability among them. Suggestions include publishing the post hoc power of the results and using that information to decide when updates must be done to improve the credibility of review results (Jia et al., 2021); a worked illustration follows this case. Tomlinson and colleagues (2022) found heterogeneity in how mortality is reported in Cochrane systematic reviews, which is problematic for evidence-based decision-making. Greater standardization of the term "mortality" is necessary for effective use of health research (Tomlinson et al., 2022). Others find assessments of attrition bias in Cochrane systematic reviews highly inconsistent, hindering trial comparability; clear instructions to reviewers on appraising risk of attrition bias would improve the reliability of Cochrane's risk of bias tool (Babic et al., 2019). Risk of bias assessments for blinding of participants and personnel were frequently not in line with Cochrane Handbook recommendations (Barcot et al., 2019), as were risk of bias assessments for allocation concealment (Propadalo et al., 2019). These are a selection of concerns published by members of the scientific community in the spirit of self-regulation; they have yet to receive a response from Cochrane, and they suggest the need for further monitoring of quality.

In summary, Cochrane must address methodological quality and the related area of conflict of interest. In addition, although its work has been regarded as a knowledge commons, open access to that work is still not available (Heywood et al., 2018). Still unresolved is how to detect research misconduct—fabricated or falsified data or findings—since the assumption that trials happened and were honestly reported still holds. Detection and retraction of such articles are unreliable at best; if detected, they should be excluded from systematic reviews, since such articles concentrated in a subject area can distort or invalidate a review. A more basic argument is made by Roberts et al. (2015): the medical literature contains biased trials, and about half of trials are unpublished. In the UK, funding provided to Cochrane review groups is proportional to the number of trials included in reviews, creating a financial incentive to find and include every trial regardless of its quality (Roberts et al., 2015). Hence the argument that the knowledge system underpinning health care is not fit for purpose: it rests on biased, unpublished, fabricated, and falsified trials, with limited means to detect any of these defects.
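To make the power concern concrete, the sketch below (a minimal illustration, not Jia and colleagues' actual method) computes the approximate post hoc power of a fixed-effect meta-analysis of a rare binary outcome, using the usual normal approximation on the log risk ratio. The trial counts and the assumed true effect are invented for illustration only.

```python
import math
from statistics import NormalDist

# Hypothetical 2x2 counts per trial: (events_treatment, n_treatment, events_control, n_control).
# Rare events and small trials -- the situation Jia et al. flag as underpowered.
trials = [(2, 150, 5, 148), (1, 90, 3, 92), (4, 310, 6, 305)]

norm = NormalDist()
alpha = 0.05
true_log_rr = math.log(0.6)   # assumed true effect the review hopes to detect

# Inverse-variance (fixed-effect) pooling on the log risk ratio scale,
# with a 0.5 continuity correction to guard against zero cells.
weights = []
for e1, n1, e2, n2 in trials:
    e1, e2 = e1 + 0.5, e2 + 0.5
    n1, n2 = n1 + 1, n2 + 1
    var = 1 / e1 - 1 / n1 + 1 / e2 - 1 / n2   # variance of log RR for one trial
    weights.append(1 / var)

pooled_se = math.sqrt(1 / sum(weights))
z_crit = norm.inv_cdf(1 - alpha / 2)

# Approximate power: probability the pooled estimate reaches significance
# if the true log RR equals true_log_rr (the opposite tail is ignored, as is conventional).
power = norm.cdf(abs(true_log_rr) / pooled_se - z_crit)

print(f"pooled SE = {pooled_se:.3f}, approximate post hoc power = {power:.2f}")
```

For event counts in the single digits, as here, the approximate power to detect even a 40% relative risk reduction falls far below the conventional 80% threshold, which is exactly the caution Jia et al. (2021) urge for readers of such reviews.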

Case: Commercial Determinants of Health and Research Integrity

This case combines elements of blind spots overcome after a long period of time and pervasive research conflict of interest. Research integrity includes asking the right question, which in this case emerged only after much empirical research and a reframing of the issue. For decades, the public health community has understood how health-harming corporations (tobacco, processed food, alcohol, firearms, etc.) influence population health. That so much effort has been placed on changing the behaviors of individuals affected by these products has been an ethical mistake of epic proportions. The field has now been reframed as the commercial determinants of health (CDoH). The reframing aids in scrutinizing the role of power and politics in favoring corporate interests over public health interests, helps to identify a broader range of countervailing actions that could curb this corporate power (Wood et al., 2021), and counters the intimidation of researchers by corporations trying to hide or distort scientific evidence and to control the research agenda.

While researchers played a major role in uncovering this problem (see prior chapters of this book), the research community now has a responsibility to elevate CDoH to an appropriate level of research integrity. Studies have shown that changing individual behaviors is less effective than changing the government policies and business environments that shape individual choices. But much of the research has occurred in silos, focused on individual product areas and their associated health impacts; synthesis across these silos should lead to comprehensive strategies that address commercial actors on multiple levels and through multiple routes (Lee & Freudenberg, 2022). Second, scientifically defined metrics are necessary to quantify and monitor CDoH and their impact on health (Freudenberg et al., 2021). Current harmful industry practices can be denormalized; the current system is not inevitable (Gomez, 2022).

Case: Scientific Integrity: DHHS Agencies Reporting and Addressing Political Interference (GAO, 2022)

"Since 2007, Congress and multiple administrations have taken actions to help ensure that federal science agencies have scientific integrity policies and procedures in place that, among other things, protect against the suppression or alteration of scientific findings for political purposes." GAO defines scientific integrity as the use of scientific evidence and data to make policy decisions that are based on established scientific methods and processes, are not inappropriately influenced by political considerations, and are shared with the public when appropriate. CDC, FDA, NIH, and ASPR were studied. "The four agencies reviewed do not have procedures that define political interference in scientific decision-making or describe how it should be reported and addressed….The absence of specific procedures may explain why the four selected agencies did not identify any formally reported internal allegations of potential political interference in scientific decision-making from 2010–2021….Employees observed incidents they perceived to be political interference but did not report them for various reasons…including fearing retaliation, being unsure how to report issues…" All four selected agencies train staff on some scientific integrity-related topics, but only NIH includes information on political interference in scientific decision-making as part of its scientific integrity training.

"Political interference" refers to political influences that seek to undermine impartiality, nonpartisanship, and professional judgment. Such protections are important for preserving the integrity of the scientific information used to guide policy decisions, for assuring that those decisions are evidence based, and for assuring the free flow of scientific information, including to the public. They are also vital to "supporting sustained advances in the nation's public health security, especially during a global health emergency." "Since the onset of the COVID-19 pandemic, there have been various allegations of political interference affecting scientific decisions at several HHS offices and agencies." GAO now plans an additional report examining key characteristics that can insulate agencies from political interference.

Case: Predatory Journals and Conferences (IAP, 2022)

Conclusions from this report are: (1) predatory academic practices lie on a continuum, ranging from deliberately deceitful practices to merely poor-quality ones—a spectrum of behaviors rather than a binary of predatory versus not predatory; (2) awareness of predatory practices is generally poor; (3) predatory journals and conferences risk becoming engrained in research culture and institutionalized, including in prominent databases and indexes, even though their impact is wholly underplayed; (4) drivers of predatory practices include the monetization and commercialization of academic research output and contemporary research evaluation systems that strongly privilege quantity over quality; and (5) predatory practices exploit weaknesses in the current peer review system, which is opaque, slow, and lacking both definitions of quality and training for making peer judgments.

There are now believed to be 15,500 predatory journals, prevalent globally and rising, particularly targeting medical disciplines. The risks are undertheorized but include dilution and distortion of the evidence in systematic reviews often used in policy making, the economic waste of research dollars spent on substandard research, and the vulnerability of early-career scholars, especially in LMICs. While all scientific institutions (universities, funders, the commercial and nonprofit publishing industry, and international science organizations) bear responsibility for reforming the poorly functioning elements of this system that support predatory practices, it is important to note that academic publishing presently lacks any governance body or regulatory authority.

Case: Retraction Watch

Retraction Watch is an open-access database that collects retracted articles and the reasons for their retraction. Initiated in 2010, the database now contains 35,000 entries. Based on evidence from surveys, studies, and reports from sleuths who look for errors and problems in papers, Oransky (2022) estimates that 1 in 50 papers in health and medicine would meet at least one of the Committee on Publication Ethics criteria for retraction, such as "clear evidence the findings are unreliable," yet the rate of retraction is still under 0.1% (Oransky, 2022).

The Retraction Watch database has been used in a number of studies related to research integrity. For example, Avissar-Whiting (2022) notes that preprints are rarely linked to the final published version of a paper; similarly, retractions do not reach back through this chain of versions. Retractions are a means of correction; the problem is the persistence of retracted papers in the literature through their continued citation (Avissar-Whiting, 2022). In a second example, Pastor-Ramon and colleagues tested the ability to find retracted articles through scientific search engines—Google Scholar, PubMed, Scopus, and Web of Science—and found none 100% effective in recognizing and notifying users that a document had been retracted. This means all such databases must be searched in order to retrieve as many retraction notices as possible. Researchers usually do not think to use Retraction Watch, which would be a more accurate source of such information (Pastor-Ramon et al., 2022). Retraction Watch is thus an excellent resource, both when consulted directly (a minimal screening sketch follows this case) and as a source of data for empirical studies that further probe the accuracy of scientific work. Such work is important to avoid building further research on unreliable prior work.

Law has online databases through which the current status of cases can be checked. Why does science and medicine not have a similar resource? This situation demonstrates a lack of responsibility on the part of several of the institutions of science: research-performing institutions that do not make their findings of misconduct public in retraction notices, journals that are loath to state distinctly why an article was retracted, regulators that lack the authority to require journals to publish retractions, and others.
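As one illustration of consulting Retraction Watch directly, the sketch below screens a set of cited DOIs against a locally downloaded copy of the database. The file name and the column headings used here (OriginalPaperDOI, RetractionNature, Reason) are assumptions about the export format rather than a documented schema, so they may need adjusting to the actual download.

```python
import csv

# DOIs cited in a manuscript (hypothetical examples).
cited_dois = {"10.1000/example.001", "10.1000/example.002"}
cited_lower = {d.lower() for d in cited_dois}

# Path to a locally downloaded export of the Retraction Watch database;
# the column names below are assumptions, not a documented schema.
RW_EXPORT = "retraction_watch.csv"

flagged = []
with open(RW_EXPORT, newline="", encoding="utf-8") as fh:
    for row in csv.DictReader(fh):
        doi = (row.get("OriginalPaperDOI") or "").strip().lower()
        if doi in cited_lower:
            flagged.append((doi, row.get("RetractionNature", ""), row.get("Reason", "")))

if flagged:
    for doi, nature, reason in flagged:
        print(f"WARNING: cited retracted item {doi} -- {nature}: {reason}")
else:
    print("No cited DOI matched a Retraction Watch record.")
```

Run before submission, such a check flags citations to retracted work that the general-purpose search engines discussed above may miss.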

Case: ClinicalTrials.gov

ClinicalTrials.gov was launched in 2000 by the National Library of Medicine. It is one of many clinical trial registries around the world with the shared objectives of addressing reporting bias, including publication bias and selective outcome reporting, and of increasing clinical trial transparency and accountability to the public. Such registries are used by the public to access trial information. Registration expanded with a 2005 requirement by the ICMJE that trials be registered before enrollment begins and with the Food and Drug Administration Amendments Act of 2007 (FDAAA), in which the US Congress broadened trial registration and reporting requirements (Gresham et al., 2022). Before these registries existed, there was no viable way of identifying trials except through the published literature—problematic, since only a small portion of trials are published. Further problems are that a large number of trials are still not registered and that the accuracy and quality of the data in the registry cannot be guaranteed (Gresham et al., 2022).

Like Retraction Watch (noted above), ClinicalTrials.gov provides data for further studies. One such study, based on a random sample of trial protocols, found a lack of documentation of systematic searches for prior or ongoing clinical trials testing similar hypotheses; indeed, 41% did not cite the majority of easily accessible trials. Such lax practice hampers IRBs' ability to judge a proposed trial's merit or its risks and benefits (Sheng et al., 2022). A second study using ClinicalTrials.gov data found that approximately forty percent of RCTs are not published, a major opening for publication bias; since about half of the trials registered on ClinicalTrials.gov have no corresponding publications, the registry is the only source of information about them (Zarin & Selker, 2022). Together, these studies demonstrate how partial our information about the research enterprise is, much of it unavailable for examination relevant to research integrity. Also relevant is that NIH failed to follow up on its funded investigators (intramural and extramural) who did not comply with federal reporting requirements on ClinicalTrials.gov, yet continued to fund them. NIH concurred with these findings and maintained that it had started taking steps to track and report researcher compliance. Such questions are increasingly raised in an effort to convince regulators to boost oversight of clinical trial reporting (Silverman, 2022). A minimal sketch of such a reporting-compliance check follows this case.
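The sketch below illustrates the kind of results-reporting check discussed above, run over an assumed CSV export of registry records. The file name and column headings (OverallStatus, CompletionDate, ResultsFirstPostDate) approximate ClinicalTrials.gov field labels but are treated here as assumptions, and the one-year window is only a rough proxy for the FDAAA reporting requirement.

```python
import csv
from datetime import date, datetime, timedelta

# Assumed CSV export of registry records; column names are assumptions,
# approximating ClinicalTrials.gov field labels.
EXPORT = "ctgov_export.csv"

def parse_date(text):
    """Registry dates appear in several formats; return None if unparseable."""
    for fmt in ("%B %d, %Y", "%Y-%m-%d", "%B %Y"):
        try:
            return datetime.strptime(text.strip(), fmt).date()
        except ValueError:
            continue
    return None

one_year_ago = date.today() - timedelta(days=365)
completed = overdue = 0

with open(EXPORT, newline="", encoding="utf-8") as fh:
    for row in csv.DictReader(fh):
        if row.get("OverallStatus", "").strip().lower() != "completed":
            continue
        finished = parse_date(row.get("CompletionDate", "") or "")
        if not finished or finished > one_year_ago:
            continue  # only count trials completed more than a year ago
        completed += 1
        # FDAAA generally expects results within a year of completion;
        # here we simply flag completed trials with no posted results.
        if not (row.get("ResultsFirstPostDate") or "").strip():
            overdue += 1

if completed:
    print(f"{overdue}/{completed} trials completed over a year ago "
          f"({overdue / completed:.0%}) have no posted results")
```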

References

Avissar-Whiting, M. (2022). Downstream retraction of preprinted research in the life and medical sciences. PLoS One, 17(5), e0267971. https://doi.org/10.1371/journal.pone.0267971
Babic, A., Tokalic, R., Cumha, J. A. S., Novak, I., Suto, J., Vidak, M., Miosic, I., Vuka, I., Pericic, T. P., & Puljak, L. (2019). Assessments of attrition bias in Cochrane systematic reviews are highly inconsistent and thus hindering trial comparability. BMC Medical Research Methodology, 19, 76. https://doi.org/10.1186/s12874-019-0717-9
Barcot, O., Boric, M., Dosenovic, S., Pericic, T. P., Cavar, M., & Puljak, L. (2019). Risk of bias assessments for blinding of participants and personnel in Cochrane reviews were frequently inadequate. Journal of Clinical Epidemiology, 113, 104–113. https://doi.org/10.1016/j.jclinepi.2019.05.012
Brainard, J., & Kaiser, J. (2022). US to require free access to papers on all research it funds. Science, 377(6610), 1026–1027. https://doi.org/10.1126/science.ade6577
Burki, T. (2018). The Cochrane board votes to expel Peter Gotzsche. Lancet, 392, 1103–1104. https://doi.org/10.1016/s0140-6736(18)32351-1
Freudenberg, N., Lee, K., Buse, K. L., Collin, J., Crosbie, E., Friel, S., Klein, D. E., Lima, J. M., Marten, R., Mialon, M., & Zenone, M. (2021). Defining priorities for action and research on the commercial determinants of health: A conceptual overview. American Journal of Public Health, 111(12), 2202–2211. https://doi.org/10.2105/AJPH.2021.306491
Gleave, R. (2019). More than a collaboration – What sort of organization should Cochrane be? Using organizational metaphors to explore the work of Cochrane. Journal of the Evaluation of Clinical Practice, 245, 729–738. https://doi.org/10.1111/jep.13214
Gomez, E. J. (2022). Enhancing our understanding of the commercial determinants of health: Theories, methods, and insights from political science. Social Science & Medicine, 301, 114931. https://doi.org/10.1016/j.socscimed.2022.114931
Government Accountability Office (GAO). (2022, April). Scientific integrity: HHS agencies need to develop procedures and train staff on reporting and addressing political interference (GAO-22-104613).
Gresham, G., Meinert, J. L., Gresham, A. G., Piantadosi, S., & Meinert, C. L. (2022). Update on the clinical trial landscape: Analysis of ClinicalTrials.gov registration data, 2000–2020. Trials, 23(1), 858. https://doi.org/10.1186/s13063-022-06569-2
Heywood, P., Stephani, A. M., & Garner, P. (2018). The Cochrane Collaboration: Institutional analysis of a knowledge commons. Evidence & Policy, 14(1), 121–142. https://doi.org/10.1332/174426417X15057479217899
Inter-Academy Partnership (IAP). (2022). Combatting predatory academic journals and conferences (Report). "IAP is a network of over 140 global, regional and national academies representing over 30,000 leading scientists, engineers and health professionals in over 100 countries" (p. 24).
Jia, P., Lin, L., Kwong, J. S. W., & Xu, C. (2021). Many meta-analyses of rare events in the Cochrane Database of Systematic Reviews were underpowered. Journal of Clinical Epidemiology, 131, 113–122. https://doi.org/10.1016/j.jclinepi.2020.11.017
Jorgensen, L., Gotzsche, P., & Jefferson, T. (2018). The Cochrane HPV vaccine review was incomplete and ignored important evidence of bias. BMJ Evidence-Based Medicine, 23(5), 165–168. https://doi.org/10.1136/bmjebm-2018-111012
Lee, K., & Freudenberg, N. (2022). Public health roles in addressing commercial determinants of health. Annual Review of Public Health, 43, 375–395. https://doi.org/10.1146/annurev-publhealth-052220-020447
Oransky, I. (2022). Retractions are increasing, but not enough. Nature, 608, 9.
Pastor-Ramon, E., Herrera-Peco, I., Agirre, O., Garcia-Puente, M., & Moran, J. M. (2022). Improving the reliability of literature reviews: Detection of retracted articles through academic search engines. European Journal of Investigation in Health, Psychology and Education, 12(5), 458–464. https://doi.org/10.3390/ejihpe12050034
Propadalo, I., Tranfic, M., Vuka, I., Barcot, O., Percic, T. P., & Puljak, L. (2019). In Cochrane reviews, risk of bias assessments for allocation concealment were frequently not in line with Cochrane's Handbook guidance. Journal of Clinical Epidemiology, 106, 10–17. https://doi.org/10.1016/j.jclinepi.2018.10.002
Roberts, I., Ker, K., Edwards, P., Beecher, D., Manno, D., & Sydenham, E. (2015). The knowledge system underpinning healthcare is not fit for purpose and must change. BMJ, 350, h2463. https://doi.org/10.1136/bmj.h2463
Severin, A., Egger, M., Eve, M. P., & Hurlimann, D. (2022). Discipline-specific open access publishing practices and barriers to change: An evidence-based review. F1000Research, 7, 1925. https://doi.org/10.12688/f1000research.17328.1
Sheng, J., Feldhake, E., Zarin, D. A., & Kimmelman, J. (2022). Completeness of clinical evidence citation in trial protocols: A cross-sectional analysis. Med, 3(5), 335–343.e6. https://doi.org/10.1016/j.medj.2022.03.002
Silverman, E. (2022, October 14). Lawmakers push NIH to disclose steps being taken to ensure clinical trial results are reported. STAT.
Smits, R., & Pells, R. (2022). Plan S for shock: Science shock solution speed. Ubiquity Press.
Tomlinson, E., Pardo, J. P., Dodd, S., Sivesind, T., Szeto, M. D., Dellavalle, R. P., Skoetz, N., Laughter, M., Wells, G. A., & Tugwell, P. (2022). Substantial heterogeneity found in reporting mortality in Cochrane systematic reviews and core outcome sets in COMET database. Journal of Clinical Epidemiology, 145, 47–54. https://doi.org/10.1016/j.jclinepi.2022.01.006
Van Noorden, R. (2022). An open-access history: The world according to Smits. Nature, 603(7901), 384–385. https://doi.org/10.1038/d41586-022-00717-z
Wood, B., Baker, P., & Sacks, G. (2021). Conceptualizing the commercial determinants of health using a power lens: A review and synthesis of existing frameworks. International Journal of Health Policy & Management. https://doi.org/10.34172/ijhpm.2021.05
Zarin, D. A., & Selker, H. P. (2022). Reporting of clinical trial results: Aligning incentives and requirements to do the right thing. Clinical Therapeutics, 44(3), 439–441. https://doi.org/10.1016/j.clinthera.2022.02.001

Index

A Academic journals, 116 Accountability, 161 ARRIVE, 79 Asilomar conference, 159 Autonomous science, 74 B Behavioral research scientists, 78 Bias, 100 Biomedical science, 20, 21 Blind spots, 19 C Cardiac Regeneration Research, 179 Cardiovascular disease, 22 Cardiovascular trials, 22 Clinical Research Appraisal Instrument, 67 Clinical Trial Design, 49 ClinicalTrials.gov, 119 Cognitive locks, 28, 29 Common biases, 60 Conflict of interest (COI), 125, 182 background, 93, 94 in biomedical research, 104 case studies, 96, 97 in clinical trials, 95 and Conflict of Commitment, 106, 107 data safety monitoring boards (DSMBs), 105 management and regulation, 98–105 COVID-19 vaccine, 106

D Data collection, 86 Dual use research, 80 E Editorial decisions, 134 Empirical research, 186, 187 Ethical inquiry, 155 Ethical issues, 117 Ethics-based auditing, 162 European Clinical Trial Registry, 119 Evidence-based research integrity policy limitations, 50, 51 meta-science, 40, 41 regulatory science, 39 science policy, 41, 42 scientific findings, 37 scientific standards clarity of, 44, 45 scientific quality, 42, 43 translational science, 38, 39 F Fabrication/falsification/plagiarism (FFP), 21 False Statements Act, 145 Falsification, 115 Federal Government research facilities, 106 Financial conflicts of interest (FCOI), 95 Food and Drug Administration Amendments Act (FDAAA), 118



G Gene Curation Coalition, 158 Genome editing, 159 Germline human genome editing, 157 H Human thinking, role of, 19 I Industry-based research, 86 Industry-funded research findings, 117 Informed consent, 97 Institutional accountability, 20 Institutional Conflicts of Interest (ICOI), 98, 101 Institutional Corruption, 73, 75, 87 Integrity management, 1 Intellectual integrity, 2 International Science Council, 85 J Journal policies, 95 Journal reform, 124 Journals, 123 L Logics, 123 M Machine learning, 159 Meta-research, 76 Meta-science, 40, 41 Muslim-violence bias, 160 N National Center for Biotechnology Information (NCBI), 121 National Institute of Neurological Disease and Stroke (NINDS), 84 National Science Foundation (NSF) grants, 107, 144 Negligence, 77 NIH RCR Training policy, 58 NIH Revitalization Act, 22 NIH Strategic Plan, 74

O Office of Research Integrity (ORI), 21, 62 P Patient-centered insights, 19 Peer review, 133, 134 consequences and solutions, 137 in funding decisions, 134, 135 practice, consequences and solutions, 135, 136 Performance-Based Research Evaluation Arrangements (PREAs), 143 Personal integrity, 1 Policy sciences, 29 Polygenic embryo screening, 156 Professional integrity, 1 Publication bias, 86 Publication Pressure Questionnaire, 64 Public health recommendations, 27 Public policy, 127, 128 PubMed, 77 Q Quality governance, 61 Quality managers, 115 Quality of data, 95 Quest Center model, 65 Questionable Measurement Practices (QMP), 29 Questionable research practices (QRPs), 178 R Racial inequalities, 19 Randomized controlled trials (RCTs), 157 RCR Reasoning Test, 58 Registered reports (RR), 77 Regulatory agencies, 85 Regulatory science, 39 Research Center Borstel, 115 Research funding organizations (RFOs), 120 Research impact assessment, 144 Research infrastructure, 121 Research integrity (RI), 57, 73, 81–83 biomedical science, 20, 21 blind spots, 19 detection of, 21–23 in research practice/governance, 23 cognitive locks, 28–30

conflict of interest, 182 data ownership/sharing in health research, 181 in emerging technologies AI Research, in biomedical sciences, 159–163 cause-and-effect relationships, 154 governance and ethics of, 154 governance/regulatory oversight/ethical debates in, 164, 165 human gene therapy/editing research, 155–158 innovation ethics, 166 funder, 120 historical context, 177 hyperindividualization of, 25 institutional responsibilities for coordinated quality of research, 115–118 market logic, research integrity logic, 123, 124, 126 social framework and trustworthiness, 114 Trustworthy Research, 122, 123 journals, 120 justice in, 26, 27 measurement instruments and methods, 46, 47 metaphors, 23, 24 moral revolutions, 175, 176 organizational support of ethical behavior, 25, 26 policy, 19, 31 problems, 179 publishing research, 173 regulators, 119 research infrastructure, 121 science self-regulation, 179, 180 self-correction, 1 social value requirement, for health-related research, 180, 181 socio-cultural perspective, 32, 33 standards of evidence in, 45, 46 steps to, 182–187 trustworthiness, 27, 28 Research Integrity Officers, 116 Research misconduct (RM), 21, 23, 59, 73, 74, 81–83, 88, 116 Research performing institutions (RPOs), 120 Research security, 107 Research syntheses, 48, 49 Resource allocation, 87 Responsible conduct of research (RCR)

consensus on training goal, 62 evaluation, 58, 59 higher education research, 67, 68 implications for, 60, 61 institutional responsibility, 65 internationals and RCR training, 64, 65 learning, 63 mentoring and peer support, 62, 63 and quality of science, 61, 62 regulators regulating, 65, 66 research ethics, 64 self-monitoring and system-monitoring, 63, 64 Reynolds, 24 Rigor and Transparency Index Quality Metric for Assessing Biological and Medical Science Methods, 145 S Science evaluation bibliometrics/infometrics, 138–142 ethical imperatives, 144–146 peer review, 133, 134 consequences and solutions, 137 in funding decisions, 134, 135 practice, consequences and solutions, 135, 136 research impact evaluation, 143, 144 Science institutions, 127, 128 Science policy, 41, 42 Science regulation, 75 Science self-governance, 75 Science self-regulation, 74 Scientific community, 28, 74, 128, 156 Scientific institutions, 87 Scientific knowledge, 21 Scientific self-governance, 74 Scientific self-regulation, 99, 100, 127 Self-regulation, by scientific community, 74 external regulation, 80–83 human experiments with hepatitis, 79, 80 institutional corruption framework, 83–88 preclinical research, 79 research misconduct regulations, 74 research practices and products, 73 science self-governance, 79 and suggestions, for improvement, 76–79 Social and political structures, 2 Social behavior, 60 social networks, 25 Social responsibility, 74 Stewardship responsibilities, 75

Sunshine Act, 97 Survey of Organizational Research Climate (SOuRCe), 58, 64 Systematic reviews, 120 T Theory of institutional corruption, 126

Translational science, 38, 39 Trustworthiness, 114 Trustworthy Research, 122, 123 U US Office of Research Integrity (ORI), 59 US Physician Payments Sunshine Act, 95