Ethics and Epidemiology
Third Edition

Edited by
STEVEN S. COUGHLIN AND ANGUS DAWSON
Oxford University Press is a department of the University of Oxford. It furthers the University's objective of excellence in research, scholarship, and education by publishing worldwide. Oxford is a registered trade mark of Oxford University Press in the UK and certain other countries.

Published in the United States of America by Oxford University Press, 198 Madison Avenue, New York, NY 10016, United States of America.

© Oxford University Press 2021

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, without the prior permission in writing of Oxford University Press, or as expressly permitted by law, by license, or under terms agreed with the appropriate reproduction rights organization. Inquiries concerning reproduction outside the scope of the above should be sent to the Rights Department, Oxford University Press, at the address above. You must not circulate this work in any other form and you must impose this same condition on any acquirer.

Library of Congress Cataloging-in-Publication Data
Names: Coughlin, Steven S. (Steven Scott), 1957– editor. | Dawson, Angus, editor.
Title: Ethics and epidemiology / edited by Steven S. Coughlin and Angus Dawson.
Description: Third edition. | New York, NY : Oxford University Press, [2021] | Includes bibliographical references and index.
Identifiers: LCCN 2021002991 (print) | LCCN 2021002992 (ebook) | ISBN 9780197587058 (hardback) | ISBN 9780197587072 (epub) | ISBN 9780197587089
Subjects: MESH: Epidemiology—ethics | Ethics, Medical | Codes of Ethics | Epidemiologic Methods | Health Services Research—ethics | Social Medicine—ethics
Classification: LCC RA652 (print) | LCC RA652 (ebook) | NLM WA 105 | DDC 174.2/944—dc23
LC record available at https://lccn.loc.gov/2021002991
LC ebook record available at https://lccn.loc.gov/2021002992

DOI: 10.1093/oso/9780197587058.001.0001

1 3 5 7 9 8 6 4 2
Printed by Integrated Books International, United States of America

This material is not intended to be, and should not be considered, a substitute for medical or other professional advice. Treatment for the conditions described in this material is highly dependent on the individual circumstances. And, while this material is designed to offer accurate information with respect to the subject matter covered and to be current as of the time it was written, research and knowledge about medical and health issues is constantly evolving and dose schedules for medications are being revised continually, with new side effects recognized and accounted for regularly. Readers must therefore always check the product information and clinical procedures with the most up-to-date published product information and data sheets provided by the manufacturers and the most recent codes of conduct and safety regulation. The publisher and the authors make no representations or warranties to readers, express or implied, as to the accuracy or completeness of this material. Without limiting the foregoing, the publisher and the authors make no representations or warranties as to the accuracy or efficacy of the drug dosages mentioned in the material. The authors and the publisher do not accept, and expressly disclaim, any responsibility for any liability, loss, or risk that may be claimed or incurred as a consequence of the use and/or application of any of the contents of this material.
Contents

Preface vii
Contributors ix

PART I FOUNDATIONS
1. Historical Foundations, Steven S. Coughlin 3

PART II KEY VALUES AND PRINCIPLES
2. Epidemiology and Informed Consent, Anna C. Mastroianni and Jeffrey P. Kahn 27
3. Solidarity and the Common Good: Social Epidemiology and Relational Ethics in Public Health, Bruce Jennings 44
4. Understanding the Ethics of Risk as Used in Epidemiology, Diego S. Silva 66
5. Risk and Precaution: The Ethical Challenges of Translating Epidemiology into Action, Stephen D. John 85

PART III METHODS
6. Ethical Issues in the Design and Conduct of Community-Based Intervention Studies, Michelle C. Kegler, Steven S. Coughlin, and Karen Glanz 105

PART IV ISSUES
7. Ethics in Public Health Practice, Robert E. McKeown 137
8. Ethical Issues in Genetic Epidemiology, Laura M. Beskow, Stephanie M. Fullerton, and Wylie Burke 175
9. Ethics, Epidemiology, and Changing Perspectives on AIDS, Carol Levine 196
10. Ethics Curricula in Epidemiology, Kenneth W. Goodman and Ronald J. Prineas 223
11. Conflicts of Interest, Walter Ricciardi and Carlo Petrini 245

Index 265
Preface

In the ten years since the second edition of Ethics and Epidemiology was published, there have been many important ethical developments in epidemiology and related fields in public health and medicine. These developments include the rise of public health ethics and the complex interrelations between professional ethics in epidemiology, public health ethics, and research ethics. Most of the chapters in previous editions tended to assume that the legal and regulatory structures that exist in the United States are the norm across the world, when this is not the case. In the third edition, chapters were written for a truly international, global audience and include the perspectives of additional scholars from across the world.

This book will be of interest to practicing public health professionals from various public health disciplines (epidemiology, behavioral science, genomics, health disparities, and global health), bioethicists, legal scholars, and members of nonprofit organizations, government agencies, and health advocacy organizations. Public health ethics is increasingly a standard part of master of public health (MPH) degrees. This book will be an invaluable resource for the thousands of MPH students across the world.

This revised edition of the book includes selected chapters from the first edition, which have been substantially updated and revised, along with several new chapters. The chapters are organized topically and divided into four parts. The first part is titled "Foundations" because its chapter introduces basic and recurring concepts and principles. The subsequent parts deal with "Key Values and Principles," "Methods," and "Issues."

The objective of this work is to make students, epidemiologists, and health professionals aware of situations that require moral reflection, judgment, or decision, while pointing to ways in which justified moral conclusions can be reached. We hope the book will also be of use to persons interested more broadly in bioethics and health policy.

S.S.C.
A.D.
Contributors

Laura M. Beskow, MPH, PhD
Center for Biomedical Ethics and Society, Vanderbilt University Medical Center, Nashville, TN, USA

Wylie Burke, MD, PhD
Department of Bioethics and Humanities, University of Washington, Mercer Island, WA, USA

Steven S. Coughlin, PhD, MPH
Department of Population Health Sciences, Medical College of Georgia, Augusta University, Augusta, GA, USA

Angus Dawson, PhD
Sydney Health Ethics, University of Sydney, Sydney, NSW, Australia

Stephanie M. Fullerton, DPhil
Department of Bioethics and Humanities, University of Washington School of Medicine, Seattle, WA, USA

Karen Glanz, PhD, MPH
Department of Biobehavioral Health Sciences, University of Pennsylvania, Philadelphia, PA, USA

Kenneth W. Goodman, PhD
Institute for Bioethics and Health Policy, University of Miami Miller School of Medicine, Miami, FL, USA

Bruce Jennings, MA
Department of Health Policy, Vanderbilt University, Brentwood, TN, USA

Stephen D. John, PhD
Department of History and Philosophy of Science, University of Cambridge, Cambridge, UK

Jeffrey P. Kahn, PhD, MPH
Berman Institute of Bioethics, Johns Hopkins University, Baltimore, MD, USA

Michelle C. Kegler, DrPH, MPH
Department of Behavioral, Social and Health Education Sciences, Rollins School of Public Health, Emory University, Atlanta, GA, USA

Carol Levine, MA
Families and Health Care Project, United Hospital Fund, New York, NY, USA

Anna C. Mastroianni, JD, MPH
School of Law, University of Washington, Seattle, WA, USA

Robert E. McKeown, PhD
Department of Epidemiology and Biostatistics, Arnold School of Public Health, University of South Carolina, Columbia, SC, USA

Carlo Petrini, PhD
Bioethics Unit, Italian National Institute of Health (Istituto Superiore di Sanità), Rome, Italy

Ronald J. Prineas, MB, BS, PhD, FRCP
Division of Public Health Sciences, Wake Forest University, Winston-Salem, NC, USA

Walter Ricciardi, MD, PhD
Dipartimento di Scienze della vita e sanità pubblica, Università Cattolica del Sacro Cuore, Rome, Italy

Diego S. Silva, PhD
Sydney Health Ethics, School of Public Health, University of Sydney, Camperdown, NSW, Australia
PART I
FOUNDATIONS

1
Historical Foundations

Steven S. Coughlin
This chapter considers the history of the rise of ethical concerns in the public health movement and epidemiology, which is the study of the distribution and determinants of disease in human populations. Epidemiology is a basic science in public health. This chapter provides an overview of early developments in public health and ethics. More recent developments are also discussed, including the origins of bioethics, regulatory safeguards for human subjects research, public health ethics, and contemporary epidemiologic ethics.
Early Developments in Public Health and Ethics

Until the end of the Middle Ages, few advances were made in public health except for the control of a very limited number of communicable diseases achieved through the segregation and quarantine of persons thought to be infectious.1,2 Around the sixteenth century in Europe, hypotheses began to emerge regarding the social genesis of disease, and some proposals were advanced concerning the role of government in public health. At the outset, these ideas had little practical impact, but they ultimately contributed to the emerging realization that government has an obligation to improve unsanitary conditions that threatened the health of rich and poor alike.1

The early writings relied on speculative hypotheses and were based more in the humanities than the sciences. For example, Thomas More (1478–1535) wrote a fictitious story set in the land of Utopia (1516) in which hygiene protected health and insurance was provided against sickness and unemployment. Jean-Jacques Rousseau (1712–1778), in the context of a prolonged criticism of "civil society" in Discourse on the Origins and Foundations of Inequality Among Men (1755),2–4 speculated that disease developed from social circumstances and that ill health resulted from many factors, most of which were beyond the power of medicine to heal:

With regard to illnesses, [I note] the extreme inequality in our lifestyle: excessive idleness among some, excessive labor among others; the ease with which
we arouse and satisfy our appetites and our sensuality; the overly refined foods of the wealthy, which nourish them with irritating juices and overwhelm them with indigestion; the bad food of the poor, who most of the time do not have even that, and who, for want of food, are inclined to stuff their stomachs greedily whenever possible; staying up until all hours, excesses of all kinds, immoderate outbursts of every passion, bouts of fatigue and mental exhaustion; countless sorrows and afflictions which are felt in all levels of society and which perpetually gnaw away at souls; these are the fatal proofs that most of our ills are of our own making, and that we could have avoided nearly all of them by preserving the simple, regular and solitary lifestyle prescribed to us by nature.4
Rousseau subsequently influenced writers in ethical theory during the Enlightenment. Rousseau's theories also had an impact on later public health writers, such as the influential German physician Johann Peter Frank (1745–1821), who held high positions in both government and academia in Germany, Austria, Italy, and Russia and was an early proponent of social medicine controlled by the state. Frank promoted the idea of "medical police," or physicians with a public health role of sufficient authority to protect people against the health consequences of squalid urban living conditions. He argued that the physician's primary obligations were not owed merely to patients or the local community, but to the state and the monarch. Public health responsibilities were thereby reconceived as physicians' primary responsibilities.1,5

Pathogenic microorganisms were still unknown during the Enlightenment. With the exception of a few diseases such as smallpox, disease was regarded as the result of unhealthy lifestyles and environments rather than of contagion. Poor air, water, and living conditions were thought to foster miasmas (poisonous vapors) that caused illness. Hence, many Enlightenment physicians undertook public health campaigns emphasizing both personal and environmental hygiene. They understood that preventive methods were more effective than curative techniques and believed that people were responsible for maintaining their own health. The success of their efforts was not surprising, because standard therapies of the time, such as laxatives, bloodletting, and induced vomiting, yielded less impressive results than did public health efforts.6,7

A small number of Enlightenment figures, including American physician Benjamin Rush (1745–1813) and Scottish physician John Gregory (1724–1773), focused on professional medical ethics. They were among the first writers to lecture and publish extensively on this subject. Both Rush and Gregory believed that physicians had a moral obligation to educate the public and disclose relevant information to patients. However, neither believed that physicians had a moral obligation to obtain informed consent from patients for the care they provided. Rush and Gregory only wanted patients and the general public to be
sufficiently educated to understand physicians' recommendations and be motivated to comply. Rush and Gregory doubted that nonphysicians could intelligently form their own opinions about medical issues and make appropriate choices about care. For example, Rush advised physicians to "yield to [patients] in matters of little consequence, but maintain an inflexible authority over them in matters that are essential to life." Gregory was quick to underscore that the physician must be keenly aware of the harms that untimely disclosures might cause to patients or to the public. Rush, Gregory, and other Enlightenment figures did not discuss the need to respect the patient's right to self-determination or to obtain consent for any purpose other than a medically successful outcome. Gregory and Rush appreciated the value of providing information, but ideas of informed consent in health care did not originate in their writings. The language of "informed consent" was not used during the Enlightenment, and indeed not until the 1950s.8,9

An organized system to protect public health was not developed until the nineteenth century in England.10 England was the first country to experience the social costs of the Industrial Revolution.2 Due to the efforts of Edwin Chadwick (1800–1890) and other English reformers, laws were enacted that provided relief to the poor, made it illegal to employ children under the age of nine in factories, and promoted the health and welfare of industrial workers.1,10 Chadwick was largely responsible for the passage of the Public Health Act of 1848, which created a board of health to oversee sanitary improvements, at about the time that British physician John Snow began his classic series of investigations of cholera in London.1,3,10 A royal commission headed by Chadwick recommended improvements in drainage systems in large towns, where a lack of sanitation had resulted in the spread of typhoid, cholera, and other diseases.1 Legislation of this type quickly spread to other countries and had a major impact on public health and life expectancy. The primary motivation for reform was the realization that poverty and unsanitary conditions had adverse economic and social consequences.1,10

Chadwick maintained an association with English philosopher Jeremy Bentham (1748–1832), whose progressive social reforms on behalf of children employed in factories, the poor, and women had a major impact in Victorian England and repercussions throughout Europe and India. As a young man, Chadwick was Bentham's assistant, and he later applied Bentham's utilitarian theories11 to practical public health problems. Chadwick came to see poverty as a major cause of ill health through increased exposures to toxic substances, poor diet, and so forth.

Chadwick was a contemporary of John Stuart Mill (1806–1873), a Benthamite and the foremost utilitarian writer of the nineteenth century. Mill was elected to the British Parliament in 1865 and supported such unpopular measures as increased protection for the more vulnerable members of society, especially
women, the poor, and persons condemned to capital punishment. Mill described his views on how to control disease in the following passage from his book Utilitarianism, which is still the most widely read account of utilitarian ethics:

Even that most intractable of enemies, disease, may be indefinitely reduced in dimensions by good physical and moral education, and proper control of noxious influences; while the progress of science holds out a promise for the future of still more direct conquests over this detestable foe. And every advance in that direction relieves us from some, not only of the chances which cut short our own lives, but, what concerns us still more, which deprive us of those in whom our happiness is wrapt up.12
Many of the founders of the public health movement in nineteenth-century England were guided by the utilitarian moral philosophy of Bentham, Mill, and other philosophers. The leaders of the sanitary movement attempted to use epidemiological methods of observation to prevent or control diseases that afflicted those who were worst off in society. For example, Chadwick led an inquiry into the diseases and unsanitary living conditions that were prevalent among England's impoverished working-class population. The resulting General Report on the Sanitary Conditions of the Labouring Population of Great Britain (1842) was a powerful indictment of the appalling living conditions of industrial workers and their families.1,10,13 A similar sanitary survey was undertaken subsequently by legislator Lemuel Shattuck (1793–1859) in Massachusetts. Shattuck's influential Report of the Sanitary Commission of Massachusetts 1850 outlined the basis for an organized system of public health.14

By the end of the nineteenth century, the germ theory of disease causation had gained widespread acceptance because of influential bacteriological discoveries by German physician and Nobel Prize winner Robert Koch, French chemist and biologist Louis Pasteur, and others.15,16 Microbiology and bacteriology became the most important medical sciences in both the United States and Europe, while epidemiology focused on the prevention of infectious diseases.16,17 With major breakthroughs in bacteriology and immunology, disease prevention in the individual moved to the forefront.10
Twentieth-Century Developments in Epidemiology and Ethics

Since the start of the public health movement in the mid-1800s—roughly the period in which epidemiology originated—the goal of epidemiology and public health has been to prevent premature death and disease by applying scientific
and technical knowledge.18 However, as events in the twentieth century attest, the rights of individuals have not always been respected in pursuing these important societal and scientific objectives.

At the beginning of the twentieth century, epidemiology in the United States was developed primarily in federal, state, and local health departments. A major center for epidemiological research in the United States was the Hygienic Laboratory, organized in 1891 by the Marine Hospital Service, later the U.S. Public Health Service. Investigators at the Hygienic Laboratory, renamed the National Institute of Health in 1930, studied both infectious and nutritional deficiency diseases, such as pellagra. Joseph Goldberger, Wade Hampton Frost, and other prominent epidemiologists received their training at the Hygienic Laboratory.17

Epidemiology developed separately in England, where leading epidemiologists in the 1930s, such as Major Greenwood of the London School of Hygiene, were concerned with both infectious and noninfectious disease epidemiology.16,17 Greenwood was an active member of the Socialist Medical Association and an early advocate of socialized medicine. Like other British epidemiologists of the era, he was concerned about the social causes of disease and the health of all groups in society.10,16

The leading epidemiologists during this period rarely mentioned ethical issues in their publications. Experts in medicine, public health, and moral philosophy showed little interest in the major issues of biomedical ethics that we focus closely on today. A noteworthy exception was U.S. Army surgeon Walter Reed, who developed formal procedures for obtaining the consent of potential subjects in his yellow fever experiments, using a written contract that set forth Reed's understanding of the ethical duties of medical researchers.19 Although deficient by contemporary standards of disclosure and consent, these procedures recognized the right of patients to refuse or agree to participate in research.

By the mid-twentieth century, the focus in epidemiology had shifted in both Europe and the United States in response to the increasing spread of chronic diseases such as cardiovascular disease, cancer, and diabetes. These diseases were believed to have multiple environmental and genetic etiologies. The latter part of the 1940s was notable for both the founding of the World Health Organization (WHO) and the initiation of the Framingham Study, a well-known cohort study of heart disease that has been ongoing since 1949.20 The Nuremberg Code and the Declaration of Geneva were also developed during this period. In 1956, Sir Richard Doll and Sir Austin Bradford Hill released the results of their cohort study of cigarette smoking and lung cancer among British doctors.21 A few years later, in 1960, Brian MacMahon and his colleagues published Epidemiological Methods, the first text to provide a clear description of case–control and cohort study designs.20,22
In the years immediately following World War II, references to ethical issues in the epidemiological literature were limited to narrowly focused discussions of the ethics of randomized controlled trials.23,24 Epidemiological researchers, primarily physicians, undertook studies with little or no public scrutiny of their methods or professional obligations. In addition, they were unencumbered by what would later become regulatory safeguards for the protection of human subjects, such as review by institutional review boards (IRBs) or research ethics committees. Since that time, major regulatory changes have been made in the United States and many other countries. These changes have substantially improved the safeguards for the welfare and rights of human research subjects. These improvements have largely been driven by the widespread belief that people possess fundamental rights that should not be violated in the pursuit of scientific and medical progress.25–27
The Origins of Regulatory Safeguards for Human Subjects Research

In 1908, Sir William Osler, a physician who revolutionized the U.S. medical school curriculum, appeared before the British Royal Commission on Vivisection. He used the occasion to discuss Reed's research on yellow fever. When asked by the commission whether risky research on humans is morally permissible, Osler answered, expressing a view he attributed to Reed: "It is always immoral without a definite, specific statement from the individual himself, with a full knowledge of the circumstances. Under these circumstances, any man, I think, is at liberty to submit himself to experiments" (emphasis added). When then asked if "voluntary consent . . . entirely changes the question of morality," Osler replied, "Entirely."28

Some writers on the history of this period describe Osler's testimony as reflecting the usual and customary ethics of research at the turn of the century,29 but this sweeping historical claim has little supporting evidence. The extent to which any such consent requirement was then ingrained in the ethics of research, or would become ingrained over the next half century, is still a matter of historical controversy.

One reason for the relatively late emergence of interest in research ethics is that scientifically rigorous research involving human subjects did not become common in the United States or Europe until the middle of the twentieth century. Only shortly before the outbreak of World War II had research evolved into an established and thriving concern.30–32 Research ethics prior to World War II had approximately the same influence on research practices as medical ethics had on clinical practices.33
Historical Foundations 9 The major events that pushed research ethics to the forefront occurred at the Nuremberg trials, when prominent leaders of Nazi Germany were prosecuted for crimes committed during the Holocaust. The Nuremberg military tribunal developed the Nuremberg Code of 1947, which was a set of ten principles for human experimentation. According to the famous Principle 1, the primary consideration in research is the subject’s voluntary consent, which is “absolutely essential.”34 The Nuremberg Code was not an attempt to formulate new rules of professional conduct.35 Rather, it delineated principles of medical and research ethics in the context of a trial for war crimes. Although it had little immediate impact on the conduct of biomedical research, the Nuremberg Code served as a model for many professional and governmental codes formulated in the 1950s and 1960s, and its provision requiring voluntary consent was a forerunner of informed consent practices in biomedical research.33,36 The General Assembly of the World Medical Association (WMA), an international organization of physicians founded in 1946, drafted the Declaration of Geneva in 1948. Subsequently, the WMA began to formulate a more comprehensive code to distinguish ethical from unethical clinical research. A draft was produced in 1961, but the WMA did not adopt the code until its 1964 meeting in Helsinki.37 This three-year delay was not caused by vacillation or indifference, but rather by international political processes and a determination to produce a universally applicable and useful document.38 The Declaration of Helsinki made consent a central requirement of ethical research and introduced an important distinction between therapeutic and nontherapeutic research. The former is defined in the declaration as research “combined with patient care” and is permitted as a means of acquiring new medical knowledge only insofar as it “is justified as purely scientific research without therapeutic value or purpose for the specific subjects studied.” The declaration requires consent for all instances of nontherapeutic research, unless a subject is incompetent, in which case guardian consent is necessary. According to paragraph I.9 of the declaration, “In any research on human beings, each potential subject must be adequately informed of the aims, methods, anticipated benefits and potential hazards of the study and the discomfort it may entail [and] that he is at liberty to abstain . . . The doctor should then obtain the subject’s freely given informed consent.”25,35 The American Medical Association, the American Society for Clinical Investigation, the American Federation for Clinical Research, and many other medical groups either endorsed the Declaration of Helsinki or established their own ethical requirements consistent with the declaration’s provisions.39 Officials at federal agencies in the United States also developed provisions based on the declaration, some of which were almost verbatim reformulations of the declaration. Regardless of its shortcomings, the Helsinki Code is a foundational
document in the history of research ethics and the first significant attempt at self-regulation by the medical research community. The Nuremberg Code was the first code of medical research developed externally by a court system, and the Declaration of Helsinki was the first code of medical research developed internally by a professional medical body.

More comprehensive guidelines formulated in 1982 by the WHO and the Council for International Organizations of Medical Sciences (CIOMS) used the Declaration of Helsinki (1975) as a starting point.25,35 According to these guidelines, all human subjects research should be reviewed by an independent committee.35 The WHO/CIOMS guidelines also contain special provisions for protecting vulnerable persons in medical experiments, such as pregnant women, children, people who are mentally ill, and people in developing countries.35

In the United States, Congress passed the Drug Amendments of 1962 to make fundamental changes in federal regulation of the drug industry.40 These amendments were passed in response to large numbers of prescriptions of the drug thalidomide, which had not been adequately tested for use in pregnant women. Thalidomide caused severe birth defects in many children of such pregnant women. The amendments required researchers to inform research subjects of a drug's experimental nature and receive their consent before starting an investigation, except when the researchers "deem it not feasible or, in their professional judgment, contrary to the best interests of such human beings."41

On January 17, 1966, James Lee Goddard, a former assistant surgeon general, became commissioner of the U.S. Food and Drug Administration (FDA). Beset by numerous reports of medical experimentation without consent of subjects, as well as the swirl of controversy caused by the injection of live human cancer cells into twenty-two chronically ill patients without their consent at the Jewish Chronic Disease Hospital, Goddard was determined to resolve the ambiguities surrounding informed consent. He appointed several FDA officials to study the issue and make recommendations. In August 1966, the FDA published new provisions in its "Consent for Use of Investigational New Drugs on Humans: Statement of Policy."42,43 This publication took place two months after the appearance of an influential article in the New England Journal of Medicine by Henry Beecher, MD, an anesthesiologist credited with establishing the peer-review system for experimental protocols in medicine. In his article,44 Beecher charged that many patients involved in clinical research experiments had never had the risks of participating satisfactorily explained to them or were unaware that they were subjects of an experiment.

These problems had also been noticed by leaders of the National Institutes of Health (NIH). In late 1963, James Shannon, NIH director from 1955 to 1968, asked the NIH division that supported research centers to investigate these problems and make recommendations.45 An associate chief for program development, Robert
B. Livingston, led this study.46 His report, in November 1964, noted the absence of an applicable code of conduct for research, as well as an uncertain legal context.47 According to Livingston's report, it would be difficult for NIH to assume responsibility for ethics and research practices without striking an unduly authoritarian posture on requirements for research. The authors also noted that ethical problems were raised by policies "inhibiting the pursuit of research on man" and added that "NIH is not in a position to shape the educational foundations of medical ethics, or even the clinical indoctrination of young investigators."45

NIH Director Shannon was disappointed with this part of the report because he believed that NIH should command a position of increased responsibility. However, he accepted the report and regarded some of its recommendations as urgent. In early 1965, Shannon asked the U.S. Surgeon General to give "highest priority" to "rapid accomplishment of the objectives" of the basic recommendations. He suggested more consultation with members of the legal profession and clergy, as well as the medical profession, and endorsed the idea of "review [of research protections] by the investigator's peers."48

Shannon and Surgeon General Luther Terry jointly decided to present the problems discussed in the report to the National Advisory Health Council (NAHC) in September 1965. At this decisive meeting, Shannon argued that NIH should assume responsibility for placing formal controls on the independent judgment of investigators to remove conflicts of interest and biases. Specifically, he argued in favor of subjecting research protocols to impartial peer review of the research risks and the adequacy of protections of subjects' rights.49 Shannon knew that "consent" could easily be manipulated by physicians using their authority to persuade otherwise-unwilling patients to participate, and he prompted a discussion of how the consent process could also be made impartial.

The NAHC members agreed that all of these concerns were valid, but they did not believe that the many fields involved in government-supported research, including epidemiology and the social sciences, could be governed by a single set of procedures or regulations. Nevertheless, within three months, NAHC supported a resolution at its December 3, 1965, meeting that followed the broad outlines of Shannon's recommendations and proposed guidelines for federal research ethics.50 The resolution was accepted by newly installed Surgeon General William H. Stewart, who issued a policy statement in February 1966 that became a landmark in the history of research ethics in the United States. This policy statement on "Clinical Investigations Using Human Subjects" compelled institutions receiving grant support from the Public Health Service to provide prior review by a committee for proposed research with human subjects. The new IRBs would be responsible for reviewing (1) the rights and welfare of subjects, (2) the appropriateness of methods used to obtain informed consent, and (3) the balance
of risks and benefits for research subjects.51 The subjective judgment by a principal investigator or program director that human subjects' rights would be adequately protected in a proposed study was no longer sufficient for federal funding eligibility. These developments, along with parallel developments in other countries such as Australia and Great Britain, began the "movement to ethics committees."52

Peer review served as the basis for several federal policies governing research ethics. The federal initiatives were endorsed by much of the biomedical community53 and were adopted in modified form by the Association of American Medical Colleges as "a requirement" for medical school accreditation.54 Over the next decade, they served as a crude model that was gradually refined and finally became accepted in institutional practices for the protection of human research subjects throughout the United States.

Shortly after these developments, the U.S. Department of Health, Education, and Welfare (now the Department of Health and Human Services) issued a series of guidebooks and regulations for the protection of human research subjects. The National Research Act of 1974 established the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, which made a number of important recommendations by 1978, many having the effect of federal law.25,33,55 Subsequent federal regulations for the protection of human research subjects in the United States, including those of the FDA and the Department of Health and Human Services, have resulted in a complex IRB system (to ensure the rights and welfare of research subjects as well as justice in the selection of subjects) and other regulatory safeguards.25,26,33
The Origins of Contemporary Epidemiological Ethics

Ethics in contemporary epidemiology has its origins in the historical developments that led to regulatory safeguards for human subjects research and in parallel developments in the history of bioethics that are beyond the scope of this chapter. Nevertheless, the foundational concepts and principles of bioethics are an important part of ethics in epidemiology.
The 1970s

By the early 1970s, the Tuskegee Syphilis Study (an observational study of four hundred Black men with syphilis who were not given curative treatment), the Jewish Chronic Disease Hospital case, and Beecher's article had fostered a growing awareness of the potential for ethical problems and dilemmas in epidemiology and clinical research.25,33,56–58 Some ethical issues in epidemiological research and practice drew widespread attention in the 1970s when
Historical Foundations 13 U.S. legislators responded to public concern and began drafting stringent laws, including the Privacy Act of 1974 to protect the privacy and confidentiality of medical records.59 Similar data protection legislation was enacted in the Federal Republic of Germany in 1970, and Great Britain followed suit in 1984. Because of this legislative trend, some forms of epidemiological research and routine surveillance activities, including study designs that had provided important insights into the environmental causes of disease, were at risk of becoming unjustifiably restricted. Leading epidemiologists responded to the threat of growing limitations on the use of routinely collected medical records by explaining the usefulness of the endangered research to society and future patients and by outlining the confidentiality safeguards that should be employed by epidemiologists.60 An influential article by Leon Gordis, Ellen Gold, and Raymond Seltser on privacy protection in epidemiological research appeared in the American Journal of Epidemiology in 1977, the same year that the Privacy Protection Study Commission report was released in the United States.60,61 Mervyn Susser, Zena Stein, and Jennie Kline published a far-ranging paper on ethical issues in epidemiology in 1978.62 In the late 1970s, epidemiologists had no ethics guidelines or professional codes of conduct specific to their field. Unlike many other professional groups, they had no acknowledged means of self-regulation.59 As a result of the growth of epidemiology graduate programs for nonphysicians, epidemiologists were being trained without direct exposure to the ethical traditions of medicine.20,60
The 1980s

New public health problems, such as the global spread of AIDS, brought new ethical questions, as discussed by Carol Levine in Chapter 9. In the mid-1980s, Colin Soskolne and others proposed the development of ethics guidelines for epidemiologists.63,64 In "Epidemiology: Questions of Science, Ethics, Morality, and Law," published in the American Journal of Epidemiology in 1989, Soskolne argued that ethics guidelines could be useful for teaching purposes and as a framework for the debate of ethical issues.59

By 1987, the Society for Epidemiologic Research had formed committees to examine the ethical problems of conflict of interest and access to data by third parties. Also in 1987, the International Epidemiological Association held a major session on ethics at its annual conference in Helsinki, Finland. The Industrial Epidemiology Forum organized a conference on ethics in epidemiology in Birmingham, Alabama, in 1989, in conjunction with the annual meeting of the Society for Epidemiologic Research. The papers presented at this conference were published in 1991 with proposed ethics guidelines for epidemiologists.65,66
The 1990s

In 1990, the International Epidemiological Association circulated draft ethics guidelines for epidemiologists at an ethics workshop in Los Angeles,67 and the International Society for Pharmacoepidemiology established an ethics committee. The next year, the American College of Epidemiology established its Committee on Ethics and Standards of Practice, and CIOMS published its International Guidelines for Ethical Review of Epidemiological Studies.59,68 By this time, most major groups of epidemiologists had recognized the importance of a number of ethical issues. Epidemiologists in Italy, Canada, the United States, and many other countries had discussed the development of ethics guidelines for epidemiologists.

A symposium on Ethics and Law in Environmental Epidemiology was held in Mexico City in 1992 in conjunction with the annual meeting of the International Society for Environmental Epidemiology (ISEE),69 and the WHO and the ISEE jointly convened an International Workshop on Ethical and Philosophical Issues in Environmental Epidemiology in North Carolina in 1994, where findings of an international ethics survey were presented.70,71 The International Clinical Epidemiology Network Ethics Group also met during 1994 to discuss the recently published CIOMS ethics guidelines and to determine which participating clinical epidemiology units around the world were adequately protecting human subjects.

In the mid-1990s, many epidemiology graduate programs began incorporating an ethics curriculum (see Kenneth Goodman and Ronald Prineas in Chapter 10).72 During this period, the Society for Epidemiologic Research, the American College of Epidemiology, and other international professional organizations for epidemiologists began including ethics workshops in their annual meetings. By July 1994, membership in the American Public Health Association (APHA) Forum on Bioethics, which organizes sessions on bioethics and public health during the APHA annual meeting, had risen to 145 bioethicists, legal experts, epidemiologists, and other public health professionals.

These developments in professional ethics in epidemiology occurred against a background of social and political movements in the early 1990s that included vigorous efforts to ensure that women and minorities were adequately represented in research projects funded by NIH.73,74 NIH launched the Women's Health Initiative in 1991, and other epidemiological investigations of understudied women's health problems began during this period.75 Women's health advocates testified on Capitol Hill in support of increased federal spending for breast cancer research and improved procedures for recruiting and obtaining the informed consent of patients in breast cancer chemoprevention trials. Other important developments during this period included increased public concern about the integrity of scientific research.69,76
Historical Foundations 15 Another development in Europe and North America was renewed concern among legislators, data protection advocates, and members of the general public about the privacy and confidentiality of information in health information systems. In light of pending legislation in the European Community that would severely restrict the use of routinely collected medical data for epidemiological research,7 both the International Society for Pharmacoepidemiology Ethics Committee and the joint WHO– ISEE International Workshop on Ethical and Philosophical Issues in Environmental Epidemiology have made recommendations to policymakers and legislative bodies that underscore the societal value of epidemiological research (for example, the contribution of epidemiological studies to scientific knowledge about the etiologies of birth defects and cancer).70 From 1995 to the present, there has been an increased recognition of the societal importance of epidemiological research and practice. The ethical duties of epidemiologists have also been clarified.78 In addition, the number of publications on ethical issues in epidemiology has continued to increase.78–90 Many of these articles have dealt with professional responsibilities of epidemiologists.88,90 Ethical issues in public health practice have also increasingly been addressed,84,91 as discussed by Robert McKeown in Chapter 7. Interest in ethical issues in epidemiology has extended beyond North America and Europe to include researchers in many other parts of the world, including developing countries. One sign of the increased attention to ethics in epidemiology in recent years is the development of refined guidelines for epidemiologists and policy statements on data sharing, privacy and confidentiality protection, DNA testing for disease susceptibility, and other issues.86,87,92–94 Ethical issues in genetic epidemiology are discussed by Laura Beskow, Stephanie Malia Fullerton, and Wylie Burke in Chapter 8. The ISEE adopted ethics guidelines for environmental epidemiologists in 1999.95 The American College of Epidemiology adopted a set of ethics guidelines for epidemiologists in North America in 1999.96 Ethics surveys of epidemiologists and other public health professionals, public health students, and institutions that train epidemiologists and other public health professionals have provided information about the ethical interests and concerns of epidemiologists.97–101 Several institutions that train public health professionals have created new courses on ethics in epidemiology and public health.99,102 Other institutions have created curricula on public health ethics for epidemiology graduate students, and the Association of Schools of Public Health has developed model curricula in public health ethics, as discussed by Kenneth Goodman and Ronald Prineas in Chapter 10. In the United States, these efforts have been strengthened by training on ethical principles and IRB procedures recommended by the Office for Human Research Protections of the U.S. Department of Health and Human Services.103
2000 to the Present

The Health Insurance Portability and Accountability Act (HIPAA) of 1996 privacy rules took effect in the United States early in 2004 after years of planning and discussion.104 The regulations protect the privacy of certain individually identifiable health data, referred to as "protected health information." The privacy rules permit disclosures without individual authorization to public health authorities authorized by law to collect or receive the information to prevent or control disease, injury, or disability, including public health practice activities such as surveillance.105

In 2004, the American College of Epidemiology circulated a request to its members to identify problems implementing the new rules when conducting research. A policy forum on the impact of the new HIPAA privacy rules was held at the American College of Epidemiology meeting in Boston in 2004. The forum suggested that epidemiologists were encountering new obstacles to the performance of their research (for example, more hospitals were refusing to release medical records for research purposes).

These developments in ethics in epidemiology occurred in conjunction with a number of important biomedical research policy developments in the late twentieth and early twenty-first centuries, including the implementation of new NIH guidance on sharing research data from NIH grants and contracts and updated institutional policies following controversies involving conflicts of interest in research.

Contemporary epidemiological ethics are influenced by several major developments, including the completion of the Human Genome Project, the introduction of whole genome sequencing, the rise of exposomics, and the use of Big Data for health research.106–112 Epidemiologists are also contributing to knowledge about the health effects of climate change and global warming. The related ethical and social justice concerns are being addressed using the methods of public health ethics.113–118 Important advances have been made in compiling ethics cases suitable for ethics instruction in epidemiology, public health, and global health research and in examining important cases such as the Flint, Michigan, water contamination and the Guatemala Syphilis Study.119–122
Summary and Conclusions

The upsurge of interest in the ethics of epidemiological research and practice in recent decades could be regarded as a sign of both the maturation of epidemiology as a profession and the important role that epidemiology plays in contemporary society. In the spirit of William Farr and other founders of the public health movement, today's epidemiologists are addressing a wide range of public
health problems. Their research could give rise to new ethical problems, many of which are anticipated in this volume.
References

1. Brockington, C. F. "The History of Public Health." In The Theory and Practice of Public Health, ed. W. Hobson. Oxford University Press, 1971: 1–7.
2. Kerkhoff, A. H. M. "Origins of Modern Public Health and Preventive Medicine." In Ethical Dilemmas in Health Promotion, ed. S. Doxiadis. John Wiley & Sons, 1987: 35–45.
3. Lilienfeld, A. M., and Lilienfeld, D. E. "Threads of Epidemiologic History." In Foundations of Epidemiology. Oxford University Press, 1980: 23–45.
4. Rousseau, Jean-Jacques. The Basic Political Writings, trans. and ed. Donald A. Cress. Hackett Publishing Co., 1987.
5. Frank, Johann Peter. A System of Complete Medical Police, trans. Erna Lesky. Johns Hopkins University Press, 1976.
6. Porter, R. Disease, Medicine and Society in England, 1550–1860. Macmillan, 1987.
7. Temkin, O. "Health and Disease." In The Double Face of Janus and Other Essays in the History of Medicine, ed. Owsei Temkin. Johns Hopkins University Press, 1977.
8. Gregory, J. Lectures on the Duties and Qualifications of a Physician. W. Strahan and T. Cadell, 1772.
9. Rush, B. Medical Inquiries and Observations. Vol. 2, ch. 1. Published as a single essay titled An oration . . . An Enquiry into the Influence of Physical Causes upon the Moral Faculty. Charles Cist, 1786.
10. Chave, S. P. W. "The Origins and Development of Public Health." In Oxford Textbook of Public Health, ed. W. W. Holland, R. Detels, and G. Knox. Oxford University Press, 1984: 3–19.
11. Bentham, J. An Introduction to the Principles of Morals and Legislation, ed. J. H. Burns and H. L. A. Hart. Clarendon Press, 1970.
12. Mill, J. S. "Utilitarianism." In Collected Works of John Stuart Mill, vol. 10. University of Toronto Press, 1969.
13. Chadwick, E. Report on the Sanitary Condition of the Labouring Population of Great Britain, ed. M. W. Flinn. Edinburgh University Press, 1964.
14. Shattuck, L. Report of the Sanitary Commission of Massachusetts 1850. Cambridge, MA: 1948.
15. Greison, G. L. "Pasteur's Work on Rabies: Re-Examining the Ethical Issues," Hastings Center Report 8 (1978): 26–33.
16. Terris, M. "The Changing Relationships of Epidemiology and Society: The Robert Cruikshank Lecture," Journal of Public Health Policy 6 (1985): 15–36.
17. Terris, M. "Epidemiology and the Public Health Movement," Journal of Public Health Policy 8 (1987): 315–329.
18. Hanlon, J. J., and Pickett, G. E. "Philosophy and Purpose of Public Health." In Public Health Administration and Practice, 7th ed. C.V. Mosby Company, 1979: 2–12.
19. Bean, W. B. "Walter Reed and the Ordeal of Human Experiments," Bulletin of the History of Medicine 51 (1977): 75–92.
20. Susser, M. "Epidemiology in the United States After World War II: The Evolution of Technique," Epidemiologic Reviews 7 (1985): 147–177.
21. Doll, R., and Hill, A. B. "Lung Cancer and Other Causes of Death in Relation to Smoking," British Medical Journal 2 (1956): 1071–1081.
22. MacMahon, B., Pugh, T. G., and Ipsen, J. Epidemiological Methods. Little, Brown & Co., 1960.
23. Hill, A. B. "The Clinical Trial," British Medical Journal 7 (1951): 278–282.
24. Mainland, D. "The Clinical Trial: Some Difficulties and Suggestions," Journal of Chronic Diseases 11 (1959): 484–496.
25. Levine, R. J. Ethics and Regulation of Clinical Research. Yale University Press, 1986.
26. Katz, J. "The Regulation of Human Experimentation in the United States—A Personal Odyssey," IRB: A Review of Human Subjects Research 9 (1987): 1–6.
27. Katz, J. "Ethics and Clinical Research Revisited. A Tribute to Henry K. Beecher," Hastings Center Report 23 (1993): 31–39.
28. Cushing, H. The Life of Sir William Osler. Oxford University Press, 1940.
29. Brady, J. V., and Jonsen, A. R. "The Evolution of Regulatory Influences on Research with Human Subjects." In Human Subjects Research, ed. R. Greenwald, M. K. Ryan, and J. E. Mulvihill. Plenum Press, 1982.
30. Ivy, A. C. "The History and Ethics of the Use of Human Subjects in Medical Experiments," Science 108 (1948): 1–5.
31. Beecher, H. Experimentation in Man. Charles C. Thomas, 1959.
32. Brieger, G. H. "Human Experimentation: History." In Encyclopedia of Bioethics, 4 vols., ed. W. T. Reich. Free Press, 1978: 684–692.
33. Faden, R. R., and Beauchamp, T. L. A History and Theory of Informed Consent. Oxford University Press, 1986.
34. United States v. Karl Brandt, Trials of War Criminals before the Nuremberg Military Tribunals under Control Council Law No. 10, vols. 1 and 2. "The Medical Case" (Military Tribunal I, 1947). U.S. Government Printing Office, 1948–1949.
35. Howard-Jones, N. "Human Experimentation in Historical and Ethical Perspectives," Social Sciences and Medicine 16 (1982): 1429–1448.
36. Katz, J. The Silent World of Doctor and Patient. Free Press, 1984.
37. World Medical Association. "Declaration of Helsinki: Recommendations Guiding Medical Doctors in Biomedical Research Involving Human Subjects," New England Journal of Medicine 271 (1964): 473.
38. Winton, R. R. "The Significance of the Declaration of Helsinki: An Interpretive Documentary," World Medical Journal 25 (1978): 58–59.
39. World Medical Association. "Human Experimentation: Declaration of Helsinki," Annals of Internal Medicine 65 (1966): 367–368.
40. Public Law 87-781, U.S.C. 355, 76 Stat. 780; amending Federal Food, Drug, and Cosmetic Act.
41. Federal Food, Drug, and Cosmetic Act, Sec. 505(i), 21 U.S.C. 355(i).
42. Curran, W. J. "Governmental Regulation of the Use of Human Subjects in Medical Research: The Approach of Two Federal Agencies," Daedalus 98 (Spring 1969).
43. Curran, W. J. "1938–1968: The FDA, the Drug Industry, the Medical Profession, and the Public." In Safeguarding the Public: Historical Aspects of Medicinal Drug Control, ed. J. Blake. Johns Hopkins University Press, 1970.
44. Beecher, H. K. "Ethics and Clinical Research," New England Journal of Medicine 274 (1966): 1355–1360.
45. Livingston, R. B. Memorandum to Director J. A. Shannon on "Moral and Ethical Aspects of Clinical Investigation" (February 20, 1964).
46. Memorandum from Clinical Director, NCI (Nathaniel I. Berlin) to Director of Laboratories and Clinics, OD-DIR, on "Comments on Memorandum of November 4, 1964 from the Associate Chief of Program Development DRFR, to the Director, NIH" (August 30, 1965).
47. Livingston, R. B. Memorandum to Director J. A. Shannon on "Progress Report on Survey of Moral and Ethical Aspects of Clinical Investigation" (November 4, 1964).
48. Shannon, J. A. Memorandum and Transmittal Letter to the U.S. Surgeon General on "Moral and Ethical Aspects of Clinical Investigations" (January 7, 1965).
49. Transcript of the NAHC meeting. Washington, DC, September 28, 1965.
50. "Resolution Concerning Clinical Research on Humans" (December 3, 1965), transmitted in a Memorandum from Dr. S. John Reisman, Executive Secretary, NAHC to Dr. J. A. Shannon ("Resolution of Council") on December 6, 1965. Reported in a Draft Statement of Policy on January 20, 1966.
51. U.S. Public Health Service, Division of Research Grants, Policy and Procedure Order (PPO) 129, February 8, 1966, "Clinical Investigations Using Human Subjects," signed by Ernest M. Allen, Grants Policy Officer.
52. Curran, W. "Evolution of Formal Mechanisms for Ethical Review of Clinical Research." In Medical Experimentation and the Protection of Human Rights, ed. N. Howard-Jones and Z. Bankowski. Council for International Organizations of Medical Sciences, 1978.
53. "Friendly Adversaries and Human Experimentation" [editorial], New England Journal of Medicine 275 (1966): 786.
54. Marston, R. Q. "Medical Science, the Clinical Trial, and Society," speech delivered at the University of Virginia on November 10, 1972 [typescript].
55. National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. The Belmont Report: Ethical Principles and Guidelines for the Protection of Human Subjects of Research. U.S. Government Printing Office, 1978.
56. Brandt, A. M. "Racism and Research: The Case of the Tuskegee Syphilis Study," Hastings Center Report 8 (1978): 21–29.
57. Final Report of the Tuskegee Syphilis Study Ad Hoc Panel. Public Health Service, April 28, 1973.
58. Hearings before the Subcommittee on Health of the Committee on Labor and Public Welfare, U.S. Senate. "Quality of Health Care—Human Experimentation" (1973).
59. Soskolne, C. L. "Epidemiology: Questions of Science, Ethics, Morality, and Law," American Journal of Epidemiology 129 (1989): 1–18.
60. Gordis, L., Gold, E., and Seltser, R. "Privacy Protection in Epidemiologic and Medical Research: A Challenge and a Responsibility," American Journal of Epidemiology 105 (1977): 163–168.
61. Privacy Protection Study Commission. Personal Privacy in an Information Society. U.S. Government Printing Office, 1977.
62. Susser, M., Stein, Z., and Kline, J. "Ethics in Epidemiology," Annals of the American Academy of Political and Social Sciences 437 (1978): 128–141.
63. Soskolne, C. L., and Zeighami, E. A. "Research, Interest Groups, and the Review Process." Paper presented at the 10th Scientific Meeting of the International Epidemiological Association, Vancouver, British Columbia, Canada, August 19–25, 1984.
64. Soskolne, C. L. "Epidemiological Research, Interest Groups and the Review Process," Journal of Public Health Policy 7 (1985): 173–184.
20 Ethics and Epidemiology 65. Fayerweather, W. E., Higginson, J., and Beauchamp, T. L., eds. “Industrial Epidemiology Forum’s Conference on Ethics in Epidemiology,” Journal of Clinical Epidemiology 44 (Suppl. I) (1991): 1S–169S. 66. Beauchamp, T. L., Cook, R. R., Fayerweaither, W. E., et al. “Ethical Guidelines for Epidemiologists,” Journal of Clinical Epidemiology 44 (1991): 151S–169S. 67. American Public Health Association. 1991 Section newsletter Epidemiology. Winter 1990. 68. Council for International Organizations of Medical Sciences. “International Guidelines for Ethical Review of Epidemiological Studies,” Law, Medicine and Health Care 19 (1991): 247–258. 69. Soskolne, C. L., ed. “Proceedings of the Symposium on Ethics and Law in Environmental Epidemiology,” Journal of Exposure Analysis and Environmental Epidemiology 3 (Suppl. 1) (1993). 70. World Health Organization Meeting Report. “Joint WHO- ISEE International Workshop on Ethical and Philosophical Issues in Environmental Epidemiology, Research Triangle Park, North Carolina, U.S.A., September 16–18, 1994,” Science of the Total Environment 184 (1996): 131–136. 71. Soskolne, C. L., Jhangri, G. S., Hunter, B., and Close, M. “Interim Report on the International Society for Environmental Epidemiology/ Global Environmental Epidemiology Network Ethics Survey.” Working paper presented at the joint WHO/ ISEE International Workshop on Ethical and Philosophical Issues in Environmental Epidemiology, Research Triangle Park, North Carolina, U.S.A., September 16–18, 1994. 72. Coughlin, S. S., Etheredge, G. D., Metayer, C., and Martin, S. A., Jr. “Curriculum Development in Epidemiology and Ethics at the Tulane School of Public Health and Tropical Medicine. Results of a Needs Assessment and Plans for the Future.” Paper presented to the Association of Schools of Public Health Council on Epidemiology, Washington, DC, October 30, 1994. 73. Coughlin, S. S., and Beauchamp, T. L., “Ethics, Scientific Validity, and the Design of Epidemiologic Studies,” Epidemiology 3 (1992): 343–347. 74. U.S. House of Representatives, Committee on Energy and Commerce. National Institutes of Health Revitalization Amendments of 1990 (report 101-869). U.S. Government Printing Office, 1990. 75. Cummings, N. B. “Women’s Health and Nutrition Research: US Governmental Concerns,” Journal of the American College of Nutrition 12 (1993): 329–336. 76. Weed, D. W. “Preventing Scientific Misconduct,” American Journal of Public Health 88 (1998): 125–129. 77. James, R. C. “Consent and the Electronic Person.” Working paper presented at the joint WHO/ISEE International Workshop on Ethical and Philosophical Issues in Environmental Epidemiology, Research Triangle Park, North Carolina, U.S.A., September 16–18, 1994. 78. Coughlin, S. S. “Ethics in Epidemiology at the End of the 20th Century: Ethics, Values, and Mission Statements,” Epidemiologic Reviews 22 (2000): 169–175. 79. Soskolne, C. L., and Sieswerda, L. E. “Implementing Ethics in the Professions: Examples from Environmental Epidemiology,” Science and Engineering Ethics 9 (2003): 181–190. 80. Soskolne, C. L., and Bertollini, R., eds. “Ethical and Philosophical Issues in Environmental Epidemiology. Proceedings of a WHO/ISEE International Workshop,
Historical Foundations 21 September 16–18, 1994, Research Triangle Park, North Carolina, U.S.A.,” Science of the Total Environment 184 (1996). 81. Coughlin, S. S., ed. Ethics in Epidemiology and Clinical Research: Annotated Readings. Epidemiology Resources Inc., 1995. 82. Weed, D. L., and McKeown, R. E. “Epidemiology and Virtue Ethics,” International Journal of Epidemiology 27 (1998): 343–349. 83. Weed, D. L., and Coughlin, S. S. “New Ethics Guidelines for Epidemiology: Background and Rationale,” Annals of Epidemiology 9 (1999): 277–280. 84. Coughlin, S. S., Soskolne, C. L., and Goodman, K. W. Case Studies in Public Health Ethics. American Public Health Association, 1997. 85. Coughlin, S. S. Epidemiology and Public Health Practice: Collected Works. Quill Publications, 1997:9–26. 86. Hunter, D., and Caporaso, N. “Informed Consent in Epidemiologic Studies Involving Genetic Markers,” Epidemiology 8 (1997): 596–599. 87. American College of Epidemiology. “Draft Policy Statement on Privacy of Medical Records,” Epidemiology Monitor 19 (1998): 9–11. 88. Weed, D. L., and Mink, P. J. “Roles and Responsibilities of Epidemiologists,” Annals of Epidemiology 12 (2002): 67–72. 89. Soskolne, C. L., and Sieswerda, L. E. “Implementing Ethics in the Professions: Examples from Environmental Epidemiologists,” Science and Engineering Ethics 9 (2003): 181–190. 90. Weed, D. L., and McKeown, R. E. “Science and Social Responsibility in Public Health,” Environmental Health Perspectives 111 (2003): 1804–1848. 91. Fairchild, A. L., and Bayer, R. “Ethics and the Conduct of Public Health Surveillance,” Science 303 (2004): 631–632. 92. Kahn, K. S. “Epidemiology and Ethics: The Perspective of the Third World,” Journal of Public Health Policy 15 (1994): 218–225. 93. Clayton, E. W., Steinberg, K. K., Khoury, M. J., et al. “Informed Consent for Genetic Research on Stored Tissue Samples,” Journal of the American Medical Association 274 (1995): 1786–1792. 94. Beskow, L. M., Burke, W., Merz, J. F., et al. “Informed Consent for Population-Based Research Involving Genetics,” Journal of the American Medical Association 286 (2003): 2315–2321. 95. Soskolne, C. L., and Light, A. “Toward Ethics Guidelines for Environmental Epidemiologists,” Science of the Total Environment 184 (1996): 137–147. 96. American College of Epidemiology. “Ethics Guidelines,” Annals of Epidemiology 10 (2000): 487–497. 97. Soskolne, C. L., Jhangri, G. S., Hunter, B., et al. “Interim Report on the Joint International Society for Environmental Epidemiology (ISEE)— Global Environmental Epidemiology Network (GEENET) Ethics Survey,” Science of the Total Environment 184 (1996): 5–11. 98. Prineas, R. J., Goodman, K., Soskolne, C. L., et al. “Findings from the American College of Epidemiology Ethics Survey on the Need for Ethics Guidelines for Epidemiologists,” Annals of Epidemiology 8 (1998): 482–489. 99. Rossignol, A. M., and Goodmonson, S. “Are Ethical Topics in Epidemiology Included in the Graduate Epidemiology Curricula?” American Journal of Epidemiology 142 (1996): 1265–1268.
22 Ethics and Epidemiology 100. Coughlin, S. S., Etheredge, G. D., Metayer, C., et al. “Remember Tuskegee: Public Health Student Knowledge of the Ethical Significance of the Tuskegee Syphilis Study,” American Journal of Preventive Medicine 12 (1996): 242–246. 101. Coughlin, S. S., Katz, W. H., and Mattison, D. R. “Ethics Instruction at Schools of Public Health in the United States,” American Journal of Public Health 89 (1999): 768–770. 102. Coughlin, S. S. “Model Curricula in Public Health Ethics,” American Journal of Preventive Medicine 12 (1996): 247–251. 103. Office for Human Research Protections, U.S. Department of Health and Human Services. Federalwide Assurance (FWA) for the Protection of Human Subjects. 104. Health Insurance Portability and Accountability Act of 1996. Public Law No. 104- 191, 110 Stat. 1936 (1996). 105. Epidemiology Program Office, U.S. Department of Health and Human Services. “HIPAA Privacy Rule and Public Health. Guidance from the CDC and the U.S. Department of Health and Human Services.” Morbidity and Mortality Weekly Report 52 (2003): 1–12. 106. Venter, J. C., Adams, M. D., Myers, E. W., et al. “The Sequence of the Human Genome,” Science 291 (2001): 1304–1351. 107. Lander, E. S. “Initial Impact of the Sequencing of the Human Genome,” Nature 470 (2011): 187–197. 108. Wild, C. P. “Complementing the Genome with an ‘Exposome’: The Outstanding Challenge of Environmental Exposure Measurement in Molecular Epidemiology,” Cancer Epidemiology, Biomarkers and Prevention 14 (2005): 1847–1850. 109. Juarez, P. “Sequencing the Public Health Genome,” Journal of Health Care for the Poor and Underserved 24 (2013): 114–120. 110. Coughlin, S. S. “Toward a Road Map for Global—Omics: A Primer on—Omic Technologies,” American Journal of Epidemiology 180 (2014): 1188–1195. 111. Coughlin, S. S., and Dawson, A. “Ethical, Legal and Social Issues in Exposomics: A Call for Research Investment,” Public Health Ethics 7 (2014): 207–210. 112. Salerno, J., Knoppers, B. M., Lee, L. M., et al. “Ethics, Big Data and Computing in Epidemiology and Public Health,” Annals of Epidemiology 27 (2017): 297–301. 113. Childress, J. F., Faden, R. R., Gaare, R. D., et al. “Public Health Ethics: Mapping the Terrain,” Journal of Law and Medical Ethics 30 (2002): 170–178. 114. Kass, N. E. “An Ethics Framework for Public Health,” American Journal of Public Health 91 (2001): 1776–1782. 115. Grill, K., and Dawson, A. “Ethical Frameworks in Public Health Decision- Making: Defending a Value-Based and Pluralist Approach,” Health Care Analysis 25 (2017): 291–307. 116. Marckmann, G., Schmidt, H., Sofaer, N., and Strech, D. “Putting Public Health Ethics into Practice: A Systematic Framework,” Frontiers in Public Health 3 (2015): 1–8. 117. Powers, M., Faden, R., and Saghai, Y. “Liberty, Mill and the Framework of Public Health Ethics,” Public Health Ethics 5 (2012): 6–15. 118. Coughlin, S. S. “How Many Principles for Public Health Ethics?” Open Public Health Journal 1 (2008): 8–16. 119. Hanna-Attisha, M. “Flint Kids: Tragic, Resilient, and Exemplary,” American Journal of Public Health 107 (2017): 651–652.
Historical Foundations 23 120. Ortmann, L. W., Barrett, D. H., Saenz, C., et al., eds. Public Health Ethics Global Cases, Practice, and Context. https://www.ncbi.nlm.nih.gov/books/NBK435780/ pdf/Bookshelf_NBK435780.pdf 121. Lynch, H. F. “The Rights and Wrongs of Intentional Exposure Research: Contextualising the Guatemala STD Inoculation Study,” Journal of Medical Ethics 38 (2012): 513–515. 122. Spector-Bagdady, K., and Lombardo, P. A. “From In Vivo to In Vitro: How the Guatemala STD Experiments Transformed Bodies Into Biospecimens,” Milbank Quarterly 96 (2018): 244–271.
PART II
KEY VALUES AND PRINCIPLES
2
Epidemiology and Informed Consent
Anna C. Mastroianni and Jeffrey P. Kahn
Introduction
Informed consent is a central concept and practice in the protection of the rights and interests of both patients receiving clinical care and individuals participating in research. A commitment to the ethical principles underlying informed consent dates back to the early twentieth century, as reflected in many countries' laws governing the physician–patient relationship. Later, informed consent was codified into national policies and international guidelines and standards for ethical research on human subjects. These parallel origins of informed consent and its applications—based on distinctions between clinical practice and research—do not naturally apply to or readily translate to epidemiology. At times, epidemiology focuses on providing benefit to individuals affected by illness or disease, which gives it the marks of clinical practice. At other times it is focused on obtaining information about groups and populations rather than particular individuals, whether through public health research or surveillance. Requirements for consent have been and are treated differently in epidemiology depending on the type of activity and sometimes the practicability of seeking consent from participants. As we will see, the relationship between epidemiology and informed consent is far from clear-cut, creating challenges for policy and practice.
Informed Consent and the Scope of Epidemiology
Informed consent occupies an unusual space in regard to epidemiology because epidemiology encompasses three main activities: (1) public health research (e.g., the study of disease patterns among large groups), (2) public health practice (e.g., infectious disease tracing), and (3) surveillance, which sits in a space between the two. Each activity carries its own implications for whether and how informed consent should be approached. This hybrid feature of epidemiology is unique in biomedicine and public health, where the distinctions between research and practice are usually less ambiguous.
Informed consent in public health contexts in many respects parallels informed consent in biomedical research: the goals of informed consent are to ensure participants' understanding and protect their right of self-determination when information or biosamples are sought from them. This contrasts with the clinical medicine context, where the goal of informed consent is to foster autonomous decision-making by patients seeking health care, an intervention that has the potential to provide the consenting individual with direct medical benefit. When public health efforts involve research, such as in studies using group or population data on disease incidence and prevalence, requirements for informed consent of subjects seem to follow from the same justifications on which informed consent in biomedical and other types of research relies. However, this research model poses significant challenges in population settings. Obtaining informed consent can be onerous and rarely succeeds in achieving consent from all members of a population group under study, thereby limiting the statistical power of the research. These challenges were acknowledged in the Council for International Organizations of Medical Sciences' (CIOMS) 1991 issuance of international consensus guidelines specifically addressing epidemiological studies, which stressed the need for "flexibility" in the application of traditional research ethics principles, including those requiring individual informed consent.1,2 CIOMS ethics guidelines no longer segregate epidemiological research from other health-related research,3 but the historical treatment of epidemiological research as different from other types of research reveals a long-recognized tension between the ethical bases for obtaining informed consent and the goals of public health research—an issue at the core of ethics and epidemiology. In contrast, many activities falling in the scope of public health practice have not adhered to research or clinical approaches to informed consent, sometimes even being referred to as "nonresearch."4 Notably, public health surveillance has long been conducted without informed consent, despite criticism and objections.2 Exemptions from informed consent requirements in this public health context are often reflected in laws, such as those concerning mandatory collection and reporting of personal health information about individuals infected by some communicable diseases. In lieu of informed consent, other protections, including confidentiality safeguards, are instituted to protect the interests of individuals and enable the realization of important public health goals of surveillance and reporting. The justifications for forgoing informed consent rest on the importance of the activity to the public's health coupled with the infeasibility or even the impossibility of obtaining individual informed consent.2 The World Health Organization (WHO) Guidelines on Ethical Issues in Public Health Surveillance go even further, characterizing participation in surveillance activities as an affirmative ethical obligation.2
This chapter argues that informed consent is required of subjects in epidemiological research and should be part of the practice of epidemiology when individual rights and interests deserve the sorts of protections offered by informed consent. The chapter articulates those protections and their ethics foundations, discusses alternative approaches to achieving the goals of informed consent, and highlights some of the practical challenges in its implementation.
The Foundations of Informed Consent in Epidemiology
The Goals of Informed Consent
The practice of informed consent has at its foundation the primary goal of ensuring the voluntary agreement of individuals to either receive medical care or participate in research, underpinned by the ethics principle of respect for the autonomy of the individual, sometimes termed respect for persons. To realize this goal in practice, individuals must be sufficiently informed and their decisions must be voluntary. In their landmark analysis of the conceptual foundations of informed consent, Faden and Beauchamp argue that any truly informed consent requires an autonomous choice by the individual. On this analysis, actions must satisfy three criteria in order to be autonomous: they must (1) be intentional, (2) reflect understanding on the part of the individual, and (3) be free of control by others.5 Intentionality refers to the purpose or plan behind one's actions: the intention to act, or purposefulness. This can be particularly important when groups or communities have a cultural background that may not have a strong commitment to individual decision-making. For example, some Southeast Asian cultures, such as the Hmong, defer to clan elders for important decisions rather than relying on individual decision-making.6 In such cases, individual intentionality is not a fundamental aspect of decision-making, and so the standard understanding of consent may not apply. Second, a person must have sufficient understanding of what he or she intends. The "informed" part of informed consent presumes that adequate and relevant information is shared with the individual through a disclosure process and that he or she has an adequate understanding of the information to make a decision that truly reflects his or her intentions. This can be a challenge, for instance in research involving individuals of limited literacy or limited English-speaking proficiency.6 While the goal must be full understanding, that is often unrealistic even for a single individual, who may well have a full understanding of some aspects of the information being disclosed and only limited understanding of other aspects. So, if full understanding is likely elusive, what level
of understanding is sufficient? Scholars writing on this issue have identified the threshold as a substantial understanding of the nature of the proposed action and the positive and negative consequences of a decision.5 Finally, non-control or voluntariness is the third criterion fundamental to autonomous actions. Its importance in informed consent is clear and unconditional as a key principle of the Nuremberg Code and other influential ethics documents, such as the articulation of the principle of respect for persons in The Belmont Report (discussed later in the chapter).7,8 The amount of control over an individual's actions (his or her voluntariness) can be understood as occupying a continuum,5 with three general types of influence that may undermine voluntariness. Ranked from least to most influence over voluntariness, they are (1) persuasion, (2) manipulation, and (3) coercion. Persuasion, or appeal to reason, is the least problematic form of influence, residing at one end of a continuum. When a person is successfully persuaded, he or she acts because the information and reasons presented appeal as convincing arguments. Such influence is free from control and therefore compatible with autonomous actions.9 At the other end of the continuum is coercion, in which a person is caused to act in ways that are against his or her wishes and only does so because of the threat of harm.10 A coerced action can never reflect autonomous choice because it relies on a response to a credible threat of harm, such as the classic example of the demand to turn over one's wallet at gunpoint. The decision to hand over the wallet is not autonomous but rather made under the greatest sort of duress. On the continuum between persuasion and coercion is manipulation, by which influence is exerted over another by taking advantage of weaknesses or by devious behavior. Manipulation carries a sense of falsification or trickery because the manipulator usually has a goal of personal gain in his or her actions. Given the sense of dishonesty that comes with manipulation, it has little place as part of acceptable influence over autonomous decision-making.
Consent as Promotion of Self-Determination and Informed Decision-Making
The history of informed consent indicates that consent has served as a means of promoting and protecting the self-determination of individuals, safeguarding their rights and interests by allowing informed decision-making. As noted earlier, informed decision-making is realized by the disclosure and discussion of important information and a subsequent assessment by a patient or research subject about whether an appropriate risk/benefit balance exists. In this way, informed consent helps to protect and serve the rights and interests of both patients and research subjects. For epidemiological practice and research, informed consent plays a lesser role in the assessment of risk/benefit balance but has more obvious importance in protection from rights violations, as discussed later in the chapter.
Consent as a Means of Determining Acceptable Risk for Individuals Since the risks and potential benefits of research are best weighed by individuals, informed consent gives prospective research subjects the opportunity to decide (1) whether the risks posed in research constitute harms from their individual perspectives and (2) whether the risks are sufficiently outweighed by potential benefits to be acceptable to them. The history of informed consent in biomedical research suggests that the perception of research risks has been focused primarily on the physical risks posed by research participation. This focus makes sense in the context of clinical trials but quickly begins to feel out of place when applied to public health or behavioral research. In behavioral research, although subjects may not experience physical risks, they may be exposed to risks of psychosocial harm. Public health efforts, such as surveillance or epidemiological research, pose little to no physical risk and are less likely to carry psychosocial risk, except in limited cases in which individual disease status may be stigmatizing and is disclosed to the individual or others. For example, infectious disease testing programs can be critically important for assessing the incidence and prevalence of infection. Disclosure of test results for a disease such as tuberculosis might carry a much lower risk of psychosocial harm than would disclosure of HIV status. In public health contexts, individuals participating in research or those whose information is collected in surveillance efforts may require protection not only from physical or psychosocial harm but from some violation of their rights (termed “wrongs”), such as violations of rights to privacy and confidentiality even if no harm results. Because the conceptualization of consent that underpins laws and policies governing informed consent is based largely on reactions to the history of mistreatment of patients or research subjects, the emphasis in policy and practice for protecting individuals from harms or wrongs is both appropriate and understandable.
Informed Consent in Practice and Policy
The discussion thus far has laid out the conceptual and normative underpinnings of informed consent: autonomous decision-making as a means of realizing and respecting the self-determination of individuals. How do those underpinnings translate to the practice and policy of informed consent in epidemiology? The policies that serve as an answer have evolved over time, and the practices meant to reflect them aim to serve these goals. The epidemiologist's job would be much easier if demographic and other personal information about individuals were linked to disease registries, vital statistics, and other public records. As Alexander Capron has commented, a place
where meticulously kept, fully linked data are available to epidemiologists would deserve to be called "epidemiologists' heaven."11 However, it would be very difficult to protect the rights and interests of individuals in such a place. Policies and practices of informed consent can play a useful function in creating an environment of accountability between professionals, those participating in clinical care or research, and the institutions in which clinical care, public health practice, and biomedical research are carried out. This environment leads to important expectations for practice on the part of those involved throughout both patient care and the research enterprise and is a crucial element in building trust.12,13 History has proven that lapses in informed consent and inattention to sociocultural experiences in research and within the health-care system can undermine public health efforts and public trust overall.14–17 Without trust on the part of patients, research participants, and society at large, neither the health-care system nor the research enterprise will function as it should.
Historical Perspective The practice of informed consent initially developed out of concerns that patients needed to be protected from uninvited invasive procedures (and their harms) at the hands of physicians and medical institutions. This idea dates to the early part of the twentieth century, when paternalism on the part of physicians toward patients was the norm rather than the exception. In the 1914 U.S. court opinion Schloendorff v. Society of New York Hospital, Justice Benjamin Cardozo wrote that “[e]very human being of adult years and sound mind has a right to determine what shall be done with his own body; and a surgeon who performs an operation without his patient’s consent commits an assault, for which he is liable in damages.”18 (This statement came in response to a case brought by a woman who agreed to undergo exploratory surgery but awoke from surgery to find that she had had a tumor removed by surgeons who deemed it necessary, but who did not ask her permission to do so.) The concept of self-determination has provided justification for informed consent as a means to protect an individual’s right to determine what will happen to his or her body. The legal recognition of the patient’s right to give consent to treatment was based on respect for the self-determination of individuals, or the principle of respect for autonomy. Only in the late 1950s and into the following decades was the notion of “informed” added to consent in the context of medical decision-making. This early requirement for information centered on disclosure of relevant information to patients, focusing on the risks and potential benefits of particular courses of action.19–21 The standard for what constituted adequate disclosure evolved from a standard focused on professional practice to a standard
Informed Consent 33 focused on what patients would need and want to know about risks and possible benefits. This legal evolution was occurring in the context of cases addressing voluntary decision-making by patients in clinical care. A similar policy evolution in informed consent occurred in the context of research on human subjects. Voluntary consent is a cornerstone of the Nuremberg Code, created in response to Nazi atrocities during World War II, and has served as a foundation for international law (e.g., United Nations International Covenant on Civil and Political Rights)22 and ethics codes and guidance for the conduct of medical research (e.g., the World Medical Association’s 1964 promulgation of the Declaration of Helsinki).23 Nonetheless, starting in the late 1960s and into the mid-1970s, information about unethical research practices with human subjects came to public attention through explosive exposés. In the United States., those examples included the infamous Tuskegee Syphilis Study and other well- chronicled studies, such as those conducted at the Willowbrook State School and the New York Jewish Chronic Disease Hospital.5 These research scandals led to the appointment of influential congressional commissions, such as the Tuskegee Syphilis Study Ad Hoc Advisory Panel24 and the National Commission for the Protection of Human Subjects in Biomedical and Behavioral Research,8 and the formulation of research ethics policies and procedures that were incorporated into U.S. law. The various scandals pointed out several common problems in studies, including absent or inadequate consent. Also highlighted were the use of subjects from “vulnerable groups” and exposure of the subjects to risk without potential for benefit to them as individuals.12 Officials in the United States in the late 1970s focused the government’s first federal policies relating to research participation on protecting subjects from both physical harm and the violation of their rights, using the now-standard federal nomenclature of “policies for the protection of human subjects.”25 Such protections, which exist in similar form today, included requirements for informed consent documents executed by all subjects, prospective review and approval of research by research ethics review committees or institutional review boards charged with assessing appropriate risk/benefit balancing and the fair recruitment of subjects, and limitations on acceptable risk for subjects from groups deemed to deserve additional protections, including children, prisoners, and pregnant women and fetuses. In light of the relatively recent history of informed consent, it is not surprising that the importance of this ethical concept was not recognized in the early years of epidemiology. John Snow, who is sometimes referred to as the “father of epidemiology,”26 relied on personally identifiable information about individuals that was collected without their knowledge or consent to locate the source of cholera-contaminated water in 1850s London.27 As early as the 1940s, however, the “crown jewel of epidemiology,”28 known as the Framingham Heart Study,
proceeded with explicit consent to voluntary participation and to the risks related to privacy breaches.27 (That U.S. study is an ongoing, long-term longitudinal cohort study designed to reveal the causes of heart disease in which participants must agree to medical histories, examinations, and tests.29) Discussions in the 1980s about HIV/AIDS confidentiality30 and in the early 1990s about health data privacy concerns more generally (notably in the United States and Germany) prompted awareness of informed consent and other ethical issues related to epidemiological research.27 Those discussions ultimately led national and international professional bodies to formalize policies directly targeting ethical issues arising in epidemiology,27 including consideration of the circumstances under which informed consent might be waived.
Approaches and Applications for Epidemiology The discussion in the chapter so far leads to the conclusion that informed consent is an important practice for many aspects of epidemiology. Given that reasoning, should there be special rules for informed consent in this context? Should explicit informed consent be required less often than in other kinds of research, given the central role of epidemiology in public health? Alexander Capron has described the tension in epidemiology as situated between deontological and utilitarian commitments.11 A deontological, or duty- based, view holds that research can only proceed when those exposed to its risks knowingly agree to participate, based on the ethical duties of truthfulness and respect for individual decision-making. This is one type of justification for informed consent. By contrast, a utilitarian view might hold that researchers have a moral duty to advance knowledge in ways that improve the good of the whole, based on ethical duties to engage in policies and actions that produce the greatest good for the greatest number of people. The fact that this objective can come at the expense of a few individuals harmed or wronged in research is an acceptable moral cost of serving the greater good. Capron proposes that there need not be an absolute resolution of this tension but that society must “weigh the value of knowledge (both for its own sake and as a means for improving life) against many other values, prime among them autonomy, beneficence [doing good for others] and justice.”11 We live with this tension in epidemiology, negotiating practices and policies that do their utmost to respect these values while collecting the knowledge critical to protecting society. The challenge for surveillance and other epidemiological methods is how to achieve the laudable goals of public health that underpin efforts to understand illness and disease at the population level while respecting the concepts and principles underlying informed consent. Guidance from a range of sources,
Informed Consent 35 such as WHO, CIOMS, the U.S. Centers for Disease Control and Prevention, and the American College of Epidemiology, suggest that there is some variability in requirements for informed consent for certain types of research or under specific conditions or if an activity is classified as public health practice.2–4,25,31 For example, waivers to informed consent can be justified by assessment of the probability and magnitude of harm,3 the need for exigency,3,4,32 practicability,3 and/or legal requirements.25,33 What the discussion so far should make clear is that the goals and approaches in epidemiology can make informed consent difficult, if not impossible, to achieve. For example, epidemiologists interested in understanding the course of HIV-related disease in HIV-infected members of the military would need to review medical records to identify appropriate individuals to study, raising questions about consent for records review. Because of the sheer quantity of records involved in such a review, it would be impossible to seek consent from all members of the military before accessing medical records to ascertain HIV infection. Epidemiologists studying workplace injury could be hampered by the need for consent, as workers may be reluctant to participate out of fear of discovery and disclosure of injury-related information that might result in job termination. In addition, public health officials interested in assessing the effects of school-based drug and alcohol education or smoking cessation may lose important data if consent from students and/or their parents is required before distributing even anonymous surveys about drug or tobacco use since some students or their parents will refuse. In these examples, the quality of epidemiological information is undermined by the constraints created by attempts to meet the demands of the prospective informed consent of individuals. Some modifications or adaptations may be available, however, to achieve the aforementioned goals of informed consent while allowing important epidemiological work to go forward. There is, however, more than one recognized approach to achieving the goals of informed consent. Each approach deserves consideration in epidemiological applications. The traditional approach involves a prospective process that conveys information about a procedure or research, its risks, benefits, and alternatives, and ensures that the individual comprehends that information and that consent is given voluntarily. Traditional approaches also include opportunities for proxy consent for those who are not capable of consenting, like the process of parental permission and child assent that may be used with pediatric populations. Country-specific laws and global guidelines inform the appropriate approach to authorization (e.g., oral, written, both, waiver/exception), and policies developed by public health authorities may specify applications. Although country-specific laws also often dictate the general content of informed consent, such as a right to withdraw from research, concerns about exploitation of
36 Ethics and Epidemiology participants in resource-poor countries have prompted global recommendations for inclusion of additional content in informed consent processes, such as information about care for participants’ health needs during and after research (CIOMS International Ethical Guidelines for Health-Related Research Involving Humans [hereafter CIOMS Guidelines], Guideline 6).3 Over time, several other approaches have emerged that have value for epidemiological applications, spurred in some cases by a growing sensitivity to cultural differences and the relationships of individuals to their communities (e.g., CIOMS Guidelines 7 and 15);3 by privacy concerns related to technological advancements in the collection, management, and storage of large amounts of data (e.g., European Union GDPR [General Data Protection Regulation] 2016/ 679);33 and the recognition of the value and inherent identifiability of genetic samples and data.34 These modified approaches to informed consent are used when it has been determined that repeated contacts would likely place a large burden on data users and contributors despite their diminishment of autonomous authority. Where alternative approaches offer less than full expression of the underlying goals and concepts of prospective, individualized, context- specific informed consent, other mechanisms and formalities should be used to protect the rights and interests of individuals, such as the use of independent review committees and other procedural assurances of confidentiality. For example, the United Kingdom has created the Independent Group Advising on the Release of Data to review access to UK National Health System digital data.35 These alternative approaches have also introduced descriptive terminology into the common lexicon of informed consent. In contrast to what could be described as the opt-in approach traditionally recognized by informed consent, opt-out or presumed consent allows an individual’s data to be collected and used unless the individual actively indicates otherwise. For example, in England, patients can use an opt-out process to prevent researchers from accessing their confidential health information.36 Traditional approaches to informed consent are often referred to as specific consent, meaning the consent sought is specific to a particular intervention or research project. In contrast, technological advancements and the increasing use of “big data” approaches have prompted consideration of other approaches that can be applied in epidemiology. For example, broad consent asks individuals to allow the use of data and samples in future unspecified research.37 Tiered consent allows individuals to indicate their choices for access to and use of information about them from a list of alternatives, and dynamic consent uses technology, such as online platforms, to allow participants to change their consent preferences over time.38,39 These consent approaches are designed to maximize participation by allowing individuals to control access to and use of information about them as
Informed Consent 37 a way of realizing self-determination and addressing concerns about privacy and use of data and samples in research.37 Epidemiology’s focus on populations and groups raises issues in addition to concerns about informed consent for individuals. Findings related to particular groups can have implications for its members, even when no individual can be linked to his or her own result. For example, in epidemiological research on the prevalence of the BRCA1 genetic mutation (an indicator of an increased lifetime chance of developing breast or other cancers) among women of Ashkenazi ancestry, concerns were expressed about how findings might lead to the stigmatization of the group and even potential insurance or other forms of discrimination toward women identified with that group.40 The notion that communities and their members can have interests separate from and in addition to those of individuals has led to recommendations for community consultation and engagement, and for a time there were even calls for “community consent.”41 The idea of community consent is controversial, raising questions about what characteristics define a group or community, who the relevant stakeholders are, and who from the community or group is authorized to consent on the group’s behalf.13 Nonetheless, there is consensus that the interests of communities and groups ought to be taken into account when planning for, carrying out, and overseeing research involving definable communities,42,43 and some have argued for similar recognition in surveillance activities.44 (See also Chapter 6 in this volume for discussion of the role of communities in epidemiology.) This can entail public meetings and other forms of consultation with the groups and communities from which potential participants are members, as well as publicity and information in places accessible to them. These activities can be accomplished in ways that are consistent with research goals and individual consent while respecting the features of groups and communities. Recognition of the features of research involving groups of individuals or communities has supported attention to community engagement, which can include participatory processes that involve communities in designing the informed consent process, among other things (e.g., CIOMS Guideline 6).3 Such efforts are not intended to replace individualized informed consent but rather recognize that individuals live in communities and as part of groups that have interests in addition to those of the individuals who make them up.
Adapting and Modifying Informed Consent for Epidemiology
How can the goals of informed consent be realized in the context of epidemiology, and with what adaptations or modifications to policies and practices for informed consent? Underlying any decision about waiving or altering informed
consent requirements should be assurances that adequate confidentiality protections are in place to minimize the risks associated with the collection and subsequent use of the data.4,31,45 For example, public concern about the confidentiality of data collected in a proposed comprehensive diabetes surveillance program in New York prompted the development of an opt-out consent procedure, arguably driven by strongly held views on privacy of medical information in the United States.46
Non-Identifiability
In research policy, the mechanism used to implement a deviation from the traditional approach to informed consent is often described as a waiver. Non-identifiability has been used as a justification for waiver of informed consent, for example when the research involves existing samples or records in which individuals cannot be identified, provided the research is approved by a research ethics review committee.25 Capron has argued that while this type of waiver may be justified by the fact that anonymity adequately protects research subjects from the harms of research, it does not serve the goals of promoting the self-determination of those in research or engendering trust on the part of the public.11 One suggestion for approximating the goals of informed consent is to seek out members of the community on whom the research will be carried out. Such representatives could participate in the planning and improvement of research. Such a process of community engagement, whether termed "peer consultation"47 or "community consultation,"40 can be seen as an adjunct to self-determination of individuals. In working with members of the group of interest there is also transparency and the public reassurance that comes with it. One approach to the use of existing records is to obtain permission from those who maintain them and are charged with their safekeeping. While not a substitute for, or even an approximation of, informed consent, the permission of and cooperation from these custodians can perform an important function. Since the custodians of records (institutions, employers) have an important relationship with those whose information is sought, their permission could serve as evidence that there is some formal process for seeking use of such records. Examples include records that are held in large repositories that control access to them and provide necessary protections, including all assurances of confidentiality.48
Exigency
Although traditional approaches to informed consent rely on prospective agreement to participate, exigent circumstances may make that difficult or impossible. But even in those circumstances, such as disease outbreaks like SARS
(severe acute respiratory syndrome), Ebola, and COVID-19, investigators are expected at minimum to disclose information about the investigation and its purpose to the participants.31,44 Commentators have suggested that the reliance on exigency to justify waiving the need for informed consent could be overcome by "submit[ting] [to review committees] model protocols and survey methods developed to cover the more predictable investigation types for prior approval."49,50 Presumably, this could include a standardized approach to informed consent. In the past, when biomedical research was carried out in settings in which consent was impossible to obtain, such as in emergency settings and intensive care units, proposals had been made for what has been termed "deferred consent." In deferred consent, subjects are informed and debriefed after the research has been conducted.51 In the United Kingdom, for example, deferred consent has been used when prospective informed consent has been determined to be impracticable, for example in emergency pediatric settings.52 And in the United States, federal regulations permit trauma research to proceed without prospective consent provided that community consultation and other protections are in place.53 Such processes are more accurately termed "deferred notification" because the research has already been performed and consent is achieved in name only, whether after the fact or not. In the United States, deferred consent in these settings has been superseded by federal rules and other policies54–57 that rely instead on community engagement approaches involving community consultation and public notification. A similar approach could be considered for epidemiological applications, with subjects or groups being given information after data collection in the form of an effective debriefing, with or without the option to have their information removed on request, and using processes of community consultation and notification where appropriate.
Cluster Randomized Trials
Community engagement can also play a role in cluster randomized trials (CRTs). A CRT investigates the effect of an intervention at the group or population level. For example, a CRT might measure the efficacy of a public health media campaign on health risk awareness.58 Obtaining individual consent might be impracticable in terms of time and resources and, importantly, might affect the research results. Depending on the risk to the individual (e.g., minimal) and the type of data (e.g., non-stigmatizing), it may be ethically justifiable to waive individual consent. In those cases, some commentators and researchers have suggested that some form of prospective community consultation or other type of sharing of study information with the eligible study population may assist in realizing the goals of informed consent59–63 (see Chapter 6).
Conclusion
A challenge in epidemiology is how to achieve vital goals of public health while respecting the goals of informed consent. To gain information through the use of the tools of epidemiology, the public health community ought to work with policymakers to apply informed consent where appropriate and to seek flexible approaches to informed consent requirements where they can be justified, including adjuncts to individual consent in some cases. In this way, the rights and interests of those whose information is collected through epidemiological efforts can be properly respected while serving the interests of public health.
References 1. Council for International Organizations of Medical Sciences. “International Guidelines for Ethical Review of Epidemiological Studies,” Law, Medicine and Health Care 19 (1991): 247–258. 2. World Health Organization. WHO Guidelines on Ethical Issues in Public Health Surveillance. 2017. https://w ww.who.int/ethics/publications/public-health- surveillance/en/ 3. Council for International Organizations of Medical Sciences. International Ethical Guidelines for Health- Related Research Involving Humans. 2016. https://cioms. ch/shop/product/international-ethical-guidelines-for-health-related-research- involving-humans/ 4. U.S. Centers for Disease Control and Prevention, Epidemiology Program Office. Overview of Scientific Procedures, Section II.A., “Research vs. Non Research.” 2002. https://www.cdc.gov/od/science/integrity/docs/cdc-policy-distinguishing-public- health-research-nonresearch.pdf 5. Faden, R. R., and Beauchamp, T. L. A History and Theory of Informed Consent. Oxford University Press, 1986. 6. Andrulis, D. P., and Brach, C. “Integrating Literacy, Culture, and Language to Improve Health Care Quality for Diverse Populations,” American Journal of Health Behavior 31 (2007): S122–S133. 7. “Nuremberg Code.” In Trials of War Criminals before the Nuremberg Military Tribunals under Control Council Law No. 10. Vol. 2. U.S. Government Printing Office, 1949: 181–182. 8. National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. The Belmont Report. U.S. Government Printing Office, 1979. 9. Benn, S. “Freedom and Persuasion,” Australasian Journal of Philosophy 45 (1967): 259–275. 10. Nozick, R. “Coercion.” In Philosophy, Science, and Method: Essays in Honor of Ernest Nagel, ed. S. Morgenbesser, P. Suppes, and M. White. St. Martin’s Press, 1969: 440–472. 11. Capron, A. M. “Protection of Research Subjects: Do Special Rules Apply in Epidemiology?” Law, Medicine and Health Care 19 (1991): 184–190. 12. Mastroianni, A., and Kahn, J. “Swinging on the Pendulum: Shifting Views of Justice in Human Subject Research,” Hastings Center Report 31 (2001): 21–28.
3
Solidarity and the Common Good
Social Epidemiology and Relational Ethics in Public Health
Bruce Jennings
Disregarded by the dominant decontextualized perspectives . . . the claim that societal processes drive the social patterning of population distributions of health and disease . . . runs diametrically opposed to the reductionist and individualistic biomedical and lifestyle assumptions that disease distributions arise from intrinsic characteristics of individuals, whether biological or behavioral. —Nancy Krieger1
Introduction

Medicine and public health have a complex relationship, as do biomedical ethics and public health ethics, which have arisen as interdisciplinary fields focusing on philosophical, social, and normative aspects of the health sciences and professional practice. These two areas of practical ethics have distinctive analytic tendencies and discursive characteristics, especially in the Anglophone world. In those areas of biomedical ethics focused on the clinical practice of medicine, significant theoretical emphasis has been given to respect for autonomous will formation by individuals in private decisions and behavioral choices directly affecting them. In the domain of public health ethics focused on population health and health policy, the principle of respect for autonomy has not been nearly as salient. In part this is because those who take populations as their unit of analysis and service do not consider individual agency to be as important scientifically or normatively (for the purposes of explanation or evaluation) as those who take individual human beings as their focus. Notions of justice—doing the most good for the greatest number of people or respecting universal human rights of all people—seem to thrive more readily in a discursive world of collective life
and objective well-being than in a discursive world of unique individuals, who pursue diverse ideals and wide-ranging life plans. Both biomedical ethics and public health ethics have often tried to bridge these different universes of discourse, but there is no one best way to do so, and it remains to be seen if a collective and an individualistic perspective can be reconciled at all. In many ethical debates concerning health, justice is understood mainly as distributive justice, which tends to focus on making the distribution of benefits equitable and efficient; but increasingly justice is being seen from a social structural perspective, which tends to focus more on empowerment and institutional change to produce civic and cultural equality, as well as an equitable economic distribution of goods and services.2 Finally, in ethical and political theory generally there is the clash between a libertarian and individualistic understanding of rights, liberty, and privacy, on the one hand, and an egalitarian and progressive communitarian understanding of the common good and the health and well-being of society as a whole, on the other. Discursive frameworks are fundamental to ethical and normative analysis, but as I intend the concept here, these frameworks are not exclusively normative; rather, they work at the intersection of the descriptive and the prescriptive, the what is and the what ought to be. In my own work, I seek a more nuanced and multifaceted account of individual rights and agency—the liberal dimension of public health ethics—on the one hand, and of solidarity, mutuality, and common flourishing—the communitarian dimension of public health ethics—on the other. An important part of such an inquiry is an examination of the relationship between social justice and social epidemiology. That is the aim of this chapter. Using one powerful, albeit contested, approach to epidemiology as my point of departure, I offer a particular interpretation (also contested) of social justice as being crucially informed by a relational ethics of mutuality and solidarity. This analysis is premised on the hypothesis that relational theorizing and conceptualization, such as that developed in ecological epidemiology, for example, has its analogue in ethics. It is also premised on the aspiration that relational theorizing in both ethics and epidemiology can provide a promising pathway to a critical public health ethics that is both empirically grounded and normatively compelling. The discussion turns first to consider the philosophy of epidemiology and the constitutive concepts guiding relational or social theorizing in the field. Next, I sketch the approach of relational theorizing more broadly in ethics and normative social analysis, noting how epistemological bright lines between the empirical and the normative—facts and values—tend to be blurred in these approaches and that this is not necessarily a drawback. I turn then to the work of conceptual clarification in public health ethics by focusing on the concepts of solidarity and the common good. This is a task that is important in its own right to advance
discourse on policy and practice in public health, and it is also one that serves to illustrate in a more concrete way what is involved in relational theorizing.
Social Epidemiology

The field of epidemiology depends on an infrastructure of monitoring, reporting, field investigation, and the maintenance of large databases in order to fulfill its vital role in public health surveillance and research. It also depends on the development of mathematical theories, models, and tools in cognate fields such as biostatistics to explain causal linkages and statistical associations within these data. Indeed, the modern era of public health was made possible by two developments: (1) recognition by the state that population health and building an infrastructure of health surveillance had economic, political, and military importance and (2) the development of a knowledge base of statistical mathematics and probability theory sufficient to be meaningfully applied to environmental, health, and social welfare policies. In other words, public administration, public health, and epidemiology are integral to the disciplines of social control and normative legitimation in the modern state; they were born together and have remained symbiotically interdependent since.3 Yet noting the key position that epidemiology plays in the structure of knowledge and in global, national, and regional health governance is only the beginning of the story. How should the knowledge offered by epidemiologists be interpreted ethically and politically? How should public health governance construe probabilistic health correlations and risk factors within populations and act upon them? These questions remain open and subject to active debate. The rigor and validity of epidemiological science and methods have increased, but a positivistic and linear notion of scientific progress within the field does not adequately capture the complexity of the history, sociology, and politics of epidemiology. In epidemiology, as in other areas of public health, rival and conflicting philosophies, theories, disciplinary matrices, and research programs compete for intellectual allegiance, governmental funding, and public support.4 The remit of epidemiology is broad. It monitors the distribution of disease within specific populations and societies as a whole. It conducts research concerning the determinants of disease and their patterns of concentration within populations. It also contributes to broader public health policies and practices aimed to prevent and limit the spread of disease and to promote population and individual health. For instance, the role of epidemiology in responding to HIV was—and is—crucial. If epidemiology seeks to understand the patterning of health as well as disease, then health needs to be defined not merely as the absence of disease, but as the capacity to develop full human potential and effective social functioning of individuals and groups. This way of conceptualizing health and the mission of
both public health and epidemiology leads to a broad understanding of the societal determinants of a healthy population and the deleterious effects of systemic, institutionalized social and economic inequalities. Here we have one basis for the strong interdependency between social justice and social epidemiology.5 In her work on the history and theory of epidemiology, which is well aware of the connections between epidemiology and ethics, Nancy Krieger traces a detailed narrative of the conceptual development of the field in the twentieth and twenty-first centuries.6 She recounts how epidemiological theorizing left behind the work on social medicine and holistic explanations prominent in the first half of the twentieth century and turned to constructions based in a reductionistic and individualistic biomedical model of disease. She calls this the "biomedical and life-style" framework. She regards it as the currently predominant paradigm in epidemiological theorizing, and she contrasts it to the paradigm of social epidemiology. She summarizes three hallmarks of the biomedical and lifestyle framework as follows:
(1) the "real" causes of disease comprise biophysical agents, genes, and "risk factors," with exposure largely a consequence of individual characteristics and behaviors;
(2) these "real" causes of disease in individuals are the causes of—and are sufficient to explain—population rates of disease; and
(3) theorizing about disease occurrence is equivalent to theorizing about disease causation in relation to mechanisms occurring within biological organisms; by implication, population-level theorizing is largely, if not wholly, irrelevant.7
Krieger documents recent findings that call the biomedical model into question. For example, studies of noninfectious diseases among the aging populations in the developed world do not fit an explanatory model oriented toward either an internalist notion of wholly biological causation or an externalist notion of individualistic risk behavior. In addition, population-based studies of the dynamics and spread of infectious diseases do not indicate that their dynamics and spread could be explained by recourse to the biological properties of infectious microbial agents alone. Also, for both infectious and noninfectious diseases, social and environmental factors are connected in some way to the level of risk of disease among various populations. Turning now to the social model of epidemiologic theorizing, for the purposes of making connections with ethical theorizing, three tenets identified by Krieger should be noted: (1) the longstanding thesis that distributions of health and disease in human populations cannot be understood apart from—and necessarily occur in—the societal context;
(2) the corollary that social processes causally (albeit probabilistically) determine any health or disease outcome that is socially patterned;
(3) the prediction that as societies change, whether in their social, economic, cultural or technological features, so too will their population levels and distributions of health and disease.8
Krieger identifies various schools of thought within social epidemiology—the sociopolitical, the psychosocial, and the ecosocial—that share these tenets. The sociopolitical approaches study disease patterns and distribution in relation to power, politics, economics, and rights.9 The psychosocial approaches study how individuals perceive and experience social conditions, social interactions, and social status, and find that individual responses to these experiences can be either health enhancing or health damaging.10 The ecosocial approach, which Krieger's own work helped to develop, is oriented toward integration and synthesis of the psychological, social, and political-economic dimensions of past epidemiology. In addition, it seeks a way to bring biological embodiment back into the field of social epidemiology, which has heretofore tended to ignore it, perhaps due to its rivalry with the biomedical-lifestyle paradigm.11 The ecosocial program of integration is put forward even in the face of the realization that different schools of epidemiological theorizing differ among themselves in terms of the kinds of hypotheses they posit, the empirical research they pursue, the data they generate, and the guidance they provide concerning efforts to change existing social patterns of health and disease. This last point reminds us that the field of epidemiology has never really been an area of purely detached scientific inquiry; it has always had an applied and practical aspiration and orientation. That is to say, epidemiology, of whatever methodological and conceptual orientation, has always had a calling for social action—for social improvement in the service of health, as one might say. This suggests that the next step in our discussion is to look at the normative side of this practical vocation, epidemiology's political morality, so to speak. By a "political morality" I mean a discourse that articulates what the ends of the governance of public health should be, what constitutes a good social order to strive toward, and how power should be used by public health through regulation of social, economic, and personal conduct.
The Political Moralities of Epidemiological Theorizing

Work emphasizing the contextual, relational, and methodologically holistic features of the kind of knowledge epidemiology has to offer should not be contrasted with the biomedical-lifestyle paradigm on the grounds that the
Relational Ethics 49 former is value-laden and tied to a particular understanding of social justice while the latter is value-neutral and apolitical. This dichotomy is misleading. When one is considering what might be called the “ethics in” epidemiology, it is entirely appropriate to discuss individual and collective professional responsibility and obligations in terms of impartiality, intellectual honesty, and standards of logical and methodological rigor that are process values independent of substantive policy outcomes. However, when it comes to the issues under consideration in this chapter, which might be called the “ethics of ” epidemiology, the value-laden character of the theorizing cannot be avoided, nor is it necessarily illicit. Understanding the dynamics and effects of value-laden knowing is a key part of the history, sociology, and philosophy of science—the critical study of the growth of knowledge. The difference between the biomedical-lifestyle approach and the social approach, then, does not turn so much on value-neutrality as on the contrasting orientations each mode of theorizing takes toward its fundamental object of study—population health—and the contrasting political moralities that are implicit in each orientation. The first of these issues has largely to do with various forms of individualism: political, ethical, and methodological. The second has to do with the epidemiological guidance provided on aspects of currently existing social structure, power, resource distribution, and patterns of behavior that should be the targets of public health policy and intervention. In other words, which levers of health promotion and social change should epidemiological knowledge prompt the hands of policymakers and democratic citizens to pull? And what principles of social justice justify that and make it democratically legitimate? Taken together, two challenges link epidemiological theorizing to a political morality. One is the challenge of reconciling individualism—independent freedom for persons in societies—with communalism—interdependent mutuality for persons in societies. Freedom from external coercion without mutual concern and care can be empty. Mutuality without freedom can be stifling. The other challenge is to achieve improved health without subordinating social justice. Rightly understood, health and justice need not be pitted against one another; they can be fulfilled in tandem. Individualism is a chronically vague and perennially useful concept. It is not really surprising that individualism in its ontological and methodological guises should surface in the intellectual history of epidemiology, since the field’s historical and philosophical horizons are inseparable from liberalism as an economic and social philosophy, and from democracy and capitalism as institutional and cultural formations. The ontological debate between individualism (social atomism) and collectivism or communitarianism (social holism) runs through the social thought of the West since the nineteenth century and remains central to
50 Ethics and Epidemiology social ethics. So do cognate questions such as the proper balance to be struck between liberty and equality, or how to reconcile the value of respect for human rights and the value of achieving a state of net aggregate social benefit (such as health and well-being) across a population when these values conflict in practice. In the conflict between differing modes of epidemiological theorizing, a special form of individualism, one having to do with the nature of scientific explanation, is at the forefront. This is the difference between what is often called “methodological individualism,” a hallmark of the biomedical model, and “methodological holism or contextualism,” a hallmark of social epidemiology. In his study of the history of the concept of individualism, Steven Lukes defines methodological individualism as “a doctrine about explanation which asserts that all attempts to explain social (or individual) phenomena are to be rejected . . . unless they are couched wholly in terms of facts about individuals.”12 Among many critiques of this account of explanation, one that is most pertinent to the concerns of this chapter has to do with a problematic assumption being made by individualistic theories concerning what counts as, in Lukes’s terms, “facts about individuals.” The problem with methodological individualism in the biomedical-lifestyle model is not that it concentrates on individual human beings—the social model of epidemiological theorizing does not want to lose sight of the individual person or define it out of existence, either. Rather, the problem is that the individualism model misconstrues what its object of study (an individual) is and how it comes to have certain properties or characteristics. Lukes captures this idea in the way he characterizes an important element of methodological holism: “A sociological perspective differs from the individualist picture in revealing all the manifold ways in which individuals are dependent on, indeed constituted by, the operations of social forces, by all the agencies of socialization and social control, by ecological, institutional and cultural factors, by influences ranging from the primary family group to the value system of society as a whole.”13 What does the political morality of the biomedical-lifestyle model look like? It seems closest to a moderate form of welfare liberalism. The biomedical- lifestyle theory looks for biological determinants of disease primarily, and secondarily for behavioral determinants that put persons at risk for biological harm. To this a corollary, also drawn from the ontological individualism of the biomedical-lifestyle model, can be added, namely the doctrine that a “population,” a “society,” or a “public” have no substantive ontological status other than as an aggregation of individuals. The main policy implication of this ontological and methodological individualism is that the future of public health lies in population-wide health insurance coverage providing individual access to a system of clinical medicine, coupled with a system of incentives and education
Relational Ethics 51 to prompt individual behavioral change toward lower risk and better health. No more fundamental structural or cultural transformation is needed. The public health is best that changes what is public least. What does the political morality of social epidemiology look like? Krieger suggests some clues. For one thing, it is not individualistic, but that does not mean that respect for the dignity of individual persons is morally unimportant. All the types of social epidemiology, she writes, recognize the agency of individuals and also inter-individual social and biological variation . . . [but] these individual-level phenomena are not the focus. At issue instead are how the societal processes creating the groups to which individuals belong—and delimiting their material and social conditions— powerfully shape the options and constraints that social-determined membership in these groups affords for the possibility of living healthy and dignified lives.14
Moreover, the political morality of social epidemiology is not conservative or traditionalistic communitarianism, either. Krieger argues that social epidemiology does not conceptualize societies "as intrinsically harmonious or static 'wholes' but rather as dynamic entities comprised of often conflicting social groups, whose between-group relationships of power and property define their social (and even spatial) boundaries and shape each group's characteristics, including their on-average health status." She continues:
By implication, analysis of causes of disease distribution requires attention to the political and economic structures, processes, and power relationships that produce societal patterns of health, disease, and well-being via shaping the conditions in which people live and work; a focus only on decontextualized "lifestyles," consumption and exposures is incomplete and inadequate.15
I will address these important issues more fully when I turn to the concepts of solidarity and the common good. For the moment, let me elaborate on the political morality of social epidemiological theorizing when we recognize that this theorizing does not originate in a utilitarian conception of justice but actually arises from a critique of utilitarianism. What is problematic about the classic utilitarian claim that distributive justice is fulfilled through the achievement of an aggregate net benefit of healthy life across a population? This is a key question for social epidemiology. Utilitarianism begins by measuring the health of each individual and seeks to maximize the total health state represented in the population at some future time. It is interested only in a final pattern of distribution and a single measure of welfare compared
52 Ethics and Epidemiology with other possible patterns and values. In this way, it is ethically indifferent to how health gains are distributed across the population and to the fairness of the process through which that distribution is made.16 In addition to ignoring inequality and unfair process, this notion of distributive justice ignores respect for persons. It does not seek to maximize the health of all individuals. It is not meritocratic in seeking to achieve the level of health status that each person deserves, nor egalitarian in striving to set a universal level (of good enough health) that all individuals should be helped to attain, and none should fall below. Withal, in utilitarianism flesh-and-blood persons are implicitly conceptualized as merely the repositories of right-making and goodness- manifesting properties (pleasure, happiness, utility) but have no intrinsic value that need concern us beyond that. It thereby loses sight of the humanity of persons—their concrete vulnerability, dependency, and need—as well as respect for their rights and dignity. Finally, it does not provide an adequate conceptual framework for ethically assessing inequalities, discrimination, rights, and fairness within the various patterns of distribution that are compatible with aggregative net maximization. It ignores the phenomena of institutional discrimination and structural power. The justice it offers is hegemonic and normalizing. Its single measure of health and welfare (optimal health defined by species-typical functioning and the happiness of those living under the bell curve) is insensitive to difference and rests on many motivational and behavioral assumptions belied by the pluralistic experiences and lifeworlds of the populations in contemporary societies that utilitarian public health purports to serve.17 I think that the political morality of social epidemiology comprises responses to many, if not, all of these criticisms of standard liberal welfarism and economic models in health policy inspired by utilitarianism. Indeed, my contention is that conceptions of social justice found in political moralities such as liberal egalitarianism, capability theory, civic republicanism, and especially relationally inclusive and progressive communitarianism provide aspects of a political morality and a public health ethics that can be attached in a compelling way with social epidemiological theorizing. The political morality of social epidemiology holds that health policy, public health, and clinical medicine should strive to eliminate or compensate for systematic conditions of inequality, deprivation, and discrimination. It argues that health justice cannot be achieved unless we reconceptualize health and redirect medical care toward a notion of integrated, comprehensive social care. Rather than seeing health as a resource to be distributed among individuals and appropriated by individuals, it should be seen as an aspect of a larger social fabric or pattern of power and meaning that shapes the life chances of individuals developmentally.
Relational Ethics 53 The political morality of social epidemiology also provides a conceptual vocabulary with which to reform institutional discrimination and to democratize structural power. It offers normative rationales and empirical support for directing governance in specific ways. Foremost among these are a belated awareness of the health implications of climate change. Yet another imperative is to plan for the transition to aging societies. Finally there is the looming imperative of redistributive fiscal and monetary policies to mitigate the radical inequalities of wealth and income that paralyze liberal governance and are fostering political upheaval and polarization within the developed capitalistic democracies.18 The combined effects of a carbon tax and a publicly funded system of long-term social and health services would set precisely such an egalitarian redistribution of wealth in motion. Interconnecting these challenges is the reconstruction of the tools and institutions of the social welfare state, which have been drastically eroded and degraded by neoliberal governance and fiscal austerity policies. In particular, conservation and reconstruction measures should be taken to strengthen the institutional mechanisms of lifelong income maintenance and social insurance. Among such measures are progressively financed universal, comprehensive health coverage, including services for long-term care, disability, housing, nutrition, and transportation to sustain quality of life in an aging society. The social ethical values inherent in this social model of care and insurance appreciate the radical implications of shared risk and the challenges of protecting health in an increasingly global ecology. These values call for a communal and intergenerational solidarity in which the well subsidize the sick, there is mandatory universal participation, and the financing of this system is fairly and progressively shared.
Relational Ethics

Relational theorizing is concerned with structures of meaning and power that shape the self-understandings, reasons for action, and emotional sensibilities of human beings in social and political life. These structures are dynamic, as Krieger points out, and they can be episodic or institutionalized, ephemeral or deeply embedded in regimes of power and forms of life. Relational theorizing holds that social structures of power and cultural structures of meaning constitute individual self-identity and agency, and that purposive and evaluative human agency also constitutes and reshapes those structures in turn. Society and culture are viewed neither as ontologically separate from nor normatively superordinate to individuals and groups who compose them. Thus, relational theorizing is not ontologically or metaphysically
54 Ethics and Epidemiology holistic. It is holistic, however, in its stress on the necessary interdependencies among human beings. In part, this is biologically rooted epigenetically and ecologically. Interdependence is also socially and economically manifested in social networks and interlocking institutional loci of meaning, power, and wealth. These interdependencies usually operate in the background and people are not always self-consciously aware of them. If they are, they often fatalistically accept them as given and unchangeable. At specific times historically, and in principle at any time, human beings—especially in the context of social movements—can become critically aware and exercise agency to reshape these interdependencies in accordance with ethical values and normative ideals. By taking patterns of action or practices rather than isolated or conceptually individuated acts as its focal point, normative relational theorizing in moral and political philosophy is analogous to social epidemiology. In ethics it has the potential to form a bridge between the person as a “subject” endogenously moved to act on reasons and as an “object” exogenously caused to act by forces beyond its intention and control. This does not mean that a relational perspective eliminates the tension between a subject-centered (first person) and an object-centered (third person) viewpoint by achieving conceptual synthesis. But it does mean that relational theorizing can tack back and forth between these two viewpoints so that they can be mutually illuminating rather than mutually negating. A relational approach needs to embed and embody the subject without thereby embracing either social, historical, or biological determinism. Moreover, relational theorizing often strives to be developmental so that the subject to be retained is not a timeless essence but rather a changing entity in which latent potentialities are manifested over time as realized capabilities or functionings. How such latent potentials come to be actualized may require an exogenous explanation using categories drawn from the social and behavioral sciences, including epidemiology. Indeed, normative relational theorizing often draws upon such explanations, just as contemporary public health policy and practice is often informed by the findings of social epidemiology concerning the social determinants of health. Conversely, if Krieger is correct, epidemiological theorizing itself contains both first-person and third-person standpoints within it, depending on the approach in question, but tends to highlight relationality within each of these standpoints. It is important to note, however, that the explanations embraced by relational theorizing need not entail determinism or reductionism, much less objectification, because the actualized capabilities in question are themselves constitutive features of personhood, agency, and individuated self-realization within the broad cognitive, emotional, and behavioral repertoire of human beings. Reductionistic explanations may explain the how of this process, but they alone are ill designed to comprehend the what of this process (let alone its should or
justification). In order to articulate what is going on in developmental agency, relational theorizing makes a gestalt shift from explanation to explication and turns its attention to the communicative aspects of human behavior that, substantially more so than in any other species, depend upon the ability to apprehend what others are thinking in and through their agency, and to interpret the intentions and meanings expressed in social interaction. In this way, cultures and societies provide pathways for the development of flourishing lives, well lived. It is always in and through relational interdependent action that the categories of ethical discourse come to life and become concrete as virtues of character are developed, rights are respected, duties are fulfilled, and beneficial collective consequences are pursued and obtained. I have argued that there is a gap between an individualistic construal of the concepts of justice, rights, interests, and health, on the one side, and a relational interpretation of them, on the other. And I contend that relational ethics is of practical as well as normative importance to public health. There are times when public health problems and proposed solutions to them are not comprehensible or articulable unless one has recourse to the concept of a public thing (res publica). But this is a sophisticated concept. How best to render it: as a social whole, as a collective phenomenon, or as a condition of life shared in common? The notion of a republic, or public thing, goes back to ancient Greek political thought, especially in the work of Aristotle. In modern times the "public sphere" is a term that came into use mainly in the nineteenth century to account for the rise of a commercial class that could not easily be located within previously existing social hierarchies.19 In recent decades the term has taken on a more universal and deliberative democratic meaning. A public sphere, understood as a social phenomenon, denotes a community of individuals intertwined through complicated institutional and cultural systems in (and through) which they act and carry out their lives. Moreover, the idea that something is public is not simply a descriptive concept—it is not just any configuration or system of interrelationship and life in common—it is also a normative vision of equality, diversity, and dignity that provides an account of how that system should be structured and how our lives in common ought to be composed and lived. Earlier I noted the distinction between population health and public health. Now it is worth asking again, what is public about public health?20 The phrase "population health" has been around for some time, and it figures in Krieger's intellectual history of epidemiology. Are the health of the population and the health of the people the same thing? It is noteworthy that Krieger uses the term "people" in her book title: not just people as a number of persons, but the people as an assemblage that is arguably agentic and causally efficacious in relation to health. Do subtle normative and descriptive shifts occur when one sees one's goal
56 Ethics and Epidemiology as the care and maintenance of population health compared with pursuing the care and maintenance of the health of the members of the republic? Beyond that, should those who practice public health be concerned with the proper ordering and functioning of the republic as a political community and an associational system—in other words, the health of the republic itself? Many public health professionals rarely ponder these questions, or when they do, they tend to treat population health and public health as synonymous. Yet, this leaves something out. It overlooks the fact that a population is a statistical concept within a bureaucratic classification system that serves certain administrative ends in governance. As such, a population is an aggregation of individuals that is fungible in the sense that it can be subdivided or broken down into its component parts in various ways as needed.21 As our foregoing discussion of epidemiological theorizing indicates, public health is already thoroughly familiar with the problem of conceiving of objects of study that are not comprehensible as the aggregation of individual things. Consider a “system”—a complex network of interacting and interrelated component elements—that has properties no one of its individual components possesses on its own. Or, again, consider the special mathematical properties of statistics and probability theory, and the special use of statistics employed in epidemiology in the service of public health. These intellectual tools indicate that simplistic notions of populations as aggregates of atomistic elements are inadequate to the science of public health. There is no reason why we should not recognize that these conceptions are also too simplistic for the ethics of public health and for the authority and legitimacy of the use of power in public health. As I close this metatheoretical discussion of individualistic and relational theorizing, which has stressed the importance of the latter, I turn to more substantive ethical arguments that are not entailed by all versions of relational theorizing as such. I shall focus on two concepts that are central to a relational public health ethics: solidarity and the common good. I regard them as essential, but underdeveloped, concepts in the discourse of a critical public health ethics that guides health policy and practice toward the achievement of a just and public well-being, a res publica of health.
Solidarity

Let me begin the discussion of solidarity with a statement from Keith Banting and Will Kymlicka that is both a definition to fix ideas and, to me, a persuasive set of normative contentions about the survival and functions of solidarity in contemporary developed national welfare states:
Solidarity refers to attitudes of mutual acceptance, cooperation and support in time of need. In the contemporary context of increasingly diverse societies, we are interested in a solidarity that transcends ethno-religious differences, operates at a societal scale, and has civic, democratic, and redistributive dimensions. Such an inclusive solidarity, we contend, is needed to sustain just institutions. Although considerable political conflict attended the emergence of the welfare state historically, just institutions cannot be built or sustained solely through strategic behavior and partisan contestation, or through unbounded humanitarianism.22
This is an important statement of solidarity from a structural and institutional point of view. To add to this, I would only underscore that from an agency perspective solidarity involves a public recognition of the ethical standing (ethical significance, entitlement, considerability) of others in the civic polity and the moral community. Solidarity entails affirmation through standing up for and with those unjustly treated (and standing up against those committing the injustice). Solidarity demands that exploitative relationality be displaced by just relationality.23 This transformation, even at the individual level, even if solidarity aids only one person, gives civic voice to justice and goes beyond the philanthropic voice of charity. It evokes freedom from oppression as a common good, a state of mutual well-being among all members of the association. This is the point of tangency between solidarity and the common good: the political psychology and the moral ethos of each in all and all in each. Both a structural and an agency perspective are important to the normative relational theorizing of solidarity. Solidarity begins with the latent possibilities of a given place at a given time. It leads people to tolerate social plurality or individuated difference based on an overriding commitment to connection with the other. It leads people to accept political defeat or moral disappointment out of another overriding commitment to a decision-making process with which they identify, either viscerally or out of considered judgment and principled reasoning. (The occupation of the United States Capitol building on January 6, 2021 revealed the breakdown of this procedural commitment.) It leads beyond the grasping and competitive life orientation of possessive individualism to a willingness to subordinate one’s own material self-interest in favor of sharing and providing for others in need. Solidarity transforms zero-sum relationality into a mutuality of recognition, respect, common purpose, and endeavor. In sum, solidarity is an orientation centered around what Banting and Kymlicka call “mutual acceptance, cooperation and mutual support in time of need.”24 This orientation can and does influence individual motivation and sociocultural norms, obligations, and expectations.
58 Ethics and Epidemiology One simply plans and lives one’s life differently in a society with a strong ethos of solidarity than one does in a society in which that ethos is lacking. At its heart solidarity involves a recognition of and an acceptance of membership in a web of obligations to affirm, attend to, and deal fairly with others, in the expectation that they will fulfill their obligations of membership toward you in the same way. And solidarity bestows that recognition and acceptance on others, thereby supporting those who may be put upon or marginalized and whose due recognition is being denied. However, the failure by others to live up to their obligations toward you does not cancel your obligations of solidarity. If one or a few citizens emigrate, it does not mean that the citizenship of those who remain is rendered moot. This is important because part of the motivation supporting an ethos of solidarity and its practices is strategic or instrumental self-interest. Yet, both ethical analysis and research in sociology and political science indicate that self-interested participation in solidaristic practices alone is not enough. The push of obligation that runs counter to self-interest is demanding; the pull of self- interest is strong, especially in modern individualistic and competitive societies awash in narratives of heroic self-reliance and admirable entrepreneurialism. On the basis of their review of social scientific studies of the issue, Banting and Kymlicka conclude that “the strains of commitment make self-interest insufficient or unreliable on its own to maintain a good society, especially in the context of growing diversity, and that citizens must also have, if not virtue or altruism, at least some degree of solidarity: they must at times be motivated by attitudes of mutual concern and mutual obligation towards their fellow co-citizens.”25 Thus solidarity moves beyond individualism and toward commonality— common sense and a sense of what we have in common. It contains a recognition that individual well-being, flourishing, or the good does not exist in a state of isolation but inhabits an ecology of common flourishing that can neither be achieved nor fully enjoyed by individuals acting on their own. Solidarity builds on senses of historical memory and tradition, and it feeds on the gratitude felt when one remembers the service and contributions that others have made to one’s way of life in the past, or when one has the moral imagination to foresee the contributions that newcomers can make in the future. Solidarity begins with the recognition of reciprocal and symbiotic interdependence among members of a moral community and then intervenes in—interrupts—an ongoing community when it is unjustly exclusionary and refuses to recognize the moral standing of some within it. Solidarity inherently leads us to view our own lives and agency as bound together with the rights, well‐being, health, and dignity of others here and now. Solidarity also mediates between another polarity: cosmopolitanism versus ethnic nationalism. The recognition of human rights, the sense of moral responsibility for the welfare of others, and the motivation to fulfill such obligations
Relational Ethics 59 based on a commonality among us that solidarity supports—all of these require some boundary conditions. Hence a debate has arisen among normative theorists of solidarity concerning the degree of universalism or abstraction that solidarity can tolerate and the degree of concrete linguistic, ethnic, religious, and geographical boundedness it requires in order to be sustainable. Most contemporary research on the conditions under which solidarity is strongly felt and practiced has focused in good Durkheimian fashion on the trends at work that tend to fracture common bonds, in particular racial and ethnic animosity, resentment about social and economic decline, increasing ideological and normative polarization, and the failure of social mechanisms of compromise and consensus-formation to function well in the face of these trends.26 As the terms imply, an ethic of solidarity may be a solidarity of humanity as such, but an ethos of solidarity may require specific human characteristics and behaviors be specified and normatively prescribed or proscribed. Philosophers such as John Rawls and Jürgen Habermas argue that a philosophically grounded solidarity of inclusion can replace earlier exclusionary solidarities of blood and soil. Traditional ethnic homelands and contemporary nation-states are not boundaries that need delimit the possibilities of solidarity. Other political theorists argue that a more nation-state–oriented arrangement is necessary to sustain solidarity.27 Still others present a multilevel approach in which the ethos of solidarity is sustainably rooted in localized practices that address particular aspects of solidarity and focus on particular social programs and needs coupled with a larger, more idealized ethical “imaginary” or a narrative of the nation’s or a people’s normative identity capable of knitting localized community-based solidarities together into a broader boundedness that is not exclusionary but tolerant and inclusive.28 This latter work, one might say, stands at the borderline between the concept of solidarity and the concept of the common good.
The Common Good

There is a close relationship between solidarity and the common good in that both pertain to the obligations and the sensibilities of associational membership and mutuality. There is no reason to pry these concepts categorically apart, but some differentiation may help to refine our thinking about the role they can play in public health ethics. There is at least some differential nuance in the notions of mutuality and commonality. In the former the space between minds can be substantial, so long as they are attentive to the ties of trust and confidence that exist between them. Commonality, on the other hand, connotes a greater degree of convergence among thoughts and values, a state not simply of rowing in the
60 Ethics and Epidemiology same direction but of literally holding the same oar. However—it’s past time to lay this analogy to rest—the important thing about the concept of the common good is not so much where the boat is going but how the navigational decisions on the boat are made, how and by whom the destination and compass are set. Thus the common good should be rendered in the singular because it relates, in Rawls’s apt term, to the “community of communities” or to the overarching framework of the social, political, and moral association as a whole to which we the people belong. It is the good of goods, the constitutive cultural and institutional framework within which distinct naturally good things can come to be a part of the lives and flourishing of individuals and groups. The common good, like the concept of the public, is not an aggregative notion, understandable as a kind of overlapping convergence of individual health, happiness, or well-being for large numbers of people. Of course, the common good does affect persons as individuals and as members of smaller groups, but it principally pertains to the constitution of a “people,” a population of individuals as a structured social whole. An aggregation of individuals becomes a people, a public, a political community when it becomes capable of recognizing social purposes and problems from the vantage point of the common good. So conceived, the common good is not simply one good among others that are valued by many or all people. It makes no sense to ask: Which good shall we pursue today, the common good or the favored good of this or that particular group? Similarly, the common good is not simply the most important good ranked among other goods along a single metric. Just as the common good is not an aggregative or reductionistic notion, it is not a distributive notion, either. It oversees the governance of distribution but does not take part in it. Thus, to keep the common good in mind is to focus on the governance of our society and our lives, something that the state or the government plays a role in but is considerably larger in its social scope.29 Government and governance are not the same. Governance is the calibration of social cohesion and innovation; it is the maintenance of a social order that promotes certain kinds of meaning and impedes others; it is the formation and reformation of our collective identity. The common good of the people is a framework that makes democracy possible; the particularistic, exclusive good of the sovereign or of a favored class or caste is a political goal that had to be rejected (sometimes quite literally overthrown) through political struggle in order to move toward a more democratic society. Installing the common good as the end of our political lives together was a hard- won achievement; sustaining it there is an ongoing, unfinished effort. Instead of saying that some particular state of affairs is in the common good or represents the common good—harmony and tranquility instead of contestation and conflict, for example—we should be concerned with the productive and distributional governance that has brought that debate to the fore in the first
place. The common good is always a lens through which to evaluate governance in accordance with fundamentally defining values of a just association. In this example, to think through the common good is to ask about the rationales for both tranquility and conflict: Here and now, will either of them, or some combination of both, lead to a morally better governance of our humanity, our associational membership, our shared way of life? In his analysis of the concept of the common good, Waheed Hussain interestingly places it in the practical reasoning of citizenship, when associational membership and obligation are taken as given and the question of properly discerning what those obligations require arises.30 He uses the notion of maintaining the "facilities" necessary for the attainment of common interests (for example, clean air in the service of good health) in something like the way I have used the notion of a discursive framework or political morality. In another interesting analysis of the concept, William Galston does not use the second-order notion of the common good that I have suggested but instead sees the common good as the bargaining mechanism in which popular agreement on the overall functioning of social cooperation can be maintained. As he puts it, "the common good requires a balance between the benefits and burdens of social cooperation such that all (or nearly all) citizens believe that the contribution they are called on to make leaves them with a net surplus."31 It seems to me that this analysis is open to the same criticism that we have seen concerning the claim that self-interest is the basis of solidarity. In both solidarity and the common good, something more than interest or aggregate net benefits—even if fairly negotiated—is involved.
Conclusion

If one important rationale for taking solidaristic and associational obligations seriously is revealed by social epidemiology, then gaining ground for the mindset and sensibility behind those obligations needs support from the voice of public health in both its empirical and its normative register. Public health must strive to bring about change at the level of individual behavior and at the level of social norms and institutions. But the individual level in question is already thoroughly social and relational in character, and change at the social level is constituted by change in the ways in which individuals experience and live their own social being. This is another reason why public health and its ethics should move beyond individualistic to relational theorizing. Human acts are intentional, purposive, and meaningful—both to the actors and to others who share in the rule-governed forms of life and communication within a society and culture. The ethical norms that fit into human agency therefore are not limited to self-referential states of interest or desire. In order to understand ethical conduct—or in order to engage
62 Ethics and Epidemiology in ethical discourses of justification and other forms of argument—one must have recourse to concepts and categories that reflect the relational nature of the human self or actor and the contextual, social nature of the actor’s meaningful, symbolically mediated relationships with others.32 This is also the key to understanding the kinds of human situations and behaviors that are related to health and disease. In practice this means that for public health to respond to the coming health needs of complex societies, it must have recourse to values (like solidarity) and purposes that the members of these societies may already have, but can sustain only if they think and act like “citizens” in the classic sense (regardless of their legal or immigrant status) by coming to see that their personal interests and well- being are inextricably tied to conditions affecting both others and themselves. Among those “conditions” are surely the social determinants of health distribution and health disparities. Public health has an important educative role to play in the formation and maintenance of this political cultural ethos that is essential to social justice and to democratic governance. Public health must identify and interpret for society changes in patterns of disease and risk that are not analytically reducible to individual behavior, and in so doing public health can provide a civic education in how to understand systemic properties of health. To fulfill this role of civic education, public health needs the knowledge and interpretation generated by social epidemiology. Public health, perhaps to a greater degree than most other professions or communities of expert discourse, traffics in goods that pertain not to individuals in isolation but to selves-in-relationships; not to atomistic bearers of interest, preference, and desire but to social persons whose personal flourishing is inextricably linked to the flourishing of others. Is health a good we have in common, or is it a prerequisite for the individual enjoyment? Perhaps, more candidly, should health be seen primarily as a commodity; as the ability and the positionality necessary to consume or take advantage of various opportunities potentially open to us? This is suggested by such influential notions as “species-typical functioning” and the “normal opportunity range” in a society. The latter formulation would seem to be the appropriate answer so long as we are living in a form of society that does not truly grasp the notion of the public or the common, but only understands a collectivity or an aggregation of private interests. Outside a social and conceptual space in which the idea of the common good is comprehensible and motivational (reinforced by an ethos of solidarity and a governance of social justice), there can be no genuinely public health; there can only be a body of expertise employed in the service of increasing the health states of private individuals. If the foregoing analysis is correct, it suggests an interesting notion. In any society, no matter what its constitutional form or its political shape, when public
Relational Ethics 63 health works effectively it must create a small public sphere of mutual recognition and concern—a space for the moral imagination of solidarity and the common good—within a wider society and political culture that is not hospitable to these values and ideals. These spaces may be transient and fleeting. But every day when public health does its work, perhaps some seeds of relational ethics are sown. Public health does not always take advantage of this potentiality or succeed in actualizing it. At times the effect of public health programs unfortunately is to reinforce privatization, competition, and particularistic self-interest. But relationally theorized public health ethics and social epidemiological theorizing stand together in seeking to realize the civic potential, so to speak, that it finds in the polities, civil societies, and local communities where the job of promoting and protecting human health must be done. When a profession of public service (including epidemiology and public health) serves the public, not only does it serve the public as it may be in the present, but it also serves an emergent, imaginatively corrected public with an aspirational common good. Public health serves health today to promote the becoming of a potential public of greater justice and human flourishing tomorrow. Relational theorizing in public health ethics and in epidemiology can be partners in this endeavor. Public health stands at the epicenter of many upheavals of thought and politics—climate change, aging, global health justice. Its ethical task is not only to protect and serve health, but also to take part in the conceptual reclamation of associational membership and obligation, what Hannah Arendt referred to as our political “lost treasure.”33 In no small measure, it is within the discourse of public health ethics and social epidemiology that this crucial work of reclamation will be done, if indeed it can be done at all.
References 1. Krieger, Nancy. Epidemiology and the People’s Health: Theory and Context. Oxford University Press, 2011: 164. 2. Powers, Madison, and Faden, Ruth. Structural Injustice: Power, Advantage, and Human Rights. Oxford University Press, 2019. 3. See Foucault, Michel. The Birth of Biopolitics: Lectures at the Collège de France, 1978– 1979. Picador, 2010. 4. Lakatos, Imre, and Feyerabend, Paul. For and Against Method. University of Chicago Press, 1999. 5. An early statement of this link is Daniels, Norman, Kennedy, Bruce P., and Kawachi, Ichiro. “Why Justice Is Good for Our Health: The Social Determinants of Health Inequalities.” In Public Health Ethics: Theory, Policy, and Practice, ed. Ronald Bayer, Lawrence O. Gostin, Bruce Jennings, and Bonnie Steinbock. Oxford University Press, 2007: 205–230.
64 Ethics and Epidemiology 6. Krieger, Epidemiology and the People’s Health, 126–162. 7. Krieger, Epidemiology and the People’s Health, 126. 8. Krieger, Epidemiology and the People’s Health, 162–163. 9. Krieger, Epidemiology and the People’s Health, 165. 10. Krieger, Epidemiology and the People’s Health, 191. 11. Krieger, Epidemiology and the People’s Health, 202. 12 Lukes, Steven. Individualism. Harper and Row, 1973: 110. 13. Lukes, Individualism, 86. 14. Krieger, Epidemiology and the People’s Health, 166. 15. Krieger, Epidemiology and the People’s Health, 168. 16. The ethical literature critical of utilitarianism is large. I have been guided by Sen, Amartya, and Williams, Bernard, eds. Utilitarianism and Beyond. Cambridge University Press, 1982. 17. See Boorse, Christopher. “Health as a Theoretical Concept,” Philosophy of Science 44 (1977): 542–573; and Daniels, Norman. Just Health Care. Cambridge University Press, 1985. 18. Kuttner, Robert. Can Democracy Survive Global Capitalism? W.W. Norton and Co., 2018; and Levitsky, Steven, and Ziblatt, Daniel. How Democracies Die. Crown, 2018. 19. Habermas, Jürgen. The Structural Transformation of the Public Sphere: An Inquiry into a Category of Bourgeois Society, trans. Thomas Berger. MIT Press, 1989. 20. Coggon, John. What Makes Public Health Public?: A Critical Evaluation of Moral, Legal, and Political Claims in Public Health. Cambridge University Press, 2012. 21. A specific example of this general issue is the relationship between genomics and the concept of race in bureaucratic population-based classification schemes. See Zimmer, Carl. She Has Her Mother’s Laugh: The Powers, Perversions and Potential of Heredity. Picador, 2018. 22. Banting, Keith, and Kymlicka, Will. “Introduction.” In The Strains of Commitment: The Political Sources of Solidarity in Diverse Societies, ed. K. Banting and W. Kymlicka. Oxford University Press, 2017: 10. 23. Jennings, Bruce, and Dawson, Angus. “Solidarity in the Moral Imagination of Bioethics,” Hastings Center Report 45 (2015): 31–38; and Jennings, Bruce. “Solidarity and Care as Relational Practices,” Bioethics 32 (2018): 553–561. 24. Banting and Kymlicka, “Introduction,” 2. 25. Banting and Kymlicka, “Introduction,” 3–4. 26. Rogers, Daniel T. Age of Fracture. Harvard University Press, 2011. See also Kaebnick, Gregory E., Gusmano, Michael, Jennings, Bruce, Neuhaus, Carolyn P., and Solomon, Mildred Z., eds. Democracy in Crisis: Civic Learning and the Reconstruction of Common Purpose. special report, Hastings Center Report 51 (2021): S1–S75. 27. Banting and Kymlicka, “Introduction,” 15–20. 28. On this nested networks approach see Hall, Peter A. “The Political Sources of Social Solidarity.” In Strains of Commitment, 201–232; Smith, Rogers. Stories of Peoplehood: The Politics and Morals of Political Membership. Cambridge University Press, 2003; and Gould, Carol C. Interactive Democracy: The Social Roots of Global Justice. Cambridge University Press, 2014. 29. Reich, Robert B. The Common Good. Alfred A. Knopf, 2018. 30. Hussain, Waheed. “The Common Good,” Stanford Encyclopedia of Philosophy, 1–2. https://plato.stanford.edu/archives/spr2018/entries/common-good
Relational Ethics 65 31. Galston, William A. “The Common Good: Theoretical Content, Practical Utility,” Daedalus 142 (2013): 11. 32. See Harré, Rom. Social Being, 2nd ed. Blackwell, 1993; and Gergen, Kenneth J. Relational Being: Beyond Self and Community. Oxford University Press, 2009. 33. Arendt, Hannah. On Revolution. Viking Press, 1962: 217–286.
4
Understanding the Ethics of Risk as Used in Epidemiology
Diego S. Silva
The concept of “risk” is central to epidemiology, since the study of disease or health of populations necessarily requires assessing, including determining the probability of, the factors that may increase or decrease the likelihood of disease or health. Yet the normative implications of how risk is conceptualized and used in epidemiology have received little to no attention. In this chapter, I will argue that the purportedly non-normative understanding of risk in epidemiology fails to capture two separate but interrelated points. First, the description of risk assessments, which are a necessary component of determining risk factors, requires some consideration of the ethical values that underpin these descriptions. Second, sometimes it may be important not only to analyze the risk to a population for disease p but also to understand the political or economic values that help create the context that led to the increased risk of p for that population; in other words, sometimes it will be ethically important for epidemiologists to help analyze and explain who imposed risk onto whom and in what ways this risk imposition occurred. I conclude by arguing that a normative sense of risk in epidemiology, which appeals to most theories of justice, must make sense of the ethics of causation in either more modest or stronger terms; however, I will not choose between them. This essay will proceed as follows. In Section 1, I analyze some definitions of “risk” and associated terms as used in epidemiology textbooks and dictionaries. Section 2 presents two ethical challenges associated with this non-normative, or statistical, sense of risk. Section 3 introduces John Oberdiek’s argument for ethical risk imposition, while in Section 4 I evaluate Jonathan Wolff and Avner de-Shalit’s discussion of risk in the context of a functionings approach. Finally, I end by applying the descriptions and argument from Sections 3 and 4 back to the use of risk in epidemiology. A caveat before I begin: I will not be adopting any particular theory of justice for the purposes of this essay. As I hope to show in Section 4, the idea of risk imposition in the context of justice can be applied to most theories of justice in which the worst off in society are prioritized to some degree regardless of
justification. In other words, I will assume for the sake of simplicity and space that most—if not all plausible—modern theories of justice strive toward some form of equality, including rectifying inequalities or inequities.
Section 1: The Use of “Risk” in Epidemiology To begin, the idea of risk in epidemiology is usually understood and applied as a non-normative idea, consisting—generally speaking—of a description of a hazard (or harm) and the probability of that hazard occurring given certain conditions. This is in keeping with Alex Broadbent’s observation that risk in epidemiology is “a purely statistical concept.”1 In addition, risk is often understood as objectively true; that is, it is an aspect of the world that can be measured such that other people can obtain the same or similar understanding of the risk in question. This does not mean, however, that epidemiologists believe describing risk is easy to do in the “real world,” readily acknowledging that many factors may hinder measurement (e.g., the lack of resources available in a particular country or region). Consider the following descriptions or definitions of risk in prominent2 epidemiology dictionaries or textbooks: Risk can be defined as the probability of an event (such as developing a disease) occurring.3 Risk: The probability that an event will occur, e.g., that an individual will become ill or die within a stated period of time or by a certain age.4 In the health sciences, risk is understood as the probability of an individual developing or acquiring a disease within a specific period of time.5
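The shared quantitative core of these definitions can be made concrete with a simple illustration (the figures here are invented for the purpose of exposition): if 50 of 10,000 initially disease-free people develop a given disease over a five-year period, the five-year risk is

\[
R \;=\; \frac{\text{new cases}}{\text{persons at risk}} \;=\; \frac{50}{10{,}000} \;=\; 0.005,
\]

that is, a probability of 0.5% of developing the disease within the stated period.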
In all three definitions, we note the matter-of-fact description of risk as a probability of a “bad thing” (illness or infection) occurring. In the second and third definitions, the dimension of risk occurring within a particular timeframe is introduced. The idea of risk is then used to build up to the idea of a risk factor or attributable risk. For example, the term “attributable risk” is defined as follows: If we think that it is reasonable to assume that the excess disease can be attributed to the exposure, i.e. the exposure is causing the disease, then both of these measures can also be described as the attributable risk. [emphasis in the original]6 The attributable risk or rate provides information about the absolute effect of the exposure. If a causal relationship between the study factor and the outcome can be assumed, then the value of the attributable risk or rate indicates the
number of cases of the outcome among the exposed group that can be attributed to the exposure.7
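Taken together, the two quoted definitions treat attributable risk as a simple difference of risks between the exposed and the unexposed. As a purely hypothetical illustration (the numbers are mine, not the authors’): if the five-year risk of the outcome is 0.010 among the exposed and 0.004 among the unexposed, then

\[
AR \;=\; R_{\text{exposed}} - R_{\text{unexposed}} \;=\; 0.010 - 0.004 \;=\; 0.006,
\]

so that, if the causal assumption holds, roughly 6 cases per 1,000 exposed persons over five years can be attributed to the exposure.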
Here we see that attributable risk is the risk that can be surmised as being derived from an exposure to a particular hazard or set of hazards. Both sets of authors explicitly invoke the idea of causation, as in “population p’s exposure to q causes the hazard r to come about at the rate of s.” The notion of a “risk factor” seems to be a related, though perhaps slightly different, term: An aspect of personal behaviour or life-style, an environmental exposure, or an inborn or inherited characteristic, that, on the basis of epidemiological evidence, is known to be associated with health-related condition(s) considered important to prevent. . . . An attribute or exposure that is associated with an increased probability of a specified outcome, such as the occurrence of a disease. Not necessarily a causal factor.8
With John Last’s definition of “risk factor” we see both similarities and differences to that of “attributable risk” by Penny Webb et al. and Petra Büttner and Reinhold Muller. Similarly, all three definitions are interested in being able to describe an association between a hazard and something else, that being an attribute either within or outside the given population, and some understanding of the rate at which this association occurs. The key difference, then, is that whereas both definitions of the notion of “attributable risk” invoked the idea of p causing q as a necessary condition, “risk factor” may or may not. Therefore, for the purpose of this chapter, I will take both terms to mean relatively the same thing with the important caveat about whether or not causation is a necessary condition. Finally, the notion of risk factor or attribution is crucial for epidemiologists in making their overall risk assessments, defined by Last as: The qualitative or quantitative estimation of the likelihood of adverse effects that may result from exposure to specified health hazards or from the absence of beneficial influences. Risk assessment uses clinical, epidemiological, toxicologic, environmental, and any other pertinent data. The process of determining risks to health attributable to environmental or other hazards.9
He goes on to note that risk assessment has four steps: hazard identification, a description of what the hazard or harm actually is; risk characterization, what effects the identified hazard will have on a population; exposure assessment, “quantifying exposure (dose) in a specified population”; and risk estimation, a combination of the first three steps to arrive at an understanding of the level of risk of hazard p on a population.10 As such, “risk assessment” is, if I understand
Ethics of Risk 69 correctly, the process by which we determine what is or is not a risk to a particular population given that certain conditions come about or hold. Moreover, as Leon Gordis states, risk assessments are then used to actually manage risks.11 Thus, if risk assessments are critical for risk management, then being clear and understanding the various components of risk assessments become imperative; how risk is understood in the context of assessments is vital for the sound conduct of epidemiology. It is interesting to note that Last’s description allows for a “risk assessment” to be at least somewhat informed by qualitative data, and where the estimation itself need not necessarily be expressed numerically; in other words, the description of the hazard is, in some sense, described via natural language. This seems to make sense given the steps he outlines for assessing risk, including a description of the actual hazard in the first step. Although Last does not argue explicitly for this interpretation, it would seem that the description of a hazard is in some important sense constructed or constitutive of the person or people describing the hazard itself. It would seem to allow for differences in the description of the hazard, thus closing off the possibility of risk assessments being merely and always a rote activity whereby different people will reach the same or similar conclusions. In my reading of Last, I am not arguing that he did not believe that there is something in and of itself, a hazard in the world regardless of observer, but rather that the description of that hazard cannot be divorced from those describing it.12 While the notion of “risk assessment” can be understood as being in some important sense shaped by the description of the hazard as construed by an observer, writing from the perspective of social epidemiology (the subdiscipline that is the subject of her textbook), Julie Cwikel challenges the notion that “risk factors” themselves can ever be divorced from their social or political contexts in the first place. She writes that “health and social policy can be both negative and positive environmental factors . . . Consider how important health care coverage is and how the lack of health insurance adversely affects health status.”13 She later goes on to ask of her reader: “Who are the ‘accessories to the crime’ who promote toxic social conditions and allow them [the disease vector, such as an infection] to gain access to susceptible human hosts?”14 Her argument is that social epidemiology, as opposed to epidemiology simpliciter, is concerned with the social and political risk factors that allow hazards to affect different populations differentially. By pointing to risk factors or attributable risk as being socially, politically, and economically shaped, she does away with the notion that the exposure can be understood by virtue of biology alone. To borrow from Marcel Verweij and Angus Dawson, Cwikel seems to position herself against the idea of public health, and public health activities like epidemiology, as being narrowly construed to the exclusion of considering the social factors of health and disease. A further question is the extent to which Cwikel’s account of epidemiology is
70 Ethics and Epidemiology also normative—that is, where a normative definition of public health activities “ascribe particular responsibilities to the community.”15 It seems reasonable to ascribe to her understanding of exposure to risk factors as being value-laden; for instance, note her use of the imagery of being an accessory to a crime. Whether Cwikel is being normative in the sense of arguing that some members of the community bear a particular responsibility to improve or remove these social risk factors may or may not be true. Regardless, she does introduce the idea that risk factors often are not merely biological in nature and that this needs to be taken into account within the risk assessments of epidemiologists.
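Last’s four steps, described above, can also be pictured as a simple pipeline in which a natural-language description of the hazard sits alongside the quantitative estimate. The following sketch is only illustrative; the field names, the estimation function, and all of the figures are invented here and are not drawn from Last, Cwikel, or any actual risk-assessment software:

```python
from dataclasses import dataclass

@dataclass
class RiskAssessment:
    hazard_identification: str     # step 1: natural-language description of the hazard
    risk_characterization: str     # step 2: expected effects of the hazard on the population
    exposure_assessment: float     # step 3: quantified exposure (dose) in the population
    risk_estimate_per_100k: float  # step 4: estimated risk, combining the first three steps

def estimate_risk_per_100k(new_cases: int, population_at_risk: int) -> float:
    """Toy risk estimation: cumulative incidence per 100,000 persons."""
    return 100_000 * new_cases / population_at_risk

# Hypothetical example for a population of 250,000
assessment = RiskAssessment(
    hazard_identification="Airborne particulate matter from a nearby smelter",
    risk_characterization="Elevated incidence of chronic respiratory disease",
    exposure_assessment=12.5,  # e.g., annual mean exposure in micrograms per cubic metre
    risk_estimate_per_100k=estimate_risk_per_100k(180, 250_000),  # 72.0 per 100,000
)
```

Notice that the first two fields are free text: two observers could fill them in differently while agreeing on the arithmetic, which is one way of putting the point that the description of a hazard cannot be divorced from those describing it, and that the social and political factors Cwikel emphasizes enter through what gets written there.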
Section 2: Two Ethical Shortfalls I now want to outline two separate but interrelated objections to the solely statistical use of the concept of risk in epidemiology and argue why these objections are important if we want epidemiology to abide by considerations of justice, broadly construed. First, as Cwikel explicitly states, doing epidemiology should not be separated from an understanding of the social, economic, and political dimensions (SEPD) of risk factors. I want to claim that if we take seriously the SEPD, then we also need to be attentive to the ethical values that underpin SEPD, often implicitly. The evidence for this claim is abundant. For example, one factor that will influence women’s health is access to safe abortions. The policy decision to increase or decrease such access is dependent, in large measure, on the legislators’ and judges’ beliefs about the status of an embryo or fetus, which in turn is primarily based on their relationship between their metaphysical beliefs and their subsequent ethical beliefs. In another example, one’s beliefs about the role and scope of government in the lives of citizens and residents will affect whether one believes that the provision of health care should be publicly funded or within the remit of private markets. Again, such cases are abundant. As such, the SEPD of risk factors—and population-level risk factors—are shaped in large measure by the ethical values and beliefs of the public, politicians, and bureaucrats. And if the SEPD are shaped by ethical values, then it follows that the ethical values and beliefs from these various sources will impact a group’s or population’s health risks, too. It seems rather odd, then, that epidemiology would not be interested in the ethical values that help shape risk assessments and the determination of risk factors; more importantly, it may amount to a dereliction of duty if one evaluates the aims of epidemiology as stated by epidemiologists themselves. For example, Gordis notes that two of the objectives of epidemiology include identifying “the etiology or cause of disease and the relevant risk factors [italics in original]” and
Ethics of Risk 71 providing “the foundation for developing public policy relating to environmental problems, genetic issues, and other considerations regarding disease prevention and health promotion.”16 He later notes, in his chapter on ethics, that epidemiologists “should assist the public with understanding uncertainty”17 and also “have a major function in communicating health risk and in interpreting epidemiological data for nonepidemiologists.”18 Webb et al. describe the most “fundamental role” of epidemiology as providing “a logic and structure for the analysis of health problems both large and small.”19 Finally, Büttner and Muller write: “[i]n times when decision making in the health sciences is required to be more and more evidence-based, epidemiology research becomes increasingly important.”20 In all of these passages, we see that the role of epidemiology is, first, to contribute to improving health via an understanding of—among other things—risk factors for diseases. Second, one can surmise that epidemiologists do or should see part of their roles as influencing public debate, “public policy” (Gordis), and contributing to “decision making” (Büttner and Muller). Both of these activities (i.e., providing an understanding of disease at a population level and contributing to public policy and debate) are inherently normative; it requires understanding associations or causes between risks and diseases (the particular specialty of epidemiologists), which requires taking seriously the SEPD of risk factors (as noted by Cwikel), which requires at least an acknowledgment that people’s values and moral beliefs shape the SEPD of health risk. Perhaps epidemiologists may not have the expertise to, nor should they, expound on what ought to be done,21 but at the very least, if they take seriously their role in describing the causes of diseases in an effort to shape public debate and policy, they should have it as part of their mandate to describe the values that underpin the SEPD that shape risk factors for diseases. The second shortcoming of the statistical model of risk is that not only are risk factors shaped by SEPD, but that the very notion of risk itself is inherently value- laden, especially in the context of public health policy. Imagine you were tasked with completing a risk assessment of tuberculosis (TB) in Canada. You know that your boss wants you to answer at least the following three questions: What is the risk of contracting TB and becoming ill? Who is at greatest risk of infection and disease? And what are the risk factors for those populations at greatest risk? Given that Canada is a low-incidence country, the overall prevalence rate of TB in Canada is about five per 100,000 population (and has been for about the last decade or so); however, the rates are unequally distributed. Notably, Indigenous persons in Canada in 2017 accounted for 17.4% of all the active TB cases,22 though they represent less than 5% of the total Canadian population. Among the Inuit in particular, the rates of TB are 290% higher than the Canadian average.23 The risk factors associated with TB, namely economic poverty and lack of access to basic social and medical services, are unfortunate realities of many
72 Ethics and Epidemiology Indigenous people in Canada because of the history of colonialization. The rates, the populations affected, and the risk factors for TB in Canada are well known and uncontroversial. In fact, in March 2019, the Canadian government apologized to the Inuit people for the high rates of TB due to colonialization and recognized its lingering effects on present-day disparities in the rates of TB.24 One rather uncontroversial conclusion is that the SEPD of risk factors are clear in this case of TB in Indigenous persons in Canada, as they are in many cases as noted earlier. However, yet another conclusion is that we—as private individuals and as members of the public—actually care about the causes of harms, and the causes of risks of harm; if not, there would seem to be no need for the prime minister to apologize on behalf of Canada. Critically, it is not just that people do care about the causes of risks of harms but also that one ought to care about the causes of risks for two reasons. The first reason is obvious: because we want to eliminate or lessen the potential effects of the risk itself. Second, it matters morally who causes a risk to occur to whom, why that risk is occurring, and how that risk comes about. In other words, if we care about moral responsibilities for actions, then we should care about the causes of risks, i.e., of the potential harms that may come about, not just those that materialize. Jonathan Wolff summarizes the importance of causality as such: [Sometimes] blame attaches itself not to the hazard or the probability [of a risk] but to the cause of the hazard. Hence, it appears, the cause of the hazard must appear as an independent variable if we are to model public concerns about risk. Cause concerns how a hazard is created or sustained, and in consequence whether it can be viewed as a matter of culpable human action or inaction, especially the culpable action of those supposed to have a special relationship.25
As Wolff states, it seems that blame for the occurrence of risk, when blame is present, is not part of the description of the hazard itself or of the statistical probability; it lies squarely with the cause of the risk. In the TB example, the blame or responsibility for the higher rates of TB in Indigenous communities is not a product of the existence of the mycobacteria itself, nor is it attached to the actual rate calculation or the probability associated with it, but is attached to the historical acts of the Canadian government and society. Of course, there may be instances when we do not attach any blame or responsibility to a risk at all; for example, the risk of dying due to prostate cancer only exists in men or those persons born male. The point is, rather, that instances where we do place blame for the existence of risk serve as evidence that we are always interested in the causes of risks in a normative sense, even in instances where there is no blame whatsoever. Thus, the statistical sense of risk being the probability of a hazard coming about should not and cannot be divorced from cause as a “primary variable” in
descriptions of risk,26 at least not when we are interested in risk for the purposes of public debate and policy. So even if there exists some sense of risk that is not concerned with the normative sense of causation, this is not the sense of risk that is used in public health. It should be noted that the importance of determining the causes of risks is already part of the work of epidemiologists in two different ways, one direct and the other indirect. First, the definitions of “attributable risk” presented earlier explicitly note that it requires a determination of causality between an exposure to a particular hazard and a given population. The definition of a “risk factor” is similar to that of “attributable risk” except that causality may or may not be established; but even here, there is at least a sense of correlation with a probability attached to it between a population and its exposure to a hazard. Thus, this is a direct example where “risk”—understood in the statistical sense of a hazard and the probability of it occurring—becomes “risk + cause.” Second, recall that part of the objectives or purposes of the epidemiologist is to help the public and decision-makers interpret risk for the sake of debate and policy. For the public and policymakers, it matters why things happen, and it rightly ought to matter. If the goal of public health is the improvement of the health of populations, then understanding not just which diseases are prevalent but also why they are prevalent is imperative. It is only by understanding the causes of diseases that we can then intervene and address or prevent them; likewise, it is only by understanding the causes of good health that we can then promote it. So, if one of the objectives of epidemiology is to help interpret disease and health for the public and for policymakers, then part of that description includes clarifying causality and correlation. But more importantly, it matters—and ought to matter—for the public and policymakers who is responsible for the causes of disease because sometimes we will want to blame the causers and hold them responsible. Thus, even if epidemiologists wish not to become involved in “the politics” of public health, by describing the causes and correlations of risks to populations, and by discharging their duty to explain these causes and correlations, statistical conceptualizations of risk must include descriptions of a normative sense of risk if for no other reason than this is what the public and decision-makers care about. Epidemiology, though perhaps unaware or unsuspecting, is concerned with causes and, moreover, the normative sense of causes, necessarily so. However, one might raise the following objection: for practical and political purposes, we should want our epidemiology, and the notion of risk used therein, to remain normatively “neutral.” Why might this be? Verweij and Dawson argue that there are good reasons why we might be wary of a too normatively laden understanding of public health: “some might consider normative accounts of public health to be too political. This might mean that even those things we
can all agree are public health issues get lost because of disputes about the more marginal cases.”27 I understand their caution to extend to epidemiology since it is a principal activity of public health. The objection then is that although there may be some cases where the normative aspects of risk are important for epidemiologists to consider, most of the time that normativity is in the background, as it were. In other words, one need not explicitly engage at all times with a normative sense of risk in the conduct of epidemiology, and not heeding this caution threatens to divert our attention and energy toward the marginal cases that may be ethically problematic. Assuming that we would do more harm than good by diverting attention and driving polemics in public health by focusing on risk as normative—though I’m not sure I’m convinced of such an empirical supposition—the fact remains that there are times when, practically speaking, it is important to understand the normative aspects of risk in risk assessments. An epidemiologist well versed in the ethics of risk and its application to assessments should ideally be able to distinguish between instances when we should merely acknowledge cases that have underpinning values and those where there are ethical challenges that arise and require careful deliberation and action.
Section 3: A Brief Summary of Modern Articulations of Risk Imposition Before proceeding with a normative articulation of risk in the context of epidemiology in Section 5, it is important to give a brief history of the ideas that have recently emerged in thinking through the ethics of risk and risk imposition. In the next section, I will describe the link between normative notions of risk and justice. The last fifteen years or so has seen a rise in the philosophical examination, and ethical examination in particular, of risk (though classic texts with regard to risk have existed for many decades28). Analysis regarding what makes imposing risk upon others ethical is, perhaps, most easily conceptualized in consequentialist terms, broadly speaking.29 The general idea is that what justifies p imposing risk q onto r is that q will bring about a greater amount of good in the world than had the risk of harm q not been imposed on r, however one chooses to define the “good” in question. In a classic example, we know that the enterprise of driving will cause deaths and serious injuries, but we will allow it, and promote it, because of the economic and social good that comes with modern forms of transportation. In order to make it safer, laws and regulations govern what drivers are allowed to do, thereby increasing the good brought about by the enterprise of driving. In other words, the simplified consequentialist approach to justifying the imposition of risk centers on weighing the probability of
Ethics of Risk 75 good coming from activity q, minus the probability of the harm that also accompanies activity q. This simple but powerful formulation of risk of harm is used throughout medicine and public health; for example, various formulations of the precautionary principle or the variations of cost/benefit analyses that are used in health economics stem from this formulation. The problem with the consequentialist formulation of risk of harm is that (a) it is indiscriminate regarding who bears the risk itself and (b) it is unclear what justifies imposing risk on persons who are otherwise free30 to reject the imposition of risk. The first challenge is a classic one leveled against many consequentialist theories that are agent-neutral, namely that it does not distinguish between the distribution of benefits and burdens of consequences and, as such, may inadvertently further disadvantage already marginalized persons for the sake of the overall good.31 This then leads to the second challenge, which is what exactly does justify imposing risks upon a free or autonomous person, if not the good consequences of an action? Here, deontic theories—or at least some deontic considerations—can provide an answer given the central role of agency in this tradition. As such, this second problem becomes the problem that nonconsequentialist theories of risk must address. Stated differently, that risk is a part of everyday living is a truism; the challenge becomes balancing the risk we impose and have imposed upon us, coupled with the freedom to live and direct our lives. Bypassing the labels of consequentialism and deontology,32 Christian Munthe attempts to improve upon consequentialist intuitions by introducing what he calls the “idea of relative progressiveness,” which states that: to be ethically defensible, the moral importance of running the risk of harm or loss has to be seen as relative to not only the harmfulness of that particular harm but also to what we have to lose or gain by choosing any of the other options open to us in a situation of choice [emphasis in original] . . . the moral importance of harms becomes greater the more we have to lose by embarking on an activity that may lead to the harm in question.33
Munthe’s relative progressiveness asks us to consider counterfactuals about what choices one could make, and that in instances when a risk of harm is great, one’s starting position matters as well as how much one has to gain or lose by engaging in that risky activity or being subject to a risk. There is some evidence that in public health, policymakers and practitioners certainly do employ something like relative progressiveness in the context of real-world decision-making. For example, participants in a recent qualitative project seem to support providing persons with TB access to bedaquiline and delaminid, two recently produced compounds to be used in combination with other drugs to address extensively
76 Ethics and Epidemiology drug-resistant TB (XDR-TB). The unfavorable risk profiles of bedaquiline and delaminid were dismissed by participants when compared to the near-certain deaths of persons with XDR-TB without access to these new drugs.34 My concern with the relative progressiveness idea is that although this seems intuitively true, it is not clear it provides much explanatory value. It is not clear how it can be used to justify imposition of risk beyond rearticulating and providing a clearer and more nuanced defense of consequentialist intuitions, namely that we have to be very careful about weighing the relative goodness and badness of potential risky activities. That is, it seems to reaffirm the relativeness of harm/benefit or “risk of harm/possibility of benefit” analysis, which is important, but insufficient to guide moral decision-making, as noted earlier. Perhaps the most robust attempt to answer the question of how a nonconsequentialist account justifies risk imposition is that of John Oberdiek.35 According to Oberdiek, a theory of imposing risk of harm must include holding myself and others accountable for those risks that are imposed (and that are not) in a reciprocal manner between autonomous persons. The sense of autonomy that is used by Oberdiek is that, in part, of Joseph Raz, which (in the words of Oberdiek) “requires plotting one’s own life and having a range of acceptable options from which to do so” whereby removing a choice may (though not necessarily) diminish a person’s autonomy.36 Crucial for Oberdiek’s account, the risk of harm that may be transgressive need not be known by individuals themselves; that is, it need not be experienced by the individual at risk since what matters ethically is not merely one’s biological life but also one’s normative life. In other words, the aims and goals of one’s life and what makes living worthwhile exist and extend beyond one’s biological life (e.g., one’s goal to raise one’s children to be good, responsible adults). The notion of harm that Oberdiek adopts and adapts is that of Feinberg, meaning it is not solely or primarily physicalist, but rather is understood as a wrongful setback to a person’s interests; as such, a risk of harm “foreclose formally safe possibilities” thereby affecting a person’s autonomy and, crucially, need not actually lead to harm to be wrong.37 Imposing risk that diminishes autonomy is justified, according to Oberdiek, when the “ends and means that produce the risks could be endorsed by those who are subject to them.”38 In other words, what justifies risk imposition is that the person who is enduring the risk agrees to it as a reasonable part of modern life as part of a social contract. As Oberdiek notes: “The authority that one has over one’s own life is not, however, unlimited. The pursuit of personal ends in a complex society requires a great deal of interaction with others, as of course does the pursuit of mutual ends.”39 Take, for example, the enterprise of driving as discussed earlier; what justifies driving is not that it brings about the most good, but rather that we exist in a type of social contract that confers on persons the benefits and burdens of driving and that we agree to participate. Oberdiek’s reciprocal understanding
Ethics of Risk 77 of risk imposition then not only can be endorsed by mixed deontologists like Raz or Feinberg but may also appeal to some Kantian scholars due to the mirroring of reciprocity of risk to reciprocal notions of autonomy.40 Crucially, the acceptance of risk impositions necessary for modern life turns not on the acceptance of risk by each and every individual in that society, but rather on what a reasonable person would deem to be acceptable. As Oberdiek notes, the concept of a reasonable person as articulated in tort law “does not revolve around the judgment of an ordinary person simpliciter, but of an ordinary person exercising reasonable care” [emphasis in the original].41 In other words, the reasonable person must justify, or be satisfied with, the justification of a risk imposition on the basis that everyone could possibly enjoy the fruits of the type of risk (e.g., the enterprise of driving) and that the justification is both hypothetical and necessarily normative—in other words, the justification is grounded in a normative argument that satisfies respecting the autonomy of others while recognizing that risks must be imposed as a part of daily living. For example, an Amish person may not object to the enterprise of driving on the grounds that they do not drive since the normative argument and principle derived from it (that driving is a risk most people accept and exists for the betterment of all, including those who ex post facto are harmed by driving or choose not to drive) are justified on the grounds of the reasonable person.42 As Oberdiek notes: Discrete instances of risk imposition cannot be morally assessed in abstraction from the principle of which they are an instantiation . . . The basic reason why principles are important within contractualism is that conduct that conforms to principles, rather than being mere one-off acts, makes possible a distinctive and valuable pattern of relations that one-off conduct does not.43
As such, according to Oberdiek, it is the principle of justified risk imposition based on a reasonable normative person that should bind us, not real-life exceptions to these principles. An objection can be raised that appealing to the notion of a reasonable person, if ever sound as a method in law or ethics, cannot be used in the context of risk appraisals. Evidence from psychology or behavioral economics strongly suggests that people are not very good at assessing risk and that different people have different thresholds for risk taking. If this is true, it’s not clear that Oberdiek can even make reference to the reasonable person being a normative conception, since even one’s normative conception of acceptable risk might naturally vary widely. The response is, I believe, twofold. First, I would argue that the idea of a reasonable normative person, instead of a reasonable person simpliciter, must provide reasons for placing others at risk, which can then be evaluated morally as would any other action. In other words, there is a sense in which Oberdiek’s articulation
78 Ethics and Epidemiology of ethically acceptable risk imposition is dependent not on the idea of a reasonable person at all, but on the less conceptually loaded idea that one must defend one’s actions with reasons, which can be evaluated. Second, Oberdiek himself introduces a “test,” as it were, that the reasonable normative person must abide by when considering risk impositions, namely that “risk must be characterized to be least objectionable to the individual to whom it is most objectionable.”44 If we take the example of the Amish person objecting to the enterprise of driving, we might say that given some Amish will use nonmotorized modes of transportation (e.g., horse-drawn carriage), the law must accommodate them and provide instructions to drivers that best protect them.45 Oberdiek’s theory, including the use of the reasonable normative person, rests on an important epistemological claim regarding how we know and describe risk. He describes two paradigmatic ways to conceptualize risk, that of the fact-relative perspective and the belief-relative perspective. The fact-relative perspective is dependent upon an agent knowing all the relevant causal facts before acting; stated differently, it purports that the right thing to do in a case of risk can be determined by objectively examining the facts of the matter. The belief-relative perspective holds that risk is always—or overwhelmingly—a subjective experience of the person who is risking and the person subject to the risk. According to Oberdiek, the problem with the fact-relative perspective is that it is insufficiently action-guiding, since it is false that a person can ever know all the relevant facts about a risky situation before acting (i.e., there is always some uncertainty); the belief-relative perspective, on the other hand, is insufficiently normative, since the right course of action is always subject to the perspectives of the parties at risk. Thus, building from Derek Parfit, Oberdiek introduces as a middle ground the evidence-relative perspective, which holds that a person is morally responsible to act in risky situations with the knowledge that a reasonable person could know or discover. It is worth stating at length Oberdiek’s position using his words: [T]he proper characterization of a risk imposition is that which reveals the greatest risk to the patient [the one subject to risk], as judged from the agent’s [the one imposing the risk] evidence-relative reasonable person perspective. The agent is responsible for acting in light of whatever morally relevant causal facts a reasonable search would discover, taking a middle way between the fact-relative perspective’s excessive epistemic demands and the belief-relative perspective’s insufficient epistemic demands, thus giving due regard to both agent and patient. The patient is still the most vulnerable party, however, for it is the patient who is exposed to risk. To make up for this fact, the characterization of the risk imposed upon the patient must be construed as favorably to the patient as is compatible with the agent’s evidence-relative reasonable person
perspective: the greater the risk, the stronger the patient’s claim will be under any plausible moral framework of risk imposition.46
Oberdiek’s theory of risk imposition, then, is that of the reasonable normative person who must act in an evidence-relative manner toward the person subject to the risk, where the reasonable normative person is responsible to take into account whatever facts could be known or discovered prior to imposing risk. This understanding of risk is assumed to be reciprocal between members of a society, such that it protects and promotes the autonomy of both the risk imposer and the person subject to the risk, as best as possible.
Section 4: Justice and Risk Scholars seem to conclude that risks of harm that are unequally distributed due to preexisting unjust social structures or relationships doubly impede persons morally, subjecting them to both the original injustice (e.g., poverty)47 and the risks that come about from that injustice (e.g., working as a sex worker without legal protections in order to feed their children). For Jonathan Wolff and Avner de-Shalit’s theory of what makes disadvantage wrong, theorizing about the role of risk is central. They argue from a functionings perspective of justice that “[o]ne central way of being disadvantaged is when one’s functionings are or become insecure involuntarily, or when, in order to secure certain functionings, one is forced to make other functionings insecure in a way that other people do not have to do” [emphasis in original].48 The first disjunct holds that being forced to bear risk against one’s wishes may be an injustice when one cannot reasonably do otherwise (e.g., a child walking on the road to school due to a lack of neighborhood sidewalks). The second way that risk may be unjust is when commitment to securing one’s basic need in order to function properly places another basic need in jeopardy (e.g., a Muslim person taking a job in a Hindu neighborhood in India in order to secure money for food but thereby placing themselves at risk of bodily harm).49 Another way of interpreting this second sense of unjust risk is by acknowledging that risk sometimes has a multiplying or amplifying effect, where risk begets more risk.50 For example, some sex workers are subject not only to poverty itself but also to risks from interactions with johns and police, and to potential sexually transmitted infections. Although Wolff and de-Shalit’s general argument turns on one accepting the functionings and capabilities theory of justice, the argument they present regarding risk and disadvantage would seem to apply to most theories of justice. For example, liberal theories of justice—ones that take seriously a robust, and perhaps relational, sense of autonomy—would be inclined to agree that an
unequal distribution of risk is unjust, as it would be for any other good. Ethical risk imposition, since it cannot be avoided in modernity, may even be conceptualized as a primary good or burden to be distributed equally given the unavoidable nature of risk and risk imposition. If the imposition of risk onto certain groups or populations is due to perpetuated social structures, then luck egalitarians would seem to be amenable to the idea of fair distribution of risks, too.51 Some level of convergence alone between theories of justice does not make it the case that Wolff and de-Shalit’s position ought to be taken as sound, but short of a careful analysis—which I will not do here—it does seem to provide some support and reason to accept it as true. I would argue that although Wolff and de-Shalit’s claims seem intuitively appealing to me, they lack explanatory power; however, they also provide an important point of extrapolation to Oberdiek’s theory of risk imposition. In fairness to Wolff and de-Shalit, the aim of their book Disadvantage is not to focus on the ethics of risk but rather to provide “practical guidance” on the application of egalitarianism for policymakers.52 Here it is worth noting that they describe two causes or consequences of risk as disadvantage: first, that persons who are constantly and disproportionally subject to risk struggle to plan their lives due to uncertainty and stress, and second, that this may then lead to despondency and believing that everything is “beyond one’s control, even when in fact this is not the case.”53 I believe that Oberdiek’s theory of risk imposition would provide a defense for Wolff and de-Shalit’s observations. If someone is constantly having to take risks due to socioeconomic factors and political marginalization that others do not, to the extent that it impedes their ability to plan their life, then on most accounts this person is lacking in their capacity of autonomy in some important respect. And this is Oberdiek’s point: there’s no sense in which any reasonable normative person would enter into a social contract to live in perpetual and unequal risk relative to others in society. Stated simply, unjust risk belies reciprocal notions of autonomy and risk imposition. Conversely, it does not appear that Oberdiek aims to extend his theory to population-level challenges or public health. Still, Wolff and de-Shalit’s claims regarding risk and justice seem to suggest that one could extrapolate from Oberdiek’s theory toward challenges in public health, especially since the history of contractualism upon which he bases his theory of risk imposition itself has a long tradition in debates about justice. The claims about a reciprocal notion of risk imposition could be imagined to exist as a countless series of bilateral agreements between people, between individual persons and society, and between groups of people; notions such as autonomy and self-determination are not merely individual-level concepts in political philosophy (e.g., we often speak about the self-determination of Indigenous peoples), and there seems to be no reason to restrict it here.
Section 5: Applying Normative Conceptions of Risk in Epidemiology So what can the recent literature on the ethics of risk imposition lend to our conceptualizations of risk in the context of epidemiology? First, while Cwikel explains the role of SEPD in our conceptualization and description of risk factors for disease and health, Wolff and de-Shalit’s arguments about the nature of risk and justice fit well with social epidemiology, thus providing a normative defense for Cwikel’s explanation. That is, where Cwikel argues that it is descriptively the case that risk factors are shaped by SEPD, Wolff and de-Shalit provide the normative ethics argument as to why we must care about it, insofar as any notion of risk that is conceptually linked to the SEPD of risk factors is a matter of justice. Second, I believe we are now in a position to add some explicit normativity to the definition of risk as used in epidemiology. To begin, one might adopt a modest approach and extend Wolff’s insight that people care morally about the causes of risk and make it such that epidemiologists define risk as a hazard + the probability of the hazard + the cause of the hazard, where “cause” is understood to include a description of who is subjecting whom to risk. The advantages to this modest formulation are that it captures normatively salient information that would be relevant in understanding risk in epidemiology—especially when in conversation with members of the public or decision-makers—in a simple format; it requires minimal understanding of ethics. Moreover, it is the least normative of the normative options; that is, epidemiologists may plausibly state that using the modest approach means merely describing that the risks in question are normative in some manner, but may deny that they themselves are arguing that the particular risk is ethically right or wrong. However, this approach’s virtues are also its vices; primarily, it does not tell us why the risk may be problematic or not. Perhaps epidemiologists are not in a position to solely make this determination, but it seems prima facie correct that they would need to be engaged in a moral evaluation, along with others, precisely because of their training in epidemiology. The stronger approach would be to add Oberdiek’s insights regarding risk imposition to our definitions of risk in epidemiology. Such an approach would articulate the “cause” of risk, as used in the modest approach, in a reciprocal manner whereby we use the test of the reasonable normative person. Doing so allows us to articulate the risk imposed on a population as rightful or wrongful by appealing to autonomy or self-determination, and its relational characteristics. Moreover, though normative, through the use of the evidence-relative perspective as part of the reasonable normative person test, we are able to reject a purely subjective understanding of risk, thus avoiding a devolution toward relativism.
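On the modest approach, then, the epidemiologist’s description of a risk would carry the cause along with the hazard and its probability. A minimal sketch of what such a record might contain is given below; the field names and example values are invented for illustration, and filling them in does not by itself deliver the strong approach’s verdict on whether the imposition is justified:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ModestRiskDescription:
    hazard: str                 # what the harm or adverse outcome is
    probability: float          # probability of the hazard occurring in the population
    imposed_on: str             # who is subject to the risk
    imposed_by: Optional[str]   # who, if anyone, creates or sustains the risk
    how_imposed: Optional[str]  # how the risk imposition comes about

# Hypothetical example; the probability is illustrative, not an official estimate
example = ModestRiskDescription(
    hazard="Active disease among members of a marginalized community",
    probability=0.002,
    imposed_on="Residents of an underserved region",
    imposed_by="Historical and ongoing policy decisions",
    how_imposed="Crowded housing, poverty, and restricted access to basic services",
)
```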
In addition to having greater explanatory power relative to the modest approach, I believe the strong approach would make risk assessments more robust, too. Recall that a risk assessment is a process by which one determines or describes a risk factor or risk attribute, while using various sources of data. The steps of a risk assessment, as described by Last, include hazard identification, risk characterization, and exposure assessment. I would argue that it ought to also include the strong approach as part of the normative evaluation of the cause of risk; the modest approach provides insufficient guidance for epidemiologists regarding how to actually go about describing the normative sense of risk, while the strong approach is action guiding. I concede that one may very well disagree with the application of Oberdiek in the strong approach, but then there is a clear set of ideas to which one may actually object, unlike the vaguer modest approach. The potential disadvantage is that this would be highly political, and as noted earlier, there may be good reasons to avoid too normatively laden descriptions of risk in the context of epidemiology and public health policy more broadly.
References 1. Broadbent, A. (2013). Philosophy of epidemiology. Palgrave Macmillan, 22. 2. As judged by me, given the multiple copies of each textbook available at my university’s library. 3. Gordis, L. (2009). Epidemiology (4th ed.). Elsevier/Saunders, 201. 4. Last, J. M. (2001). A dictionary of epidemiology. Oxford University Press, 159. 5. Büttner, P., & Muller, R. (2011). Epidemiology. Oxford University Press, 128. 6. Webb, P., Bain, C., & Pirozzo, S. (2005). Essential epidemiology: an introduction for students and health professionals. Cambridge University Press, 147. 7. Büttner & Muller, Epidemiology, 299. 8. Last, Dictionary of epidemiology, 160. 9. Last, Dictionary of epidemiology, 159. 10. Last, Dictionary of epidemiology, 159–160. 11. Gordis, Epidemiology, 339. 12. I will return to this particular issue later in the chapter, when I argue that Oberdiek’s evidence-relative perspective provides us with a middle ground between purely objective and purely subjective accounts of risk. 13. Cwikel, J. (2006). Social epidemiology: strategies for public health activism. Columbia University Press, 48–49. 14. Cwikel, Social epidemiology, 63. 15. Verweij, M., & Dawson, A. (2007). “The meaning of ‘public’ in ‘public health’.” In A. Dawson & M. Verweij (Eds.), Ethics, prevention, and public health. Oxford University Press, 13–29 (quotation at 18). Verweij and Dawson argue that definitions of public health can be (a) narrow or broad (where the latter includes greater discussion of things like the social determinants of health) and (b) descriptive or normative. One may reasonably ask why we should accept the narrow/broad and descriptive/normative distinctions in the definitions of public health. Since this is not a central concern
of my chapter, I will set this question aside, except to say that it seems at least an intuitively plausible and a helpful heuristic. 16. Gordis, Epidemiology, 3. 17. Gordis, Epidemiology, 357. 18. Gordis, Epidemiology, 358. 19. Webb et al., Essential epidemiology, 2. 20. Büttner & Muller, Epidemiology, 2. 21. Here I mean that epidemiologists are not trained in moral theory and, as such, are not well equipped to expound on what ought to be done from a moral philosophy point of view. However, as I will argue later in the essay, they necessarily need to be part of public health and public health ethics discussions. 22. Tuberculosis: Monitoring. (2019). https://www.canada.ca/en/public-health/services/diseases/tuberculosis/surveillance.html 23. Patterson, M., Flinn, S., & Barker, K. (2018). "Addressing tuberculosis among Inuit in Canada." Canada Communicable Disease Report, 44(3/4), 82–85. 24. Murphy, K., McKay, J., Cohen, S., & Frizzell, S. (2019, March 8). "Trudeau apologizes for 'colonial,' 'purposeful' mistreatment of Inuit with tuberculosis." CBC News. https://www.cbc.ca/news/canada/north/trudeau-apology-tuberculosis-iqaluit-1.5047805 25. Wolff, J. (2006). "Risk, fear, blame, shame and the regulation of public safety." Economics and Philosophy, 22(3), 409–427 (quotation at 424). doi:10.1017/S0266267106001040 26. Wolff, "Risk, fear," 426. 27. Verweij & Dawson, "The meaning of 'public' in 'public health,'" 10. 28. Nozick, R. (1974). Anarchy, state, and utopia. Blackwell; and Thomson, J. J., & Parent, W. (1986). Rights, restitution, and risk: essays in moral theory. Harvard University Press. 29. Hayenhjelm, M., & Wolff, J. (2012). "The moral problem of risk impositions: a survey of the literature." European Journal of Philosophy, 20, E26–E51. doi:10.1111/j.1468-0378.2011.00482.x 30. The assumption here is that one cares about autonomy as a right or a good. This is certainly a concern for liberals but would also extend to any theories that hold that autonomy is a good to be promoted. 31. Hayenhjelm & Wolff, "The moral problem." 32. Munthe, C. (2011). The price of precaution and the ethics of risk, 57. http://myaccess.library.utoronto.ca/login?url=http://books.scholarsportal.info/uri/ebooks/ebooks2/springer/2011-11-21/1/9789400713307 http://myaccess.library.utoronto.ca/login?url=https://link.springer.com/openurl?genre=book&isbn=978-94-007-1329-1 33. Munthe, Price of precaution, 118–119. 34. Boulanger, R., Komparic, A., Dawson, A., Upshur, R., & Silva, D. (2020). "Developing and implementing new TB technologies: key informants' perspectives on the ethical challenges." Journal of Bioethical Inquiry, 17(1), 65–73. doi.org/10.1007/s11673-019-09954-w 35. Oberdiek, J. (2017). Imposing risk: a normative framework. http://myaccess.library.utoronto.ca/login?url=https://www.oxfordscholarship.com/view/10.1093/oso/9780199594054.001.0001/oso-9780199594054 36. Oberdiek, Imposing risk, 86. 37. Oberdiek, Imposing risk, 87. 38. Oberdiek, Imposing risk, 128.
39. Oberdiek, Imposing risk, 128. 40. Silva, D. S., Dawson, A., & Upshur, R. E. (2016). "Reciprocity and ethical tuberculosis treatment and control." Journal of Bioethical Inquiry, 13(1), 75–86. doi:10.1007/s11673-015-9691-z 41. Oberdiek, Imposing risk, 48. 42. Oberdiek, Imposing risk, 143–149. 43. Oberdiek, Imposing risk, 147. 44. Oberdiek, Imposing risk, 64. 45. This example could even extend to why cyclists or pedestrians are upset or angry about unsafe drivers, and more importantly about municipalities and states/provinces that institute insufficient protections at a systems level, because their safety is not being sufficiently accounted for in the context of a dangerous activity like driving and sharing the roads. 46. Oberdiek, Imposing risk, 64. 47. Hansson, S. O. (2013). The ethics of risk: ethical analysis in an uncertain world. Palgrave Macmillan. 48. Wolff, J., & de-Shalit, A. (2007). Disadvantage. Oxford University Press, 72. 49. Wolff & de-Shalit, Disadvantage, 65. 50. Wolff & de-Shalit, Disadvantage, 66. 51. Segall, S. (2010). Health, luck, and justice. Princeton University Press, 107. 52. Wolff & de-Shalit, Disadvantage, 3. 53. Wolff & de-Shalit, Disadvantage, 69.
5
Risk and Precaution The Ethical Challenges of Translating Epidemiology into Action Stephen D. John
Epidemiology studies the distribution and determinants of health outcomes within populations. A wide variety of political philosophies agrees that policymakers should care about the distribution and determinants of health. Hence, much epidemiological research is, could be, or should be policy-relevant. However, translating epidemiological research into policy also requires ethical or political deliberation; it does not follow from the fact that eating processed meat causes cancer that we must ban processed meat, if doing so would involve a curtailment of individual liberty. Many of the ethical issues that arise at the interface between epidemiology and policy, such as the proper balance between the public good and individual liberty, are familiar from broader ethical and political debate.1 This chapter outlines two less familiar issues, both falling under the broad topic of "the ethics of risk": problems of chance and problems of certainty. Problems of chance concern the ways in which risk factor epidemiology complicates the task of balancing between individual interests and the collective good. For example, a 2012 study of the UK's breast cancer screening program concluded that the population benefits—1,300 deaths from breast cancer prevented each year—outweighed the population costs of overdiagnosis and overtreatment.2 Nonetheless, for each individual, the probability that her life will be saved by being screened is very low—about one in 180—and must be weighed against a chance of medically unnecessary treatment (as well as the less easily measured costs of being "medicalized"). It seems possible, then, that aggregate population health would be improved if each invitee was screened, but, also, that each individual might reasonably judge that being screened is not a gamble worth taking. How should we balance between these perspectives? My second topic, problems of certainty, concerns how certain we should be of some epidemiological finding before acting upon it. For example, in 2015, the International Agency for Research on Cancer (IARC) concluded that the common pesticide glyphosate is "probably carcinogenic."3 Still, this evaluation was based mainly on animal studies, and IARC noted that there is "limited evidence of carcinogenicity
in humans." As such, the result is far from certain. Still, is it certain "enough" for policy? Some policymakers seem to think it is, but others do not; for example, the use of glyphosate in public parks is banned in Portugal but not in the UK. You might think that these problems concern the uses of research findings by policymakers, rather than their generation by epidemiologists; they are part of public health ethics, rather than the ethics of epidemiology. However, epidemiologists need to be aware of these problems when disseminating and discussing their findings. Furthermore, as I argue later in this chapter, taking these concerns about communication seriously may have important implications for what and how research is conducted. In short, this chapter concerns the interrelated ethics of chance, certainty, and communication in epidemiology. In Section 1, I develop these schematic remarks, most notably by distinguishing the different aspects of the ethics of risk. In Section 2, I outline some problems in the ethics of chance, paying particular attention to the prevention paradox. In Section 3, I turn to the ethics of certainty, sketching ongoing debates over the precautionary principle in public health policy and how they relate to the evidence-based medicine movement. In Section 4, I explain how these issues interrelate by studying emerging issues around concepts of risk and precaution in epidemiology, linked to the rise of Big Data analytics.
Section 1: Clarifications and Motivations To frame my discussion, this section sets out two topics I have already touched upon: the relationship between (the ethics of) epidemiology and (the ethics of) public health, and the distinction between chance and certainty. Very broadly, we can distinguish two kinds of topics under the heading “ethics and epidemiology”: first, ethical issues that arise when doing epidemiological research; second, ethical issues that arise when using epidemiological research. Consider Doll and Hill’s famous cohort study of the relationship between British doctors’ smoking habits and lung cancer.4 We can ask of those studies whether they were performed in an “ethical” manner: for example, were respondents to surveys adequately informed? We can also ask what ethical implications should follow from them; for example, do they justify high taxation of cigarettes or are such policies paternalistic? It may seem that the second kind of question, although important and interesting, is not a question about the ethics of epidemiology per se; rather, it is a question in public health ethics. On this view, an epidemiological study is ethical just so long as it was conducted in accordance with certain ethical rules— consent was gained, data stored properly, and so on—regardless of how it is (not) used in policy.
Translating Epidemiology into Action 87 However, while there are sometimes good reasons to distinguish research and its applications, we should not overstate this distinction. First, policymakers cannot even consider a finding’s possible implications unless that finding is communicated to them. How results are communicated can have an important effect on how policymakers and publics understand the policy options.5 In virtue of their general political obligations, epidemiologists have a responsibility, then, to communicate their findings in as clear a way as possible. (And, as the case of the smoking–lung cancer link reminds us, communicating epidemiological findings may be no easy task, because various parties might have interests in preventing effective communication.)6 Hence, at least one part of the research process—decisions about what and how to publicize findings—must be sensitive to questions of use. Second, we can often predict that gaining certain sorts of knowledge will have harmful consequences and that gaining other sorts of knowledge would be beneficial. Plausibly, epidemiologists—or at least their funders—should pay attention to such knowledge in deciding research priorities.7 Given the precipitous rise in lung cancer rates in the early twentieth century, there were good reasons for Doll and Hill to study the epidemiology of lung cancer rather than, say, the epidemiology of prostate cancer. Therefore, a second part of the research process—decisions about what to research—should also be sensitive to questions of application. Third, and more controversially, many aspects of epidemiological research may be epistemically underdetermined, in the sense that different methodological choices may all be equally epistemically justifiable. Even if the different options are epistemically justifiable, this choice may have an important effect on the results obtained.8 Some philosophers have argued that such cases of methodological underdetermination should be resolved by non-epistemic considerations; for example, whether false positives or false negatives would have worse non-epistemic consequences.9 Although these arguments are controversial, they suggest—again—that insulating how we research from the effects of its communication is ethically risky. Therefore, in this chapter I focus on ethical problems in translating epidemiological research into action: specifically, the “ethics of risk.” Although standard definitions of epidemiology focus on the search for determinants of disease— hence implying a search for causes—it is the concept of “risk” that is ubiquitous in contemporary epidemiology, with talk of “risk factors,” “relative risk reduction,” and so on.10 However, the concept of “risk,” and its relationship to other concepts (most notably causation), is unclear and controversial, even before we get to the associated ethical issues.11 Very broadly, we use the term “risk” to talk about the probability of some harmful outcome. Of course, identifying and quantifying “harms” are no simple matter, as debates over the proper metrics for assessing health-related quality of
88 Ethics and Epidemiology life remind us.12 Furthermore, but maybe less familiar to ethicists, the concept of probability is horrendously complex. (Note that even my attempt to characterize this complexity will be controversial to some readers!13) Broadly, there are two ways of interpreting probability claims. One approach is epistemic: in terms of the proper degree of belief in some proposition given our evidence (as calculated, for example, by Bayes’ theorem). A second approach is in objective terms. This approach has (at least) two major variants. The first is frequentism, according to which the true probability of some outcome is the proportion of expected cases within some reference class. The second is propensity theory, according to which the probability of some outcome is grounded in the “powers” or “propensities” of some natural process. In turn, there is no shortage of arguments over the correct interpretation of probability claims, with important implications for such topics as the norms of statistical testing.14 Bearing these complexities in mind, we can identify two main senses in which we talk of “risk” in epidemiology: first, as the probability of some adverse, future health event (for example, the risk that Adam will develop cancer, or the risk of a bird flu epidemic); second, as the probability that some claim is wrong (for example, the risk that we have misidentified some chemical as safe when it is, in fact, carcinogenic). Typically, the first sort of claim is best understood in objective, frequentist terms, whereas the second is best understood in epistemic terms. Consider a simplified example: after conducting a genetic epidemiology study, we might conclude that some genetic variant is a risk factor for breast cancer. This claim is best understood in the first sense: the expected frequency of cases among women with this variant is higher than the expected frequency of cases among women without this variant. In this sense, each woman with the variant is at “higher risk” than each woman without. However, we might also worry that our study has gone wrong: for example, that the putative association between the variant and breast cancer is a statistical artifact. Again, we might describe this possibility in terms of “risk”; for example, that there is a “5% risk” that the putative association is not real. I shall use the term “chance” when talking about the first kind of risk and the term “(un)certainty” when referring to the second. Arguably, ultimately, the two senses can or should be reconciled. However, for our purposes, it is useful to distinguish them. To see why, consider two (highly idealized) scenarios: in one, a policymaker is told that a new strain of the influenza virus will, with certainty, affect 5% of the population (we just don’t know who); in a second, a policymaker is told that we cannot rule out the possibility that an influenza strain might affect 100% of the population; given our evidence, we think there is a 5% chance of this outcome. These two situations can be treated as mathematically equivalent, in the sense that we should expect 5% of the population to be affected by the new strain. However, ethically and politically, they seem very different. The first case raises issues of distributive justice—how
should we balance helping the (unidentifiable) 5% of victims against the interests of the (unidentifiable) 95%?—whereas the second does not.15 The second case raises questions about whether we should insure against low-probability/high-impact events, whereas the first does not. At least for purposes of ethical analysis, then, it is useful to keep these senses of risk separate. Before going on, one final clarification is necessary: there is a vast literature on the topic of public understanding of risk, much of which suggests that laypeople and policymakers find it hard to comprehend and reason about statistics and uncertainty.16 Given that policy should be guided, ultimately, by the people rather than the experts, these findings point to problems with the relationships between scientific research and democratic oversight. However, we should be careful in interpreting claims about public "irrationality." Very often, they slide from more-or-less straightforward claims that non-experts misunderstand such things as basic concepts of probability to (far more) controversial claims that non-experts reason "wrongly" because their responses (apparently) violate the maxims of rational choice theory. For example, consider the well-known fact that members of the British public seem to fear railway accidents more than car accidents, even though the chances of dying in a traffic accident are far higher than those of dying in a railway accident.17 Clearly, sometimes such fears might be based on a misunderstanding—or even downright ignorance—of the statistics. Furthermore, if all that people care about is minimizing their own risk of death, then such fears are irrational. However, it is not clear that maximizing their own well-being is all that people do or should care about. There may be important moral differences between suffering harm as the result of a car accident and suffering harm as the result of a railway accident; for example, the latter involves the breach of a duty of care—owed by the railway company to the passenger—in a way in which the former does not.18 Someone who cares more about railway crashes than car crashes is not necessarily irrational; rather, she may be expressing a more complex ethical or political concern. Of course, these claims are contestable! Still, before bewailing the public's irrationality in not listening to the experts, we must first ask whether the experts' claims themselves involve (implicit) value judgments—for example, that all forms of harm are commensurable—and, if so, whether they are justifiable. We need an ethics of risk. That is the topic of the next two sections.
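Before turning to those topics, the claim made earlier in this section that the two idealized influenza scenarios are "mathematically equivalent" can be spelled out with a single line of arithmetic (a minimal worked version, using only the figures already given in the scenarios). For a population of size N, the expected number affected is

\[
\underbrace{1.0 \times 0.05N}_{\text{scenario 1}} = 0.05N
\qquad\text{and}\qquad
\underbrace{0.05 \times N + 0.95 \times 0}_{\text{scenario 2}} = 0.05N .
\]

The expected burden is identical; what differs is how that burden is distributed across possible outcomes, which is why the two scenarios raise such different ethical questions.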
Section 2: Chances: Risk Factors and the Prevention Paradox One key use of epidemiological research is in contexts of preventive medicine. A key problem in this area arises from what Geoffrey Rose called the “fundamental axiom” of preventive medicine: that “a large number of people exposed
to a small risk may generate many more cases [of disease] than a small number of people exposed to a high risk."19 In turn, Rose suggested that this fundamental axiom implies the possibility of what he called the "prevention paradox": that "population strategies" that reduce the (relatively) low risk of many can be more effective at improving overall population health than "high-risk strategies," which reduce the (relatively) high risk of smaller subpopulations.20 There are various ways of reading Rose's comments (and some unclarity about whether, on any reading, there is a genuine paradox).21 In this section, then, using the example of cancer screening, we will outline two ways in which Rose's fundamental axiom can generate ethical problems in public health policy, in terms of a "relative" and an "absolute" paradox, before showing the implications of these problems for epidemiology. Consider a paradigm example of a population-level public health campaign: screening asymptomatic individuals for cancer. The basic logic behind such programs is compelling: the earlier we identify cancerous growths, the easier, in general, it is to treat them. Unfortunately, however, screening programs also lead to overdiagnosis and overtreatment, because the earlier we treat, the more likely it is that our treatment is medically unnecessary, in the sense that the cancer would never have gone on to cause symptoms in the patient's lifetime anyway. Screening programs will benefit some participants but harm others. There are, then, major disputes over whether (proposed or actual) cancer screening programs are effective, in the sense of generating more benefit than harm.22 As well as these familiar concerns around overdiagnosis, Rose's work points us toward two further problems in thinking about screening: a "relative" and an "absolute" prevention paradox. To motivate the "relative paradox," consider a stylized example of a real-life policy choice: Case 1: Age is a risk factor for breast cancer; so, too, is possessing both the BRCA1 and BRCA2 mutations. We have a choice between two screening programs: one screens all and only women between forty and seventy (a cohort with a 3.5% risk of breast cancer); the second screens all and only women with the BRCA1 and BRCA2 mutations (a cohort with a 70% risk of breast cancer). We can choose to pursue either program 1 or program 2, but not both.
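Purely for illustration, Rose's fundamental axiom can be made concrete for Case 1 with hypothetical cohort sizes (the chapter supplies only the risk figures; the population numbers below are invented for the sake of the arithmetic). Suppose 8,000,000 women are aged between forty and seventy, while 50,000 women carry the BRCA1/BRCA2 mutations. The expected numbers of cases are then

\[
0.035 \times 8{,}000{,}000 = 280{,}000
\qquad\text{versus}\qquad
0.70 \times 50{,}000 = 35{,}000 .
\]

Although each woman in the second cohort faces twenty times the risk faced by each woman in the first, the first cohort contains eight times as many expected cases, so a screening benefit of a given relative size would avert far more cases under program 1 than under program 2.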
It may seem that, setting practical concerns to one side, there are ethical reasons to pursue the second program, because it helps people at far higher risk. However, the number of women with the BRCA1/BRCA2 mutations is far smaller than the number of women between forty and seventy; therefore, this "high-risk strategy" might not have as great an effect on overall population health as the "population strategy" of screening the "moderate-risk" group. In this case,
there is a tension between two prima facie plausible principles: help those individuals at greatest risk of harm, and do whatever will have the greatest impact on overall population mortality and morbidity. Note that "greatest impact" here need not be interpreted solely in utilitarian terms of aggregate benefit: the "population strategy" might, for example, also reduce health outcome inequalities more than the "high-risk" strategy.23 Cases involving this kind of tension are ubiquitous in public health policy: changing the behavior of "moderate" drinkers may do more to reduce overall alcohol-related morbidity and mortality than targeting "heavy" drinkers; punishing those who drive a little over the speed limit may do more to reduce traffic accidents than punishing teenage racers; vaccinating those at greatest risk of flu may not be the best way to stop the spread of an epidemic; and so on. Call such cases, where the moral imperative to help those most at risk conflicts with the imperative to help the greatest number, "relative prevention paradoxes." Consider, now, a second scenario: Case 2: Age is a risk factor for breast cancer. Imagine that a screening program for all women between forty and seventy would be "effective," in the sense that its net effect on health-related outcomes would be positive. However, each individual woman in the population finds breast cancer screening psychologically costly, such that each would prefer living with a 3.5% risk of breast cancer and not getting screened over living with a lower risk and getting screened.
In this scenario, there are obvious reasons not to offer screening to these women: they will simply fail to take up the offer, unless they are tricked or cajoled—raising questions about consent.24 Consider, however, a more fundamental question: Is this a good policy at all? Oddly enough, the answer seems to be both yes and no. On the one hand, it seems plausible that implementing the policy would have net benefit at the population level. On the other hand, it seems that the policy would not be in the interests of each affected individual, because the reduction in mortality risk is outweighed by the risks of overdiagnosis and the known costs of screening. It seems that the policy is not in the ex-ante interests of any affected individual but promotes ex-post aggregate population outcomes! Other “population strategies” have a similar structure: for example, widespread statin use among older men leads to a significant reduction in mortality and morbidity; however, each individual’s risk decreases by only a small amount (from 2% to 1%), and statins can have side effects such that it’s unclear that any individual man benefits from statin consumption.25 I will call these cases, where policies seem negative from the perspective of their effects on each individual’s chances but positive in terms of their effects on population-level outcomes, “absolute prevention paradoxes.”
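A back-of-the-envelope version of the statin example may help to fix ideas (the 2% and 1% figures are taken from the text; the treated population size is an assumption added purely for illustration). For each man, the absolute risk reduction is

\[
0.02 - 0.01 = 0.01 ,
\]

so roughly 1/0.01 = 100 men must take statins for one of them to avoid the adverse outcome; across, say, 1,000,000 treated men, that is about 10,000 events averted. The population-level gain is substantial, yet for any given man the expected benefit is a single percentage point of risk, which he might reasonably judge to be outweighed by side effects and the burden of daily medication. That is the structure of an absolute prevention paradox in miniature.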
92 Ethics and Epidemiology It is clear that there could be a clash between the prima facie plausible moral principles that we should help the most vulnerable and that we should do as much good as possible, such that there could be “relative prevention paradoxes.” By contrast, it is less clear that we can even make sense of “absolute prevention paradoxes.”26 Specifically, one might think that if a policy is not in anyone’s ex- ante interests, then that policy simply cannot benefit the population as a whole. It might seem, then, that such apparent paradoxes arise only because our assessment of outcomes fails to consider some important aspects of overall well-being. For example, were we also to measure and aggregate the negative effects of the screening policy on subjective well-being, as well as its positive effects on population health, then the policy’s overall effect on population well-being would be negative. It is clearly true that policymakers often assess the outcomes of policies in narrow terms, ignoring the broader impacts of their decisions. In this sense, many absolute paradoxes may be practically important—because they remind us of the need to be sensitive to the broader “costs” of policies—but not involve a deep clash of ethical principles. (This may be particularly important in my case study of screening, where policies “medicalize” healthy individuals, hence changing their sense of self.) However, I suggest that absolute prevention paradoxes may sometimes reflect deeper ethical tensions. First, even when we focus on issues of measurement, many theorists propose principled reasons for adopting “narrow” accounts of well-being for purposes of assessing policy, related to notions of political neutrality or transparency.27 As such, there may be good reasons why policymakers’ assessments of outcomes ignore certain sorts of “subjective” factors that matter to individuals affected by those policies, giving rise to a tension between the individual ex-ante and population ex-post perspectives. Second, more generally, the principle that a policy that is not in anyone’s ex- ante interests cannot have beneficial expected consequences at the population level is contestable. That principle is true on a (broadly) utilitarian account of population benefit; however, it is false on alternative accounts. For example, some writers propose that we should care not only about the sum total of well-being a policy produces but also how well-being is distributed across the population, preferring more egalitarian over less egalitarian distributions. A policy that worsens everyone’s interests relative to the status quo can produce more equal outcomes than the status quo. In our simplified example of cancer screening, it might be true, for example, that, even if each woman would prefer not to be screened, the predictable net effect of screening each woman would be a narrowing of health inequalities, because there would be fewer cases of breast cancer–related mortality.28 (It is important to stress that such a position does not imply that it is permissible to force women to be screened against their will; rather, the key question is simply how to think about the costs and benefits of making an offer of
Translating Epidemiology into Action 93 screening at all.) Just as we can make sense of relative prevention paradoxes as involving a clash between prima facie plausible principles, so, too, we can understand absolute prevention paradoxes as pointing to tensions between plausible conditions on public health policies: that they should not harm affected individuals and that they should further equality.29 I will not resolve either the relative or the absolute prevention paradoxes here. Even if they have correct answers, policymaking will (quite properly) be sensitive to a wide range of different perspectives and viewpoints. Rather, I point to them as a way of thinking about two kinds of problems for translating epidemiology into policy. Epidemiological research can be used to ground predictions about the likely population-level effects of policy interventions, such as the introduction of a new breast cancer screening program (although any such predictions will always rely on ceteris paribus clauses, and, hence, be subject to high degrees of uncertainty;30 see the next section). However, my earlier comments point to at least two problems in inferring from the claims that a policy promotes population health outcomes that it should, therefore, be introduced. First, there are different ways of measuring population-level benefit, and calculations may be sensitive to which outcomes we choose to measure. It may be easy to claim that a policy will be beneficial when, in fact, it will not benefit those affected, at least from their subjective perspective. Second, and more complicated, a policy may have positive population-level consequences but overlook the plight of a particularly badly off group (as in relative prevention paradoxes), or, even, not be in the ex-ante interests of anyone (as in the more complex cases of absolute prevention paradoxes). For policymakers to be fully informed, they need to know not only the consequences of some policy but also how it affects the distribution of risk. One danger in applying epidemiology to policy, then, is that of placing too great a weight on aggregate consequences, at the expense of considering how policies change individuals’ chances. The other danger is the opposite: of placing too great a weight on “risks” at the cost of ignoring consequences. As noted earlier, a key feature of modern epidemiology is that it identifies “risk factors” for disease. Such information can undoubtedly benefit both individuals and policymakers. However, identifying “at-risk” individuals and populations may not provide the most policy-relevant information, at least if we care about the overall consequences of policy. Consider, again, breast cancer screening; there are, of course, many more “risk factors” for breast cancer than age. However, identifying these risk factors may not help us to construct a program that has the greatest population benefits; although targeting “high-risk” groups will reduce the risks of overdiagnosis, so, too, will it have a lower impact on mortality and morbidity. If we (at least sometimes) have moral reasons to adopt “population” rather than “high-risk” strategies, there are good reasons not to stratify the population into ever-smaller subgroups. Conversely, however, adopting policies solely
94 Ethics and Epidemiology on the basis of their population-level consequences (regardless of how these are measured) may overlook risk-based claims for help. Hence, thinking through the ethics of risk implies important challenges for the responsible communication and pursuit of epidemiology. Epidemiologists should be careful not to assume that just because a policy seems beneficial from one perspective—its effect on individual risks or on population-level consequences—it must, therefore, be beneficial from the other. Resolving the tensions between these standpoints is the proper role not of epidemiology but of the policymaking process. What epidemiologists should do, then, is to provide as much information as possible about the effects of policies viewed from both perspectives, without prejudging which perspective is privileged or how they should be balanced. This is challenging for three reasons. First, communicating notions of risk can be difficult. Second, there may well be other social actors who also seek to influence policy and who have an interest in “blocking” effective communication; the more subtle our advice, the more likely these actors are to undermine our claims. Third, the proposal assumes that results can be communicated in a “value-free” manner; the next section turns to concerns about this assumption, related to the concept of “certainty.”
Section 3: Certainty: The Precautionary Principle Just as the ethics of “chance” are complex, so, too, are the ethics of certainty. To introduce these issues, which I will relate to both the precautionary principle and the notion of an evidence hierarchy, it is useful to start with a toy example. Imagine that we have gathered evidence on the question of whether some chemical causes cancer. (Of course, in practice, matters are complex, because, typically, we have evidence showing that a chemical is a risk factor for cancer, hence also raising issues around the ethics of chance. Assume, for now, we are dealing with a deterministic relationship.) Although our evidence provides strong support for the claim that the chemical is carcinogenic, it does not render the claim certain—for example, because we cannot rule out the possibility that there is a common cause of both exposure and cancer. How certain should we be that exposure to the chemical does cause cancer before we act on that claim, for example by banning it? (Again, matters are complex in practice because there are often legal rules bearing on such questions31; my aim, however, is to understand how and when such rules are ethically justified.) It seems hard to stipulate any single answer to this question: demanding complete certainty would be unreasonable, because no scientific claim is ever proven conclusively.32 However, acting on the basis of the merest smidgen of a possibility
Translating Epidemiology into Action 95 would also seem unreasonable, because doing so would involve endless action against harmless exposures.33 The most reasonable answer, then, seems to be that we should vary the degree of certainty we demand before acting on the claim in proportion to the practical costs of different sorts of error. For example, if the costs of acting on a false positive—banning the chemical when it is, in fact, safe—would be very high, but the costs of acting on a false negative—failing to ban the chemical when it is dangerous—would be comparatively low, then we should demand a higher level of certainty than if the costs of a false negative are high and the costs of a false positive are low.34 Of course, assessing the costliness of errors is a complex matter, for reasons explored in the previous section; for example, there might be disagreement over the badness of some outcome that turns on how we balance concerns about aggregate well-being against concerns about equality. However, the general idea that we should vary our demands for evidence in proportion to the practical costs of different sorts of error seems plausible. After all, this is merely an instance of how we reason in everyday life: looking at the cloudy sky, we must decide whether or not to act on the claim that “it will rain later” by taking an umbrella to work. Plausibly, we might be willing to act on this claim when the costs of a false negative—failing to take an umbrella to work—are high (say, we are recovering from a cold), whereas we would not be willing to act on this claim when the costs of a false negative are lower (say, we are hale and hearty), even if our evidence—a cloudy sky—is identical in both cases. Consider, now, two debates at the intersection of epidemiology and policy. First, some claim that environmental and public health policy should be guided by the “precautionary principle”: when an activity raises threats of harm to human health or the environment, precautionary measures should be taken even if some cause and effect relationships are not fully established scientifically.35
The proper interpretation of such statements is a matter of much controversy. For example, some writers interpret the principle as implying that we should never adopt policies that pose a small probability of great harm, hence conflicting with standard models of cost/benefit analysis, and/or that the principle is un- or anti-scientific, as it would allow "hunches" or "feelings," rather than scientific research, to determine policy.36 I will not discuss all of these disputes over the plausibility or desirability of a "precautionary" approach to policy. However, one strand of debate points us to an important problem for understanding the reciprocal relationships between epidemiology and policy. Typically, epidemiological findings are reported only when they reach a particular level of statistical significance (for example,
p = 0.05). We can think of this standard as defining what counts as being "scientifically certain" of some epidemiological finding. However, as the earlier toy example suggests, it is not obvious that such "scientific" standards of certainty—which place a strong premium on avoiding "false positives"—are also the standards of certainty that should concern policymakers. Rather, when the costs associated with "false negatives" are very high, they should be willing to act on claims that are less certain. Therefore, it is possible that epidemiologists often fail to report claims that are "certain enough" for policy. In this way, we can justify at least one aspect of the precautionary principle: sometimes policymakers should act even when claims are not "fully established scientifically."37 The precautionary principle is not necessarily "un-" or "anti-scientific"; rather, it simply reminds us that scientific certainty may not be the only decision-relevant standard of certainty. We are, however, faced with two tricky ethical problems. First, we clearly need to distinguish those claims that are not scientifically established but are worth taking seriously from those that are not. In turn, policymakers need experts to help them understand these issues; hence, epidemiologists may find that they have a complex role in policymaking, of advising on the plausibility of claims that are not established. Second, in principle at least, epidemiologists have a wide range of communicative options beyond merely reporting that a claim has been established or not. Consider, for example, that many bodies synthesize evidence to publish reports that set out the likelihood of some result rather than simply stating a finding; for example, the IARC report mentioned at the beginning of this chapter that concluded that glyphosate is "probably" carcinogenic.38 Such reports might serve as models for socially responsible epidemiology because they allow policymakers to decide how certain is "certain enough." However, there is a lurking problem that even complex syntheses of evidence must proceed on the basis of research that has been conducted and/or published. Such research involved choices about what to research, how to research, and what to publish. If these choices were not aligned with policy-relevant standards of certainty, the syntheses may still be inappropriately skewed toward avoiding false positives or false negatives.39 As a second example of problems of certainty, consider the "evidence-based medicine" and "evidence-based policy" movements. Typically, these movements go beyond the platitude that evidence should inform policy to the notion that there is a "hierarchy of evidence."40 Although many epidemiologists question the relative privilege granted to the randomized controlled trial over other forms of study, such as observational studies, the general trend of these hierarchies is to privilege epidemiological tools over "basic science" or "expertise." Many writers have objected to these hierarchies on broadly epistemological grounds, suggesting, for example, that inferring claims about the effects of interventions in some target population on the basis of their effects in a study
population requires knowledge of the mechanisms underlying effectiveness.41 The remarks earlier in the chapter suggest a different kind of concern. Even if it is true that certain sorts of studies provide better evidence than other sorts of studies for the effectiveness of interventions, it does not follow that we should act only when we have completed the studies that provide the greatest degree of certainty. Consider, for example, the infamous advice that "if a study wasn't randomized, we suggest that you stop reading it and go on to the next article in your search."42 Rather, even if evidence hierarchies are correct and the only evidence we possess is of low quality, that evidence might be good enough to warrant intervention given the risks of false negatives. This reminder might be particularly important when we consider cases where, by their nature, "high-quality" evidence is hard to come by. For example, although the initial evidence linking smoking and lung cancer was based only on "observational studies," and, hence, resolutely in the middle of most evidence hierarchies, given the huge costs associated with failing to act on this putative link, it was certainly good enough to warrant intervention. Clearly, then, we should not confuse the claim that certain sorts of evidence would be epistemologically better with the claim that we should never act until we have those sorts of evidence. More generally, we should beware of creating a situation where epidemiologists only study problems where relatively "high-quality" evidence may be obtained, neglecting more complex problems. We should not allow the search for certainty to get in the way of doing the research that matters most. Note, in turn, that avoiding these problems requires reform of the institutional norms governing publishing, rather than some kind of individual effort by researchers.
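The idea running through this section, that the degree of certainty we should demand varies with the practical costs of error, can be given a minimal decision-theoretic sketch (the formalization is added here for clarity and is not one the chapter itself offers). Write p for our degree of belief that the chemical is carcinogenic, C_FN for the cost of failing to act when it is, and C_FP for the cost of acting when it is not. Acting has the lower expected cost whenever

\[
(1-p)\,C_{FP} < p\,C_{FN},
\qquad\text{that is, whenever}\qquad
p > \frac{C_{FP}}{C_{FP} + C_{FN}} .
\]

When false negatives are much more costly than false positives, this threshold falls well below conventional scientific standards of certainty, which is one way of cashing out the thought that policymakers may sometimes be justified in acting on claims that are not "fully established scientifically."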
Section 4: Chance, Certainty, Communication: Big Data as an Emerging Issue So far I have outlined two problems around the “ethics of risk”—concerning chance and certainty—that complicate the relationship between epidemiology and policy. In turn, both problems relate to the question of how researchers should communicate their findings. As we have seen, these questions of proper communication quickly become entwined with broader, not obviously “ethical” questions, such as the structure of scientific publishing, or even how to set significance levels. It would be nice to find a simple way through these issues, such as a single set of easily memorable principles to guide decisions. However, such a hope seems forlorn; instead, we need to engage with the complexities of cases. I will conclude my discussion, then, by showing how recent developments in research practice might make some of these challenges even harder.
There is much interest around the rise of Big Data and associated tools such as machine learning.43 Even if much of this hype is unjustified, it is certainly true that, as a result of technological developments, we are increasingly able to collect huge amounts of data and to process this data through new techniques. One key use of these technologies is to broaden our epidemiological understanding. For example, through Big Data and associated analytic techniques, we can identify ever more risk factors for disease.44 In turn, particularly when applied to -omics data, these approaches may allow us to stratify populations in surprising and novel ways.45 These developments both intensify and subtly change the two problems listed earlier. First, consider problems of "chance." One key claim of proponents of Big Data analytics is that their work will allow for the growth of a new form of "personalized medicine," which focuses in particular on prevention.46 This promise seems to imply the intriguing result that the best way to lessen the central tension in preventive medicine between the interests of populations and of individuals is to do even more studies of ever larger populations! However, against this promise, I suggest that Big Data research may, in fact, simply create new problems, related both to chance and to certainty. Concerning problems of chance, note, first, that personalized medicine does not truly give us information about individuals so much as allow us to segment individuals into ever-smaller "at-risk" groups. As noted earlier, there is no straightforward argument that takes us from "we have identified a particularly high-risk group" to "we should prioritize this group in intervention." Rather, we need to ask how treating those most at risk will affect overall outcomes. Personalized medicine may promise better medicine but worse public health outcomes. Second, and more fundamentally, many public health interventions depend on a kind of "public ignorance" of risk differentials. We claim, for example, that each should be vaccinated against measles because each is at risk of measles. However, it is probably false that, when we consider genomic variation, each is at equal risk of measles. Individuals who know their "personal" risk may, then, feel that they have weaker reasons to take part in mass preventive measures. However, at least in some cases—vaccination being a paradigm example—mass uptake may be required for the program to be effective. That is to say, the success of (at least some) public health interventions depends on a solidaristic attitude that requires that individuals are ignorant of differences between their individual chances. Increasingly personalized information, of the sort promised by personalized medicine, may undermine the attitudes required for successful public health interventions.47 Of course, the possibility that public ignorance may be public health bliss is not a knockdown argument against researching or communicating "personalized" risk information; rather, it points to a problem with a simple-minded assumption that more is always better in all ways.
As well as intensifying and changing issues around chance, the rise of Big Data also suggests new problems around certainty and communication. Typically, new tools for analyzing Big Data are "opaque," in the sense that they are so computationally complex that human users find it difficult to comprehend how they have reached the conclusions they have reached. For example, imagine that an algorithm recommends against offering insurance to some individual Jim; Jim demands to know why he has been treated in this way; unfortunately, the ways in which the system reached this conclusion were so computationally complex that they cannot be framed in terms that Jim—or, indeed, any human—could comprehend. These concerns give rise to the idea that individuals might have a "right to explanation," with some even suggesting that such a right is enshrined in the "General Data Protection Regulation" that took effect in the EU in 2018.48 Regardless of whether or not we do—or should—have a "right to explanation," these concerns point to a more general issue. In general, when epidemiologists put forward claims, they imply that they can defend or justify these conclusions. In turn, given the important role such findings can and should play in policy, it is important that epidemiologists can defend their claims; engaging in such practices serves as an assurance that epidemiologists' influence on policy is justified. However, as epidemiologists come increasingly to rely on automated tools to generate and justify conclusions, they may find that they can no longer justify their findings directly but, instead, have to appeal to the track record of algorithms. In this situation, their claims are ones that they can offer but cannot own. This lack of ownership need not imply a lack of certainty, but it does pose a significant, novel challenge to understanding how and when epidemiologists' claims should properly play a role in policy, insofar as it challenges norms of transparency and accountability.
Section 5: Concluding Comments The proper translation of epidemiology into policy raises many questions and problems. I have focused, however, only on two aspects of the “ethics of risk”: problems around chance (such as the prevention paradox) and problems around certainty (such as the precautionary principle). In turn, these topics are particularly important because they imply the significant limitations of the popular metaphor of translation. Epidemiological findings are not—or should not be—made, then applied to policy. Rather, how we make our findings should be sensitive to questions of use. In conclusion, however, it is important to note that, in very many real-life settings, the key problem is not an overeager willingness to use epidemiological findings without an appreciation of ethical complexity. Rather, it is that policymakers seem remarkably unwilling to engage with
100 Ethics and Epidemiology epidemiological findings, even when, given their stated or proper ethical and political aims, they should. In turn, there are a wide range of different actors with an interest in stopping epidemiological findings from influencing policy. I have said little about epidemiologists’ positive responsibilities to overcome such politically motivated refusals to engage with research. However, some of my comments earlier in the chapter do suggest an important insight into the tactics appropriate for responding to such problems. The public relations executives hired to undermine antismoking campaigns in the 1950s declared “doubt is our product.”49 In light of the importance that we all place on maintaining our health, this was a clever strategy. It is also a strategy that can be used again and again. The proper response to such strategies is not to show that findings are completely certain; that is an impossible task. Rather, it is to show that they are certain enough. But how certain is certain enough? The answer to this question requires us to take a stance on the value of different sorts of public health outcomes, and the relationships between these outcomes and individuals’ chances. In short, even when their findings are clearly policy-relevant, epidemiologists need to get their hands dirty in public health ethics.
References 1. For useful overviews of some of these issues, see Anand, S., Peter, F., & Sen, A. (Eds.). (2004). Public health, ethics, and equity. Oxford University Press; Coggon, J. (2012). What makes health public?: a critical evaluation of moral, legal, and political claims in public health (Vol. 15). Cambridge University Press; and Holland, S. (2015). Public health ethics. John Wiley & Sons. 2. Marmot, M. G., Altman, D. G., Cameron, D. A., et al.; Independent UK Panel on Breast Cancer Screening. (2013). “The benefits and harms of breast cancer screening: an independent review.” British Journal of Cancer, 108(11), 2205–2240. http://doi.org/ 10.1038/bjc.2013.177 3. Guyton, F., Loomis, D., Grosse, Y., et al. (2015). “Carcinogenicity of tetrachlorvinphos, parathion, malathion, diazinon, and glyphosate.” Lancet Oncology, 16(5), 490–491. 4. Doll, R., & Hill, A. B. (1954). “The mortality of doctors in relation to their smoking habits.” British Medical Journal, 1(4877), 1451. 5. Pielke Jr, R. A. (2007). The honest broker: making sense of science in policy and politics. Cambridge University Press. 6. For an account of how “agnotologists” can distort the communication of research, see the essays in Proctor, R. N., & Schiebinger, L. (Eds.). (2008). Agnotology: the making and unmaking of ignorance Stanford University Press; for (revisionary) thoughts on the normative implications of this phenomenon, see John, S. (2018). “Epistemic trust and the ethics of science communication: against transparency, openness, sincerity and honesty.” Social Epistemology, 32(2), 75–87. 7. Kitcher, P. (2003). Science, truth, and democracy. Oxford University Press.
Translating Epidemiology into Action 101 8. For useful discussion of how these phenomena relate to concerns about industry- funded research, see Wilholt, T. (2009). “Bias and values in scientific research.” Studies in History and Philosophy of Science Part A, 40(1), 92–101. 9. Douglas, H. (2000). “Inductive risk and values in science.” Philosophy of Science, 67(4), 559–579. 10. Greenland, S., Gago-Dominguez, M., & Castelao, J. E. (2004). “The value of risk factor (‘black box’) epidemiology.” Epidemiology, 15(5), 529–535; and Haack, S. (2004). “An epistemologist among the epidemiologists.” Epidemiology, 15(5), 521–522. 11. Broadbent, A. (2013). Philosophy of epidemiology. Palgrave Macmillan. 12. Hausman, D. M. (2006). “Valuing health.” Philosophy & Public Affairs, 34(3), 246–274. 13. For useful guides to these debates, see Gillies, D. (2000). Philosophical theories of probability. Psychology Press; and Mellor, D. H. (2004). Probability: a philosophical introduction. Routledge. 14. Nardini, C. (2016). “Bayesian versus frequentist clinical trials.” In Solomon, M., Simon, J. R., & Kincaid, H. (Eds.), The Routledge companion to the philosophy of medicine. Routledge, pp. 228–236. 15. On the related general question of how to think about unidentifiable victims, and how they relate to identifiable victims, see Cohen, I. G., Daniels, N., & Eyal, N. M. (Eds.). (2015). Identified versus statistical lives: an interdisciplinary perspective. Oxford University Press. 16. Kahneman, D. (2011). Thinking, fast and slow. Macmillan; and Sunstein, C. R. (2002). Risk and reason: Safety, law, and the environment. Cambridge University Press. 17. Wolff, J. (2006). “Risk, fear, blame, shame and the regulation of public safety.” Economics & Philosophy, 22(3), 409–427. 18. Wolff, “Risk, fear.” 19. Rose, G. (2008). The strategy of preventive medicine. Oxford University Press, p. 53. 20. Rose, Strategy of preventive medicine, Chapter 3. 21. Much of what follows is based on my presentation in John, S. D. (2014). “Risk, contractualism, and Rose’s ‘prevention paradox’.” Social Theory and Practice, 40(1), 28–50; for an alternative approach, see Kelleher, J. P. (2013). “Prevention, rescue, and tiny risks.” Public Health Ethics, 6(3), 252–281. 22. For good philosophically informed guides to these debates, see Solomon, M. (2015). Making medical knowledge. Oxford University Press; and Plutynski, A. (2012). “Ethical issues in cancer screening and prevention.” Journal of Medicine and Philosophy, 37(3), 310–323. 23. Fleurbaey, M., & Voorhoeve, A. (2013). “Decide as you would with full information!” In Eyal, N., Hurst, S. A., Norheim, O. F., & Wikler, D. (Eds.), Inequalities in Health: Concepts, Measures, and Ethics. Oxford University Press, 113–128. 24. Although the issue about consent here is complicated by questions around the proper presentation of risk information; for discussion, see Schwartz, P. H., & Meslin, E. M. (2008). “The ethics of information: absolute risk reduction and patient understanding of screening.” Journal of General Internal Medicine, 23(6), 867–870. 25. John, “Risk, contractualism.” 26. For discussion, see Thompson, C. (2018). “Rose’s prevention paradox.” Journal of Applied Philosophy, 35(2), 242–256. 27. Hausman, “Valuing health.” 28. Fleurbaey & Voorhoeve, “Decide as you would.”
29. Note that this is not the only way in which prevention paradoxes may emerge, because they may also be linked to "moralized" conceptions of who is liable to (perceived) "punishment." For these further complexities, see John, S. (2017). "Should we punish responsible drinkers? Prevention, paternalism and categorization in public health." Public Health Ethics, 11(1), 35–44. 30. Broadbent, Philosophy of epidemiology. 31. Cranor, C. (2006). Toxic torts. Cambridge University Press. 32. Douglas, "Inductive risk." 33. Sunstein, Risk and reason. 34. Rudner, R. (1953). "The scientist qua scientist makes value judgements." Philosophy of Science, 20(1), 1–6. 35. Wingspread Statement; Wingspread Conference on the Precautionary Principle, January 1998, Racine, Wisconsin. 36. Sunstein, Risk and reason. 37. John, S. (2010). "In defence of bad science and irrational policies: an alternative account of the precautionary principle." Ethical Theory and Moral Practice, 13(1), 3–18. 38. Guyton et al., "Carcinogenicity." 39. Stegenga, J. (2011). "Is meta-analysis the platinum standard of evidence?" Studies in History and Philosophy of Science Part C: Studies in History and Philosophy of Biological and Biomedical Sciences, 42(4), 497–507. 40. For a useful philosophically informed overview of the evidence-based medicine movement, see Howick, J. (2011). The philosophy of evidence-based medicine. Wiley Blackwell. 41. Russo, F., & Williamson, J. (2007). "Interpreting causality in the health sciences." International Studies in the Philosophy of Science, 21(2), 157–170. 42. Straus, S. E., Richardson, W. S., Glasziou, P., & Haynes, R. B. (2005). How to practice and teach EBM. Evidence-Based Medicine. Third edition. Elsevier, 13–29. 43. Beam, A., & Kohane, I. (2018). "Big data and machine learning in health care." Journal of the American Medical Association, 319, 1317–1318. https://doi.org/10.1001/jama.2017.18391 44. Cancer Research UK. (2016). "Stratified medicine programme." https://www.cancerresearchuk.org/funding-for-researchers/how-we-deliver-research/our-research-partnerships/stratified-medicine-programme 45. Easton, D. F., Pharoah, P. D., Antoniou, A. C., et al. (2015). "Gene-panel sequencing and the prediction of breast-cancer risk." New England Journal of Medicine, 372(23), 2243–2257. 46. Academy of Medical Sciences. (2015). "Stratified, personalised or P4 medicine: a new direction for placing the patient at the centre of healthcare and health education." https://acmedsci.ac.uk/download?f=file&i=32644 47. Prainsack, B., & Buyx, A. (2017). Solidarity in biomedicine and beyond (Vol. 33). Cambridge University Press, Chapter 6. 48. Selbst, A. D., & Powles, J. (2017). Meaningful information and the right to explanation. International Data Privacy Law, 7(4), 233–242; and Wachter, S., Mittelstadt, B., & Floridi, L. (2017). "Why a right to explanation of automated decision-making does not exist in the general data protection regulation." International Data Privacy Law, 7(2), 76–99. 49. Proctor & Schiebinger, Agnotology.
PART III
METHODS
6
Ethical Issues in the Design and Conduct of Community-Based Intervention Studies
Michelle C. Kegler, Steven S. Coughlin, and Karen Glanz
“Community-based intervention research” (CBIR) is a general term applied to research conducted in or with a community for the purpose of improving health outcomes, typically through changing behaviors or environments. CBIR is often conducted with geographically defined communities, but it also includes research with defined populations in a range of settings, including faith-based organizations, schools, worksites, and health care delivery or social service systems, as well as with people living with or at risk of certain health conditions and with the general public. Communities are defined by a shared sense of identity, interests, or characteristics. With this broad definition, communities can also be online. Interventions studied in community-based research are designed to improve the health of individuals and populations through wider adoption of healthful behaviors or reduction of unhealthy behaviors, early detection of risk factors or disease, changes in policies and practices to support healthy behaviors, improvements in built environments, efforts to disseminate evidence-based strategies, mobilization of community assets to address quality of life and community well-being, and combinations of these approaches. Our definition of CBIR is intentionally broad and encompasses community as the setting for the intervention, the target of the intervention, and the agent or mechanism for change.1 CBIR includes primary, secondary, and tertiary prevention and can also cover treatment, survivorship, rehabilitation, long-term care, and end-of-life research. While this chapter uses the umbrella term of “community-based intervention research,” studies with a variety of labels, including community health research, public health intervention research, behavioral research, health education research, health promotion research, community-based participatory research, participatory action research, citizen science, and dissemination and implementation research, can fall under this label or intersect with this category of research when focused on a community intervention. Intervention research in community settings has increased and diversified dramatically over the past forty years.2–5 Initially, heightened interest
in community-based interventions was stimulated by the epidemiological transition from infectious diseases to chronic diseases as the leading causes of death, rapidly escalating health care costs, and data linking individual behaviors to increased risks of morbidity and mortality.6,7 Subsequently, increased recognition of social ecologic models that acknowledge the power of social and environmental factors in shaping individual behavior has fueled interest in community-based research.8–11 More recently, a focus on social determinants of health and persistent health disparities in low-, middle-, and high-income countries has maintained the emphasis on CBIR, especially research conducted with a health equity lens.12–15 Additionally, advances in medicine and genetics research have drawn attention to the need for compliance with evidence-based therapeutic regimens, participation in screening programs for disease susceptibility and early detection, and appropriate use of health services.16–19 With these advances, new dilemmas have arisen, dilemmas involving ethics, law, and the study of human behavior.20–24 Moreover, the rapid expansion of social media and wireless technology has resulted in new types of communities and new intervention strategies that raise corresponding ethical issues.25–27 Public and professional concern about ethical issues in research has continued to grow. New ethical issues have arisen out of new health-related problems addressed by research, such as preexposure prophylaxis (PrEP) for HIV prevention, the opioid epidemic, gun violence, and the Zika virus,16,17,28–32 combined with the expanded repertoire of methods used by community health researchers, including community-based participatory research, citizen science, mobile health interventions, social media (e.g., Facebook and Twitter), and wireless wearable devices.25–27,33,34 Historically, most attention on research ethics focused on clinical research involving individuals; the emergence of ethical examinations of preventive, educational, and health promotion research in communities is a newer development.35,36 The challenge of balancing scientific rigor and dynamic community research environments with ethical concerns has become increasingly complex. In this chapter, we will examine the scientific, methodological, and practical foundations of CBIR that bear on ethical concerns. The chapter begins with a description of CBIR, including intervention strategies, study designs, and data collection methods. Given the major role of partnerships in community-based research, the chapter examines ethical issues along a continuum of community-engaged research and discusses the establishment, maintenance, and dissemination phases of community-engaged research. Considerations for working with vulnerable or disadvantaged communities are also shared, along with considerations for ethical issues in a global context. Traditional ethical principles in research such as respect for autonomy and beneficence are discussed, along with
suggestions for expanding these concepts to include community-engaged research. The chapter concludes with a brief review of professional codes of ethics with implications for CBIR.
Methodological Issues and Ethics in CBIR There is wide diversity in the choice of intervention strategies, data collection techniques, and study design in CBIR. We will consider ethical issues in the context of these methodologies.
Intervention Strategies Contemporary community interventions may address determinants of health problems at one or more levels of the social ecology. Interventions may focus on intrapersonal factors (such as knowledge, attitudes, behavior), interpersonal processes (family relationships, social support, social networks), institutional factors (organizations and their norms or rules), community factors (neighborhood walkability, community coalitions), and public policy.37 Interventions generally labeled as health promotion include not only educational and motivational strategies but also organizational change, policy directives, laws, economic supports, and community activation through partnerships or community organizing interventions.38,39 For example, interventions to reduce childhood obesity target a range of physical activity and nutrition determinants, including family rules about screen time and TV watching, family-based physical activity norms, home food environments, physical education classes in school, vending machine policies, school cafeteria offerings, safe biking and walking routes to school, physical activity opportunities in afterschool programs, and availability of community-based recreation facilities and activities, among others.40–43 Successful tobacco control programs have similarly used a variety of strategies, such as mass media campaigns, community coalitions, increased prices through excise taxes, retailer education, reduced financial barriers for cessation therapies, provider reminder systems, telephone counseling, and smoke-free policies.44,45 Other community-based health interventions range from environmental health interventions that advocate for and demand policy change to community partnerships to promote health equity.46,47 These higher-level interventions that focus on changing policies, systems, and environments are often complemented by more traditional interventions that focus on individual-level or interpersonal-level change strategies, which attempt to change knowledge, attitudes, beliefs, self-efficacy, social support, and social networks.37,48 New intervention strategies at the
individual level, labeled mHealth, capitalize on mobile and wireless technologies such as smartphones, text messages, apps, and social media.49–53
Study Designs Randomized, controlled experiments remain the most rigorous type of design in CBIR. However, numerous variations on randomized controlled trials are also employed because of practical and political considerations such as the reluctance of organizations or participants to take part in a study where they may be randomized to a no-treatment control group. In some studies, randomization to different conditions is performed on a unit larger than the individual, for example schools, churches, clinics, or worksites. This technique is both scientifically justified and necessary when the intervention involves organizational change or when social networks would create excessive contamination between groups (such as cross-talk within the organization), which happens in many worksite health intervention studies. However, when an “organization” is the unit of randomization, individuals may not have a personal choice about whether to participate, even though intervention strategies may be directed at these individuals.54 This raises the issue of community consent, in addition to the more traditional research participant consent. It can be difficult or ethically unacceptable to recruit organizations, groups, or individuals to participate in research without their receiving the benefit of some type of intervention or service. Control groups, in particular, can raise ethical issues in some situations where control group participants feel they are being denied the benefit of a potentially valuable intervention. Hence, additional design variations may include using modified control groups such as usual care or minimal intervention (e.g., an education brochure), wait-list designs, stepped wedge designs, before–after comparisons, and combinations of cohort and cross-sectional samples for evaluation. Wait-list, stepped wedge, and before–after comparisons enable all participants to receive the intervention over time. Usual care control groups ensure that control group participants are not denied standard care as a result of the research.
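To make the cluster-level designs just described concrete, the following sketch shows, in Python, how whole units such as schools might be randomized to study arms and how a stepped-wedge schedule guarantees that every cluster eventually receives the intervention. It is purely illustrative: the unit names, number of clusters, and number of periods are hypothetical, and a real trial would also involve stratification, sample-size calculations, and analysis that accounts for clustering.

```python
# Illustrative sketch only: cluster-level randomization and a simple
# stepped-wedge rollout. Unit names, cluster counts, and periods are
# hypothetical and not drawn from any actual study.
import random

def randomize_clusters(clusters, arms=("intervention", "control"), seed=2021):
    """Assign whole clusters (e.g., schools or worksites) to study arms."""
    rng = random.Random(seed)  # fixed seed so the allocation is reproducible and auditable
    shuffled = list(clusters)
    rng.shuffle(shuffled)
    # Alternate arms down the shuffled list to keep the allocation balanced.
    return {c: arms[i % len(arms)] for i, c in enumerate(shuffled)}

def stepped_wedge_schedule(clusters, n_periods, seed=2021):
    """Assign each cluster the period in which it crosses over to the intervention,
    so that every cluster receives the intervention by the final period."""
    rng = random.Random(seed)
    order = list(clusters)
    rng.shuffle(order)
    per_step = max(1, len(order) // n_periods)
    return {c: 1 + min(i // per_step, n_periods - 1) for i, c in enumerate(order)}

if __name__ == "__main__":
    schools = [f"school_{i:02d}" for i in range(1, 9)]  # eight hypothetical clusters
    print(randomize_clusters(schools))
    print(stepped_wedge_schedule(schools, n_periods=4))
```

In either allocation the decision is made at the level of the organization, which is precisely why individual consent for the intervention itself may not be obtained and why organizational- or community-level consent becomes an ethical issue.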
Methods of Data Collection The unit of observation in CBIR may be individuals, groups, organizations, or communities defined by geographic or political boundaries. Data may be collected both directly (by asking questions or conducting observations) and indirectly (by reviewing records and archival sources). The most common methods
for collecting data from individuals are surveys, using self-administered web-based or written questionnaires and telephone or face-to-face interviews. However, wearable devices that measure exposures, behaviors (e.g., physical activity, sleep patterns), health data (e.g., heart rates), and location, often linked to the internet, are increasingly commonplace and raise ethical concerns about who has access to the data and for what purposes, as well as privacy and informed consent. The era of Big Data and linkages across datasets without permission or informed consent requires close scrutiny of ethical issues.55 Big Data can include electronic medical records, insurance claims data, online activity, credit card purchases, text messages, social media, and genomic information.55 Access to these databases can require investigators to document specific data requests and the need for them, but often the data are available through commercial services with personal information such as addresses included. At the organizational or community levels, information about social structure and the physical environment may be obtained through interviews with key informants or by direct observation. For example, neighborhood nutrition environments have been assessed by observing food availability in a sample of stores or restaurants in a neighborhood.56 Indirect indicators of enforcement of tobacco control policies or of consumer eating patterns might include measures such as secondhand smoke levels captured by nicotine monitors or cafeteria plate waste. These data collection approaches do not require informed consent of individuals and involve collection of data about and sometimes from an entity such as a neighborhood, school, or worksite. This raises the question of how and to what extent it is necessary to protect the privacy of social groups, organizations, and communities when individuals are not the focus of the research. The answer to this question depends, at least in part, on the extent to which the information is generally available to researchers and the public. Community intervention researchers increasingly are using qualitative and mixed methods to obtain greater in-depth information and to learn about unique cultural groups or high-risk populations. Qualitative methods are especially useful in their ability to capture and communicate stories.57 Major qualitative data collection techniques include interviews, observations, focus groups, and document review.57 Qualitative interviews ask open-ended questions that allow respondents to answer in their own words. Observations result in detailed descriptions of an event or experience, such as behaviors, interpersonal interactions, or organizational processes. Focus groups allow for in-depth exploration of topics in a group setting. Documents include written and other material from program or organizational records such as progress reports, attendance records, meeting minutes, photographs, correspondence, and so forth.
Ethical Issues in Researcher–Community Partnerships Almost all community-based research requires researchers to build a relationship with the community or, at a minimum, with a gatekeeper organization within the community. The rare exception is when researchers recruit participants through mass media and collect data through social media, telephone interviews, or mailed surveys and never interact with community organizations or with community residents, except as research participants. A more common approach to designing and implementing community-based research is to establish a partnership with one or multiple organizations within the community. Community-based organizations can provide valuable resources for research such as endorsement and promotion of a research project, access to study participants and data collection settings, and local understanding of health problems and possible solutions. Partnerships between scientists and these organizations present challenging and often neglected problems regarding researchers’ responsibilities to the participants and to the intermediary organizations or communities. These problems include the extent of collaboration and shared decision-making, whose research agenda is addressed, the risk of raising false hopes when preparing grant proposals, the sharing of data and research findings, and the responsibility to establish a long-term relationship with the community or, at a minimum, to leave the community with increased capacity or services. Addressing these issues early on in a relationship increases the likelihood that damaging conflicts will be avoided, that both community members and researchers will benefit from the research, and that the community will be receptive to future research. The primary emphasis in CBIR should not be on the advancement of researchers’ careers or advances in scientific knowledge alone but also on the individuals and communities whose health is at stake. These partnerships fall along a continuum of possible degrees of involvement and interaction with communities.58 One model of community engagement anchors the spectrum with outreach at one end and shared leadership at the other end. At the minimal engagement end of the spectrum, the community is primarily just the setting for research and community partners provide modest input into selected aspects of the research, such as recommendations for the best way to recruit participants or review of a survey instrument to provide advice on wording. On this traditional side of the community-engaged research spectrum, community organizations may be engaged mainly to assist with recruitment, people are viewed primarily as subjects, research is conducted on the community, and researchers maintain sole control of the research process, own the data, and manage dissemination, typically through journal articles and academic presentations. In this model, the researcher typically approaches a gatekeeper organization with a research question in mind and is seeking access to information
about potential research participants and permission to recruit participants from specific institutions. A researcher may obtain the cooperation of an organization such as a local health department, hospital, school, or employer to provide demographic data and information about how to contact potential participants. In the middle of the continuum are models where community input is actively sought but researchers still drive the research process. Language may change from research subjects to research participants, and community members have some input on the research questions and research design, but researchers typically still control most aspects of the research process, including dissemination of the findings. A community advisory board (CAB) is usually formed and operates as the mechanism for community engagement. Community-based participatory research (CBPR) is at the end of the continuum where community members do more than provide input into the research. CBPR is an alternative to traditional research models in which outsiders control the questions, methods, interventions, indicators of success, and interpretation of results.33 In CBPR, research questions often originate from community members. Community members are equal partners in designing the research and data collection methods, often collect the data themselves, aid in analysis and interpretation, and are co-authors on resulting presentations and publications. Research is geared toward action that benefits the community. CBPR as a paradigm has grown rapidly in the past two decades, and it acknowledges and values the unique strengths that all partners, both researchers and community members, bring to the research process.33,34,59 According to Minkler and Wallerstein, a variety of forces have fueled “increased attention on alternative orientations to inquiry that stress community partnership and action for social change and reductions in health inequities as integral parts of the research enterprise” (p. 3).60 These include an emphasis on health equity, increased recognition of the need for community and social change, and disappointment in results from researcher-driven community interventions. Israel et al.61 identify nine key principles of CBPR:
1. Recognizes community as a unit of identity;
2. Builds on strengths and resources within the community;
3. Facilitates collaborative, equitable partnership in all phases of the research;
4. Promotes co-learning and capacity building among all partners;
5. Integrates and achieves a balance between research and action for the mutual benefit of all partners;
6. Emphasizes local relevance of public health problems and ecological perspectives that recognize and attend to the multiple determinants of health and disease;
7. Involves system development through a cyclical and iterative process;
8. Disseminates findings and knowledge gained to all partners and involves all partners in the dissemination process; and
9. Involves a long-term process and commitment.
Citizen science is a relatively new and somewhat unique approach to community-engaged and community-driven research that has gained considerable momentum in the biological and environmental sciences. It was originally viewed as a method for gathering large amounts of data through volunteer monitoring or “crowdsourcing” from a large number of people across large geographic areas.62,63 Wiggins et al. developed a typology based on type of project: action, investigation, conservation, virtual, and education.64 They also noted that projects vary by whether scientists or community members lead them, whether the project is internet-based or not, and whether the primary purpose is science or public engagement. Citizen science can be a form of participatory action research and, although not yet widespread, has significant potential as a policy advocacy strategy for interventions targeting environmental conditions (e.g., crowdsourcing reports of noise or odors from industrial pollution, or mapping bicycle–vehicle collisions and near-misses).65,66 As with other forms of community-engaged research, it raises a number of ethical concerns.67 These include ownership of the data, especially given its co-production; how to give credit in scientific publications, especially when the number of contributors is large; issues of compensation, given the blurring of boundaries between scientists and volunteers; and domination by well-educated and middle-class participants. The use of the term “citizen” can also raise red flags given the current political climate toward immigrants in the United States, Australia, and some European countries. Additional ethical issues revolve around the confidentiality of shared data, particularly from personal monitors. Moreover, with environmental monitoring data, potential health effects of exposure are often unknown, complicating the issue of sharing results back to participants in a meaningful way. Data quality is another major concern, although a variety of strategies have been developed to help ensure data quality (e.g., training, close supervision, cross-checking).67 In all forms of community-engaged research, a common approach for soliciting community participation is to establish a CAB to provide input to investigators. CABs typically include members of the community and representatives from organizations that serve the community (who may or may not live in the community). CABs can be structured simply to allow members to provide input into a research effort or so that members can participate as equal partners in the research endeavor. CABs and similar structures for community participation can provide concrete benefits to researchers such as gaining access to local leaders, resources, and technical skills; garnering citizen support and volunteer
time; incorporating local values and symbols into intervention activities; developing local skills and competencies for future community problem-solving efforts; and enhancing local ownership and long-term maintenance of changes in the community.68,69 In addition to benefiting the researcher, community participation through CABs can provide benefits to community members and strengthen a community's capacity to address other issues of concern.70,71 CABs can build planning, evaluation, and research skills among participants; strengthen ties across personal and organizational networks; and provide access to resources within the community and external resources such as specialized expertise or new funding opportunities. These collaborative structures provide a practical means through which researchers and community residents can jointly frame the research questions, design data collection instruments, establish data collection procedures, design and ensure the cultural appropriateness of intervention materials, gain access to intervention sites, implement interventions, interpret results, and disseminate the findings and institutionalize intervention activities.69,72
Ethical Issues in Establishing Researcher and Community Partnerships A variety of practical ethical challenges emerge when conducting research in collaboration with community partners. Cultural misunderstandings, lack of trust between researchers and community members, and the role of power and privilege all play out in varying ways throughout the life cycle of a community-engaged research partnership.73 At the early stages, ethical issues include who represents the community, who controls the research agenda, and who consents at the community level.
Who Represents the Community Early issues to be faced in establishing a research partnership include how to define a community and who legitimately represents the community’s interests. “Community” is usually defined by a shared identity. This identity can be based on geography, such as county or city, or on ethnicity, culture, faith, or institutional affiliation.74 Accordingly, an early step in building a research partnership is understanding how a community defines itself and its boundaries. According to Wallerstein et al.,74 “it is shared identity and the institutions and associations that grow up within shared identity that allow the development of partnerships, and outside research partners must begin by getting to know how ‘the community’ is
in fact defined by those with whom they hope to partner” (p. 34). Communities are typically complex, with histories, divisions, and conflicts that complicate figuring out who best represents the community.75 Practically speaking, this influences who is invited to sit on a CAB. These decisions can be especially challenging in larger and more diverse communities with a history of fragmentation, unfair distribution of resources, and/or organizational conflict.
How to Approach a Community and Gain Approval Ideally, community-based research projects build on existing positive relationships, whether through the lead investigators, a colleague, or existing institutional connections. In CBPR, the community may initiate contact with a researcher. Quite often, though, a researcher initiates the relationship without any prior connections to the community.76 In these situations, conducting a community assessment is a logical place to start, with one purpose being to identify and engage formal and informal leaders and organizations with strong ties to the community of interest.77 Selecting the wrong partner—for example, one with a negative history with the community—can create unnecessary difficulties for the research project.
Who Controls the Research Agenda Other ethical problems concern who—researchers or community members—should control the research agenda and determine the research questions of interest, especially when the researchers are from outside the community. In CBPR, one of the basic tenets is that the research addresses an issue of concern to the community. Ideally, then, the research questions originate from the community or at least are generated in collaboration with the community. Faden et al. identify “three ethical principles that lie at the heart of CBPR—respect for self-determination, liberty, and action for social change” (p. 251).78 These principles are grounded in the belief that people can assess their own needs and have the right to address them.78,79 In much community-based research, however, the researcher defines research topics and seeks community partners with shared interests. This is often because researchers' expertise tends to be focused on a limited number of content areas, such as heart disease prevention or cancer screening, or because researchers are responding to a funding opportunity that stipulates the problem. Fortunately, from a researcher's perspective, given the heterogeneity within communities, it is usually possible to find a community partner with a shared interest.
More obvious ethical issues can emerge when working with a disadvantaged community on an issue peripheral to their needs and diverting resources, including the attention of community leaders and respected organizations, to a lower-priority concern. A low-income African American community, for example, may be in dire need of economic development or community-oriented policing, but a financially strapped community-based organization may agree to divert its energies from advocacy efforts to participate in a teen pregnancy prevention project due to the lure of funding. Even when the value of collaboration between researchers and the communities they study is recognized, many complicated pragmatic issues remain. As Dressler notes,80 “Negotiating such a collaborative relationship can demand skills, time, and patience perhaps notably lacking in some academic researchers. Similarly, the willingness of the community to enter into the long-term pact required for high-quality research can oftentimes necessitate a difficult shift in values” (p. 33). With their different agendas, researchers may have understandable reasons for conflicts with community leaders and residents. These may be science-based, such as believing randomization is the only way to ensure internal validity in an intervention research study, or more personal, such as feeling the need to publish study results in a timely manner without the delays inevitable in a collaborative writing process.
Who Consents at the Community Level Traditional informed consent occurs at the individual level as people sign informed consent forms to serve as research participants. Consent issues focus on the rights of human subjects, safety, privacy, and choice.75 But who has the authority to provide consent at a community level? This is a complex question; proposed approaches focus on transparency and on processes that seek community advice or approval.81 Absent a formal gatekeeper, ethical issues remain, especially in heterogeneous communities. For example, one part of a community may desire an environmental health project to document exposures to a contaminant and identify the source, while another may be loyal to the polluter due to economic dependence or concern about property values. No easy solutions are available, beyond establishment of a diverse CAB and careful thought about including members with opposing views. Increasingly, communities are becoming more sophisticated in their interactions with researchers and are developing processes for community-level consent. Most notably, tribal (and other indigenous) communities are demanding accountability from researchers through the establishment of tribal institutional review boards.82,83 These boards have the power to deny access
to researchers and to control publication of research findings.74 The American Indian Law Center developed a checklist for Indian Health Boards to use in their decisions about whether to support particular research proposals.83 Some of the key questions on the checklist are:
• What are the expected benefits of the research to the tribes and local community, to the individual research subjects, and to society as a whole?
• What are the assurances regarding the confidentiality of data? Will the tribe or community be identified in the research report?
• Will the researcher agree to satisfy tribal, Health Board, and community concerns in final drafts and the final report?
• Is the researcher willing to attempt to find means of using local people and resources rather than importing all resources?
• Is the researcher willing to deposit raw data in a tribal or tribally designated repository or otherwise share the data with the tribe?
A few non-Native communities have developed community review boards, sometimes informally and affiliated with a university and sometimes more formally through a nonprofit organization that serves in a gatekeeper role.84
Ethical Issues in Conducting and Disseminating Community-Engaged Research Another set of ethical issues emerges in the actual conduct of the research, as well as the dissemination of the research. The level of flexibility in research protocols, the varying levels of expertise in research, and dual roles for community members serving on the research team all raise ethical issues.75,81 Dual roles can refer to community members who are hired as researchers finding themselves still representing the community, but with new accountability to the principal investigator.73 This new role can lead to concerns about confidentiality when they are collecting data from people they know or when they themselves are in a dual role of participant and researcher.81 Hiring community members as research staff and/or relying on community partners for study-related tasks requires researchers to share control and increases researchers' dependence on community commitment, which can change over time. This can be challenging for researchers, who are often accountable to a funder and a specified timeline with clear deliverables. Partners may terminate or step back from a relationship as new priorities emerge, and this may impact timelines and/or the quality of the research.81 Researchers can also find themselves in dual roles as they become more integrated into the community. They can begin to serve as advocates for community action and/or
to shift resources to addressing injustices uncovered by the research or made salient through deepened understanding and new relationships.73 They may need to spend considerable amounts of time in a community, which may slow down research productivity by traditional metrics. Addressing injustice while not raising false hopes for the ability of research to provide solutions to long-term problems is another ethical balancing act. Dissemination of research results can also be fraught with ethical issues, especially when results may stigmatize or further stereotype a community. Research results may not clearly benefit a community, may jeopardize future funding if results are null, or may further an unflattering perception of the community in some way. Research can also make visible a community that would prefer to remain private and/or not be noticed by authorities.75 Use of research results, and a desire to balance publications with action, can also raise ethical issues. Concerns about who benefits from the publication can arise, along with who claims credit for positive changes that result from research, such as an impact on policy or practice.75 Timing of dissemination can also inflame tensions if researchers want to wait to release results until after peer-reviewed publication but community members want results made public sooner to advocate for action. Data ownership and dissemination of results back to the community are additional ethical issues that can cause problems if not dealt with transparently. Most community researchers now understand it is imperative to share results back with community members, although this is still a common complaint from the community perspective. Finally, terminating a research relationship is not discussed in the literature but deserves the same attention as establishing a relationship. Once the funding is gone and the papers are published, what are a researcher's obligations to work toward sustaining the partnership, the funding, and/or some follow-up action that results from the research? CBPR principles refer to long-term relationships, but if funding is not forthcoming for collaborative work after significant efforts are made to obtain it, what is the obligation to continue the relationship? Is strengthening the capacity of a community to conduct research sufficient? Funders of community-engaged research can also exacerbate or ameliorate some of these ethical tensions through the length and flexibility of their funding streams and through how they handle the ambiguity that may be present during the early stages of research, when relationships are still solidifying and trust is being built.
Guidelines for Collaboration One practical approach to navigating these sometimes challenging and always complex relationships between investigators and community partners is to
develop guidelines for collaboration. At a minimum, these should address roles and responsibilities of various categories of partners (e.g., staff, investigators, and CAB members), rules for decision-making (including how decisions will be made and what types of decisions will be made jointly), human subjects protection and institutional review board (IRB) review (including who will be listed on IRB protocols and who needs to be formally trained in human subjects protection), stewardship of data, and rules for co-authorship on publications and presentations. There are often ways to reach compromises that permit both communities and researchers to meet their needs. An example related to co-authorship of peer-reviewed publications is to follow established international guidelines for authorship but to structure the research and paper-writing process so that a substantive role in the design, conduct, and writing of results exists for nonacademic partners. In many cases, the multiple needs of scientists and participating communities can all be accommodated with attention to five key areas:
1. Increased sensitivity to community history, culture, and norms
2. Instilling trust through better communication
3. Considering communities as research partners, not just sources of research subjects
4. Understanding and addressing important problems of communities, such as poverty, racism, or police violence
5. Developing guidelines for collaboration or operating procedures that clarify how a relationship will function.76,80
These matters are not intended simply to require researchers to compromise in order to satisfy community representatives; openness to understanding each other's views should come from both residents and researchers.
Special Considerations for High-Risk and Vulnerable Communities Due to the disproportionate burden of disease, disability, and premature death, there is likely to be increased intervention research directed to racial/ethnic minority and low-income communities, more recently with an equity and social justice lens.85–89 Despite the structures and policies of organized science that are designed to protect study populations, many members of racial/ethnic minority and disadvantaged communities have a fundamental distrust of scientific research directed at them due to past injustices. This distrust is particularly evident in African American communities, where the abuses of the Tuskegee Syphilis
Study, which sought to document the natural history of syphilis among poor Black men in Alabama, have led to distrust of health researchers.90–92 Sometimes labeled as “insider–outsider” tensions, differing expectations and agendas and the need to overcome mistrust emerge as challenges in establishing relationships for research.75 Formal safeguards and reforms in research may be insufficient to change fears of harm and exploitation in vulnerable communities. Communities' need to protect the welfare of their members and to ensure long-term social benefit makes it especially important to form partnerships in which community members can help ensure sensitivity in the design, conduct, and interpretation of findings.88
Special Considerations in a Global Context Many of these same ethical issues apply in a global health context but are often amplified. Although there are universally acknowledged principles for protection of human subjects, the actual rules and oversight mechanisms are put in place within individual countries. Industrialized countries usually have their own guidelines and monitoring for ethical research,93,94 which may not be identical to those in the United States. Research ethics systems may not even be in place in some locations, where they are greatly needed. A study of forty-six member states of the World Health Organization (WHO) African Region found that many countries do not have formal Research Ethics Committees, although some have ad hoc review mechanisms.95 Anecdotal reports suggest that bribery or demands for payment from researchers are common practice in some countries. Further, the populations' vulnerability and often low literacy levels raise questions about the applicability of voluntary informed consent in non-Western settings.96 There may be a mismatch between U.S. consent regulations and the infrastructure, culture, and context of underdeveloped countries, so that the complex processes that work in the United States or other developed countries often seem not to fit in the developing world. Understanding how gender, religion, and history affect cultural issues is critical in community-based research outside of high-income countries. King et al. delineate a range of issues related to community engagement and global health research, including management of nonobvious risks.97 These may emanate from cultural perspectives on the role of women, respect for elders, and/or the cultural or spiritual significance of biological samples (e.g., blood or tissue). Misunderstandings can impact recruitment, retention, adherence to protocols, and action following research completion.97 Involving local co-investigators can aid in addressing ethical issues, as does authentic community and stakeholder engagement.97 As in most community-engaged research with an insider–outsider
dynamic, listening, developing personal relationships, building trust, and being transparent in decision-making can aid in navigating tricky ethical issues.
Expansion of Traditional Ethical Issues Related to Research Participants Ethical considerations impose both restrictions and responsibilities on researchers. Ethical guidelines are vital to community intervention research to ensure that studies have worthwhile goals and to protect the welfare of research participants. Several ethical principles in CBIR relate to research participants. These principles include respect for autonomy, beneficence, justice, privacy, and avoidance of deception.98,99 Several of these concepts can be expanded to address many of the issues raised in this chapter.81,100
Respect for Autonomy Autonomy involves respect for the rights and ability of individuals and groups to make decisions for themselves.99 In public health practice, it is accepted that there are some situations in which autonomy must be limited to protect the public's health, such as the requirement of immunization against communicable diseases. But in most cases, respect for autonomy becomes the central ethical principle and infringements of autonomy are not permitted. In CBIR, respect for autonomy is the reason for obtaining informed consent and the justification for requiring that provision of data and participation in interventions be voluntary. Elements of disclosure in informed consent include a statement of purpose, explanation of procedures, and a description of discomforts and risks that participants might experience. One ethical question regarding informed consent concerns how specific a researcher must be about the study aims when the provision of complete information may lead to refusal to participate, behavior change in anticipation of the study or in the control group, or biased responses to measures used for evaluation. For example, it is common for consent forms in randomized trials to state that participants will be assigned to one of two different educational programs, without reference to specific differences or the fact that one program is enhanced in some manner. Decisions regarding disclosure should be carefully weighed so that information is as complete as possible without severely compromising research methodology. Program or study attrition compromises internal and external validity of community-based intervention studies. A strategy to ameliorate the problem of
attrition is the use of incentives such as small gifts, payment to participants, or reimbursement for inconvenience or travel expenses. Although this practice may be perceived as manipulative or restrictive of freedom to withdraw from participation, such compensation usually does not compromise ethics if incentives do not involve unreasonable enticements and if they do not have even the appearance of constituting coercion. It is even possible that, for some groups, small incentives covering extra expenses of time or effort involved in the research enable a broader base of participants to take part. While this example should probably be regarded as a reimbursement, different IRBs take different stands on incentives.101–103 Some may view any compensation as potentially coercive or manipulative, while others permit some incentives but define a level above which they become unacceptable. In some cases, consent for participation has been obtained for the group, organization, or community, and individuals become research participants without their consent by virtue of belonging to a relevant group or community. An example of this is a cluster randomized trial (a study in which intact groups rather than individuals are the units of randomization) of a school-based tobacco prevention program, in which schools that agree to participate are assigned to receive one strategy or another, or serve as “controls.” At the organizational level, decision-makers such as principals or school boards make decisions about participation. Often, while all members of an organization may be exposed to an intervention, individuals still have free choice about participating in the data collection components of the research. Decision-making authority is murkier at the community level. This raises the issue discussed earlier: Who has the authority to authorize study participation at the community level? While community approval to participate in research is one ethical issue, autonomy at the community level can mean more than simply voluntary community participation; it can also mean respect for community values, culture, and interests and, ultimately, joint interpretation of results, joint dissemination of results, and joint action based on the results.81 Another set of intervention studies involves collecting data from medical records or archival data on factors such as use of preventive services or absenteeism. In these studies, individuals may not be approached directly for their permission. While these assessment methods raise ethical questions, they are sometimes justified—for example, when research questions are significant and the risk of harm to subjects is very low and precautions to protect privacy are taken.
Privacy Conflicts can arise in research when individuals’ rights to privacy are at odds with the goal of obtaining knowledge to improve public health. Dimensions of
privacy include the sensitivity of information, the setting being observed, and plans for dissemination of information and its linkage to participants' names. Anonymity may not be possible in cohort studies or record linkages; however, privacy should be protected to the extent feasible. Names should be destroyed as soon as they are no longer needed, and the anonymity of individuals should always be protected in reporting results. In the United States, Health Insurance Portability and Accountability Act (HIPAA) regulations impose strict guidelines on the use of “protected health information,” which can be defined very broadly by some IRBs.104,105 While HIPAA greatly limits access to participants' medical or health care records, it is possible to obtain partial or complete waivers when a study could only be completed with direct access to such information (like insurance claims information). In such cases, researchers must demonstrate how they will protect participants' privacy and only use the specific information needed to answer a research question so the public's health may be improved. However, with the widespread availability of government and commercial datasets, combined with a changing regulatory environment, issues of privacy and confidentiality are increasingly complex.55,106 For example, recent changes to the Common Rule reduce the regulatory burden associated with some types of data and allow for broad consent for use of data in future studies.55,106 Community-based intervention studies that use indirect data collection methods and unobtrusive measures preserve privacy if they collect data at the level of the environment, community, or other aggregate unit of observation and if individuals cannot be identified. These may be appropriate techniques for testing environmental interventions for health promotion. Often, however, privacy or confidentiality at the community level may be more difficult to maintain. The use of pseudonyms for research communities may not accomplish its intended purpose because of the difficulty of maintaining secrecy when descriptive information is provided and/or institutional affiliations are listed for authors and those acknowledged. An alternative is to use a general description of the study's locale (such as region or state). Regardless of the approach, it is important to discuss with community partners, early in the research process, the level of privacy that can be guaranteed, and to have clear language about confidentiality at the community level in consent forms, in addition to the more common language about confidentiality of participant data. Guarantees of confidentiality are essential when research is being conducted in settings where individuals work, receive services, and are educated. For example, high school students may be reluctant to respond to surveys in a tobacco use prevention study if they have reason to believe that their parents, teachers, or school administrators will have access to the information they provide. Parent or teacher access to information about tobacco use could lead to disciplinary action or punishment. Although the provision of aggregate data to cooperating
organizations is usually justified, it is critical that participants be protected from any possible repercussions of disclosure of information they provide in the course of research. In community studies, investigators should avoid situations in which workers are collecting personal data on people they know, which is a real issue since researchers are increasingly hiring community workers or long-time residents with deep community ties.
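As a concrete illustration of the safeguards discussed above (destroying names once they are no longer needed and reporting only aggregate results to cooperating organizations), the sketch below separates direct identifiers from analysis data and suppresses small community-level cells. All field names, records, and thresholds are hypothetical, and the sketch is not a substitute for IRB review, a HIPAA determination, or a data use agreement.

```python
# Illustrative sketch only: separating direct identifiers from analysis data
# and reporting community-level aggregates. All field names and records are
# hypothetical; this does not implement any specific regulatory standard.
import statistics
import uuid

def pseudonymize(records):
    """Replace names and addresses with random study IDs.
    Returns analysis rows (no direct identifiers) plus a linkage key that can be
    stored separately under restricted access and destroyed when no longer needed."""
    linkage, analysis = {}, []
    for rec in records:
        study_id = uuid.uuid4().hex[:8]  # short random ID; adequate for illustration
        linkage[study_id] = {"name": rec["name"], "address": rec["address"]}
        analysis.append({"study_id": study_id,
                         "community": rec["community"],
                         "outcome": rec["outcome"]})
    return analysis, linkage

def community_aggregates(analysis, min_cell_size=5):
    """Summarize outcomes by community, suppressing small cells that could
    indirectly identify individuals."""
    by_community = {}
    for row in analysis:
        by_community.setdefault(row["community"], []).append(row["outcome"])
    return {c: {"n": len(v), "mean_outcome": round(statistics.mean(v), 2)}
            for c, v in by_community.items()
            if len(v) >= min_cell_size}

if __name__ == "__main__":
    fake_records = [{"name": f"Person {i}", "address": f"{i} Example St",
                     "community": "Neighborhood A" if i < 6 else "Neighborhood B",
                     "outcome": i % 3} for i in range(10)]
    rows, key = pseudonymize(fake_records)
    print(community_aggregates(rows))  # Neighborhood A reported (n=6); Neighborhood B suppressed (n=4)
```

Keeping the linkage key in a separate, access-restricted file makes it straightforward to honor the commitment to destroy names as soon as follow-up no longer requires them, while the suppression threshold limits what cooperating organizations can infer about identifiable individuals.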
Beneficence The principle of beneficence requires that researchers minimize risk to participants and maximize the potential benefits both to participants and to society.99 Potential risks to participants include psychological distress resulting from participation in research, physical danger, respondent burden, loss of self-esteem, and anxiety. The potential benefits of research should be judged in terms of the direct benefits to participants, as well as the prospects that the findings will improve the health of populations, often different from the research population. One group of people should not be expected to bear risks unduly, which is especially true for socioeconomically disadvantaged and underserved populations. The concept of beneficence has unique implications for CBIR in which screening or health examinations might result in the identification of at-risk individuals. In such situations, provisions should be made for referral and follow-up to protect the welfare of participants. It is unethical to recruit people into screening programs without providing for treatment should abnormalities arise. Investigators are responsible for minimizing risks to participants by examining the potential risks and ensuring that they are unlikely, minor, and reversible if they occur. Another aspect of beneficence in the conduct of health-related research is the obligation to act on important interim findings. For example, in an interim analysis, a fear-arousing communication designed to motivate people to quit smoking could be found to generate significant anxiety but minimal behavior change. Under such circumstances, it would be unethical to continue the trial without substantive modification to the intervention. Alternatively, an interim analysis could show a clear improvement in psychosocial or medical outcomes associated with an intervention. It would then be reasonable to offer the more effective strategy to all communities or participants. Accordingly, the research questions could focus on identifying subgroups that are most or least likely to benefit, based on individual or organizational factors. Consideration of risks and benefits at the community level is also essential to the ethical practice of CBIR. How might study results benefit or harm a
community? Will results perpetuate a stereotype or negative view of a particular community? How will negative results be disseminated, and who will control the messaging?
Justice Ethical principles of justice concern the fair distribution of benefits and burdens of research among potential subjects.99,107 According to utilitarian principles of justice, the public benefit should be maximized, potentially justifying a relaxing of the requirement that each research participant receives an equal share of the benefits from the research.108 Justice-based considerations are relevant to selection of research subjects and communities and to randomization of participants or organizations to receive different interventions. For example, investigators may need to make a choice between studying those most in need of an intervention and selecting a more accessible or practical population. Meeting requirements of distributive justice is particularly challenging when individuals or communities are assigned to control or comparison groups that do not receive the intervention hypothesized to be most effective. The control participants may be burdened disproportionately by data collection requirements without receiving the benefits of services. In some studies, the use of a minimal intervention such as an educational brochure may provide an acceptable level of benefit. Another common solution to this problem is the use of a delayed control group design, wherein the intervention is delivered during a later phase of the study, or the intervention materials or services found most effective are provided to all groups at the conclusion of the investigation. However, this solution may require special resources that are not always available as part of research funding. Investigators should take particular care not to make promises they cannot keep: it is easy to offer delayed treatment but much harder to follow through. Some have argued that “respect for communities” should be added as a principle to supplement the individualistic interpretation of protecting human subjects as represented in the Belmont Report.72,100,109 This means that community benefits should be recognized and that burdens and benefits should be fairly distributed within and across communities.81 Quinn states that CABs are one way to achieve community consultation and to implement a community consent mechanism.72 She argues that “some form of consultation in the process of informed consent can help ensure that researchers gain an understanding of the social context in which community members assess the risks and benefits of research” (p. 919).
Interdisciplinary Health Research Traditions and Ethical Concerns By its nature, CBIR is eclectic and interdisciplinary. It draws on perspectives and tools from such diverse disciplines as psychology, sociology, anthropology, communications, statistics, biology, epidemiology, community development, and marketing. An important consequence of this interdisciplinary approach is that the codes and standards of conduct, as well as the identification and resolution of ethical dilemmas, may differ among the groups involved.110–112 Psychology, with its emphasis on individual behavior, has been the basis for many of the research studies and methodologies used; sociological approaches are particularly important to the study of organizations and social structures within communities. Social science traditions may conflict with those of biomedical researchers,111 and these differences may cause ongoing challenges for social and behavioral scientists due to the composition of ethics review boards, the guidelines used for health behavior research, and unevenness in IRB deliberations about social research investigations.112 There is substantial overlap between public health and medicine in health interventions carried out in clinical settings. However, the ethical issues most salient for CBIR are derived primarily from the traditions of the social sciences and public health sciences, and only secondarily from biomedical ethics. A fundamental distinction has been made between public health ethics and traditional medical ethics.107 Medical ethics has its roots in the rights and respect due to individuals in their relationships with physicians and other health care professionals, whereas many view the central concerns of public health ethics to be maximizing the welfare of the community or society as a whole and social justice. Ethical conflicts arise in public health interventions when decisions must be made about distribution of health resources and priorities for programs and when standards for health protection are at stake. They also occur in public health research due to concerns about privacy, autonomy, and the equitable treatment of individuals.99 In both medicine and public health, conflicting obligations sometimes result from the dual roles of practitioner and scientist. From a public health viewpoint, decisions about community research design depend primarily on three factors: 1. Anticipated kind and extent of benefit to the public and to scientific knowledge 2. Degree of restriction of individual rights needed to achieve the benefit 3. Balance between risks and benefits attendant to participation in the research.
While research seldom warrants entirely sacrificing individual rights and liberty for the public good, this perspective of public health as an independent value can influence decisions regarding the design and conduct of community health intervention research. A further issue related to interdisciplinary health research is that researchers who conduct CBIR interact not only with research participants and their communities but also with other professionals, both within and outside their primary disciplines, and with the public at large. Relationships with funding sources, professional colleagues, practitioners, and legislators and policymakers may present ethical dilemmas such as conflict of interest, as well as biased conduct and interpretation of research. Use of research findings in practice and policy arenas can also pose ethical challenges. Professional codes of ethics include attention to principles of research integrity in areas of funding, relationships, data use, and communication.98 In recent years, there has been an enormous increase in requirements that investigators complete continuing professional education on the responsible conduct of research—via courses, seminars, or online tutorials. It is less clear how many of these programs address professional conduct issues, as the greatest emphasis is typically on protection of human subjects. Other chapters in this volume provide more detailed up-to-date coverage of the issues of professional conduct concerning CBIR, such as funding and sponsorship, publications and reporting, and the use of research findings to influence public health practice and policy.
Summary and Conclusions As health investigators strive to conduct innovative, high-quality research, they should be alert to ethical issues at both the research participant and community level. Special care should be exercised to protect the rights and interests of vulnerable populations, including children, older adults, racial/ethnic minorities, and high-risk populations. CABs and guidelines for collaboration serve as practical mechanisms for navigating many of the ethical issues that arise in CBIR. Bias in reporting results should be avoided, and scientists should fulfill their obligations for responsible use of research knowledge for public health, medical practice, and social policy. Moreover, public health investigators should form partnerships with ethicists or members of IRBs to identify proactively the ethical implications of research and to ensure that basic ethical principles are not violated. IRBs should expand their expertise to offer a strong community-oriented perspective on ethical issues in research. Finally, there should be a continuing search to improve the nature of information provided to participants in studies
and to preserve informed consent as an ethical foundation of community-based research. Ethical principles are vital to CBIR to ensure that the research addresses worthwhile goals, to protect the welfare of individuals and communities participating in the research, and to help establish and maintain effective community partnerships and professional relations. Technical proficiency must be accompanied by sensitivity to values and ethics, and a sense of social responsibility. There are few ready-made formulas for making difficult decisions. In this chapter we have tried to present some familiar issues and to analyze new problems of growing concern to community researchers.
Acknowledgment Special thanks to Dr. Barbara Rimer for her contributions to an earlier version of this chapter.
References 1. McLeroy, K., Norton, B., Kegler, M., et al. Community-based interventions [editorial]. American Journal of Public Health 93 (2003): 529–533. 2. Merzel, C., & D’Afflitti, J. Reconsidering community- based health promotion: promise, performance, and potential. American Journal of Public Health 93 (2003): 557–574. 3. Cheadle, A., Schwartz, P. M., Rauzon, S., et al. The Kaiser Permanente Community Health Initiative: overview and evaluation design. American Journal of Public Health 100 (2010): 2111–2113. 4. Coughlin, S. S., Smith, S. A., & Fernandez, M. E., editors. Handbook of community- based participatory research. Oxford University Press, 2017. 5. Schwartz, P. M., Kelly, C., Cheadle, A., et al. The Kaiser Permanente Community Health Initiative: a decade of implementing and evaluating community change. American Journal of Preventive Medicine 54 (2018): S105–S109. 6. McGinnis, J. M., & Foege, W. Actual causes of death in the United States. Journal of the American Medical Association 270 (1993): 2207–2212. 7. Mokdad, A. H., Marks, J. S., Stroup, D. F., & Gerberding, J. L. Actual causes of death in the United States, 2000. Journal of the American Medical Association 291 (2004): 1238–1245. 8. McLeroy, K., Bibeau, D., Steckler, A., & Glanz, K. An ecological perspective on health promotion programs. Health Education Quarterly 15 (1988): 351–377. 9. Institute of Medicine. The future of the public’s health in the 21st century. National Academies Press, 2003. 10. Glass, T. A., & McAtee, M. J. Behavioral science at the crossroads in public health: extending horizons, envisioning the future. Social Science and Medicine 62 (2006): 1650–1671.
128 Ethics and Epidemiology 11. Golden, S. D., & Earp, J. A. Social ecological approaches to individuals and their contexts: twenty years of health education & behavior health promotion interventions. Health Education and Behavior 39 (2012): 364–372. 12. Marmot, M., Friel, S., Bell, R., et al. Closing the gap in a generation: health equity through action on the social determinants of health. Lancet 372 (2008): 1661–1669. 13. Frieden, T. R. A framework for public health action: the health impact pyramid. American Journal of Public Health 100 (2010): 590–595. 14. Purnell, T. S., Calhoun, E. A., Golden, S. H., et al. Achieving health equity: closing the gaps in health care disparities, interventions, and research. Health Affairs 35 (2016): 1410–1415. 15. Bailey, Z. D., Krieger, N., Agenor, M., et al. Structural racism and health inequities in the USA: evidence and interventions. Lancet 389 (2017): 1453–1463. 16. Riddell, J., Amico, K. R., & Mayer, K. H. HIV preexposure prophylaxis: a review. Journal of the American Medical Association 319 (2018): 1261–1268. 17. Okwundu, C. I., Uthman, O. A., & Okoromah, C. A. Antiretroviral pre-exposure prophylaxis (PrEP) for preventing HIV in high-risk individuals. Cochrane Database Systematic Reviews 7 (2012): CD007189. 18. McBride, C. M., Abrams, L. R., & Koehly, L. M. Using a historical lens to envision the next generation of genomic translation research. Public Health Genomics 18 (2015): 272–282. 19. Christensen, K. D., Roberts, J. S., Zikmund-Fisher, B. J., et al. Associations between self-referral and health behavior responses to genetic risk information. Genome Medicine 7 (2015): 10. 20. Wilkinson, J., & Targonski, P. Health promotion in a changing world: preparing for the genomics revolution. American Journal of Health Promotion 18 (2003): 157–161. 21. Beskow, L. M., Khoury, M. J., Baker, T. G., & Thrasher, J. F. The integration of genomics into public health research, policy and practice in the United States. Community Genetics 4 (2001): 2–11. 22. Khoury, M. J., Coates, R. J., Fennell, M. L., et al. Multilevel research and the challenges of implementing genomic medicine. Journal of National Cancer Institute Monographs 44 (2012): 112–120. 23. Grosse, S. D., Rogowski, W. H., Ross, L. F., et al. Population screening for genetic disorders in the 21st century: evidence, economics, and ethics. Public Health Genomics 13 (2010): 106–115. 24. Ransohoff, D. F., & Khoury, M. J. Personal genomics: information can be harmful. European Journal of Clinical Investigation 40 (2010): 64–68. 25. Segura Anaya, L. H., Alsadoon, A., Costadopoulos, N., & Prasad, P. W. C. Ethical implications of user perceptions of wearable devices. Science and Engineering Ethics 24 (2018): 1–28. 26. Lucivero, F., & Jongsma, K. R. A mobile revolution for healthcare? Setting the agenda for bioethics. Journal of Medical Ethics 44 (2018): 685–689. 27. Nebeker, C., Lagare, T., Takemoto, M., et al. Engaging research participants to inform the ethical conduct of mobile imaging, pervasive sensing, and location tracking research. Translational Behavioral Medicine 6 (2016): 577–586. 28. Plourde, A. R., & Bloch, E. M. A literature review of Zika virus. Emerging Infectious Diseases 22 (2016): 1185–1192. 29. Inaba, K., Eastman, A. L., Jacobs, L. M., & Mattox, K. L. Active-shooter response at a health care facility. New England Journal of Medicine 379 (2018): 583–586.
Community-Based Intervention Studies 129 30. Seth, P., Rudd, R. A., Noonan, R. K., & Haegerich, T. M. Quantifying the epidemic of prescription opioid overdose deaths. American Journal of Public Health 108 (2018): 500–502. 31. Meldrum, M. L. The ongoing opioid prescription epidemic: historical context. American Journal of Public Health 106 (2016): 1365–1366. 32. Kegler, S., Dahlberg, L., & Mercy, J. Firearm homicides and suicides in major metropolitan areas: United States, 2012–2013 and 2015–2016. Morbidity and Mortality Weekly Report 9 (2018): 1233–1237. 33. Wallerstein, N., Duran, B., Oetzel, J., & Minkler, M., editors. Community-based participatory research for health: advancing social and health equity. 3rd ed. Jossey- Bass, 2018. 34. Israel, B., Eng, E., Schulz, A., & Parker, E., editors. Methods for community-based participatory research for health. 2nd ed. Jossey-Bass, 2013. 35. Buchanan, D. A new ethic for health promotion: reflections on a philosophy of health education for the 21st century. Health Education and Behavior 33 (2006): 290–304. 36. Resnik, D. B., Zeldin, D. C., & Sharp, R. Research on environmental health interventions: ethical problems and solutions. Accountability in Research 12 (2005): 69–101. 37. Glanz, K., Rimer, B., & Viswanath, V., editors. Health behavior: theory, research and practice. 5th ed. Jossey-Bass, 2015. 38. Green, L., & Kreuter, M. Health promotion planning: an educational and ecological approach. 3rd ed. Mayfield Publishing Co., 1999. 39. Kahan, S., Gielen, A., Fagan, P., & Green, L., editors. Health behavior change in populations. Johns Hopkins University Press, 2014. 40. Frongillo, E. A., Fawcett, S. B., Ritchie, L. D., et al. Community policies and programs to prevent obesity and child adiposity. American Journal of Preventive Medicine 53 (2017): 576–583. 41. Ritchie, L. D., Woodward-Lopez, G., Au, L. E., et al. Associations of community programs and policies with children’s dietary intakes: the Healthy Communities Study. Pediatric Obesity 13 (Suppl. 1) (2018): 14–26. 42. Micha, R., Karageorgou, D., Bakogianni, I., et al. Effectiveness of school food environment policies on children’s dietary behaviors: a systematic review and meta-analysis. PLoS One 13 (2018): e0194555. 43. Strauss, W. J., Nagaraja, J., Landgraf, A. J., et al. The longitudinal relationship between community programmes and policies to prevent childhood obesity and BMI in children: the Healthy Communities Study. Pediatric Obesity 13 (2018): 82–92. 44. Hopkins, D., Briss, P. A., Ricard, C. J., et al. Reviews of evidence regarding interventions to reduce tobacco use and exposure to environmental tobacco smoke. American Journal of Preventive Medicine 20 (2001): 16–66. 45. Farrelly, M. C., Chaloupka, F. J., Berg, C. J., et al. Taking stock of tobacco control program and policy science and impact in the United States. Journal of Addictive Behaviors and Therapy 1 (2017): 8. 46. Gonzalez, P. A., Minkler, M., Garcia, A. P., et al. Community-based participatory research and policy advocacy to reduce diesel exposure in West Oakland, California. American Journal of Public Health 101 (Suppl. 1) (2011): S166–S175. 47. Bromley, E., Figueroa, C., Castillo, E. G., et al. Community partnering for behavioral health equity: public agency and community leaders’ views of its promise and challenge. Ethnicity and Disease 28 (2018): 397–406.
130 Ethics and Epidemiology 48. Bartholomew, L., Markham, C., Ruiter, R., et al. Planning health promotion programs: an intervention mapping approach. 4th ed. Jossey-Bass, 2016. 49. Kazemi, D. M., Borsari, B., Levine, M. J., et al. A systematic review of the mHealth interventions to prevent alcohol and substance abuse. Journal of Health Communication 22 (2017): 413–432. 50. Wang, Y., Xue, H., Huang, Y., et al. A systematic review of application and effectiveness of mHealth interventions for obesity and diabetes treatment and self-management. Advances in Nutrition 8 (2017): 449–462. 51. McCarroll, R., Eyles, H., & Ni Mhurchu, C. Effectiveness of mobile health (mHealth) interventions for promoting healthy eating in adults: a systematic review. Preventive Medicine 105 (2017): 156–168. 52. Muller, A. M., Alley, S., Schoeppe, S., & Vandelanotte, C. The effectiveness of e-and mHealth interventions to promote physical activity and healthy diets in developing countries: a systematic review. International Journal of Behavioral Nutrition and Physical Activity 13 (2016): 109. 53. Rathbone, A. L., & Prescott, J. The use of mobile apps and SMS messaging as physical and mental health interventions: systematic review. Journal of Medical Internet Research 19 (2017): e295. 54. Eldridge, S. M., Ashby, D., & Feder, G. S. Informed patient consent to participate in cluster randomized trials: an empirical exploration of trials in primary care. Clinical Trials 2 (2005): 91–98. 55. Salerno, J., Knoppers, B., Lee, L., et al. Ethics, big data and computing in epidemiology and public health. Annals of Epidemiology 27 (2017): 297–301. 56. Glanz, K., Sallis, J. F., Saelens, B. E., & Frank, L. Healthy nutrition environ ments: concepts and measures. American Journal of Health Promotion 19 (2005): 330–333. 57. Patton, M. Q. Qualitative research and evaluation methods. 4th ed. Sage Publications, 2014. 58. Clinical and Translational Science Awards Consortium, Community Engagement Key Function Committee Task Force on the Principles of Community Engagement. Principles of community engagement. 2nd ed. National Institute of Health, 2011. 59. Green, L., George, M., Daniel, M., et al. Study of participatory research in health promotion: review and recommendations for the development of participatory research in health promotion in Canada. Royal Society of Canada, 1995. 60. Minkler, M., & Wallerstein, N., editors. Community-based participatory research for health. Jossey-Bass, 2003. 61. Israel, B., Schulz, A., Parker, E., et al. Critical issues in developing and following community-based participatory research principles. In Community-based participatory research for health, ed. M. Minkler & N. Wallerstein. Jossey-Bass, 2003: 53–76. 62. Bonney, R., Cooper, C., Dickinson, J., et al. Citizen science: a developing tool for expanding science knowledge and scientific literacy. BioScience 59 (2009): 977–984. 63. Bonney, R., Cooper, C., & Ballard, H. The theory and practice of citizen science: launching a new journal. Citizen Science: Theory and Practice 1 (2016): 1. 64. Wiggins, A., & Crowston, K. From conservation to crowdsourcing: a typology of citizen science. Proceedings of the 44th Hawaii International Conference on Systems Science, 2011. 65. Blair, B. D., Brindley, S., Hughes, J., et al. Measuring environmental noise from airports, oil and gas operations, and traffic with smartphone applications: laboratory
and field trials. Journal of Exposure Science and Environmental Epidemiology 28 (2018): 548–558. 66. Nelson, T. A., Denouden, T., Jestico, B., et al. Bikemaps.org: a global tool for collision and near miss mapping. Frontiers in Public Health 3 (2015): 53. 67. Riesch, H., & Potter, C. Citizen science as seen by scientists: methodological, epistemological and ethical dimensions. Public Understanding of Science 23 (2014): 107–120. 68. Bracht, N., Kingsbury, L., & Rissel, C. A five-stage community organization model for health promotion: empowerment and partnership strategies. In Health promotion at the community level: new advances, ed. N. Bracht. 2nd ed. Sage Publications, 1999: 83–104. 69. Newman, S. D., Andrews, J. O., Magwood, G. S., et al. Community advisory boards in community-based participatory research: a synthesis of best processes. Preventing Chronic Disease 8 (2011): A70. 70. Norton, B., Burdine, J., McLeroy, K., et al. Community capacity: theoretical roots and conceptual challenges. In Emerging theories in health promotion practice and research: strategies for improving public health, ed. R. J. DiClemente, R. A. Crosby, & M. C. Kegler. Jossey-Bass, 2002: 194–227. 71. Goodman, R., Speers, M., McLeroy, K., et al. An initial attempt at identifying and defining the dimensions of community capacity to provide a basis for measurement. Health Education and Behavior 25 (1998): 258–278. 72. Quinn, S. Ethics in public health research. American Journal of Public Health 94 (2004): 918–922. 73. Wilson, E., Kenny, A., & Dickson-Swift, V. Ethical challenges in community-based participatory research: a scoping review. Qualitative Health Research 28 (2018): 189–199. 74. Wallerstein, N., Duran, B., Minkler, M., & Foley, K. Developing and maintaining partnerships with communities. In Methods in community-based participatory research, ed. B. Israel, E. Eng, A. Schulz, & E. Parker. Jossey-Bass, 2005: 31–51. 75. Banks, S., Armstrong, A., Carter, K., et al. Everyday ethics in community-based participatory research. Contemporary Social Science 8 (2013): 263–277. 76. Suarez-Balcazar, Y., Harper, G., & Lewis, R. An interactive and contextual model of community-university collaborations for research and action. Health Education and Behavior 32 (2005): 84–101. 77. Eng, E., Moore, K., Rhodes, S., et al. Insiders and outsiders assess who is the community: participant observation, key informant interview, focus group interview, and community forum. In Methods in community-based participatory research, ed. B. Israel, E. Eng, A. Schulz, & E. Parker. Jossey-Bass, 2005: 77–100. 78. Faden, R., Minkler, M., Perry, M., et al. Ethical challenges in community-based participatory research. In Community-based participatory research for health, ed. M. Minkler & N. Wallerstein. Jossey-Bass, 2003: 242–262. 79. Minkler, M., & Pies, C. Ethical issues in community organization and community participation. In Community organizing and community building for health, ed. M. Minkler. Rutgers University Press, 1997. 80. Dressler, W. W. Commentary on “Community research: partnership in black communities.” American Journal of Preventive Medicine 9 (1993): 32–33. 81. Mikesell, L., Bromley, E., & Khodyakov, D. Ethical community-engaged research: a literature review. American Journal of Public Health 103 (2013): e7–e14.
132 Ethics and Epidemiology 82. Harding, A., Harper, B., Stone, D., et al. Conducting research with tribal communities: sovereignty, ethics, and data-sharing issues. Environmental Health Perspectives 120 (2012): 6–10. 83. American Indian Law Center. Model tribal research code. American Indian Law Center, 1999. 84. Albert Einstein College of Medicine. Community IRBs and research review boards: shaping the future of community- engaged research. Albert Einstein College of Medicine, The Bronx Health Link, and Community-Campus Partnerships for Health, 2012. 85. Institute of Medicine. Unequal treatment: confronting racial and ethnic disparities in healthcare. National Academies Press, 2002. 86. Cooper, S. P., Heitman, E., Fox, E. E., et al. Ethical issues in conducting migrant farmworker studies. Journal of Immigrant Health 6 (2004): 29–39. 87. Braveman, P. A., Kumanyika, S., Fielding, J., et al. Health disparities and health equity: the issue is justice. American Journal of Public Health 101 (2011): S149–S155. 88. Wallerstein, N. B., Yen, I. H., & Syme, S. L. Integration of social epidemiology and community-engaged interventions to improve health equity. American Journal of Public Health 101 (2011): 822–830. 89. Wolff, T., Minkler, M., Wolfe, S., et al. Collaborating for equity and justice: moving beyond collective impact. Nonprofit Quarterly 9 (2016): 42–53. 90. Shavers, V. L., Lynch, D. F., & Burmeister, L. F. Racial differences in factors that influence the willingness to participate in medical research studies. Annals of Epidemiology 12 (2002): 248–256. 91. Mays, V. M. Research challenges and bioethics responsibilities in the aftermath of the presidential apology to the survivors of the U. S. Public Health Services syphilis study at Tuskegee. Ethics and Behavior 22 (2012): 419–430. 92. Alsan, M., & M. Wanamaker. Tuskegee and the health of Black men. Quarterly Journal of Economics 133 (2018): 407–455. 93. Anderson, W. P., Cordner, C. D., & Breen, K. J. Strengthening Australia’s framework for research oversight. Medical Journal of Australia 184 (2006): 261–263. 94. Downie, J., & McDonald, F. Revisioning the oversight of research involving humans in Canada. Health Law Journal 12 (2004): 159–181. 95. Kirigia, J. M., Wambebe, C., & Baba-Moussa, A. Status of national research bioethics committees in the WHO African region. BMC Medical Ethics 6 (2005): 10. 96. Molyneux, C. S., Wassenaar, D. R., Peshu, N., & Marsh, K. “Even if they ask you to stand by a tree all day, you will have to do it (laughter) . . .!” Community voices on the notion and practice of informed consent for biomedical research in developing countries. Social Science and Medicine 61 (2005): 443–454. 97. King, K., Kolopack, P., Merritt, M., & Lavery, J. Community engagement and the human infrastructure of global health research. BMC Medical Ethics 15 (2014): 84. 98. Babbie, E. The ethics and politics of social research. In The practice of social research, ed. E. Babbie. 6th ed. Wadsworth Publishing Co., 1992: 462–482. 99. Beauchamp, T. L., & Childress, J. F. Principles of biomedical ethics. 6th ed. Oxford University Press, 2008. 100. Shore, N. Re-conceptualizing the Belmont Report: a community-based participatory research perspective. Journal of Community Practice 14 (2006): 5–26. 101. Grant, R. W., & Sugarman, J. Ethics in human subjects research: do incentives matter? Journal of Medicine and Philosophy 29 (2004): 717–738.
Community-Based Intervention Studies 133 102. Emanuel, E. J. Undue inducement: nonsense on stilts? American Journal of Bioethics 5 (2005): 9–13. 103. Klitzman, R. The importance of social, cultural, and economic contexts, and empirical research in examining undue inducement. American Journal of Bioethics 5 (2005): 19–21. 104. Walfish, S., & Watkins K. Readability level of Health Insurance Portability and Accountability Act notices of privacy: practices utilized by academic medical centers. Evaluation and the Health Professions 28 (2005): 479–486. 105. Breese, P., Burman, W., Rietmeijer, C., & Lezotte, D. The Health Insurance Portability and Accountability Act and the informed consent process [letter]. Annals of Internal Medicine 141 (2004): 897–898. 106. Menikoff, J., Kaneshiro, J., & Pritchard, I. The Common Rule, updated. New England Journal of Medicine 376 (2017): 613–615. 107. Powers, M., & Faden, R. R. Social justice: the moral foundations of public health and health policy. Oxford University Press, 2006. 108. Coughlin, S. S., & Beauchamp, T. L. Ethics, scientific validity, and the design of epidemiologic studies. Epidemiology 3 (1992): 343–347. 109. Weijer, C. Protecting communities in research: philosophical and pragmatic challenges. Cambridge Quarterly of Healthcare Ethics 8 (1999): 501–513. 110. American Psychological Association. Ethical principles of psychologists and code of conduct. 2017. https://www.apa.org/ethics/code/index 111. Hoeyer, K., Dahlager, L., & Lynoe, N Conflicting notions of research ethics: the mutually challenging traditions of social scientists and medical researchers. Social Science and Medicine 61 (2005): 1741–1749. 112. DeVries, R., DeBruin, D. A., & Goodgame, A. Ethics review of social, behavioral, and economic research: where should we go from here? Ethics and Behavior 14 (2004): 351–358.
PART IV
ISSUES
7
Ethics in Public Health Practice Robert E. McKeown
Introduction Ethical challenges and controversy are occupational companions for public health (PH) practitioners, whether conducting routine surveillance or implementing common PH policies and programs, such as immunizations and vital records collection and maintenance. However, the frequency, urgency, and gravity of questions, dilemmas, and challenges seem to be intensifying, especially with greater emphasis on PH preparedness for disasters and emergencies that require immediate real-time responses. PH professionals must be prepared to respond to an increasing number and diversity of challenges from new hazards such as Zika and its attendant birth anomalies, outbreaks of vaccine-preventable diseases thought to be eliminated, the threat of pandemic influenza or emergent infections, the more frequent and more severe devastation of tropical storms as climate change contributes to more dangerous weather patterns and more extensive health impacts,1–3 and continuing anthropogenic threats ranging from environmental pollution to terrorism. In all these areas, and more, in addition to planning, implementing, and evaluating more routine PH programs, PH practitioners are confronted by ethical questions that require careful, principled, and rational analysis and response. This chapter outlines a foundation for addressing ethical concerns in PH practice. This perspective is informed by the approach of Alasdair MacIntyre,4 whose definition of practice is the starting point for the foundation I propose here, though it is also consistent with other recent treatments of PH ethics.5–7 The approach starts with the goal (end or telos) of PH and views practice as directed toward fulfillment of that goal and related goods, thus providing a common ground on which to base further discussions. Lee and Zarowsky8, p.7 write that we “consider the foundational values of public health practice [in] order to identify the common moral governance of our work.” A common element of recent arguments is the importance and value of health as necessary for human well- being and flourishing, a perspective essential for discussions of the role of human rights and equity in PH ethics.8–13
Robert E. McKeown, Ethics in Public Health Practice In: Ethics and Epidemiology. Third edition. Edited by: Steven S. Coughlin and Angus Dawson, Oxford University Press. © Oxford University Press 2021. DOI: 10.1093/oso/9780197587058.003.0007
138 Ethics and Epidemiology Considerations of value are essentially related to the ends of PH but are also critical in our assessment and implementation of the means by which we achieve those ends. The task of ethics then involves a continuing examination of means and ends in an iterative process that includes refining the definition of the end, evaluating appropriate means, and balancing the range of sometimes competing values, virtues, obligations, and principles that guide our practice.8,14 Some ethicists have proposed an iterative process termed reflective equilibrium after the work of John Rawls.8,15–17 The process, whether by an individual or a group, involves consideration, in something like a Socratic questioning (the reflective component), of theory, principles, rules, judgments, and specific cases, going back and forth to revise them in an effort to achieve coherence (or equilibrium). Daniels17 holds that this approach allows for agreement about specific action in a given case even when there is disagreement about theoretical issues and that it demonstrates the indissoluble connection between theoretical and practical ethics. Similarly, the approach adopted here is grounded in the view that we cannot disentangle questions of value from those of science from which the means are derived. Thus, in this perspective, ethics also comes into play in scientific considerations within PH practice,6,8,18–21 an issue to be discussed in more detail later in the chapter. Madison Powers and Ruth Faden22 have proposed a view of social justice that bears some similarities to the foundation of this chapter in that the basis for evaluating the justice of a set of principles or policies is the ends or purposes the principles are intended to achieve and the adequacy of the principles for achieving those ends in concrete life situations. They propose that the overarching end toward which justice is directed is human well-being, which is composed of six essential and distinct but interrelated dimensions: health, respect, self-determination, personal and social attachment, reasoning, and personal security. (Note that respect and self-determination are differentiated in this treatment, as are health and well-being.) In their view, the emphasis on ends to be achieved implies that “empirical judgments of how various inequalities affect one another in concrete circumstances are ineliminable moral data” and that “achieving justice is an inherently remedial task, constantly shifting in its specific requirements as social circumstances themselves change.”22, p. 5 This is consistent with the position that ethics in PH practice requires ongoing examination of means and ends with continuing adjustments relating ends and means to professional values, virtues, obligations, and principles. This approach also has implications for our understanding of responsibility, which is viewed here as an obligation to work toward ends to which we have committed, a position that has been developed elsewhere.23,24 In professional PH practice, responsibility can be viewed as accountability, as reliability, and as commitment to or for someone or something. All of these enter into considerations
of ethical practice of any profession, including acceptance of responsibility as a commitment to the fundamental ends of a profession itself, and they can be seen in the various frameworks for PH ethics5–7,25 and in published collections of case studies.26–28 The case study collections offer helpful guidance and practice in ethical analysis for more complete, extensive, and thorough reflection than could be presented in this single chapter.
Foundation for Ethics in PH Practice MacIntyre defines practice as a “coherent and complex form of socially established cooperative human activity through which goods internal to that form of activity are realized.”4, p. 187. The emphasis on practice as the means for achieving the goods that define (or are internal to) practice is central to the approach in this chapter. The concept of shared values as a uniting bond goes back at least to Augustine, who defined a people as “[A]n assemblage of reasonable beings bound together by a common agreement as to the objects of their love, then, in order to discover the character of any people, we have only to observe what they love” (Book XIX, 24).29 In this tradition the community of PH professionals may be defined by the ultimate goal of healthy people in healthy communities toward which their professional practice is directed. PH practice spans policy and program development, implementation, evaluation, regulatory activities, health promotion and intervention, and, in some cases, provision of direct health services. Given this scope, it is unavoidable that PH professionals, institutions, and agencies will face, and sometimes generate, tension between competing values and obligations. PH professionals are often called upon to determine avenues of health investigation, protocols for research or surveillance, courses of action, and public policies that are intended to protect or promote health but that may also have uncertain scientific foundations. These choices sometimes place constraints on fundamental rights, impose risks, or allocate resources or benefits unequally. Thus, it is inevitable that PH professionals will face tradeoffs due to competing values and obligations. Moreover, the breadth of PH practice captures only a subset of the even more encompassing social determinants of health, which involve or engage domains beyond the scope of PH agencies and PH professional practice.10,11,14,30–33 Despite this breadth of scope and diversity, PH professionals share many core values and obligations and experience similar ethical dilemmas so that we see similar lists of core values and obligations in reviews of PH ethics.34 The view in this chapter is that the mission of PH sets forth a common overarching goal that unifies the otherwise disparate PH disciplines. This mission and closely related shared values, especially those related to justice, respect for
140 Ethics and Epidemiology persons, and the importance of health for personal and social well-being, constitute the foundation for addressing unavoidable tensions and tradeoffs. The task of ethics then becomes determining when such tradeoffs are necessary, adjudicating among unavoidable tradeoffs and conflicts in the context of the mission to protect and promote health, and justifying the chosen alternative. Ethics is a discipline concerned with the reasoned and critical judgment of human actions and decisions that have moral content, including an account of the ends, values, obligations, principles, and rules that guide that judgment.35–37 Inevitably a principle of selectivity comes into play in any account of ethics or of a specific ethical decision. Each account is partial and provisional, shaped by core concerns, focusing on those features relevant for resolution of the issue at hand, and framed by the overarching end. The relative importance or place of specific ethical values, principles, and obligations differs among professions, though it is characteristic of professions that each has common values and obligations, as well as a commitment to expertise in the discipline and excellence in its practice, and accountability to the profession and society.36,38 Even within a profession, there may be relatively more or less emphasis on a particular ethical value or obligation according to professional roles or specialties. For example, maintaining confidentiality plays a different, and arguably more important, role for the PH professional involved in disease transmission contact tracing than for the food service inspector. In keeping with MacIntyre’s definition of practice, shared ends, core values, and fundamental duties internal to the practice of a profession provide the basis for the ethics of the PH profession. Lee and Zarowsky write: “Our profession’s morality—the set of norms shared by all public health professionals—is determined by what public health is and what we think it should be.”8, p. 7. Or, as Lee has written elsewhere, the first question is “What are we trying to do in public health?”34, p. 96. To lay a foundation for ethics in PH practice, therefore, we turn now to the nature and the mission of PH, for that defines the end toward which practice is directed.
The Mission of PH The Institute of Medicine (IOM)’s influential 1988 report The Future of Public Health39 defined the mission of PH as “The fulfillment of society’s interest in assuring conditions in which people can be healthy.” The substance of PH has been characterized, at least since the first quarter of the twentieth century, as “organized community efforts aimed at the prevention of disease and promotion of health.”39, p. 41. See also 40. PH practitioners are committed to work toward the desired end of ensuring conditions that contribute to the public’s health, which
includes conducting the epidemiological research needed to determine what those conditions are and how they contribute to well-being.6 The IOM statement also made explicit the claim that society has an interest in the health of people. It is not clear whether “society’s interest” in the health of the population is because society holds the public’s health to be a societal good in itself or because health is instrumental to attaining other goods or desired ends. Recent contributions to PH ethics theory have emphasized health as having both intrinsic value—valued in itself—and instrumental value, necessary for human flourishing.8–11 Powers and Faden provide helpful insight into this issue in the context of their delineation of a broader concept of human well-being that includes health as one of six essential and distinct dimensions (enumerated earlier in the chapter).22 In their view, the value of each dimension is both in achieving a certain state, such as being healthy, and in the role each plays in the achievement of other dimensions of well-being or other goods. To illustrate, historically, happiness and well-being have been accepted as legitimate ends for purposes of ethical analysis. It seems reasonable to consider health as a similar end as well as an essential contributor to happiness and well-being. In this sense, health is valued for its own sake and as an instrumental good, necessary to achieve other desired ends. A parallel question is this: Is the aim of PH the provision or attainment of health itself by and for people, or is the aim of PH only to ensure conditions and capabilities for people to pursue their own (or their community’s) health if they so choose? Lee and Zarowsky write:
A basic level of health is necessary for human flourishing so persons can seek a satisfying, autonomous life. To achieve this basic level of health, public health systems must exist to protect the health of individuals and communities, to prevent disease and injury, and to promote engagement of communities. Given that opportunities and resources for health are inequitably distributed, public health must seek to right this inequity . . . Righting inequity is based in a sense of social justice. Social justice arguments in public health have called for the provision of [a] basic level of health for all persons.8, p. 7
The argument presented by Powers and Faden,22 however, is pertinent in making a distinction between the instrumental capabilities perspective11 and their view of the dimensions of well-being, including health. They acknowledge that provision for or protection of capabilities is a critical component of achieving the dimensions of well-being but hold that it is the achievement of the dimension itself that is desired and not only the capability, which after all may not be possible for some persons, such as children. “Moreover,” they write,
for most of the dimensions on our list, the focus on “capability of achieving it if one wishes” misunderstands the central aims of justice . . . While there is some truth in the notion that a part—but only a part—of what matters in being healthy is room to pursue our own health objectives if and to the extent that we wish, most of our dimensions are quite different. We want to be respected by ourselves and by others, not simply for us and others to have the capability to exercise respect if we, or they, so choose . . . While to some degree both health and reason are matters for which the individual is largely the proper judge of how much achievement is worth pursuing, even here the emphasis on capabilities is misplaced. In some sufficient measure, the actual development and exercise of reason is essential to the functioning of society and the well-being of others, no less than respect. The same might be said of a certain level of good health . . . The central concern of justice, then, is with achievement of well-being, not the freedom or capability to achieve well-being.22, p. 40
The mission of PH to create conditions in which people can be healthy may suggest the more limited view: that public officials and agencies cannot foster or nurture health but can only design institutions, programs, and policies that enhance the ability of people and communities to achieve the level of health they choose. That is, public agents can develop policies and institutions that make it more likely people will gain, maintain, and exercise their capabilities for achieving health, thus creating conditions in which people can be healthy. However, the commitment of many PH professionals to “organized community efforts”40 directed toward the prevention of disease and the promotion of health argues for some sense of obligation for promotion and achievement of health itself, and not only the conditions for health. Finally, when considering the mission statement’s claim that society has an interest in “assuring conditions in which people can be healthy,” we see two important implications for PH ethics: 1. “Society’s interests” may justify PH actions that infringe on individual autonomy; and 2. As an expression of societal values, PH efforts must be evaluated both ethically and scientifically. Thus, the commitment of PH professionals to work toward the public’s health is the basis for our responsibility, and society’s interest in the public’s health provides the justification for those actions and decisions that conflict with other responsibilities and values. Although the public’s health is the essential, overarching end toward which practice is directed, ethical reflection also engages
society’s interests, more proximal ends, other dimensions of well-being, other values and obligations, and scientific judgments of means. This issue will return in a discussion of the role of human rights in PH ethics later in the chapter.
The Nature of PH Practice Because of its mission and goal as well as its scope, PH practice requires commitment to accountability, reliability, and ethical behavior. All of these enter into considerations of ethical practice of any profession, including acceptance of responsibility in terms of commitment to the fundamental ends of the profession itself. These very general propositions constitute the basic framework for the ethical perspective developed in this chapter.
A Brief Historical Summary The origins of PH practice may be traced to the Sanitary Movement of the eighteenth and early nineteenth centuries, with its focus on community characteristics, economic conditions, and environmental influences (see Chapter 1). Improvement of living conditions was believed to be a means of improving health. The Sanitary Movement developed knowledge to influence public policy. This formative period of PH practice is still relevant because it reflects a multifactorial and contextual understanding of the determinants of health. Scientific advances in bacteriology, chemistry, and medicine, as well as epidemiology, established the dominance of the germ theory of disease in the late nineteenth century. These developments dramatically shaped PH practice. PH laboratories were established; vaccines developed; and water treatment and dairy inspections initiated. Public education campaigns promoted disease control practices and sanitation. Tuberculosis and sexually transmitted diseases were brought under control by improved health practices and effective treatments. Malaria control efforts included draining of swamps and widespread pesticide use. Sharp reductions occurred in the incidence of communicable, foodborne, and vector-borne diseases and resulting mortality.39,41 By the middle of the twentieth century, PH attention shifted to chronic disease control and prevention, with emphasis on risk factor epidemiology and interventions directed toward individual behavior and lifestyle. Later in the twentieth century emphasis broadened to include attainment of a healthy life, understood more in line with the original World Health Organization (WHO) definition as presented later in the chapter. Today the emphasis on genetic and cellular processes (a potential new “germ theory”) and renewed interest
in both psychosocial characteristics and broader contextual and environmental influences are seen as integral to personal and community health and well-being.41–44 In the twenty-first century, emerging infectious diseases continue to have global significance in an era of resurgent multidrug-resistant pathogens. These are compounded by the ever-present dangers of pandemic influenza and widespread distribution of vector-borne diseases. Previous successes in eliminating or dramatically reducing vaccine-preventable diseases are now threatened in many areas due to vaccine refusal based on faulty science or religious, political, or philosophical objections. In addition to new challenges in prevention and control of communicable diseases, the noncommunicable diseases that are leading causes of death and disability in higher-income countries are now exhibiting increased prevalence with attendant consequences in lower-income countries. In addition, PH professionals now have a better understanding of the global importance of mental disorders as ends in themselves and of their impact on other health outcomes. Further, in ways not considered in the previous two centuries, we are confronting increasing perils from natural and anthropogenic disasters: tropical storms and their aftermath, intensified by climate change; earthquakes, volcanic eruptions, and tsunamis; floods and landslides or drought and wildfires that threaten life and displace persons. Any of these, not to mention intentional terrorist attacks, can result in infrastructure failure, collapse of health care systems, and disruption of social structures. To these are added the hazards from air and water pollution, as well as political and social turmoil. PH finds itself on the front lines of emergencies quite unlike those faced a century ago in the influenza pandemic of 1918. We must ask then if the lessons we have learned in the past century are adequate to deal with new threats, as illustrated vividly by the COVID-19 pandemic. Do we have clarity about our mission and our role, about the values, principles, and obligations that guide us, and about the moral foundations and ethical processes on which we rely? In each historical period, the dominant scientific approaches adopted to improve the public’s health also raised ethical issues that were part and parcel of the method. As noted earlier, the position in this chapter is that we cannot disentangle questions of value from scientific determinations in adopting PH measures to achieve agreed-upon ends.
The Contemporary Model of PH Practice The bio-socio-ecological model of PH practice stresses the multiple dimensions that constitute our lives, relationships, and environments and, therefore,
contribute to health and wellness or disease and disability.42 PH practice inevitably encounters ethical dilemmas, tensions, conflicts, or tradeoffs. Ethical analysis, therefore, is necessary for PH practice that is effective as well as ethically sound and responsible. The 2003 IOM follow-up report42 emphasized the “public” aspect of PH—that is, “healthy people in healthy communities.” There has been a rich discussion in the PH literature on the definition and nature of PH and healthy communities.10,11,14,18,32,33,45,46 Concepts of health and healthy communities entail different emphases in policy development, programmatic focus, evaluation, and allocation of resources, even different measures, all pertinent to ethical analysis of PH practice. For example, viewing a healthy community primarily in terms of access to and provision of certain services has different implications for ethics than emphasizing characteristics such as social capital, general economic prosperity, and the level of income inequality, or more general human rights and social conditions. It is important to note that PH professionals increasingly recognize an organic notion of community, emphasizing that individual health is achieved or threatened by larger-scale contextual factors, including social networks, environment, education, economic opportunity, and other characteristics of communities.6,33 PH professionals, I believe, would agree that a healthy community is not simply a collection of healthy individuals.6,22 Indeed, some might argue that individuals cannot be healthy apart from healthy communities. This aspect of PH, intimately related to its population or community focus, is the source of some of the most common ethical tensions encountered, because obligations and values related to individuals may conflict with obligations and values related to the larger community. For example, at the time of this writing there is intense political debate in the United States over proposed weakening of environmental protection regulations concerning air and water quality. By the time of publication, we will have been deluged by the most devastating pandemic in 100 years, with PH efforts hampered by ill-informed, sometimes malicious, political debate. These debates illustrate clearly the competing values of those who place higher value on protecting health and limiting environmental threats versus those who place greater emphasis on limiting regulation or so-called personal liberties because of presumed economic or individual costs to implement or continue restrictions.6 Clean air and water are common goods: when they are not available, people in the community will suffer harm. At the same time, economic opportunity is highly valued by individuals and communities, so in difficult circumstances, citizens may want to protect employment and wages. The short- and long-term health consequences of poor air and water quality then are in tension with the possible economic consequences and consequent poor health in the community if jobs are lost. Defining the common good and determining
how that shared value should be weighted versus individual goods or rights, informed by the best possible scientific evidence about short- and long-term health effects, are among the most critical challenges facing PH ethics, as we have seen during the COVID-19 pandemic, especially in resistance to implementation of evidence-based PH preventive measures. PH practitioners may also team with experts in other disciplines to propose alternatives—for example, showing that retraining of coal miners for jobs in the sustainable/renewable energy sector may produce greater benefits for the economic well-being of the population, as well as for the environment, but it requires political will to develop, fund, and sustain such programs. PH and the determinants of health are multidimensional.42 There is a spectrum of targets of interventions undertaken by PH agencies, from individuals (as in vaccination) to the environment (as in monitoring and treatment of air and water) to broad social interventions to change individual behavior or community conditions and enhance justice.47,48 The meaning of “public” can range from an aggregate of individuals to a community or corporate identity and even the global ecosystem. Understandings of “health” range from the reductionist, biomedical model to broader, more encompassing views as reflected in the WHO definition of health as “a state of complete physical, mental, and social well-being and not merely the absence of disease or infirmity.”49 Ethical analysis enters at each level: clarifying the goals of PH actions directed to the public and justifying the means employed, developing notions of corporate responsibility and accountability, and examining social norms and values that shape policy.14 The WHO definition can be taken to imply that health as absence of disease is secondary to health characterized by well-being and a number of other positive goods. This breadth creates difficulties at a number of levels, not least of which is the threat to a clear understanding of the domain of PH agencies and practitioners. More recently, Royo-Bordonada and Román-Maestre50 have noted the “radical change” implied for this definition in the adoption by the WHO Regional Office for Europe of Health 2020 and its distinct targets for assessing health and well-being,51 with a goal of defining well-being as distinct from health, proposing indicators for assessing it, and investigating the reciprocal relationship between health and well-being.
Ethics in Public Health Practice 147 the “boundary problem” in PH and note that it results in a perception of PH as having “no real core, no institutional, disciplinary, or social boundaries.”22, p. 10 In their view, situating health as an essential dimension for well-being allows for the better understanding of how harmful determinants, such as war, natural disasters, or environmental threats, can impact health as well as other dimensions of well-being. This points out, once again, that the definition and pursuit of an overarching end is not separable from reflection on the means, with regard to both scientific and ethical judgments. There have been efforts to address this boundary problem. These include the U.S. Association of State and Territorial Health Officials “Policy Statement on Health in All Policies” and its report “The State of Health in All Policies”;14,55 the Pan American Health Organization Commission on Equity and Health Inequalities in the Americas;10,33 the WHO Regional Office for Europe’s report on multisectoral and intersectoral action for improved health and well-being;56 and publications by a number of PH ethics researchers, including those working in developing a PH ethics approach informed by convergence science.31,32,34,48 Such coalitions, while pooling expertise and resources, also raise possibilities for conflicts that must be resolved for collaborative action to be successful.46,57 Resolution of these conflicts involves clarifying ends, evaluating consistency of ends, assessing means for achieving those ends, and respecting values that are shared. This process is the task of ethics, and reflective equilibrium, as noted earlier, is a promising approach to resolving such conflicts.
Values in Science and Practice Values are embedded in research and practice, in both the pursuit and the application of knowledge. As discussed by Ortmann et al.,6 we rely on scientific principles and methods that shape our knowledge and understanding. Beyond that scientific and epistemological reliance, our pursuit of knowledge and application of it in practice exist within social, historical, cultural, professional, and disciplinary contexts. PH practice is focused on a common good: the pursuit of population health. In that regard MacIntyre writes: “goods . . . can only be discovered by entering into those relationships which constitute communities, whose central bond is a shared vision of and understanding of goods.”4, p. 258. This is a view consistent with the Augustinian definition of a people quoted earlier. MacIntyre’s emphasis on communities and the social tradition is consistent with the IOM conclusion that “The history of the public health system is a history of bringing knowledge and values together in the public arena to shape an approach to health problems.”39, p. 57 The National Academies of Sciences, Engineering, and Medicine have addressed some of these issues directly in On
Being a Scientist: Responsible Conduct in Research,58 which speaks to the social role of science and the importance of values in science, though the approach still highlights objectivity as a value and, in their view, the problem of personal values compromising objectivity, an issue addressed later in the chapter. The role of values is evident in funding decisions for research and prevention programs, implementation of screening recommendations, and the PH emphasis on addressing health disparities and inequities.6 Our ability to act requires discussion of our shared vision and common values, the ultimate ends we seek, the means provided by science and resources, and the choices among competing intermediate ends. Because PH practice takes place in the social and political arena, there will be tensions among differing values and priorities. However, as Ruger has argued, we may agree on specific actions even when we disagree about the justifications for those actions.46,59 Lee34 holds that we observe many common elements in varied ethics frameworks, though they often use different terms for similar concepts, and, therefore, such frameworks provide a basis for discussion. However, Lee continues, PH practitioners need “concrete tools for decision-making.”34, p. 97. Such tools have been suggested in several frameworks but may not be widely shared among professionals in practice. Perhaps a more productive approach is to teach processes for arriving at decisions, such as reflective equilibrium, so that decisions can be made with confidence in their fidelity to ethical norms such as justice and respect for persons and communities, especially vulnerable populations, while guided by the vision of health and well-being and employing means that rely on the best scientific evidence.
The Reciprocal Relation Between Research and Practice PH practice involves the application of knowledge gained through research toward the end of healthy people in healthy communities. It follows that science provides the means we employ in practice to achieve the end of health. This requires an iterative process of adjusting intermediate ends and means, which by its very nature invokes questions of value and obligation. I have held that we cannot disentangle questions of value from those of science in the process of applying what is gained from research to our pursuit of PH ends. This formulation also suggests that the transition from research to practice is characterized by reciprocal influences, as implementation in practice raises new questions for research, and research provides new options for practice. The distinction between research and practice is, therefore, not an easy one and is better viewed as reciprocal rather than clearly distinct. Nevertheless, the distinction is viewed as an important one because of the implications for
Ethics in Public Health Practice 149 ethics review in some countries. Though much of this discussion centers on U.S. agencies and regulations, the arguments may be helpful by analogy in other countries or in international research and practice. The role of institutional review boards in the United States is to oversee human subjects research, not to oversee practice. An oft-expressed concern is that such oversight can be and has been used improperly60 to impede PH practice, especially in responses to critical events, disasters, or outbreaks. Examples include surveillance activities, especially where there may be sensitive exposures or behaviors, such as sexually transmitted infections; rapid response teams collecting information in an outbreak or disaster to assess current health, to track patterns, and to prepare for future events; and oversight and monitoring to ensure quality and evaluate program implementation and effectiveness. Specific examples include collecting information on airline passengers who may have been exposed to SARS-CoV-2, Ebola virus, or a pandemic influenza strain when the results may not be known until the incubation period has passed; investigating possible adverse effects of a vaccine; and collecting data from survivors and relatives of victims of disasters to assess health status, access to services, and quality of response. Both research and practice employ tools and methods that are similar and frequently overlap, and PH practice may lead to generalizable knowledge, so the line between the two is porous and fuzzy. A characteristic frequently used to differentiate research and practice is whether the activity is designed to produce generalizable knowledge using systematic investigation.61,62 This understanding dates at least to the Belmont Report in 1978 in the United States63,64 and the first version of the Council for International Organizations of Medical Sciences (CIOMS) Guidelines in 198262 for international research. In addition to the generally cited characteristics of activities that are designed (or that use systematic investigation) to produce generalizable knowledge, Kass lists three "empirical assumptions" commonly found in guidelines that pertain to who benefits or faces greater risk of harm, whether the burdens or risks are part of usual practice, and how treatment or intervention decisions are made—whether by protocol or professional judgment based on specifics of the case or setting.65 But, as Kass argues, these characteristics are inadequate to clarify the distinction. The U.S. Federal Policy for the Protection of Human Subjects, known as the Common Rule (45 C.F.R., part 46, subpart A),61,66 was revised in 2017, and the revised rule took effect in January 2019. This rule, which governs research conducted within or funded by sixteen U.S. federal agencies, defines research as "a systematic investigation, including research development, testing, and evaluation, designed to develop or contribute to generalizable knowledge,"61, §46.102(l) whereas in practice, the intent of an investigation or study is to identify and control a health problem or improve a PH program or service. The data collected are needed to assess the
150 Ethics and Epidemiology program or service and improve the health of the participants or the participants’ community. Knowledge that is generated does not extend beyond the scope of the activity, and project activities are not experimental. Though the new revision of the Common Rule does specifically address some PH practices as not being covered by research review requirements, it does not solve the larger problem of the distinction. The distinction places considerable emphasis on the intent or purpose of the activity designed to produce generalizable knowledge. Analogous to clinical practice, practice (non-research) activities focus on prevention and control of disease or injury, contribution to population health and well-being, or enhancement of practice. Because scientific principles and methodology are applied to both non-research and research activities in the practice of PH, knowledge is generated in both cases. Furthermore, at times the extent to which that knowledge is generalizable may not differ greatly between research and non-research activities. Thus, research and non-research activities cannot be easily defined by the methods they employ. Three PH activities—surveillance, emergency responses, and evaluation—are particularly susceptible to the quandary over whether the activity is research or non-research. Again, the focus is on whether the activity was intended to apply to persons or populations beyond those directly affected by the activity. If generalizability was not the intent but the knowledge gained appears to be of value outside that practice setting, then publication of the findings, on this view, would require research review.67 This approach highlights the difficulty of determining clear lines of demarcation for research and practice, analogous to the distinction between research and treatment in clinical settings, when intent is central and can change for the same activity. It is a distinction that blurs upon examination. Consider, for example, PH surveillance activities, which are observational, typically involve no treatment, and are designed to monitor patterns of health but which may also reveal important knowledge that applies to large populations. The initial intent of data collection may be purely for purposes of monitoring, but the intent may change as data accumulate and the research potential becomes more evident. This distinction has become apparent as the COVID-19 pandemic has advanced. A question for consideration is this: Why should the intent of the person conducting the activity be determinative for the ethical assessment of the activity and of the treatment of persons who are the recipients of the activity? In addition to intent, guidelines often consider who benefits: participants directly or a more general population or even the community of science. This is similar to distinctions between research and treatment in clinical settings but overlooks the level of risk that may be attached to PH interventions that are not research.18 There have been discussions of community consultation and
Ethics in Public Health Practice 151 community consent,68,69 which could apply to both research and new practice activities as well as PH response to emergency and disaster situations. Hodge proposes an enhanced approach to making the distinction, including a table that functions as a flowchart.70 He outlines six characteristics used to distinguish research from practice—general legal authority, specific (or primary) intent, responsibility (to whom and for whom is the agent responsible), participant benefits (direct or indirect), experimentation (evidence-based or designed to produce evidence), and subject selection (a function of the program and its goals and resources or of research design)—but virtually all of these can apply to either research or practice. Their value as discriminating criteria, therefore, is limited, especially when viewed in isolation, and decision charts such as this one have a tendency to become lists of check-off boxes that do not promote more reflective probing of essential issues. General legal authority, for example, may exist for research activities when there is a legislative or regulatory mandate to conduct research on some condition or question. The issue of responsibility raises the need to explore who the responsible agent for PH activities is, what corporate responsibility means, and how it relates to the individual responsibility of practitioners representing the agency. When considering who benefits from research and from practice, it is clear that research participants frequently derive direct benefit, especially when they receive new treatments that prove effective. Similarly, even coercive PH actions, such as mandatory immunizations or quarantines, can benefit persons who are not quarantined or immunized by protecting them from exposure, as is the case for COVID-19. This is the very basis for herd immunity, a fundamental concept in PH immunization policy. As for subject selection, the criterion seems to confuse self-selection with selection on the basis of specific disease conditions or other characteristics. Many research designs recruit participants specifically because they have some condition or characteristic or exposure of interest. Further, random selection does not guarantee generalizability, even under the optimal conditions of the randomized controlled trial, especially when there are stringent inclusion or exclusion criteria. Similarly, PH activities may target specific groups of people (e.g., the very young or very old) for some programs, without definitive evidence that such targeting actually reduces the occurrence of adverse events in the population. For example, COVID-19 immunization campaigns have prioritized front-line healthcare workers, older adults, and other vulnerable groups. Immunization of these groups reduces their risk of infection and illness, but it also reduces the spread of SARS-CoV-2 in the community and thus benefits the rest of the population. Conversely, under some conditions, PH practice may rely on randomization as a method of fair allocation of scarce resources. A lottery system, for example, may be used to choose recipients of influenza vaccine during a time of short supply.
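To make the herd-immunity point above concrete, the following minimal sketch uses the standard threshold approximation from introductory epidemiology; the basic reproduction number and vaccine effectiveness shown are hypothetical round numbers chosen only for illustration, not estimates for SARS-CoV-2 or influenza.

```python
# Back-of-the-envelope illustration of the herd-immunity logic noted above.
# The values of r0 and vaccine_effectiveness are hypothetical, chosen only to
# show the arithmetic; they are not estimates for any particular pathogen.
r0 = 3.0                      # assumed basic reproduction number
vaccine_effectiveness = 0.8   # assumed protection against transmission

herd_immunity_threshold = 1 - 1 / r0   # fraction of the population that must be immune
required_coverage = herd_immunity_threshold / vaccine_effectiveness

print(f"Immune fraction needed to interrupt spread: {herd_immunity_threshold:.0%}")
print(f"Vaccination coverage needed at 80% effectiveness: {required_coverage:.0%}")
```

Under these simplifying assumptions, roughly two thirds of the population must be immune, so each person vaccinated contributes indirect protection to those who cannot be vaccinated, which is the sense in which the benefits of immunization extend beyond the individual.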
152 Ethics and Epidemiology Barrett et al.18 note that some have argued for a shift from intent to produce generalizable knowledge to consideration of the level of risk carried by a treatment or intervention, but they acknowledge challenges in such a fundamental reorientation for regulatory entities with less flexibility in oversight of research. These issues were at the center of a controversy surrounding a program implemented by Johns Hopkins University intended to reduce catheter-related hospital infections. The setting was sixty-seven cooperating hospitals in Michigan in 2006–2008 where a checklist provided substantial improvement in outcomes.71–74 A complaint to the Office for Human Research Protections (OHRP) resulted in a finding by OHRP that the implementation of a checklist comprising procedures for catheter placement recommended by the Centers for Disease Control and Prevention (CDC) constituted human subjects research. As a result of OHRP determinations, the Johns Hopkins Institutional Review Board (IRB) suspended activity related to the project. Subsequent to considerable publicity, discussion, and further review, OHRP issued a revised statement indicating that the research/quality improvement project should be allowed to proceed, though the delay caused by the initial suspension may have resulted in unnecessary complications and deaths.74 OHRP continued to assert that the project as originally implemented constituted human subjects research and, thus, required at least expedited IRB review; however, the revised statement did acknowledge that the project would likely have been approved for waiver of consent and should not have been inhibited. The revised opinion gave approval for Michigan hospitals to continue implementation of the checklist program to reduce catheter-related infections without further OHRP scrutiny and in fact "strongly" encouraged other hospitals to adopt it. In the initial ruling the determinative factors were not what was being done, or whether consent would be required for it to be done as part of hospital practice, but that it was called research. Analysis was conducted to determine if the checklist did indeed reduce infections significantly, and the results were published. As one commentator noted: "institutions can freely implement practices they think will improve care as long as they don't investigate whether improvement actually occurs. A hospital can introduce a checklist system without IRB review and informed consent, but if it decides to build in a systematic, data-based evaluation of the checklist's impact, it is subject to the full weight of the regulations for human-subjects protection."71, p. 769 Ironically, slavishly following a checklist approach to the determination of research versus practice imposed harm on patients because use of the effective clinical checklist had to be discontinued.74 From an ethical perspective a more fundamental question is: What difference should the distinction between research and practice make in our assessment of clinical or PH actions? There is no question that implementation of new practice guidelines or policy should be examined carefully for scientific evidence of effectiveness and for protection of patients.75 This is another instance when
Ethics in Public Health Practice 153 ethical values and principles cannot be disentangled from determination of scientific means. From the perspective of the patient, however, the determinative factors cited by OHRP in its original decision seem trivial and the requirements onerous to the point of interfering with care and threatening health and safety. As MacQueen and Buehler76, p. 930 write, “Regardless of whether public health projects are deemed to represent research or practice, it is essential that they be conducted ethically, emphasizing the need for public health ethics review mechanisms that are responsive to crises and sensitive to levels of risk, especially when projects involve vulnerable groups . . . .” But these ethical considerations must be appropriate to the demands of the situation and the risks—to both health and autonomy—imposed by the threats and by the proposed response, and they should not be unduly burdensome either to practitioners or to those impacted by PH activities. For the persons and communities affected, the issues are less likely to be whether something should be termed research or practice and more likely to reflect concerns about whether the activity provides benefits or imposes risks; whether participation or being subject to the action is voluntary or is coerced; whether the risks, if present, can be justified by potential benefit to themselves or to the community; whether the program appears to be implemented fairly; whether the individual’s or community’s rights and dignity have been respected; and similar questions. For purposes of answering these questions, it is less important whether the agents intend that the activity be one of research or practice, or whether the information gained is generalizable or not. What is critically important is that the activity be conducted responsibly. These are precisely the kinds of questions research oversight committees review, but they are fundamental to responsible conduct of research or practice quite apart from regulatory procedures. Practitioners should be sensitive to these issues even in the face of an emergency and formulate processes that incorporate such considerations efficiently and expeditiously. The attempt to delineate a bright line between research and non-research practice is akin to efforts to develop ethics frameworks and processes intended to function as a step-by-step guide to decision-making. A number of reports have developed or summarized such frameworks or called for their creation, some with an emphasis on the process.6,7,34,77,78 Some have argued that, as the previous example demonstrates, the checklist approach, as valuable as it may be in clinical practice, is not sufficient for ethical analysis and decision-making.79,80 Barrett et al.18, pp. 8–9 write: “A general concern is that overreliance on guidance documents encourages a legalistic or compliance approach to ethics, rather than encouraging reflection and analysis . . . [T]o be successful, research oversight needs to focus on moral judgment and reflection, not on strict rule-like adherence to regulations documented on a checklist. Though formal training in ethics
154 Ethics and Epidemiology is desirable, moral judgment and discernment are developed by making ethical judgments.” To that, Coughlin and others add that a culture of ethical professionalism and mentors who are models of ethical reflection and practice are necessary to nurture ethical decision-making.79,80
PH Preparedness and Preventive Ethics PH agencies must be prepared to coordinate with other agencies and organizations to respond to emergencies that fall into three major categories: major epidemics and pandemics, natural disasters, and anthropogenic threats, whether intentional or not, such as acts of terrorism or conditions resulting from human error or technological failure. In planning and implementing responses, ethical values, principles, and obligations that guide decisions and actions are brought into sharpest focus. Each type of emergency, and each instance of a type, has distinctive characteristics and demands, and each requires both advance preparation and action in responding to emerging issues and events.8,20,77 We have seen critical issues arise in devastating events such as Hurricane Maria in Puerto Rico in 2017 and the Sulawesi earthquake and tsunami in 2018 when thousands of lives were lost and infrastructure was destroyed, including housing, critical health care facilities and capacity, communications, and basic services. Such disasters result in scarce resources, including basic necessities such as food and water, as well as medical supplies and electricity for homes and commercial and medical facilities. Rationing becomes more likely in such circumstances. Governments and agencies, therefore, have a duty to plan collaboratively for emergency situations, whether human-caused or the result of natural events or pandemics.77 A part of that planning should involve public engagement around moral norms.16 The discussions should be practical while also clarifying the norms and values that should guide decisions. With regard to standards of care in disaster response, Leider et al.77, p. e8 write: "Ethical norms must be clearly stated and justified and practical guidelines ought to follow from them. Ethical frameworks should guide clinical protocols, but this requires that ethical analysis clarifies what strategies to use to honor ethical commitments and achieve ethical objectives. Such implementation issues must be considered well ahead of a disaster." Others have emphasized similar themes of advance planning, incorporating ethical considerations into planning and design, and engagement with other partners and with the community.81,82 Lurie et al.83, p. 1253 emphasize that between disasters is the time for "deliberative thinking that makes for good planning." They provide guidance to those who would conduct research while responding to disasters and include a timeline of 33 major PH emergencies from 2001 to 2012, categorized by type.
Ethics in Public Health Practice 155 When resources and personnel are scarce, triage may be inevitable. In disaster planning, therefore, ethical issues should be addressed ahead of time.78 In addition to the PH emphasis on protecting life and health, respecting human rights, promoting social justice, and building civic capacity, Petrini78 outlines three challenges in disaster response that triage intensifies: rationing, restrictions (quarantine), and responsibilities (to prevent avoidable diseases, not to impose burdens, and to care for the needs of those needing help). Similar to other arguments, he calls for values guiding decisions to be examined before triage is imposed and emphasizes justice and respect for the dignity and value of the persons affected. The challenge for PH practitioners is how one can operationalize those convictions in the midst of an emergency response.
Preventive Ethics: Prospective and Procedural Ethics Ethical preparation is necessary for both routine PH practice and emergency response. In order to carry out existing and new policies and programs and respond to critical situations expeditiously and ethically, PH agencies could develop a preventive ethics approach that combines prospective ethics and procedural ethics. Preventive ethics, as advocated for clinical practice by McCullough et al.,84, p. 1222 describes what I have called the prospective component of preventive ethics: "the strategy of anticipating, prospectively identifying, and addressing ethical challenges with the goal of preventing such challenges from becoming conflicts." Consider, for example, immunization programs. When vaccine distribution is designed in pandemic settings, there is intentional reflection on ethical concerns, whether inherent in the activities themselves, such as threats to privacy in determining who should be prioritized or restrictions on individual autonomy in emergency situations (for example, the imposition of lockdowns and mask wearing until herd immunity can be achieved), or as direct or indirect consequences of the actions. As programs are implemented, unanticipated problems are analyzed and corrections made when necessary for future implementation. This prospective approach allows essential activities to proceed with greater confidence that ethical concerns have been addressed and thereby engenders trust by those affected by PH activities. Planners examine ethical challenges encountered in the implementation of other programs and learn from them how to avoid similar problems or violations in the new program. For example, if vaccinations are scheduled by appointments that require access to the internet, anticipating the lack of access by vulnerable populations might result in modifying the delivery of vaccinations to those populations so they are not disadvantaged. Other examples include training of staff in privacy issues, cultural sensitivity, and
156 Ethics and Epidemiology in technical aspects of protecting data, greater transparency, and education of those who may be targeted for special consideration, as well as development of mechanisms for timely and compassionate response to concerns or objections. Procedural ethics, modeled after the concept of procedural justice as described by John Rawls in A Theory of Justice,15 builds ethical considerations into the design of practice programs. (See, for example, Romero et al.82 on incorporating ethical considerations into the design of a contraceptive access program in response to a Zika virus outbreak in Puerto Rico in 2016–2017.) The concept, put simply, is to focus on designing processes and procedures that we agree are fair beforehand, so that, whatever the outcome, if the process has been followed faithfully, we can agree that the outcome was fairly obtained, even if we do not agree with the result or we judge it to be unfortunate. That is, rather than focusing on a specific ethical challenge, as in prospective ethics, we focus on coming to agreement on attributes of procedures that are fair and designing procedures to incorporate those attributes. For example, in developing policies and procedures for emergency response, there may be concerns about limited resources (such as ventilators or vaccines), determinations about quarantine, or provision of shelter. Examining alternative approaches for fairness and effectiveness, with input from ethicists and representatives of the community affected as well as from PH professionals, increases the probability that, even if there are unforeseen or unfortunate consequences, the process will be viewed as having been fair. This emphasis on process or procedure as an integral part of ethical practice has been picked up by other recent authors.5,6 Consistent with Rawls’s formulation, justice is the dominant focus of procedural ethics, though other principles also play a role, while prospective ethics gives greater weight to respect for individuals and communities, promotion of health and prevention of harm, collaboration, transparency, and protection of vulnerable people. The use of prospective and procedural ethics approaches would, for example, contribute to creating PH programs, such as vaccination distribution and PH preparedness plans, that are characterized by adherence to ethical obligations and, especially with the input of community representatives as well as professionals, that reflect the values of the affected population. Not everyone will agree with such a system or have a veto, but both PH practitioners and the communities where they practice can carry out programs and respond to urgent matters with greater confidence that ethical considerations will be explicitly addressed.85,86
Case Study: Planning for Pandemic Influenza PH professionals have recently observed the centennial of the 1918 influenza pandemic, a devastating event with heretofore unimaginable
Ethics in Public Health Practice 157 loss of life and unprecedented demands on PH resources and the medical community, and with parallels to the COVID-19 pandemic. In 2005, the WHO recommended that “all countries take urgent action to prepare for a pandemic.”87, p. 1 As recently as 2009 the world experienced a pandemic of much less far-reaching consequences, yet one that revealed continuing vulnerabilities in our capacity for response. Again in the 2017–2018 influenza season, CDC estimated that the burden of illness, hospitalization, and the death toll was greater than any season since the 2009 pandemic.88,89 Though there has been research in recent years, the development of an effective universal flu vaccine is still years in the future, though the rapid development of multiple COVID-19 vaccines gives greater hope. In an interview for the Journal of the American Medical Association, Dr. Michael Osterholm contended that preparation for pandemic influenza should be a top priority for PH practitioners and researchers.90 Osterholm also argued that the world today is even more vulnerable than in 1918 because of the much larger population and greater population density, with more very large population centers; the limited effectiveness of the vaccines we do have and limited availability of vaccines for the vast majority of the global population in the early stages of a pandemic; limited stockpiles of antiviral medicines and crucial medical supplies and equipment; our inability to manufacture large quantities of vaccine, antivirals, and supplies quickly; and the concentration of manufacturing for these essential goods coupled with the vulnerability of international shipping necessary to distribute them. This prediction has been borne out in the COVID-19 pandemic. An example of the latter point concerning manufacture and shipping of essential medical supplies was illustrated in Puerto Rico in 2017 when Hurricane Maria destroyed facilities for production of both medical gases91 and normal saline,92 impacting supplies for the island and elsewhere. Further, Osterholm argues our governments have not given sufficient priority and committed adequate funds to the development of new vaccines and treatments in spite of the continuing toll of influenza in mortality and morbidity and strain on PH resources and the health care system, even in years when there is not a pandemic. The COVID-19 experience may change these attitudes and resource allocations.
Reviewing Foundational Principles The most influential approach to bioethics, at least in the United States, has been principlism,35 or “the four principles approach,” which derives a core set of guiding ethical principles from “considered judgments in the common morality and medical tradition.”35, p. 37 The four principles are viewed as central to bioethics. They are:
158 Ethics and Epidemiology 1. Respect for autonomy, which has to do with the ability of autonomous persons to make decisions concerning themselves 2. Nonmaleficence, which calls for refraining from doing or causing harm 3. Beneficence, which literally means “do good” and has to do with creating or providing benefits, and with considerations of the relative balance of benefits and costs or harms 4. Justice, which requires that benefits, costs, and harms are fairly distributed or received. Much of the material on PH ethics has relied on the four principles foundation because of its dominance and usefulness, and because it is more fully developed than alternative approaches that may be applicable. There have been recent criticisms of this approach, often based on the fundamental differences in mission and methods between clinical medicine and PH.6 The foundation for ethics in PH practice presented at the beginning of this chapter relies on an understanding of the mission of PH as defining the overarching goal, the essential constitutive value, toward which PH practice is directed, thus providing an underlying unity of purpose for the broad range of disciplines, agencies, and other entities that constitute PH. Shared ends, core values, and fundamental duties internal to the practice of a profession provide the basis for the ethics of that profession. The shared values implied by the mission of PH, especially related to justice, respect for persons, the centrality of health for well-being, and the importance of social determinants of health, are central to any ethical evaluation of the means that PH practice may employ to fulfill that mission. In ethical judgments about PH practice, those shared values are joined by societal interests, especially as expressed in the values and obligations of the common morality, other more proximal ends, and scientific judgment. It is in that light that we believe the four principles should be understood and applied. For example, among the most frequent criticisms of principlism, and bioethics in general, as a basis for PH ethics are its focus on the individual, if not individualism, and the priority given to the principle of respect for persons understood as autonomy or self-determination.6,52,60,85,93,94 Because PH is primarily focused on the health of populations and communities, and less focused than clinical medicine on treatment of individuals, an ethics of PH practice would place relatively greater weight on public actions, programs, and policies and their impact on communities, rather than on individual patients, clinical practitioners, or caregivers. Similarly, the influence of community-based participatory research is in part a reflection of the desire to demonstrate respect for the needs and wishes of communities and to provide communities a level of self-determination.8,18,68,69,81 These approaches are especially encouraged for vulnerable populations or those who have historically been burdened or marginalized. The community-based
Ethics in Public Health Practice 159 prevention research approach is also promoted as a means of increasing recruitment and obtaining better data. In the United States the principle of respect is typically understood as self- determination or autonomy, but in the Canadian Tri-Council Policy Statement respect for the dignity of persons becomes central.30 Though in the Kantian tradition the dignity of the individual is derived from autonomy, the Canadian statement proposes a broader concept that “includes consideration of the human condition, cultural sensitivity by researchers, and protecting persons, not only from physical harm, but also from demeaning or disrespectful actions or situations.”95, p. 740 Powers and Faden also distinguish between leading a self- determining life and having respect from others, for others, and for oneself as distinct dimensions of well-being.22 For them, respect is relational and has to do with treating others as “dignified moral beings deserving of equal moral concern. Respect for others requires an ability to see others as independent sources of moral worth and dignity and to view others as appropriate objects of sympathetic identification.”22, p. 22 For example, when conducting research or engaging in PH programs, such as vaccinations with elderly persons, there may be limited self-determination in the sense of the ability to make an informed, competent, and autonomous decision, but the researcher or practitioner still demonstrates respect in the way he or she relates to the subject, collects the data, or administers the treatment. Even when autonomous decisions can be made, the elderly often have special needs and concerns that should be considered to avoid coercion, facilitate participation or treatment without undue burden or embarrassment, and provide special protection from harms. A related area in need of further research and reflection, especially in global PH efforts but also within countries, is the potential conflict between respect for cultures and communities and PH actions deemed necessary to achieve important PH goals or to protect the health of the population. This is illustrated in the conflict between some cultural practices and what many consider basic human rights. For example, education, and especially education of women, is widely viewed as an important contributor to PH, but in some cultures, women are prohibited from obtaining formal education or having access to health resources. It is in such conflicts that PH ethicists must make a stronger, more fully articulated case for the ethical soundness of a decision, policy, program, or action that is contrary to a prevailing cultural tradition. The last example illustrates that the principle of beneficence plays a central role in PH ethics. I noted earlier that society’s interest in the public’s health implies a prima facie endorsement of actions to achieve PH and constitutes an essential basis for justifying PH actions that violate other principles, such as breaches of confidentiality or mandatory (coerced) compliance, as necessary for the
160 Ethics and Epidemiology common good. Pursuit of the common good in the form of the public’s health is central to the ethical perspective presented in this chapter and to the implementation of the beneficence principle in PH practice. PH ethics requires an understanding of what the value of the public’s health is and of the rationale or basis for its claim. I have attempted to suggest one starting point for that more extensive discussion. In coming to an understanding of common goods, we should clarify whether the goods are common in the sense of communal (for example, in the way that public schools, libraries, and parks are communal goods) or common in the sense of affecting or being enjoyed by the preponderance of people in the population (for example, in the sense that people value liberty or security). This distinction can be important in considerations of who benefits and who suffers, as well as what PH practice should pursue and how it should be pursued. The PH ethicist should also attend to the claim of non-maleficence, the principle that asserts a fundamental obligation not to cause harm to others, in PH practice. PH actions do at times result in some persons involuntarily suffering harms, assuming risks, or being constrained from exercising full autonomy for the benefit of the population’s health.85,86 However, the imposition of risk or harm without consent requires justification that explicates the goods that are obtained and their value in relation to the harms suffered, along with a justification that alternatives were not feasible or effective and demonstration of conscientious efforts to minimize harm or reduce risk.60 The mandatory quarantine of a person with active tuberculosis serves as an example where individual autonomy is constrained to protect the community. Much of the literature on the ethics of PH takes justice as the basic orienting principle.22,35,49 For PH the issue goes beyond distributive justice in fair distribution of benefits and burdens. Especially when decisions are guided by analysis of cost-effectiveness, justice requires that more than a mere balance sheet approach be taken in deciding where and to whom resources should be directed. Justice also includes obligations to attend to vulnerable populations and to provide equal opportunity. Elements of compensatory (or restorative) justice may also come into play in order to address the need to compensate for past injustices.60 Indeed, the current emphasis on addressing disparities and inequalities in health care and health outcomes is a reflection of a concern for restorative justice. This is the justification for giving greater weight to programs and policies that have the potential to enhance health and well-being of those on the margins of society, giving more attention to vulnerable populations or those previously ignored. This expansion of the concept of justice is especially important in those situations where the power of the state is employed to mandate actions or coerce persons to undergo treatment, as in directly observed treatment for tuberculosis. The principle of justice also comes into play when determining how to target PH interventions. The ideas of Geoffrey Rose96 have become more influential in
Ethics in Public Health Practice 161 the United States since the 2003 IOM report relied on his work,42 evident in three axioms of that report: 1. Disease risk exists on a continuum rather than as a dichotomy. 2. Typically, only a small percentage of a population falls in the extremes of high or low risk, and low-cost programs with benefits for a large number of people may produce a greater impact on PH than more targeted programs that benefit a small group—what Rose called the “prevention paradox.” The decision of where to direct limited resources, therefore, involves questions of both science and ethics. 3. An individual’s risk of ill health cannot be isolated from the risk of his or her population, emphasizing again the importance of population-level approaches to promoting and protecting the public’s health. Consideration of stewardship of scarce resources, respect for persons and communities, and expanded concepts of justice22 are essential for PH professionals to incorporate these perspectives into practice in ethically responsible ways. PH practitioners who take Rose’s position into account should also weigh the impact that shifting resources for intervention to lower-risk groups may have on groups whose higher risk status is the result of past or current injustices or disparities.
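To make Rose's "prevention paradox" (the second axiom above) concrete, the following sketch compares two hypothetical strategies; all of the numbers are invented for illustration and are not drawn from Rose, the IOM report, or any study cited in this chapter.

```python
# Hypothetical illustration of the "prevention paradox": a population-wide
# program offering each person a small benefit can prevent more events than a
# program that substantially reduces risk in a small high-risk group.
# All figures below are invented for illustration only.
population = 1_000_000
high_risk_fraction = 0.05     # 5% of the population is "high risk"
risk_high = 0.20              # baseline event risk in the high-risk group
risk_rest = 0.04              # baseline event risk in everyone else

# Strategy A: targeted program that halves risk, but only in the high-risk group
prevented_targeted = population * high_risk_fraction * risk_high * 0.50

# Strategy B: population-wide program with a modest 15% relative risk reduction
prevented_population = (
    population * high_risk_fraction * risk_high * 0.15
    + population * (1 - high_risk_fraction) * risk_rest * 0.15
)

print(f"Targeted strategy, events prevented:        {prevented_targeted:,.0f}")    # 5,000
print(f"Population-wide strategy, events prevented: {prevented_population:,.0f}")  # 7,200
```

Most of the population-wide benefit accrues to the large lower-risk group, each member of which gains very little; whether that aggregate gain justifies shifting resources away from a high-risk group whose elevated risk may reflect past or current injustice is an ethical judgment, not merely a statistical one.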
Critical Considerations for PH Ethics Childress et al.52 emphasize the importance of public justification and transparency, a component of maintaining trust. But accountability (one of the meanings of responsibility24) also provides an element of mutuality in PH processes that would otherwise be entirely paternalistic. PH actions that infringe on other moral values or rights may still be required, but the decisions are made openly, and justifications are provided. Buchanan97 builds on Powers and Faden,22 among others, to argue there should be more emphasis on promoting autonomy and justice than on justifying paternalism in PH. He proposes greater public effort to agree on those “capabilities that citizens consider valuable”97, p. 18 with a focus on those social inequalities that are obstacles to achieving the valued capabilities. “In the end, the field of public health needs to engage the public directly in building consensus on what we owe each other in creating a society in which all citizens feel supported in living decent lives characterized by dignity, integrity, and mutual responsibility.”97, p. 20 The issue of advocacy and science is yet another aspect of a core concern in this chapter, namely the inseparability of science from ethics.6,24 It has to this
162 Ethics and Epidemiology point been cast largely in terms of the ends–means framework; that is, the importance of the values implicit in the goal (the ends) for determinations of the means to be employed. The introduction of advocacy may be seen as an additional component of concern for the centrality of ethics in PH practice. The argument for advocacy is intimately tied to the emphasis in PH circles on elimination of health disparities and inequities and, within epidemiology, on social epidemiology and the importance of the ecological model.
PH Ethics and Human Rights The work of the late Jonathan Mann is most closely associated with an emphasis on a human rights approach to PH ethics.53,54,98 In a brief 1997 article,54 Mann proposed that human rights is the more appropriate framework for moral issues in PH than the individual focus of bioethics. The distinction is problematic because it does not account for the ethical foundation on which the human rights perspective rests.22 He acknowledged that PH actions and human rights are often assumed to be in conflict. He also argued that human rights violations undermine PH and that promoting human rights is integral to promoting PH, while affirming the intrinsic value of human rights in addition to their value as instrumental to PH. Since his untimely death in 1998, human rights has come to be a well- established component of, if not a foundation for, PH ethics, or at least a critical issue to be addressed. Some argue that health both is a valued end in itself for all people10,33 and is essential to pursue a good life, sometimes tying the right to health to the right to health care as an essential to obtaining health. However, there is considerable divergence in how the arguments are treated and what conclusions are drawn, from a right to care to a right to certain conditions or capabilities to a right to health itself.9,11–13,34,50,99 Further, because every right implies a corresponding obligation, the arguments often raise questions about what constraints may be placed on the right to health or health care because of resource limitations. Brudney100 offers a helpful summary of positions in an introduction to a special issue on the question “Is health care a human right?” Mann’s argument is tied to the well-established association between well- being and social and economic status. He contended that the impact of a constellation of “societal factors,” which includes more than income, education, and occupation, persists even after accounting for differences in indicators of medical care. This is yet another instance of science interfacing with ethics: the scientific study of contextual factors related to health is intimately linked to ethical questions regarding solutions to the contextual inequities (that is, means to the end of the people’s health).14 Here again the multidimensional concept
Ethics in Public Health Practice 163 of well-being proposed by Powers and Faden22 is helpful because their model explicitly emphasizes the interaction of health with other dimensions of well-being that are related to human rights, and they provide a discussion of the moral basis for a theory of human rights that is consistent with their view. They show that, unlike purely legal rights, basic human rights are noninstitutional—that is, "not strictly an artifact of institutional arrangements."22, p. 45 For them rights are grounded in fundamental human needs that are also integral to achievement of the various dimensions of well-being. Reliance on a human rights perspective to resolve ethical conflicts in PH will require a more thorough grounding of these rights in the context of PH and well-being and an examination of the ways that they are furthered or limited by PH policies, programs, and practice. One of the dangers of the human rights approach is that it may be "co-opted" by stressing individual rights, particularly individual liberty and freedom from coercion, to oppose PH actions.101 Bayer and Fairchild85, p. 489 contend that "limitations on the rights of individuals in the face of public health threats are firmly supported by legal tradition and ethics. All legal systems, as well as international human rights, permit governments to infringe on personal liberty to prevent a significant risk to the public." Indeed, the tension between individual rights and autonomy and pursuit of the common good in the form of the public's health constitutes one of the central issues for PH ethics. Though there is no simple solution, I have sought to provide a basis for discussion in our framing of PH practice as directed toward the overarching good of the public's health, in positing that society has an interest in that end, and that society's interest constitutes a prima facie warrant for pursuing that end and evaluating the means to achieve it in both scientific and ethical terms. The considerations outlined in the previous section indicate how ethical principles, values, and obligations are brought to bear on the determination of when some rights may be justifiably infringed in order to achieve the larger end. Here again, the process of reflective equilibrium, informed by the numerous other contributions to the field over the past decade, provides a promising approach to a deliberative process directed toward agreement on actions to protect and promote the public's health.5,15,17,46
Ethics and Interpreting the Evidence for Evidence-Based Practice This chapter has attempted to lay a foundation for ethics in PH. We have briefly outlined some of the most common areas where PH practitioners face difficult ethical decisions. Throughout this chapter we have referred to the inseparability of science and ethics. We offer one additional example to illustrate how ethical
164 Ethics and Epidemiology concerns can be central to what are often considered purely scientific, methodological, or statistical decisions. These issues are especially pertinent to program evaluation and evidence-based practice, and they illustrate that there are ethical implications even of statistical issues, such as how measures are defined and used,102 how analysis is conducted, what critical values are chosen for screening tests, and even what alpha or beta levels are selected in determination of statistical significance. One example of how the analytic approach can affect results is shown in a project reported by Silberzahn et al.103–105 They asked twenty-nine teams of researchers to conduct analysis on a single dataset using the same research question: "whether soccer players with dark skin tone are more likely than those with light skin tone to receive red cards from referees."104, p. 338 After a complex, multistage process that included sharing of analytic approaches, evaluation of the methods, reanalysis in light of comments and methods used by others, then open discussion of the results and methods, twenty of the twenty-nine results were "statistically significant," but point estimates and confidence intervals varied substantially. All of the approaches had supporting arguments, and there were no obvious vested interests in the outcome of the analysis. Their conclusion: analytic results "can be highly contingent on justifiable, but subjective, analytic decisions. Uncertainty in interpreting research results is therefore not just a function of statistical power or the use of questionable research practices; it is also a function of the many reasonable decisions that researchers must make in order to conduct the research."104, p. 354 The authors acknowledge that most research groups—or PH professionals hoping to evaluate the findings—cannot readily replicate this process of crowdsourcing analysis. Another example, while not crowdsourcing, was the result of an external researcher calling into question the methods and results of a highly publicized research letter in the journal Nature.106 The original article purported to show that the oceans are soaking up heat at a much faster rate than previously thought. The authors responded on a climate science blog that they had erroneously analyzed systematic errors as if they were random, resulting in an underestimate of uncertainty and an upward bias in estimates of heat uptake in regression modeling.107 The journal subsequently issued a retraction, with the authors' agreement,108 and a corrected version of the article was published the following year.109 This is an excellent example of self-monitoring by the scientific community and of accountability and responsible conduct by the authors. Though PH practitioners may doubt that they will encounter errors like this, they should critically evaluate the evidence that comes to them, and respond as these authors did. Errors may (or may not) be unintentional, but they can adversely affect interventions based on the findings, thus depriving other programs of needed resources and participants of effective interventions, even potentially
Ethics in Public Health Practice 165 harming them. Even though this report was about climate science, we have already seen that climate change is having deleterious effects on weather patterns, with major implications for PH practice. PH professionals all have an obligation to assess research reports and other evidence using the expertise we have and calling on others with additional expertise to assist us in making determinations about programs and policy we are considering. Objectivity is a characteristic of research evaluation often held up as a standard. Though focused primarily on interpretation of epidemiological findings, Greenland has argued that objectivity as usually defined is “too complex, ill-defined and unattainable.”110, p. 968 He has proposed transparency and neutrality (or balance or fairness) as shared values that can temper bias in judgment of evidence.110 I contend that our values infuse and underlie more than our evaluation of evidence; indeed, they are the basis of our commitments and ethical judgment. The issue of choosing alpha or beta levels, which determine the probability of Type I or Type II errors, has been discussed at greater length elsewhere.111 A Type I error involves judging there is an association, effect, or difference when there is none. A Type II error means judging there is not an association, effect, or difference when, in fact, one exists. In the realm of hypothesis testing, Type I and Type II errors refer to decisions that are incorrect due to random error. (The probabilities associated with them do not provide information about errors resulting from bias, which is systematic or nonrandom.) A very small alpha level reduces the probability of a Type I error, with a resulting loss of power. A larger alpha level reduces the probability of a Type II error but also increases the probability of rejecting the null hypothesis erroneously—that is, making a Type I error. The tradeoff of Type I and Type II errors is not only a statistical tradeoff; it also requires consideration of opportunities lost because of wasted resources and needs going unaddressed or risks imposed because a Type I error results in a decision to implement a program that is ineffective or could even be harmful. Those considerations must be weighed against the risk of adverse outcomes or actual harm, unrelieved suffering, and loss of improved health because an intervention that would have been effective was not implemented due to a Type II error. Statistical decisions, with a particular focus on p values, have been the subject of considerable recent debate and attempts at clarification in the research community and especially in statistical circles.112–116 This represents yet another instance of analytic decisions with potential ethical implications. Though the cited articles focus on the proper interpretation of p values and what they can and cannot tell us, how they should and should not be used, and similar issues, they do not guide us in the ethical implications except insofar as they reject the practice of “p-hacking” (i.e., trying many different analyses until one turns out significant).
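As a purely illustrative sketch of the alpha and beta tradeoff described above, the fragment below approximates the power of a two-sided test comparing event rates in a hypothetical program evaluation; the event rates, sample size, and the normal approximation itself are assumptions made for illustration rather than recommendations.

```python
# Minimal sketch of the alpha/beta tradeoff for a two-sample comparison of
# proportions, using a normal approximation. All inputs are hypothetical.
from scipy.stats import norm

def approx_power(p0, p1, n_per_group, alpha):
    """Approximate power of a two-sided z-test for a difference in proportions."""
    se = ((p0 * (1 - p0) + p1 * (1 - p1)) / n_per_group) ** 0.5
    z_crit = norm.ppf(1 - alpha / 2)
    effect = abs(p1 - p0)
    # Probability of exceeding the critical value when the true difference is `effect`
    return 1 - norm.cdf(z_crit - effect / se) + norm.cdf(-z_crit - effect / se)

# Hypothetical evaluation: 10% event rate without the program, 7% with it
for alpha in (0.10, 0.05, 0.01):
    power = approx_power(p0=0.10, p1=0.07, n_per_group=1000, alpha=alpha)
    print(f"alpha = {alpha:0.2f}: power = {power:.2f}, Type II error (beta) = {1 - power:.2f}")
```

With this hypothetical design, tightening alpha from 0.05 to 0.01 lowers power from roughly 0.67 to roughly 0.43; the arithmetic is mechanical, but deciding whether endorsing an ineffective program (a Type I error) or missing an effective one (a Type II error) is the graver harm remains an ethical judgment.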
166 Ethics and Epidemiology The role of implicit values in discovery and decisions was explored by Polanyi117 and, from a different perspective, by Kuhn118 in the last century, and more recently, from yet another perspective, by Tversky and Kahneman, whose work in behavioral economics spilled over into other disciplines as disparate as history, psychology, and epidemiology.119,120 Polanyi argued that personal values (and commitments) are not only unavoidable but also an essential part of discovery and knowing. For Kuhn also, values play a role in the progress of science. Kahneman and Tversky showed how assumptions and other unarticulated factors influence decisions. As I have noted, we cannot disentangle values from scientific judgment. Our task is to engage in a process of reflection on our fidelity to professional best practices as we understand how those values affect our judgment when making ethical decisions. In evidence-based practice and PH program and policy development, one reaches a point when a decision must be made based on the weight of the evidence, the precision of the estimates, the relative weighting and the tradeoffs of one type of error versus the other, all in light of underlying values and obligations. For example, the adoption of alpha of 0.05 and beta of 0.2 seems too facile when we begin to probe the differences in policy and program implementation resulting from erroneous rejection of a null hypothesis or, conversely, failure to reject the null when an intervention could make a meaningful difference in the lives and well-being of people. These differences should be considered not only in terms of allocation of resources but also in terms of the impact on the lives of the people affected and consonance with their values. Translating the principles of community-based research into PH practice could mean engaging a community to factor into its risk/benefit deliberations the probability of assuming an ineffective intervention is effective or, conversely, assuming an effective intervention is not. Effective evaluation considers the mission, values, and goals of programs whether in public or private agencies. Non-quantifiable goals and vocational motives may be especially difficult for usual evaluation approaches to accommodate, but failure to consider these motives and goals results in an evaluation that does not address the reasons the program was implemented. Imposition of external goals, values, and standards fails to acknowledge and respect the integrity of the community and its autonomy. Inappropriate outcome measures may result in decisions concerning the effectiveness of programs that miss their actual value for accomplishing the community's desired ends. On the other hand, failing to evaluate programs adequately means that the community may be deprived of effective programs and limited resources may be wasted in programs that contribute little to the community's health. Similar concerns arise in screening programs, where there may be several alternative tests, variations in protocol, or a range of values from which to choose
Ethics in Public Health Practice 167 in order to enhance the sensitivity or the specificity of the screening. These decisions should consider the relative importance and cost (in human terms as well as resources) of false-negative versus false-positive results. Receiver operating characteristic curves and other approaches to evaluating the performance of a test typically suggest a point or process that provides the optimal combination of sensitivity and specificity but may not account for the human costs or values associated with the tradeoffs. Techniques that allow differential weighting to reflect costs or relative values are not a substitute for careful ethical reflection. Ethical analysis becomes especially critical when the community and researchers or practitioners hold different values or have different perceptions of problems or goals.
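As one hedged illustration of how such weighting might inform, without replacing, ethical reflection, the sketch below assumes hypothetical biomarker distributions, a hypothetical prevalence, and assumed relative harms for missed cases and false alarms; none of these values come from this chapter or any cited study.

```python
# Illustrative (hypothetical) screening cutoff comparison: how an explicit
# weighting of false negatives against false positives can shift the preferred
# threshold away from the point an ROC curve alone might suggest.
from scipy.stats import norm

prevalence = 0.02                    # assumed: 2% of those screened have the condition
affected = norm(loc=60, scale=10)    # assumed biomarker distribution, condition present
unaffected = norm(loc=45, scale=10)  # assumed biomarker distribution, condition absent

harm_missed_case = 20                # assumed relative harm of a false negative
harm_false_alarm = 1                 # assumed relative harm of a false positive

for cutoff in (50, 55, 60):
    sensitivity = 1 - affected.cdf(cutoff)     # P(test positive | condition)
    specificity = unaffected.cdf(cutoff)       # P(test negative | no condition)
    # Expected counts per 100,000 people screened
    missed = prevalence * (1 - sensitivity) * 100_000
    false_alarms = (1 - prevalence) * (1 - specificity) * 100_000
    weighted_harm = missed * harm_missed_case + false_alarms * harm_false_alarm
    print(f"cutoff {cutoff}: sensitivity {sensitivity:.2f}, specificity {specificity:.2f}, "
          f"missed {missed:.0f}, false alarms {false_alarms:.0f}, weighted harm {weighted_harm:.0f}")
```

The harm weights in such a calculation are themselves value judgments; consistent with the argument of this chapter, they should emerge from deliberation that includes the affected community rather than being set by the analyst alone.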
Summary and Conclusion I have argued in this chapter that ethics in PH practice is shaped by the mission of PH and that the ethical obligations of PH practitioners are grounded in their commitment to that mission and in their voluntary assumption of responsibility for the public's health. Society also has an interest in the public's health, and that endorsement provides partial justification for infringing on certain individual rights for the sake of the common good. Recent developments in PH ethics have made clear that it must make use of methods and concepts in addition to those of bioethics. The resulting PH ethics will include methods such as reflective equilibrium, broader concepts of justice and respect for persons and communities, considerations of human rights, including the right to health as fundamental to human flourishing, and a deeper understanding of the nature of health and well-being as our ultimate goal. Because of the multidimensional nature of PH, ethical analysis must include the perspectives of a broad range of persons, communities, agencies, and institutions. Development of preventive ethics with prospective and procedural components is recommended as a means of maintaining assurance of ethical approaches to urgent or emerging PH problems or disasters, such as the COVID-19 pandemic and recent natural and anthropogenic disasters. Finally, we have seen that ethical considerations enter PH practice even regarding issues such as analysis of data, evaluation of evidence, statistical decisions, and design of screening programs. Many ethical issues, concepts, values, and obligations enter PH responses in every setting, from routine implementation of programs and policy to planning for emergency responses, providing further evidence of the importance of ethical analysis for responsible and effective PH practice. Though the task seems daunting, we close with this encouraging note from Ortmann et al.6, p. 12:
Ethical values and rules enjoy the approval of history, custom, law, and religious tradition, but they also find anchor biologically, psychologically, and socially in human life. Value judgments and ethical determinations, then, are not relative as much as correlative; that is, they correlate and resonate with these deeper roots of human life that we share. If humans indeed share a set of fundamental values, then ethical conflicts primarily reflect differences in prioritizing values in a particular context, rather than a fundamental disagreement about values. This point of view provides grounds for optimism about the possibility of finding a deeper basis for understanding and mutual respect, if not agreement, when ethical tensions surface.
Acknowledgment
The author gratefully acknowledges the contribution of Dr. R. Max Learner in writing case studies for the previous edition of this chapter, and the careful reading and critique of that previous version by Dr. Learner and Dr. George Khushf. Their insight, analysis, and lucid suggestions made that chapter better than it could otherwise have been, and this revision continues to benefit from their contributions.
References 1. Arabena K, Armstrong F, Berry H, et al. Australian health professionals’ statement on climate change and health. Lancet. 2018;392(10160):2169–2170. 2. Shultz JM, Kossin JP, Galea S. The need to integrate climate science into public health preparedness for hurricanes and tropical cyclones. Journal of the American Medical Association. 2018;320(16):1637–1638. 3. Singh JA. Why human health and health ethics must be central to climate change deliberations. PLoS Medicine. 2012;9(6):e1001229. 4. MacIntyre AC. After Virtue: A Study in Moral Theory. 2nd ed. University of Notre Dame Press; 1984. 5. Marckmann G, Schmidt H, Sofaer N, Daniel S. Putting public health ethics into practice: a systematic framework. Frontiers in Public Health. 2015;3:8. 6. Ortmann LW, Barrett DH, Saenz C, et al. Public health ethics: global cases, practice, and context. In: Barrett DH, Ortmann LW, Dawson A, et al., eds. Public Health Ethics: Cases Spanning the Globe. Vol. 3. Springer Open; 2016. 7. Petrini C. Theoretical models and operational frameworks in public health ethics. International Journal of Environmental Research and Public Health. 2010;7(1):189–202. 8. Lee LM, Zarowsky C. Foundational values for public health. Public Health Reviews. 2015;36(2):5. 9. Liao SM. Health (care) and human rights: a fundamental conditions approach. Theoretical Medicine and Bioethics. 2016;37(4):259–274.
Ethics in Public Health Practice 169 10. Marmot M. Just societies, health equity, and dignified lives: the PAHO Equity Commission. Lancet. 2018;392(10161):P2247–2250. 11. Ruger JP. The health capability paradigm and the right to health care in the United States. Theoretical Medicine and Bioethics. 2016;37(4):275–292. 12. Sreenivasan G. Health care and human rights: against the split duty gambit. Theoretical Medicine and Bioethics. 2016;37(4):343–364. 13. Tasioulas J, Vayena E. The place of human rights and the common good in global health policy. Theoretical Medicine and Bioethics. 2016;37(4):365–382. 14. Association of State and Territorial Health Officials. Policy Statement on Health in All Policies. 2018. 15. Rawls J. A Theory of Justice, revised ed. Belknap Press of Harvard University; 1999. 16. Bensimon CM, Smith MJ, Pisartchik D, et al. The duty to care in an influenza pandemic: a qualitative study of Canadian public perspectives. Social Science & Medicine. 2012;75(12):2425–2430. 17. Daniels N. Reflective equilibrium. In: Stanford encyclopedia of philosophy. Fall 2018. 18. Barrett DH, Ortmann LW, Brown N, et al. Public health research. In: Barrett DH, Ortmann LW, Dawson A, et al., eds. Public Health Ethics: Cases Spanning the Globe. Springer Open; 2016:285–300. 19. Lee LM. Adding justice to the clinical and public health ethics arguments for mandatory seasonal influenza immunisation for healthcare workers. Journal of Medical Ethics. 2015;41(8):682–686. 20. Phelan AL, Gostin LO. Flu, floods, and fire: ethical public health preparedness. Hastings Center Reports. 2017;47(3):46–47. 21. Upshur REG. Evidence and ethics in public health: the experience of SARS in Canada. New South Wales Public Health Bulletin. 2012;23(6):108–110. 22. Powers M, Faden R. Social Justice: The Moral Foundations of Public Health and Health Policy. Oxford University Press; 2006. 23. Jonas H. The Imperative of Responsibility: In Search of an Ethics for the Technological Age [Translation of Das Prinzip Verantwortung: Versuch einer Ethik fuer die technologishce Zivilisation, translated by Hans Jonas with David Herr]. University of Chicago Press; 1984. 24. Weed DL, McKeown RE. Science and social responsibility in public health. Environmental Health Perspectives. 2003;111(14):1804–1808. 25. Mann J. Health and human rights: if not now, when? Health and Human Rights. 1997;2(3):113–120. 26. Barrett DH, Ortmann LW, Dawson A, et al., eds. Public Health Ethics: Cases Spanning the Globe. Springer Open; 2016. Selgelid MJ, ed. Public Health Ethics Analysis; No. 3. 27. Coughlin SS. Model curricula in public health ethics. American Journal of Preventive Medicine. 1996;12(4):247–251. 28. Coughlin SS, Soskolne CL, Goodman KW, eds. Case Studies in Public Health Ethics and Instructor’s Guide. American Public Health Association; 1997. 29. Augustine. City of God (Dods M, trans.). T & T Clark; 1871. 30. Canadian Institutes of Health Research, Natural Sciences and Engineering Research Council of Canada, Social Sciences and Humanities Research Council of Canada. Tri-Council Policy Statement: Ethical Conduct for Research Involving Humans. 2014. 31. Dzau VJ, Balatbat CA. Reimagining population health as convergence science. Lancet. 2018;392(10145):267–268.
170 Ethics and Epidemiology 32. Marmot M, Friel S. Global health equity: evidence for action on the social determinants of health. Journal of Epidemiology and Community Health. 2008;62(12):1095–1097. 33. Pan American Health Organization. Just Societies: Health Equity and Dignified Lives. Executive Summary of the Report of the Commission of the Pan American Health Organization on Equity and Health Inequalities in the Americas. 2018. 34. Lee LM. Public health ethics theory: review and path to convergence. Journal of Law, Medicine, and Ethics. 2012;40(1):85–98. 35. Beauchamp TL, Childress JF. Principles of Biomedical Ethics. 7th ed. Oxford University Press; 2012. 36. Porta M. A Dictionary of Epidemiology. Oxford University Press; 2008. 37. Weed D, McKeown R. Glossary of ethics in epidemiology and public health: I. Technical terms. Journal of Epidemiology and Community Health. 2001;55:855–857. 38. Pellegrino ED, Thomasma DC. The Virtues in Medical Practice. Oxford University Press; 1993. 39. Institute of Medicine, Committee for the Study of the Future of Public Health. The Future of Public Health. National Academy Press; 1988. 40. Winslow C-EA. The Evolution and Significance of the Modern Public Health Campaign. Yale University Press; 1984 (originally published 1923). 41. Susser M, Susser E. Choosing a future for epidemiology: I. Eras and paradigms. American Journal of Public Health. 1996;86(5):668–673. 42. Institute of Medicine, Committee on Assuring the Health of the Public in the 21st Century. The Future of the Public’s Health in the 21st Century. National Academy Press; 2003. 43. Susser M. Epidemiology in the United States after World War II: the evolution of technique. Epidemiologic Reviews. 1985;7:147–177. 44. Susser M, Susser E. Choosing a future for epidemiology: II. From black box to Chinese boxes and eco-epidemiology. American Journal of Public Health. 1996;86(5):674–677. 45. Institute of Medicine. Healthy Communities: New Partnerships for the Future of Public Health. National Academy Press; 1996. 46. Ruger JP. Ethics of the social determinants of health. Lancet. 2004;364(9439): 1092–1097. 47. Burris S. Introduction: merging law, human rights, and social epidemiology. Journal of Law, Medicine, and Ethics. 2002;30(4):498–509. 48. Khushf G. System theory and the ethics of human enhancement: a framework for NBIC convergence. Annals of the New York Academy of Science. 2004;1013:124–149. 49. Nijhuis H, Van der Maesen L. The philosophical foundations of public health: an invitation to debate. Journal of Epidemiology and Community Health. 1994;48:1–3. 50. Royo-Bordonada MÁ, Román-Maestre B. Towards public health ethics. Public Health Reviews. 2015;36(3):15. 51. WHO Regional Office for Europe. Targets and Indicators for Health 2020. 2018. 52. Childress J, Faden R, Gaare R, et al. Public health ethics: mapping the terrain. Journal of Law, Medicine, and Ethics. 2002;30:170–178. 53. Gostin LO. Public health, ethics, and human rights: a tribute to the late Jonathan Mann. Journal of Law, Medicine, and Ethics. 2001;29(2):121–130. 54. Mann JM. Medicine and public health, ethics and human rights. Hastings Center Reports. 1997;27(3):6–13. 55. Association of State and Territorial Health Officials. The State of Health in All Policies. 2018.
Ethics in Public Health Practice 171 56. WHO Regional Office for Europe. Multisectoral and Intersectoral Action for Improved Health and Well-Being for All: Mapping of the WHO European Region—Governance for a Sustainable Future: Improving Health and Well-Being for All. 2018. 57. Galarneau C. Health care as a community good: many dimensions, many communities, many views of justice. Hastings Center Reports. 2002;32(5):33–40. 58. National Academy of Sciences, National Academy of Engineering, Institute of Medicine. On Being a Scientist: A Guide to Responsible Conduct in Research. 3rd ed. National Academies Press; 2009. 59. Ruger JP. Health and social justice. Lancet. 2004;364(9439):1075–1080. 60. Kass N. An ethics framework for public health. American Journal of Public Health. 2001;91:1776–1782. 61. Department of Health and Human Services. Federal Policy for the Protection of Human Subjects. Vol. 45 CFR, part 46. 2017. 62. Council for International Organizations of Medical Sciences (CIOMS). International Ethical Guidelines for Health-Related Research Involving Humans. 4th ed. 2016. 63. Adashi EY, Walters LB, Menikoff JA. The Belmont Report at 40: reckoning with time. American Journal of Public Health. 2018;108(10):1345–1348. 64. National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. The Belmont Report: Ethical Principles and Guidelines for the Protection of Human Subjects. Government Printing Office; 1978. 65. Kass NE, Faden RR, Goodman SN, et al. The research–treatment distinction: a problematic approach for determining which activities should have ethical oversight. Hastings Center Report. 2013;43(1S):S4–S15. 66. Hodge JGJ, Gostin LO. Revamping the US Federal Common Rule: modernizing human participant research regulations. Journal of the American Medical Association. 2017;317(15):1521–1522. 67. Office of the Associate Director for Science. Distinguishing Public Health Research and Public Health Nonresearch. Centers for Disease Control and Prevention, July 29, 2010:13. 68. Dickert NW, Sugarman J. Ethical goals of community consultation in research. American Journal of Public Health. 2005;95(7):1123–1127. 69. Dickert NW, Sugarman J. Community consultation: not the problem—an important part of the solution. American Journal of Bioethics. 2006;6(3):26–28. 70. Hodge JG. An enhanced approach to distinguishing public health practice and human subjects research. Journal of Law, Medicine, and Ethics. 2005;33(1):125–141. 71. Baily MA. Harming through protection? New England Journal of Medicine. 2008;358(8):768–769. 72. Miller FG, Emanuel EJ. Quality-improvement research and informed consent. New England Journal of Medicine. 2008;358(8):765–767. 73. Pronovost P, Needham D, Berenholtz S, et al. An intervention to decrease catheter- related bloodstream infections in the ICU. New England Journal of Medicine. 2006;355(26):2725–2732. 74. Beauchamp TL. Viewpoint: why our conceptions of research and practice may not serve the best interest of patients and subjects. Journal of Internal Medicine. 2011;269(4):383–387. 75. The Brussels Declaration on Ethics and Principles for Science and Society Policy- making. http://w ww.euroscientist.com/wp-content/uploads/2017/02/Brussels- Declaration.pdf
172 Ethics and Epidemiology 76. MacQueen K, Buehler J. Ethics, practice, and research in public health. American Journal of Public Health. 2004;94(6):928–931. 77. Leider JP, DeBruin D, Reynolds N, et al. Ethical guidance for disaster response, specifically around crisis standards of care: a systematic review. American Journal of Public Health. 2017;107(9):e1–e9. 78. Petrini C. Triage in public health emergencies: ethical issues. Internal and Emergency Medicine. 2010;5(2):137–144. 79. Coughlin SS, Barker A, Dawson A. Ethics and scientific integrity in public health, epidemiological and clinical research. Public Health Reviews. 2012;34(1):71–83. 80. Lehmann L, Sulmasy L, Desai S, ACP Ethics, Professionalism, and Human Rights Committee. Hidden curricula, ethics, and professionalism: optimizing clinical learning environments in becoming and being a physician: a position paper of the American College of Physicians. Annals of Internal Medicine. 2018;168(7):506–508. 81. Inglesby TV. Progress in disaster planning and preparedness since 2001. Journal of the American Medical Association. 2011;306(12):1372–1373. 82. Romero L, Koonin LM, Zapata LB, et al. Contraception as a medical countermeasure to reduce adverse outcomes associated with Zika virus infection in Puerto Rico: the Zika Contraception Access Network program. American Journal of Public Health. 2018;108(S3):S227–S230. 83. Lurie N, Manolio T, Patterson AP, et al. Research as a part of public health emergency response. New England Journal of Medicine. 2013;368(13):1251–1255. 84. McCullough LB, Coverdale JH, Chervenak FA. Preventive ethics for including women of childbearing potential in clinical trials. American Journal of Obstetrics and Gynecology. 2006;194(5):1221–1227. 85. Bayer R, Fairchild AL. The genesis of public health ethics. Bioethics. 2004;18(6): 473–492. 86. Fairchild AL, Bayer R. Ethics and the conduct of public health surveillance. Science. 2004;303:631–632. 87. World Health Organization Communicable Disease Surveillance and Response Global Influenza Programme. Responding to the Avian Influenza Pandemic Threat: Recommended Strategic Actions. 2005. 88. Garten R, Blanton L, Abd Elal AI, et al. Update: influenza activity in the United States during the 2017–18 season and composition of the 2018–19 influenza vaccine. Morbidity and Mortality Weekly Report. 2018;67(22):634–642. 89. National Center for Immunization and Respiratory Diseases (NCIRD), Centers for Disease Control and Prevention. Estimated influenza illnesses, medical visits, hospitalizations, and deaths in the United States—2017–2018 influenza season. 2018. https://www.cdc.gov/flu/about/burden/estimates.htm 90. Voelker R. Vulnerability to pandemic flu could be greater today than a century ago. Journal of the American Medical Association. 2018;320(15):1523–1525. 91. Federal Emergency Management Agency. 2017 Hurricane Season FEMA After- Action Report. 2018:ix, 65. 92. Sacks CA, Kesselheim AS, Fralick M. The shortage of normal saline in the wake of Hurricane Maria. JAMA Internal Medicine. 2018;178(7):885–886. 93. Callahan D. Principlism and communitarianism. Journal of Medical Ethics. 2003;29:287–291. 94. Callahan D, Jennings B. Ethics and public health: forging a strong relationship. American Journal of Public Health. 2002;92:169–176.
Ethics in Public Health Practice 173 95. McKeown R, Weed D. Glossary of ethics in epidemiology and public health: II. Applied terms. Journal of Epidemiology and Community Health. 2002;56:739–741. 96. Rose G. Sick individuals and sick populations. International Journal of Epidemiology. 1985;14(1):32–38. 97. Buchanan DR. Autonomy, paternalism, and justice: ethical priorities in public health. American Journal of Public Health. 2008;98(1):15–21. 98. Marks SP. Jonathan Mann’s legacy to the 21st century: the human rights imperative for public health. Journal of Law, Medicine, and Ethics. 2001;29(2):131–138. 99. Conly S. The right to preventive health care. Theoretical Medicine and Bioethics. 2016;37(4):307–321. 100. Brudney D. Is health care a human right? Theoretical Medicine and Bioethics. 2016;37(4):249–257. 101. Jacobson PD, Soliman S. Co-opting the health and human rights movement. Journal of Law, Medicine, and Ethics. 2002;30(4):705–715. 102. McDowell I, Spasoff RA, Kristjansson B. On the classification of population health measurements. American Journal of Public Health. 2004;94(3):388–393. 103. Silberzahn R, Uhlmann EL. Crowdsourced research: many hands make tight work. Nature. 2015;526(7572):189–191. 104. Silberzahn R, Uhlmann EL, Martin DP, et al. Many analysts, one data set: making transparent how variations in analytic choices affect results. Advances in Methods and Practices in Psychological Science. 2018;1(3):337–356. 105. Mandrola JM. The year’s most important study adds to uncertainty in science. Medscape. 2018. https://www.medscape.com/viewarticle/904286 106. Resplandy L, Keeling RF, Eddebbar Y, et al. Quantification of ocean heat uptake from changes in atmospheric O2 and CO2 composition. Nature. 2018;563(7729):105–108. 107. Keeling R. Resplandy et al. correction and response. In: RealClimate: Climate Science from Climate Scientists. RealClimate.org; 2018. Available from https:// www.realclimate.org/index.php/archives/2018/11/resplandy-et-al-correction-and- response/#ITEM-22025-0. Accessed March 29, 2021. 108. Resplandy, L., et al., Retraction Note: Quantification of ocean heat uptake from changes in atmospheric O2 and CO2 composition. Nature, 2019. 573 (7775): 614. 109. Resplandy, L., et al., Quantification of ocean heat uptake from changes in atmospheric O2 and CO2 composition. Scientific Reports, 2019. 9(1): 20244. https://doi. org/10.1038/s41598-019-56490-z 110. Greenland S. Transparency and disclosure, neutrality and balance: shared values or just shared words? Journal of Epidemiology and Community Health. 2012;66(11):967–970. 111. Weed DL, McKeown RE. Epidemiology: observational studies on human populations. In: Emanuel EJ, Grady C, Crouch RA, et al., eds. Oxford Textbook of Clinical Research Ethics. Oxford University Press; 2008:325–335. 112. Chavalarias D, Wallach JD, Li AHT, Ioannidis JPA. Evolution of reporting P values in the biomedical literature, 1990–2015. Journal of the American Medical Association. 2016;315(11):1141–1148. 113. Ioannidis JPA. The proposal to lower P value thresholds to .005. Journal of the American Medical Association. 2018;319(14):1429–1430. 114. Kyriacou DN. The enduring evolution of the P value. Journal of the American Medical Association. 2016;315(11):1113–1115.
115. Wasserstein RL, Lazar NA. The ASA’s statement on p-values: context, process, and purpose. American Statistician. 2016;70(2):129–133. 116. Greenland S, Senn SJ, Rothman KJ, et al. Statistical tests, P values, confidence intervals, and power: a guide to misinterpretations. European Journal of Epidemiology. 2016;31(4):337–350. 117. Polanyi M. Personal Knowledge. Harper & Row; 1964. 118. Kuhn TS. The Structure of Scientific Revolutions. 2nd ed. University of Chicago Press; 1970. 119. Kahneman D. Thinking, Fast and Slow. Farrar, Straus and Giroux; 2011. 120. Lewis M. The Undoing Project: A Friendship That Changed Our Minds. W. W. Norton & Co.; 2017.
8
Ethical Issues in Genetic Epidemiology Laura M. Beskow, Stephanie M. Fullerton, and Wylie Burke
Introduction
The landmark completion of the sequence of human DNA in 2003 rapidly ushered in an era of genomics and Big Data.1,2 This evolution has been aided by significant technological advances, including developments in next-generation sequencing,3,4 widespread use of electronic health records,5,6 and the proliferation of mobile health apps and devices.7,8 Together, these advances have heightened hopes for precision medicine and precision public health. “Precision medicine” refers to the notion of accounting for individual variability in genes, environment, and lifestyle to devise new ways to prevent, detect, diagnose, and treat health conditions.9–11 “Precision public health” is the related effort to use these kinds of variability to more finely tailor preventive interventions for at-risk groups and improve population health.12–15 Epidemiological research is essential to realizing these aspirations. Important scientific questions about the roles of genes, environmental exposures (including lifestyle influences), and gene–gene and gene–environment interactions in human health can only be answered through the rigorous study of genotypic, phenotypic, and environmental data in human populations.16,17 Unlocking the genomic contributions to health and disease has always required a variety of study designs, such as family-based studies to assess whether diseases and gene variants show correlated transmission among related individuals;18–20 population-based studies to determine whether diseases and gene variants show associations among unrelated individuals;21,22 and intervention studies, in both clinical and public health settings, to supply the evidence needed to make informed policy choices about the appropriate use of genetic information to improve health outcomes.23–25 More recently, considerable time and resources have been invested in the creation of massive research platforms, incorporating molecular, health, and environmental data from large numbers of people to facilitate an extensive range of studies.9,26 Relatedly, researchers are expected to share data broadly to enable new analyses, thereby accelerating scientific discovery and validation.27–29 There are also calls to harmonize and aggregate information from existing cohort
studies to achieve larger sample sizes and increased statistical power, improve diversity and generalizability, and encourage efficient use of existing data.30,31 Despite their advantages, these large-scale endeavors also involve risks to participants. These can be categorized as:32

1. Unintended access to identifying information, through hacking or other breach, or triangulation enabled by the depth and breadth of data collected and generated for research;
2. Permitted but potentially unwanted use of information, including objectionable research uses, as well as non-research uses of data, such as for law enforcement and marketing;
3. Risks based on the nature of genetic information, including the probabilistic and familial aspects of genomic data, as well as the “unknown unknowns” of future genomic and technological advances; and
4. Risks associated with longitudinal studies, arising from the typically open-ended design of precision medicine and precision public health research, limits on participants’ ability to withdraw, and changes over time that occur external to the study (e.g., in participants’ health and lives, in the sociopolitical milieu).

In turn, these risks invoke the prospect of an array of harms, including:33

• Physical harm, primarily related to unwarranted medical actions taken based on return of individual research results;
• Dignitary harm, based on research uses that participants find morally offensive;
• Group harm, stemming from research that serves to exacerbate stigmatization of socially defined groups;
• Economic harm, for example from employment and insurance discrimination (particularly life, disability, and long-term care insurance) or identity theft;
• Psychological harm, related again to return of individual research results (e.g., anxiety, familial distress) as well as to unintended access to identifiable information and objectionable use; and
• Legal harm, premised on government and/or law enforcement access to data.

The likelihood and severity of these risks and harms depend both on the characteristics and values of individual participants and on specific decisions for the design and conduct of the research. At the same time, unduly restricting researchers’ ability to use and generate genetic, health, and environmental
information can pose a threat to scientific validity and add uncertainty, cost, and delay.21,34–36 Thus, an important goal is to simultaneously protect research participants and preserve epidemiologists’ ability to conduct beneficial research. Ethical issues in genetic research37–39 and in epidemiology40–42 have been explored at length. In addition, many argue that genetic information is fundamentally similar to other kinds of health information,43–47 and thus the issues and concepts addressed elsewhere in this book are applicable to genetic epidemiology. In this chapter, we focus on three selected topics that, although not unique to genetics, are becoming increasingly important in the kind of large-scale genetic epidemiological research needed to support precision medicine and precision public health: (1) the use of broad consent to unspecified future research, (2) changing ethical norms and considerations for offering research results to participants, and (3) the key role of stakeholder engagement and governance processes. These topics are interrelated and represent areas in need of continued attention and debate to ensure participants are respected while promoting the quality and efficiency of research.
Broad Consent to Future, Unspecified Research Most genetic epidemiological research is based in the analysis of biospecimens and data obtained from participants who have given their permission for collection and use of these materials, with the exact nature of the informed consent process governed by national research regulations. In January 2017, major revisions were announced to U.S. federal policies for the protection of human subjects (known as the “Common Rule”).48 These changes, which were the result of an iterative process of public comment on proposed rules,49,50 represent the first update of the Common Rule since it was issued in 1991. In the interim, the scientific and societal landscape had evolved significantly, including developments in human genome research as well as high-profile controversies surrounding research uses of biospecimens and data.51–53 Thus, many of the regulatory changes are directly relevant to genetic epidemiology,54 including important changes surrounding informed consent. The goal of informed consent is to enable competent individuals to make voluntary decisions about participating in research with an understanding of the purpose, procedures, risks, benefits, and alternatives. With limited exceptions, informed consent is a mainstay for research involving human subjects, based on well-established ethical principles of respect for persons, beneficence, and justice.55 Even so, decades of accumulated evidence in many research contexts, including biobanking, amply document that prospective participants fail to grasp key information conveyed in consent forms and processes.56–66
The new U.S. regulations adopt a number of provisions intended to help remedy this situation.67,68 They require that prospective participants be given “the information that a reasonable person would want to have in order to make an informed decision” (45 CFR 46 §__.116(a)(4)). Sufficient detail must be provided, “organized and presented in a way that does not merely provide lists of isolated facts, but rather facilitates . . . understanding of the reasons why one might or might not want to participate” (45 CFR 46 §__.116(a)(5)). In addition, the regulations now enshrine the use of broad consent. “Broad consent” refers to the approach of asking prospective participants to consent to the storage and use of their biospecimens and data for unspecified future research. The conditions under which such research will occur are defined at the time of consent (e.g., access procedures, privacy protections) but not the details of the individual studies, since those are typically not known. The shift to the use of broad consent is also occurring in many other developed countries but is not universally endorsed. A 2016 review found, for example, that broad consent was not permitted in Germany and was disallowed for genetic data in France.69 The trend, however, points to greater acceptance of broad consent over time and across a wide range of jurisdictions. In the United States, the revised Common Rule expressly permits the use of broad consent “for the storage, maintenance, and secondary research use of identifiable private information or identifiable biospecimens (collected for either research studies other than the proposed research or nonresearch purposes)” (45 CFR 46 §__.116(d)). Basic informed consent disclosures are required, along with informational elements specific to broad consent, including:

• a general description of the types of research that may be conducted, comprising sufficient information such that a reasonable person would expect that the broad consent would permit the types of research conducted; and
• a statement that participants will not be informed of the details of any specific studies that might be conducted—and that they might have chosen not to consent to some of those specific research studies.

Amassing the complex data required for precision medicine and precision public health research involves substantial investment of time and resources, leading to a strong interest in ensuring that the data are readily available for a wide array of studies. Broad consent is meant to achieve this goal, but its ethical acceptability has been extensively debated.70–76 A primary concern is the degree to which research participants can be truly informed, given that the details of the studies that will be conducted are not known at the time of consent. In general, oversight mechanisms are considered crucial to the ethical justification for broad consent, which has often been described as consent to governance.77–80 In other
Genetic Epidemiology 179 words, participants agree—in essence—to entrust decisions about future research to oversight bodies and processes. A survey of U.S. biobanks81 suggested that broad consent is often implemented with requirements for institutional review board (IRB) review of specific studies, as well as a role for other oversight bodies such as data access committees and community advisory boards. Further, lay perspectives are essential to inform the ethical implementation of broad consent. As reflected in new Common Rule requirements, devising effective consent forms and processes requires a firm grasp of what reasonable people would want to know about large-scale genetic epidemiological research, as well as concerted efforts to organize and present information in a way that promotes comprehension. A large volume of research has been conducted on patient and public attitudes toward storing biospecimens and data for future use; this literature has been the subject of multiple published reviews that cover a variety of studies that differ in context, purpose, design, sampling, results, and conclusions.82 Much of this research seems to suggest willingness to give broad consent. For instance, in a systematic review of forty-eight studies exploring individuals’ perspectives on broad consent and data sharing in the United States, Garrison et al.83 found that broad consent was often preferred over tiered or study-specific consent—especially when broad consent was the only option, samples were de-identified, logistics of biobanks were communicated, and privacy was addressed. Willingness for data to be shared was high, but it was lower among individuals from underrepresented minorities, among individuals with privacy and confidentiality concerns, and when pharmaceutical companies had access to data. Similarly, in a systematic review of twenty-three quantitative studies assessing public and research participant perspectives on the conduct of genomic cohort studies, Goodman et al.84 found support for the use of broad consent among both general and research populations. Uniformly, these studies identified trust as an important predictor of acceptance of alternatives to traditional study-specific consent. However, these and other systematic reviews commonly highlight nontrivial limitations in the underlying studies. Additional, rigorously designed research is needed that elicits considered opinions and reasoning about the acceptability of (not preferences for)82 different consent models, especially in diverse populations. Importantly, data are lacking concerning what people understand broad consent to encompass. Many such consent forms are written to describe research on “health and disease,” and the validity of informed consent depends on participants’ understanding of what they are agreeing to. If research participants and those to whom they entrust decisions about future uses of stored materials differ in their understanding of the meaning of “health and disease,” specimens and data could be used in ways that are inconsistent with participants’ expectations and values. Studies suggest that people consider research purpose to be
180 Ethics and Epidemiology among the information most important to their decision to participate85 and that they may have concerns about research on topics such as ancestry, substance abuse, violent behavior, and intelligence.86 These findings suggest the need for broad consent to include specific descriptions of the types of research anticipated and for biobanking procedures to include efforts to seek feedback from research participants and the public concerning the scope of research to be undertaken. Thus, although broad consent can increase research efficiency and may be generally acceptable to many participants, it could present a serious threat to public trust if it facilitates research that participants find offensive, morally concerning, or beyond the scope of research to which they believe they consented. It is imperative not only to build and maintain trust but also to ensure that individuals, organizations, policies, and processes associated with large-scale genetic epidemiology research are, in fact, trustworthy.87–89
Offering Individual Research Results to Participants Traditionally, few researchers have attempted to offer individual genetic research results to participants in epidemiological studies.90 There are likely several reasons for this. First, much of the information generated from epidemiological research is exploratory or provisional as opposed to immediately clinically important or actionable. Second, epidemiological approaches often involve the investigation of very large cohorts for which any systematic attempt to contact participants for the offer and return of individual results could be logistically complex and expensive. Finally, epidemiologists may not feel that they have the training required to responsibly convey results of potential clinical relevance to research participants. These considerations notwithstanding, changing research ethics norms and the rapid pace of technological innovation in genome-wide genotyping and next-generation sequencing have combined to suggest that offering individual results may need to be considered and, where appropriate, incorporated into genetic epidemiological investigation. Research ethics guidelines have evolved over the course of the last fifteen years to recommend that, where feasible, analytically valid and clinically actionable information identified in the course of research be offered to research participants. The National Academies of Sciences, Engineering, and Medicine (NASEM), for example, recently completed a year-long consensus development process aimed at creating recommendations for the return of individual research results generated by research laboratories. In their report, NASEM advocated for a study-specific “process-oriented approach” to offering individual results that considers the value of findings for participants, the potential risks and feasibility of return, and quality standards for the research laboratory generating the
Genetic Epidemiology 181 results.91 These general recommendations largely ratified an earlier set of consensus recommendations, agreed upon by a joint working group of investigators drawn from the Electronic Medical Records and Genomics (eMERGE) and Clinical Sequencing Exploratory Research (CSER) networks that focused on return of individual genomic results.92 These investigators suggested that analytically and clinically valid information that is of an important and actionable medical nature and that is identified as part of the research process should be offered to participants. The eMERGE/CSER working group further clarified that researchers do not have a duty to look for actionable findings beyond those identified in the normal process of their investigations (i.e., that there is no “duty to hunt” for returnable results) and that participants have the right to refuse any results that may be offered. While these research ethics recommendations support the offer of individual, especially but not exclusively genetic, results to research participants, exactly which results meet the threshold for return remains dependent on context and subject to interpretation.93 For epidemiological research that involves the simultaneous investigation of multiple genic regions and discovery of novel or individually rare gene variants (e.g., exome sequencing or whole genome sequencing), the likelihood of identifying information that could warrant return to research participants is magnified. Of course, in a genome-scale investigation it is largely impossible to “stumble across” a clinically actionable result. Instead, automated analytic pipelines must be designed either to look for changes in a subset of genes relevant to the trait or condition of interest, or to scan all sequenced genes for small changes in affected participants that are absent in matched controls. In either case, calling algorithms can identify genetic changes that have previously been proven to increase disease risk (i.e., pathogenic variants) or that change protein coding in ways predicted but as yet unproven to affect disease risk (i.e., likely pathogenic variants). Researchers must therefore decide in advance which genes will be investigated and which type of genetic changes identified therein will be offered to participants. Although the American College of Medical Genetics and Genomics has recommended a minimum list of fifty-nine genes for which pathogenic variants identified in the course of clinical exome and/or genome sequencing should be offered to patients,94,95 no comparable consensus gene list exists in the research context, and study teams can, and often do, employ different gene lists and criteria for return.96 Prior to initiating a new research study, investigators should gauge the likelihood of generating clinically important and actionable research results and decide on their preferred approach to identifying and offering such results to research participants. This plan should address the criteria guiding decision- making about the kind of results to offer, the manner in which the analytic and clinical validity of results will be ensured (e.g., many U.S. funders expect genetic
182 Ethics and Epidemiology results offered to participants to be validated by confirmation in a Clinical Laboratory Improvement Amendments of 1988 [CLIA] compliant laboratory, and assurance of analytic validity was identified as a significant issue for international policymaking related to return of results97–99), the methods by which participants will be notified and the result returned (including use of personnel with specialized expertise, such as a genetic counselor), and the approximate timing of result return. The NASEM report noted that offering individual results may not always be feasible and that feasibility will be determined by an array of study-specific factors such as the costs, the burdens posed by the return process on other study activities, and the potential for result return to lead to investigative bias. Balancing the value to participants of returning results against its feasibility is an important part of the planning process.91 Once a specific plan for returning results is determined, or a decision to forgo return is made and justified, the plan should be reviewed and approved by the governing IRB. In addition to establishing a protocol for returning results, recent revisions to the Common Rule now require that investigators describe as part of the informed consent process whether, and under what circumstances, they will offer clinically relevant research results to participants (§__.116(c)(8)). Such notification ensures that potential participants are aware of the possibility of receiving individual results and can include that prospect in their decision-making about whether to participate. Alternatively, for studies where returning results is not anticipated or is deemed infeasible, notifying participants that they should not expect to receive results can help forestall misplaced expectations or disappointment at not being recontacted. Ideally, information shared at the time of informed consent will also describe the anticipated timeframe for any return- related contact as well as specific plans for the handling of such results. For example, clinical translational research studies such as eMERGE100 and CSER101,102 now routinely place CLIA-validated genetic research results in the participant’s electronic health record. In addition, some have recommended that investigators solicit at the time of informed consent participants’ preferences for return of results to family members in the event of their death.103 Where a wide array of potentially clinically relevant results is expected to be generated, investigators may also wish to solicit participants’ preferences with respect to the particular type of findings they wish to receive (or not), either at the time of study enrollment or just prior to result disclosure. When individual research results are identified that meet established criteria for return, the study team should aim to communicate those findings to research participants in a timely manner and in a way that will maximize participant understanding. Unfortunately, no explicit research ethics recommendations or regulatory requirements address procedures for offering individual research results, and best practices will likely vary by study design and participant characteristics.
Genetic Epidemiology 183 In addition to timing of return, investigators will need to consider carefully the most appropriate mode of communication (e.g., in person, by telephone, by letter, or via an electronic delivery modality), what additional reference information will need to be included with each result to help explain its meaning for the participant, and what follow-up actions, if any, might be recommended. If appropriate, assistance with referral for clinical follow-up should also be provided. An excellent discussion of these and related considerations is included as part of the NASEM report.91 While anticipating and planning for the potential offer of individual research results to study participants may represent a daunting prospect for many epidemiologists, current recommendations and a growing body of experience with offering genetic results provide a robust basis from which to plan and prepare. Many, perhaps most, research participants will continue to participate in biomedical research altruistically and with no expectation of direct benefit. Nevertheless, it is also clear that many participants value receiving information generated in the course of their study participation and appreciate the gesture of reciprocity symbolized by the offer of research results, whether individual104–107 or aggregate.108,109 Wherever feasible, investigators should design their studies in a way that will allow for the possibility of such return.
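As a purely illustrative sketch of the kind of pre-specified filtering step described earlier in this section—a defined gene list, a clinical-significance threshold, laboratory confirmation, and participant choice—the following Python fragment uses invented gene names, classifications, data structures, and function names; it is an assumption-laden illustration, not any study’s actual pipeline and not the American College of Medical Genetics and Genomics list.

from dataclasses import dataclass

@dataclass
class Variant:
    participant_id: str
    gene: str
    classification: str   # e.g., "pathogenic", "likely pathogenic", "VUS"
    clia_confirmed: bool   # validated in a CLIA-compliant laboratory

# Study-defined, pre-specified return criteria (assumptions for illustration only).
RETURNABLE_GENES = {"BRCA1", "BRCA2", "MLH1"}
RETURNABLE_CLASSES = {"pathogenic", "likely pathogenic"}

def returnable_findings(variants, opted_in):
    """Select findings eligible to be offered, honoring participant choices."""
    return [
        v for v in variants
        if opted_in.get(v.participant_id, False)    # participant chose to receive results
        and v.gene in RETURNABLE_GENES              # gene is on the pre-specified list
        and v.classification in RETURNABLE_CLASSES  # meets the significance threshold
        and v.clia_confirmed                        # analytic validity confirmed
    ]

if __name__ == "__main__":
    variants = [
        Variant("P001", "BRCA1", "pathogenic", True),
        Variant("P001", "TTN", "likely pathogenic", True),     # not on the return list
        Variant("P002", "MLH1", "VUS", True),                  # below the threshold
        Variant("P003", "BRCA2", "likely pathogenic", False),  # not yet confirmed
    ]
    opted_in = {"P001": True, "P002": True, "P003": True}
    for finding in returnable_findings(variants, opted_in):
        print(finding)

In practice, criteria of this kind would be set out in the study protocol and consent materials and reviewed by the governing IRB before any results are offered.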
Stakeholder Engagement and Research Governance Both broad consent and return of research results require procedures for oversight and decision-making—that is, governance procedures. Broad consent allows research data and/or samples to be stored for unspecified future use, and governance procedures are needed to determine who may have access to those resources. Many research repositories have data access committees or similar bodies that consider the qualifications of the researcher and the goals of the proposed research before granting access. For return of results, governance procedures are needed to evaluate results emerging from the research to determine whether they are suitable for return, and then to plan appropriate procedures for offering them to participants. This process is generally undertaken by the research team that recruited the participants, sometimes with the assistance of an IRB or similar research oversight body. Research participants have an interest in these governance procedures and in the principles and criteria that inform them. Although broad consent allows for a wide range of future research, the participant may want to be assured that decisions about future use are made in a responsible fashion, enabling research of value, ensuring protection of data and samples, and avoiding uses that may lead to stigma, discrimination, or waste. Participants’ interest in return of results
184 Ethics and Epidemiology is more direct: to be assured that results appropriate for return are identified and offered in an informative and supportive way. To the extent that research involves public resources or has implications for societal benefit or harm, the public also has an interest in these decisions. Two general strategies are available for incorporating participant and public views: stakeholder engagement and direct participation in research governance. Each can be accomplished in different ways. Taken together, they offer complementary strategies for ensuring participant input. Rooted in democratic principles, stakeholder engagement covers a broad range of activities. It is receiving increasing attention as a component of research practice. The need for stakeholder engagement was noted in a 2013 report on the Clinical and Translational Science Award (CTSA) program from the Institute of Medicine110 and has been a central concern of the Patient-Centered Outcomes Research Institute (PCORI)111 and National Institutes of Health’s “All of Us” research program.112 Projects in the United Kingdom, the Netherlands, Australia, and Finland have also provided evidence for the value of stakeholder engagement in guiding research activities.113,114 Five stages of engagement have been defined by the International Association for Public Participation (IAP2),115 an organization devoted to increasing public participation in governmental and organizational activities related to the public interest: (1) inform; (2) consult; (3) involve; (4) collaborate; and (5) empower. As applied to a research context, the process of informed consent is a central component of the first stage. With broad consent, for example, the researcher has an opportunity to provide potential participants with the rationale for retaining data and biospecimens for future use and to inform them about security measures as well as procedures that will govern access. With return of results, the consent process allows for participants to indicate whether or not they wish to receive results and can also provide information about the likelihood and criteria for the types of results to be offered. How completely and effectively these aspects of the research process are conveyed is an ethical concern; failure to fully inform participants about procedures governing future uses of data or return of results could result in participation that is based on misunderstandings or false expectations. The second stage of engagement, consultation, may take different forms. The research process itself offers an opportunity to gain information about both participant and public perspectives. As noted in this chapter, a substantial body of work now provides information on the views, and in some cases participants’ experiences, related to broad consent, data sharing, and return of results. The methods used include surveys, interviews, focus groups, and participant observation. Other approaches have also been developed to elicit considered opinions from the public, notably methods that utilize techniques of public
Genetic Epidemiology 185 deliberation.116–118 For example, a structured deliberation involving twenty- eight individuals from the general public was used to inform policies for the BC BioLibrary, a network to enhance access to research biospecimens.119 This process produced recommendations related to informed consent, participant recruitment, and biobank governance. Similarly, a University of Michigan research group hosted sixty-six members of the public in a deliberation on policy options related to return of secondary findings to participants in genomic research.120 These deliberations start with a balanced presentation of information to participants, followed by structured discussion aimed at identifying relevant values and justifications for different policy options. An example of the third stage of engagement, involvement, is the formation of a community advisory board (CAB). These bodies are typically made up of representatives of the relevant study population or community. CABs can provide advice on both policy questions and research design. They are an important component of the engagement procedures developed to promote research in isolated and disadvantaged communities, particularly where a past history of research missteps has led to mistrust of researchers. CABs and other community-based consultation efforts offer an opportunity to ensure local input on risks and values relevant to the research,121 highlight areas of shared interest that help to ensure that research reflects local concerns,122 and provide information about social context that may increase the practical and policy relevance of the research.123 Engaging community representatives also demonstrates respect for a community’s social and cultural structures, thereby promoting trust122–124 and potentially increasing recruitment and retention of study participants.123,125 All of these benefits of engagement are of value to research repositories as they develop policies and expectations about how data and/or biological samples will be used, and to research teams as they consider the issue of returning individual results to participants. The community in question for a biorepository or a large epidemiological study may, however be difficult to define. It may include several constituencies: participants whose data and samples have been contributed; patients whose clinical data and samples may be used; and the broader communities where recruitment has occurred. These different constituencies may have diverse interests, and assembling an appropriately representative CAB, or identifying stakeholders for other engagement activities, may be difficult. A study of patient engagement in several health studies in Canada and the United States found that diverse methods of recruitment were used, including social marketing, community outreach, and recruitment through a health system or partner advocacy organization.126 For each recruitment activity, clear goals are needed, including the stakeholder perspectives being sought (e.g., individuals with particular health conditions, individuals from a particular community) and the activities the stakeholder will be asked to undertake. Another study identified some design
186 Ethics and Epidemiology principles for effective stakeholder engagement, including attention to organizational issues (identifying resources to support engagement and awarding effective engagement); values (fostering a commitment to sustained engagement among both individual stakeholders and organizations); and practices (planning for engagement activities, with appropriate flexibility and resources for analysis and use of stakeholder input in an ongoing iterative fashion).127 These issues represent important topics for further investigation to confirm and disseminate best practices. A survey of biobanks found that only 26% had CABs, with the remainder using expert committees for decision-making.81 Another study found varied approaches to stakeholder engagement among biobanks, with reported outcomes including identification of research priorities; policies concerning re- consent and withdrawal; appropriate use of racial, ethnic, or other social identities in sample labeling; and methods to contact potential biobank participants.128 While creation of a CAB cannot be viewed as an ethical requirement for a biorepository or research study, it nevertheless represents an important opportunity for community input. In the absence of a CAB, other methods to gain stakeholder input take on greater importance. The IAP2 spectrum identifies two more intensive stages of engagement: collaboration and empowerment.115 At these stages, engagement moves beyond eliciting the views of participants and the public to direct inclusion of these stakeholders in governance. Community- based participatory research (CBPR)123,124,129,130 is an example of a research model that incorporates both components. It is defined as130 a collaborative process that equitably involves all partners in the research process and recognizes the unique strengths that each brings. CBPR begins with a research topic of importance to the community with the aim of combining knowledge and action for social change to improve health and human welfare. (p. 686)
Although CBPR can take many forms, it always emphasizes participation by the community of interest in all aspects of a project. In this model, community representatives would participate in decision-making about both data sharing and return of results. Often, this community empowerment precludes submission of data or biospecimens to a repository that is not under community control. For example, tribal perspectives on data sharing include the view that tribal governments have a fiduciary responsibility to review any uses of tribal data, in order to ensure that data are used responsibly and that harms to the community are avoided.131 Disease advocacy networks voice a similar expectation about oversight, in order to ensure appropriate use of data and samples to advance the
knowledge most important to the community.132 These examples demonstrate that in some settings, members of a study population may require involvement in governance as a condition for participating in research, reflecting a need to promote trust and ensure a research agenda that is in keeping with a community or advocacy group’s priorities. The degree to which direct participant or community participation in governance procedures should become the norm for research is an unresolved question. One consideration in determining the appropriate form and intensity of stakeholder engagement is the potential for group harm. Hausman133 notes that harms to members of a group may occur as a consequence of the research process; for example, when participants are subjected to inappropriate disease management as in the Tuskegee syphilis studies, or as a consequence of research results that exacerbate existing stigma or stereotypes. Because the latter harm, which is more relevant for epidemiological research, can occur not just to research participants but also to members of the group who are not participants, informed consent disclosures do not provide an adequate solution. Engagement can reduce the risk of such harms by promoting dialogue between group members and researchers, providing researchers with guidance about the perspectives and experiences of individuals within the group, and potentially framing questions and reporting results in ways that reduce the likelihood of harm. Research that involves socially identifiable groups or topics that are potentially stigmatizing may justify greater empowerment of participants and community representatives. Indeed, concerns about group harm are a motivating factor in many American Indian and Alaska Native communities’ requirement of tribal approval and oversight of any research undertaken in their jurisdictions.131,134 Similar empowerment may be justified for research based on biobank data or samples when the research involves socially identifiable groups or sensitive research topics. Different engagement methods can be used either concurrently or sequentially. As an example, a research network recently reported a multilevel engagement process to inform policies and procedures for the network, incorporating interviews, surveys, consultative meetings, a CAB, and inclusion of a patient investigator on the research team.135 This process led to a conceptualization of stakeholder engagement as a continuum, starting with short-term involvement of many stakeholders through surveys, interviews, and consultation and extending to ongoing involvement of smaller numbers of stakeholders at higher levels of intensity.135 An advantage of this approach is that it combines purposeful methods for input on specific questions with organizational structures (such as CABs) that can provide ongoing advice as needed. Innovative methods development is also occurring, including the use of online tools to elicit feedback from participants or community members.136
188 Ethics and Epidemiology As these different methods and levels of engagement are considered, the distinction between solicitation of views and direct participation in decision- making remains important. This distinction applies both to the identification of guiding values and principles and to the decision-making procedures themselves. Even the creation of a CAB does not ensure direct participation of participant or community stakeholders in decision-making.137 Most CABs, for example, are advisory, and although a well-managed system will respect CAB views, final decisions are likely to be made by expert authorities. And when participants are included on decision-making bodies, such as data access committees, they may be a minority or a single representative who may be overruled by expert members. As stakeholder engagement and participation in governance are planned, the degree of weight and authority placed on stakeholder views must be carefully justified. Inclusion of stakeholders may also result in the need to change existing governance procedures—for example, to consider the location and timing of meetings or the creation of new governance structures. O’Doherty et al. note that there must be a fit between the characteristics of a particular biobank and the governance structures that are appropriate to it; issues include the type of governance bodies to be created (e.g., a Board of Directors, CAB, standing committees) and their interaction with research management and stakeholder engagement.88 As greater stakeholder engagement occurs, different governance models are likely to emerge. A key feature will be the creation of an adaptive governance structure—that is, governance that can be responsive to new technologies, regulatory requirements, research opportunities, and stakeholder input over time.
References 1. Collins F. S., Green E. D., Guttmacher A. E., & Guyer M. S. (2003). A vision for the future of genomics research. Nature, 422(6934), 835–847. 2. Green E. D. & Guyer M. S. (2011). Charting a course for genomic medicine from base pairs to bedside. Nature, 470(7333), 204–213. 3. Bamshad M. J., Ng S. B., Bigham A. W., et al. (2011). Exome sequencing as a tool for Mendelian disease gene discovery. Nat Rev Genet, 12(11), 745–755. 4. Goldstein D. B., Allen A., Keebler J., et al. (2013). Sequencing studies in human genetics: design and interpretation. Nat Rev Genet, 14(7), 460–470. 5. Blumenthal D. & Tavenner M. (2010). The “meaningful use” regulation for electronic health records. N Engl J Med, 363(6), 501–504. 6. Hripcsak G. & Albers D. J. (2013). Next-generation phenotyping of electronic health records. J Am Med Inform Assoc, 20(1), 117–121. 7. Gange S. J. & Golub E. T. (2016). From smallpox to big data: the next 100 years of epidemiologic methods. Am J Epidemiol, 183(5), 423–426.
Genetic Epidemiology 189 8. Ouedraogo B., Gaudart J., & Dufour J. C. (2019). How does the cellular phone help in epidemiological surveillance? A review of the scientific literature. Inform Health Soc Care, 44(1), 12–30. 9. Collins F. S. & Varmus H. (2015). A new initiative on precision medicine. N Engl J Med, 372(9), 793–795. 10. Ashley E. A. (2015). The precision medicine initiative: a new national effort. JAMA, 313(21), 2119–2120. 11. Sabatello M. & Appelbaum P. S. (2017). The precision medicine nation. Hastings Cent Rep, 47(4), 19–29. 12. Khoury M. J. & Evans J. P. (2015). A public health perspective on a national precision medicine cohort: balancing long-term knowledge generation with early health benefit. JAMA, 313(21), 2117–2118. 13. Khoury M. J., Iademarco M. F., & Riley W. T. (2016). Precision public health for the era of precision medicine. Am J Prev Med, 50(3), 398–401. 14. Molster C. M., Bowman F. L., Bilkey G. A., et al. (2018). The evolution of public health genomics: exploring its past, present, and future. Front Public Health, 6, 247. 15. Weeramanthri T. S., Dawkins H. J. S., Baynam G., et al. (2018). Editorial: precision public health. Front Public Health, 6, 121. 16. Hamburg M. A. & Collins F. S. (2010). The path to personalized medicine. N Engl J Med, 363(4), 301–304. 17. Phimister E. G., Feero W. G., & Guttmacher A. E. (2012). Realizing genomic medicine. N Engl J Med, 366(8), 757–759. 18. Beskow L. M., Botkin J. R., Daly M., et al. (2004). Ethical issues in identifying and recruiting participants for familial genetic research. Am J Med Genet, 130A(4), 424–431. 19. Leve L. D., Harold G. T., Ge X., et al. (2010). Refining intervention targets in family- based research: lessons from quantitative behavioral genetics. Perspect Pscyhol Sci, 5(5), 516–526. 20. D’Onofrio B. M., Lahey B. B., Turkheimer E., & Lichtenstein P. (2013). Critical need for family-based, quasi-experimental designs in integrating genetic and social science research. Am J Public Health, 103(S1), S46–S55. 21. Beskow L. M., Burke W., Merz J. F., et al. (2001). Informed consent for population- based research involving genetics. JAMA, 286(18), 2315–2321. 22. Roger V. L., Boerwinkle E., Crapo J. D., et al. (2015). Strategic transformation of population studies: recommendations of the Working Group on Epidemiology and Population Sciences from the National Heart, Lung, and Blood Advisory Council and Board of External Experts. Am J Epidemiol, 181(6), 363–368. 23. Khoury M. J., Gwinn M., & Ioannidis J. P. (2010). The emergence of translational epidemiology: from scientific discovery to population health impact. Am J Epidemiol, 172(5), 517–524. 24. Spitz M. R., Caporaso N. E., & Sellers T. A. (2012). Integrative cancer epidemiology— the next generation. Cancer Discov, 2(12), 1087–1090. 25. Nishi A., Milner D. A., Jr., Giovannucci E. L., et al. (2016). Integration of molecular pathology, epidemiology and social science for global precision medicine. Expert Rev Mol Diagn, 16(1), 11–23. 26. Manolio T. A., Weis B. K., Cowie C. C., et al. (2012). New models for large prospective studies: is there a better way? Am J Epidemiol, 175(9), 859–866.
190 Ethics and Epidemiology 27. Paltoo D. N., Rodriguez L. L., Feolo M., et al. (2014). Data use under the NIH GWAS data sharing policy and future directions. Nat Genet, 46(9), 934–938. 28. Contreras J. L. (2015). NIH’s genomic data sharing policy: timing and tradeoffs. Trends Genet, 31(2), 55–57. 29. Global Alliance for Genomics and Health. (2016). A federated ecosystem for sharing genomic, clinical data. Science, 352(6291), 1278–1280. 30. Doiron D., Burton P., Marcon Y., et al. (2013). Data harmonization and federated analysis of population-based studies: the BioSHaRE project. Emerg Themes Epidemiol, 10(1), 12. 31. Lesko C. R., Jacobson L. P., Althoff K. N., et al. (2018). Collaborative, pooled and harmonized study designs for epidemiologic research: challenges and opportunities. Int J Epidemiol, 47(2), 654–668. 32. Beskow L. M., Hammack C. M., Brelsford K. M., & McKenna K. C. (2018) Thought leader perspectives on risks in precision medicine research. In: I. G. Cohen, H. F. Lynch, E. Vayena, & U. Gasser (eds.), Big Data, Health Law, and Bioethics. Cambridge University Press; 161–174. 33. Beskow L. M., Hammack C. M., & Brelsford K. M. (2018). Thought leader perspectives on benefits and harms in precision medicine research. PLoS One, 13(11), e0207842. 34. (2015). Data overprotection [editorial]. Nature, 522(7557), 391–392. 35. Harrell H. L. & Rothstein M. A. (2016). Biobanking research and privacy laws in the United States. J Law Med Ethics, 44(1), 106–127. 36. Wilcox A. J., Taylor J. A., Sharp R. R., & London S. J. (1999). Genetic determinism and the overprotection of human subjects. Nat Genet, 21(4), 362. 37. Kaye J., Meslin E. M., Knoppers B. M., et al. (2012). Research priorities. ELSI 2.0 for genomics and society. Science, 336(6082), 673–674. 38. McEwen J. E., Boyer J. T., Sun K. Y., et al. (2014). The ethical, legal, and social implications program of the National Human Genome Research Institute: reflections on an ongoing experiment. Annu Rev Genomics Hum Genet, 15, 481–505. 39. Callier S. L., Abudu R., Mehlman M. J., et al. (2016). Ethical, legal, and social implications of personalized genomic medicine research: current literature and suggestions for the future. Bioethics, 30(9), 698–705. 40. McKeown R. E., Weed D. L., Kahn J. P., & Stoto M. A. (2003). American College of Epidemiology Ethics Guidelines: foundations and dissemination. Sci Eng Ethics, 9(2), 207–214. 41. Council for International Organizations of Medical Sciences & World Health Organization. (2009). International ethical guidelines for epidemiological studies. CIOMS Geneva. 42. Salerno J., Knoppers B. M., Lee L. M., et al. (2017). Ethics, big data and computing in epidemiology and public health. Ann Epidemiol, 27(5), 297–301. 43. Gostin L. O. & Hodge J. G. (1999). Genetic privacy and the law: an end to genetics exceptionalism. Jurimetrics, Fall, 21–58. 44. Green M. J. & Botkin J. R. (2003). “Genetic exceptionalism” in medicine: clarifying the differences between genetic and nongenetic tests. Ann Intern Med, 138(7), 571–575. 45. Rothstein M. A. (2007). Genetic exceptionalism and legislative pragmatism. J Law Med Ethics, 35(2 Suppl), 59–65. 46. Sulmasy D. P. (2015). Naked bodies, naked genomes: the special (but not exceptional) nature of genomic information. Genet Med, 17(5), 331–336.
Genetic Epidemiology 191 47. Lynch H. F., Bierer B. E., & Cohen I. G. (2016). Confronting biospecimen exceptionalism in proposed revisions to the Common Rule. Hastings Cent Rep, 46(1), 4–5. 48. (2017). Federal Policy for the Protection of Human Subjects. Final rule. Fed Regist, 82(12), 7149–7274. 49. Emanuel E. J. & Menikoff J. (2011). Reforming the regulations governing research with human subjects. N Engl J Med, 365(12), 1145–1150. 50. Hudson K. L. & Collins F. S. (2015). Bringing the Common Rule into the 21st century. N Engl J Med. 51. Bayefsky M. J., Saylor K. W., & Berkman B. E. (2015). Parental consent for the use of residual newborn screening bloodspots: respecting individual liberty vs. ensuring public health. JAMA, 314(1), 21–22. 52. Skloot R. (2013, March 23). The immortal life of Henrietta Lacks, the sequel. New York Times, SR4. 53. Mello M. M. & Wolf L. E. (2010). The Havasupai Indian tribe case—lessons for research involving stored biologic samples. N Engl J Med, 363(3), 204–207. 54. Bledsoe M. J. (2017). The final Common Rule: implications for biobanks. Biopreserv Biobank, 15(4), 283–284. 55. National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research (1979). The Belmont Report: Ethical Principles and Guidelines for the Protection of Human Subjects of Research. U.S. Government Printing Office. 56. Joffe S., Cook E. F., Cleary P. D., et al. (2001). Quality of informed consent in cancer clinical trials: a cross-sectional survey. Lancet, 358(9295), 1772–1777. 57. Beardsley E., Jefford M., & Mileshkin L. (2007). Longer consent forms for clinical trials compromise patient understanding: so why are they lengthening? J Clin Oncol, 25(9), e13–e14. 58. McCarty C. A., Nair A., Austin D. M., & Giampietro P. F. (2007). Informed consent and subject motivation to participate in a large, population-based genomics study: the Marshfield Clinic Personalized Medicine Research Project. Community Genet, 10(1), 2–9. 59. Bergenmar M., Molin C., Wilking N., & Brandberg Y. (2008). Knowledge and understanding among cancer patients consenting to participate in clinical trials. Eur J Cancer, 44(17), 2627–2633. 60. Ormond K. E., Cirino A. L., Helenowski I. B., et al. (2009). Assessing the understanding of biobank participants. Am J Med Genet, 149A(2), 188–198. 61. Bergenmar M., Johansson H., & Wilking N. (2011). Levels of knowledge and perceived understanding among participants in cancer clinical trials: factors related to the informed consent procedure. Clin Trials, 8(1), 77–84. 62. Jefford M., Mileshkin L., Matthews J., et al. (2011). Satisfaction with the decision to participate in cancer clinical trials is high, but understanding is a problem. Support Care Cancer, 19(3), 371–379. 63. Lipton L. R., Santoro N., Taylor H., et al. (2011). Assessing comprehension of clinical research. Contemp Clin Trials, 32(5), 608–613. 64. Koh J., Goh E., Yu K. S., et al. (2012). Discrepancy between participants’ understanding and desire to know in informed consent: are they informed about what they really want to know? J Med Ethics, 38(2), 102–106. 65. Rahm A. K., Wrenn M., Carroll N. M., & Feigelson H. S. (2013). Biobanking for research: a survey of patient population attitudes and understanding. J Community Genet, 4(4), 445–450.
192 Ethics and Epidemiology 66. Montalvo W. & Larson E. (2014). Participant comprehension of research for which they volunteer: a systematic review. J Nurs Scholarsh, 46(6), 423–431. 67. Menikoff J., Kaneshiro J., & Pritchard I. (2017). The Common Rule, updated. N Engl J Med, 376(7), 613–615. 68. Sugarman J. (2017). Examining provisions related to consent in the revised Common Rule. Am J Bioeth, 17(7), 22–26. 69. Rothstein M. A., Knoppers B. M., & Harrell H. L. (2016). Comparative approaches to biobanks and privacy. J Law Med Ethics, 44(1), 161–172. 70. Hansson M. G., Dillner J., Bartram C. R., et al. (2006). Should donors be allowed to give broad consent to future biobank research? Lancet Oncol, 7(3), 266–269. 71. Hofmann B. (2009). Broadening consent—and diluting ethics? J Med Ethics, 35(2), 125–129. 72. Petrini C. (2010). “Broad” consent, exceptions to consent and the question of using biological samples for research purposes different from the initial collection purpose. Soc Sci Med, 70(2), 217–220. 73. Karlsen J. R., Solbakk J. H., & Holm S. (2011). Ethical endgames: broad consent for narrow interests; open consent for closed minds. Camb Q Healthc Ethics, 20(4), 572–583. 74. Helgesson G. (2012). In defense of broad consent. Camb Q Healthc Ethics, 21(1), 40–50. 75. Spellecy R. (2015). Facilitating autonomy with broad consent. Am J Bioeth, 15(9), 43–44. 76. Master Z. (2015). The U.S. National Biobank and (no) consensus on informed consent. Am J Bioeth, 15(9), 63–65. 77. Maschke K. J. (2006). Alternative consent approaches for biobank research. Lancet Oncol, 7(3), 193–194. 78. Koenig B. A. (2014). Have we asked too much of consent? Hastings Cent Rep, 44(4), 33–34. 79. Boers S. N., van Delden J. J., & Bredenoord A. L. (2015). Broad consent is consent for governance. Am J Bioeth, 15(9), 53–55. 80. Garrett S. B., Dohan D., & Koenig B. A. (2015). Linking broad consent to biobank governance: support from a deliberative public engagement in California. Am J Bioeth, 15(9), 56–57. 81. Henderson G. E., Edwards T. P., Cadigan R. J., et al. (2013). Stewardship practices of U.S. biobanks. Sci Transl Med, 5(215), 215–217. 82. Beskow L. M. (2016). Lessons from HeLa cells: the ethics and policy of biospecimens. Annu Rev Genomics Hum Genet, 17(1), 395–417. 83. Garrison N. A., Sathe N. A., Antommaria A. H., et al. (2016). A systematic literature review of individuals’ perspectives on broad consent and data sharing in the United States. Genet Med, 18(7), 663–671. 84. Goodman D., Bowen D., Wenzel L., et al. (2018). The research participant perspective related to the conduct of genomic cohort studies: a systematic review of the quantitative literature. Transl Behav Med, 8(1), 119–129. 85. Trinidad S. B., Fullerton S. M., Bares J. M., et al. (2010). Genomic research and wide data sharing: views of prospective participants. Genet Med, 12(8), 486–495. 86. Trinidad S. B., Fullerton S. M., Bares J. M., et al. (2012). Informed consent in genome-scale research: what do prospective participants think? AJOB Prim Res, 3(3), 3–11.
Genetic Epidemiology 193 87. Meslin E. M. & Cho M. K. (2010). Research ethics in the era of personalized medicine: updating science’s contract with society. Public Health Genomics, 13(6), 378–384. 88. O’Doherty K. C., Burgess M. M., Edwards K., et al. (2011). From consent to institutions: designing adaptive governance for genomic biobanks. Soc Sci Med, 73(3), 367–374. 89. Burke W., Beskow L. M., Trinidad S. B., et al. (2018). Informed consent in translational genomics: insufficient without trustworthy governance. J Law Med Ethics, 46(1), 79–86. 90. Stein C. M., Ponsaran R., Trapl E. S., & Goldenberg A. J. (2019). Experiences and perspectives on the return of secondary findings among genetic epidemiologists. Genet Med, 21(7), 1541–1547. 91. National Academies of Sciences Engineering and Medicine. (2018). Returning Individual Research Results to Participants: Guidance for a New Research Paradigm. National Academies Press. 92. Jarvik G. P., Amendola L. M., Berg J. S., et al. (2014). Return of genomic results to research participants: the floor, the ceiling, and the choices in between. Am J Hum Genet, 94(6), 818–826. 93. Beskow L. M. & Burke W. (2010). Offering individual genetic research results: context matters. Sci Transl Med, 2(38), 38cm20. 94. Green R. C., Berg J. S., Grody W. W., et al.; American College of Medical Genetics and Genomics. (2013). ACMG recommendations for reporting of incidental findings in clinical exome and genome sequencing. Genet Med, 15(7), 565–574. 95. Kalia S. S., Adelman K., Bale S. J., et al. (2017). Recommendations for reporting of secondary findings in clinical exome and genome sequencing, 2016 update (ACMG SF v2.0): a policy statement of the American College of Medical Genetics and Genomics. Genet Med, 19(2), 249–255. 96. Berg J. S., Amendola L. M., Eng C., et al. (2013). Processes and preliminary outputs for identification of actionable genes as incidental findings in genomic sequence data in the Clinical Sequencing Exploratory Research Consortium. Genet Med, 15(11), 860–867. 97. Thorogood A., Dalpe G., & Knoppers B. M. (2019). Return of individual genomic research results: are laws and policies keeping step? Eur J Hum Genet, 27(4), 535–546. 98. Budin-Ljosne I., Mascalzoni D., Soini S., et al. (2016). Feedback of individual genetic results to research participants: is it feasible in Europe? Biopreserv Biobank, 14(3), 241–248. 99. Knoppers B. M., Deschenes M., Zawati M. H., & Tasse A. M. (2013). Population studies: return of research results and incidental findings: policy statement. Eur J Hum Genet, 21(3), 245–247. 100. Fossey R., Kochan D., Winkler E., et al. (2018). Ethical considerations related to return of results from genomic medicine projects: the eMERGE Network (phase III) experience. J Pers Med, 8(1), 2. 101. Green R. C., Goddard K. A. B., Jarvik G. P., et al. (2016). Clinical Sequencing Exploratory Research Consortium: accelerating evidence-based practice of genomic medicine. Am J Hum Genet, 98(6), 1051–1066. 102. Amendola L. M., Berg J. S., Horowitz C. R., et al. (2018). The Clinical Sequencing Evidence-Generating Research Consortium: integrating genomic sequencing in diverse and medically underserved populations. Am J Hum Genet, 103(3), 319–327.
194 Ethics and Epidemiology 103. Wolf S. M., Branum R., Koenig B. A., et al. (2015). Returning a research participant’s genomic results to relatives: analysis and recommendations. J Law Med Ethics, 43(3), 440–463. 104. Murphy J., Scott J., Kaufman D., et al. (2008). Public expectations for return of results from large-cohort genetic research. Am J Bioeth, 8(11), 36–43. 105. Beskow L. M. & Smolek S. J. (2009). Prospective biorepository participants’ perspectives on access to research results. J Empir Res Hum Res Ethics, 4(3), 99–111. 106. Bollinger J. M., Scott J., Dvoskin R., & Kaufman D. (2012). Public preferences regarding the return of individual genetic research results: findings from a qualitative focus group study. Genet Med, 14(4), 451–457. 107. Daack-Hirsch S., Driessnack M., Hanish A., et al. (2013). “Information is information”: a public perspective on incidental findings in clinical and research genome- based testing. Clin Genet, 84(1), 11–18. 108. Beskow L. M., Burke W., Fullerton S. M., & Sharp R. R. (2012). Offering aggregate results to participants in genomic research: opportunities and challenges. Genet Med, 14(4), 490–496. 109. Mester J. L., Mercer M., Goldenberg A., et al. (2015). Communicating with biobank participants: preferences for receiving and providing updates to researchers. Cancer Epidemiol Biomarkers Prev, 24(4), 708–712. 110. Institute of Medicine. (2013). Committee to Review the Clinical and Translational Science Awards at the National Center for Advancing Translational Sciences. The CTSA program at NIH: Opportunities for Advancing Clinical and Translational Research. National Academies Press. 111. Patient-Centered Outcomes Research Institute. (2018). Engagement: Influencing the Culture of Research. https://www.pcori.org/engagement/ 112. National Institutes of Health. (2018). All of Us Research Program: Communications and Engagement. https://allofus.nih.gov/about/program-components/ communications-and-engagement 113. Bjugn R. & Casati B. (2012). Stakeholder analysis: a useful tool for biobank planning. Biopreserv Biobank, 10(3), 239–244. 114. Manafo E., Petermann L., Vandall-Walker V., & Mason-Lai P. (2018). Patient and public engagement in priority setting: a systematic rapid review of the literature. PLoS One, 13(3), e0193579. 115. International Association for Public Participation. (2018). Advancing the Practice of Public Participation. https://www.iap2.org/ 116. Kim S. Y., Wall I. F., Stanczyk A., & De Vries R. (2009). Assessing the public’s views in research ethics controversies: deliberative democracy and bioethics as natural allies. J Empir Res Hum Res Ethics, 4(4), 3–16. 117. De Vries R., Stanczyk A. E., Ryan K. A., & Kim S. Y. (2011). A framework for assessing the quality of democratic deliberation: enhancing deliberation as a tool for bioethics. J Empir Res Hum Res Ethics, 6(3), 3–17. 118. Goold S. D., Damschroder L. J., & Baum N. (2007) Deliberative procedures in bioethics. In: Empirical Methods for Bioethics: A Primer. Emerald Group Publishing Limited, 183–201. 119. O’Doherty K. C., Hawkins A. K., & Burgess M. M. (2012). Involving citizens in the ethics of biobank research: informing institutional policy through structured public deliberation. Soc Sci Med, 75(9), 1604–1611.
Genetic Epidemiology 195 120. Ryan K. A., De Vries R. G., Uhlmann W. R., et al. (2017). Public’s views toward return of secondary results in genomic sequencing: it’s (almost) all about the choice. J Genet Couns, 26(6), 1197–1212. 121. Weijer C. (1999). Protecting communities in research: philosophical and pragmatic challenges. Camb Q Healthc Ethics, 8(4), 501–513. 122. Sharp R. R. & Foster M. W. (2000). Involving study populations in the review of genetic research. J Law Med Ethics, 28(1), 41–51, 43. 123. Leung M. W., Yen I. H., & Minkler M. (2004). Community-based participatory research: a promising approach for increasing epidemiology’s relevance in the 21st century. Int J Epidemiol, 33(3), 499–506. 124. Israel B. A., Parker E. A., Rowe Z., et al. (2005). Community-based participatory research: lessons learned from the Centers for Children’s Environmental Health and Disease Prevention Research. Environ Health Perspect, 113(10), 1463–1471. 125. Foster M. W., Sharp R. R., Freeman W. L., et al. (1999). The role of community review in evaluating the risks of human genetic variation research. Am J Hum Genet, 64(6), 1719–1727. 126. Vat L. E., Ryan D., & Etchegary H. (2017). Recruiting patients as partners in health research: a qualitative descriptive study. Res Involv Engagem, 3(1), 15. 127. Boaz A., Hanney S., Borst R., et al. (2018). How to engage stakeholders in research: design principles to support improvement. Health Res Policy Syst, 16(1), 60. 128. Lemke A. A. & Harris-Wai J. N. (2015). Stakeholder engagement in policy development: challenges and opportunities for human genomics. Genet Med, 17, 949. 129. Green L. W. & Mercer S. L. (2001). Can public health researchers and agencies reconcile the push from funding bodies and the pull from communities? Am J Public Health, 91(12), 1926–1929. 130. Minkler M. (2004). Ethical challenges for the “outside” researcher in community- based participatory research. Health Educ Behav, 31(6), 684–697. 131. James R., Tsosie R., Sahota P., et al. (2014). Exploring pathways to trust: a tribal perspective on data sharing. Genet Med, 16(11), 820–826. 132. Edwards K. A., Terry S. F., Gold D., et al. (2016). Realizing our potential in biobanking: disease advocacy organizations enliven translational research. Biopreserv Biobank, 14(4), 314–318. 133. Hausman D. (2008). Protecting groups from genetic research. Bioethics, 22(3), 157–165. 134. Claw K. G., Anderson M. Z., Begay R. L., et al. (2018). A framework for enhancing ethical genomic research with Indigenous communities. Nature Commun, 9(1), 2957. 135. Boyer A. P., Fair A. M., Joosten Y. A., et al. (2018). A multilevel approach to stakeholder engagement in the formulation of a clinical data research network. Med Care, 56(10 Suppl 1), S22–S26. 136. Kim K. K., Khodyakov D., Marie K., et al. (2018). A novel stakeholder engagement approach for patient-centered outcomes research. Med Care, 56(10 Suppl 1), S41–S47. 137. Simon C. M., Newbury E., & L’heureux J. (2011). Protecting participants, promoting progress: public perspectives on community advisory boards (CABs) in biobanking. J Empir Res Hum Res Ethics, 6(3), 19–30.
9
Ethics, Epidemiology, and Changing Perspectives on AIDS Carol Levine
In developed countries AIDS is often described as a chronic condition that can be managed by various combinations of drugs. The risk of HIV infection can be reduced significantly by pre-exposure prophylaxis (PrEP), a daily regimen of pills. The current state of HIV/AIDS care fits Lewis Thomas's 1971 definition of a "halfway technology," an intermediate stage between "nontechnology," which is supportive medical care that does little to affect the course of disease, and "high" or "transformative technology," which depends on advances in basic sciences. Transformative technology in his view included immunization, chemotherapy, and antibiotics, and halfway technologies included dialysis, organ transplants, and mechanical ventilation.1 For people at risk of HIV or ill with AIDS, a halfway technology is better than an invariably lethal disease. But scientists have not yet created a transformative technology—a cure or a vaccine. The theme of this chapter is that AIDS—past, present, and future—is as much a concern in the twenty-first century as it was forty years ago. This is no time for complacency. In many areas of the world, and in parts of developed countries such as the United States, even this halfway technology is not widely available to all who could benefit from it.2,3 And equally worrisome, prevention is lagging behind treatment.4 The AIDS epidemic is ravaging African countries such as South Africa and Nigeria,5 spreading in parts of Asia such as Thailand and the Philippines, in Eastern Europe in countries like Russia6 and Ukraine, and in Caribbean and Central American countries. Even in the United States, southern states are experiencing new outbreaks of AIDS.7 In AIDS as in so many other areas of health care there are ethnic and racial disparities; African Americans, who make up 12% of the U.S. population, accounted for 43% of all people living with diagnosed and undiagnosed HIV in 2014.8 In fiscal year 2020 the Centers for Disease Control and Prevention (CDC) began a ten-year campaign called, optimistically, "Ending the HIV Epidemic: A Plan for America." The goal is reducing new HIV infections to less than three thousand per year by 2030 with an initial focus on the forty-eight counties (plus
Washington, D.C., and San Juan, Puerto Rico) and the southern rural states where HIV transmission occurs most frequently.9 The key strategies are:
1. Diagnose all people with HIV as early as possible.
2. Treat people with HIV rapidly and effectively to reach sustained viral suppression.
3. Prevent new HIV transmissions by using proven interventions, including PrEP and syringe services programs (SSPs).
4. Respond quickly to potential HIV outbreaks to get needed prevention and treatment services to people who need them.
There are considerable barriers to implementing these strategies and reaching the goal of making HIV infection "rare." Stigma and injustices persist, and financial and other barriers reduce access to care. The still-evolving opioid epidemic in the United States, although different in origin and affected populations, presents similar problems of stigma, criminalization, and social isolation. The ethical issues that perplexed scientists, ethicists, and lawmakers in the 1980s and 1990s continue in new areas, and new ethical issues have emerged in a world now driven by instant communication, genetic advances, social media, and massive data collection. It is appropriate to acknowledge the advances in prevention and treatment that are available (but only to some), and it is also instructive to look back to see how this epidemic unfolded and why it persists.
Ethics and Epidemiology in the Early Years of HIV/AIDS
In the early 1980s AIDS was unknown. It was epidemiologists who gave this new disease its name and defined its modes of transmission, and who now monitor its spread, natural history, and the effect of public health and clinical interventions. Each step and misstep in this process has had far-reaching consequences, some of which will be described in this chapter. (More specific accounts of the early years of the epidemic and the role played by epidemiologists have been chronicled elsewhere.10–12) Epidemiology deals with disease processes and trends in populations. But, as Gerald Oppenheimer points out, "Epidemiology, unlike virology, has a strong social dimension."13 His assertion that epidemiology explicitly incorporates perceptions of a population's social relations, behavioral patterns, and experiences into its explanations may be an overstatement, but it is certainly true that epidemiology frequently encounters such social dimensions. When, as in the case of AIDS, those perceptions involve a potentially lethal disease,
stigmatized behaviors such as drug use and homosexual sex, and a suspicious and fearful public, the potential for moral problems soars. After some general considerations relating to ethics and epidemiology, this chapter focuses on four examples, mostly drawn from my own experience and research, that illustrate this interrelationship in the case of AIDS. They are presented in roughly chronological order. The first example describes the conflict in the early years of the epidemic between the need for valid data about a new disease of unknown etiology and subjects' fears of confidentiality breaches. The second concerns the conflict between scientific definitions of the term "disease" and the economic and regulatory uses of these definitions, as well as their impact on individuals. The third pits the epidemiological value of anonymous serological surveillance techniques against the clinical value of identifying seropositivity in individuals as it played out in the case of pregnant women and newborns. The fourth example concerns the global impact of HIV/AIDS on children whose parents are ill or dead and the ethical implications of various definitions of orphanhood. These examples are not comprehensive, nor are they the end of the story. Even now we are learning more about the origins of HIV and its emergence in the United States. The opioid epidemic, which has led to an increase in heroin and fentanyl use because these drugs are cheaper and more readily available than doctor-prescribed opiates, is a new source of HIV infection as well as hepatitis B and C. The National Institute on Drug Abuse explains the connection: "Heroin use increases the risk of being exposed to HIV, viral hepatitis, and other infectious agents through contact with infected blood or body fluids (e.g., semen, saliva) that results from the sharing of syringes and injection paraphernalia that have been used by infected individuals or through unprotected sexual contact with an infected person."14 The examples described in this chapter demonstrate that the relationship between ethics and epidemiology evolves over time. Ethical principles are timeless, but their emphasis and interpretation may vary as new knowledge becomes available. Questions that appear to be resolved may reappear in a different context, as occurred in discussions of HIV testing. Questions that did not appear at the beginning—such as the just allocation of resources—emerged when therapies became available. Populations once thought immune—non–drug-using women—are vulnerable. Another theme that resonates throughout these examples: before AIDS, epidemiology was mostly a concern for specialists who defined disease, identified risk factors, and established potential associations. After AIDS, and for future epidemics, advocates for the affected populations will insist on being part of the process. For epidemiologists and ethicists alike, their presence can be both helpful and challenging.
Ethical Issues in Epidemiological Studies
Although all the ethical principles that govern research apply to both clinical and noninterventional epidemiological research, the weight given to one principle or another varies according to the context. Clinical research focuses on the individual as part of a study population that may benefit from the findings. Ethical principles that predominate in clinical research also focus on individuals: respect for autonomous decision-making, beneficence (enhancing the welfare of the individual), and nonmaleficence (avoiding harm to individuals). These principles weigh heavily in considerations of the ratio of risks to benefits in medical decision-making, informed consent, and privacy and protection of confidentiality. Epidemiological studies, on the other hand, focus on populations but may involve identified individuals. The predominant ethical values include the importance of knowledge to be gained and the potential benefit to groups (future patients or society in general). Questions of justice—fairness between the selection of subjects who bear the burdens of research and the eventual recipients of any benefits derived from the research—also arise in epidemiological studies. In some epidemiological studies (for example, studies of environmental toxins), subjects want to be included and do not perceive participation as a burden or a risk; they may in fact perceive it as a benefit. Some of the ethical problems common in epidemiology are invasion of privacy, violation of confidentiality, conflict of interest, and tension between a researcher's values and those of the communities studied.15 Clinical trials and noninterventional epidemiological research present different types of risks or harms. Risks in clinical trials typically involve adverse physical effects or, less commonly, psychological harm. But, as Alexander Morgan Capron16 points out:
Epidemiologic research can also involve the risk of harm, but it is typically of a different sort. Since, in most cases, investigators do not physically intervene with the subject and do not even have direct contact of any sort, physical and psychological injuries are unlikely. Yet other sorts of harm may occur. First, if data dealing with sensitive matters—either raw data or results—can be linked to subjects, they may suffer social harm, such as ostracism or loss of employment. Second, even when individuals cannot be linked to information that is embarrassing (or worse), findings that paint an adverse picture of an entire population may eventuate in harm to that group, either directly or because of the adoption of laws or policies that have a negative impact on the welfare of group members.
Furthermore, Capron continues, even when subjects are not physically or psychologically harmed, they may be wronged—for instance, by invasion of privacy without consent or by treating people solely as a means to an end. Such possibilities, he rightly maintains, explain why ethical guidelines are important, even if the risk of direct harm to subjects is negligible. For these reasons and others, epidemiological studies are governed by special regulatory requirements, although some types of research are exempt from institutional review board (IRB) oversight. Examples include research involving the collection or study of existing data, records, pathology specimens, or diagnostic specimens as long as there are no identifiers linking the data to the subjects. IRBs accustomed to reviewing clinical studies may have difficulty in devising appropriate standards for epidemiological studies.17 The initial attempts to track the then-unknown disease AIDS, described in the following section, illustrate some of these problems. Population health is a relatively new framework for studying disease that links epidemiology and social determinants of health. Keyes and Galea offer nine foundational principles of population health science. Especially relevant in this context are these two:
• Large benefits in population health may not improve the lives of all individuals.
• Efforts to improve overall population health may be a disadvantage to some groups: whether equity or efficiency is preferable is a matter of values.18
The history of the AIDS epidemic provides vivid examples of how these principles arose and were resolved, at least temporarily.
Confidentiality and the Wary Subject
In 1981 Michael Gottlieb, a Los Angeles physician, reported to the CDC the unexpected occurrence of the rare Pneumocystis carinii pneumonia (PCP) in five previously healthy homosexual men he had treated in 1980 and 1981. The CDC reported these cases in the June 5, 1981, issue of Morbidity and Mortality Weekly Report (MMWR).19 An editorial note suggested that some aspect of a "homosexual lifestyle" might be involved. Thus began the official story of the AIDS epidemic. Soon after, a second MMWR report described a finding of Kaposi's sarcoma, a cancer rarely seen in the United States, in twenty-six gay men in California and New York City treated in the previous thirty months.20 Not part of the official record but certainly part of public perception is the idea that a French-Canadian flight attendant named Gaetan Dugas—dubbed "Patient
Zero"—started the epidemic when he had sex with many men in California and New York in the early 1980s. Recent genetic analyses of Dugas's blood and that of men infected with hepatitis B in the late 1970s have shown that HIV was circulating in the United States as early as 1971 and was not related to Dugas. The term "Patient Zero" was itself a misnomer; Dugas was actually identified as Patient O (for "Out[side] of California").21,22 In mid-1981 the CDC formed a surveillance task force, which contacted state and local health departments to identify suspected cases of what was soon to be called AIDS. (An earlier designation, Gay-Related Immunodeficiency, or GRID, was used until late 1982.23) Although the case of a heterosexual woman with AIDS had been reported to the CDC by August 1981, and a New York City investigation of eleven men with PCP included seven drug users, five of them heterosexual, the focus remained on men who had sex with men as the defining characteristic of the population at risk. By 1983 several investigations were under way involving gay men as subjects. To gather valid data on the sexual, drug-using, and other behavior of gay men, epidemiologists sought to obtain highly detailed and accurate descriptions of these aspects of subjects' lives. The researchers especially focused on the numbers of sex partners—the "promiscuity" theory—and on the use of amyl nitrite ("poppers") during sex. In these interviews the subjects might have revealed information about homosexual behaviors, which were illegal in many states, sometimes with severe penalties, such as imprisonment for five years to life (Idaho), twenty years (Oklahoma), and fifteen years (Michigan). In its decision in Lawrence v. Texas (539 U.S. 558 [2003]), the U.S. Supreme Court held that sodomy laws were unconstitutional and unenforceable; however, as late as 2014, fourteen states and Puerto Rico had not repealed these laws or had not revised them appropriately. Even if not illegal, homosexuality was widely stigmatized. Subjects might also have revealed drug use, criminal activities such as prostitution, or illegal entry into the United States. They also might have named other individuals involved in these activities. Many subjects, unwilling to trust government researchers with such potentially damaging information, either refused to cooperate or gave inaccurate or incomplete answers. While some epidemiologists were sensitive to the subjects' concerns, others failed to see why they should treat information about this disease, or the people who had it, with any special protections. Public health departments were proud of their record of maintaining confidentiality of information about other diseases. Nevertheless, in some instances at least, internal procedures were less than strict. Case folders with identifying names were sometimes left on desks or given to other researchers or agency employees. Local health departments reported cases with identifiers to the CDC. At that time the modes of transmission of this deadly disease were still under investigation. Against a background
in which police departments, fire departments, and others were calling for lists of people with AIDS for self-protection, and against a barrage of demands that people with AIDS be isolated, the subjects' concerns were understandable. At this point ethicists became involved in the issue. Early in 1983, a physician treating gay men with AIDS and a cancer researcher asked the staff of the Hastings Center, then located in Hastings-on-Hudson, New York, to join them in stressing the importance of confidentiality in AIDS research. (I was on the staff of the Center at that time, responsible for work on ethics in research.) These advocates specifically sought to bolster their views, which were regarded by some health department officials as biased, with the professional and independent standing of ethicists. The Center's staff subsequently decided to convene a working group to develop guidelines on this subject. The proposal to the Charles A. Dana Foundation, which funded the project, stated:
There is an inherent tension between the needs of researchers who want access to a maximum of information with a minimum of hindrance and the desires of AIDS patients who want sensitive and identifiable information about themselves given the maximum protection and the most restricted distribution. This tension need not pose an insuperable difficulty. While the legitimate interests of researchers and patients can be accommodated, it will require a serious examination of the contexts in which disclosure takes place, the purposes for which the information may be used, and the people who will have access to it.
The proposal warned that "the future integrity of epidemiological research on AIDS" depended on reaching a mutual understanding. With adversarial positions hardening, the staff judged that the composition of the working group would be a determining factor in the acceptance of the guidelines. No matter how ethically justifiable or cogent the guidelines were, if they did not have the support of the parties whose interests were at stake, they would have no impact. This was a real-world situation fraught with drama. Many gay men believed that quarantine in "concentration camps" was a distinct possibility. They were, they believed, facing a hostile, angry, and irrational public. Some researchers and health department officials, for their part, feared a rapidly spreading and uncontrollable epidemic. These perceptions made the issues emotionally explosive. For these reasons the working group included government and academic researchers, epidemiologists, lawyers, privacy specialists, ethicists, physicians, and representatives of gay and AIDS organizations. Most of the professionals had never talked with the subject representatives of the target population, who were extremely wary of researchers and government officials of any kind. Interestingly, the way epidemiologists had framed the epidemic to that date helped determine
the composition of the group. There were no participants representing drug users or women, because these groups or their behaviors had not been formally linked with the disease. There were, however, representatives of the Haitian community, which had been officially termed a "risk group" (a designation that was protested and later dropped).24 The overall problem addressed in the guidelines was: What procedures and policies will both protect the privacy of research subjects and enable research to proceed expeditiously? In the grant proposal the ethical challenge was described as "striking a balance between the principle of respect for the autonomy of persons (which requires that individuals should be treated as autonomous agents who have the right to control their own destinies) and the pursuit of the common good (which requires maximizing possible benefits as well as minimizing possible harms, to society as well as to individuals)." Despite their different perspectives, members of the working group reached a consensus on all but one of the proposed guidelines. These covered descriptive issues such as what identifiers are necessary, when they are needed, and what precautions should be taken to protect identifiable data; who should and who should not have access to personally identifiable information; the rather severe limitations of relevant legal protections; steps that should be taken to enhance the legal protections for both research subjects and researchers whose data might be subpoenaed; standards for IRBs; and questions of consent. The single issue on which the working group could not agree was use of the Social Security number (SSN) as an identifier. The guidelines pointed out that SSNs offer the greatest potential for matching datasets but that they also pose the greatest threat to confidentiality: "Some researchers believe that Social Security numbers are indispensable in longitudinal studies, where it is important to be able to recognize that different sets of data have come from a single person. Those who oppose the use of Social Security numbers stress that these numbers are assigned and held by the federal government. Potential misuse of information by government agencies is one of the strongest fears expressed by subjects in AIDS research" (p. 3).25 This question recurred in 1994 in the debate over national health reform, and in 1996 the Health Insurance Portability and Accountability Act (HIPAA) addressed the increasing concern about the privacy of data collected electronically under a regional or national system. Lawrence O. Gostin et al. assert that "perhaps the most critical single decision regarding privacy and security in a reformed health care system is whether to use the Social Security Number . . . as the individual identifier."26 Pointing out that the SSN currently is not a completely reliable identifier, and that this identifier is used extensively for a variety of non–health-related purposes, Gostin et al. instead recommended a personal health security number, which would have no other purpose and would be essentially as
private as a health record itself. Despite the importance of this issue, at the time it seemed a minor matter in the face of the overwhelming consensus on the other issues addressed in the confidentiality guidelines written by the AIDS working group. In recognition of the risks of SSN identifiers, Medicare recently changed its beneficiary identifier from the SSN to a random number to prevent fraudulent use of the account. This did not deter scammers, who immediately found new ways to get older adults to give them personal information.27 More recently, breaches of privacy from computer hackers and inadequate security of data files have created problems of a different scale. Even so mundane an activity as sending mail can be hazardous. In 2017 twelve thousand Aetna-insured patients with HIV were mailed letters in which, for an unknown percentage, their HIV status was clearly visible on the envelope.28 The guidelines had no official weight but were cited repeatedly during negotiations over epidemiological research. The process of their formulation also set some important precedents. At this early stage of the epidemic, ethics was firmly established as integral to public health decision-making. Affected communities were involved in recommendations about their interests. Finally, consensus could be achieved on most thorny issues. Even when agreement could not be reached, the dissenting positions could be articulated and clarified. Since then, this collaborative model has been used many times, with varying success, in developing AIDS policy and programs. A case study describing the formation and activities of the DC Commission of Public Health AIDS Advisory Committee stresses the importance of including representatives of the target population.29 The rallying cry of disability activists—"Nothing about me without me"—has at least some roots in AIDS activism. On a less confrontational level, there is scarcely a public health or disease group that does not have a "consumer" representative. In hindsight, the process of inclusion of the affected population in program and policy discussions may well be as important as the product of the discussions and as lasting a contribution to ethics and epidemiology as other, more technical advances.
Defining HIV and AIDS
Ethical dimensions of epidemiology come into play even before the design and implementation of specific studies. Disease classification is the process of defining and describing diseases in various categories, such as affected body part, impact on bodily function, infectious agent, stage of disease, or pathology. Disease surveillance is the collection and monitoring of data about the prevalence of disease classifications so that prevention and control efforts can be targeted appropriately. Classification systems and surveillance definitions are
Changing Perspectives on AIDS 205 ordinarily tools for epidemiologists and clinicians, not matters for political debate and patient advocacy. It is hard to imagine a street demonstration protesting the classification system for stages of colon cancer. But when it comes to AIDS, nothing is ordinary. Surveillance case definitions enable public health officials to count cases consistently across reporting jurisdictions. These definitions are not intended to be used by health care providers for making a clinical diagnosis or determining how to meet an individual patient’s health needs. As an example of how terminology evolves, CDC abandoned the term “homosexual lifestyle” and now uses “male- to-male sexual contact” in its surveillance data as the primary risk for HIV/ AIDS so that it is the behavior that matters, not the way the person is identified by others. Even the commonly used category “men who have sex with men” or MSM may not be congruent with homosexuality in some societies. In South Africa, for example, in determining how to target populations at risk for HIV infection with PrEP programs, the category of “MSM” may be problematic. Some men who have sex with men are considered female, either by themselves or by others.30 In the absence of a test to identify HIV, the CDC’s initial surveillance case definition required the diagnosis of one of eleven opportunistic infections, or of two cancers that were considered “at least moderately predictive of a defect in cell-mediated immunity, occurring in a person with no known cause for diminished resistance to that disease” (p. 508).31 By 1983, all U.S. states required name reporting of cases of AIDS but not of HIV infection. Later, the CDC developed an alternative, comprehensive classification system for HIV infection for adults and adolescents, with a separate classification system for children. The classification system covered the broad spectrum of HIV disease, from initial infection, asymptomatic infection, and persistent generalized lymphadenopathy through serious opportunistic infections and cancers. The primary purpose was to provide a framework for categorizing HIV-related morbidity and immunosuppression. In the early 1990s controversy erupted over the CDC’s proposed revision of the existing surveillance case definition of AIDS.32 The surveillance case definition was the primary focus of the controversy discussed in this section, although the classification system was also involved because it relied on the surveillance case definition for criteria for the end stage of AIDS. Public health officials, researchers, clinicians, hospital administrators, disability specialists, insurance administrators, health economists, legislators, social workers, psychologists, policymakers, and the media all used the CDC’s surveillance case definition of AIDS. It influenced the way the epidemic was perceived, managed, and funded. An AIDS diagnosis triggered a series of benefits and services generally not available to a person with HIV infection. It is not surprising,
206 Ethics and Epidemiology then, that the CDC’s surveillance case definition of AIDS transcended epidemiology to become a symbol for the inadequacies of the U.S. government’s response to the HIV epidemic, particularly for the failure to address adequately the needs of HIV-infected women. Since the first version in 1981, the CDC’s surveillance case definition of AIDS has been changed five times, in 1985, 1987, 1993, 1999, and most recently 2014.33 In 1984, when HIV-1 was identified, various laboratory tests were developed to measure and confirm the presence of HIV antibodies. Using these tests as diagnostic indicators, the CDC broadened the surveillance case definition of AIDS in 1985 to include additional opportunistic infections or cancers that would be indicative of AIDS in persons with positive HIV antibody test results.34 The surveillance case definition was further expanded in 1987 to include several severe nonmalignant HIV-associated conditions, including HIV wasting syndrome and neurological manifestations, and to permit “presumptive” diagnoses, such as diagnoses of AIDS based on the presence of one of seven indicator diseases without confirmatory laboratory evidence of HIV.35 The most recent revision of the surveillance case definition for HIV infection, published in 2014, combined definitions for all age groups and added criteria recognizing early HIV infection, among other changes. In 1991 the CDC proposed revising the surveillance case definition and the disease classification system. Under this scheme there would be three categories of HIV disease (asymptomatic, symptomatic, and AIDS). There would also be three categorical levels of CD4+ cells (also called T cells) per cubic millimeter of blood, which would guide clinicians in recommending therapeutic actions in disease management. Because declining CD4+ cell counts had been shown to be reasonably reliable indicators of disease progression, individuals with less than 200 CD4+ cells would be considered to have AIDS, regardless of their symptoms. No new opportunistic infections or other conditions would be added to the already long list of twenty-three AIDS-defining conditions in the surveillance case definition. This proposal was greeted with intense and often acrimonious debate. Virtually everyone agreed that the surveillance case definition should be revised, but many disagreed with the CDC’s approach. Advocates argued that the “outdated” surveillance case definition artificially lowered the number of cases of AIDS, which led to inadequate federal funding and attention. Officials in states with large numbers of women and drug users with HIV-related illnesses, the groups most likely to fall outside the CDC’s 1987 surveillance case definition for AIDS, were concerned that funding formulas based on case reports of CDC- defined AIDS were inequitable. Women’s advocates claimed that many women with HIV-related illnesses were improperly diagnosed and treated because the surveillance case definition was developed from data on clinical manifestations
Changing Perspectives on AIDS 207 in gay men. Children at risk of HIV infection from maternal transmission were also underdiagnosed. Moreover, community-based organizations that provided services to individuals, especially women, who were disabled by HIV illness but did not meet the criteria for AIDS found it difficult to obtain various federal, state, and local entitlements and benefits for their clients. A full-page advertisement in the New York Times (June 19, 1991), initiated by the AIDS Coalition to Unleash Power (ACT UP) and signed by over two hundred individuals and organizations, protested: “Women don’t get AIDS. They just die from it.” One ad hoc group went even further. Members of this group handcuffed themselves to participants who represented AIDS organizations at a meeting at the offices of the American Public Health Association held with CDC officials to discuss the controversy. (I was at the meeting but declined to be handcuffed; I listened to the conversation from the hallway.) During the several hours that followed, the protestors argued that the meeting should not have been held, that AIDS advocates by their very presence were betraying their constituents, that the proposed revisions failed to include conditions specific to women, and that the government was to blame for just about everything connected to the epidemic. The CDC claimed that there was insufficient evidence to include specific gynecological conditions. By strict research standards, the evidence was indeed scant, but the studies that would have provided more adequate evidence one way or the other had not been done or had not been started early enough in the epidemic to provide reliable data. In the end the protestors won a small victory: the CDC’s final revision of the HIV infection classification system and the surveillance case definition for AIDS contained all its original proposals, but it also included one female-specific condition (invasive cervical cancer) as well as pulmonary tuberculosis and recurrent pneumonia.36 As a result of the expanded surveillance case definition, in 1993 reported AIDS cases increased 111% over cases reported in 1992 (103,500 compared with 49,016),37 significantly surpassing the 75% rise the CDC had predicted.21 Of cases reported in 1993, 54% were based on conditions added to the definition in that year, and the increase was greater among females (151%) than males (105%). The largest increases were among racial/ethnic minorities, adolescents and young adults, and cases attributed to heterosexual transmission. As expected, the rate of increase declined in subsequent reporting periods. In 1999 the CDC recommended that all states and territories conduct case surveillance for HIV, as an extension of their AIDS surveillance.38 The new definition combined reporting criteria for HIV and AIDS into a single case definition and reflected new laboratory techniques that were not available in 1993. With the advent of highly active antiretroviral therapy (HAART), which slowed the progression of HIV to AIDS, it became especially important to identify HIV- infected individuals early so that they could be offered treatment and so that the
208 Ethics and Epidemiology epidemic’s course could be monitored more accurately. The CDC also pointed to the opportunities for public health prevention efforts and targeting of resources. By 2018, all states and territories had instituted reporting by name; these results are forwarded to CDC without identifying information.39 The controversy over the 1993 case definition demonstrated that surveillance case definitions, disease classification systems, and the CDC’s role in both were poorly comprehended. The public’s understanding of the current state of the epidemic suffered from the initial, almost single-minded focus on gay men. This controversy raised the profile of women in the epidemic, which will be discussed in the next section. The controversy also showed how secondary uses of surveillance information—in this case, formulas for funding or benefits—can have a far greater impact than its primary epidemiological purpose. The borderline between science and policy is often ambiguous, and those who cross it should be clear about what lies on the other side.
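To make the mechanics of the expanded definition concrete, the short sketch below encodes the CD4-based criterion described above and checks the arithmetic of the reported jump in cases. It is illustrative only: the boundaries of the two higher CD4 strata are taken from the published 1993 system rather than from anything quoted in this chapter, and the function names and thresholds are not drawn from any CDC software.

```python
# A minimal sketch of the CD4-based criterion described above: under the proposal
# that became the 1993 expanded definition, a CD4+ count below 200 cells/uL was
# AIDS-defining regardless of symptoms. The two higher strata follow the published
# 1993 system and are assumptions beyond the text of this chapter.

def cd4_stratum(cd4_count: int) -> str:
    """Return the CD4+ stratum used in the revised classification system."""
    if cd4_count >= 500:
        return "category 1 (>=500 cells/uL)"
    if cd4_count >= 200:
        return "category 2 (200-499 cells/uL)"
    return "category 3 (<200 cells/uL)"

def meets_expanded_aids_criterion(cd4_count: int, has_indicator_condition: bool) -> bool:
    """AIDS if an indicator condition is present or the CD4+ count falls below 200."""
    return has_indicator_condition or cd4_count < 200

print(cd4_stratum(180), meets_expanded_aids_criterion(180, False))  # category 3, True

# The reported impact of the expanded definition, using the figures quoted above.
cases_1992, cases_1993 = 49_016, 103_500
increase = (cases_1993 - cases_1992) / cases_1992
print(f"{increase:.0%} increase over 1992 (the CDC had predicted about 75%)")
```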
HIV Surveillance and Mother-to-Child Transmission Women with HIV/AIDS were neglected at the outset of the epidemic. When HIV/AIDS was identified in infants and young children—the media called them “AIDS babies”—the mothers were considered responsible. The public saw hemophiliacs, tainted-blood-transfusion recipients, and newborns as “innocent victims” but HIV-infected mothers who gave birth to these babies, along with gay men and injection drug users, as “guilty.” Two issues were most contentious: whether testing in clinical care should be mandatory, voluntary, or routine; and whether blinded seroprevalence surveys should be unblinded to inform those who were HIV-positive of their status. These dilemmas have implications beyond mother-to-child transmission.
HIV Testing Discussions about HIV counseling and testing policies for pregnant women and newborns have taken place in the context of a broader debate: Should testing be voluntary with informed consent, mandatory (legally required), or routine (usually interpreted to mean that clinicians will test for HIV as they do for other conditions unless patients refuse or “opt out”)? With some exceptions of mandatory screening (of blood donors, military personnel, and immigrants, for example), the debate appeared to be resolved in favor of voluntary testing. (As a result of advances in HIV testing, in 2015 the Food and Drug Administration revised the “indefinite deferrals” limitation for men who have sex with men to a
Changing Perspectives on AIDS 209 behavior standard; that is, blood donations are permitted as long as the donor has not had sex with men within the past twelve months. This is the same standard used for people whose risk for HIV infection includes blood transfusion and accidental exposures.40) The early HIV tests could not determine whether a newborn was truly infected or carrying maternal antibodies that would disappear. (This is also true of hepatitis C infection and in the opioid epidemic, where newborns with neonatal abstinence syndrome undergo withdrawal symptoms but are not addicted to the drugs their mother used.) Only when more sophisticated tests were introduced in the mid-1990s was it possible to make accurate diagnoses. With the advent of effective therapy and faster, more accurate laboratory techniques, many public health officials and physicians now advocate routine testing as being cost- effective and an important way to identify and treat HIV-infected people who do not know their status.41–44 In the early 1990s, however, all the major organizations and groups that specifically examined testing policies for pregnant women concluded that voluntary screening with informed consent was the course most likely to produce the desired effects of education, prevention, and appropriate medical and social service follow-up. In its 1991 report, for example, the Institute of Medicine (IOM) asserted that “individuals (or their legally recognized representatives) should have the right to consent to or refuse HIV testing (except when such testing is conducted anonymously for epidemiologic purposes).” Opposing mandatory newborn or prenatal screening programs, the IOM found “no compelling evidence that women and children should constitute an exception to this principle.”45 Similarly, a working group from Johns Hopkins University and Georgetown University rejected mandatory screening and recommended a range of voluntary policies.46 This consensus began to erode, however, following announcement in February 1994 of the results of AIDS Clinical Trial Group Study 076, which showed that transmission from mother to fetus was reduced dramatically (from 25% to 4.3%) in a group of pregnant women treated with zidovudine (AZT).47 Pediatricians were also seeing the benefits for HIV-infected children of antiretroviral therapy and prevention of opportunistic infections.48 Finally, the risk of HIV transmission from breast milk, while small, can be avoided if an HIV-infected mother does not breastfeed. These benefits, while no panacea, had a significant impact on the ethics of the screening calculus. In April 2003, Julie Gerberding and Harold Jaffe of the CDC sent a “Dear Colleague” letter recommending that “clinicians routinely screen all pregnant women for HIV infection, using an ‘opt out’ approach, and that jurisdictions with statutory barriers to such routine prenatal screening consider revising them.”49 In November 2004 the American College of Obstetrics and Gynecology similarly recommended that “Pregnant women
210 Ethics and Epidemiology universally should be tested as part of the routine battery of prenatal blood tests unless they decline the test” (p. 1119).50 In one of the epidemic’s few clear-cut successes, mother-to-child HIV transmission has been drastically reduced in the United States and Europe. The CDC reported only 99 cases in 2016 and a 32% decrease in cases from 2011 to 2015.51 In Africa, where the prevalence of HIV infection is very high and the availability of antiretroviral treatment for pregnant women is low but increasing, HIV mother-to-child transmission remains a serious problem. Major medical centers that once were home to “boarder babies” now seldom see a new case of an HIV-infected newborn. Although only three states (New York, Connecticut, and Illinois) have mandatory newborn HIV testing, counseling and voluntary testing of pregnant women and routine testing of newborns if the mother’s status is unknown have become standard in most jurisdictions. In a paradoxical result, a pregnant woman may refuse testing for herself but not for her baby; the baby’s test, while inconclusive for the newborn, definitively reveals the mother’s HIV status. Her refusal to learn her HIV status only postpones being told. Obtaining consent for testing and counseling, however, is still valuable in encouraging follow-up.52,53 The availability of a highly successful method of preventing transmission changed the public health and clinical emphasis from one of voluntary testing to required or at least vigorously recommended testing. It also changed the emphasis for women, the majority of whom would not want to transmit HIV to their babies. It did not eliminate, however, the need for counseling and services.
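One way to convey the size of the ACTG 076 effect cited earlier in this section is in terms of absolute and relative risk reduction; the arithmetic below simply restates the 25% and 4.3% transmission rates quoted above and is not a re-analysis of the trial.

```latex
% Back-of-the-envelope restatement of the quoted ACTG 076 transmission rates.
\[
\text{absolute reduction} = 0.25 - 0.043 = 0.207, \qquad
\text{relative reduction} = \frac{0.25 - 0.043}{0.25} \approx 0.83 .
\]
```

That is, on the quoted figures, zidovudine was associated with a drop of about 21 percentage points, or roughly an 83% relative reduction, in mother-to-child transmission in that trial.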
Blinded Seroprevalence Surveys To estimate the prevalence of HIV infection in sentinel areas and groups throughout the country, in 1988 the CDC developed what it called a “family” of serological surveys, which were anonymous unlinked HIV surveys in sentinel sites, including sexually transmitted disease clinics and drug treatment centers, in selected metropolitan areas.54 The CDC also funded blinded serosurveys of newborns to estimate HIV prevalence among pregnant women; New York State initiated this testing in 1987. In June 1999, the New York State Department of Health proposed to “modify its on-going blind newborn HIV antibody testing program to permit voluntary notification of mothers whose infants test positive.” Under the proposal, new mothers would have been given the option of learning their baby’s test results (and therefore their own HIV status). The potential conflict between the values that predominate in epidemiological studies and those that weigh most heavily in clinical practice became real in the case of HIV seroprevalence surveys conducted in newborns. Under these
Changing Perspectives on AIDS 211 circumstances the importance of obtaining accurate knowledge about the course of the epidemic, in a way that does not present any risk to individuals, came into conflict with the importance of identifying and treating individual patients. This was not an instance of one goal being more valuable than the other; rather, it was a case of one methodology—blinded seroprevalence surveys—being unable to serve both the goal of obtaining accurate knowledge and identifying named individuals. Several objections were raised to the proposal by community-based health care providers and others. These objections primarily concerned (1) the confusing and psychologically traumatic impact on new mothers of learning about their own HIV infection and the possible infection of their babies at a time when they are physically and emotionally vulnerable; (2) the potential for manipulation or coercion by health care providers, who might not understand or accept a mother’s unwillingness to learn the test results at that time; and (3) the lack of health care and support services for women and their children once identified as seropositive. Because of these objections, the state Department of Health agreed to postpone the implementation of this proposal in favor of a much more aggressive but still voluntary program, called the Obstetrical HIV Counseling/Testing/Care Initiative. Instead of a “take it or leave it” approach to testing, practitioners in this program advocated it. It was designed to increase rates of voluntary testing, with counseling at twenty-four sites, to women who had given birth without access to prenatal care.55 In addition, New York City’s child welfare administration revised its policy on HIV testing for infants and children entering foster care.56 Because newborns in foster care were much more likely to be HIV-positive than newborns going home with their mothers, the agency took several steps to ensure that they receive appropriate evaluation and follow-up. Infected children and their foster parents became eligible for special medical and social services. Despite these two initiatives aimed at increasing the numbers of HIV-infected infants identified at birth, the controversy over newborn testing erupted in the New York State legislature in 1993–94. Nettie Mayersohn, an assemblywoman from Queens, introduced legislation to require the State Department of Health to notify parents if their infant showed positive results on the HIV test that was being done anonymously. The debate quickly polarized and raged not only in the legislative halls but also in the media. To deflect the furor and to table action on the Mayersohn bill, the New York State Assembly’s Ad Hoc Task Force on AIDS asked the Governor’s AIDS Advisory Council to study the issue. After several months of hearings and debate, in February 1994 a subcommittee convened by the Advisory Council recommended a policy of “mandatory counseling and strongly encouraged voluntary testing for all pregnant and postpartum women” as well as other measures
212 Ethics and Epidemiology to strengthen counseling and testing and availability of services.57 Pediatricians have been vigorous advocates for their HIV-infected patients. A group of pediatricians dissented from the report, declaring that this policy was “insufficient to offer the protection which every infant deserves” and that voluntary testing has an “unacceptably high failure rate.”58 A legislative compromise that would have mandated counseling and encouraged voluntary testing failed on the last day of the session. The New York State Task Force on Life and the Law, another executive branch body, was then asked to restudy the entire issue. In May 1995 the CDC suspended funding of the blinded newborn seroprevalence studies. (CDC discontinued the unlinked “family” of seroprevalence studies in 1999.59) Although name-based or coded HIV reporting has become standard in most of the United States, it is still important to distinguish this kind of monitoring from blinded seroprevalence surveys. In the acrimonious debate about unblinding the newborn studies, the value of anonymous surveillance as an epidemiological tool was hardly mentioned, and then only to be misunderstood. Anonymous unlinked surveys test blood samples already collected for other medical purposes and are stripped of all personal identifiers. Blinded surveys can generate less biased estimates of HIV prevalence because individuals do not have the opportunity to select whether to participate in serological testing. Blinded surveys are simpler, quicker, and less costly than nonblinded surveys. Ethically, blinded surveys do not place any participant at risk of identification, so issues of privacy and confidentiality are not raised.60 As public health practice rather than human studies research, survey protocols are exempt from IRB review. Even so, the CDC IRB reviewed and approved the protocol for the HIV seroprevalence surveys. As Fairchild and Bayer point out, in public health, there is an ethical mandate to “undertake surveillance that enhances the well-being of populations.” At the same time, they urge, ethical oversight of surveillance “can serve as a means of avoiding inadvertent breaches in confidentiality and stigma; it can help to ensure that the public understands that surveillance will occur and what purposes it serves; and it can protect politically sensitive surveillance efforts.”61 Blinded seroprevalence surveys in general have been remarkably uncontroversial and free of political influence. However, blinded HIV serological surveys on adults or children have been very controversial in England62 and the Netherlands,63 and one commentator in the United States has claimed that “surreptitious testing is deceitful” and that “in the quest to eliminate self-selection bias, epidemiologists are ignoring the difference between human subjects and laboratory animals.”64 One of the purposes of blinded serological surveys is to pinpoint precisely where resources and services are needed, including counseling and testing. But,
Changing Perspectives on AIDS 213 as the Public Health Service pointed out, “The surveillance activity, in the case of an HIV prevalence survey, must not be confused with the public health intervention for which the survey may indicate a need” (p. 213).65 In this view the use of blinded surveys seems to be compatible with a parallel system of voluntary counseling and testing in settings where individuals likely to be at risk of HIV infection are treated. More recent international discussions of surveillance focus on “second- generation surveillance.” According to UNAIDS and the World Health Organization (WHO), this includes “biological surveillance of HIV and other sexually transmitted infections (STIs) as well as systematic surveillance of the behaviour that spreads them. It aims to use these data together to build up a comprehensive picture of the HIV/AIDS epidemic.”66 The Working Group warns, however, “second-generation surveillance for HIV is an intrusive business. It involves collecting specimens of people’s body fluids and asking them questions about some of the most intimate aspects of their lives. Continuing to do this over time is morally unacceptable unless the data are actively used to improve life for the people from whom they are collected and their communities.” This statement takes a strong moral stance that could equally be and often is applied to clinical research. While I agree with the principle, the implementation in practice is often difficult because of resource constraints and lack of health infrastructure. The agencies that do the surveillance are not usually the agencies that make resource decisions. Nevertheless, public health officials should certainly advocate for positive actions based on the knowledge they learn.
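The methodological point made above, that blinded surveys avoid the bias introduced when people can decline testing, can be illustrated with a small simulation. All of the numbers below (prevalence, participation probabilities, population size) are invented for illustration and do not describe any actual serosurvey.

```python
# Illustrative simulation (hypothetical numbers) of why anonymous unlinked ("blinded")
# serosurveys can yield less biased prevalence estimates than voluntary testing:
# if infected people are less likely to accept a named test, the voluntary estimate
# is pulled downward, while the blinded survey tests every residual specimen.
import random

random.seed(1)
TRUE_PREVALENCE = 0.05       # assumed true prevalence in the sentinel population
ACCEPT_IF_INFECTED = 0.40    # assumed probability an infected person agrees to be tested
ACCEPT_IF_UNINFECTED = 0.70  # assumed probability an uninfected person agrees

population = [random.random() < TRUE_PREVALENCE for _ in range(100_000)]

# Blinded survey: every leftover specimen is tested, stripped of identifiers.
blinded_estimate = sum(population) / len(population)

# Voluntary survey: only those who consent are tested.
volunteers = [
    infected for infected in population
    if random.random() < (ACCEPT_IF_INFECTED if infected else ACCEPT_IF_UNINFECTED)
]
voluntary_estimate = sum(volunteers) / len(volunteers)

print(f"true prevalence     {TRUE_PREVALENCE:.3f}")
print(f"blinded estimate    {blinded_estimate:.3f}")
print(f"voluntary estimate  {voluntary_estimate:.3f}")  # biased toward zero
```

In this toy example the voluntary estimate falls well below the true prevalence simply because infected individuals are assumed to be less willing to accept a named test, while the blinded estimate, which uses every residual specimen, does not depend on willingness to participate.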
Defining Orphanhood If the low rates of HIV testing in the United States, even among populations at greatest risk, are a good indicator,67 public fears about the HIV/AIDS epidemic appear to have waned. The CDC estimates that 1.1 million people in the United States are living with HIV, including about 162,500 people who are unaware of their status. Approximately 40% of new HIV infections are transmitted by people who are living with undiagnosed HIV. However, because of the vast scale of the epidemic in Africa, Asia, and several Caribbean and Central American countries, and the poverty of the countries where it has killed millions and wreaked devastation on fragile economies and medical and social infrastructures, there are daunting ethical as well as practical and economic issues. The controversy over the use of placebo-controlled trials (in which some participants are randomly assigned to a sham treatment) to prevent mother-to-child transmission in countries where the Western standard of care cannot—or will not—be provided is the best-known but by no means the only example.68 Debates about who will
214 Ethics and Epidemiology receive the growing but still inadequate supplies of antiretroviral therapy accompany the efforts to provide treatment.69,70 In the United States, the relationship between children and HIV was considered almost exclusively in the context of mother-to-child transmission. That these inaptly termed “AIDS babies,” most of whom were not HIV-infected in utero, had uninfected siblings who would become orphans when their mother died was not, at least initially, recognized, studied, or addressed. Since the mid-1990s, however, significant efforts have been made to provide services for all these children and their new guardians.71 In sub-Saharan Africa, however, the situation was quite different. In the context of a heterosexually transmitted disease, the growing number of orphans and the strain placed on extended families caring for them were recognized (but not addressed) in the late 1980s. The first survey of orphan prevalence was conducted in Uganda in 1989. Some of these orphans were HIV-infected; most were not. In July 2019, UNICEF, an agency of the United Nations, estimated that almost 15 million children under the age of eighteen had lost one or both parents to AIDS. Most were in sub-Saharan Africa. Millions more have been affected through a heightened risk of poverty, homelessness, school dropout, discrimination, and loss of opportunities.72 In the past several years, international organizations like UNICEF and UNAIDS, and international nongovernmental organizations like Save the Children and World Vision, have made the care of orphans, a term now usually linked with “other vulnerable children” or OVC, a priority. Despite the apparent simplicity of the term “orphan,” it is conceptually and logistically complex to conduct epidemiological studies that define the population, monitor changes, and identify areas of need. Yet it is essential to do so to target limited resources in a way that does not stigmatize these children or deprive others in similarly dire circumstances of assistance. In biblical times and patriarchal societies, orphans were defined by their fathers’ death; numerous references in both the Old and New Testaments, Islamic texts, and other religious writings cite the obligation to care for “widows and orphans.” This phrase was further immortalized in Lincoln’s Second Inaugural Address. The Widows and Orphans Act of 2005 (S. 644) was introduced in the U.S. Senate to create new immigration categories for “certain women and children at risk of harm”; these individuals are not necessarily either widows or orphans. They are “individuals who have a credible fear of harm due to age or sex and who lack adequate protection.” Despite the current controversy over immigration, this legislation is still in effect. Despite the history and persistence of this metaphor, when it comes to real children today, the death of a mother has come to represent the primary loss, since mothers are presumed to be the nurturing parent. In U.S. epidemiological
Changing Perspectives on AIDS 215 studies, the category of “orphan” does not exist; “motherless child” is the relevant category.73 Since there are no epidemiological data on the number of children men father analogous to women’s fertility rates, there is no way to calculate the number of fatherless children, if “fatherless” means that the male parent has died and is not simply absent from the family.74 In international epidemiological usage, a child whose mother has died is a “maternal orphan”; if the dead parent is the father, the child is a “paternal orphan.” In either case, that child is a “single orphan.” If both parents are dead, the child is a “double orphan.” Until 2004, UNAIDS and UNICEF used the cutoff age of fifteen;75 in the 2004 report, these agencies extended the age to eighteen,76 which is the age cutoff for a “child” in the 1989 UN Convention on the Rights of the Child. (In reporting cases of AIDS, however, the age of adulthood is fifteen.) From a child rights perspective, Gruskin and Tarantola argue that “the inconsistency of age groupings is further compounded by the different age cut- offs at which countries recognize the legal ‘age of consent’ for consensual sex, as these may affect the degree to which children feel comfortable coming forward for needed services.” (p. 148).77 Since orphaning is more common in Africa than in the West in general, because of war, civil unrest, and tropical diseases, it is important to understand the contribution of HIV/AIDS to the total orphan population. All studies indicate that AIDS has contributed significantly to increasing the number of orphans, especially double orphans since HIV is sexually transmitted.78 One study comparing household-survey estimates with projections of mortality and orphan numbers in sub-Saharan Africa concluded that the fraction of orphans attributable to AIDS may be greater than previously estimated.79 The numbers of orphans can be estimated based on the number of children born to women who have died from AIDS over the preceding seventeen years using country-and age-specific fertility rates, as well as rates of mother-to-child transmission that would result in the death of an HIV-infected child. Declining fertility rates in the year preceding death are also included in the equation. These estimates vary according to the reliability of the data on which they are based. Using a more direct method, investigators can conduct censuses asking about the number of orphans in households and communities. This method also has limitations of both under-and over-counting. Children may not be identified as orphans to avoid stigmatization or because they may have been living with the family prior to the parent’s illness and death. Family members may be reluctant to mention disabled and sick children. On the other hand, families seeking to obtain benefits that are designed for orphans may misrepresent their number. Children may be miscategorized based on the death of one or both parents. The phenomenon of “orphan clustering” as parents migrate or send their children to relatives may affect censuses.
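The indirect estimation method sketched above can be written out schematically. Every input below (deaths, fertility rate, transmission and child mortality rates, the size of the fertility decline before death) is a placeholder chosen only to show the shape of the calculation; none of it represents data for any country.

```python
# Schematic version of the indirect estimation method described above: sum, over
# women who died of AIDS in each past year, the children they would be expected to
# have borne who are still alive and still under eighteen. All inputs are hypothetical.
AGE_LIMIT = 18

maternal_aids_deaths = {year: 10_000 for year in range(2001, 2019)}  # deaths per year (made up)
fertility_rate = 0.12            # assumed births per woman per year of reproductive life
final_year_fertility_drop = 0.4  # assumed fertility decline in the year before death
mtct_rate = 0.30                 # assumed mother-to-child transmission rate
infected_child_mortality = 0.50  # assumed share of infected children who have died

def estimated_maternal_orphans(reference_year: int) -> float:
    total = 0.0
    for death_year, deaths in maternal_aids_deaths.items():
        years_since_death = reference_year - death_year
        if years_since_death < 0:
            continue
        # A child born k years before the mother's death is an orphan under 18
        # today only if k + years_since_death < AGE_LIMIT.
        eligible_birth_years = max(0, AGE_LIMIT - years_since_death)
        if eligible_birth_years == 0:
            continue
        births = deaths * fertility_rate * eligible_birth_years
        # Fertility falls in the year immediately preceding the mother's death.
        births -= deaths * fertility_rate * final_year_fertility_drop
        # Remove children who acquired HIV perinatally and have since died.
        total += births * (1 - mtct_rate * infected_child_mortality)
    return total

print(f"{estimated_maternal_orphans(2018):,.0f} surviving maternal orphans (illustrative)")
```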
216 Ethics and Epidemiology Programs developed to serve orphans in a community generally try not to single out orphans due to AIDS, because this causes resentment and increases stigma. (Donor agencies and the public, however, are more likely to support programs if they are “helping orphans.”) If the definition of “orphan” is elastic, the definition of “vulnerable child” or “child in especially difficult circumstances” is even more so. The World Bank Toolkit on orphans and vulnerable children defines them as “children who are more exposed to risks than their peers . . . and most likely to fall through the cracks of regular programs.”80 Among the more problematic consequences of defining orphans as age fifteen and under is the assumption that adolescents of sixteen are fully adult and able to take care of themselves as well as their younger siblings. Members of this group have particularly urgent psychological and material needs and are at risk of becoming HIV-infected themselves through early sexual activity. Teenage girls are particularly vulnerable to sexual exploitation, HIV infection, and pregnancy. The phenomenon of “orphans of orphans” is already occurring.
AIDS in the Twenty-First Century While the definitive history of HIV/AIDS has yet to be written, Oppenheimer gives an astute preliminary assessment: From the beginning of the epidemic, epidemiologists conceptualized HIV infection as a complex social phenomenon, with dimensions that derived from the social relations, behavioral patterns, and past experiences of the population at risk. On the one hand, the epidemiologists’ approach may have skewed the choice of models and the hypotheses pursued and may have offered some justification for homophobia. On the other, by defining HIV infection as a multifactorial phenomenon, with both behavioral and microbial determinants, epidemiologists offered the possibility of primary prevention, a traditional epidemiological response to infectious and chronic diseases. Epidemiologists, in effect, established the basis for an effective public health campaign and through publications, conferences, and the continuous collection of surveillance data helped make AIDS a concern of policymakers and the public.81
Ethics played a role in providing a reasoned and principled approach to some of the conflicts that have arisen. Ethical considerations have provided models for resolutions of controversies, an emphasis on values of confidentiality and respect for individuals, and recognition of the social context of disease in designing prevention and treatment programs. The lessons learned could be applied in newer epidemics, such as the disorders associated with opioid overuse.
Changing Perspectives on AIDS 217 But, unfortunately, to revisit Lewis Thomas’s categories of technology, there is no “transformative” technology in sight, and the ethical issues will continue to evolve.
Acknowledgments For their cogent comments and suggestions, I would like to thank Abigail Zuger, MD, who treated AIDS patients in the beginning of the epidemic until her recent retirement; and Suzanne Brundage, now a colleague at the United Hospital Fund, who worked in Africa with AIDS-related youth programs.
References 1. Thomas, L. “The Technology of Medicine.” New England Journal of Medicine (1971) 285:1366–1368. 2. Global UNAIDS Update. Miles to Go: Closing Gaps, Breaking Barriers, and Righting Injustices. June 2018. https://ovcsupport.org/wp-content/uploads/2018/07/miles-to-go_en.pdf 3. Luthra, S., and Gorman, A. “Out-of-Pocket Costs Put HIV Prevention Drug out of Reach for Many at Risk.” California Healthline (July 3, 2018). https://californiahealthline.org/news/out-of-pocket-costs-put-hiv-prevention-drug-out-of-reach-for-many-at-risk/ 4. Cohen, J. “A Campaign to End AIDS by 2030 Is Faltering Worldwide.” Science (July 21, 2018). https://www.google.com/search?q=jon+cohen%2C+a+campaign+to+end+aids+by+2030&oq=Jon+&aqs=chrome.0.35i39j69i57j0l4.2655j0j8&sourceid=chrome&ie=UTF-8 5. Cohen, J. “Nigeria Has More HIV-Infected Babies than Anywhere in the World.” Science (June 11, 2018). http://www.sciencemag.org/news/2018/06/nigeria-has-more-hiv-infected-babies-anywhere-world-it-s-distinction-no-country-wants 6. Cohen, J. “Russia’s HIV/AIDS Epidemic Is Getting Worse, Not Better.” Science (June 6, 2018). http://www.sciencemag.org/news/2018/06/russia-s-hivaids-epidemic-getting-worse-not-better 7. Cohen, J. “We’re in a Mess: Why Florida is Struggling with an Unusually Severe HIV/AIDS Problem.” Science (June 12, 2018). http://www.sciencemag.org/news/2018/06/we-re-mess-why-florida-struggling-unusually-severe-hivaids-problem 8. Centers for Disease Control and Prevention. “Racial and Ethnic Disparities in Sustained Viral Suppression and Transmission Risk Potential Among Persons Receiving HIV Care—United States, 2014.” Morbidity and Mortality Weekly Report (February 2, 2018). https://www.cdc.gov/mmwr/volumes/67/wr/pdfs/mm6704-H.pdf 9. Centers for Disease Control and Prevention. “First Year Geographic Focus: Ending the HIV Epidemic: A Plan for America.” https://www.cdc.gov/endhiv/docs/Ending-HIV-geographic-focus-508.pdf/ 10. Shilts, R. And the Band Played On: Politics, People and the AIDS Epidemic. St. Martin’s Press, 1987.
218 Ethics and Epidemiology 11. Fee, E., and Fox, D. M. AIDS: The Making of a Chronic Disease. University of California Press, 1992. 12. Bayer, R. Private Acts, Social Consequences: AIDS and the Politics of Public Health. Free Press, 1989. 13. Oppenheimer, G. M. “In the Eye of the Storm: The Epidemiological Construction of AIDS.” In AIDS: The Burdens of History, ed. E. Fee and D. M. Fox. University of California Press, 1988, p. 267. 14. National Institute on Drug Abuse. “Heroin.” https://www.drugabuse.gov/ publications/research-reports/heroin 15. Last, J. M. “Epidemiology and Ethics.” Law, Medicine & Health Care (1991) 19:66–174. 16. Capron, A. M. “Protection of Research Subjects: Do Special Rules Apply to Epidemiology?” Law, Medicine & Health Care (1991) 19:185. 17. Cann, C. I., and Rothman, K. J. “IRBs and Epidemiological Research: How Inappropriate Restrictions Hamper Studies.” IRB: A Review of Human Subjects Research (1984) 6:5–7. 18. Keyes, K. M., and Galea, S. “Setting the Agenda for a New Discipline: Population Health Science.” American Journal of Public Health (2016) 106(4):633–634. 19. Centers for Disease Control and Prevention. “Pneumocystis Pneumonia—Los Angeles.” Morbidity and Mortality Weekly Report (1981) 30:250–252. https://www. cdc.gov/mmwr/preview/mmwrhtml/june_5.htm 20. Centers for Disease Control and Prevention. “Kaposi’s Sarcoma and Pneumocystis Pneumonia Among Homosexual Men—New York City and California.” Morbidity and Mortality Weekly Report (1981) 30:305–307. https://www.cdc.gov/mmwr/preview/mmwrhtml/00001114.htm 21. Maron, D. F. “New HIV Genetic Evidence Dispels ‘Patient Zero’ Myth.” Scientific American (October 26, 2016). https://www.scientificamerican.com/article/ new-hiv-genetic-evidence-dispels-patient-zero-myth/ 22. For the influence of Patient Zero on public perception, see McKay, R. A. “‘Patient Zero’: The Absence of a Patient’s View of the Early North American AIDS Epidemic.” Bulletin of the History of Medicine (2014) 88:161–194. 23. Oppenheimer, G. M. “Causes, Cases, and Cohorts: The Role of Epidemiology in the Historical Construction of AIDS.” In AIDS: The Making of a Chronic Disease, ed. E. Fee and D. M. Fox. University of California Press, 1992, pp. 62, 76. 24. Farmer, P. AIDS and Accusation: Haiti and the Geography of Blame. University of California Press, 1992. 25. Bayer, R., Levine, C., and Murray, T. H. “Guidelines for Confidentiality in Research on AIDS.” IRB: A Review of Human Subjects Research (November/December 1984) 6:1–3. 26. Gostin, L. O., Turek-Brezina, J., Powers, M., et al. “Privacy and Security of Personal Information in a New Health Care System.” Journal of the American Medical Association (1993) 270:2488. 27. CMS, Medicare.gov. “New Medicare Cards Are in the Mail.” https://www.medicare. gov/newcard/ 28. Mershon, E. “Insurer’s Mailing to Customers Made HIV Status Visible Through Envelope Window.” STATNEWS (August 21, 2017). 29. Coughlin, S. S., Mann, P., and Jennings, B. “Case Study: A Gay Epidemiologist and the DC Commission of Public Health AIDS Advisory Committee.” Narrative Inquiry in Bioethics (in press).
Changing Perspectives on AIDS 219 30. Fiereck, K. J. “Cultural Conundrums: The Ethics of Epidemiology and the Problems of Population in Implementing Pre-exposure Prophylaxis.” Developing World Bioethics (April 2015) 15(1):27–39. 31. Centers for Disease Control and Prevention. “Update on Acquired Immune Deficiency Syndrome (AIDS)— United States.” Morbidity and Mortality Weekly Report (1982) 31:507–514. 32. Levine, C., and Stein, G. L. “What’s in a Name? The Policy Implications of the CDC Definition of AIDS.” Law, Medicine, and Health Care (1991) 19:278–290. 33. Centers for Disease Control and Prevention. “Revised Surveillance Case Definition for HIV Infection—United States.” Morbidity and Mortality Weekly Report (2014) 63(3):1–11. 34. Centers for Disease Control and Prevention. “Revision of Case Definition of Acquired Immunodeficiency Syndrome for National Reporting—United States.” Morbidity and Mortality Weekly Report (1985) 34:373–375. 35. Centers for Disease Control and Prevention. “Revision of the CDC Surveillance Case Definition for Acquired Immunodeficiency Syndrome.” Morbidity and Mortality Weekly Report (1987) 36(Suppl):1S–15S. 36. Centers for Disease Control and Prevention. “1993 Revised Classification System for HIV Infection and Expanded Surveillance Case Definition for AIDS Among Adolescents and Adults.” Morbidity and Mortality Weekly Report (1992) 41(RR- 17):1–5. The 1991 proposal is discussed in this report: https://www.cdc.gov/mmwr/ preview/mmwrhtml/00018871.htm. 37. Centers for Disease Control and Prevention. “Update: Impact of the Expanded AIDS Surveillance Case Definition for Adolescents and Adults on Case Reporting United States, First Quarter 1993.” Morbidity and Mortality Weekly Report (1994) 43:160–161, 167–170. https://www.cdc.gov/mmwr/preview/mmwrhtml/00020374. htm 38. Centers for Disease Control and Prevention. “Guidelines for National Human Immunodeficiency Virus Case Surveillance, Including Monitoring for Human Immunodeficiency Virus Infection and Acquired Immunodeficiency Syndrome.” Morbidity and Mortality Weekly Report (1999) 48(RR-13):1–28. https://www.cdc.gov/ MMwr/preview/mmwrhtml/rr4813a1.htm 39. Kaiser Family Foundation. “State Health Facts, HIV Testing in the United States.” June 27, 2018. https://www.kff.org/hivaids/fact-sheet/hiv-testing-in-the-united-states/ 40. Food and Drug Administration. “Revised Recommendations for Reducing the Risk of Human Immunodeficiency Virus Transmission by Blood and Blood Products.” May 5, 2015. https://www.federalregister.gov/documents/2015/05/15/2015-11690/ revised-recommendations-for-reducing-the-risk-of-human-immunodeficiency- virus-transmission-by-blood 41. Sanders, G. D., Bayoumi, A. M., Sundaram, V., et al. “Cost-Effectiveness of Screening for HIV in the Era of Highly Active Antiretroviral Therapy.” New England Journal of Medicine (2005) 352(6):570–585. 42. Paltiel, A. D., Weinstein, M. C., Kimmel, A. D., et al. “Expanded Screening for HIV in the United States—An Analysis of Cost-Effectiveness.” New England Journal of Medicine (2005) 352(6):586–595. 43. Bozzette, S. A. “Routine Screening for HIV Infection—Timely and Cost-Effective.” New England Journal of Medicine (2005) 352(6):620–621. 44. Paltiel et al., “Expanded Screening.”
220 Ethics and Epidemiology 45. Institute of Medicine. HIV Screening of Pregnant Women and Newborns. 1991, pp. 2–3. 46. Faden, R., Geller, G., and Powers, M. AIDS, Women and the Next Generation. Oxford University Press, 1992, pp. 333–334. 47. Centers for Disease Control and Prevention. “Zidovudine for the Prevention of HIV Transmission from Mother to Infant.” Morbidity and Mortality Weekly Report (1994) 43(16):285–287. 48. Brogly, S., Williams, P., Seage, G. R., et al., for the PACTG 219C Team. “Antiretroviral Treatment in Pediatric HIV Infection in the United States: From Clinical Trials to Clinical Practice.” Journal of the American Medical Association (2005) 293(18):2213–2220. 49. Gerberding, J. L., and Jaffe, H. W. “Dear Colleague” letter, April 23, 2003. Centers for Disease Control and Prevention. 50. American College of Obstetrics and Gynecology. “ACOG Committee Opinion Number 304, November 2004. Prenatal and Perinatal Human Immunodeficiency Virus Testing: Expanding Recommendations.” Obstetrics and Gynecology (2004) 104(5 Pt 1):1119–1124. 51. Centers for Disease Control and Prevention. “HIV Among Pregnant Women and Children.” March 2018. https://www.cdc.gov/hiv/pdf/group/gender/pregnantwomen/cdc-hiv-pregnant-women.pdf 52. Webber, D. W. “HIV Testing During Pregnancy: The Value of Optimizing Consent.” AIDS & Public Policy Journal (2004) 18(3):83–97. 53. Kelly, K. “Obtaining Consent Prior to Prenatal HIV Testing: The Value of Persuasion and the Threat of Coercion.” AIDS & Public Policy Journal (2004) 18(3):98–111. 54. Pappaioanou, M., Dondero, Jr., T. J., Peterson, L. R., et al. “The Family of HIV Seroprevalence Surveys: Objectives, Methods, and Uses of Sentinel Surveillance for HIV in the United States.” Public Health Reports (1990) 105:113–119. 55. New York State Department of Health, AIDS Institute. “Women and Children with HIV Infection in New York State: 1990–92.” New York State Department of Health, 1992. 56. New York City, Human Resources Administration, Child Welfare Administration. “Draft Bulletin: HIV Testing of Children in Foster Care.” April 23, 1993. 57. New York State AIDS Advisory Council. “Report of the Subcommittee on Newborn HIV Screening of the New York State AIDS Advisory Council.” February 10, 1994, p. 17, iii. 58. “Dissenting Comments on the January 31, 1994 Report of the Subcommittee on Newborn Screening to the AIDS Advisory Council.” February 4, 1994. Crawford, C. “Protecting the Weakest Link: A Proposal for Universal, Unblinded Pediatric HIV Testing, Counseling and Treatment.” Journal of Community Health (1995) 20:125–141. https://doi.org/10.1007/BF02260334 59. Centers for Disease Control and Prevention. HIV Prevalence Trends in Selected Populations in the United States: Results from National Serosurveillance, 1993–1997. 2001, p. 2. 60. Bayer, R., Levine, C., and Wolf, S. M. “HIV Antibody Screening: An Ethical Framework for Evaluating Proposed Programs.” Journal of the American Medical Association (1986) 256:1768–1774. 61. Fairchild, A. L., and Bayer, R. “Ethics and the Conduct of Public Health Surveillance.” Science (2004) 303:631–632.
Changing Perspectives on AIDS 221 62. Zulueta, P. “The Ethics of Anonymised HIV Testing of Pregnant Women: A Reappraisal.” Journal of Medical Ethics (2000) 26:25–26. 63. Bayer, R., Lumey, L. H., and Wan, L. “The American and Dutch Responses to Unlinked Anonymous HIV Seroprevalence Studies: An International Comparison [letter].” AIDS (1992) 4:4283–4290. 64. Isaacman, S. I. “HIV Surveillance Testing: Taking Advantage of the Disadvantaged [letter].” American Journal of Public Health (1993) 83:597. 65. Dondero, T. J., Pappaioanou, M., and Curran, J. W. “Monitoring the Levels and Trends of HIV Infection: The Public Health Service’s HIV Surveillance Program.” Public Health Reports (1998) 103:213–220. 66. UNAIDS/WHO Working Group on Global HIV/AIDS/STI Surveillance. “Guidelines for Effective Use of Data from HIV Surveillance Systems.” World Health Organization, 2004, pp. 5, 46. www.unaids.org 67. Ostoermann, J., Kumar, V., Pence, B. W., and Whetten, K. “Trends in HIV Testing and Differences Between Planned and Actual Testing in the United States, 2000–2005.” Archives of Internal Medicine (2007) 267:2128–2135. 68. Macklin, R. Double Standards in Medical Research in Developing Countries. Cambridge University Press, 2004. 69. UNAIDS and WHO. Guidance on Ethics and Equitable Access to HIV Treatment and Care. 2004. 70. Rennie, S., and Behets, F. “AIDS Care and Treatment in Sub- Saharan Africa: Implementation Ethics.” Hastings Center Report (2006) 36(3):23–31. 71. Draimin, B. H., and Reich, W. A. “Troubled Tapestries: Children, Families, and the HIV/AIDS Epidemic in the United States.” In A Generation at Risk: The Global Impact of HIV/AIDS on Orphans and Vulnerable Children, ed. G. Foster, C. Levine, and J. Williamson. Cambridge University Press, 2005, pp. 213–232. 72. UNICEF. “Global and Regional Trends.” https://data.unicef.org/topic/hivaids/ global-regional-trends/ 73. Lee, L. M., and Fleming, P. L. “Estimated Number of Children Left Motherless by AIDS in the United States, 1978–1998.” Journal of the Acquired Immune Deficiency Syndrome (2003) 34(2):231–236. 74. Michaels, D., and Levine, C. “Estimates of the Number of Motherless Youth Orphaned by AIDS in the United States.” Journal of the American Medical Association (1992) 268(24):3456–3461. 75. UNAIDS, USAID, and UNICEF. Children on the Brink 2002: A Joint Report on Orphan Estimates and Program Strategies. 2003. 76. UNAIDS, UNICEF, and USAID. Children on the Brink 2004: A Joint Report of New Orphan Estimates and a Framework for Action. 2004, pp. 4, 33–35. 77. Gruskin, S., and Tarantola, D. “Human Rights and Children Affected by HIV/AIDS.” In Foster et al., Generation at Risk, 134–158, at 148. 78. Monasch, R., and Boerma, J. T. “Orphanhood and Childcare Patterns in Sub-Saharan Africa: An Analysis of National Surveys from 40 Countries.” AIDS (2004) 18(Suppl 2):S55–S65. 79. Grassly, N. C., Lewis, J. J. C., Mahy, M., et al. “Comparison of Household-Survey Estimates with Projections of Mortality and Orphan Numbers in Sub-Saharan Africa in the Era of HIV/AIDS.” Population Studies (2004) 58(2):207–217.
222 Ethics and Epidemiology 80. World Bank. “The OVC Toolkit in SSA: A Toolkit on How to Support Orphans and Other Vulnerable Children (OVC) in Sub-Saharan Africa (SSA).” 2018. http:// documents.worldbank.org/curated/en/131531468135020637/The-OVC-toolkit-in- SSA-a-toolkit-on-how-to-support-orphans-and-other-vulnerable-children-OVC- in-Sub-Saharan-Africa-SSA 81. Oppenheimer, “Causes, Cases, and Cohorts,” p. 76.
10
Ethics Curricula in Epidemiology Kenneth W. Goodman and Ronald J. Prineas
Introduction: Ethics, Epidemiology, and Education Epidemiology has over the past quarter century joined other sciences and health professions in making ethics education a component of the larger curriculum. The change is far from total, however. Many programs and schools continue to include ethics only episodically, if at all. There are several reasons for this. One is a simple lack of familiarity: many epidemiologists and public health researchers assume—sometimes wisely—that they do not have the background or competence to introduce ethics into their courses. Indeed, philosophers and others with pedagogical competence in ethics would never presume to teach biostatistics, research design, or foundations of epidemiology without adequate preparation. Another is a shortage of curricular resources. Even if epidemiology and public health faculty were able and willing, the task of designing a new curriculum or introducing ethical issues into existing curricula can be daunting without familiarity with previous efforts. A third is uninterested leadership. While this explanation would be difficult to demonstrate or document, it should be uncontroversial to surmise that some leaders just do not regard ethics as worthy of including in the curriculum—else they would include it. Moreover, additions to established university curricula are often viewed as avoidable entanglements: the schedule is full, the students are busy, and faculty members are stretched thin. Nevertheless, progress and problems in science and other disciplines inevitably force revisions to existing curricula, and developments in epidemiology and ethics have attained such importance that they continue to merit (i) development of new course materials, (ii) training of appropriate faculty members, and (iii) inclusion of new courses into epidemiology, public health, and other curricula. Additionally, ongoing efforts to include ethics-and-epidemiology sessions in professional conferences ought to be expanded, and short courses, perhaps on special topics or problems, should be developed for students, practitioners, and university faculty. We provide support for these recommendations in what follows.
Kenneth W. Goodman and Ronald J. Prineas, Ethics Curricula in Epidemiology In: Ethics and Epidemiology. Third edition. Edited by: Steven S. Coughlin and Angus Dawson, Oxford University Press. © Oxford University Press 2021. DOI: 10.1093/oso/9780197587058.003.0010
Core Curriculum: Why Ethics Matters in Epidemiology There are a number of reasons to broaden the emphasis on ethics and epidemiology.1 First, epidemiology is a basic discipline, the security and rigor of which are essential for developing informed health policy. When sloppy research or moral shortcomings weaken scientific conclusions—or public confidence in them—there is a need to instruct students and practitioners in professional practice standards and in the moral foundations of scientific inquiry. If flawed science is used as a basis for public health policy, it can have adverse health and economic consequences. To the extent that incompetent science can lead to wasted public resources or, worse, to poorer public health, it can be characterized as a misappropriation of public funds or a threat to public health. Indeed, concerns in the United States about the public credibility of the research enterprise led the National Institutes of Health (NIH) to require training in research integrity or the responsible conduct of research (RCR) at all institutions receiving NIH training grants. Institutions can thus make a virtue out of necessity by offering high-quality programs in research integrity. (RCR is discussed later in this chapter.) Ethics-in-epidemiology courses need to be developed and linked in one way or another to courses in research design and analysis, and health policy. Research issues are already a core component of epidemiology courses offered at universities and colleges in North America, Europe, and elsewhere in the world, though links between epidemiology and health policy are less common. Coherent attempts to wed ethics to such hybrids are essential. A second motivation for expanded bioethics education is that further development of appropriate curricula will stimulate research in both bioethics and epidemiological science. Serious and sustained attention to ethical issues in epidemiology is relatively recent. Yet this subfield is rich with opportunities to identify and analyze new issues, clarify existing problems and contribute to decision procedures for practitioners, policymakers, public officials, students and others. Scientific research education, for instance, could continue to be stimulated by curricular development that includes work on topics in which uncertainty or methodological controversy raise ethical issues (study design, software engineering, meta-analysis, etc.). Third, a commitment to epidemiology, public health, or even science itself can lack focus and rigor and is weakened in the absence of a clear understanding of the values that shape inquiry and of the conflicts engendered by competing values. Expanded educational programs offer an opportunity to improve such understanding—that is, attention to ethics improves science. This can be made clearer with an example. Suppose the following: (i) an epidemiologist studies the health effects of exposure to a useful chemical compound, (ii) the research
Ethics Curricula 225 identifies a correlation between exposure and a particular malady, (iii) the epidemiologist prepares for journal publication a manuscript describing these findings, and (iv) a manufacturer of the compound learns of the manuscript and offers the epidemiologist a sum of money to alter, omit, or otherwise falsify some of the data in the manuscript. If the epidemiologist accepts the offer and submits the corrupted manuscript (which now identifies no or little correlation between exposure and the malady), and if the altered manuscript is published, readers of the journal and of news accounts based on it have a reason to believe the compound is safe—where the evidence suggests that it is not. The resulting publication is flawed, inaccurate, and misleading, and should be taken to exemplify poor science. Because the article would have been flawed, inaccurate, and misleading even if it were not intentionally corrupted, the fact that it was corrupted in this way establishes a clear link between high-quality inquiry and the moral duties of scientists. So, when we make the case that expanded educational programs can improve understanding of the “values that shape inquiry,” we should be understood to be saying that ethics and scientific rigor, or ethics and quality, go hand in hand. Now, most scientific misconduct akin to that in the example is not very interesting ethically. It is uncontroversially wrong and blameworthy under any account of morality. Fabrication, falsification, and plagiarism, say, are wrong because they deceive people, can hurt people, waste resources, pollute the scientific literature, and so on. Students should be told that these reasons are why such actions are wrong—but a fully fledged ethics education program should surely do more. There is a commonly held misconception that ethics education should comprise lessons in how to avoid doing bad things, or a kind of “virtue training.” It is not at all clear that such an approach in isolation is productive: most scientists who falsify data to please a sponsor, insert themselves as authors of papers without doing any of the work, or use information acquired during research for surreptitious financial gain do not generally do so because they were ignorant of the actions’ wrongness or suffering from any sort of lack of clarity about what to do when tempted. They simply regarded other considerations as more important. Finally, the rise of bioethics as a distinct field has seen the emergence of a wide and rich variety of pedagogical tools, and an unprecedented attention to ethical issues in medicine, scientific research, and health policy. Many of these issues are at the core of clinical practice and involve problems arising at the beginning and end of life, in obtaining valid consent, when protecting confidentiality, in assessing patient capacity, when allocating resources, and the like. But bioethics and its curricula have also included issues at the periphery of most medical and nursing practice, including the likes of xenographic transplants, cadaveric sperm procurement, and cryonics for life extension. Ethics in epidemiology and public
226 Ethics and Epidemiology health deserve at least as much attention as these rarely encountered components of what we call “boutique ethics.” We are not suggesting that rare topics are not worthwhile targets for sustained conceptual analysis and policy debate. There is almost always much to be learned from such analyses and debate—and often much that can be applied in more familiar domains. Rather, we are suggesting that the intersection of ethics and epidemiology is itself a fundamentally important area of ethical inquiry. The point can be made from another direction: many bioethicists are competent to discuss ethical issues and problems in clinical medicine and nursing, surgery, critical care, and so forth. They are also well acquainted with ethical issues related to organ transplantation, resource allocation, human subjects research, and the like. However, too few have shown evidence of familiarity with issues in epidemiological ethics, or the growing and substantial literature in this area. An early survey of faculty at U.S. schools of public health suggested that 86% of respondents thought ethics should be included in the curriculum and 66% said that they had already included discussion of ethical issues in other courses.2 Yet a survey of epidemiologists on membership lists of three major professional epidemiology organizations (American College of Epidemiology [ACE], American Heart Association Council on Epidemiology and Prevention, and Society for Epidemiologic Research) in 1995–96 found that of the 88% who responded, only 54% were aware that ethics guidelines existed for epidemiologists, and only 58% indicated that there was a need to develop syllabi on ethics in epidemiology.3 Most outlines of ethics-related courses at schools of public health included some lectures directed to epidemiology or public health research, but none presented courses that were solely in the epidemiology curriculum. To learn more about ethics-in-epidemiology courses offered in the United States, in 2006 we mailed a letter requesting a description of such courses to departments of preventive medicine in fifty-three medical schools and to forty- three accredited or associated schools of public health. We received positive replies from only 15% of those contacted. This suggested that ethics education was still lacking in the education of epidemiologists—a gap that still exists. Most syllabi for ethics-related courses at schools of public health still included some lectures directed to epidemiology or public health research, but, again, none presented courses that were solely in the epidemiology curriculum. In 2018, with support from the ACE, we updated the project.4 Using the Council on Education for Public Health database, a contact person from each accredited public health school or program was emailed a request for “ethics in public health/epidemiology” syllabi. Reminder emails were sent a month later to those who did not initially respond. Of 180 accredited public health schools and programs, twenty-four (13.33%) institutions responded with a relevant syllabus after inquiry. Of 132 institutions that were recontacted, twelve (9.09%) provided
a syllabus. Twenty-nine institutions did not have a dedicated ethics course. Some schools provided more than one syllabus. In total, forty-five new syllabi were collected in 2018 and thirty-eight in 2011; thus, eighty-three syllabi have been compiled from fifty-two accredited entities. The most recent syllabi are archived on the University of Miami Miller School of Medicine Institute for Bioethics and Health Policy website (https://bioethics.miami.edu/education/public-health-ethics/epi-syllabi/index.html). Ethics guidelines and issues in medicine, public health, and epidemiology overlap: all of these disciplines deal with individuals singly and in groups, and address ethical issues related to research. Epidemiology has unique perspectives that may well be missed in more general formal public health or medical ethics courses. The upshot is not only that ethical issues in epidemiology and public health should be addressed in epidemiology and public health curricula, but that students in other disciplines could benefit from such teaching as well.
What Epidemiologists Need to Know About Ethics Epidemiologists and philosophers have for some time recognized that epidemiology raises a number of interesting and important ethical issues. Some of these issues will be familiar to those who are acquainted with or have a background in bioethics and research ethics; these issues include informed or valid consent, confidentiality, risk/benefit assessment, patient/subject/participant rights, conflict of interest, allocation of resources, and so forth. Nevertheless, epidemiology often entails—perhaps even requires—a broader set of problems under these headings. For instance, what are a researcher’s duties to obtain consent from individuals in cultures in which family or community leaders are by custom expected to provide what we might call “consent by proxy” for kin or community? Such cultural differences in ethical norms pose an ensemble of interesting challenges for epidemiologists. Many ethical issues of concern to epidemiologists are rarely addressed in general bioethics. These issues include, but are by no means limited to, the notion of “ethical imperialism,”5,6 the danger of social stigma arising from epidemiological findings,7 the appropriate scope and governance of surveillance,8 and the tensions surrounding decisions whether, to what extent, and by which means to reveal public health risks to study communities.9 Indeed, the ethics-and- epidemiology literature has matured considerably in the past decade, and faculty and students can avail themselves of a broad variety of supplementary resources, readings, and topics. Given the unfortunate increase in the (perceived) risk of contagion such as Ebola, COVID-19, and pandemic influenza, for instance, as well as bioterrorism, there is also a burgeoning literature and pedagogical
opportunities in "all-hazard" preparedness and response.10–12 Indeed, the COVID pandemic is a rich if not unprecedented source of case studies and ethical analyses of challenges in epidemiology and public health. Based on analogy with other bioethics curricula as well as some experience teaching ethics in epidemiology, we can offer as a first approximation the suggestion that a course in ethics and epidemiology should consider the following specific goals:
• To help students identify ethical issues, problems, and conflicts in epidemiology and public health
• To examine the ways in which ethical issues in epidemiology are either like or unlike ethical issues in other health sciences
• To provide a decision-making procedure for approaching ethical issues, problems, and conflicts
• To make plain the connections between sound science and ethically and socially responsible science.
Such a course should cover several specific topics depending on available faculty resources, competence, confidence, and expertise. The topics itemized in what follows have evolved from courses first offered in the early 1990s at the University of Miami and Tulane University, and with the encouragement of the ACE, which in 2007 reinvigorated an ethics committee with a focus on education and curricular support.
Moral Foundations
Students ought to be introduced to core concepts in moral philosophy and bioethics. Professional ethics curricula are often designed, and courses taught, by well-meaning scientists and others who somehow believe that excellence in one domain confers competence in another. Generally, students should be exposed to utilitarianism, the view that right actions are those that maximize the welfare of all affected parties, and to Kantianism, the view that we must respect persons as autonomous moral agents and not use or exploit them merely as means to another's ends. The longstanding conflict between these two theories is a rich source of moral debate and insight, especially in epidemiology, in which the rights of individuals can conflict with the duties of communities to protect the public health. Students should also be exposed to major current approaches to bioethics.13,14 Key examples emphasize (i) principles such as respect for autonomy, nonmaleficence (do no harm), beneficence (providing benefits that contribute to
Ethics Curricula 229 welfare), and justice;15,16 (ii) moral rules and moral ideals undergirded by rationality and impartiality;17,18 (iii) case-based reasoning;19,20 and (iv) some account of rights that emphasizes human rights.21 Even this list is not exhaustive, and those with the inclination might profitably review advances in virtue ethics, feminist ethics, and other approaches. At least as important is a review of relativism and universalism. Many argue that there are knowable moral principles, rights, or truths that are independent of culture, era, and nationality (among other considerations). These universalists are prepared to say that some cultures just have it wrong and that their activities or practices violate human rights. Relativists deny this and identify morality as a more or less local phenomenon that cannot be separated from history, culture, nationality, or the like. They reject the possibility of finding a morally neutral vantage point from which to judge the correctness of other cultures’ beliefs. Cultural difference is the source of a number of difficult and important problems in epidemiological ethics, such as varying stances toward informed or valid consent, confidentiality, and so on. Research in cultures different than one’s own provides a rich and ready source of examples. Students should be challenged to grapple with the differences between cultural relativism (which applies to many customs or practices with little or no moral significance—some dietary practices, for instance) and moral relativism (the idea, as noted earlier, that concepts of right or wrong action vary by culture or context) and should be urged to identify and take a stand on the kinds of public health values that are often promoted as universal but that are sometimes disdained by local, religious, or other communities (think of some zealots’ objection to vaccination, wearing masks during the COVID pandemic, or the ideological opposition to fluoridation of drinking water, for example).
Research Integrity
Broadly shared values can be, and have been, transmitted between generations of scientists. The universalist will say they are the best values, or the true ones; the relativist, that they are simply our values. In either case it is appropriate to share such values with students. Students should be able to evaluate the relation between science and ethics and to show that poorly wrought or imprecise science is an inefficient way to learn about the world and is also wrong because it squanders resources, can put people at risk, and wastes colleagues' time. Here, even sloppiness becomes a moral consideration and not merely a matter of economic or workflow efficiency. One does well to affirm science's commitment to objective inquiry and not to special interests, to mentors' responsibilities for students, and to the open and unfettered enterprise of scientific inquiry. Here too is an opportunity to address several of the touchstone principles of scientific scholarship
related to the responsible conduct of research: the need to give credit where credit is due, the obligation not to fabricate or falsify data, the duty to respect the scientific corpus and not pollute it with unnecessary or spurious publications, and the obligations to maintain coherent records and to share data, among others. A fascination with what are called "moral dilemmas" creates the mistaken impression that ethics is chatty and feckless, its task merely savoring problems that cannot be solved. This assessment masks the fact that in case after case of scientific misconduct, what is wanted is not a clearer sense of right and wrong but rather the mettle to do what is right (which is often obvious). Dilemmas are often difficult or impossible to resolve, but this is not the case with many practical problems in epidemiological and other scientific practices. Such practical problems include communicating risk to study populations, crafting policies for mandatory vaccination, balancing privacy rights against collective health benefits, and managing scientific data that could stigmatize subpopulation groups, among many others.
Valid Consent and Refusal
One of the most difficult problems facing epidemiologists is that of valid consent for people to participate in research. This course component should address the following points and issues:
• Minimal criteria for valid consent (adequate information, absence of undue coercion, and competence)
• The nature and context of valid refusal (informed refusal to participate)
• Criteria addressing questions such as the nature of competence, the level of detail that is minimally required for informing potential subjects about risks, and whether monetary and other inducements are undue because they constitute coercion, manipulation, or other forms of inappropriate influence
• Special problems that attach to informed consent forms and their readability, and the relation of readability to the criterion, as noted earlier, of "adequate information" (a brief illustration appears at the end of this section)
• The role, function, and constitution of institutional review boards and other forms of supervisory review
• The potential need for community consent over and above individual consent.
Moreover, much epidemiological research would be impossible if valid consent were required of all subjects or participants. Analysis of data sources ranging
from vital statistics to vaccine registries to newborn genetic screening archives involves potentially vast numbers of people, and obtaining their individual consent is often not possible. This would, however, be a poor reason to prohibit such important research. Steps taken to address this issue include the anonymization of individual records and appropriate institutional oversight. Observe that research on data from which unique identifiers have been removed makes explicit the link between privacy and confidentiality on the one hand, and valid consent on the other. Examples of past abuses of subjects have inspired greater adoption of the term "participants." Instructors therefore have an opportunity to familiarize students with historical milestones and documents that chart the evolution of relevant principles and requirements. Observe also that research without consent creates a large class of exceptions to the generally preferred use of the term "participants" instead of "subjects" to describe those people whose data are being analyzed: if one does not know one is being studied, one cannot be said to be a "participant."
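One of the bulleted items above concerns the readability of consent documents and its relation to the criterion of "adequate information." A brief classroom exercise can make the point concrete with a standard readability formula. The sketch below is ours rather than part of any regulatory guidance: it computes a rough Flesch Reading Ease score, the syllable counter is a crude heuristic, and the sample sentence is invented, so the result is illustrative rather than a validated assessment.

```python
import re

def count_syllables(word: str) -> int:
    """Very rough syllable estimate: count groups of consecutive vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    """Flesch Reading Ease: higher scores are easier to read (60-70 is roughly plain English)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))

# Invented consent-form sentence, typical of the prose that worries review boards.
sample = ("Participation involves the longitudinal acquisition of biospecimens "
          "and the linkage of de-identified questionnaire responses to "
          "administrative claims databases for epidemiological analysis.")
print(round(flesch_reading_ease(sample), 1))  # a very low score: hard to read
```

Many review boards ask that consent documents be written at roughly a sixth- to eighth-grade reading level; a sentence like the one above plainly misses that target, which is precisely the pedagogical point.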
Privacy and Confidentiality
People expect that details about their personal lives will not be made public, and they generally enjoy a right to have such information protected from inappropriate disclosure. As data-gathering protocols become more refined and as data-storage technology progresses, however, there are ever-new challenges to privacy and confidentiality: witness the growing use of geographic information systems and of data mining or machine learning software to make connections among data where these connections reveal facts about individuals and groups, sometimes facts about which the individuals and groups themselves are ignorant. The following curricular items require consideration in any course on ethics and epidemiology:
• Privacy and the degree to which it must be protected, and circumstances under which privacy may be violated (compare this to the earlier point about plausible exceptions to rules for valid consent)
• The right of epidemiologists to use databases for purposes not originally foreseen, such as advancing science and informing policy (compare with the putative "right to benefit from science" [https://www.ohchr.org/EN/Issues/CulturalRights/Pages/benefitfromscientificprogress.aspx])
• Confidentiality and minimal criteria for keeping linked records from inappropriate publication or other disclosure
• Standards for database security and access, addressing criteria for legitimate requests for access
• The use of unlinked or de-identified information (where data or information and a person's identity are decoupled); the relation of this to contexts in which valid consent is presumed unnecessary; the differences among de-identification, anonymization, and pseudonymization
• The need for explicit and rigorous justifications for use or maintenance of linked records
• The important issues raised by the wedding of biology, genetics, and epidemiology.
Use of computers and networks to store and retrieve genetic information is a rapidly expanding phenomenon. It is reasonable to be concerned about the use of information-retrieval techniques to identify genetic patterns or regularities or to acquire genetic information about individuals or groups. Information technology is reshaping epidemiology22,23 as it is reshaping all other sciences, including all health sciences.24 From Big Data to artificial intelligence, it should be uncontroversial to urge that ethics and information technology be a key component of the education—and continuing education—of epidemiologists.
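The distinctions just listed among de-identification, anonymization, and pseudonymization lend themselves to concrete illustration. The following sketch is ours rather than part of any cited guideline: it shows one common approach, keyed hashing of a direct identifier, which yields pseudonyms that allow the data custodian to link records while withholding identities from analysts. The field names and the key are hypothetical.

```python
import hmac
import hashlib

SECRET_KEY = b"replace-with-a-key-held-only-by-the-data-custodian"  # hypothetical

def pseudonymize(identifier: str) -> str:
    """Return a stable pseudonym for a direct identifier.

    The same identifier always maps to the same pseudonym, so records can be
    linked across datasets, but the mapping cannot be reversed without the key.
    Note: this is pseudonymization, not anonymization: re-identification remains
    possible for whoever holds the key, and quasi-identifiers (age, ZIP code,
    dates of diagnosis) may still single people out.
    """
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()[:16]

record = {"medical_record_number": "MRN-0042", "age": 67, "diagnosis": "J10.1"}
released = {**{k: v for k, v in record.items() if k != "medical_record_number"},
            "pid": pseudonymize(record["medical_record_number"])}
print(released)
```

The caveat in the docstring matters: pseudonymized data are not anonymous, because the key holder can re-identify individuals and quasi-identifiers may still permit re-identification; this is one reason the three terms should not be used interchangeably.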
Risks, Harms, and Wrongs The distinction among risks, harms, and wrongs (as well as potential benefits) is an important one because of the applicability of these concepts to the consent process and their utility in addressing issues regarding human subjects protection. The term risk refers to a possible future harm, where harm is defined most broadly as a setback to interests, particularly in life, health, and welfare. Expressions such as minimal risk, reasonable risk, and high risk are often used to refer to the chance of a harm’s occurrence (its probability), but sometimes they also refer to the severity of the harm if it occurs (its magnitude). (We are grateful to Tom Beauchamp for these distinctions.) There are many kinds of harm, including physical, mental, financial, and social. The distinction is useful in the following way. A participant in a research study is at risk of being harmed. If the risk is disclosed before the participant consents to the study, and if the harm later occurs, then the outcome is an unfortunate—but not an unethical—one. If, however, information about the risk is withheld and the participant is thereby deceived, then in addition to being harmed she is wronged. Indeed, it is not necessary to endure a harm in order to be wronged: the same participant might not be harmed but be wronged nonetheless because of the deception. For these reasons, epidemiologists should be given resources for evaluating the following:
• The notion of "acceptable risk"
• What constitutes a harm, and how this may vary by culture, community, or even individual
• What constitutes a wrong, and how wronging differs from harming
• The need to eliminate or minimize risks, harms, and wrongs, and steps for accomplishing this
• Success in crafting ethically optimized studies, or those that maximize and justly distribute benefits and minimize risks
• What constitutes a risk, harm, or wrong to a group or community, including the problem of research-related stigma
• The relationship between risks, harms, and wrongs and their inclusion in informed consent documents and processes
• The need for and the role of truth telling by observers and experimenters, and whether and in what contexts deception can be permitted
• The role of disease prevention and issues in mass screening
• Problems and issues that arise in studies involving special or vulnerable populations, including children, the elderly, indigenous peoples, mental patients, and so forth.
Research Sponsorship, Conflicts of Interest and Commitment, and Advocacy The question of professional allegiance and integrity often finds its most difficult challenges when scientists have a financial, social, or personal interest in the results of their research. If he who pays the piper calls the tune, what are we to make of corporate or government research sponsorship, and how can such sponsorship—often a valuable source of research funding—be managed to eliminate or minimize bias and to ensure the validity of results?25,26 In general, the concern is that a scientist might be biased (not neutral or not objective) as a result of financial, personal, social, political, or other interests or considerations. In our earlier example, the epidemiologist studying toxic exposure was motivated by the payment of money—actually a bribe—to abandon neutrality as regards the data collected. If, instead of being offered a direct payment, his spouse was an employee of the company that manufactured the chemical (or he owned a great deal of stock in it), his objectivity and neutrality might have been questioned even earlier—that is, during the initial collection of the data. Such biases reduce accuracy, erode trust, and damage integrity. Financial interests, while frequently insidious and the source of no little concern, are not unique in compromising research integrity. Investigators who care deeply about the social or policy implications of research might have what has
234 Ethics and Epidemiology been termed a “conflict of conscience.” For instance, a devoutly (ir)religious and socially (liberal or) conservative epidemiologist might be suspected of bias if she were to study condom distribution or needle-exchange programs to reduce HIV transmission and found these interventions to be (in)effective. For these reasons, the question of what constitutes a conflict of interest, and what constitutes a conflict of conscience, should be addressed in any comprehensive course on ethics and epidemiology, along with the following: • Criteria for identifying appropriate and inappropriate sponsorship • The nature and role of intellectual property in scientific research • Obligations, and limits of obligations, to sponsors and employers; managing conflicts • The extent to which the appearance of a conflict should be avoided and whether an appearance of conflict is any different from conflict itself • Contexts in which conflicts should be revealed to research subjects and others • Appropriate institutional efforts to identify and manage conflicts. Issues of scientific bias are especially important for institutional compliance and policy because of liability issues, institutional credibility and reputation, and the dangers of permitting or being thought to foster an environment conducive to bias. While conflicts of conscience are generally more difficult to identify (and prevent) than conflicts of interest, they are in principle no less worrisome sources of bias. A particularly difficult challenge arises when scientists use their knowledge or expertise in attempts to guide or influence public policy. Although there is a case to be made that scientists, because of their expertise, are uniquely positioned to contribute to policy debates, there is no bright line between neutral expert opinion and biased social commentary. Though the primary goal of epidemiology—the study of health effects and improving public health—may be obvious, less clear is the appropriate stance researchers should take in attempts to use their findings or expertise to effect social change or even corporate or institutional policy. Consider, for instance, the environmental health scientist who has political commitments about the sources of climate change, the occupational health researcher with strong feelings about workplace drug abuse, or, as mentioned earlier, the socially liberal or conservative epidemiologist studying whether condoms reduce the incidence of sexually transmitted disease. These examples point to a body of questions that have been found to enliven the educational experience of adult learners: • In what contexts, if any, should epidemiologists become advocates for a particular health policy? Does advocacy inherently introduce bias, or
can professionals effectively prevent personal views from coloring their research? • When does such advocacy constitute a conflict of interest or a conflict of conscience? • How should epidemiologists qua advocates address issues of cultural difference? • What if sincere health policy advocacy conflicts with values prevalent in a study community? • To what extent is it realistic and proper to demand of researchers a measure of sympathy and respect for values and customs that they find objectionable? Contrarily, is it appropriate to use study results to advocate change in values, customs, or programs?
This is a particularly exciting source of classroom discussion and debate, in that reasonable people have disagreed passionately about whether advocacy is morally required—or objectionable.
Communication and Publication
Unpublished research cannot easily advance the primary goal of improving public health, but publication of scientific results is sometimes problematic. Both the failure to publish and the manner in which results are published fall under the well-known heading "responsible conduct of research." The following topics should be addressed:
• Duties to communicate and problems in communicating study results to subjects, communities, sponsors, and so forth (issues of health literacy loom large here)
• Difficulties in accurate and balanced communication of risks, harms, and wrongs
• The obligation to publish study results, effects of publication, and responsibilities to colleagues and "science"
• Issues in publication and authorship, including over-publication and redundant publication
• The role of popular news media in communicating public health information.
Each of these bears on scientific communication in a number of ways, and these connections afford an instructor opportunities to use scholarly media to foster discussion about responsible communication, as well as the popular media to inform debate about public understanding of science, especially the kinds of
probabilistic and sometimes conflicting data that can confound or alarm laypeople about health risks.
Issues and Cases It would therefore be a sad oversight if a course or program in ethics and epidemiology were decoupled from current events. Epidemiology is often in the news, and some of the greatest challenges facing scientists and policymakers are prime candidates for inclusion in an ethics course. If applied ethics is to be of any use at all in solving real-world problems, then surely students should be shown how this works in actual cases. In what follows we offer three examples.
Risk Communication and Pandemic Preparedness and Response COVID-19 provided ample justification for health planners and others’ concern about the possibility that an influenza virus will mutate and achieve human-to- human transmission. The results of such transmission, if anything like the 1918– 19 influenza pandemic, would be disastrous. In that pandemic, 50 million to 100 million people died—as much as 5% of the human population. Though its consequences have been dramatic, COVID has not reached that magnitude. Nevertheless, as COVID spread, government and public health officials needed to make difficult decisions regarding (i) isolation and quarantine, (ii) triage and rationing, and (iii) how best to communicate the risks related to these policy challenges, and about the policy decisions themselves. These decisions were based on incomplete and probabilistic data, and so therefore were the messages communicated to concerned citizens. Pandemic risk communication is shaped by the same factors that influence other threats, the likelihood or severity of which are probabilistic: if an alarm is sounded too early or is too shrill, it can elicit panic and distrust and impede cooperation. If it is sounded too late, those at risk will resent that they were not trusted with probabilistic data in the first place. Risk communication therefore raises issues of decision-making under uncertainty, veracity, and trust. How should epidemiologists contribute to such communication? Pandemic responses related to quarantine and rationing (of vaccines, ventilators, or hospital beds) are another important and interesting source of ethical issues that bear on epidemiology and the public’s health. Instructors or leaders of an ethics-and-epidemiology course will find these issues ripe for discussion and debate. For instance, there are good, perhaps overwhelmingly good,
reasons to use police powers to accomplish public health objectives. Under what circumstances, then, should mask wearing, vaccination, or quarantine and isolation be ordered? One way applied ethics can help to answer such a question—and thereby demonstrate its utility to students—is by invoking a utilitarian analysis in which maximizing the welfare of all affected parties is assigned a higher priority than protecting rights of free association and movement. (Several high-quality online resources are available to the instructor, including those of the World Health Organization.27)
Public Understanding of Epidemiology Research: The Case of Screening Mammography Do regular mammograms reduce breast-cancer mortality? Consider the following sequence (adapted in part from a book about evidence-based medicine28): 1997 • An NIH consensus conference concludes that there is insufficient evidence to recommend for or against routine mammograms for women in their forties.29 • Congress condemns the consensus statement and seeks “to refute its conclusions and to give American women ‘clearer guidance’ about the need for mammograms.”30 2000 • A Cochrane Collaboration report reviews the quality of mammography trials and two meta-analyses and finds that “screening for breast cancer with mammography is unjustified . . . there is no reliable evidence that screening decreases breast-cancer mortality.”31 2001 • A “storm of debate and criticism” follows, including criticism of the quality of the research that produced the findings.32 • A formal review of the Cochrane report is said to have “confirmed and strengthened our original conclusion.”33 • One authority is quoted in the popular news media as summing up the tension: The debate has become so sophisticated from a methodology viewpoint that as a doctor my head is spinning . . . you read an article in The Lancet and you nod your head yes. Then you read the studies by people on the other side and
you nod your head yes. We're witnessing this fight between the pro- and anti-mammography forces and they're both arguing that "my data is better and we're right and they're wrong."34
2016
• Fast forward a decade and a half: the authoritative U.S. Preventive Services Task Force recommends biennial screening mammography for women aged fifty to seventy-four—and, generally, not otherwise.35
The issue is rich with opportunities to address the values and goals of research and their implications for public policy; the appropriate use and application of a standard, albeit controversial, technique for data analysis (meta-analysis); and the lay public's (mis)understanding of the growth of knowledge. Ethical issues include the potential personal or financial bias of partisans with commitments to or against a public health initiative, as well as the duty of scientists to communicate data in such a way as to make clear their scope and limitations. Debates over the health consequences of climate change provide a not-dissimilar opportunity, although one that also invites examination of the role of zealotry and ideology in matters affecting public health policy.
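Because the mammography controversy turns in part on meta-analysis, instructors may want to show students what the technique actually computes. The sketch below performs a generic fixed-effect, inverse-variance pooling of relative risks on the log scale; the trial numbers are invented for illustration and are not the results of any study or review cited above.

```python
import math

# Hypothetical (invented) trial results: relative risk of breast-cancer death
# in the screened group, with 95% confidence intervals. Not real data.
trials = [
    {"rr": 0.80, "ci": (0.62, 1.03)},
    {"rr": 1.02, "ci": (0.83, 1.25)},
    {"rr": 0.68, "ci": (0.49, 0.94)},
]

def pooled_rr(trials):
    """Fixed-effect, inverse-variance pooling of relative risks on the log scale."""
    num = den = 0.0
    for t in trials:
        log_rr = math.log(t["rr"])
        lo, hi = t["ci"]
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # standard error recovered from the 95% CI
        w = 1.0 / se**2                                   # inverse-variance weight
        num += w * log_rr
        den += w
    pooled = num / den
    pooled_se = math.sqrt(1.0 / den)
    return (math.exp(pooled),
            math.exp(pooled - 1.96 * pooled_se),
            math.exp(pooled + 1.96 * pooled_se))

rr, lo, hi = pooled_rr(trials)
print(f"Pooled RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

Even this toy example makes visible the judgment calls (which trials to include, how to weight them, whether a fixed-effect model is appropriate at all) that fueled the debate summarized in the timeline.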
Planning for Bioterrorism—Epidemiology and Health Informatics
Epidemiologists and other scientists rely on information technology to collect, store, analyze, and propagate information. The growth of computational "early warning systems" to identify sudden and evolving threats to public health raises an ensemble of issues fertile for curricular development. Thus, in addition to pandemic preparedness, the intentional introduction of dangerous biological, chemical, or radioactive agents into the air, water, or food provides superb opportunities to introduce students to ethical issues in epidemiology. After the anthrax attacks of 2001, which killed five people in the weeks following September 11, it became clear that a more efficient pathogen, introduced more broadly, could be devastating. What is needed, according to some, is a robust and highly sensitive and specific means of detecting such an attack as quickly as possible. With rapid identification and response, deaths and injuries could be reduced. Computational early warning systems—let us call this "emergency public health informatics" (EPHI)—raise ethical issues that are interesting both because of the new technology involved and because of their links to traditional challenges in epidemiology.36 These issues include:
Privacy and confidentiality—While epidemiologists encounter these values on a regular basis, the volume of data needed to fuel early warning systems and potentially linked to personal identifiers is extraordinary (e.g., pharmacy purchases and satellite photos of cars in drugstore parking lots, 911 calls, school system absentees, visits to emergency rooms and veterinary clinics). The ability to establish baseline levels for these variables and to detect increases quickly is an essential component of early warning systems.
Surveillance or research?—Citizens rely on scientists and governments to keep track of changes in threats to health and well-being and to intervene when appropriate. In open societies, at least, such monitoring or surveillance is tacitly permitted—that is, epidemiologists do not ask those citizens if they are willing to share news of births, deaths, vaccinations, domestic violence, and so forth. Indeed, it would be irresponsible not to collect and analyze data on these and many other phenomena. Contrarily, those same citizens expect their consent will be sought before health data are collected for research. Because health research must be reviewed by institutional review boards or research ethics committees, the question whether any particular act of data acquisition is part of a surveillance effort or a research project is of great significance. Computational epidemiology, including EPHI systems, presents extensive opportunities for classroom analysis and debate.
Judgments under uncertainty—While computers are essential for tracking changes in variables that might signal a bioterrorist attack, they are imperfect, and the data they process can be flawed or incomplete. This means that the analyses themselves might not be accurate or reliable. But if that is the case, how reliable are the interventions, warnings, or other responses to signs of an emergency threat? Epidemiologists are generally experienced in the management of incomplete and imperfect data, and computer tracking of emergencies provides a new opportunity to review strategies and their consequences for populations in cases in which the stakes are immediate and potentially very high.
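To make the idea of establishing baseline levels and detecting increases concrete, a course can walk through a toy aberration detector of the kind used in syndromic surveillance. The sketch below flags days whose counts exceed a moving baseline by a chosen number of standard deviations, broadly in the spirit of (though not identical to) the CDC's EARS family of algorithms; the counts and the threshold are invented for illustration.

```python
from statistics import mean, stdev

def flag_aberrations(daily_counts, baseline_days=7, threshold=3.0):
    """Flag days whose count exceeds the recent baseline by `threshold` standard deviations.

    Returns a list of (day_index, count, z_score) for flagged days. A real
    system would also handle day-of-week effects, reporting delays, and the
    trade-off between sensitivity (early warning) and false alarms.
    """
    flagged = []
    for day in range(baseline_days, len(daily_counts)):
        baseline = daily_counts[day - baseline_days:day]
        mu, sigma = mean(baseline), stdev(baseline)
        sigma = max(sigma, 1.0)                    # avoid dividing by ~0 on flat baselines
        z = (daily_counts[day] - mu) / sigma
        if z >= threshold:
            flagged.append((day, daily_counts[day], round(z, 1)))
    return flagged

# Hypothetical emergency-department respiratory visit counts, one value per day.
visits = [12, 15, 11, 14, 13, 12, 16, 14, 15, 13, 29, 41, 38]
print(flag_aberrations(visits))
```

Note that once an event begins it contaminates the moving baseline, and lowering the threshold trades false alarms for earlier warning; these are exactly the judgments under uncertainty described above.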
Pedagogical Opportunities There are many resources for those seeking to introduce or broaden the role of ethics in epidemiology curricula. These include codes of ethics, a global increase of interest in research integrity and the responsible conduct of science, and the efforts of others who have succeeded in developing ethics curricula. This section provides a survey of these resources.
Codes of Ethics Epidemiology has joined other sciences in reckoning that professionals are linked by values and that these values ought to be articulated in a code or oath. Codes of ethics, despite ancient precedents and antecedents, are notoriously difficult to draft: if they are too general, they provide little or no guidance in particular cases; if too specific, they risk omitting or overlooking issues and actions that can shape professional practice. In either case they risk sending the message that a list of rules or principles for right conduct—in the absence of ongoing education and the fostering of critical thinking skills that we earlier identified as essential to high-quality ethics education—is somehow adequate to foster such conduct. Moreover, a code of ethics risks irrelevance and stagnation if it is not regularly reviewed and updated. Nevertheless, when codes do exist and are well written by informed professionals, they can profitably be included in ethics curricula as providing exemplars of the kinds of behavior expected of professionals trained in a particular field. A committee of the ACE, for instance, drafted a set of “Ethics Guidelines” in 2000 after a series of surveys and workshops.37,38 Updates to the code were under way in 2021. After a statement of “core values, duties, and virtues,” this document, offered as one of five official “Statements of the College,” emphasizes and addresses issues in human subjects protection under headings such as “Elements of informed consent,” “Avoidance of manipulation or coercion,” and “Conditions under which informed consent requirements may be waived.” To be sure, the contents of any code are mere slogans unless there is some sort of analysis about the meaning and import of key terms. The American Public Health Association in 2001 published twelve “Principles of the Ethical Practice of Public Health” with accompanying rationale and notes.39 These principles are broad and include the statements “Public health programs and policies should be implemented in a manner that most enhances the physical and social environment” and “Public health institutions should protect the confidentiality of information that can bring harm to an individual or community if made public. Exceptions must be justified on the basis of the high likelihood of significant harm to the individual or others.” A draft revised code was in development in 2019. That a problem or issue is included or excluded is itself something that can inspire and inform a teaching moment. Students will benefit from being able to reflect on and debate contemporary challenges in the profession.
Research Integrity and the Responsible Conduct of Science
For better or worse, the past three decades have seen unprecedented attention to human subjects protection and the responsible conduct of research (RCR), in large part because of abuses of research
Ethics Curricula 241 participants and dramatic cases of scientific misconduct. While we might wish that such attention had been fostered by loftier motivations, it is, in fact, the result of scandal and public disclosure of wrongdoing. Epidemiologists have an opportunity proactively to embrace RCR curricula and, in so doing, render ethics education more than a response to bad actors and corner-cutters. That is, if one accepts that the goal of ethics pedagogy is improved critical thinking skills,40 then instead of seeking to shame evildoers we should emphasize that improved RCR education will help professionals manage conflicts of interest and of commitment, address questions related to intellectual property, and educate others about the importance of research integrity in building public trust. It is important that such pedagogy not be a mere recitation of rules against wrong acts, such as fabrication, falsification, and plagiarism. There is nothing ethically challenging about such blatant offenses. They are uncontroversially wrong. The growth of interest in RCR curricula, on the other hand, affords an opportunity to confront more difficult cases and issues and foster greater awareness—perhaps especially in epidemiology—of the importance of public trust in communicating the results of professional activities. There are many resources that can be used to advance RCR education in the classroom, several of which are listed on an NIH website, “Responsible Conduct of Research Training.”41 Many institutions have successfully established courses in ethics and epidemiology and public health42 or incorporated questions of ethics and public policy into existing courses. The question as to which approach is preferable is an interesting one, though any attempt to introduce ethics education where there was none is laudable. One might hope that both approaches could be adopted: create an ethics-and-epidemiology course and include ethics in appropriate places in courses in biostatistics, environmental health, toxicology, and so on. Either effort requires the support of leadership and faculty in academic institutions. For professional societies and industry, it also requires that leaders both believe and are willing to devote resources to the idea that ethics education is an important part of professional development. In some cases, the greatest challenge can be the identification of curricular tools and competent faculty. One organization, the ACE, has reconstituted an ethics committee (formerly the Ethics and Standards of Practice Committee), which has taken on the task of collecting ethics syllabi from institutions willing to make such resources available to the epidemiology community.43 The Association of Schools of Public Health has created an “Ethics and Public Health: Model Curriculum” containing detailed guidance and useful resources on a number of issues.44 A commitment to bioethics will also realize the opportunity to teach and use faculty from other schools or departments; for instance, computer scientists can identify issues in database security, physicians and nurses can examine culturally
mediated local public health problems, and philosophers can inform discussions about moral foundations as well as challenges related to causation, risk, and uncertainty. Additionally, links to course offerings in philosophy departments (philosophy of science, ethics, bioethics, etc.) and law schools could be especially useful for students with advanced interests.
Conclusions and Recommendations
There remains a need to address ethical issues in a comprehensive and rigorous manner in epidemiology and public health curricula and at the postgraduate and professional levels. There is a concomitant need for high-quality course materials and for them to evolve. Such course development will improve the quality of public health research, stimulate research in science and ethics, clarify the values that guide epidemiological inquiry, reduce the attrition of idealistic students discouraged by unethical mentors, and ensure that epidemiology and public health attend to ethics at a pedagogical level commensurate with their importance. We therefore advocate a change in the "standard of care" in epidemiology education. That is, the arguments here are intended to support not mere curricular niceties but requirements for training programs in epidemiology and public health. A failure to include some measure of bioethics training in the curriculum is itself both pedagogically and ethically disappointing. Institutions, professional societies, industry, and government should devote appropriate resources and personnel to realizing these goals. Ethics in epidemiology and public health should enjoy a role that reflects its importance, its potential contributions, and its place in science and society.
References 1. Coughlin, S.S., and Etheredge, G.D. On the need for ethics curricula in epidemiology. Epidemiology 6 (1995): 566–567. 2. Rossignol, A.M., and Goodmonson, S. Are ethical topics in epidemiology included in the graduate epidemiology curricula? American Journal of Epidemiology 142 (1995): 1265–1268. 3. Prineas, R.J., Goodman, K.W., Soskolne, C.L., et al., for the American College of Epidemiology Ethics and Standards of Practice Committee. Findings of the American College of Epidemiology’s survey of ethics guidelines. Annals of Epidemiology 8 (1998): 482–489. 4. Hlaing, W.W., Saddemi, J.L., and Goodman, K.W. Expanding ethics curriculum resources: American College of Epidemiology’s syllabus collection project. Annals of Epidemiology 38 (2019): 1–3.
Ethics Curricula 243 5. Angell, M. Ethical imperialism? Ethics in international collaborative research. New England Journal of Medicine 319 (1988): 1081–1083. 6. Levine, R.J. Informed consent: some challenges to the universal validity of the Western model. In Z. Bankowski, J.H. Bryant, and J.M. Last, eds. Ethics and Epidemiology: International Guidelines. Council for International Organizations of Medical Sciences (CIOMS), 1991: 47–58. 7. Gostin, L. Ethical principles for the conduct of human subject research: population- based research and ethics. Law, Medicine and Health Care 19 (1991): 175–183. 8. World Health Organization. WHO Guidelines on Ethical Issues in Public Health Surveillance. 2017. https://www.who.int/ethics/publications/public-health-surveillance/en/ 9. Sandman, P.M, Emerging communication responsibilities of epidemiologists. In W.E. Fayerweather, J. Higginson, and T.L. Beauchamp, eds. Industrial Epidemiology Forum’s Conference on Ethics in Epidemiology. Journal of Clinical Epidemiology 44 (Suppl. I) (1991): 41S–50S. 10. Moreno, J.D., ed. In the Wake of Terror: Medicine and Morality in a Time of Crisis. MIT Press, 2003. 11. Levy, B.S., and Sidel, V.W., eds. Terrorism and Public Health: A Balanced Approach to Strengthening Systems and Protecting People. Oxford University Press, 2003. 12. Siegel, M. False Alarm: The Truth About the Epidemic of Fear. Wiley, 2005. 13. Weed, D.L., and McKeown, R.E. Ethics in epidemiology and public health. I. Technical terms. Journal of Epidemiology and Community Health 55 (2001): 855–857. 14. McKeown, R.E., and Weed, D.L. Ethics in epidemiology and public health. II. Applied terms. Journal of Epidemiology and Community Health 56 (2002): 739–741. 15. Beauchamp, T.L., and Childress, J.F. Principles of Biomedical Ethics, 6th ed. Oxford University Press, 2008. 16. Beauchamp, T.L. Methods and principles in biomedical ethics. Journal of Medical Ethics 29 (2003): 269–274. 17. Gert, B. Morality: Its Nature and Justification, revised ed. Oxford University Press, 2005. 18. Clouser, K.D., and Gert, B. Common morality. In G. Kushf, ed. Handbook of Bioethics: Taking Stock of the Field from a Philosophical Perspective. Kluwer Academic Publishers, 2004: 121–141. 19. Jonsen, A.R., and Toulmin, S.E. The Abuse of Casuistry. University of California Press, 1988. 20. Arras, J.D. Getting down to cases: the revival of casuistry in bioethics. Journal of Medicine and Philosophy 16 (1991): 29–51. 21. Mann, J.M., Gruskin, S., Grodin, M.A., and Annas, G.J., eds. Health and Human Rights: A Reader. Routledge, 1999. 22. Salerno, J., Knoppers, B.M., Lee, L.M., et al. Ethics, Big Data and computing in epidemiology and public health. Annals of Epidemiology 27 (2017); 27: 297–301. 23. Lipworth, W., Mason, P.H., and Kerridge, I. Ethics and epistemology of Big Data. Journal of Bioethical Inquiry 14 (2017): 485–488. 24. Goodman, K.W. Ethics, Medicine, and Information Technology: Intelligent Machines and the Transformation of Health Care. Cambridge University Press, 2016. 25. Pearce, N. Corporate influences on epidemiology. International Journal of Epidemiology 37 (2008): 46–53.
26. Soskolne, C.L. Epidemiology: questions of science, ethics, morality, and law. American Journal of Epidemiology 129 (1989): 1–18. 27. https://www.who.int/emergencies/diseases/novel-coronavirus-2019 and www.who.int/ethics/influenza_project/en/index.html 28. Goodman, K.W. Ethics and Evidence-Based Medicine: Fallibility and Responsibility in Clinical Science. Cambridge University Press, 2003. 29. National Institutes of Health Consensus Development Panel. National Institutes of Health Consensus Development Conference Statement: breast cancer screening for women ages 40–49, January 21–23, 1997. Journal of the National Cancer Institute 89 (1997): 1015–1026. 30. Woolf, S.H., and Lawrence, R.S. Preserving scientific debate and patient choice: lessons from the consensus panel on mammography screening. Journal of the American Medical Association 278 (1997): 2105–2108. 31. Gøtzsche, P.C., and Olsen, O. Is screening for breast cancer with mammography justifiable? Lancet 355 (2000): 129–134. 32. Horton, R. Screening mammography—an overview revisited. Lancet 358 (2001): 1284–1285. 33. Olsen, O., and Gøtzsche, P.C. Cochrane Review on screening for breast cancer with mammography. Lancet 358 (2001): 1340–1342. 34. Kolata, G. Study sets off debate over mammograms' value. New York Times, national ed., December 9, 2001: A1, A32. 35. U.S. Preventive Services Task Force. Final recommendation statement: breast cancer: screening. November 2016. https://www.uspreventiveservicestaskforce.org/Page/Document/RecommendationStatementFinal/breast-cancer-screening1 36. Szczepaniak, M.C., Goodman, K.W., Wagner, M.W., et al. Advancing organizational integration: negotiation, data use agreements, law, and ethics. In M.W. Wagner, A.W. Moore, and R.M. Aryel, eds. Handbook of Biosurveillance. Academic Press, 2006: 465–480. 37. https://www.acepidemiology.org/ethics-guidelines. 38. McKeown, R.E., Weed, D.L., Kahn, J.P., and Stoto, M.A. American College of Epidemiology ethics guidelines: foundations and dissemination. Science and Engineering Ethics 9 (2003): 207–214. 39. Thomas, J.C., Sage, M., Dillenberg, J., and Guillory, V.J. A code of ethics for public health. American Journal of Public Health 92 (2002): 1057–1059. http://www.apha.org 40. Melo-Martin, I., and Intemann, K.K. Can ethical reasoning contribute to better epidemiology? A case study in research on racial health disparities. European Journal of Epidemiology 22 (2007): 215–221. 41. https://oir.nih.gov/sourcebook/ethical-conduct/responsible-conduct-research-training 42. Thomas, J.C. Teaching ethics in schools of public health. Public Health Reports 118 (2003): 279–286. 43. See also and again the collection of syllabi at https://bioethics.miami.edu/education/epi-syllabi/index.html 44. https://repository.library.georgetown.edu/handle/10822/556779.
11
Conflicts of Interest Walter Ricciardi and Carlo Petrini
Definitions The term “conflicts of interest” (CoI) encompasses a wide spectrum of behaviors or actions potentially involving personal gain or financial interest: these generally arise when an individual uses his or her position in order to derive personal gain or some benefit to his or her family, household, or other party. One of the most frequently quoted definitions of CoI is taken from a report by the U.S. Institute of Medicine (IOM) dedicated solely to this issue: Conflicts of interest are defined as circumstances that create a risk that professional judgments or actions regarding a primary interest will be unduly influenced by a secondary interest. Primary interests include promoting and protecting the integrity of research, the quality of medical education, and the welfare of patients. Secondary interests include not only financial interests . . . but also other interests, such as the pursuit of professional advancement and recognition and the desire to do favors for friends, family, students, or colleagues.1 (p. 6)
This definition is cited in, among others, the Encyclopedia of Global Bioethics.2 The Encyclopedia of Applied Ethics provides details concerning what is at stake. A CoI is a situation in which some person P (whether an individual or corporate body) is (1) in relationship with another requiring P to exercise judgment on the other’s behalf and (2) P has a (special) interest tending to interfere with the proper exercise of judgment in that relationship. The crucial terms in this definition are “relationship,” “judgment,” “interest,” and “proper exercise.”3 (p. 571)
Another definition, this time from the Encyclopedia of Bioethics, draws a distinction between conflicts of interest and conflicts of obligation:
In a conflict of interest, one's obligations to a particular person or group conflict with one's self-interest . . . Conflicts of interest should be distinguished from conflicts of obligation, in which one's obligations to one person or group conflict with one's obligations to some other person or group. The latter need not necessarily involve any threat to the agent's own interests.4 (p. 676)
Within the biomedical setting the issue of CoI acquires particular relevance in clinical practice. The New Dictionary of Medical Ethics notes that Conflicts of interest arise in clinical practice when practitioners become involved in arrangements that introduce other considerations that are potentially incompatible with the best interest of patients. A conflict of interest is not an action, but a situation that can adversely influence action. Even if they do not actually lead to unethical actions, conflicts of interest are inherently problematic because they weaken professional standards and undermine trust.5 (p. 53)
The British National Institute of Clinical Excellence describes various types of CoI: “[i]nterests can be specific or non-specific and financial or non-financial. Financial interests can be personal or non-personal”6: • Specific/non-specific: “An interest is ‘specific’ if it refers directly to the matter under discussion. An interest is ‘non-specific’ if it does not refer directly to the matter under discussion.” (pp. 4–5) • Financial/non-financial • “A personal financial interest . . . is one where there is or appears to be opportunity for personal financial gain or financial gain to a family member.” (p. 5) • “A non-personal financial interest involves payment or other benefit to a department or organisation in which the individual is employed but which is not received personally.” (p. 6) • “A personal non-financial interest . . . refers to an opinion on the matters under consideration published in the 12 months before joining an advisory committee or during the period of membership of an advisory committee.” (p. 6) It is also important to distinguish between institutional and individual CoI. The former is usually only of a financial nature, while individual CoI may also be non-financial. This distinction has been described more fully by the Association of American Universities, according to which institutional CoI may arise when an institution (in this case a university), “any of its senior management or
Conflicts of Interest 247 trustees, or a department, school, or other sub-unit, or an affiliated foundation or organization, has an external relationship or financial interest in a company that itself has a financial or other interest in a faculty research project.” Conflicts may equally arise if senior managers or trustees also sit on the boards of (or are in some kind of official relationship with) organizations that have significant business dealings with the university. The existence or even the perception of such conflicts may lead to a real or suspected bias in the management or conduct of research projects at the university, and failure to recognize and deal with them may lead to decisions or actions at variance with the university’s missions, obligations, or values.7 These citations and those that follow all refer to “conflicts of interest,” although the term “conflicting interests” is also found in the literature, given that the conflicting interests are more than one. However, as the term refers to a situation rather than to the possible number of interests involved, the term “conflicts of interest” will be used here throughout. CoI is not the same as conflicts of obligation, of commitment, or of effort. A conflict of obligation arises when the duties, or obligations, of an individual or an institution call for more than one course of action but the circumstances allow for only one. They are essentially conflicts among different primary interests.1 Conflicts of commitment arise between an employee’s primary responsibilities to an institution and his or her extra-institutional commitments.1 Conflicts of effort arise when the demands made by parties other than an employee’s primary employer interfere with the performance of the employee’s primary duties. Researchers are often required, for example, to serve on institutional or professional committees, to write papers, to source (and sometimes also to manage) funds, and to teach seminars, all of which can become so time-consuming as to compromise their primary professional duties. These conflicts are different from CoI, though circumstances may well present both, in which case the management of conflicts of effort becomes especially complex.8
Selection of Sample Reference Documents
The issue of CoI has been addressed by several eminent institutions, some of which have issued guidelines containing practical recommendations for handling both real and potential conflicts. The World Medical Association (WMA) refers explicitly to CoI in many of its key reference publications. For example:
248 Ethics and Epidemiology • The Declaration of Geneva states that “The health of my patient will be my first consideration.”9 • The International Code of Medical Ethics requires that “A physician shall act only in the patient’s interest when providing medical care which might have the effect of weakening the physical and mental condition of the patient.”10 • The Declaration of Helsinki stresses a crucial principle: “While the primary purpose of medical research is to generate new knowledge, this goal can never take precedence over the rights and interests of individual research subjects.”11 Alongside regulations and recommendations relating to CoI contained in several of its published documents, in 2015 the WMA adopted an updated statement dedicated specifically to this issue.12 This document addresses the particular situation of a physician who not only works in a clinical setting but also engages in research, and reiterates the fundamental criterion that the well-being of the patient overrides all other interests, including the advancement of research. Appropriate measures should therefore be “put in place to protect the patient, including disclosure of the potential conflict to the patient.” In the case of clinical trials, the various elements that could lead to CoI should be clearly set out in a contract signed by all the parties involved (sponsors, investigators, program participants): these should include, for instance, the financial compensation paid to the physician-researcher, the ownership of the results of the research, and the right of participants to be given relevant information at any time during the trial, as well as access to and publication of the results. The statement also includes a recommendation that physicians not take part in trials relating to areas of medical expertise other than their own. Physicians are also enjoined to avoid self-referrals, in other words the referral of patients to health care facilities, such as laboratories, in which the physician is not professionally engaged but in which he or she has a financial interest. The document also addresses the issue of organizational and institutional conflicts and urges the adoption of appropriate policies. Two documents published by the Council for International Organizations of Medical Sciences (CIOMS) are also of particular interest. The first is the International Ethical Guidelines for Epidemiological Studies,13 first issued in 1991. Guideline 22 of the most recent edition, published in 2009, addresses the issue of “disclosure and review of potential conflicts of interest” and states that The investigator is responsible for ensuring that the materials submitted to an ethical review committee include a declaration of any potential conflicts of interest affecting the study. Ethical review committees should develop forms that facilitate the reporting of such potential conflicts and materials explaining their use for investigators. Ethical review committees should evaluate each study in
the light of any declared conflicts and ensure that appropriate means of mitigation are provided. If a potentially serious conflict of interest cannot be adequately mitigated, the committee should not approve the project. (p. 88)
The guidelines thus attribute a crucial role in the evaluation of CoI to ethics committees. The second document is the International Ethical Guidelines for Health-Related Research Involving Humans,14 Guideline 25 of which addresses the question of CoI. This recommends that research institutions draw up and implement policies to mitigate potential CoI and promote awareness of this issue among their employees; that researchers ensure that the material submitted to ethics committees for evaluation contains a declaration regarding CoI; and that ethics committees properly assess CoI and require their members to disclose any of their own. The commentary to Guideline 25 identifies different types of CoI that may arise in connection with research institutions, researchers, and ethics committees and establishes additional criteria for their management, including the requirement that measures adopted for the management of CoI should be proportional to their seriousness, as well as being transparent and actively communicated to those affected. A wide-ranging report by the IOM1 is dedicated entirely to CoI and examines the problem not from the angle of a single field but across medical research, education, and practice. The report identifies basic criteria for the assessment and management of CoI:
• Proportionality: Policies to manage CoI should be both effective and efficient. While such policies are generally beneficial, they can also be obstructive, adding to the workload and hampering progress toward goals; they should therefore not be so burdensome as to become a hindrance.
• Transparency: CoI management policies should be accessible and comprehensible to those involved if they are to be fairly implemented. Transparency can also be a means for institutions to help one another find the most suitable measures to address CoI.
• Accountability: Those responsible for monitoring, implementing, and reviewing CoI policies should be clearly identified by name.
• Fairness: CoI policies should apply in equal measure to all the groups involved both within an institution and across similar institutions.
Of the key generic recommendations contained in the report, the following are especially relevant:1,15
• Institutions that operate in the fields of medical research and education, in clinical care, in the development of guidelines, and in implementing health policies should adopt appropriate standardized policies to manage CoI.
• Governments should develop nationwide programs requiring pharmaceutical, medical device, and biotechnology companies to report any payments to physicians and other prescribers; biomedical researchers; health care institutions; professional societies; patient advocacy and disease-specific groups; providers of continuing medical education; and foundations established by any such entities.
• Medical and scientific institutions should implement adequate measures to restrict the participation of researchers with CoI in research with human subjects, and any exceptions to this rule should be justified and made public.
• The competent institutions should promote training in CoI-related issues.
• Special attention should be focused on relations between physicians and industrial companies. Specifically, physicians should not accept gifts or other “items of material value” from pharmaceutical, medical device, or biotechnology companies, though they may accept “payment at fair market value for a legitimate service” in specified situations. They should not participate in the presentation of data or publish articles controlled by industry. Additionally, they should not accept drug samples except “in certain situations” when the drugs are for “patients who lack financial access to medications.”
• Medical companies and their foundations should review the ways in which they interact with physicians and forbid the supply to physicians of privileges and benefits of various types, such as gifts, invitations, accommodation, pharmaceutical products, and so forth. Any arrangements for consultations should be “for necessary services, documented in written contracts and paid for at fair market value.”
• Special care should be taken to avoid CoI when developing guidelines.
• Institutions engaged in medical research and training, clinical care, the development of guidelines, or the management of health policies should set up standing committees on institutional CoI and ensure that committee members are not themselves in situations of conflict.
• The competent institutions should promote research into CoI.
1. The developers of guidelines should make every effort to exclude panel members with direct financial or significant indirect CoI.
2. Prior to the appointment of a panel to draw up guidelines, all potential members should be vetted in relation to CoI, regardless of which disciplines or stakeholders they represent.
3. Groups set up to develop guidelines should use standardized forms for the reporting of possible CoI.
4. The disclosure of interests, including direct and indirect financial interests, by guideline development panels should be made public and be readily accessible to all potential users of the guidelines.
5. Any changes in the CoI status of guideline development panel members should be declared and updated at each meeting of the group and at regular intervals (i.e., annually for standing groups).
6. No person who has a direct financial or other relevant indirect CoI should be invited to chair a guideline development committee. When such CoI are unavoidable, a co-chair who has no conflict should be appointed to lead the panel.
7. Although experts who have specific knowledge or expertise but who have CoI may be permitted to participate in discussing specific topics, an appropriate balance of opinion should be maintained among those invited to offer such expert opinions.
8. No member of a guideline development panel established to decide on the direction or force of a recommendation should have a direct financial CoI.
9. An oversight committee should be appointed to develop and implement the regulations governing CoI.
CoI and Research Integrity
The prevention of undue CoI and their management are intrinsic to research integrity,8 as is evident from the attention devoted to such conflicts by the major documents on the subject of integrity in the research setting. Although these documents are intended for a specific context, some of them apply to all situations. Such documents include, in the United States, the report entitled “Integrity in Scientific Research”17 published by the IOM and the National Research Council, which was followed by the report of the National Academies of Sciences, Engineering, and Medicine entitled “Fostering Integrity in Research”18 and, in Europe, “The European Code of Conduct for Research Integrity” published by the All European Academies (ALLEA),19 which was preceded by a “Memorandum on Scientific Integrity.”20
The first of these documents recommends “transparency in conflicts of interest or potential conflicts of interest” at the individual level and the need to “anticipate, reveal, and manage individual and institutional conflicts of interest” at the institutional level.17 The ALLEA code recommends a formal undertaking between partners in “collaborative working” to comply with standards of research integrity and with regulations, ethical codes, and procedures intended to deal with CoI and with misconduct. To this end it urges all authors to “disclose any CoI and financial or other types of support for the research or for the publication of its results.”19 The commitment to combating CoI is not limited to researchers but also applies to reviewers and editors, who should undertake to “withdraw from involvement in decisions on publication, funding, appointment, promotion or reward”19 if there is a CoI.
From Clinical Medicine to Epidemiology
In the biomedical setting the issue of CoI arises mainly in relation to clinical medicine, since the direct relationships between physicians and the pharmaceutical industry create an ideal environment for the development of CoI. Regulatory authorities in many countries have adopted policies to deal with these conflicts, albeit from different angles.21 Similar situations nonetheless arise in other settings, such as epidemiology. When populations or specific population groups are involved, as they are in epidemiology, CoI can arise in a number of ways. The Global Burden of Disease Study,22 for instance, estimates that approximately one third of deaths worldwide are attributable to behavioral risk factors associated in particular with the consumption of unhealthy products and exposure to harmful substances produced by profit-driven commercial entities.23 The management of CoI in epidemiology is a complex matter requiring cooperation among physicians, researchers, institutions, medical associations, and government agencies. Such conflicts typically arise when research studies are financed by commercial organizations acting for profit, a situation that can lead to bias. But the involvement of commercial operations is not the only potential cause of bias in epidemiological studies. CoI can create bias even in epidemiological research funded by institutions or government agencies, where they may interfere with the design, management, data analysis, and publication stages of studies, for instance by not publishing results unfavorable to the interests of sponsors. The American College of Epidemiology Ethics Guidelines recommend that the findings of epidemiological studies should not be allowed to become distorted by “preconceptions or organized efforts,” regardless of whether a study is financed by private or public funds, and warn that research can become biased if
researchers are subjected to any kind of pressure from persons or organizations whose interests could be promoted by the results. Epidemiologists should not accept contractual obligations that require them to reach specific conclusions.24 The “International Ethical Guidelines for Epidemiological Studies” published by the CIOMS note, in the “Commentary on Guideline 22,” that both those in institutional roles who negotiate agreements and researchers should be particularly vigilant in this regard. In the case of individuals whose institutional roles include negotiating agreements, particular vigilance implies declining agreements or funding that could compromise the integrity of research. The CIOMS points out that the potential existence of CoI is nonetheless not a sufficient reason to reject agreements or funds a priori; the particular circumstances of each situation should be considered case by case, bearing in mind, for instance, that the relevance of a particular CoI tends to decrease as the number of sponsors increases. Researchers may find themselves in a position of conflict, despite their best intentions, if the institution in which they operate is itself in a situation of conflict. This places a heavy responsibility on those in decision-making roles. The negative impact of CoI in the epidemiological field can be exacerbated by the fact that the repercussions may affect sizeable groups of individuals rather than single patients, as is usually the case in the clinical setting. Recommendations may be adopted without adequate evidence to support them, a risk recognized in a Council of Europe memorandum that raised concerns about the possibility for representatives of the pharmaceutical industry to directly influence public decisions taken with regard to the H1N1 influenza, and about whether some statements had been adopted as public health recommendations without being based on sufficient scientific evidence (for example, the recommendation on double vaccination).
The memorandum is even more explicit when it states that various factors have led to the suspicion that there may have been undue influence by the pharmaceutical industry, notably the possibility of conflicts of interest of experts represented in WHO [World Health Organization] advisory groups, the early stage of preparing contractual arrangements between member states and pharmaceutical companies as well as the actual profits that companies were able to realise as a result of the influenza pandemic.25
Among the key players in the prevention and management of CoI are ethics committees,26 and their role in epidemiological studies is of particular relevance.
Unlike clinical studies, epidemiological studies are not subject to control and surveillance mechanisms, with the possible exception of peer review of the research protocol and final results. In the first place, ethics committees can evaluate declarations of CoI, using appropriately prepared forms accompanied by detailed instructions for their completion. Standard forms and instructions can be drawn up by ethics committees, groups of ethics committees, administrative bodies, scientific associations, or other entities. One of the measures ethics committees can deploy to counter possible CoI is to monitor each stage of a study, from its design, through the collection, processing, and interpretation of data and their ownership, to the drawing up and publication of conclusions. The fact that in some ethics committees the members receive a fee for reviewing a study does not in itself present a CoI, provided that the fee is reasonably proportionate to the costs of conducting the review, is not dependent on the outcome of the review, is uniform for all projects of comparable complexity, and is set and negotiated by persons other than those actually engaged in the ethical review process. It goes without saying that the members of ethics committees should themselves disclose any potential CoI.13 In general, epidemiologists should recognize that, in common with other members of the global scientific community, they have an absolute duty to ensure that each step of a research project is conducted in an objective manner. Scientists may have preconceived ideas, leading to prejudice in their approach to an issue under study, and it is therefore imperative to ensure that objectivity and fairness are maintained constantly throughout each stage, from the design to the conduct, interpretation of results, and reporting of findings. Researchers should recognize that frankness and fair-mindedness are elementary prerequisites in their profession and declare any potential CoI to all those involved in a study, including their colleagues, sponsors, research participants, editors, and employers, in order to ensure complete transparency and the prevention of CoI where possible; they should also learn to distinguish between perceived CoI and real conflicts.24
Conflict or Confluence of Interests?
CoI have long been recognized as being an intrinsic component of research activities. The National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research noted as much when it stated that “investigators are always in position of potential conflict by virtue of their concern with the pursuit of knowledge as well as the welfare of human subjects of their research.”27 This is the almost inevitable consequence of the fact that one person often has to fulfill two roles. The borders between clinical practice
and research can often become blurred, as when a physician also engages in research. Because the interests of the clinician and the researcher may not be identical, any potential conflict should be addressed by making sure that, whenever a single individual has to assume both roles, appropriate steps are taken to protect the patient, including the disclosure to the patient of the potential conflict. The case of the physician-researcher is nonetheless not the only situation in which several roles may overlap. As already noted, the activities of scientists may be complicated by conflicts of effort, when one individual has to divide his or her time between a variety of research projects, teaching, public service, professional service, interpreting data, managing and coordinating research, sourcing funds to support research projects, administrative or managerial roles, consultations, and more. Scientists are also frequently called upon to give expert opinions in executive, legislative, or judicial deliberations. Conflicts of effort may easily evolve into CoI. To address these problems, the WMA notes that
Although the participation of physicians in many of these activities will ultimately serve the greater public good, the primary obligation of the individual physician continues to be the health and well-being of his or her patients. Other interests must not be allowed to influence clinical decision-making (or even have the potential to do so).12
Although the term “conflict of interest” usually carries a negative connotation regardless of circumstances, this is a reductive approach at both the institutional and individual levels. At the institutional level, partnerships between public and private entities are fairly common28 and can create valuable opportunities for synergy of skills and resources (not only economic) to achieve otherwise highly challenging goals. When appropriately regulated,29 these partnerships offer enormous potential that should not be wasted. At the individual level, it is customary for expert scientists to participate in collaborative projects, provide advice, receive funds, and take part in working groups (as well as become “key opinion leaders”). One clinical trials investigator even commented that “If you have no conflict of interest, then you would stand out like a sore thumb.”30 The requirement for a total absence of any form of CoI is thus not only virtually impossible to achieve but also potentially harmful. It would lead, among other things, to disengagement from the networks that are an essential resource for meaningful research: the evidence tells us that these networks are the lifeblood of research, knowledge, and innovation. This state of affairs is neatly summed up in the title of an invited commentary: “No conflict, no interest,”31 and some authors
have suggested that the expression “conflict of interests” should be replaced by “confluence of interests.”32 It therefore seems appropriate to consider the multitude of interests involved without necessarily pointing to their existence as representing an unacceptable state of conflict. Several elements should be taken into account:
• In the research setting CoI are a typical fact of life for both institutions and scientists, and it is simply unrealistic to expect that they can be completely eliminated.
• All the participants in a research project are at risk of CoI: universities, government agencies, politicians, nonprofit funders and sponsors, patient advocacy groups, scientists, scientific journals, journalists, and so forth.
• There can be harmony, as well as conflict, between primary and secondary interests.
• Cooperation between industry and health care professionals has the potential to achieve significant results above and beyond those that can be achieved by either party in isolation.
• The relations between industry and universities or research institutions have evolved considerably over recent years. On the one hand, the subcontracting of research by industries is a common occurrence. On the other hand, the number of spinoffs created by universities and research institutions is constantly rising.
These potentially fruitful synergies should be exploited rather than hampered, or there is a risk that the focus on CoI will become excessive and stifle them. The IOM report recognizes that “Ties with industry are common in medicine. Some have produced important benefits, particularly through research collaborations that improve individual and public health.”1 Although CoI may create a potential risk of compromising the judgment of researchers and of creating undue bias from outside commercial interests, they may also have positive consequences: cooperation between industry and academic researchers has the potential to promote new discoveries and the development of new therapies and technologies. The expression “confluence” is in any case more appropriate than “conflicts” when non-financial interests are also included in the discussion.
Non-Financial Interests
The personal interests that can direct or influence an individual’s actions are not always of a financial nature: the desire for fame or the chance to be published in high-ranking journals or to be invited to address a prestigious conference may
sometimes have more appeal than monetary wealth. Non-financial interests, in other words, are largely intellectual in nature. There is evidence that non-financial interests may “call into question the impartiality of [systematic] reviews”33 or the allocation of resources for the funding of research projects.34 Opinion as to whether or not such non-financial interests should be declared is nonetheless divided.35 Among the various considerations put forward by opponents of the need to disclose such non-financial interests are the following:
• Non-financial interests may be of an extremely personal nature, and their disclosure should therefore be handled with caution. Except where other rights prevail, personal privacy should be respected and individuals (and, on occasion, their families) protected from the risk of discrimination. The limitation of public access to certain personal files is one mechanism that could help to redress the balance between disclosure and privacy requirements.
• Although intellectual interests can lead to bias, this does not necessarily mean that they represent a CoI and should be treated as such.35 Such interests are intrinsic to human nature, and to consider every intellectual interest as representing a “conflict” is not only misleading but also harmful, since it strains the system.
• Intellectual interests are often already evident without being specifically disclosed; they may be apparent from an individual’s publications or curriculum vitae. Financial interests, by contrast, are generally concealed.
• While financial interests can be prevented, intellectual interests are virtually impossible to eliminate and may not necessarily compromise the integrity of research. “All scientists have intellectual biases. They’re integral. They’re inherent. People couldn’t do without them”36 and “[p]erhaps science might seem more human and more believable if we all agreed that conflicts of interest are everywhere.”37
Another area in which there is particular focus on non-financial interests is that of publications in scientific journals. The International Committee of Medical Journal Editors asks that authors declare any non-financial “relationships or activities . . . that readers could perceive to have influenced” their work,38 and many scientific publications refer to non-financial CoI in their policies.39 Some requirements are radical: authors may be asked to declare every kind of influence, including those of a non-economic and non-financial nature, to which they may have been subjected.40 And not only are authors’ CoI considered relevant; reviewers and editors also need to pay attention to possible intellectual interests. The peer-review system is considered one of the fundamental safeguards for the
quality and advancement of science, but it can be exploited for personal interests. There have been cases in which editors or members of editorial boards have postponed or hindered the publication of studies by their competitors in order to be the first to take the credit for a given result.
Conclusions
For many years a serious commitment to professionalism and the suppression of self-interest seemed sufficient to handle CoI. But the evolution of health care into a high-cost, big-business phenomenon that began in the 1960s has led to the spread of both CoI and an awareness of the problem they represent. CoI are a combination of circumstances rather than an activity. They create the risk that professional judgments or actions might be unduly influenced by secondary interests. In medicine, primary interests include the promotion and protection of the health and welfare of individuals and populations, the integrity of research, and the quality of medical education and training. Secondary interests include financial interests (income, patents, stock, etc.) and non-financial interests (professional advancement, reputation, etc.).2 Successful scientists have numerous demands on their time, expertise, and attention that can compete with their primary mission to attain new knowledge. The multiplicity of interests, many of which are legitimate, shifts the center of gravity from a conflict to a confluence of interests and underscores the complicated nature of the moral question in the scientific and academic fields. It is thus advisable to approach this state of affairs as one would approach a complex ecosystem,32 bearing in mind that it is neither possible nor desirable to avoid all forms of CoI at all times.
The Relevance of CoI
The seriousness of a CoI depends principally on two elements: (1) the likelihood that professional decisions made in certain circumstances might be unduly influenced by a secondary interest and (2) the severity of the harm or wrong that could result from such an influence.1 CoI with the potential to harm human health are particularly serious. In an epidemiological setting, CoI could lead to the blurring of an association between cause and effect, thereby possibly compromising the adoption of appropriate action for prevention and remediation.41 In a clinical setting it is the conduct of clinical trials that is potentially most at risk from CoI.42 A physician who is paid to oversee a trial could be led by a CoI to recruit patients who are not well suited to a particular drug, thereby depriving them of more suitable treatment
and placing their health at risk,43 as well as possibly skewing the results of the trial in question, with potentially negative consequences. CoI should be distinguished from other types of conflict. As already noted, researchers are often called upon to divide their time between research and other responsibilities: the management of these responsibilities can be challenging, but is in any case different from the management of CoI. As is the case with CoI, many academic institutions adopt policies that offer some guidance on conflicts of commitment, for instance through the introduction of limits to the amount of time faculty members may spend on outside activities.44
Reference Principles
On the basis of these considerations, it is possible to identify certain reference principles for the handling of CoI. The most relevant are listed here;45 they can of course be adapted to specific circumstances as necessary.
• Integrity: Financial or other situations that could influence the meticulousness and honesty of an individual’s actions should be avoided.
• Selflessness: Each individual’s conduct should be guided solely by the public interest and not by financial gain or other benefits for the individual or for his or her family, friends, or other parties.
• Accountability: Each individual should be accountable for his or her decisions and actions and not attempt to avoid appropriate scrutiny.
• Openness: Transparency in research, personal conduct, and decision-making is indispensable.
• Honesty: CoI should be not only declared but also resolved.
Disclosure of CoI
The first rule for handling CoI is disclosure: conflicts must be openly declared, or they cannot be properly handled. Disclosure is also an effective means of protecting the reputation of individuals and institutions. Public disclosure does not necessarily imply that researchers may not receive economic benefits, but it ensures transparency in relations with the public. As a general rule it is helpful to remember the recommendation “If in doubt, declare a competing interest.” In other words, prevention is better than cure. The WMA notes that
[i]n some cases, it may be enough to acknowledge that a potential or perceived conflict exists. In others, specific steps to resolve the conflict may be required.
Some conflicts of interest are inevitable and there is nothing inherently unethical in the occurrence of conflicts of interest in medicine but it is the manner in which they are addressed that is crucial.12
Declarations of CoI should also avoid ambiguity. The authors of articles published in scientific journals may declare that they are “unpaid consultants” to some pharmaceutical, biotech, or medical device company; indeed, such statements are increasingly frequent.46 Nonetheless, such declarations of “unpaid” consultancies are ambiguous and “may do more to conceal than illuminate.”47 The whole question of disclosing CoI could be rendered more transparent and more effective by introducing online registers. Among the prerequisites of such registers are the following:48,49
• Public accessibility
• Unambiguous identification of professional individuals
• The possibility to update and amend entries
• Transparency
• A universally agreed taxonomy system to enable proper classification and comparison of disclosures
• Interoperability, so that information can be readily transferred between registers48
Although the regulations governing disclosure should necessarily be directed toward transparency, they should not violate the legitimate limits imposed by the protection of privacy, especially when family members are involved, as the latter may be harmed or face discrimination if their personal details are made public. The disclosure of individual and institutional financial relationships is a critical but limited first step in the process of identifying and responding to CoI;1 this is why appropriate policies are needed to handle them.
Managing CoI
There is a pressing need for regulations to address the question of undue CoI, both individual and institutional, and experience has shown that such regulations are an effective tool to counter corruption and preserve the integrity of research. If institutions fail to act voluntarily to strengthen their policies and procedures in the matter of CoI, the pressure for external regulation is likely to increase.1 Policies to manage CoI need to be first-rate and effective: their quality and effectiveness are greater when they are widely accepted and have already been tried and
tested. Both quality and effectiveness can be improved by a fuller understanding of the nature of CoI. The effectiveness of policies to handle them is generally greater when researchers are involved in the development process. The management of CoI should be based on a solid pragmatic approach that factors in the functions fulfilled, the interests at stake, their relevance, and their potential consequences. It is also important to establish a “statute of limitations” to determine the time limit beyond which a situation of conflict should be considered to have lapsed. Another requirement is the definition of a financial threshold over which measures to handle CoI should come into effect. At times the desire to be scrupulous leads to the adoption of such low thresholds that eminent experts will be excluded simply for having received a modest sum for addressing a conference. The application of such strict policies to minimally relevant CoI is damaging, as it excludes contributions from distinguished and influential individuals. Most policies to handle CoI stress prevention and management rather than punishment: the key goal of CoI policies is to protect the integrity of professional judgment and to preserve public trust rather than to try to deal with bias or mistrust after they occur.1 To support the management of CoI, it is necessary:
• To “do business appropriately.” It is much easier to identify, and therefore to avoid and/or to handle, CoI if the entire procedure, from the assessment of needs to the decisions regarding mechanisms for consultation and the strategies and procedures for commissioning and procurement, is handled correctly from the beginning; this will ensure that the rationale for all decision-making is clear and transparent and that the procedure can withstand scrutiny.
• To be proactive rather than reactive. The process of identifying and reducing the risk of CoI should be initiated as soon as possible.
• To be balanced, sensible, and proportionate. The rules governing CoI should be clear and robust but should not be excessively prescriptive or restrictive. They should ensure that decision-making processes are transparent and fair without being too complicated, stringent, or cumbersome.
• To be transparent. Every stage of the procedure and of decision-making processes should be fully documented.
• To create an environment and a culture in which individuals feel supported and are confident about disclosing information and raising concerns.1
Yet another fundamental requirement for an effective policy governing CoI is a widespread awareness of the problem, which calls for the proper teaching of ethical conduct in scientific settings. Academic institutions have a duty to teach intellectual integrity by promoting and rewarding virtuous behavior.
The implementation of policies to handle CoI also relies heavily on the role of ethics committees.50 These can be particularly crucial in epidemiological studies, which are subject to different administrative procedures from those governing interventional research projects. Because they are not covered by compulsory regulations, epidemiological studies are not always subject to an ethical assessment. By introducing an initial ethical assessment and monitoring research studies throughout their duration, ethics committees can play a fundamental role in preventing and managing CoI that would otherwise go unaddressed.
References 1. Institute of Medicine. Conflict of Interest in Medical Research, Education, and Practice. National Academies Press, 2009. 2. Xie, G., and Cong, Y. “Conflict of interest.” In Encyclopedia of Global Bioethics, ed. H. Ten Have. Springer Science & Business Media, 2016: 725–729. 3. Davis, M. “Conflict of interest.” In Encyclopedia of Applied Ethics, 2nd ed., ed. R. Chadwick. London: Elsevier, 2012: 571–577. 4. Morreim, E. H. “Conflict of interest.” In Bioethics (Encyclopedia of Bioethics, 4th ed.), ed. B. Jennings. Gale, 2014: 676–683. 5. Relman, A. S. “Conflicts of interest.” In The New Dictionary of Medical Ethics, ed. K. E. Boyd, R. Higgs, and A. J. Pinching. BMJ Publishing Group, 1997: 53–54. 6. National Institute for Clinical Excellence. Policy on Conflicts of Interest. 2017. 7. Association of American Universities. Report on Individual and Institutional Conflict of Interest. 2001. 8. Bradley, G. S. “Managing competing interests.” In Scientific Integrity: Text and Cases in Responsible Conduct of Research, 3rd ed., ed. F. L. Macrina. ASM Press, 2005: 159–185. 9. World Medical Association. Declaration of Geneva. Adopted by the 2nd General Assembly of the World Medical Association, Geneva, September 1948; last revision by the 68th WMA General Assembly, Chicago, October 2017. https://www.wma.net/ policies-post/wma-declaration-of-geneva/ 10. World Medical Association. International Code of Medical Ethics. Adopted by the 3rd General Assembly of the World Medical Association, London, October 1949; last revision by the 57th WMA General Assembly, Pilanesberg, South Africa, October 2006. https://www.wma.net/policies-post/wma-international-code-of-medical-ethics/ 11. World Medical Association. Declaration of Helsinki. Ethical Principles for Medical Research Involving Human Subjects. Adopted by the 18th WMA General Assembly, Helsinki, June 1964; last revision by the 64th WMA General Assembly, Fortaleza, Brazil, October 2013. https://www.wma.net/policies-post/wma-declaration-of- helsinki-ethical-principles-for-medical-research-involving-human-subjects/ 12. World Medical Association. Statement on Conflict of Interest. Adopted by the 60th WMA General Assembly, New Delhi, October 2009; editorially revised by the 201st WMA Council Session, Moscow, October 2015. https://www.wma.net/policies-post/ wma-statement-on-conflict-of-interest/ 13. Council for International Organizations of Medical Sciences. International Ethical Guidelines for Epidemiological Studies. 2009.
Conflicts of Interest 263 14. Council for International Organizations of Medical Sciences. International Ethical Guidelines for Health-Related Research Involving Humans. 2016. 15. Steinbrook, R. “Controlling conflict of interest: proposals from the Institute of Medicine.” New England Journal of Medicine 360 (2009): 2160–2163. 16. Guidelines International Network. “Principles for disclosure of interests and management of conflicts in guidelines.” Annals of Internal Medicine 163 (2015): 548–553. 17. Institute of Medicine, National Research Council. Integrity in Scientific Research: Creating an Environment That Promotes Responsible Conduct. National Academies Press, 2002. 18. National Academies of Sciences, Engineering, and Medicine. Fostering Integrity in Research. National Academies Press, 2017. 19. All European Academies. Memorandum on Scientific Integrity. 2003. 20. All European Academies. European Code of Conduct for Research Integrity, revised ed. (1st ed., 2011). 2017. 21. Lexchin, J., and O’ Donovan, O. “Prohibiting or ‘managing’ conflict of interest? A review of policies and procedures in three European drug regulation agencies.” Social Science & Medicine 70 (2010): 643–647. 22. Abajobir, A. A., Abate, K. H., Abbafati, C., et al. “Global, regional, and national under- 5 mortality, adult mortality, age-specific mortality, and life expectancy, 1970–2016: A systematic analysis for the Global Burden of Disease Study 2016. Life, death, and disability in 2016.” Lancet 390 (2016): 1084–1150. 23. Madureira, L. J., and Galea, S. “Corporate practices and health: A framework and mechanisms.” Globalization and Health 14 (2018): 21. 24. American College of Epidemiology. “American College of Epidemiology ethics guidelines.” Annals of Epidemiology 10 (2000): 487–497. 25. Council of Europe (Parliamentary Assembly), Flynn P. (Rapporteur). “The handling of the H1N1 pandemic: more transparency needed. Memorandum.” AS/Soc (2010), March 12–23, 2010. 26. Nelson, D. K. “Conflict of interest: Institutional review boards.” In Institutional Review Board: Management and Function, 2nd ed., ed. E. A. Bankert and R. J. Amdur. Jones and Bartlett Publishers, 2006: 177–181. 27. National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. The Belmont Report: Ethical Principles and Guidelines for the Protection of Human Subjects of Research. April 18, 1979. https://www.hhs.gov/ohrp/ regulations-and-policy/belmont-report/index.html 28. Campbell, E. G., Weissman, J. S., Ehringhaus, S., et al. “Institutional academic– industry relationships.” Journal of the American Medical Association 298 (2007): 1779–1786. 29. Hicks, S. R. C. “Conflict of interests.” In International Encyclopedia of Ethics, ed. J. K. Roth. Fitzroy Dearborn Publishers, 1995: 183–184. 30. Nelson, D. K. “Conflict of interest: Researchers.” In Institutional Review Board, 166–172. 31. Nipp, R. D., and Moy, B. “No conflict, no interest.” JAMA Oncology 2 (2016):1631–1632. 32. Cappola, A. R., and FitzGerald, G. A. “Confluence, not conflict of interest: Name change necessary.” Journal of the American Medical Association 314 (2015): 1791–1792. 33. Viswanathan, M., Carey, T. S., Belinson, S. E., et al. “A proposed approach may help systematic reviews retain needed expertise while minimizing bias from nonfinancial conflicts of interest.” Journal of Clinical Epidemiology 67 (2014): 1229–1238.
264 Ethics and Epidemiology 34. Abdoul, H., Perrey, C., Tubach, F., et al. “Non-financial conflicts of interest in academic grant evaluation: A qualitative study of multiple stakeholders in France.” PLoS One 7 (2012): e35247. 35. Wiersma, M., Kerridge, I., Lipworth, W., and Rodwin, M. “Should we try to manage non-financial interests?” British Medical Journal 361 (2018): 1240. 36. Schwab, T. “Dietary disclosures: How important are non-financial interests?” British Medical Journal 361 (2018): 1451. 37. Horrobin, D. F. “Beyond conflict of interest: Non-financial conflicts of interest are more serious than financial conflicts.” British Medical Journal 318 (1999): 466. 38. International Committee of Medical Journal Editors. “Form for Disclosure of Potential Conflicts of Interest.” http://www.icmje.org/downloads/coi_disclosure.pdf 39. Barbour, V., Clark, J., and Peiperl, L. “Making sense of non-financial competing interests.” PLoS Medicine 5 (2008): e199. 40. Editorial. “Outside interests: Nature journals tighten rules on non-financial conflicts. Authors will be asked to declare any interests that might cloud objectivity.” Nature 554 (2018): 6. 41. Mitchell, A. P., Basch, E. M., and Dusetzina, S. B. “Financial relationships with industry among national comprehensive cancer network guideline authors.” JAMA Oncology 2 (2012): 1628–1631. 42. Weinfurt, K. P., Hall, M. A., King, N. M. P., et al. “Disclosure of financial relationships to participants in clinical research.” New England Journal of Medicine 361 (2009): 916–921. 43. United Nations Educational, Scientific and Cultural Organization, Haque, O. S., De Freitas, J., Bursztajn, H. J., et al. The Ethics of Pharmaceutical Industry in Medicine. 2013. http://www.unesco-chair-bioethics.org/?mbt_book=the-ethics-of- pharmaceutical-industry-influence-in-medicine 44. National Academy of Sciences, National Academy of Engineering, Institute of Medicine. On Being a Scientist: A Guide to Responsible Conduct in Research. National Academies Press, 2009. 45. NHS England. Managing Conflicts of Interest: Revised Statutory Guidance for CCGs. 2017. https://www.england.nhs.uk/wp-content/uploads/2017/06/revised- ccg-coi-guidance-jul-17.pdf 46. Mintzes, B., and Grundy, Q. “The rise of ambiguous competing interest declarations.” British Medical Journal 361 (2018): k1464. 47. Menkes, D. B., Masters, J. D., Bröring, A., and Blum, A. “What does ‘unpaid consultant’ signify? A survey of euphemistic language in conflict of interest declarations.” Journal of General Internal Medicine 33 (2018): 139–141. 48. Dunn, A. G., Coiera, E., Mandl, K. D., and Bourgeois, F. T. “Conflict of interest disclosure in biomedical research: a review of current practices, biases, and the role of public registries in improving transparency.” Research Integrity and Peer Review 1 (2016): 1–8. 49. Dunn, A. G. “Set up a public registry of competing interests.” Nature 533 (2016): 9. 50. Medical Research Council. Good Research Practice: Principles and Guidelines. 2012. https://mrc.ukri.org/research/policies-and-guidance-for-researchers/good- research-practice/
Index For the benefit of digital users, indexed terms that span two pages (e.g., 52–53) may, on occasion, appear on only one of those pages. absolute prevention paradoxes, 91–93 acceptable risk, consent as means of, 31 accountability in management of CoI, 249, 259 in PH ethics, 161 actionable findings, offering to participants, 180–83 advance planning for disasters, 154–55, 156–57 advocacy, 161–62, 233–35 affected population, inclusion in research, 200–4 Africa, orphanhood due to AIDS in, 214, 215 age cut-offs for orphanhood, 215, 216 agenda for research, control of in CBIR, 114–15 AIDS. See HIV/AIDS AIDS Clinical Trial Group Study 076, 209–10 All European Academies (ALLEA), 251, 252 alpha levels, 165 American College of Epidemiology (ACE), 16, 228, 240, 241, 252–53 American College of Medical Genetics and Genomics, 181 American College of Obstetrics and Gynecology, 209–10 American Indian Law Center, 115–16 American Public Health Association, 240 analytically valid findings, offering to participants, 180–83 analytic decisions, ethical implications of, 163–67 anonymous unlinked HIV surveys, 210–13 anthropogenic disasters, 144, 154–57 Arendt, Hannah, 63 Association of American Universities, 246–47 Association of Schools of Public Health, 241 attributable risk, 67–68, 73 attrition, in CBIR, 120–21 Augustine, 139 autonomous choice, informed consent as requiring, 29–30 autonomy in four principles approach, 158
in public health ethics, 158–59, 161 respect for in CBIR, 120–21 and risk imposition, 76–77, 80 babies, HIV in, 208–13 Banting, Keith, 56–58 Barrett, D.H., 152, 153–54 Bayer, R., 163, 212 BC BioLibrary, 184–85 Beauchamp, T. L., 29 Beecher, Henry, 10–11 belief-relative perspective on risk, 78 Belmont Report, 124 beneficence in four principles approach, 158 implications for CBIR, 123–24 in public health ethics, 159–60 benefits, in distinction between research and practice, 150–51 Bentham, Jeremy, 5 beta levels, 165 bias caused by conflicts of interest, 252–53 Big Data in community-based intervention research, 109 and ethics of risk, 97–99 biobanking. See genetic epidemiological research bioethics, 12. See also ethics curricula in epidemiology boutique ethics, 225–26 and community-based intervention research, 125–26 conflicts of interest, 246, 247–48, 252 four principles approach, 157–61 major current approaches to, 228–29 principles applying to clinical research, 199 relationship to public health ethics, 44–45 biomedical and life-style framework, 47, 48–49 bio-socio-ecological model of PH practice, 144–45 bioterrorism, planning for, 238–39
266 Index blinded seroprevalence surveys, 210–13 boundary problem in PH, 146–47 boutique ethics, 225–26 breast-cancer mortality, relation to mammography, 237–38 British National Institute of Clinical Excellence, 246 Broadbent, Alex, 67 broad consent, 36–37, 177–80, 183–84 Brudney, D., 162 Buchanan. D.R., 161 Buehler, J., 152–53 Büttner, P., 70–71 CABs (community advisory boards), 111, 112– 13, 185–86, 188 Canadian Tri-Council Policy Statement, 159 cancer screening, and prevention paradox, 90–91, 92–93 capabilities, and mission of public health, 141–42 Capron, Alexander, 31–32, 34, 38, 199–200 Cardozo, Benjamin, 32 causation/causality, ethics of, 66, 71–73, 81–82 CBIR. See community-based intervention research CBPR (community-based participatory research), 111–12, 114, 186–87 CD4+ cells (T cells), in AIDS, 206 Centers for Disease Control and Prevention (CDC) blinded HIV seroprevalence surveys, 210, 212 early findings on AIDS, 200, 201 “Ending the HIV Epidemic” campaign, 196–97 HIV/AIDS estimates in U.S., 213–14 HIV/AIDS surveillance case definitions, 205–8 mother-to-child HIV transmission, 210 certainty, in ethics of risk Big Data, issues related to, 99 overview, 85–86, 88–89 precautionary principle, 94–97 Chadwick, Edwin, 5–6 chance, in ethics of risk Big Data, issues related to, 98 overview, 85–86, 88–89 prevention paradox, 89–94 Charles A. Dana Foundation, 202 checklist approach to PH ethics, 152, 153–54 children orphanhood, defining in context of HIV/ AIDS, 213–16 transmission of HIV from mothers, 208–13
Childress, J., 161 choice, informed consent as requiring autonomous, 29–30 CIOMS (Council for International Organizations of Medical Sciences), 10, 28, 248–49, 252–53 citizen science, 112 classification, HIV/AIDS, 204–8 “Clinical Investigations Using Human Subjects” policy statement, 11–12 clinically actionable findings, offering to participants, 180–83 clinical medicine, conflicts of interest in, 252. See also bioethics clinical research conflicts of interest in, 258–59 ethical principles in, 199 Clinical Sequencing Exploratory Research (CSER) network, 180–81 cluster randomized trials (CRTs), 39, 121 Cochrane Collaboration report on mammography, 237 codes of ethics, as resource for ethics curricula, 240 coercion, and informed consent, 30 CoI. See conflicts of interest collaboration in CBIR, guidelines for, 117–18 collaboration stage of engagement, 186–87 commitment, conflicts of, 247, 259 commonality, and solidarity, 58–59 common good beneficence principle in PH practice, 159–60 general discussion, 59–61, 62–63 and solidarity, 57, 59–60 Common Rule (U.S. Federal Policy for the Protection of Human Subjects), 149–50, 177, 178, 182 communalism, 49–50, 51 communicable diseases, 143, 144 communication in ethics curricula in epidemiology, 235–36 in ethics of risk, 86, 87, 94, 96, 97–99 risk, and pandemic preparedness, 236–37 communities healthy, in PH practice, 145–46 respect for in public health ethics, 158–59 community advisory boards (CABs), 111, 112– 13, 185–86, 188 community-based intervention research (CBIR) conducting and disseminating, 116–17 establishing researcher–community partnership, 113–16
Index 267 expansion of traditional ethical issues, 120–24 general discussion, 126–27 in global context, 119–20 guidelines for collaboration, 117–18 high-risk and vulnerable communities, 118–20 interdisciplinary health research, 125–26 methodological issues, 107–9 overview, 105–7 researcher–community partnership overview, 110–13 community-based participatory research (CBPR), 111–12, 114, 186–87 community consent, 37 community consultation, 38 community engagement, 37, 38, 39 community review boards, 115–16 compensation, in CBIR, 120–21 compensatory (restorative) justice, 160 computational early warning systems for bioterrorism, 238–39 conduct of community-engaged research, 116–17 confidentiality in community-based intervention research, 121–23 and emergency public health informatics, 239 in ethics curricula in epidemiology, 231–32 of HIV/AIDS research, 200–4 and informed consent, 37–38 conflicts of commitment, 247, 259 conflicts of conscience, 233–35 conflicts of effort, 247, 255 conflicts of interest (CoI) definitions related to, 245–47 disclosure of, 259–60 in epidemiology, 252–54 in ethics curricula in epidemiology, 233–35 general discussion, 258 as intrinsic component of research activities, 254–56 managing, 260–62 non-financial interests, 256–58 reference principles for handling of, 259 relevance of, 258–59 and research integrity, 251–52 sample reference documents, 247–51 conflicts of obligation, 245–46, 247 confluence of interests, 255–56, 258 conscience, conflicts of, 233–35 consent. See also informed consent broad, 36–37, 177–80, 183–84 in ethics curricula in epidemiology, 230–31
consequences, and prevention paradox, 93–94 consequentialist formulation of risk, 74–75 consultation stage of engagement, 184–85 control groups, in CBIR, 108, 124 cosmopolitanism, and solidarity, 58–59 Coughlin, S.S., 153–54 Council for International Organizations of Medical Sciences (CIOMS), 10, 28, 248– 49, 252–53 Council of Europe memorandum on CoI, 253 crowdsourcing analysis, 164 CRTs (cluster randomized trials), 39, 121 CSER (Clinical Sequencing Exploratory Research) network, 180–81 cultural relativism, 229 cultures, respect for in public health ethics, 159 current events, in ethics curricula, 236–39 curricula, ethics. See ethics curricula in epidemiology Cwikel, Julie, 69–70, 81 Daniels, N., 138 data collection methods, CBIR, 108–9 data protection legislation, 12–13 Dawson, A., 69–70, 73–74 decision-making, consent as promotion of informed, 30 Declaration of Geneva, 248 Declaration of Helsinki, 9–10, 248 deferred consent, 39 delayed control group design, 124 deontic theories of risk, 75 deontological view of informed consent, 34 de-Shalit, Avner, 79–80, 81 dignitary harm, 176 direct participation in decision-making, 188 disadvantage ethical issues in CBIR, 115, 118–20 relation to justice and risk, 79–80 Disadvantage (Wolff and de-Shalit), 80 disasters, 144, 154–57 disclosure of conflicts of interest, 259–60 Discourse on the Origins and Foundations of Inequality Among Men (Rousseau), 3–4 disease classification, HIV/AIDS, 204–8 disease surveillance, HIV/AIDS, 204–8 dissemination of community-engaged research, 116–17 distributive justice, 51–52, 124 Doll, R., 86 double orphans, 215 Dressler, W. W., 115 drug abuse and HIV infection, 198
268 Index Drug Amendments of 1962, 10 dual roles, in CBIR, 116–17 Dugas, Gaetan (Patient Zero), 200–1 dynamic consent, 36–37 early warning systems for bioterrorism, 238–39 economic harm, 176 ecosocial approach, 48 effectiveness, conflict of interest policies, 260–61 effort, conflicts of, 247, 255 Electronic Medical Records and Genomics (eMERGE) network, 180–81 emergency preparedness, 154–57, 236–37 emergency public health informatics (EPHI), 238–39 empowerment stage of engagement, 186–87 Encyclopedia of Applied Ethics, 245 Encyclopedia of Bioethics, 245–46 “Ending the HIV Epidemic” campaign (CDC), 196–97 engagement, in genetic epidemiological research, 183–88 England development of epidemiology in, 7 first public health system in, 5–6 Enlightenment, 4–5 epidemics, 154–57 epidemiological ethics. See also specific related topics early developments in public health and ethics, 3–6 general discussion, 16–17 origins of contemporary, 12–16 overview, 3 regulatory safeguards for human subjects research, 8–12 twentieth-century developments overview, 6–8 epidemiology education, ethics curricula in. See ethics curricula in epidemiology epidemiology research. See conflicts of interest; ethics curricula in epidemiology; informed consent; research; specific research types; translating epidemiological research into action epistemic approach to probability, 87–88 epistemic considerations in research, 87 errors, ethical implications of, 164, 165 ethics committees movement to, 12 role in prevention and management of CoI, 253–54, 262
ethics curricula in epidemiology communication and publication, 235–36 goals for, 227–28 issues and cases, focusing on, 236–39 moral foundations, 228–29 overview, 223 privacy and confidentiality, 231–32 reasons to broaden emphasis on, 224–27 recommendations for, 242 research integrity, 229–30 resources for, 239–42 risks, harms, and wrongs, 232–33 sponsorship, conflicts of interest and conscience, and advocacy, 233–35 valid consent and refusal, 230–31 Ethics Guidelines (ACE), 240, 252–53 ethnic nationalism, and solidarity, 58–59 ethos of solidarity, 59 “European Code of Conduct for Research Integrity, The” (ALLEA), 251, 252 evidence-based practice, interpreting evidence for, 163–67 evidence hierarchy, 96–97 evidence-relative perspective on risk, 78–79, 81–82 exigency, and informed consent, 38–39 exposure assessment, 68–69 fact-relative perspective on risk, 78 Faden, R., 29, 114, 138, 141, 146–47, 159, 162–63 Fairchild, A.L., 163, 212 fairness, in management of CoI, 249, 254. See also justice false negatives/positives, and ethics of certainty, 95–96 fatherless children, 214–15 FDA (U.S. Food and Drug Administration), 10 feasibility of offering research results to participants, 181–82 financial interests, 246–47 financial threshold, for conflicts of interest, 261 four principles approach, 157–61 Framingham Heart Study, 33–34 Frank, Johann Peter, 4 frequentism, 87–88 functionings perspective of justice, 79–80 fundamental axiom of preventive medicine, 89–94 Future of Public Health, The (IOM), 140–41 future research, broad consent to, 177–80, 183–84 Galea, S., 200 Galston, William, 61 Garrison, N. A., 179
Index 269 gay men, HIV/AIDS research focus on, 200–3 generalizable knowledge, 149–50 genetic epidemiological research broad consent to unspecified future research, 177–80 offering individual research results to participants, 180–83 overview, 175–77 stakeholder engagement and research governance, 183–88 Gerberding, Julie, 209–10 germ theory of disease, 6, 143 Global Burden of Disease Study, 252 global context, CBIR in, 119–20 Goddard, James Lee, 10 Goodman, D., 179 Gordis, Leon, 68–69, 70–71 Gostin, Lawrence O., 203–4 Gottlieb, Michael, 200 governance and common good, 60–61 genetic epidemiological research, 183–88 political morality of social epidemiology, 53 Governor’s AIDS Advisory Council (New York State), 211–12 Greenland, S., 165 Greenwood, Major, 7 Gregory, John, 4–5 group harm, 176, 187 groups, informed consent for, 37 Gruskin, S., 215 guidelines for collaboration in CBIR, 117–18 on confidentiality in HIV/AIDS research, 202–4 Guidelines International Network, 250–51 Habermas, Jürgen, 59 halfway technology, 196 harms. See also risk, ethics of; risk concept in epidemiology clinical and noninterventional epidemiological research, 199–200 in ethics curricula in epidemiology, 232–33 genetic epidemiological research, 176–77, 187 Hastings Center working group on HIV/AIDS research, 202–4 Hausman, D., 187 hazard identification, in risk assessment, 68–69 health in contemporary model of PH practice, 145–47 as human right, 162 in mission of public health, 140–43
health informatics, 238–39
Health Insurance Portability and Accountability Act (HIPAA), 16, 121–22, 203–4
Helsinki Code, 9–10, 248
hierarchy of evidence, 96–97
high-risk communities, in CBIR, 118–20
high-risk strategies, and prevention paradox, 89–94
Hill, A. B., 86
HIV/AIDS
  confidentiality and wary subjects, 200–4
  defining orphanhood, 213–16
  ethical issues in epidemiological studies, 199–200
  ethics and epidemiology in early years of, 197–98
  overview, 196–97
  surveillance and mother-to-child transmission, 208–13
  surveillance case definition conflict, 204–8
  in twenty-first century, 196–97, 216–17
Hodge, J.G., 151
homosexual men
  HIV/AIDS research focus on, 200–3
  terminology changes related to, 205
honesty, in handling of CoI, 259
human rights approach to PH ethics, 162–63
human subjects research. See conflicts of interest; ethics curricula in epidemiology; informed consent; research; specific research types; translating epidemiological research into action
human well-being, 138, 141, 146, 162–63
Hussain, Waheed, 61
hygiene, role of in public health, 4
Hygienic Laboratory, 7. See also National Institutes of Health
IAP2 (International Association for Public Participation), 184
IARC (International Agency for Research on Cancer), 85–86
implicit values, ethical implications of, 166
imposition of risk
  applying normative conceptions of risk in epidemiology, 81–82
  justice and, 79–80
  modern articulations of, 74–79
incentives, in CBIR, 120–21
inclusion of affected population in research, 200–4
Indian Health Boards, 115–16
individual conflicts of interest, 246–47
individualism
  and political moralities of epidemiological theorizing, 49–51
  and solidarity, 58
individual research results, offering to participants, 180–84
individuals
  conceptual foundations of informed consent, 29
  PH obligations and values related to, 145–46
  respect for in public health ethics, 158–59
industrial companies. See also conflicts of interest
  cooperation between researchers and, 256
  relations between physicians and, 250
Industrial Epidemiology Forum, 13
infants, HIV in, 208–13
infectious diseases, 143, 144
influenza pandemic, planning for, 156–57
information technology
  in ethics curricula in epidemiology, 231–32
  and planning for bioterrorism, 238–39
informed consent
  adapting and modifying for epidemiology, 37–39
  approaches and applications for epidemiology, 34–37
  broad consent to unspecified future research, 177–80, 183–84
  in community-based intervention research, 115–16, 120, 121
  in ethics curricula in epidemiology, 230–31
  foundations in epidemiology, 29–31
  general discussion, 40
  historical perspective, 32–34
  overview, 27
  in practice and policy, overview, 31–32
  regarding offering of research results, 182
  regulatory history, 8–12
  and scope of epidemiology, 27–29
  views on during Enlightenment, 4–5
informed decision-making, consent as promotion of, 30
inform stage of engagement, 184
insider–outsider tensions, 118–19
Institute of Medicine (IOM)
  conflicts of interest, 245, 249–50, 251–52, 256
  HIV/AIDS testing, 209
  justice in PH interventions, 160–61
  mission of PH, 140–41
  public aspect of PH, 145
  values in science and practice, 147–48
institutional conflicts of interest, 246–47
institutional review boards (IRBs)
  and reciprocal relation between research and practice, 149, 152
  research exempt from oversight by, 200
  responsibilities of, 11–12
  tribal or community, 115–16
instrumental value of health, 141
integrity, research
  and conflicts of interest, 251–52, 259
  in ethics curricula in epidemiology, 229–30, 240–42
“Integrity in Scientific Research” (IOM and National Research Council), 251–52
intellectual interests, 256–58
intent, and distinction between research and practice, 149–50
intentionality, and informed consent, 29
interdependencies, in relational theorizing, 53–54
interdisciplinary health research, 125–26
interest, conflicts of. See conflicts of interest
interim analysis, in CBIR, 123
International Agency for Research on Cancer (IARC), 85–86
International Association for Public Participation (IAP2), 184
International Code of Medical Ethics, 248
International Committee of Medical Journal Editors, 257–58
International Ethical Guidelines for Epidemiological Studies (CIOMS), 248–49, 252–53
International Ethical Guidelines for Health-Related Research Involving Humans (CIOMS), 249
intervention strategies, CBIR, 107–8. See also community-based intervention research
involvement stage of engagement, 185–86
IOM. See Institute of Medicine
IRBs. See institutional review boards
irrationality of public, interpreting claims about, 89
Israel, B., 111–12
Jaffe, Harold, 209–10
Jewish Chronic Disease Hospital, 10, 12–13, 33
Johns Hopkins University, 152
justice
  in four principles approach, 158
  implications for CBIR, 124
  in procedural ethics, 156
  in public health ethics, 160–61
  risk imposition in context of, 66–67, 79–80, 81
  social, 45, 138
justification of PH practice, 161
Kahneman, D., 166
Kantianism, 228
Kass, N.E., 149–50
Keyes, K. M., 200
King, K., 119–20
knowledge, and distinction between research and practice, 149–50
Krieger, Nancy, 44, 47–48, 51, 55–56
Kuhn, T.S., 166
Kymlicka, Will, 56–58
large-scale genetic epidemiological research. See genetic epidemiological research
Last, John, 68–69
Lawrence v. Texas, 201
Lee, L.M., 137, 140, 141, 148
legal harm, 176
Leider, J.P., 154
liberal welfarism, 50–51, 52
Livingston, Robert B., 10–11
Lukes, Steven, 50
Lurie, N., 154
MacIntyre, Alasdair, 137, 139, 147–48
MacQueen, K., 152–53
male-to-male sexual contact, 205
mammography, 237–38
mandatory HIV testing, 208–9, 210
manipulation, and informed consent, 30
Mann, Jonathan, 162–63
Marine Hospital Service, 7
maternal orphans, 214–15
Mayersohn, Nettie, 211
McCullough, L.B., 155
medical companies, relations between physicians and, 250, 256. See also conflicts of interest
medical ethics. See bioethics
medical police, 4
Medicare, 203–4
men who have sex with men (MSM), 205
methodological holism or contextualism, 50
methodological individualism, 50–51
methodological issues and ethics in CBIR, 107–9
Mill, John Stuart, 5–6
Minkler, M., 111
mission of public health, 140–43
moral foundations, in ethics curricula, 228–29
moral relativism, 229
Morbidity and Mortality Weekly Report (MMWR), 200
More, Thomas, 3
motherless children, 214–15
mother-to-child transmission of HIV, 208–13
movement to ethics committees, 12
Muller, R., 70–71
Munthe, Christian, 75–76
mutuality, as part of solidarity, 56–60
National Academies of Sciences, Engineering, and Medicine (NASEM), 147–48, 180–83
National Advisory Health Council (NAHC), 11
National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, 254–55
National Institute on Drug Abuse, 198
National Institutes of Health (NIH), 10–11, 224, 237
nationalism, and solidarity, 58–59
National Research Act of 1974, 12
National Research Council, 251–52
natural disasters, 144, 154–57
newborns, HIV in, 208–13
New Dictionary of Medical Ethics, 246
New York State, blinded HIV seroprevalence surveys in, 210–12
noncommunicable diseases, 143–44
non-control, and informed consent, 30
non-epistemic considerations in research, 87
non-financial interests, 246–47, 256–58
non-identifiability, and informed consent, 38
noninterventional epidemiological research, 199–200
nonmaleficence, 158, 160
non-normative understanding of risk. See risk concept in epidemiology
non-specific interests, 246
nontechnology, 196
nontherapeutic research, 9
normative conceptions of risk. See also risk concept in epidemiology
  applying in epidemiology, 81–82
  and causation, 72–73
  objections to, 73–74
  overview, 66
  and SEPD of risk factors, 70–71
normative person, reasonable, 77–79, 81–82
normative relational theorizing, 54
Nuremberg Code, 9–10, 33
Oberdiek, John, 76–79, 80
objective approach to probability, 87–88
objectivity, 165, 254
obligation, conflicts of, 245–46, 247
Obstetrical HIV Counseling/Testing/Care Initiative (New York State), 211–12
O’Doherty, K.C., 188
Office for Human Research Protections (OHRP), 152–53
On Being a Scientist (NASEM), 147–48
online registers of conflicts of interest, 260
ontological individualism, 49–51
openness, in handling of CoI, 259
opioid epidemic, 197, 198
Oppenheimer, Gerald, 197–98, 216
opt-out process, 36
orphanhood, defining in context of HIV/AIDS, 213–16
Ortmann, L.W., 147, 167–68
Osler, William, 8
Osterholm, Michael, 156–57
oversight mechanisms, and justification for broad consent, 178–79. See also institutional review boards
pandemics, 154–57, 236–37
Parfit, Derek, 78
participants. See also informed consent
  in community-based intervention research, 110–11, 120–24
  engagement in genetic epidemiological research, 183–88
  offering individual research results to, 180–84
  risks of genetic epidemiological research, 176–77
  versus subjects, 231
participatory research, community-based, 111–12, 114, 186–87. See also community-based intervention research
partnerships, researcher–community
  conducting and disseminating research, 116–17
  continuum of community engagement, 110–11
  establishing, 113–16
  in global context, 119–20
  guidelines for collaboration, 117–18
  high-risk and vulnerable communities, 118–20
  overview, 110–13
  terminating, 117
paternalism in PH, 161
paternal orphans, 214–15
Patient Zero (Gaetan Dugas), 200–1
peer review, 11–12, 257–58
personalized medicine, 98
personal values, ethical implications of, 166
persuasion, and informed consent, 30
Petrini, C., 155
PH ethics. See epidemiological ethics; ethics curricula in epidemiology; public health ethics; specific related topics
physical harm, 176
physical risks of research participation, 31
planning
  for bioterrorism, 238–39
  for disasters, 154–55, 156–57
  to offer research results to participants, 181–83
Polanyi, M., 166
policy, translating epidemiological research into
  certainty, 94–97
  chance, 89–94
  clarifications and motivations, 86–89
  emerging issues around Big Data, 97–99
  general discussion, 99–100
  overview, 85–86
political moralities of epidemiological theorizing, 48–53
population health, 55–56, 200
population strategies, and prevention paradox, 89–94
Powers, M., 138, 141, 146–47, 159, 162–63
practice, defined, 137, 139. See also public health ethics
precautionary principle, 95–96
precision medicine, 175
precision public health, 175
pregnant women, HIV testing for, 208–10
preparedness, 154–57, 236–37
presumed consent, 36
prevention
  of conflicts of interest, 261–62
  role in public health, 4
prevention paradox, 89–94, 161
preventive ethics, 154–57
primary interests, 245, 258
“Principles of the Ethical Practice of Public Health” (American Public Health Association), 240
principlism, 157–61
privacy
  in community-based intervention research, 121–23
  and emergency public health informatics, 239
  in ethics curricula in epidemiology, 231–32
  of HIV/AIDS research, 200–4
  protecting in epidemiological research, 12–13, 16
probability, and concept of risk, 67, 87–88
procedural ethics, 155–56
process-oriented approach to sharing research results, 180–81
professional ethics in epidemiology, 13–15
program evaluation, ethical implications of, 166–67
propensity theory, 87–88
proportionality, in management of CoI, 249
prospective ethics, 155–56
psychological harm, 176
psychosocial approach within social epidemiology, 48
psychosocial risks of research participation, 31
publication
  in ethics curricula in epidemiology, 235–36
  non-financial interests related to, 256–58
public health (PH) ethics. See also epidemiological ethics; ethics curricula in epidemiology; specific related topics
  boundary problem in PH, 146–47
  common good, 59–61
  critical considerations for, 161–62
  early developments in, 3–6
  foundation for, 139–40
  four principles approach, 157–61
  general discussion, 167–68
  and human rights, 162–63
  informed consent and scope of epidemiology, 27–29
  and interdisciplinary health research, 125–26
  interpreting evidence for evidence-based practice, 163–67
  mission of PH, 140–43
  nature of PH practice, 143–48
  overview, 137–39
  political moralities of epidemiological theorizing, 48–53
  practice, defined, 137, 139
  preparedness and preventive ethics, 154–57
  reciprocal relation between research and practice, 148–54
  relational theorizing, 55–56, 61–63
  relationship to biomedical ethics, 44–46
  social epidemiology, 46–48
  solidarity, 56–59
Public Health Act of 1848 (England), 5
Public Health Service, 212–13
public irrationality, interpreting claims about, 89
public justification of PH practice, 161
public thing (res publica) concept, 55
public understanding of epidemiology research, 237–38
purpose, and distinction between research and practice, 149–50
p values, 165
qualitative methods, in CBIR, 109
quality, of conflict of interest policies, 260–61
quarantine, pandemic responses related to, 236–37
Quinn, S., 124
randomized controlled trials, 108
rationality of public, interpreting claims about, 89
rationing, pandemic responses related to, 236–37
Rawls, John, 59–60, 138, 156
Raz, Joseph, 76–77
RCR (responsible conduct of research), 224, 235–36, 240–42
reasonable normative person, 77–79, 81–82
reciprocal relation between research and practice, 148–54
reciprocal understanding of risk imposition, 76–77, 80
recruitment, and stakeholder engagement, 185–86
Reed, Walter, 7, 8
reflective equilibrium, 138, 147, 163
refusal, in ethics curricula in epidemiology, 230–31
registers of conflicts of interest, 260
regulatory safeguards for human subjects research, 8–12
reimbursement, in CBIR, 120–21
relational theorizing
  common good, 59–61
  general discussion, 53–56, 61–63
  overview, 45
  solidarity, 56–59
relative prevention paradoxes, 90–91, 92–93
relative progressiveness, 75–76
relativism, 229
representation of community, in CBIR, 113–14
research. See also conflicts of interest; ethics curricula in epidemiology; informed consent; specific research types; translating epidemiological research into action
  agenda for, control of in CBIR, 114–15
  emergency public health informatics, 239
  ethical issues in, 199–200
  interpreting evidence for evidence-based practice, 163–67
  public understanding of, 237–38
  reasons to broaden emphasis on ethics, 224–25
  reciprocal relation between practice and, 148–54
  regulatory safeguards for human subjects, 8–12
researcher–community partnerships in CBIR
  conducting and disseminating research, 116–17
  continuum of community engagement, 110–11
  establishing, 113–16
  in global context, 119–20
  guidelines for collaboration, 117–18
  high-risk and vulnerable communities, 118–20
  overview, 110–13
  terminating, 117
research integrity
  and conflicts of interest, 251–52, 259
  in ethics curricula in epidemiology, 229–30, 240–42
respect for autonomy
  in CBIR, 120–21
  in four principles approach, 158
  in public health ethics, 158–59
responsibility, in PH ethics, 138–39, 161
responsible conduct of research (RCR), 224, 235–36, 240–42
res publica (public thing) concept, 55
restorative (compensatory) justice, 160
results of research, offering to participants, 180–84
review boards, community, 115–16
risk, ethics of
  certainty, 94–97
  chance, 89–94
  clarifications and motivations, 86–89
  clinical and noninterventional research, 199–200
  emerging issues around Big Data, 97–99
  general discussion, 99–100
  genetic epidemiological research, 176–77
  overview, 85–86
risk assessments, 68–70, 81–82
risk characterization, 68–69
risk communication, 236–37
risk concept in epidemiology
  applying normative conceptions of risk, 81–82
  definitions and associated terms, 67–70
  determining acceptable risk, 31
  ethical challenges related to, 70–74
  ethics curricula in epidemiology, 232–33
  justice and risk, 79–80
  modern articulations of risk imposition, 74–79
  overview, 66–67
risk estimation, 68–69
risk factors
  and causality, 73
  defined, 67, 68
  social, economic, and political dimensions of, 70–71
  in social epidemiology, 69–70
risk imposition
  applying normative conceptions of risk in epidemiology, 81–82
  justice and, 79–80
  modern articulations of, 74–79
risk management, 68–69
Román-Maestre, B., 146
Rose, Geoffrey, 89–90, 160–61
Rousseau, Jacques, 3–4
routine HIV testing, 208–10
Royo-Bordonada, M.Á., 146
Ruger, J.P., 148
Rush, Benjamin, 4–5
Sanitary Movement, 5–6, 143
Schloendorff v. Society of New York Hospital, 32
science, inseparability from ethics. See also public health ethics
  advocacy, 161–62
  human rights approach to PH ethics, 162–63
  interpreting evidence for evidence-based practice, 163–67
  overview, 138
  reciprocal relation between research and practice, 148–54
  values in science and practice, 147
scientific rigor, relation to ethics, 224–25, 229–30
screening
  in CBIR, 123
  ethical implications of program design, 166–67
  and public understanding of epidemiology research, 237–38
secondary interests, 245, 258
second-generation HIV surveillance, 213
self-determination, consent as promotion of, 30, 32–33
self-interest, and solidarity, 58
selflessness, in handling of CoI, 259
seroprevalence surveys, HIV, 210–13
Shannon, James, 10–11
Shattuck, Lemuel, 6
Silberzahn, R., 164
single orphans, 215
Snow, John, 33–34
social, economic, and political dimensions (SEPD) of risk factors, 70–71
social epidemiology
  common good, 59–61
  general discussion, 61–63
  overview, 44–48
  political moralities of epidemiological theorizing, 48–53
  relational ethics, 53–56
  risk factors in, 69–70, 81
  solidarity, 56–59
social justice, 45, 138
Social Security number (SSN), and research confidentiality, 203–4
social welfare state, 53
societal value of epidemiological research, 15
society, interest of in public health, 141, 142–43
Society for Epidemiologic Research, 13
sociopolitical approach within social epidemiology, 48
solidarity, 56–60
Soskolne, Colin, 13
specific consent, 36–37
specific interests, 246
sponsorship of research, 233–35
stakeholder engagement, in genetic epidemiological research, 183–88
statistical issues, ethical implications of, 163–67
statistical understanding of risk. See risk concept in epidemiology
Stewart, William H., 11–12
study designs, CBIR, 108
subjects, research. See also research; specific related topics
  in distinction between research and practice, 151
  fear of confidentiality breaches in HIV/AIDS research, 200–4
  versus participants, 231
sub-Saharan Africa, orphanhood due to AIDS in, 214, 215
surveillance
  blinded HIV seroprevalence surveys, 210–13
  and distinction between research and practice, 150
  emergency public health informatics, 239
  HIV, and mother-to-child transmission, 208–13
  HIV/AIDS case definition conflict, 204–8
  informed consent and scope of epidemiology, 27, 28
  prospective ethics and, 155
Tarantola, D., 215
T cells (CD4+ cells), in AIDS, 206
testing, HIV, 208–10
theorizing, political moralities of epidemiological, 48–53
therapeutic research, 9
Thomas, Lewis, 196
tiered consent, 36–37
transformative technology, 196
translating epidemiological research into action
  certainty, 94–97
  chance, 89–94
  clarifications and motivations, 86–89
  emerging issues around Big Data, 97–99
  general discussion, 99–100
  overview, 85–86
transparency
  in management of CoI, 249, 259, 261
  in PH practice, 161
triage, during emergencies, 155
tribal institutional review boards, 115–16
Tuskegee Syphilis Study, 12–13, 33
Type I errors, 165
Type II errors, 165
UNAIDS, 213, 215
(un)certainty. See certainty, in ethics of risk
understanding, and informed consent, 29–30
UNICEF, 214, 215
universalism, 229
university curricula. See ethics curricula in epidemiology
University of Michigan, 184–85
unjust risk, 79–80
unspecified future research, broad consent to, 177–80, 183–84
U.S. Federal Policy for the Protection of Human Subjects (Common Rule), 149–50, 177, 178, 182
U.S. Food and Drug Administration (FDA), 10
U.S. Preventive Services Task Force, 238
utilitarianism
  conflict between Kantianism and, 228
  informed consent in epidemiology, 34
  overview, 5–6
  pandemic responses, 236–37
  and political moralities of epidemiological theorizing, 51–52
Utilitarianism (Mill), 5–6
Utopia (More), 3
valid consent and refusal, in ethics curricula, 230–31
values
  ethical implications of, 166
  in science and practice, 147–48
vector-borne diseases, 143, 144
Verweij, M., 69–70, 73–74
voluntariness, and informed consent, 30
voluntary HIV testing, 208–9, 210, 211–12
vulnerable children, defined, 216
vulnerable communities, in CBIR, 118–20
waivers of informed consent, 38–39
Wallerstein, N., 111, 113–14
Webb, P., 70–71
welfare liberalism, 50–51, 52
welfare state, 53
well-being, 138, 141, 146, 162–63
Widows and Orphans Act of 2005, 214
Wiggins, A., 112
Willowbrook State School, 33
Wolff, Jonathan, 72, 79–80, 81
women’s health
  growing focus on in 1990s, 14
  HIV/AIDS case definition conflict, 206–7
  HIV surveillance and mother-to-child transmission, 208–13
World Bank, 216
World Health Organization (WHO), 10, 146, 156–57, 213
World Medical Association (WMA), 9, 247–48, 255, 259–60
wrongs, in ethics curricula, 232–33
Zarowsky, C., 137, 140, 141