AMERICAN SCIENTIST
July–August 2021 • www.americanscientist.org

SPECIAL ISSUE: TRUSTWORTHY SCIENCE
WHISTLEBLOWERS • ETHICAL AI • PUBLIC HEALTH • ALIEN RIGHTS • INCLUSIVE STEM • GENOMICS
AMERICAN SCIENTIST
Volume 109 • Number 4 • July–August 2021
CONTENTS
SPECIAL ISSUE: TRUSTWORTHY SCIENCE

194 From the Editors

195 Online Preview

196 What the Public Really Thinks About Scientists
Surveys show a desire for greater transparency and accountability in research.
CARY FUNK

198 Genetic Blind Spots
Key components of health are hidden in the neglected African genome.
CONSTANCE B. HILLIARD

202 Infographic
Researchers reflect on trust and ethics within their fields.
ROBERT T. PENNOCK AND GARY SCHROEDER

203 The Obligation to Act
Technical expertise is guided by civic virtues.
DON HOWARD

206 First Person: Insoo Hyun
Guardrails for biotech

208 Minibrains: What’s in a Name?
The reality of organoids
MELISSA LOPES ET AL.

210 Who Should Speak for the Earth?
Extraterrestrial communication forces us to confront cultural assumptions.
JOHN W. TRAPHAGAN

215 Engineering
Engineered for trust
HENRY PETROSKI

218 Digital Ethics Online and Off
Molding the digital future as it simultaneously shapes us
LUCIANO FLORIDI

223 First Person: Stephaun Elite Wallace
(Re)building trust in public health campaigns

226 The Trust Fallacy
Scientists’ search for public pathologies is unhealthy, unhelpful, and ultimately unscientific.
NICOLE M. KRAUSE, DIETRAM A. SCHEUFELE, ISABELLE FREILING, AND DOMINIQUE BROSSARD

232 Treat Human Subjects with More Humanity
Buy-in to medical research requires that participating communities benefit from the data collected, and can trust how their data will be used.
DAVID B. RESNIK

238 Who Dares to Speak Up?
A federal agency allowed unethical experimentation on Black men for four decades before someone finally decided to blow the whistle.
MARC A. EDWARDS, CAROL YANG, AND SIDDHARTHA ROY

244 First Person: Shirley M. Malcom
A sea change in STEM

246 Ground Rules for Ethical Ecology
Tackling environmental crises requires moral as well as scientific clarity.
MICHAEL PAUL NELSON

250 Scientists’ Nightstand
Cosmic testimony

253 Sigma Xi Today
Of masks and mistrust • Sigma Xi launches emotional support program with Happy • Remembering Marye Anne Fox • STEM Art and Film Festival submissions open

ON THE COVER
Scientific advances have led to vast improvements in daily life—from computers to vaccines to clean energy. But science itself is a work in progress and still has a lot of potential to be more equitable, ethical, and responsive to the needs of different communities. (Cover illustration by Clare Nicolas.)
Special Issue | From the Editors

Healthy Skepticism
You have likely heard of the Stanford Prison Experiment, a famous 1971 study by Stanford University psychology professor Philip Zimbardo in which students were randomly assigned to play “prisoners” and “guards.” The “guards” became unnecessarily cruel, the “prisoners” broke down, and the study was widely cited as evidence of how readily people conform to their social roles. However, the study has long been panned as being both unethical and poorly designed: The participants were self-selected, the investigator took part in his own study, the guard behavior was coached, and the consent forms did not make it clear that participants had a right to leave at any time, among other issues. Many researchers in psychological science call the experiment an embarrassment, yet it has appeared unchallenged in introductory psychology textbooks up to at least 2015. Popular internet memes criticize the generalization of the results to anyone who is not a college-aged white male, but paradoxically ignore that the study has already fallen from favor.

All this goes to show that once an idea takes hold, getting corrections to sink in at a similar level can be extremely difficult. Scientists face this type of challenge all the time. The research enterprise is designed to be subject to revision as new data become available, but old paradigms can be hard to dislodge. That is all the more reason to remain vigilant about scientific ethics so as to reduce errors from the start. Recently, the idea of an erosion of public trust in science has become its own soundbite, regularly repeated without qualification or empirical support.

In this special issue on “Trustworthy Science,” we delve deeper into a wide range of topics related to trust and the ethical underpinnings of scientific research, exploring why we should consider science as worthy of trust. We start out by establishing a baseline with data about public trust. Cary Funk of the Pew Research Center provides an overview of public attitudes toward science and scientists (pages 196–197). This baseline has a counterpoint later in the issue, in “The Trust Fallacy” (pages 226–231) by Nicole Krause and her colleagues, in which they discuss how nuances of trust in science are their own ethical issue, and how a lack of information is not the only factor in gaining public trust.

The issue explores a wide range of topics across fields of science, including oversights in biomedical research, digital regulation, the role of humans in the natural world, and what our efforts to contact intelligent life elsewhere say about our own culture. In addition, our authors examine the ethics of the scientific enterprise more broadly: exploring how missing genome research hampers science, arguing that scientific objectivity does not prevent researchers from acting on consequences of their work, engaging underserved communities in public health, putting people first in human-subject research, looking at the toll of calling out unethical work, and reconsidering what inclusive academic environments should look like.

One overarching theme that emerges from this issue is that the research community as a whole would benefit from taking more time to step back and consider the implications and limitations of the data they collect. Another central message is that if we want to improve public trust in science, we need to acknowledge the history of how the scientific enterprise has interacted with different communities, including some that have been excluded or mistreated in the past. Trust can be achieved only when researchers listen to people’s concerns, find common ground, and demonstrate that they are willing to speak up for what’s right.

Scientists are rightly proud of their remarkable accomplishments, but they should be vigilant about their biases as well, and should see ethics as a partner in progress. As bioethicist Insoo Hyun states in this issue, “Good guidelines and ethical standards do not get in the way of science. They help pave the way.”

—Fenella Saunders (@FenellaSaunders)
VOLUME 109, NUMBER 4

Editor-in-Chief Fenella Saunders
Special Issue Editor Corey S. Powell
Managing Editor Stacey Lutkoski
Digital Features Editor Katie L. Burke
Senior Contributing Editor Sarah Webb
Contributing Editors Sandra J. Ackerman, Emily Buehler, Christa Evans, Jeremy Hawkins, Jillian Mock, Efraín E. Rivera-Serrano, Diana Robinson
Editorial Associate Mia Evans
Art Director Barbara J. Aulicino

SCIENTISTS’ NIGHTSTAND
Book Review Editor Flora Taylor

AMERICAN SCIENTIST ONLINE
Digital Managing Editor Robert Frederick
Acting Digital Media Specialist Kindra Thomas
Acting Social Media Specialist Efraín E. Rivera-Serrano

Publisher Jamie L. Vernon

CIRCULATION AND MARKETING
NPS Media Group • Beth Ulman, account director

ADVERTISING SALES
[email protected] • 800-282-0444

EDITORIAL AND SUBSCRIPTION CORRESPONDENCE
American Scientist, P.O. Box 13975, Research Triangle Park, NC 27709
919-549-0097 • 919-549-0090 fax
[email protected] • [email protected]

PUBLISHED BY SIGMA XI, THE SCIENTIFIC RESEARCH HONOR SOCIETY
President Robert T. Pennock
Treasurer David Baker
President-Elect Nicholas A. Peppas
Immediate Past President Sonya T. Smith
Executive Director Jamie L. Vernon

American Scientist gratefully acknowledges support for “Engineering” through the Leroy Record Fund. Sigma Xi, The Scientific Research Honor Society is a society of scientists and engineers, founded in 1886 to recognize scientific achievement. A diverse organization of members and chapters, the Society fosters interaction among science, technology, and society; encourages appreciation and support of original work in science and technology; and promotes ethics and excellence in scientific and engineering research. Printed in USA.
Online Preview | Find multimedia at americanscientist.org
ADDITIONAL DIGITAL CONTENT

Collection on Trustworthy Science
Efforts to build confidence in research are not new, as evidenced by this curated trip through our archives. The editors have chosen articles to enhance your understanding of how approaches to building trust in science have evolved over the years.

Public Health Relationship-Building
Stephaun Elite Wallace took a circuitous path to his current role as director of external relations for the COVID-19 Prevention Network (pages 223–224). In a companion podcast, he discusses his journey as well as his international work in public health advocacy.
“I Don’t Do Pipelines.”
The first step to a successful career in the sciences is a supportive STEM education. The print interview with Shirley M. Malcom (pages 244–245) only scratched the surface of her efforts to build inclusive STEM environments. In an extended interview podcast, she discusses her own experiences navigating scientific institutions as a Black woman, and explains why she refers to educational pathways rather than pipelines.
Managing New Biotechnologies
In the podcast companion to his Q&A (pages 206–209), Insoo Hyun delves deeper into the thorny history of bioethics, including a comparison of how the United States and the United Kingdom regulated developing technologies such as in vitro fertilization in the 1970s.
Look for this icon on articles with associated podcasts online.
American Scientist (ISSN 0003-0996) is published bimonthly by Sigma Xi, The Scientific Research Honor Society, P.O. Box 13975, Research Triangle Park, NC 27709 (919-549-0097). Newsstand single copy $5.95. Back issues $7.95 per copy for 1st class mailing. U.S. subscriptions: one year print or digital $30, print and digital $36. Canadian subscriptions: add $8 for shipping; other foreign subscriptions: add $16 for shipping. U.S. institutional rate: $75; Canadian $83; other foreign $91. Copyright © 2021 by Sigma Xi, The Scientific Research Honor Society, Inc. All rights reserved. No part of this publication may be reproduced by any mechanical, photographic, or electronic process, nor may it be stored in a retrieval system, transmitted, or otherwise copied, except for onetime noncommercial, personal use, without written permission of the publisher. Second-class postage paid at Durham, NC, and additional mailing offices. Postmaster: Send change of address form 3579 to Sigma Xi, P.O. Box 13975, Research Triangle Park, NC 27709. Canadian publications mail agreement no. 40040263. Return undeliverable Canadian addresses to P. O. Box 503, RPO West Beaver Creek, Richmond Hill, Ontario L4B 4R6.
Cary Funk | Surveys show a desire for greater transparency and accountability in research.
What the Public Really Thinks About Scientists
Trust is an easy concept to grasp but a difficult one to quantify. Like scientific findings, trust in science is also tentative and subject to revision. The degree to which the public trusts scientists as a group is one key indicator of the quality of the public’s relationship with science, and in this sense, trust can be quantified. Surveys can assess public attitudes toward scientists and track how those attitudes change over time.

Over the past year, scientists and their work have been at the forefront of public discussions about how to handle the coronavirus outbreak and how to develop effective treatments and vaccines for the disease. Those events might be expected to affect public trust in science. A group of us at the Pew Research Center collected data to measure any impacts. Our surveys—based on a nationally representative sample of more than 10,000 respondents—found that public confidence in scientists to act in the best interests of the public has been ticking upward since before the COVID-19 pandemic. As of November 2020, 39 percent of Americans reported a “great deal” of confidence in scientists to act in the public interest, up from 35 percent in January 2019, before the outbreak. Another 45 percent of Americans held a soft positive position, saying they had a “fair amount” of confidence in scientists. Just 15 percent said they had not too much or no confidence at all in scientists to act in the public interest.

The modest rise in public trust in scientists stands out from reported public trust for other groups and institutions: Trust in the military held steady over the same period, and trust in journalists dipped further down. But the uptick in trust in scientists has been uneven among political groups.
[Figure: Roughly 4 in 10 Americans say they have a great deal of confidence in medical scientists and scientists. Chart shows the percentage of U.S. adults who say they have a great deal or a fair amount of confidence in each of the following groups to act in the best interests of the public (medical scientists, scientists, the military, K–12 public school principals, religious leaders, journalists, business leaders, and elected officials) at survey dates from June 2016 through November 2020. Note: Respondents who gave other responses or who did not give an answer are not shown. In 2016, the question asked about K–12 public school principals and superintendents. Source: Survey conducted November 18–29, 2020, “Intent to Get a COVID-19 Vaccine Rises to 60% as Confidence in Research and Development Process Increases.” Pew Research Center.]

[Figure: During the COVID-19 pandemic, trust in science increased among Democrats but not Republicans. Chart shows the percentage of U.S. adults who have a great deal, a fair amount, or not too much/no confidence in scientists to act in the best interests of the public, among Republicans and Republican leaners and among Democrats and Democratic leaners, June 2016 through November 2020. Note: Respondents who did not give an answer are not shown. Source: Survey conducted November 18–29, 2020. Pew Research Center.]

Trust in scientists has increased among Democrats and independents who
lean toward the Democratic Party, but not among their Republican counterparts. Among Democrats, 55 percent say they have a great deal of confidence in scientists, compared with 22 percent of Republicans, a partisan gap that has roughly doubled since January of 2019.

Increasingly politicized public attitudes toward scientists can have important consequences. After all, one of the reasons for the ongoing interest in public trust in scientists stems from the idea that trust can be a harbinger of public views and behaviors regarding issues that are connected with science. For example, the willingness of Americans to get a coronavirus vaccine rose and fell with levels of public confidence in the vaccine research and development process as it was unfolding. As of February 2021, people who expressed higher trust in the vaccine-development process were 75 percentage points more likely to say they would get, or had already gotten, a coronavirus vaccine than those with low trust.

Center surveys have also documented that public trust in science is multifaceted; public sentiment about scientists across those dimensions varies. Public judgments about scientists’ competence to do their job, for instance, can be quite distinct from views about scientists’ empathy for ordinary people or views of the accuracy of the information they provide. A Center analysis in 2019 highlighted the multiple dimensions of public attitudes: We found that trust in medical doctors was stronger than trust for medical research scientists, particularly around judgments of caring and concern. But there was a shared skepticism toward both doctors and medical researchers when it came to assessments of their scientific integrity. Just 15 percent of Americans said either group was transparent “all or most of the time” about potential conflicts of interest due to industry ties. Similarly, small shares of the public thought medical doctors or research scientists would admit mistakes they made and take responsibility for such mistakes all or most of the time. Should professional or research misconduct occur, no more than 2 in 10 survey respondents believed that medical professionals would typically face serious consequences for their misdeeds.
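The survey language above mixes two distinct comparisons: a gap measured in percentage points (subtracting one share from another) and a relative change (dividing one share by another). The minimal Python sketch below works through that arithmetic using only the two shares quoted in the text; the January 2019 gap is implied by “roughly doubled” rather than reported, so it is treated here as an assumption.

```python
# Illustrative arithmetic only: the two shares below are the figures quoted
# in the text (Pew Research Center survey, November 2020).
dem_share = 55  # percent of Democrats/leaners reporting a "great deal" of confidence
rep_share = 22  # percent of Republicans/leaners reporting the same

gap_points = dem_share - rep_share  # difference in percentage points
gap_ratio = dem_share / rep_share   # relative comparison of the two shares

print(f"Partisan gap: {gap_points} percentage points")  # 33 points
print(f"Relative ratio: {gap_ratio:.1f}x")              # 2.5x

# The text says this gap "roughly doubled" since January 2019, which implies
# (an inference, not a reported figure) an earlier gap near 33 / 2, or about
# 16 percentage points.
```

The same distinction applies to the 75-percentage-point difference in vaccine willingness cited above, which is a subtraction of shares, not a relative percentage.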
[Figure: A majority of Americans say they are more apt to trust research when the data are openly available. Chart shows the percentage of U.S. adults who say that, when they hear each of the following, they trust scientific research findings more, less, or that it makes no difference: data are openly available to the public (57 percent more); findings were reviewed by an independent committee (52 percent more); research was funded by the federal government; research was funded by an industry group. Note: Respondents who did not give an answer are not shown. Source: Survey conducted January 7–21, 2019, “Trust and Mistrust in Americans’ Views of Scientific Experts.” Pew Research Center.]

[Figure: Americans trust doctors more than researchers but doubt the scientific integrity of both. Chart shows the percentage of U.S. adults who say medical doctors or medical researchers do each of the following all or most of the time, some of the time, or only a little or none of the time: care about people’s best interests; do a good job; provide fair and accurate information; are transparent about conflicts of interest; admit and take responsibility for mistakes; face serious consequences for misconduct. Note: Respondents who did not give an answer are not shown. Respondents were asked whether medical doctors and dieticians care about the best interests of “their patients.” Source: Survey conducted January 7–21, 2019, “Trust and Mistrust in Americans’ Views of Scientific Experts.” Pew Research Center.]
These findings indicate that public attitudes toward science are complex and nuanced. Survey data suggest that Americans are generally cautious or suspicious around issues of ethics and scientific integrity in medical research. We have found similar levels of skepticism toward scientists working in environmental and nutrition science.

At the same time, the survey results hint at ways to build trust. Certain practices inspire greater confidence in scientists’ work. For instance, 57 percent of Americans said that they have more confidence in research findings when data are openly available to the public. And roughly half of Americans say that knowing there has been an independent or third-party review of findings increases their trust in scientific research.

An encouraging sign is that overall trust in scientists starts from a high baseline. In the most recent Center survey, 84 percent of Americans expressed a fair amount or a great deal of trust in scientists—significantly higher than the numbers for business leaders, and more than twice that for elected officials.

Cary Funk is director of science and society research at Pew Research Center, a nonpartisan fact tank that informs the public about the issues, attitudes, and trends shaping America and the world. It does not take policy positions. The Center is a subsidiary of The Pew Charitable Trusts, its primary funder. Twitter: @surveyfunk
Constance B. Hilliard | Key components of health are hidden in the neglected African genome.
Genetic Blind Spots
In December 2020, an article appeared in the Journal of Global Health titled “COVID-19 Pandemic: The African Paradox.” Its authors observed that although models had projected an exponential rate of transmission of the disease on the African continent, case fatalities there remained remarkably low. By late February 2021, the COVID-19 death count in the United States had surpassed 500,000, whereas the total fatalities for the entire continent of Africa, whose population is four times that of the United States, had reached just 100,000.

It is premature to conclude that, overall, Africans are less susceptible to the virus. After all, African Americans have a higher mortality rate from coronavirus than do other American ethnicities. In addition, African death statistics are oftentimes unverifiable, and that continent’s more youthful population would be less prone to COVID mortality than America’s larger demographic of seniors. But let’s not dismiss too hastily any clues that could lead to medical breakthroughs. It’s possible that we are seeing a genuine epidemiological effect associated with the fact that Africans possess a greater range of genetic variation than other populations. Unfortunately, there is no easy way to investigate this possibility, because researchers have gathered remarkably little data on the most genetically diverse populations on the planet.

Between 60,000 and 70,000 years ago, when Homo sapiens began migrating out of Africa, the branching populations that moved to Europe, Asia, and the Americas did not retain the full range of ancestral gene variants. Despite the migration of humans and the subsequent population explosion around the world, much of our species’ genetic diversity still resides in Africa. One of biology’s most foundational principles is that the greater the genetic diversity in a species—that is, the larger the number of gene variants, or single nucleotide polymorphisms—the more likely it is that relevant gene variants will be present to help its members adapt to different ecological environments and new, life-threatening pathogens. From a medical perspective as well as an evolutionary one, therefore, it would be extremely useful to have detailed genomic maps of various African populations.

So, why is it that—18 years after the completion of what was ambitiously named the Human Genome Project—the scientific world remains clueless as to what surprises might lie nestled within the as yet unstudied 97 percent of the African genome? The significance of this deficit may not be readily apparent until we confront one of the most counterintuitive discoveries of modern genomics: European and other non-African genetic populations are themselves subsets of the African genome. Such typically European phenotypes as blond hair, blue eyes, and pale skin are not mutations unique to this population; rather, they lie within our species’ ancestral genome. But these Nordic features remain unexpressed among Africans because no ecological environment on that continent prioritizes them for purposes of survival. And yet, more American research focus and grant funding is devoted to the study of archaic Neanderthals than to discovering what medical breakthroughs might be contained within the full spectrum of our own species’ genome. This research disparity exists even though the Neanderthals no longer roam this planet, while the African genome offers living DNA, inheritable by any and all who belong to our human family.

The reasons for this oversight are far from clear. But might the marginalization of the African genome be linked to the same Western beliefs that have for so long swathed that continent in stereotypes of cognitive inferiority and chaos? Whatever the case, nature has apparently played a diabolical trick on those who insist on racial hierarchies, by sheltering the DNA of our human species within the genome of our African ancestors.

[Figure: Homo sapiens began migrating around the globe between 60,000 and 70,000 years ago, but the majority of our species’ genetic diversity still resides in Africa. Populations in Europe, Asia, and the Americas lost pieces of ancestral gene variants as they adapted to their new environments, but Africans retain the full range. Consequently, studies such as the Human Genome Project, which primarily collected data from Western participants, are missing key aspects of human development. Credit: Darryl Leja, courtesy of the National Human Genome Research Institute.]

Resistance Hidden in Plain Sight

Several years ago, researchers assembled another African health data set that seemed at least as paradoxical as the African COVID rate. A growing body of research purported to show that West Africans inhabiting the Tsetse Belt—99 percent of whom were lactose intolerant and consumed fewer than 250 milligrams of calcium per day—exhibited some of the lowest rates of osteoporosis in the world. Medical researchers mostly dismissed the unusually low osteoporosis risk in West Africa, suggesting as early as 1966 that the data were distorted by the shorter life expectancy of Africans and a lack of medical facilities that might record a more accurate incidence rate for the disease. These are much the same explanations that some researchers now use to disregard African COVID-19 research.

Follow-up studies supported the veracity of the osteoporosis data, and even suggested a possible link between low rates of osteoporosis in Africans and the prevalence in that population of expressing a gene variant of TRPV6. Whereas studies of the
African genome are scarce, there are copious amounts of data on their African American descendants, an admixed population that is on average 75 percent Niger-Kordofanian West African DNA, 24 percent Northern European, and 1 percent Native American. In 2016 I published a paper that took advantage of the more extensive data available on African American bone health compared with that of West Africans. Because the data showed that Black Americans had the lowest rates of osteoporosis of any ethnic group in the United States, it likewise strengthened the argument that the low rates of osteoporosis in low-calcium-consuming West Africans needed to be taken seriously. This pattern exists even though Black Americans are considered calcium deficient by United States Department of Agriculture standards, presumably on account of widespread lactose intolerance.

The reason for the apparent paradox was discovered in 2009 by a team of nephrologists at the University of Alabama at Birmingham, who were studying one of those inherited traits that are all but ignored in genetic research: The African variant of the TRPV6 calcium ion channel absorbs up to 50 percent more dietary calcium in the intestines than does the non-African TRPV6b variant.

It is impossible to know what other mysteries may be bundled up in the African genome, because only 3 percent of that genome has been studied, compared with 81 percent of its European counterpart. More to the point, Africans alone carry the full complement of our species’ genetic variation. This negligence is especially disheartening to Africans and Black Americans, both of whom are confronted with blame-the-victim memes, which rationalize the indifference of health providers and the limited availability of health resources by alleging that these communities are resistant to modern medicine.

For example, according to a report from the U.S. Centers for Disease Control and Prevention (CDC), only 7 percent of African Americans, compared with 65 percent of white Americans, had been given the COVID-19 vaccine as of March 1, 2021. African American fears of medical victimization are real, with one of the most disturbing examples being the infamous 1932–1972 Tuskegee syphilis study (see Edwards, pages 238–242). But Black Americans are today balancing these historical concerns with the heartbreaking modern reality of watching a higher percentage of their loved ones die of COVID-19. In fact, CDC data show that vaccine resistance is more common among white Americans (28 percent) than among Black Americans (25 percent). The low rate of COVID-19 vaccination in Black communities is more plausibly attributed to lack of vaccine access than to unwillingness.

A Genome Rich with Medical Marvels

Our blind spots might be more costly to our health than we now realize. In July 2020, John Hollis, a 54-year-old African American, learned that he carried what might be termed “super antibodies” to the coronavirus.

[Figure: The African Genome Variation Project is attempting to map the continent’s genomic variation, but it has a long way to go. In 2014, the team published data from 1,481 individuals across 18 ethnolinguistic groups. That sample represents only 7 of Africa’s 54 countries (data from the starred ethnolinguistic groups were collected as part of the 1000 Genomes Project). Committing resources to studying the array of African genomic variation could lead to the development of better strategies to detect, treat, and prevent diseases in all populations. Credit: Ernesto Del Aguila III, NHGRI.]

George Mason University pathologist Lance Liotta explained that the antibodies in Hollis’s blood were found to be so
potent that “even if diluted 10,000 times it would still kill 90 percent of the virus.” In short, this man’s antibodies might offer a means of identifying new treatments for the disease. Researchers await verification from ongoing clinical trials, and, perhaps for this reason, the medical literature has thus far been silent on the topic. But before we dismiss these initial findings as outlandish, let’s not forget the case of another African American, whose cancer cells proved immortal, laying the basis for some of the most momentous breakthroughs in modern medical diagnostics, genetic sequencing, and the development of the polio vaccine (see Resnik, pages 232–237). And yet the underlying genetic scaffolding that transformed Henrietta Lacks’s cervical tissue into immortal cancer cells will not be fully understood until we have more comprehensive data on her variant-rich African-descended genome.

At last count, the human genome contained more than 324 million variants of the 22,000 genes our species shares; half of those variants can only be found on the African continent. It is for this reason that genetic tests conducted in the United States on African Americans are often returned with the
tag “variant of unknown significance,” when in fact that variant is only rare in populations of European ancestry. This lacuna makes it harder to differentiate disease-triggering variants from neutral ones in this minority group and others. For example, many Black females tested for breast cancer carry gene variants of BRCA1 and BRCA2 that are still unidentified in America’s reference genome, leading to ambiguous test results, and many promising African American athletes have had their careers derailed by genetic tests mistaking certain unknown variants for hypertrophic cardiomyopathy, a life-threatening heart condition.

In recent years, the U.S. National Institutes of Health have launched several projects to address the lack of diversity in genome-wide association studies. At first, critics saw these initiatives—which sequenced the genomes of several hundred Africans, compared with projects involving millions of samples from people of European ancestry—as little more than tokenism. But the 2008 hiring of Charles Rotimi, a top Nigerian geneticist, to head up the Trans-National Institutes of Health Center for Research on Genomics and Global Health has injected
an aura of excitement and the promise of major breakthroughs. Indeed, the rewards could turn out to be priceless, as new immunological tools buried within our species’ ancestral genome await rediscovery.

But Rotimi alone cannot correct this long-standing problem. Filling in this genetic blind spot will benefit all humans, regardless of race, in the same way that acknowledging white Western biases in medicine and science will free human biology from an artificial constriction. It may not be necessary to sequence the entire African genome at this time, because the multibillion-dollar price tag would be prohibitive, with little immediate market value to offset the expenditures. Nevertheless, if scientists wish to expand the boundaries of medicine during this COVID-19 crisis and beyond, they must first remove the attitudinal barrier that looms like an electrified fence preventing us from embracing the ancestral genome that tells so much of our human story.

Bibliography

Auton, A., et al. 2015. A global reference for human genetic variation. Nature 526:68–74.

Campbell, M. C., and S. A. Tishkoff. 2008. African genetic diversity: Implications for human demographic history, modern human origins, and complex disease mapping. Annual Review of Genomics and Human Genetics 9:403–433.

Ghosh, D., J. A. Bernstein, and T. B. Mersha. 2020. COVID-19 pandemic: The African paradox. Journal of Global Health 10:020348.

Gomez, F., J. Hirbo, and S. A. Tishkoff. 2014. Genetic variation and adaptation in Africa: Implications for human evolution and disease. Cold Spring Harbor Perspectives in Biology 6:a008524.

Gurdasani, D., et al. 2014. The African Genome Variation Project shapes medical genetics in Africa. Nature 517:327–332.

Hilliard, C. B. 2016. High osteoporosis risk among East Africans linked to lactase persistence genotype. BoneKEy Reports 5:803.

Musa, H. H., et al. 2021. Addressing Africa’s pandemic puzzle: Perspectives on COVID-19 transmission and mortality in sub-Saharan Africa. International Journal of Infectious Diseases 102:483–488.

Na, T., et al. 2009. The A563T variation of the renal epithelial calcium channel TRPV5 among African Americans enhances calcium influx. American Journal of Physiology–Renal Physiology 296:F1045–F1051.
Constance B. Hilliard is a professor of history at the University of North Texas. She researches the ways in which a detailed knowledge of history and ecological environments can be applied to uncovering the etiology of certain health disparities in African-descent populations. Email: [email protected]
Infographic | Art by Gary Schroeder, text by Robert T. Pennock

[Infographic: Researchers reflect on trust and ethics within their fields.]
Don Howard | Technical expertise is guided by civic virtues.
The Obligation to Act
The sheer pace of change makes it ever harder to anticipate both the bad and the good effects of new technologies. In her 2017 book, Technology and the Virtues, technology ethicist Shannon Vallor coined the term acute technosocial opacity to name this phenomenon and argued, rightly, that it requires of us that we double down in our efforts to be as alert as possible to the consequences of technological innovation.

That brings us to some core questions: Are scientists and engineers, both individually and collectively, morally responsible for the uses to which the products of their work are put, and, if so, what kind of action is called for in order to discharge that responsibility?

It is an old trope, widely and often heard, that the responsibility of the scientist and the engineer ends at the laboratory door—that responsibility for the consequences of their labors lies with those who put the fruits of their labors to use, and that involving themselves in policy debates can undermine the integrity and objectivity of their scientific and engineering work. Happily, not all scientists and engineers think that way, but many do.

I will argue that the possession of specialist technical expertise entails a special obligation to act—to be involved in policy making and to take part in public debate and discussion about the uses to which the products of scientific and engineering innovation are put. The argument will use the language of the civic virtues tradition, and it will turn on a consideration of the manner in which the scientist and the engineer are socially embedded in a complex topology of different communities of practice. I will argue that, far from compromising the integrity of the work of scientists and engineers, socially and politically engaged action can enhance both the integrity of their technical work and public respect for their work and their authority.

A Need to Speak Out

The Manhattan Project is a clarifying, extreme case study for many ethical issues. In June 1945, with planning for the use of the still-untested atomic bomb far advanced, and with rumors of a surprise attack on a civilian target being widespread within the Manhattan Project, staff of the Metallurgical Laboratory, the Manhattan Project laboratory at the University of Chicago, met to discuss the use of the bomb. Under the leadership of James Franck, Eugene Rabinowitz, and Leo Szilard, the group produced a remarkable document that reported the consensus view of the group. Now known as the “Franck Report” (though it was mainly authored by Rabinowitz), it argued strongly against a surprise attack on a civilian target.

In the short run, Franck and his colleagues failed. Not everyone in the Manhattan Project agreed with them. The most notable dissent came from the project’s director, J. Robert Oppenheimer, although in later years he acknowledged responsibility for his work on the bomb. Oppenheimer believed that specialist technical expertise did not qualify the scientist or the engineer to be part of policy debates. Franck and his colleagues believed that, on the contrary, specialist technical expertise entailed a responsibility to participate in policy debates even when not invited to do so, that one had an obligation to take the initiative.

Among the members of the bomb physics community as a whole, it was the view of Franck, Rabinowitz, and Szilard that prevailed. Scores of leaders of the community stepped out of the laboratory and established organizations, such as the Federation of Atomic Scientists, and publications, such as the Bulletin of the Atomic Scientists, through which they could pursue the goals of public education. Surely their most ambitious move was to block legislation that would have given postwar control of nuclear energy to the military. They set up a lobbying office and succeeded in changing enough minds in the U.S. Congress so that in July 1946 it approved a bill that established the civilian-led Atomic Energy Commission, which later became the Department of Energy.

But there is still no consensus among scientists and engineers regarding when and how individuals and groups may or should turn to activism and advocacy. Now climate change is forcing many scientists to confront the question of personal responsibility. A poignant case that exemplifies these tensions is that of NASA atmospheric physicist James E. Hansen, one of the pioneers of global climate modeling techniques and an early and effective advocate for aggressive action on global warming. His 1988 testimony before the U.S. Senate Committee on Energy and Natural Resources, in which he declared that NASA was “99 percent certain” that global warming was happening and that it was caused by human actions, was a kind of wake-up call to the nation and marked the beginning of a serious national policy debate about anthropogenic climate change. He continued to speak publicly about the need for urgent action, to such an extent that, in 2005 and 2006, George W. Bush administration political appointees in NASA leadership sought to silence him and restrict his access to the press. The American Physical Society responded by recognizing Hansen with its 2007 Leo Szilard Lectureship Award.
As the years went on, Hansen became ever more bold in his actions, so much so that some of his scientific colleagues began to think that he had crossed a line. He was arrested four times between 2009 and 2013 for nonviolent civil protest actions against coal mining, the Keystone pipeline project, and the extraction of crude oil from tar sands and tar shale. Growing more and more frustrated with the limitations on advocacy that he experienced as a government employee, Hansen finally resigned from NASA in 2013 to take up a position as director of the Program on Climate Science, Awareness and Solutions at Columbia University’s Earth Institute.

The Civic Virtues Framework

Are Franck, Szilard, Hansen, and other activist scientists and engineers right in believing that their moral and political responsibilities extend far beyond the laboratory door, that it is precisely their technical expertise that obligates them to act? To answer that question, it will be helpful first to build a conceptual framework within which to address it. I propose the civic virtues framework.

The virtue ethics tradition in moral philosophy has a long history, going back to Aristotle, but it experienced a dramatic resurgence in the last quarter
of the 20th century. The core concept is that of virtue being understood as a settled habit of action oriented toward the good. Classic examples include courage, temperance, truthfulness, and friendliness. Virtues are context-dependent in that each lies on a spectrum between two corresponding vices; where the mean lies will depend on the details of the situation. Critics see this contextuality as a problem, threatening a slide into anything-goes relativism. Proponents see it as a strength, giving virtue ethics more nuance and pliability than rigid, rule-based deontology or algorithmic consequentialism.

A particularly important domain is life in community, meaning any form of intentional, collective, group activity where the group activity is oriented toward some good. We call the virtues specific to this domain civic virtues. The concept can be extended to any intentional group activity, from participation on a bowling team or in a service organization to being a member of a research team or working for a company in the tech sector. A point of central importance is that civic virtues and vices are manifested both by the individual members of communities and by those communities themselves, in the form of collective habits of action.
[Figure: James E. Hansen, who was a NASA atmospheric scientist at the time, was arrested in 2010 in front of the White House as he took part in a protest against “mountaintop removal” coal mining practices. Hansen’s role as a government employee placed limitations on his ability to advocate, but he stated that his expertise required him to speak out on climate issues. Credit: ZUMA Press, Inc./Alamy Stock Photo.]
Another point of major importance is that one is always a member of multiple communities. Multiplicity of community membership might also be hierarchical. I could be, simultaneously, a member of a project team, a corporation or government agency, a professional association, a nation, and the global community. One consequence of multiple membership is that it provides the main check against the slide into radical relativism that critics wrongly worry is a chief weakness of virtue ethics. That every human being on the planet is a member of the global community means that when conflict arises, we can always, with modest effort, find those things that unite us and begin there the conversation about how to move forward in spite of our differences. But this complex topological structure of community membership also has important implications for the question of the moral and political responsibilities of scientists and engineers.

In my role as a scientist or an engineer working in a research laboratory, I have many responsibilities that are reflected in virtuous forms of behavior through which I discharge those responsibilities. I have a responsibility to be truthful. I am obliged to be open in communicating the results of my research, subject to reasonable constraints, such as security restrictions or rules regarding proprietary information. I am obliged to invite and welcome critical scrutiny of my work by my peers. I have a responsibility to be diligent and hardworking. I am supposed to be tough-minded, critical, and skeptical. But I am also expected to work collaboratively with my colleagues and to be supportive and constructive in our interactions.

In one’s role as a scientist or engineer, one is a member of multiple communities engaged in a variety of different practices. Consider the contributions that an electrical engineer might make to a college-wide review and reform of the core engineering curriculum. Even if our engineer lacked experience as an administrator, we would not argue that the engineer had no right or responsibility to become involved in the curriculum review. Indeed, we would agree that the engineer had a duty to participate in the review and was being a good campus citizen by doing so.
[Figure: Employees of tech companies have felt obligated to speak out collectively against company policies they believed were unethical. Google employees (right) have protested against failures to address sexual harassment, but also against the company’s connection to the military in uses of their technology. Employees of Amazon (left) protested that company’s licensing of facial recognition technology to U.S. Immigration and Customs Enforcement. Google changed some contracts in response to the employee pressure; Amazon did not. Credits: Elaine Thompson/AP Images; Noah Berger/AP Images.]

How would matters differ if, instead of the engineer’s simultaneous membership in the laboratory and the university, it were in the laboratory, the nation, and
the global community? The core civic virtue of participation in community life and work, including those broader communities, is what grounds the obligation to engage and act on the part of the scientist and the engineer. How we best discharge these citizen responsibilities depends crucially on the distinctive talents and experiences that we bring to the larger task as members of more local communities of practice. The nuclear engineer contributes in ways importantly different from the contributions appropriate to the battle-tested commander of the U.S. Army Air Corps. But both are equally obligated to act. The nuclear engineer’s expertise and experience enable him or her not only to understand how the bomb works but also to assess the impact of its use on both the health and well-being of those targeted and on the military and geopolitical situation. Deep knowledge and a lifetime of experience equip technical experts with a kind of insight that cannot be developed from a technical briefing. The responsibility to act is often an individual responsibility, but it is also a collective responsibility. Science and engineering are virtually always pursued as collective endeavors on many scales. Collective action and acceptance of responsibility are not uncommon. A much-discussed recent example is the actions taken by some 4,000 Google employees, including senior engineers, who signed a letter to Google’s chief executive officer, Sundar Pichai, in April 2018, objecting to a contract under which www.americanscientist.org
Google’s Project Maven program would develop for the Pentagon artificial intelligence tools for identifying objects in video that would improve targeting in drone strikes. Six weeks later, Google announced the cancellation of the contract. Such employee revolts do not always succeed. In June 2018, hundreds of Amazon employees signed a letter to Jeff Bezos, president and CEO of Amazon, protesting Amazon’s sale of its Rekognition facial recognition technology to U.S. Immigration and Customs Enforcement. Their protest did not work. Amazon responded with a firm declaration of its intention to continue supplying such services to the police and the military. Still, their actions evinced the same understanding of the moral and political responsibilities of scientists and engineers. The Public Good Especially interesting to me is the manner in which the engineering profession as a whole has embraced the concept of individual and collective responsibility. Every major engineering professional association stresses in its code of ethics the public responsibility of engineers (see Engineering, pages 215–217). The Order of the Engineer dates from 1970 and grew out of concerns among engineering educators and professionals in the United States about the environmental crisis and the rising tide of protests over the roles of scientists and engineers in Vietnam War–era weapons research and development. That such a moral impulse animates the work of so many engineers comes as a surprise to many of my colSpecial Issue: Trustworthy Science
leagues in the humanities, but it is a fact attested to daily in my many collaborations with engineering colleagues. One’s citizenship is not suspended when one walks through the laboratory door. The view that scientists and engineers do have citizen responsibilities to act for the public good because of their technical expertise has long been widely shared, especially within the engineering community. Many have voiced the opinion that both individual scientists and engineers and the institutions through which they work are obligated to act in the public interest, and an impressive array of honoraries, professional societies, prizes, publications, and conference venues have emerged to sustain and reward such acting for the public good. Let us work together to design new and better structures for achieving that worthy end. Bibliography Howard, D. 2020. A Philosopher’s Field Guide to Talking with Engineers. In A Guide to Field Philosophy: Case Studies and Practical Strategies, eds. E. Brister and R. Frodeman, pp. 209–221. New York: Routledge. Howard, D. 2017. Civic Virtue and Cybersecurity. In The Nature of Peace and the Morality of Armed Conflict, ed. F. Demont-Biaggi, pp. 181–201. London: Palgrave Macmillan. Wellerstein, A. 2012. The Uncensored Franck Report (1945–1946). http://blog.nuclearsecrecy .com/2012/01/11/weekly-document-9-the -uncensored-franck-report-1945-1946/ Don Howard is a professor of philosophy at the University of Notre Dame. This article is based on material prepared for a keynote address presented at From Industry 4.0 to an Inclusive Society, Bogotá, Colombia, February 20, 2020. Twitter: @DonHoward2 2021
First Person | Insoo Hyun
Guardrails for Biotech

There is perhaps no more stereotypical image of science run amok than Frankenstein’s monster, created by a man who manipulates life without regard to the ethical consequences. And there is certainly no field of science more likely to evoke that stereotype—in news stories and in popular protests—than biotechnology. Researchers in biotech really do combine cells from human and nonhuman species (hybrids known as chimeras) and engineer stem cells that can transform into other cell types, potentially even into human embryos. Insoo Hyun, the director of research ethics at the Harvard Medical School Center for Bioethics and a professor of bioethics at Case Western Reserve University School of Medicine, works to dispel myths about biotechnology and to develop standards for doing research in a scientifically productive but responsible way. He spoke with special issue editor Corey S. Powell about lessons from past controversies and about the potentials and concerns that lie ahead. This interview has been edited for length and clarity.
What are the most pressing issues in bioethics today?
Bioethics is a broad field! In general, classic research ethics and institutional review boards often deal with what we call natural kinds, those involving typical living entities such as ourselves, animals, and embryos. In my little corner of bioethics, I deal with the impact of new biotechnologies—technologies surrounding entities that do not have straightforward analogs in nature and therefore do not neatly fit in the existing categories of research ethics. Take, for example, research using self-organizing embryo-like models or the integration of human cells into pigs. How much leeway do researchers have with what they have created and what they can do with it? There needs to be a system of oversight and care taken in knowing what is going on in research institutions, what is allowed and what is not. This goes all the way to the general public, because those ethical questions will start to arise from them, too. Some of the questions bioethicists such as myself ask are foundational ones of oversight, such as what committee should review the work in question, what kinds of questions should the committee ask, and what counts as approval or disapproval. There are
also more philosophical questions revolving around, for instance, the moral status of new biological entities created through genetic engineering and biotechnology. Science needs public support, so an important element in bioethics is communication. Overreactions and misinformation could lead to restrictive policies that might prematurely truncate progress in the field.

There had been a lot of public debate about the use of stem cells for medical research, but recently that has died down. Was the scientific hype justified? What about the ethical concerns?
Most of the initial stem cell research was aimed at understanding the basis of development and the causes of disease at the cellular level. What got the attention of the public was the idea that such technology could one day provide replacement cells for patients with, for example, diabetes—it offered hope. On the other side of the coin, because these cells initially originated from human embryos, this technology was closely linked to pro-life and abortion debates. Because there was a lack of funding and support at the federal level for this research, we ended up having a war in the trenches between the cheerleaders and those who demonized their use. How did it get resolved?
By developing alternative technologies based on those original studies that used human embryonic stem cells. From a bioethical perspective, the introduction of induced pluripotent stem cells (iPSCs)—embryonic-like stem cells generated from adult cells—resolved many of the controversies. We now have the technology to reprogram an adult skin cell into a stem cell and then manipulate a bunch of those together into something that looks like and starts to behave like an embryo. Nonetheless, there are still scenarios in which specific research questions can be answered only with human embryonic stem cells, such as when studying very early human development. We could not have created iPSCs without the knowledge gained from human embryonic stem cells, so a lingering moral relationship exists between the two. But because we can now make stem cells that are not of embryonic origin, the focus now is not on where the cells came from, but on what they can become and do.

Recently there have been ethical controversies over chimeras—engineered animals that contain cells from another species, even humans. What are the implications of chimera research?
These studies have a wide range of goals. Chimeras can be used to study
the development and fate of the newly introduced stem cells within the animal model, or to model diseases that are normally specific to humans, and do it in the context of a whole organism. These animal models can be used for more translatable research, such as for preclinical testing of stem cell–based therapeutic products (for example, lab-grown pancreatic cells) before human trials—or, in a more pie-in-the-sky scenario, to fully understand how to grow a variety of replacement tissues in vitro. The idea here is similar to 3D printing: If you understand how something is made from scratch, then you can write the instructions to make it happen. In the future, one goal is to grow transplantable organs generated from human stem cells in livestock animals for future engraftment in patients. Chimeras, like iPSCs, will continue to be surveyed through the lens of bioethics, since their future use and the implications of potential technologies based on them are subject to similar controversies.

What do you see as the biggest areas of ethical concern with human genetic engineering?
We can categorize gene editing into three broad categories. They differ not only in the aims of the technology used, but also in the level of ethical issues they raise. The first category is somatic cell gene editing, which involves changing the genetics of a living person in some particular part of their body, such as a genetically defective liver. For example, a patient may get some kind of genetic change for their own benefit—such as to overcome sickle cell disease—but that change cannot be passed on to future generations. Then you have two forms of germline engineering: one that is an in vitro–only method for research purposes, where embryos are genetically modified in a dish but not implanted in a womb, and another that is the reproductive use of germline modification to make babies that can avoid inherited disorders. Most people would categorize their ethical concerns as increasing in the order listed here, but I would argue to reorder the controversy. I think that somatic cell engineering deserves a lot more ethical scrutiny and discussion than it is currently given. In that realm, you have the possibility of unexpected outcomes in clinical trials and terrible things happening to real people
like Jesse Gelsinger [see Resnik, pages 232–237]. Meanwhile, the third category—generating engineered offspring—is often considered the most controversial one, but its practice is heavily restrained by funding agencies and is punishable by law in most countries.

You have been exploring the ethics surrounding another fairly new biotechnology: brain organoids, complexes of neural cells grown in the lab. What are the most immediate concerns with them?
There are a few. Organoids need to be validated as models of human development, and one way to do that is by comparing them with primary fetal tissue. Whether you directly generate the data from such tissue or use solely the published data, organoid models need that validation. In order for brain organoid research to thrive, there needs to be alongside it the possibility of fetal tissue procurement and usage, and that can be subject to legal restrictions influenced by, for example, political administrations. Another issue is communicating organoid research to the public. The researchers who work with brain organoids are people too. Sometimes they have to interact with tissue donors and their families—when, for example, there is a need for cells from a patient with a known or unknown neurological disorder to develop a brain organoid model from them. Researchers need to answer their questions about these brain organoids and what they can do for others affected by similar disorders. There are a lot of emotions built into these interactions, and some researchers are not yet prepared for these discussions. Lastly, there are the issues of the motivations behind the wide array of things researchers can do with the organoids. Some researchers want to study how brains develop; others want to use organoids for downstream applications, such as pairing them with devices to model human neural networks through computer engineering. When the research starts pushing more heavily into the engineering side, new ethical questions arise about defining the boundaries and limits on what can be made [see accompanying essay by Lopes et al., pages 208–209].

How do you decide at what point a biological possibility becomes an issue that requires ethical regulation—for
instance, in the case of human stem cell–based embryo modeling?
It is hard to know exactly when you have reached that point, because in this example the only way you can definitively answer whether an embryo-like model is developmentally competent is to transfer it into a womb. But no institutional research committee would allow that, not even in nonhuman primates. Or the U.S. Food and Drug Administration would intervene, as this would be a highly manipulated construct aimed, in this case, at clinical purposes. At least in the United States, the most one can do is infer the potentials from other data—from what you can analyze from the embryo model alone. In addition, the National Institutes of Health follows the Dickey-Wicker Amendment, which prohibits federal funding for any experiments in which human embryos are created, harmed, or destroyed. The definition of human embryo under this amendment is very broad and basically includes anything that could generate offspring—whether a product of fertilization, cloning, or other biotechnological approaches, or an entity made from human diploid cells. The last item would include iPSCs or an embryo model. If the entity has developmental potential, then it follows the current policies regulating embryo research using in vitro fertilization techniques. This definition is not limited to the United States; many countries are defining a human embryo not by how you created it, but by what it has the potential to become.

More fundamentally, how do you develop an ethical framework for truly new biomedical technologies?
When you do bioethics, you can do it either at the scholarly or at the policy level. At the scholarly level, I can sit in my office, read philosophical and scientific literature, and come up with the best arguments for how to think about these embryo models, say, and their moral status, and I can generate my conclusions. All you are doing here is analyzing arguments and writing solutions that put forward your own thinking on the particular issue. Then you publish your article and other people discuss it. That approach is fine, but the results do not always impact policy making. Another bioethical approach—one that usually impacts policy—is to work in teams with other experts from different disciplines to arrive at practical guidelines or
a broad ethical framework to guide the pursuit of science in a particular area. I have had the pleasure of working on both of these bioethical approaches—the scholarly and the policy levels. For example, the International Society for Stem Cell Research just released new research guidelines for 2021 on chimeras, embryo models, and genome editing, among many other issues. I had the pleasure of working closely with a global team of science and ethics experts and even got to do some really interesting philosophizing in the process.

What’s an example of an ethical success story—ethics standards that helpfully guided a new biotechnology?
A good example revolves around the human cloning debate in 2005, where researchers hoped to use cloning techniques to generate stem cells that were genetically matched to particular patients. This would have involved the transfer of a patient’s DNA into an unfertilized human egg that had had its maternal nucleus removed. But back
then, scientists did not have enough eggs for any kind of research, because we were not allowed to pay women for research participation—for donating the eggs—and no participant would do it for free. Women who had experience donating their eggs at in vitro fertilization clinics knew that they would get a minimum of $5,000 from couples trying to conceive. These women knew they could get paid for their time, effort, and inconvenience, but only if they went through that very same egg procurement process for reproductive use. This seemed unfair to me. In 2006, I wrote a commentary in Nature in which I argued that there’s good reason to pay healthy volunteers for research egg donation, the same as you would pay anybody else who volunteers for an invasive procedure for basic research. Many questioned my argument, as there were a few U.S. state laws against compensation for research egg donors, and the National Academy of Sciences had established that such pay was unethical, but I wanted to at
least air out the rationale for what I believed. The result? The State of New York used my commentary to help come up with their own policies, allowing compensation for time, effort, and inconvenience to research egg donors. That ended up having an impact on science, because some people were persuaded—and they moved a variety of scientific research forward that depends on the procurement and use of human eggs for basic research.

In your work, you have to confront both personal morality and institutional morality. Do you find it difficult to navigate between the two?
There is not a sense of disconnect between my academic work and how I feel about the limitations that are set in policy. There is, however, one good exception. This is the 14-day limit on embryo research, which prohibits scientists from culturing human embryos in a dish beyond two weeks of continuous development in their laboratories.
Melissa Lopes, Richie E. Kohman, Jeantine E. Lunshof, Bruna Paulsen, Martina Pigoni, Sasha White, John Aach, Insoo Hyun | The reality of organoids
Minibrains: What’s in a Name?
Brain organoids have emerged in recent years as a powerful new tool for neuroscience research. Unfortunately, the promise of this tool has been muddied by misplaced fear and hype, born of uncertainty about what brain organoids are and what they can do. Brain organoids have been described as minibrains and brains in a vat. Such terms conjure up horror-movie visions of self-actualized, independent entities that could run amok in the lab and beyond. A greater understanding of organoids reveals that these popular names are ambiguous at best and deceptive at worst, obscuring the true promise and limitations of the new research. In reality, brain organoids look nothing like living brains, and they are quite limited in their action. Organoids are clusters of a select population of cells found in a brain that emerge not from the brain itself but from human stem cell cultures grown in a dish. Under certain conditions, these clusters develop into three-dimensional structures that mimic various aspects of the developing brain. The resulting structures are minuscule—about the size of a pea—and are formed entirely in vitro. For decades, researchers have used animal models and two-dimensional cell cultures to gain insight into brain development and disease. That work has led to impressive advances in science and medicine. However, neither the
animal nor the two-dimensional cell culture model could fully capture human-specific features or reveal the circuit and structural features of human neurobiology in health and disease. By contrast, three-dimensional organoid models of human cellular structures carry unique human genetic makeup and mutations—providing scientists with the possibility to observe, test, and understand several aspects of brain formation and evolution. The human brain is incredibly complex and relies on inputs and other senses as well as outgoing connections to human tissues. One needs only to look at a baby to see the manifestation of these complex connections in real time—as babies hear repeated terms, they form words of their own; as they touch and see things, they begin to understand the world around them; signals from the brain coordinate these responses. Lacking interactions with other body tissues, and lacking the ability to sense and process stimuli from the outside environment, a brain organoid is a miniature model for only some features of brain architecture and physiology.
If brain organoids are so limited and so easily distinguishable from human brains, why all the fanfare? Because, under the right conditions, these organoids are better able to replicate some of the cell-type, developmental, and tissue-specific features of isolated regions of an early-stage
Some colleagues and I published an article recently advocating for letting some researchers have an exception to the rule and explore just a little beyond that, in slow increments. The current 14-day limit was established in the early 1980s when in vitro fertilization was just beginning and the human stem cell technology just was not there. Now, some 40 years later, this 14-day limit merits reevaluation. We recommend keeping the 14-day limit as a default policy and, for example, allowing a few qualified teams with close monitoring to explore just a bit beyond, report back the results, and then decide whether it is worth it to keep going. I do not know whether anyone is going to take that on as a policy, but I do think that science policy makers have to think about it. An important takeaway, especially for scientists to understand, is that good guidelines and ethical standards do not get in the way of science. They help pave the way. The 14-day limit is a great example. Back in the early ’80s it was key at the beginning of in vitro fertilization and human
embryo research to have such boundaries, because it carved out a playing field. Without those guidelines and the 14-day limit, we would not have had human embryonic stem cell research in the ’90s and the knowledge that came from it. With our latest article on revisiting the 14-day limit, I also think this is an area where people might now consider revising existing policies. When done right, policy can help pave the way for more good science.

When you try to anticipate new bioethical concerns, how far into the future do you look?
I typically like to go in increments of five years. Although science moves quickly, one can get too far ahead, overreacting to possible but improbable future scenarios, which can hinder the future of science. For example, when Dolly, the first cloned sheep, was born in 1996, people freaked out and erroneously imagined that it would be just a short matter of time to go from sheep to human cloning. But what nobody realized
ZUMA Press, Inc./Alamy Stock Photo
Brain organoids don’t look like brains, nor do they entirely act like brains. Nevertheless, they are useful models in neurobiological research.
human brain than are previous research models. For instance, organoids provide scientists with ways to understand how genetic mutations alter brain physiology, genetically manipulate and model disease progression, and screen for novel therapeutics in real time in the most relevant scenario in terms of both tissue and architecture. These models represent a great and challenging future opportunity to answer intriguing questions about brain development that have puzzled the field, and to better understand the root causes of certain conditions such as Alzheimer’s disease and autism spectrum disorders. A greater understanding of the stages and features of brain development and disease is necessary for scientific discoveries to move forward and to improve the quality of life of people who are affected by brain disorders. Previous systems, such as cultures of immortalized neurons and rodent models, have started us down the path to such
was that it is not that simple to go from the sheep model to human cloning. In fact, developing a cloning procedure for nonhuman primates took well over a decade. Yet, there were immediate moves in some countries in 1998 to outlaw any type of human cloning, with restrictive, ill-defined policies. Those regulations were broadly defined and impacted cloning for basic research purposes. At the International Society for Stem Cell Research, we are looking only five years out because if we try to determine guidelines based on what could happen—without knowing whether those scenarios are even possible—we might truncate research by restricting freedom of scientific exploration. For rapidly moving biotechnologies like the ones we are discussing here, it is best to proceed with guidelines that are flexible and responsive to the science, ones that can be fine-tuned in real time.
A companion podcast is available online at americanscientist.org.
understanding, but they lack features unique to the human brain and therefore pose critical limitations. In addition, the availability of human brain tissue is limited, and samples derived from biopsies often offer only a final snapshot of the disease, masking the underlying mechanisms behind the disorder. A model of brain development that allows scientists to understand the cell types and brain circuitry affected by diseases, as well as the developmental aspects that go awry, would move us further along the continuum to understanding the brain. Brain organoids have fundamental limitations, so they will never be able to answer all the big questions in neuroscience. They are also, by extension, limited in their ethical implications. This will remain true, even though the field is advancing quickly and scientists are continuing to improve and develop more complex brain organoid systems. But if this work proceeds openly and responsibly, organoids have great potential to help us understand more about neurological disorders and about the workings of the human brain. Melissa Lopes, senior research compliance officer, Harvard University; Richie E. Kohman, synthetic biology platform lead, Wyss Institute for Biologically Inspired Engineering at Harvard University; Jeantine E. Lunshof, ethicist at the Wyss Institute for Biologically Inspired Engineering at Harvard University; Bruna Paulsen, postdoctoral fellow, department of stem cell and regenerative biology, Harvard University; Martina Pigoni, postdoctoral fellow, department of stem cell and regenerative biology, Harvard University; Alexandra (Sasha) White, medical student, Cleveland Clinic Lerner College of Medicine of Case Western Reserve University; John Aach, senior scientist, Church Lab, Harvard Medical School; Insoo Hyun, director of research ethics, Harvard Medical School, and professor of bioethics, Case Western Reserve University School of Medicine. Email for Hyun: [email protected]
John W. Traphagan | Extraterrestrial communication forces us to confront cultural assumptions.
Who Should Speak for the Earth?
On November 16, 1974, a small team led by astronomer Frank Drake beamed a coded signal from the Arecibo radio telescope in Puerto Rico toward globular star cluster M13, about 25,000 light years away. The message contained a rudimentary map of our Solar System, a graphic depiction of the composition of DNA, and a human stick figure. The content consisted of just 1,679 binary digits, and the odds of it ever being intercepted are extremely low, although not zero. Even if it were received, would it be understood? When I show a printout of the message to college students in my classes, they always have a difficult time interpreting most of the contents other than the stick figure—and my students are all human. No matter how unlikely success may be, though, any effort to contact extraterrestrial life invokes existential risks to humanity. Scientists and politicians would, ideally, need to figure out how to communicate the news to the world in a constructive and culturally sensitive way. The time to start thinking about such seemingly far-off issues is now. Two years before the Arecibo message, NASA had launched toward interstellar space the Pioneer 10 and Pioneer 11 probes, which carried similar messages engraved on gold-anodized aluminum plaques. Three additional space probes are now headed out of the Solar System, as are several more radio messages. Human history is littered with scientific endeavors whose ethical merits were questioned only after the
cat was out of the bag, the creation of the atomic bomb being a notable example (see Howard, pages 203–205). Any attempt at active communication with extraterrestrials poses even greater moral challenges, which past messaging attempts have tended to ignore. This problem continues today, as some private groups have started talking seriously about transmitting powerful directed signals toward other stars. In 2017, the nonprofit METI International sent a science and math
tutorial to Luyten’s Star, a red dwarf with a potentially Earth-like planet located 12 light years from Earth. (Full disclosure, I am on the METI International advisory council.) Amusement park mogul Bill Kitchen has floated plans to beam the entire contents of Google’s servers into space. If we are going to continue sending messages intended for alien civilizations, we need to figure out how to maximize the possibility of meaningful contact and minimize the risk of misunderstanding. We need a useful process for assessing the potentials and risks. Above all, we need to grapple with a sweeping cultural question: Who should be allowed to speak for the Earth?
Cosmic Naivete

The search for extraterrestrial intelligence, or SETI, is typically broken into two areas of work: Passive SETI (listening for messages from space) and Active SETI, which is commonly referred to as METI (messaging extraterrestrial intelligence). SETI scientists often assume that an alien civilization capable of sending or receiving a message from humans would be both technologically advanced and long-lived, existing for perhaps thousands or millions of years. Some scientists, including astronomer Jill Tarter of the SETI Institute, have argued that E.T. is likely to be altruistic. (See “First Person: Jill Tarter,” September–October 2018.) Their reasoning is that the longevity of our alien interlocutors’ civilizations means they would present no threat to humans and, instead, could be morally and socially advanced enough to help us through our own civilizational “adolescence.” As appealing as this assumption may be, it does not align with evidence from humanity’s past, with its long history of imperial and colonial subjugation of many societies by those who had economic and technological advantages. To me, this belief in benevolent aliens shows a distinct naivete—a projection of human hopes rather than a serious attempt to examine evidence from evolutionary biology and anthropology. Such naivete, in turn, has led to unwitting but serious mistakes throughout human history. With my background in East Asian studies, one example that comes to mind is Japan’s imperial expansion in the early 20th century.
QUICK TAKE

Even if the likelihood of reaching an alien civilization is extremely small, efforts to contact extraterrestrial life pose an existential risk to all of humanity.
Human history has shown that interactions between distinct cultures are often disruptive, challenging, and even violent. Interactions with aliens could be similarly dangerous.
Intercultural diplomacy will be a crucially important piece of communicating with aliens, and we will need people from across academic disciplines and the globe participating.
NASA/Ames Research Center; Smithsonian’s National Air and Space Museum
In some cases, inhabitants of invaded territories believed in the Pan-Asian ideology that the Japanese government used to justify its actions and welcomed invading troops as liberators from Western imperialism, only to be faced with brutal subjugation from a new force. The Arecibo message, the Pioneer plaques, and the Golden Records—audiovisual recordings that astronomer Carl Sagan included on NASA’s twin Voyager spacecraft launched in 1977—have opened up all of humanity to the unknown risks of alien contact, however unlikely. As Clemson University philosopher Kelly Smith has noted, we know very little about our immediate interstellar neighborhood. It is possible we live in a dangerous area, populated by civilizations waiting around for an opportunity to plunder. An intentional message from Earth intercepted by such a civilization might be viewed as an invitation to come here and dominate our world, or as an excuse to send some sort of computer virus to disable much of our planetary technology just for fun, the way a child stomps on an anthill. Even a well-meaning alien scientific expedition could have disastrous effects, as has happened repeatedly among human civilizations here on Earth. These scenarios may seem far-fetched, but given our current limited
The plaques (circled in bottom image, detail in top image), which were mounted on NASA’s Pioneer 10 and 11 spacecraft (launched in 1972 and 1973, respectively), visually explain who sent the spacecraft into the universe, when it was sent, and from where. The European-looking human figures are supposed to be representative of all humankind, revealing the unconscious biases of the plaques’ creators. When message-carrying spacecraft such as Pioneer 10 are launched, they establish a precedent and set a challenge to do better with future communications.
understanding, we have no way of evaluating whether or not that is true. Any message sent into space carries with it at least some risk to our survival and well-being. Some have argued that these moral issues are essentially moot, because Earth has been leaking electromagnetic radiation since the 1930s. But there is an important difference between leaked human broadcasts, many of which are
quite weak, and directed messages sent with the goal of drawing attention to our presence in the universe. In 1974, the Arecibo message was created at the direction of Sagan and Drake, with the assistance of the Arecibo staff, as part of a celebration of an upgrade to the Arecibo telescope. It was sent without any larger outside debate about the value or safety of sending such a message.
Anne Nordmann/CC BY-SA 3.0
If you could successfully decode the Arecibo message, which was intended to be universally intelligible, you would see (from top to bottom) the numbers 1 to 10; the atomic numbers for hydrogen, carbon, nitrogen, oxygen, and phosphorus; formulas for the chemicals that make up DNA; a visual representation of DNA; a human figure flanked by Earth’s 1974 population (4 billion) and typical human height (5 feet 9.5 inches); a schematic of the Solar System; and a sketch of the Arecibo telescope, indicating the diameter of its dish.
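A detail worth making explicit is why decoding is possible at all: 1,679 is the product of two primes, 23 and 73, so a recipient who factors the message length finds essentially one nontrivial way to arrange the bits into a rectangle. Below is a minimal Python sketch of that first decoding step; the bit string is a placeholder, not the actual transmission.

```python
# The Arecibo message's length, 1,679 bits, factors only as 23 x 73,
# a hint that the bits form a rectangular image. The bit string here
# is a stand-in; the real transmission's bits differ.
bits = "0" * 1679  # placeholder for the 1,679 binary digits received

def factor_pairs(n):
    """All (rows, cols) pairs with rows * cols == n."""
    return [(d, n // d) for d in range(1, n + 1) if n % d == 0]

print(factor_pairs(1679))  # [(1, 1679), (23, 73), (73, 23), (1679, 1)]

# Render the intended arrangement: 73 rows of 23 bits each.
rows, cols = 73, 23
for r in range(rows):
    row = bits[r * cols:(r + 1) * cols]
    print(row.replace("1", "#").replace("0", "."))
```

Laid out as 73 rows of 23 bits, the ones and zeros of the real message form the pictures the caption describes; the transposed arrangement, 23 rows of 73 bits, produces only noise.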
At the time, many scientists assumed that the people operating radio telescopes and their colleagues should lead any interstellar communication efforts, meaning astronomers with the help of a few others, such as linguists as seen
in the movie Arrival or in Sagan’s novel Contact. Since then, SETI and METI projects have continued to be viewed primarily as technical endeavors that need to be led by those with technical expertise, even though a positive result from such a search would be much more than a scientific challenge: It would primarily be a social and moral puzzle that involves intercultural communication and diplomacy. Astronomers generally lack training in areas such as the study of culture, social organization, and diplomacy. We also have to ask whether a small group of highly educated scientists have the moral right to speak on behalf of the entire planet. After all, most of the attempts at METI to date have been spearheaded by white male Western astronomers, who have not made significant efforts to show the diversity of human life on Earth. The Arecibo message includes no intentional cultural information, and the Pioneer plaques display decidedly European-looking male and female figures as if they were representative of all humankind. Sagan made a determined attempt to do better on the Voyager Golden Records, including photos that show some degree of Earth’s cultural, racial, and ethnic diversity. Nevertheless, the contents remain skewed toward the interests of Western-centric astronomers. Twenty-seven percent of the music included, for example, is of the European classical genre. The records also contain no images of the darker side of humanity, such as war, poverty, and environmental degradation. Despite Sagan’s efforts, the Voyager records still end up showcasing romanticized and ethnocentric representations of the human condition. Astronomers need to broaden the conversation and reframe contact with E.T. as a moment of intercultural communication, not just interstellar communication. In any such conversation, no matter how slow or complicated, diplomacy will be vital. Again, historical examples spring to mind of the devastating results from diplomatic missteps or failures of understanding between culturally different societies. In 1941, the Japanese believed that the pacifist movements of 1930s America made a strong response to the attack on Pearl Harbor unlikely. Their misreading of American thinking left close to 3 million Japanese dead by the end of World War II.
On Alien Rights

In my view, all SETI and METI projects should be international efforts led by experts in the social sciences and diplomacy and designed to involve cultural experts from the start. Unfortunately, most SETI and METI work today remains dominated by natural scientists, with minimal input from social scientists and people in the humanities, although the situation has started to change somewhat. One notable step forward is the formation of the Society for Social and
Conceptual Implications of Astrobiology, an organization led by Kelly Smith that aims to increase the involvement of scholars outside of the natural sciences in discussions about the social and cultural implications of the search for extraterrestrial life. Another is the creation of METI International by psychologist and astrobiologist Douglas Vakoch for the purpose of supporting and developing active, collaborative SETI projects. Although these organizations represent an important step toward making the search for extraterrestrial intelligence more interdisciplinary, we could do much more. Diversifying the group of people who are listening for alien signals is also important. Even a completely benign message received from an alien society may present a risk for humans. As my colleague Ken Wisian and I have written, if governments here on Earth perceive control over information received from another world as significant from an economic or national security perspective, it is likely they will compete to gain control of that information. That competition could lead to a cold war, or even a hot war if the stakes seem high enough. Information is a valuable resource, and
Glasshouse Images/Alamy Stock Photo
governments will take risks to acquire it. Such action can have serious consequences. For example, Japan’s attack on Pearl Harbor was in part a calculated risk the country took to gain access to resources such as oil in Southeast Asia. The potential for an incoming message to further divide humanity compels us to ask if it is even worth listening for a signal. Awareness of how an alien message could disrupt human society at large also raises the point that any message we send carries its own moral responsibilities toward the civilizations that might receive it. A message from Earth intercepted by a divided world like our own could be highly disruptive, for instance. Even a unified planet could be destabilized if the extraterrestrials who accidentally pick up our message happen to be xenophobes. We should at least consider the idea that we have a moral obligation not to send a message, on the grounds that doing so could disrupt an alien civilization. We’ve seen such disruptions occur time and again in human history. In the 19th century, American Commodore Matthew Perry arrived with his steamships in Tokyo Bay and coerced the Japanese into trade with the West, ushering in a series of events that led to a brief civil war in Japan and the complete restructuring of Japanese society over the following decades. Disruptions don’t just mean war and violence, however, and can come in many unexpected forms. In the early 20th century, more than 100 distinct religious movements sprang up in the Melanesian islands in the southwestern Pacific in response to the arrival and influence of Western colonial outsiders. What risk aliens pose to humans—and we pose to them—comes down in large part to how we and they think about “rights.” Will we see aliens as having the same status as humans, or will we regard them as subhuman, superhuman, or simply nonhuman? For centuries, colonizers have treated their conquests as inferiors, something that Neill Blomkamp brilliantly recast in extraterrestrial terms in his film District 9. The things we call “rights” are cultural products. Fundamentalist Christians, who believe humans are created in the image of God, might in general be uncomfortable granting rights to nonhuman intelligent beings, whereas Japanese Buddhists, who make no such claim to the special status of humans, might not
When American Commodore Matthew Perry arrived in Tokyo Bay in 1853 with his steamships, he did not confront the Japanese with bloodshed. But he did coerce Japan into trade with the West, setting off a brief civil war and the complete restructuring of Japanese society. This is one of countless examples from human history of how contact between two distinct cultures can have unexpected and potentially devastating consequences. There is no reason to expect that contact between humans and aliens would be any different.
perceive such a conflict. There’s no reason to think an extraterrestrial civilization would be any more united than humanity is on this question. For now, the only civilization or data point we know is our own—and it isn’t one civilization but a conglomeration of many past and present civilizations. Around the world, separate populations have developed significantly different forms of social organization as well as distinct value structures. The SETI and METI debate offers a wonderful opportunity to think about our diversity in a new and more respectful way. But the complexity of human behavior indicates the absurdity of making simplistic assumptions about alien cultures.

Existential Ethics

The simplest way to avoid any ethical pitfalls is to bury our collective heads
in the Earthly sand and neither listen to messages from nor send messages into interstellar space. I don’t think that’s the solution, but I do think that the scientific community needs institutionalized methods of ensuring that SETI and METI research is done in an ethical way, one that is sensitive to a wide range of cultural and moral values. We also need to ensure that proper consideration—from diverse racial, ethnic, gendered, and cultural viewpoints—is given to the risks and potential harms of broadcasting interstellar messages. The best model for this at present is the Institutional Review Board system that government institutions, universities, and hospitals use to vet the ethics of all research on human and animal subjects (see Resnik, pages 232–237). NASA already has such boards to review, approve, and monitor research involving astronauts and other human
subjects, but no such process is currently in place for SETI and METI projects. Going forward, sponsored research involving either passive SETI or METI targeting should be classified as research involving intelligent beings. Given that designation, it should have to undergo a full approval process, which would involve submitting a detailed justification of the research to a diverse, interdisciplinary board whose members would evaluate the proposed work for its adherence to established norms of ethical research. Ideally, there would be an international review structure in place as well. The International Astronautical Federation or the United Nations could conceivably create such a forum, although enforcement across borders would be a major challenge. In addition to protecting humanity from a potentially disastrous mistake, careful consideration of the implications of SETI and METI research could also help scientists in many fields focus on their responsibility to consider the full implications of their research. The ethical challenges of SETI and METI can be instructive precisely because they seem so unlikely and so remote from everyday pragmatic
concerns. Even extremely unlikely risks deserve close attention if they have the potential for catastrophic consequences. And efforts to contact extraterrestrial civilizations provide a fresh way to view the long, troubling history of contact between different cultures on our own planet. We have an obligation to interrogate who is deciding, and who should decide, whether and how we try to reach life beyond our planet. It may be too late, now that some messages have been sent, but it will certainly be too late to begin thinking about these things after a message is received. We have to remember that attempts to make contact with aliens—even if they keep coming up empty—involve all of humanity, in all its cultural, religious, gendered, racial, and ethnic diversity.

Bibliography

Deudney, D. 2020. Dark Skies: Space Expansionism, Planetary Geopolitics, and the Ends of Humanity. New York: Oxford University Press.

Haramia, C., and J. DeMarines. 2019. The imperative to develop an ethically informed METI analysis. Theology and Science 17:38–48.

Rummel, J. D., and L. Billings. 2004. Issues in planetary protection: Policy, protocol and implementation. Space Policy 20:49–54.
Smith, K. C. 2020. METI or REGRETTI: Ethics, risk, and alien contact. In Social and Conceptual Issues in Astrobiology, first edition, eds. K. C. Smith and C. Mariscal. New York: Oxford University Press.

Schwartz, J. S. J. 2020. The Value of Science in Space Exploration. New York: Oxford University Press.

Traphagan, J. W. 2016. Science, Culture and the Search for Life on Other Worlds. Switzerland: Springer International Publishing.

Traphagan, J. W. 2017. Do no harm? Cultural imperialism and the ethics of active SETI. Journal of the British Interplanetary Society 70:219–224.

Traphagan, J. W. 2019. Active SETI and the problem of research ethics. Theology and Science 17:69–78.

Traphagan, J. W. 2021. SETI, evolutionary eschatology, and the Star Trek imaginary. Theology and Science 19:120–131.

Wisian, K. W., and J. W. Traphagan. 2020. The search for extraterrestrial intelligence: A realpolitik consideration. Space Policy 52:101377.
John W. Traphagan is professor and Mitsubishi Fellow in the Department of Religious Studies and the Program in Human Dimensions of Organizations at the University of Texas at Austin. His research focuses on the intersection of religion, science, and culture in two areas—space exploration and Japanese culture. Email: [email protected]
Engineering | Structures must live up to their implicit promise of safety.
The Albert Bridge in London has a notice (inset) on its green-and-white tollbooths (left) warning that crowds with coordinated footfalls can cause the bridge to vibrate disconcertingly. Marc Zakian; Rob Potter/Alamy Stock Photo
Engineered for Trust

Henry Petroski
Two centuries ago, all engineering was included under the rubric civil, but the advancement of technology led engineering fields to multiply and engineers to specialize. Today, each kind of engineering has its own societies and institutes, and its own code of ethics. The idea of a profession of engineering originated in late-18th-century Britain, but it soon spread to the United States. Regardless of their field of practice, members of the National Society of Professional Engineers (NSPE) are “expected to exhibit the highest standards of honesty and integrity” and “be dedicated to the protection of the public health, safety, and welfare.” In other words, they must be trustworthy. The all-inclusive NSPE was founded in 1934,
before the time when states had professional registration and licensing laws. The need for engineering codes of ethics had become evident by the end of the 19th century. Not everyone who called themselves engineers had the educational or experiential credentials expected of professionals. Such self-proclaimed but incompetent “engineers” designed structures and machines that resulted in the likes of bridge collapses and steam-boiler explosions. It was in such a milieu that individual states began to enact laws restricting the use of the title engineer and requiring the licensing and registration of “professional engineers,” the only ones who could append the suffix Professional Engineer (P.E.) to their name. In the United States to this day, the right to use the designation P.E. is granted by the individual states. Engineers who practice in more than one state must sit for the qualifying examinations in each jurisdiction that does
not have a reciprocal agreement with the first state of registration. (In contrast, in Britain and former Commonwealth countries it is the professional societies that administer qualifying examinations and control the use of the designation “Chartered Engineer.”) The first state to pass an engineering licensing law was Wyoming, which did so in 1907; the last was Montana, in 1947. Only a P.E. has “the authority to sign and seal engineering plans,” thereby declaring his or her ultimate responsibility for a project.

Bad Vibrations

Engineers are seldom known to the public as flesh-and-blood individuals; they tend to be known through their works. They may remain anonymous, but they are expected to design and bring to fruition bridges and other structures that are safe. Members of the general public do implicitly trust engineers and their works, and they demonstrate this by walking and driving across a new bridge in awe and celebration of even the most daring and unusual of designs. It was with an implied trust that hundreds of thousands of people crowded onto the Golden Gate Bridge on
While the Brooklyn Bridge was under construction, warning signs on catwalks advised against any kind of coordinated motion that might make the catwalks resonate dangerously. After the bridge was completed, a woman tripping and falling led to a panic that, as Frank Leslie’s Illustrated Newspaper showed (below), caused a deadly stampede.
Melissa Jooste/Alamy Stock Photo
May 24, 1987—the occasion of its 50th anniversary. The span was closed to vehicle traffic for the day and was to be opened up to pedestrians only after the usual pomp and circumstance. However, the people who showed up to walk across the bridge were in no mood for speeches and preempted them by swarming onto the bridge and occupying virtually every square foot of its roadway. Some early participants engaged in carnival-like activities, but after a while most just stood in place because they could not move through the tightly packed crowd. However, as the revelry was going on, engineers watching from afar noticed the bridge’s roadway visibly straining under the unprecedented load, its graceful arc flattened out. The Golden Gate Bridge did not fail that day, but engineers believe it was closer to doing so than at any other time in its existence. On the occasion of the bridge’s 75th anniversary, access to the bridge was severely restricted. In 2000, hordes of Londoners showed up on the opening day of the newest bridge across the River Thames. They wanted to be among the first to walk across the graceful pedestrian structure that was considered also to be a piece of public art. The growing crowd of people trusted the Millennium Bridge to be safe, but before long it began to sway to such an extent that those on it began to fall in step with the rhythmic motion, which caused the swaying to grow in amplitude and drive the strollers off balance and into the side rails. The bridge was closed, tested, retrofitted with what are essentially shock absorbers, and retested to ensure their effectiveness. After almost two years it was reopened and public trust was reestablished.
The Millennium Bridge was not the first to misbehave underfoot. The Albert Bridge, located only a mile or so away, had opened in 1873. The hybrid bridge—a combination suspension, cable-stayed, and beam design—soon developed a reputation for vibrating noticeably when large numbers of people walked across it. An attempt to steady the structure in the 1880s was only partly effective. To this day, people approaching the bridge are greeted by a notice that reads “All troops must break step when marching over this bridge.” The behavior of the Albert Bridge and similarly misbehaving spans on the Continent was known to engineers across the pond. The Brooklyn Bridge was under construction from 1869 to 1883. When catwalks were slung between its towers in preparation for spinning its steel cables, engineer-in-chief Washington Roebling had signs posted that read, “Safe for only 25 men at one time. Do not walk together, nor run, jump or trot. Break step!” Although the catwalks were intended for use only by construction workers, inspectors, and officials, the repetition of the warning to “break step” clearly demonstrates that it was known that a flexible structure could be dangerously excited by the rhythmic force of synchronized footfalls.
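The physics behind such warnings is resonance: a periodic force applied near a flexible structure's natural frequency feeds energy into the motion faster than damping can drain it away. The toy simulation below (a sketch with illustrative, made-up parameters, not a model of any real bridge) shows the effect for a lightly damped oscillator:

```python
import math

# Toy "break step" demonstration: a lightly damped oscillator driven
# by a periodic force. All parameters are illustrative assumptions.
omega_n = 2 * math.pi * 1.0   # natural frequency, here taken as 1 Hz
zeta = 0.01                   # damping ratio: lightly damped
dt, steps = 0.001, 60_000     # integrate 60 seconds of motion

def peak_sway(omega_drive):
    """Largest displacement reached under forcing at omega_drive."""
    x = v = peak = 0.0
    for i in range(steps):
        t = i * dt
        # x'' + 2*zeta*omega_n*x' + omega_n**2 * x = sin(omega_drive*t)
        a = math.sin(omega_drive * t) - 2 * zeta * omega_n * v - omega_n**2 * x
        v += a * dt
        x += v * dt
        peak = max(peak, abs(x))
    return peak

print(peak_sway(0.5 * omega_n))  # out of step: small response
print(peak_sway(1.0 * omega_n))  # in step with the structure: far larger
```

Marching out of step spreads the footfall forcing across frequencies and keeps the response small; marching in step concentrates it at one frequency, and when that frequency sits near the structure's own, the sway builds up many times over.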
Such reminders have caused some members of the civilian population to suspect any newly completed span. Because the Brooklyn Bridge was intended to carry carriage and rail traffic as well as pedestrians, the conceptual design produced by John Roebling—whose death due to an early construction accident led to his son, Washington, succeeding him as chief engineer—was for a wide, stiff, and steady structure, like the eminently successful ones he had already built across the Niagara and Ohio rivers. However, this one connecting Manhattan and Brooklyn over the East River was of unprecedented span length. In the week following its grand opening, the bridge was crowded with pedestrians when a woman slipped and fell, triggering a rumor that the bridge was collapsing. The resulting stampede led to a dozen deaths. To allay such fears, the engineer of a yet-untested bridge would sometimes stand beneath his creation as it was crossed by people, wagons, or even elephants, which were rumored to have a sixth sense that kept them from laying foot on an unsafe span. Sometimes, especially in Eastern Europe, the engineer would even be accompanied by his family in a show of confidence designed to gain public trust.
KGPA Ltd/Alamy Stock Photo
In the 1880s, a cantilever bridge was proposed to cross the Firth of Forth, a large estuary in Scotland. To explain the sturdiness of the design, the engineers devised a model where people stood in for the bridge sections, making the balancing forces more intuitive to understand. Public lectures incorporating this image built up public trust in the bridge, which is still in active use.
Public Appearances

A supposedly safe bridge does on occasion collapse, and it can be a challenge to regain public trust in the wake of the failure. A bridge across the River Tay at Dundee, Scotland, began carrying passengers on the North British Railway in 1878. At the time, it was the longest bridge in the world, but it was of the familiar truss type, in which everyone seemed to have gained confidence. In early June of 1879, Queen Victoria crossed the Tay Bridge in a train on the way to Balmoral, a crossing that reinforced confidence in the bridge and its engineer, Thomas Bouch, whom she knighted for his accomplishment. However, during a late December storm, the main portion of the bridge and the railcars crossing it fell into the Tay. Seventy-five lives were lost, and the reputations of the bridge and its engineer were destroyed. Bouch had already begun work on another bridge on the same rail line—one to cross the River Forth near Edinburgh. When the Tay collapsed, his design for that bridge naturally came under close scrutiny, and he was relieved of the commission. The firm of the distinguished engineer John Fowler was selected to design a structure that would not only be safe but also look safe. The details of the task were placed largely on the shoulders of Fowler’s young
engineer Benjamin Baker. The resulting steel cantilever design was unfamiliar in type and scale; the principles by which it worked were not transparent to the nonengineers expected to trust it to remain standing as they crossed it in train cars. A strong public relations campaign was called for.
To assure the public more directly, engineers devised a human model of the Forth Bridge (see Engineering, March–April 2013) to illustrate its mechanics and gain public trust. The engineers sitting in for major structural parts of the bridge provided a human feel for the forces involved. By seeing in this simple model the role of the inclined steel components in supporting the weight of the middle man through the easily imagined pull on the men’s arms and the push of the struts into their thighs, members of the public could see how an engineer could go about making those parts strong enough to obviate failure. An image of the model was incorporated into Benjamin Baker’s public lecture on the bridge, and it has become iconic. In the age of computer models, visceral assurances are not so vividly achieved. The Forth Bridge has now stood for more than 130 years, and it is as stiff and strong as ever.
Defining Failure

Although the Millennium Bridge misbehaved in 2000, it did not collapse. Yet it is arguably termed a failure in that it did not meet expectations. Among the unusual aspects of its design competition was the requirement that the team include, in addition to an engineer, an architect and an artist. The last two apparently drove the appearance of the design, but it remained the responsibility of the engineer to assure that the bridge fulfilled structural expectations. In the case of the Millennium Bridge, its low-slung suspension cables and their near-horizontal connections to the walkway allowed a degree of sideways sway that was unusual. It turned out that it was not so much that synchronized footsteps pounding down on the bridge excited it to vertical motion as it was that the synchronized horizontal forces of friction between walkers’ shoes and the bridge deck imparted a sideways motion. Engineers lose sleep over the possibility of overlooking such a factor, and the public loses confidence in engineers when they do. Unfortunately, it often takes a partial or total failure before engineers become aware of some new or rarely seen anomalous behavior. This lack of prior example does not absolve them from anticipating a destructive behavior, but it does explain why it might not come to mind during a design process with a strong focus on aesthetics. Engineers should, however, always be cognizant of normally ignorable forces becoming dominant in the context of an unprecedented design. The public has every right to expect engineered structures to be safe and the use of them to be without unpleasant surprises. Engineers and their organizations agree, and that is why they benefit from regulation. Failures reveal where standards should be updated to increase reliability and so regain public trust.

Henry Petroski is the Distinguished Professor Emeritus of Civil Engineering at Duke University. Address: Box 90287, Durham, NC 27708.
Luciano Floridi | Molding the digital future as it simultaneously shapes us
Digital Ethics Online and Off
In 1964, the year I was born, Paramount Pictures distributed Robinson Crusoe on Mars. The movie described the adventures of Commander Christopher "Kit" Draper (Paul Mantee), a United States astronaut shipwrecked on Mars. Watching it on YouTube recently reminded me how radically the world has changed in just a few decades. The computer at the very beginning of the movie looks like a Victorian engine, with levers, gears, and dials—a piece of archaeology that Dr. Frankenstein might have used. The only truly prescient element of techno-futurism comes toward the end of the story, when the character Friday (Victor Lundin) is tracked by an alien spacecraft through his bracelets. Robinson Crusoe on Mars belongs to a different age, one that was technologically and culturally closer to the previous century than to ours. The film describes a modern but not a contemporary reality, based on hardware, not on software. Laptops, the internet, web services, touchscreens, smartphones, smart watches, social media, online shopping, streamed videos and music, driverless cars, robotic mowers, and virtual assistants were still to come. Artificial intelligence (AI) was still mainly a project, not a reality. The movie shows a technology that is made of nuts and bolts, and mechanisms that follow the clunky laws of Newtonian physics. Even the iconic Star Trek, which debuted two years later, still imagined a future reliant on clunky mainframe computers, manual labor, and face-to-face social interactions. People born after the early 1980s have inhabited a totally different reality. To them, a world without digital technologies is like what a world without cars was for me: an abstract concept that I
had only heard described by my grandmother. The social and ethical impacts of digital technology are now so deeply embedded that they can be difficult to perceive, much less comprehend. Today's smartphone packs far more processing power in a few centimeters, and at an almost negligible cost, than NASA could put together when Armstrong landed on the Moon five years after Robinson Crusoe on Mars. The Apollo Guidance Computer on board Apollo 11 had 32,768 bits of random-access memory (RAM) and 589,824 bits (72 KB) of read-only memory (ROM):
You could not have stored this issue of American Scientist on it. Fifty years later, your average phone comes with 4 GB of RAM and 512 GB of ROM. That is about 1 million times more RAM and 7 million times more ROM. As for the processor, the Apollo Guidance Computer ran at 0.043 megahertz. An average iPhone processor runs at 2,490 megahertz, about 58,000 times faster. To get a better sense of the acceleration, a person walks on average at 5 kilometers per hour, but a hypersonic jet travels slightly more than a thousand times faster at 6,100 kilometers per hour, just over five times the speed of sound. Only the most extreme spacecraft, such as the new Parker Solar Probe, can beat your walking speed by a factor of 58,000.
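Those ratios are easy to check; the snippet below redoes the arithmetic using only the figures quoted in the text.

```python
# Verify the memory and speed comparisons quoted above.
agc_ram_bits = 32_768              # Apollo Guidance Computer RAM
agc_rom_bits = 589_824             # AGC ROM (72 KB)
phone_ram_bits = 4 * 10**9 * 8     # 4 GB of RAM
phone_rom_bits = 512 * 10**9 * 8   # 512 GB of storage

print(phone_ram_bits / agc_ram_bits)  # ~976,000, "about 1 million times"
print(phone_rom_bits / agc_rom_bits)  # ~6.9 million, "about 7 million times"
print(2_490 / 0.043)                  # ~58,000, the processor-speed ratio
print(5 * 58_000)                     # 290,000 km/h: walking pace scaled by
                                      # 58,000, Parker Solar Probe territory
```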
Yet, today's world does not feel 58,000 times faster than it did when I was young. Where did all this speed and computational power go? The answer is twofold: feasibility and usability. We can do more and more in terms of applications, and we can do so in increasingly easy ways, not only in terms of programming, but above all in terms of user experience. Videos and operating system software are computationally very hungry. Today's common forms of AI—such as the programs used to recommend videos on YouTube, set the price for an Uber ride, or select the ads you see online—are possible because we have the computational power required to run their software. Today, thanks to this mind-boggling growth in storage and processing capacities, at increasingly affordable costs, billions of people are connected and spend many hours online daily. Americans, for example, spend an average of 6.31 hours on the internet daily, just a bit less than the global average of 6 hours 41 minutes. The pandemic has surely driven these numbers even higher. AI is possible today also because we, humans, increasingly spend time in digital contexts that are AI-friendly.
Managing the Data Deluge
More memory, more speed, and more digital environments and interactions have generated immense quantities of data. We have all seen diagrams with exponential curves, indicating quantities that we do not even know how to imagine. According to market intelligence company IDC, in 2018, we reached 18 zettabytes of data created, captured, or replicated—that is 18 × 10²¹ bytes. This astonishing growth of data shows no sign of slowing down:
QUICK TAKE Computers now generate so much information that data and algorithms shape our experience of the world. Systems have shifted from logic to parsing statistical trends.
To navigate the digital world ethically, we must build a framework for weighing privacy, innovation, human rights, discrimination, and more.
We now live in digital and physical spaces simultaneously. Grappling with these complex ideas and ethical dilemmas will help us to build a stronger future.
AFP/Getty Images
Pedestrians and passengers move through public spaces, such as London’s Piccadilly Circus, with their smartphones, checking messages or taking selfies, and live in the digital and physical worlds simultaneously. Meanwhile, artificial intelligence algorithms are at work, using cameras to collect information about people and vehicles to display targeted ads on the large digital billboard.
According to IDC's projections, total data consumption will grow to 175 zettabytes in 2025. This number is hard to grasp, both in terms of quantity and significance. Two consequences deserve a moment of reflection. The speed and the memory of our digital technologies are not growing at the same pace as the data universe, so we are quickly moving from a culture of recording to one of deleting. The question is no longer what to save but what to delete in order to make room for new data, transforming the idea of archiving the past. And most of the data available have been created since the 1990s, even if we include every word uttered, written, or printed in human history and every library or archive that ever existed. Just look at any diagram illustrating the data explosion: What is striking is not just the right-hand side, where growth skyrockets, but also the left-hand corner, which shows how little data existed only a handful of years ago. Because almost all our data have been created by the current generation, they are also aging together, in terms of support and obsolete technologies, so their curation will be an increasingly pressing issue.
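One way to grasp the projection is as a compound growth rate. The short calculation below uses only the two IDC figures quoted here, 18 zettabytes in 2018 and a projected 175 in 2025.

```python
# Implied compound annual growth rate between the two IDC figures.
start_zb, end_zb, years = 18, 175, 2025 - 2018
annual_growth = (end_zb / start_zb) ** (1 / years) - 1
print(f"{annual_growth:.0%} per year")  # roughly 38% compound growth
```

Volumes growing by more than a third every year, while storage and processing grow more slowly, is what makes deleting rather than saving the default.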
Floppy disks from the 1980s are now obscure curios; compact disks from the early 2000s are quickly following them. More computational power and more data have made possible the shift from logic (if A then B) to statistics (A is related to B). Algorithms have progressed from library searches that could find a source by keyword to Amazon's tools that can parse your buying habits and recommend new books for you. Neural networks that were interesting only theoretically have become ordinary tools in machine learning and are used daily in medical diagnostics, to issue loans and credit cards, and in security checks. They are not really meant to help us understand cognition; they are just tools that make classifications. Early versions of AI were mostly symbolic and could be interpreted as a branch of mathematical logic, but in its current iterations AI is mostly connectionist, seeking out complex patterns within datasets, and could be interpreted as a branch of statistics. AI's main war horse is no longer logical deduction, but statistical inference and correlation.
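The contrast can be caricatured in a few lines of code. In the toy sketch below every message, score, and label is made up; the point is only the difference in kind between a rule written by hand and a threshold fitted to past examples.

```python
from statistics import mean

# Logic: a hand-written rule, true by fiat ("if A then B").
def rule_based(message):
    return "flag" if "win a prize" in message else "ok"

# Statistics: no rule is written down; a threshold is fitted to examples.
# Scores and labels here are invented for illustration.
history = [(0.9, True), (0.8, True), (0.7, True),
           (0.3, False), (0.2, False), (0.1, False)]  # (score, was_spam)
threshold = mean(score for score, _ in history)       # crude "training"

def statistical(score):
    return "flag" if score > threshold else "ok"      # "A is related to B"

print(rule_based("win a prize now"))  # flag, by definition
print(statistical(0.75))              # flag, by inference from past data
```

The second function encodes no reason why a high score should mean trouble; it merely generalizes a past correlation, which is exactly the trade described above.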
Computational power and speed, memory size, amount of data, algorithms, statistical tools, and online interactions have had to grow at breakneck speed to keep up with one another, and the causal connections go both ways. The number of digital devices interacting with each other is already several times higher than the human population, so most communication is now machine-to-machine, with no human involvement. We have computerized robots like Perseverance and Curiosity roving around on Mars, remotely controlled from Earth. Commander Christopher "Kit" Draper would have found them utterly amazing. All of these trends will keep going, relentlessly, for the foreseeable future. The economic, cultural, and technological forces propelling them forward show no sign of abating. The expansion of the digital world has changed how we learn, play, work, love, hate, choose, decide, produce, sell, buy, consume, advertise, have fun, care and take care, socialize, communicate, and so forth. It seems impossible to find a corner of our lives that has not been affected by the digital revolution. In the last half-century, our reality has become increasingly digital, made of zeros and ones, run by software and data rather than by hardware and atoms. More and more people live increasingly onlife, both online and offline, and in the infosphere, both digitally and
Cambridge University Hospitals
Artificial intelligence can help speed the planning of complex medical procedures. Machine learning tools, such as those used with Microsoft’s InnerEye technology, can sift through dozens of computed tomography images, finding data connections and marking the boundary between tumors and healthy tissue for a doctor’s review. Oncologists sometimes need hours to go through such scans manually. AI medical tools can be more than 10 times faster.
analogically. We use Instagram and WhatsApp to keep in touch with our friends, and we compare prices online even when we are standing in a shopping mall. This digital revolution is not merely technological. It affects how we conceptualize and understand our realities, increasingly interpreted in computational and digital terms. People now routinely refer to DNA as genetic "code," even though the idea is just a few decades old. The digital revolution has also fueled the development of AI. We now routinely share our onlife experiences and our infosphere environments with smart agents, whether they are algorithms, bots, or robots. They represent an unprecedented form of agency, one that does not need to be intelligent to be successful; for example, computers can play chess or Scrabble better than any of us. Netflix can guess what I want to watch next even before I know it.
Navigating Digital Ethics
What I have described so far, the digital revolution, provides huge opportunities to improve private and public life, as well as our environment. Consider, for example, the development of smart cities or the problems caused by carbon emissions. Algorithms can improve traffic flow and energy use; smart grids can reduce carbon impacts.
A recent report by Microsoft and PwC estimated that the use of AI in environmental applications could cut global greenhouse gas emissions by between 1.5 percent and 4 percent, while boosting global GDP by between 3.1 percent and 4.4 percent. Unfortunately, such opportunities are also coupled with significant ethical challenges. Examples include the use of ever more data—often personal, if not sensitive (Big Data)—the growing reliance on algorithms to analyze them in order to shape choices and to make decisions (including machine learning, AI, and robotics), and the gradual reduction of human involvement or even oversight over many automatic processes. These applications pose pressing issues of fairness, responsibility, and respect for human rights, among others—for example in predictive policing or when using facial recognition for monitoring behavior. The ethical challenges posed by digital technologies and practices can be addressed successfully. Boston, Portland, and San Francisco are among the cities in the United States that have banned the indiscriminate use of facial recognition by law enforcement. Carefully and ethically deployed, however, the same technology has helped identify thousands of missing children in New Delhi.
Fostering the development and applications of data-based and software-based innovations, while ensuring the respect of human dignity, of the well-being of the environment, and of the values shaping open, pluralistic, and tolerant information societies is a great opportunity of which we can and must take advantage. Building such a robust alliance between the green of all our environments (natural as well as artificial) and the blue of our digital technologies will not be an easy or simple task. But the alternative—failing to advance and leverage ethically all digital technologies, their practices, and sciences—would have regrettable consequences. On the one hand, overlooking ethical issues may prompt negative impacts and social rejection, as was the case with the British National Health Service's care.data program, a failed project to extract data from the offices of general practitioners into a central database. Social preferability and environmental sustainability must guide any digital development. On the other hand, overemphasizing the protection of individual or collective rights in the wrong contexts may lead to excessively rigid regulations, which may harm the chances to harness the social and ecological value of digital innovation. For example, the LIBE amendments, initially proposed to the European Data Protection Regulation and abandoned only after objections from the research community, would have made medical research more difficult by severely restricting scientific access to patient data records. The demanding task of digital ethics is to maximize the value of digital innovation to benefit individuals, societies, and environments while navigating between social rejection and legal prohibition. To achieve this, digital ethics can build on the foundations provided by computer ethics since the 1950s, and on the later discipline of information ethics. This valuable legacy grafts digital ethics onto the great tradition of ethics more generally. Within a few decades, we have understood that the most useful focus of our ethical strategies is not on a specific technology (computers, mobile phones, online platforms, cloud computing, and so forth) but on what is done with any digital solution. The shift from computer and information ethics to digital ethics highlights the need to consider not only the technologies and sciences involved, but also
the contexts and the applications (in business or in politics, for example), and the corresponding practices. Digital ethics concerns the overall impact of the digital world, broadly construed, and narrower discussions of concepts such as "robo-ethics" or "machine ethics" miss the point. The ethical challenges brought about by the digital revolution—including privacy, anonymity, responsibility and accountability, transparency and explainability, and trust—concern a wide variety of digital phenomena, and hence they are better understood at an ecosystem level. The real challenge is not innovation within the digital world, but the governance of the digital ecosystem as a whole.
An Ethical Braid of Ideas
Today, digital ethics is a full branch of ethics that studies and evaluates moral problems related to data (including generation, recording, curation, processing, dissemination, sharing, and use), algorithms (including AI, artificial agents, machine learning, and robots), and corresponding practices (including responsible innovation, programming, hacking, and professional codes) in order to formulate and support morally good solutions (such as right conducts, values, and policies). There is growing interest in these fields but also a lot of confusion, which generates false hopes and ungrounded fears—for example, about what AI may really deliver. It won't be a panacea and has nothing to do with the AI of Hollywood movies. Right now, there are many airport bestsellers and too few PhD theses. In particular, we need more experts with the multidisciplinary training the field requires. But people are increasingly talking about the right issues in the right places: not only in academia, but also in companies and in governments. The ethics of data focuses on problems posed by the collection and analysis of datasets and on issues relating to their use, such as Big Data in biomedical research and social sciences, social profiling and advertising, open data, and data philanthropy. Some of the key issues my colleagues and I debate concern the possible re-identification of individuals through the mining, linking, merging, and reuse of data.
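A minimal sketch shows why linking is the crux. The records, fields, and name below are all fabricated; the point is that two datasets that seem harmless on their own can re-identify a person when joined on shared quasi-identifiers such as postal code, birth date, and sex.

```python
# "Anonymized" health data: the name is gone, quasi-identifiers remain.
medical = [
    {"zip": "53706", "birth": "1980-05-17", "sex": "F", "diagnosis": "asthma"},
]
# A public list (say, a voter roll) with the same quasi-identifiers plus names.
voters = [
    {"zip": "53706", "birth": "1980-05-17", "sex": "F", "name": "Jane Roe"},
]

def key(record):
    """Quasi-identifier triple used to link the two datasets."""
    return (record["zip"], record["birth"], record["sex"])

names = {key(v): v["name"] for v in voters}
for record in medical:
    if key(record) in names:  # the "anonymous" record is anonymous no more
        print(names[key(record)], "->", record["diagnosis"])
```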
Everett Collection
Commander Christopher “Kit” Draper (played by actor Paul Mantee) finds himself shipwrecked on Mars, in the 1964 science fiction film Robinson Crusoe on Mars. The trailer boldly promises “Tomorrow come to life . . . Today!” However, yesterday’s futurism, full of dials and knobs, looks quaint and far different from our digital present.
Risks extend to "group privacy": The identification of types of individuals, even when each individual record has been de-identified, may lead to serious ethical problems, from group discrimination (including ageism, ethnicism, sexism, and more) to group-targeted forms of violence. A classic example was provided by the algorithm running Amazon's same-day shipping service. After it was launched, the company adjusted its algorithm to end its discrimination against predominantly Black neighborhoods in many major cities. Content moderation, freedom of speech, and trust are also crucial topics of ethical debate. These discussions can also help address the limited public awareness of the benefits, opportunities, risks, and challenges associated with the digital revolution. For example, politicians and technology developers often promote transparency as a measure that may foster trust. However, these proposals often leave the public unclear about what information should be made transparent, to what degree, and to whom. The ethics of algorithms addresses issues posed by the increasing complexity and autonomy of algorithms broadly defined, including AI and artificial agents such as internet bots, especially in the case of machine-learning applications—for instance, image-recognition software or automated decision-making systems. In this case, crucial challenges include the moral responsibility and accountability of both designers and scientists with respect to unforeseen and undesired
consequences as well as missed opportunities. The opacity, ethical design, and auditing of algorithms, and the assessment of potential, undesirable outcomes (such as racial discrimination or the promotion of misleading or incendiary content on social media) are attracting increasing research. Finally, the ethics of practices addresses questions concerning the responsibilities and liabilities of the people and organizations in charge of digital processes, strategies, and policies, such as governments that implement smart-city technology. The goal is to define an ethical framework to shape professional codes—such as avoiding racial and gender bias in facial recognition—toward responsible innovation, development, and usage, which can ensure ethical practices fostering both the progress of digital innovation and the protection of the rights of individuals and groups. Three issues are central in this line of analysis: consent, privacy, and secondary use. Although they are distinct lines of research, the ethics of data, algorithms, and practices are intertwined. For example, analyses focusing on data privacy must address issues concerning consent and professional responsibilities; and ethical auditing of algorithms often implies analyses of the responsibilities of their designers, developers, users, and adopters. Digital ethics must address the
whole conceptual space, even if with different priorities. It must avoid narrow, ad hoc approaches and instead address the diverse set of ethical implications of digital realities within an inclusive, integrated, and consistent framework.
Molding Our Digital Future
Life today has become inconceivable without the presence of digital technologies, services, products, and practices. Anyone not perplexed by such a digital revolution has not grasped its magnitude. We have begun a new chapter in human history. Of course, other chapters have come before, all similarly significant. Humanity has experienced a world before and after the wheel, ironworking, the alphabet, printing, the steam engine, electricity, television, and the telephone. Each transformation is unique. Some of them have changed our self-understanding, our reality, and our experience irreversibly, with complex and long-term implications. We are still finding new ways of exploiting the wheel; just think of the infotainment controls in cars. By the same token, what humanity will achieve using digital technologies is unimaginable. Nobody in 1964 could have guessed what the world would be like only 50 years later, as abundantly documented by the predictions on display at the 1964 New York World's Fair. And yet, the digital revolution happens only once. To get it going the right way, the time to start is now. Future generations will never know an analog-only, offline, predigital reality. We are the last generation to have experienced it. The price of living at this special time in history is that we have to confront worrying uncertainties, and do so with limited understanding. The mind-blowing potential transformations justify some confusion and apprehension. You just have to look at news headlines to see the pervasiveness of those responses. However, our special position and perspective in history also bring extraordinary opportunities. Precisely because the digital revolution has just begun, we can shape it in positive ways that benefit both humanity and our planet. As Winston Churchill said, "We shape our buildings; thereafter they shape us." The same is true of our built digital environment. We are at the very early stages of constructing our new digital realities. We can get them right, before they start affecting and influencing us and future generations in the wrong way. The debate over whether digital technologies are more beneficial on balance
or more damaging is pointless. The technologies are coming, and the really interesting question is how we can apply them ethically. The real answer to whether the glass is half empty or half full is that it's fillable. To identify the best path ahead in the development of our digital technologies, we need more and better digital ethics. We should not sleepwalk into the creation of an increasingly digital world. The insomnia of reason is vital, because its sleep generates monstrous mistakes. Understanding the digital revolution happening under our gaze is essential if we wish to steer humanity toward a future that is both environmentally sustainable and socially just.
Luciano Floridi is professor of philosophy and ethics of information at the University of Oxford, where he is Director of the OII Digital Ethics Lab. Email: [email protected]
First Person | Stephaun Elite Wallace
(Re)Building Trust in Public Health Campaigns
Communities have long memories about how they are treated during health crises. Throughout the COVID-19 pandemic, public health experts have grappled with how to reach people who—often for very good reasons—distrust the medical system. Stephaun Elite Wallace has worked with these disenfranchised and skeptical communities for decades, and he has found that the most reliable way to reach those populations is through people they already look to for advice—people whose voices they trust. Wallace is a clinical assistant professor of global health at the University of Washington, and he is also the director of external relations for both the HIV Vaccine Trials Network at the Fred Hutchinson Cancer Research Center and the COVID-19 Prevention Network. His approach to these roles recognizes the racial and economic disparities ingrained in society that have population-level health effects. Wallace spoke with American Scientist contributing editor Efraín Rivera-Serrano about what drew him to public health and the approaches he takes to engage marginalized communities. This interview has been edited for length and clarity. You have said that from a young age you wanted to help under-resourced people in your community. What are some of the issues you identified as a child, and how do they relate to your current role?
Something that I wondered about as a young person—and studied a bit about as I've gotten older—is the relationship between income, wealth, and health outcomes. For example, home ownership is still seen as one of the top indicators of generational wealth in this country. And home ownership largely contributes, through property taxes, to local schools. If Black and Brown communities are largely excluded from home ownership, then where are the resources for the schools? It creates this cycle. I observed it growing up, and it was one of the reasons that I was interested in law, and also policy. That awakening for me started in elementary school, growing up in South Central Los Angeles and seeing the different ways that people were treated based on how they were perceived in terms of wealth and resources. When I enlisted in the U.S. Army, I saw social justice issues pop up as well. This was during "don't ask, don't tell," and I consistently saw people being dehumanized and degraded because of their perceived sexuality. I completed my tour and decided not to reenlist, because I was also experiencing some of that harassment. I later moved to Atlanta, and I met quite a few community members, many of whom are Black gay men. I found community, but I also found people who were experiencing trau-
ma, from a community perspective but also from a systems perspective. I was meeting teenagers who were newly diagnosed with HIV or who had been living with HIV for some time. HIV hadn’t been on my radar before moving to Atlanta, but I knew enough about HIV and STIs [sexually transmitted infections] that I understood that it was not just about people’s behavior, but also about the circumstances and the environment. I linked up with a group called My Brother’s Keeper, which was really just a social support group. I thought to myself, “There’s an opportunity here. There’s clearly a need. Why don’t we turn this group into an organization that can help to address HIV, not just at the individual level, but at the community level and the systems level?” It was a “for us, by us” model that I believe has helped to inform some of the federal response to HIV in LGBTQ communities. That was my introduction to public health. There are many blatant examples of mistreatment or malpractice that have eroded some groups’ trust in the scientific community. Are there higher-level factors that have contributed to this distrust as well?
We're in 2021—nearly 500 years after the trans-Atlantic slave trade started—and we're just now saying that racism is a public health threat. I'm glad that we've gotten to this place. I think we could have gotten here much sooner if the political will had been there. The factors that beat back our communities are largely related to policy:
Aziel Gangerdine
the attitudes about policies that are being created and the interpretations of policies that are already established. We can do all of this work around racism and addressing the ways that we talk about communities, the ways we talk about racism. We can adopt PC [politically correct] language and put cultural responsiveness trainings into play. But if we don't address the root issue—which is, in my opinion, anti-Black and anti-Indigenous racism—then we will have missed the mark significantly. In your current role as director of external relations for the COVID-19 Prevention Network, what approaches have you found are most effective in reaching Black communities?
What has worked well in the COVID-19 space in engaging Black communities has been using what we call trusted voices. Often in public health conversations or contexts, those persons might be health care providers. For many people there's still a mystery around how science works, such as how clinical trials are conducted. We have a collective responsibility to demystify this process—and to do the work of educating communities and increasing community understanding—in any opportunity
that presents itself. Because we're doing a disservice if we don't. This effort shouldn't be solely focused on COVID-19. It should be across the board. Many of us are old enough to remember what the HIV epidemic was like in the 1980s, and the misinformation campaigns that existed then. We will continue to see the blame game and the misinformation campaigns if we don't do our part to effectively address community awareness before it's needed. Don't wait until the house is on fire to buy a fire extinguisher. Buy it now, so that when the house catches fire we can avert disaster. A misstep has been the assumption that people of color—and Black communities in particular—are monolithic. That everyone who is a person of color thinks the same way. Also that Black people won't participate in clinical trials, that they're hard to reach. All of this coded language and excuses—to me, it's old. It's outdated, antiquated attitudes about who we are. It's lazy—intellectually and scientifically lazy. And it's also a disservice. Do you see fluctuations in the level of trust within marginalized communities with regard to participation in clinical trials or trust in science overall?
The shift that I've seen in the way that diversity looks, for example, in clinical trials, has been largely associated with intentional efforts to meaningfully engage those trusted voices. I'm thinking about the Moderna phase III COVID-19 vaccine trials, where we saw 20 percent Hispanic population participation in that trial. That's unprecedented. On the one hand I think, "This is great, because we don't typically see trials this diverse." At the same time I also think, "We need to be doing better!" We need to engage the trusted voices, and be cognizant and mindful of the conduct of clinical trials and the implementation of public health programs. The programs should be nimble and flexible and not take the approach that, because something has been done a certain way, it always has to be done that way. You mentioned the high participation of underrepresented populations in COVID-19 vaccine trials. What is different this time—is it because the disease is so prominent and universal?
We developed a COVID-19 vaccine in under a year. We're 30-plus years into
trying to get an HIV vaccine. Part of it, to me, is about political will, and part of it is about resources. There have been metric tons of resources poured into COVID-19, not just here in the United States, but globally. If we had that resourcing in HIV research, if there was an all-hands-on-deck approach to it like there is for COVID-19, I believe we might have found an HIV vaccine by now. But I also think that COVID-19 is seen as something that people get passively, whereas HIV is largely seen as something that only those people over there get, people who are doing something morally corrupt. Of course,
I don't believe that at all, but those attitudes exist in many communities regarding HIV, so it doesn't get the same level of attention and prioritization. What are actions that any of us can take to improve the communication of science within our communities?
Part of how one becomes a trusted voice is by consistently being present, and by providing information that’s not just helpful, but also accurate. How one might do that—how I have done that—is to be really intentional about the conversations I’m having, and to address misinformation any time I encounter it, but also to acknowledge the skepticism that people may have. There has been a lot of trauma, damage, and harm caused by systems in this country toward people of color. Skepticism, to me, is not only warranted, it’s also, in many conversations and in many ways, healthy. There are ways that we can all be connected to the science, that we can be involved in this work. We can all be not just better, more informed consumers of information and of medicine, but we can also be better advocates.
But the medical establishment and researchers have to be more trustworthy. We can’t put all the responsibility on the public and say, “Open the doors to your homes, to your families, to your communities, and let everybody come in and do all this research.” It’s about relationships, and about being present. It’s about addressing people as individuals, not through their prognosis or their disease or their ailment. Physicians, researchers, health care professionals—it’s important for them to engage communities and their patient populations in ways that affirm the agency that people have over their own lives. We’re all experts on our own lives. The role of health care is to provide additional support to maintain our quality of health, not to be the end-all and be-all. What do you consider to be the greatest successes in your work?
What affirms, for me, that I am where I’m supposed to be, and that I’m doing the work I’m supposed to be doing, is when I talk to people in the ‘hood, or in different communities across the world, who say to me, “Because this research is happening, or because the clinical research team is providing amazing care, I feel better about myself, and that my life can continue.” I tell my team that we’re in the business of saving lives. It’s something I take very seriously. It’s important to me that we do our very best in every single interaction we have with people. Because people are being beat up enough already. We don’t need to add to that. You work with many different communities, including LGBTQ populations, African Americans, and others around the world. What are the commonalities that resonate across groups?
Sheer will and resilience. There is a desire by groups and people to beat back this system of oppression, of racism, that has plagued our society. And in truth, racism doesn’t just negatively impact people of color; it negatively impacts white people as well. Another commonality is strength. Through all of the historical and contemporary instances of abuse, neglect, and miscarriages of justice, people continue to wake up every day hoping and fighting for a better tomorrow.
A companion podcast is available online at americanscientist.org.
The Trust Fallacy Scientists’ search for public pathologies is unhealthy, unhelpful, and ultimately unscientific.
Courtesy of Tori Goebel
One of the oversimplified arguments about trust deficits in science is that people with strong religious affiliations are more likely to hold anti-science viewpoints. Groups such as the Young Evangelicals for Climate Action demonstrate that religious beliefs and pro-science viewpoints can be compatible, even to the point of spurring activism.
Nicole M. Krause, Dietram A. Scheufele, Isabelle Freiling, Dominique Brossard
In late 2020, New York Governor Andrew Cuomo appeared on CNN's Anderson Cooper 360 to talk about the recently approved COVID-19 vaccines and his concerns that the American public might reject vaccines because of a lack of trust: "[A]bout half of the people in this country, Anderson, say they don't trust the vaccine . . . that they fear that Trump politicized the approval process," he explained. "No one is going to put a needle in their arm if they don't trust the vaccine." A few months later, in January 2021, Cuomo tweeted a simple directive: "Trust science," which he promoted with the hashtag #VaccinateNY. The tweet was liked more than 4,000 times. Replies on Twitter echoed Cuomo's sentiment: "[It's] sort of extraordinary that we have to tweet stuff like 'trust science' but I guess that's where Trump and his supporters left us." Frustrations or fears that "trust deficits" among segments of the population will contribute to forms of science denial that pose societal risks (such as, in this case, prolonging the COVID-19 pandemic due to vaccine refusal) are not new. Though often well-meaning, individuals employing such narratives rely on oversimplified perspectives about the idea of trust in science. Unfortunately, those assumptions aren't consistent with social scientific evidence about how trust works, as well as about who might not trust science, and why. We are concerned that many scientists have swallowed these misconceptions—what we call trust fallacies—hook, line, and sinker. Blaming a phenomenon like vaccine hesitancy on an uninformed public that lacks trust offers an easy and convenient culprit for a complex problem, without exploring deeper issues such as where
public messengers have fallen short in communicating about COVID-19 risks, the importance of physical distancing or masks, and the value of vaccines. These trust fallacy narratives pose dangers of their own. Intuiting pathologies and remedies for public behavior can lead to misallocated resources and misdirected policies. Trust fallacy narratives can also politicize science in ways that inhibit efforts to preserve the very trust that scientists and other authority figures hope to maintain. Urging governments and other orga-
nizations to fix a dubiously defined problem is both unwise and inefficient. Worse still, it is deeply unscientific and ethically questionable.
Oversimplified Trust
From a public health perspective, Cuomo's concerns about vaccine hesitancy were perfectly valid, but his vague diagnosis of a trust problem is not easy to justify. He's not alone in jumping to that conclusion. Susan Bailey, president of the American Medical Association, insisted late last year that the "effort to restore trust in science must begin now." But the question becomes: Exactly in what or in whom should people trust? In the context of COVID-19 vaccine hesitancy, for example, where precisely does the alleged trust deficit lie? Are we concerned that people lack trust in science, in general? In the regulators at the Food and Drug Administration (FDA)? In pharmaceutical companies? A combination of all these factors? To diagnose and grapple with alleged trust problems, it is essential to be specific. The trust fallacy often occurs when people oversimplify trust-related phenomena. Social scientific research has repeatedly demonstrated that whether an individual exhibits trust in an institution can vary by context and by the extent to which they perceive scientists to be competent in the topic domain at hand, as well as whether they perceive scientists to share their values. Strongly held values, such as religion or political ideology, can exert a powerful influence on the attitudes and beliefs people express, including their levels of trust and
the set of risks they believe they are incurring by trusting someone. Trusting a scientist to advise you against drinking raw milk clearly incurs one kind of risk, whereas trusting a scientist to advise you on raising your kids incurs a different one. Your tolerance for each of those risks may differ. Because of the complexity of trust judgments—which can be intimately entwined with risk assessments—people can exhibit lower-than-average trust in some types of science or scientists, even while exhibiting average or greater-than-average trust in others. In 2011, we polled Americans on how much they trust scientists "to tell [them] the truth about the risks and benefits of technologies and their applications." Similarly, in 2014, we asked people how much they trust scientists "when it comes to public controversies about scientific issues." In both years, the share of respondents exhibiting mid-scale trust or higher was greater for academic scientists than it was for industry scientists, by at least 10 percentage points.
QUICK TAKE Misconceptions about society’s lack of trust in science can misallocate resources, misdirect policies, and politicize science. We refer to these misconceptions as trust fallacies.
Scientific progress often involves challenging old ideas to get closer to the truth. An optimal democratic society requires a mix of trust in and skepticism of science.
Public trust in science can be undermined by communications that aren’t tailored to audiences’ value systems, fail to acknowledge uncertainties, or promise simple solutions.
Jennifer Wright/Alamy Stock Photo
During the 2020 election cycle, signs like this one—equating pro-science views with left-leaning political talking points—became common in the United States. The authors argue that such signs may lead people who lean conservative to regard science as part of an opposing ideology.
Overall, trust in science is so nuanced and sensitive to contextual factors that it's difficult to measure, much less to compare, trust levels across contexts and across time, unless the questions are repeatedly asked in very similar ways. Once we acknowledge the complexity of trust as a social phenomenon, it is easy to pull at the threads that unravel the trust fallacy.
Selective Attention to Data
In addition to overlooking the complexity of trust itself, the individuals and organizations advancing "trust deficit" narratives often do so without full attention to available data. The trust fallacy relies on selective empiricism. Looking again to the Cuomo example, how should we reconcile his assertion that a Trump-driven "trust in science" problem will inhibit COVID-19 vaccine uptake with reports from last November, showing that Trump's acceleration of vaccine development has
long been heralded by his supporters? And what of the fact that, according to Pew Research Center polling data from 2018, Republicans and Democrats have historically shown no meaningful difference in their support for other vaccinations? Emerging research suggests that a tapestry of factors shapes public acceptance of the COVID-19 vaccine: religious beliefs, partisanship, politicization of vaccine development, an
evolving media narrative, anxiety about rushed quality control at the FDA, trust in the pharmaceutical industry, trust in government, and so on. Selecting just one piece of this puzzle makes it easier to draw alluringly simple conclusions such as “conservatives don’t trust science” or “religious people don’t trust science.” But such conclusions undermine the nuance inherent in trust and selectively dismiss available data. Political ideology and religious affiliation do not always affect attitudes toward science in the same direction. For example, scientists have consistently shown that trust in climate science has been lower among political conservatives than among liberals. However, in 2013, sociologist Aaron M. McCright at Michigan State University and his colleagues found that conservatives had higher levels of trust than liberals for “production science,” including fields such as food science or polymer chemistry, which are aimed at economic growth and in alignment with conservative values. By contrast, they had lower trust in “impact science,” areas of science aimed at identifying environmental or public health impacts, which align more with liberal values. Although there is some evidence of science skepticism or declines in science confidence among people who frequently attend church, our poll
analyses show that self-identified Christians' confidence in science has remained stable for decades; their confidence level is not significantly different from that of Americans overall. These polls also indicate that confidence in science among nonreligious people has increased markedly since 2010. A similar dynamic occurs with partisanship: Although partisan divides in confidence have fluctuated over time, the polls we reviewed show that confidence in science has recently spiked among Democrats, rather than declined among Republicans. Republicans' confidence in science has also risen, albeit more slowly than Democrats', from 2012 to 2018.
Nicole Krause
Partisan confidence in science fluctuates but has remained relatively constant for more than 40 years, according to a 2019 paper in Public Opinion Quarterly by three of the authors (Krause, Scheufele, and Brossard) and their colleagues. The partisan gap as of 2018 reflects a spike in Democrats' confidence, even as Republicans' confidence was also trending back up. (The chart plots the percentage of Democrats, Republicans, and Independents reporting "a great deal of confidence" in science from 1973 to 2018.)
Motivated Uses of Science
People who might be considered distrustful of science often say that they are willing to trust the science—if it tells them what they want to hear. For example, some vaccine opponents see themselves as motivated by the kind of skepticism and empiricism inherent to science, writes University of Colorado sociologist Jennifer Reich in her book Calling the Shots: Why Parents Reject Vaccines. They often cite scientific evidence for their views, even though the studies they cite tend to have been retracted. Similarly, some members of the U.S. Congress lean on niche scientific work to deny anthropogenic climate change or to challenge COVID-19 mask mandates. If we overemphasize a trust deficit in science, focusing on the idea of "trust" alone, we are blind to the ways that people engage in motivated reasoning about science, selectively interpreting it to bolster what they already (want to) believe. Science is still a go-to source of truth when people want to defend their views, making it dubious to claim that science as a whole is facing a trust crisis. Instead of focusing on science denial as a demand-side problem, it could be more productive to investigate possible supply-side issues. People who seek scientific evidence to support their views can find inaccurate information easily if retracted research studies continue to circulate indefinitely, or if journalists highlight isolated results from single scientific studies as if they are definitive, or if partisan politicians and news outlets selectively amplify expert voices that align with their policy goals.
Some of the media and academic discussions of trust in science have acknowledged the ways that people selectively attend to and utilize science. This is especially true in arguments that point to social media platforms as breeding grounds for science denial and as key contributors to alleged trust deficits. Yet, we argue that here, too, the causal role of social media has been oversimplified. Public views that are at odds with scientific consensus are hardly a new phenomenon. Flat-earthers have existed for millennia. Facebook, Twitter, and other social media platforms have not created forms of science denial; they have simply found ways to profit from it and, intentionally or not, to amplify it. For example, groups of like-minded anti-vaccination advocates can establish private information networks on social media, which can become protective biospheres in which seemingly "anti-science" viewpoints can flourish relatively undisturbed. This process happens in part because social media constantly collect data about users' preferences to allow advertisers and other actors to precision-target persuasive messages to niche audiences who are likely to be receptive to them. Until recently, Facebook sold advertising specifically targeted based on "belief in pseudoscience." It is difficult to measure the impact of these digital bubbles on trust in science because the user data, algorithms, and content needed to analyze these effects remain tightly guarded as intellectual property by social media companies.
Maximal Trust Is Not the Goal
Effective communication about science is fraught with problems even at the best of times, but it is especially complicated when science moves at breakneck speed and under immense public scrutiny, as has happened in response to COVID-19. Much of the science that emerged during the pandemic was and is surrounded by significant uncertainties. Forceful arguments that we should all trust the science about the novel coronavirus implicitly suggest that there have been consistent, undisputed scientific facts available for us to trust, and that there is therefore no reason to be skeptical of scientists' claims other than personal anti-science sentiments. In reality, as we witnessed throughout the pandemic, the science pertaining to COVID-19 has often been in flux, as have been the related
Trust in science is influenced by many factors, including whether people think that scientists or other public figures share their values. Identity-oriented tweets like these could push a devout Christian, a concerned parent, or an abortion opponent toward anti-science viewpoints.
recommendations from experts. These changes in experts' advice sometimes appeared amid scandalous retractions of scientific work, such as the Lancet study that challenged the efficacy of hydroxychloroquine, a potential COVID-19 treatment that had long been promoted by President Trump. Science is designed to test the boundaries of our existing knowledge in its efforts to get closer to the truth. But in challenging hypotheses and gathering data, new studies will inevitably show that some previous results don't hold up anymore, whether regarding novel diseases such as COVID-19 or many other areas of research. Insisting that citizens simply trust the science on any given study is not only disingenuous, it is likely unethical. It also undermines science's fundamental claim as society's impartial arbiter of what we know and don't know. Science as an institution is deeply embedded into society, and politicians have long highlighted a basic contract between science and the society that supports it. At a speech at the German
Physical Society in the 1970s, former German Chancellor Helmut Schmidt invoked what he called the "Bringschuld der Wissenschaft," science's responsibility to be responsive to public opinion and input. Almost a half-century later, emerging areas of science such as neurological chimeras and organoids, aspects of artificial intelligence, and applications of gene-editing technologies like CRISPR have added new urgency to this notion of the science–society contract as they venture into areas that some ethicists argue might best be left alone. In our view, calls to improve generic "trust in science" are missing the point. Uncritical trust in science would be democratically undesirable. Public oversight and responsible innovation, as called for by John P. Holdren and other science advisors in President Barack Obama's administration, demand a healthy mix of epistemic trust in the technical aspects of science and a realization that science by itself will be unable to answer the many societal questions it raises. Public trust in science and optimal societal outcomes are likely linked in an inverted U-shaped curve: Initially, public trust in science helps to optimize societal outcomes, but too much trust yields negative societal impacts. Stable and broad trust is a prerequisite for evidence-based policy making in enlightened democracies, but both too little and too much trust are democratically dysfunctional.
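The claim is about a shape, not a formula, but a deliberately crude illustration may help. The function below is not fitted to any data; it is one arbitrary concave curve, with trust scored from 0 to 1, chosen only to display the inverted U.

```python
# Purely illustrative: one concave curve consistent with the inverted-U claim.
# The functional form and scale are invented, not estimated from any survey.
def societal_outcome(trust):          # trust on a 0-to-1 scale
    return 4 * trust * (1 - trust)    # peaks between the extremes

for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(t, round(societal_outcome(t), 2))
# 0.0 and 1.0 (blanket distrust, uncritical trust) both score 0;
# the mid-range scores highest.
```

Any curve peaking between the extremes would serve; the substantive point is that both endpoints score worse than the middle.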
COVID-19 laid bare health inequities that demonstrate why certain levels of skepticism might be democratically desirable. Indigenous populations and people of color were hit hardest both by the virus itself and by its interactions with other public health inequities. People in these groups were less likely to be able to work from home or self-isolate after exposure to the novel coronavirus. The scientific community was quick to point to multiple instances of historical mistreatment of vulnerable minority patients. But blaming a lack of trust on past events falsely absolves scientists from present responsibility. Today, African American women in the United States face a threefold higher risk of dying from pregnancy-related complications than do white women. That discrepancy is just one data point suggesting that some level of mistrust in the medical community is at least understandable, and probably justified. Tweeted assurances that "no physician is racist" in the recent controversy surrounding a JAMA deputy editor deflect from the actual issues and illustrate that some trust problems have roots in pathologies of the scientists themselves.
Making Our Worst Fears a Reality
In addition to the problems caused by (or nurtured by) oversimplified arguments and misdirected efforts to preserve public trust, there's an even greater concern associated with the trust fallacy. Efforts to force scientific trust on society could make the worst fear a reality: that trust in science will become politicized. The question of who in society trusts science and who doesn't is a worthy one. The scientific community can benefit from understanding how political statements and policy decisions can facilitate or undermine their work. It is also useful to understand when communications about science fail to reach people and why. But framing such questions in terms of certain groups
being “at war” with science implicitly validates the idea that science-related attitudes can be understood in terms of group identity. We all use social identity cues as shortcuts for forming opinions about a wide variety of topics; the phenomenon has been well known for decades. Processing information by asking, “Where do people like me typically stand on this topic?” is a mental shortcut, less cognitively taxing than poring over complex considerations. Prominent “war on science” narratives offer people exactly these kinds of identity cues. For example, the Marches for Science, which were intended to defend science against attacks, ended up replete with demonstrations in support of feminism and in opposition to
Trump, visibly conflating support for science with opposition to Republicans or Trump supporters. As a result, the movement aligned "science" with the political left. A similar type of identity sorting occurs with the popular yard signs: "In this house, we believe . . ." followed by a long list of left-leaning political ideals, concluding with "SCIENCE IS REAL." If we construct a narrative that sorts certain political viewpoints as pro-science while others are sorted as anti-science, we should not be surprised if large swaths of people decide to align with their political identity and adopt "anti-science" views. The negative consequences of dysfunctional narratives that link science with specific political viewpoints are evident in COVID-19 vaccine hesitancy. It is easy to equate vaccine hesitancy with scientific distrust, but recent evidence from focus groups with Trump voters has shown that, in their view, they want to be "educated, not indoctrinated" about the vaccines. This evidence suggests that a key cause of their hesitation has been the conflation of pro-vaccine messaging with politics and oversimplified communications that they perceive as science propaganda.
"Public Pathologies" Are Unscientific
Even as medical researchers made unprecedentedly rapid progress in studying and combating the pandemic, COVID-19 seems to have taken science back to its worst unscientific instincts when it comes to public communication. For this reason, countless panels on science communication at major scientific conferences now start with an impassioned rejection of the thoroughly debunked "deficit model" in science communication—specifically, a rejection of the intuitive tendency (especially among STEM scientists) to reduce public pushback against science or its applications to a simple lack of understanding or knowledge. Unfortunately, these impassioned arguments against knowledge deficit diagnoses have not prevented the
scientific community from moving on to other (equally simplistic) single-cause explanations for science–public divides. First, we went from knowledge deficits to narrative deficits: If scientists only “told better stories,” the argument goes, then hesitant publics would follow scientists in their quest for knowledge. After that, we shifted to blaming the public for not being curious or trusting enough. Most recently, we have claimed that many parts of the public are not literate enough to distinguish misinformation and disinformation from valid claims. Despite mounting evidence that complicates or even negates the alleged deficits in public attitudes and beliefs, those deficits nonetheless persist as key premises of many science communication efforts.

Ironically, social scientific research does not support the intuitions that many scientists and prominent science communicators espouse about how public opinion works. The continued search for correctable public deficits by institutional and individual actors in both science and politics might instead have created a giant blind spot for science. If there is a genuine deficit worth focusing on, it might have more to do
with how the scientific community views itself in relation to society than it does with people failing to value, understand, or trust science. The deficit in science’s sense of itself can have symptoms such as communications that are not tailored toward audiences’ value systems, a failure to acknowledge uncertainties, and promises of simple solutions that are not warranted by the evidence. By focusing on perceived pathologies, science absolves itself from having to listen to other, equally important societal voices. An ethical quest to restore—or, rather, to retain—public trust in science might best begin with science questioning itself.

Bibliography
Evans, J. H. 2021. Setting ethical limits on human gene editing after the fall of the somatic/germline barrier. Proceedings of the National Academy of Sciences of the U.S.A. doi:10.1073/pnas.2004837117
Funk, C. 2020. Key findings about Americans’ confidence in science and their views on scientists’ role in society. Pew Research Center. https://www.pewresearch.org/fact-tank/2020/02/12/key-findings-about-americans-confidence-in-science-and-their-views-on-scientists-role-in-society
Holdren, J. P., C. R. Sunstein, and I. A. Siddiqui. 2011. Memorandum: Principles for regulation and oversight of emerging technologies. Washington, D.C.: Office of Science and Technology Policy.
Krause, N. M., D. Brossard, D. A. Scheufele, M. A. Xenos, and K. Franke. 2019. Trends: Americans’ trust in science and scientists. Public Opinion Quarterly 83:817–836.
McCright, A. M., K. Dentzman, M. Charters, and T. Dietz. 2013. The influence of political ideology on trust in science. Environmental Research Letters 8:044029.
National Academies of Sciences, Engineering, and Medicine. 2021. The Emerging Field of Human Neural Organoids, Transplants, and Chimeras: Science, Ethics, and Governance, pp. 18–28. Washington, D.C.: The National Academies Press. doi:10.17226/26078
Scheufele, D. A., N. M. Krause, I. Freiling, and D. Brossard. 2020. How not to lose the COVID-19 communication war. Issues in Science and Technology. https://issues.org/covid-19-communication-war/
Nicole M. Krause, Dietram A. Scheufele, and Dominique Brossard are all members of the Department of Life Sciences Communication at the University of Wisconsin–Madison, where Krause is a doctoral student, Scheufele is Taylor-Bascom Chair and Vilas Distinguished Achievement Professor, and Brossard is both professor and department chair. Isabelle Freiling is a predoctoral researcher in the Department of Communication at the University of Vienna. Email for Krause: [email protected]
Treat Human Subjects with More Humanity
Buy-in to medical research requires that participating communities benefit from the data collected, and can trust how their data will be used.
David B. Resnik
In 2020, when trials for the COVID-19 vaccines were in full swing, and Indigenous nations within the United States were being hit hard by the virus, about 460 Indigenous people from several of these nations participated in the Pfizer/BioNTech vaccine trials. The review boards of the different nations had to approve involvement in the trials first. Not all members of these nations were happy about the decision, given a long history in which Indigenous people did not give consent to medical testing, or were not fully informed about procedures or how samples would be used. Such instances highlight the need for transparency and clarity in any research involving human subjects. Indigenous people who participated in these trials had to balance their misgivings with their desire to help their nations fight the virus.

The need is evident: A 2021 JAMA Network Open paper by Laura Flores of the University of Nebraska Medical Center and her colleagues found that across 230 U.S.-based vaccine clinical trials, white participants still accounted for nearly 80 percent of those enrolled. But when medical studies do focus on Indigenous people, they have made an impact: Common vaccines, such as those for bacterial meningitis and human papillomavirus, have been shown to be less effective for Indigenous populations, leading to dosing changes and other adjustments. The health disparities that
need addressing are obvious, but the population’s trust—which could lead to data that help resolve disparities—has been eroded.

These examples and many other famous cases, such as the scandal of the Tuskegee study (see Edwards, pages 238–242), provide vivid reminders of the importance of earning and maintaining trust in research involving human participants. Trust defines and nurtures relationships among subjects, investigators, research institutions, communities, ethics oversight committees, government oversight agencies, the scientific enterprise, and the public at large. All these different individuals and organizations must work together to ensure that research with human participants meets scientific and ethical standards. When trust breaks down, significant harms can occur to research participants, researchers, institutions, communities, and society. Most of the laws, regulations, and ethical guidelines that govern research with human subjects have been developed to restore or promote trust. Trust is vital to every stage of research, from recruitment and informed consent to confidentiality and protocol compliance.

The Scope of Trust
Trust is a complex idea. When we think about trust involving research with human participants, our attention naturally turns to the trust between research
subjects and investigators. Research participants expect that investigators will do their best to protect them from harm, to disclose information relevant to them in deciding whether to enroll in a study, to protect their privacy and confidentiality, and to ensure the study is designed and implemented in such a way that it is likely to generate useful knowledge. Investigators also trust that participants will answer questions honestly, disclose information relevant to the research, and do their best to follow study requirements. Trust between participants and investigators is a two-way street.

Informed consent is essential to establishing and maintaining trust between participants and investigators. Consent is much more than the signing of a piece of paper: It is an ongoing discussion that involves not only disclosure of information and the answering of questions, but also the sharing of concerns and perspectives. Because consent documents have become so lengthy and full of scientific and legal jargon, participants must depend on investigators to help them understand the information and to make choices that reflect their values.

Trust also extends to many relationships. Institutional review boards (IRBs), for example, expect that investigators and their staff will comply with the approved research protocol and report unanticipated problems, noncompliance, and serious adverse events.
QUICK TAKE
When trust breaks down in research with human participants, significant harms can occur to the research participants as well as to institutions and communities.
Institutional review boards (IRBs) have been in place since legislation was passed in the mid-1970s and federal agencies adopted rules for research that they fund.
Issues of trust involve not just informed consent, but also disclosure about potential risk, additional use of biological samples, and financial conflicts of interest.
Research with human subjects is important not only for the advancement of scientific knowledge but also for the betterment of society. The tests, treatments, and vaccines for COVID-19, for example, could not have been developed or tested without thousands of human volunteers. For research with human subjects to move forward, key stakeholders must not only develop and adhere to ethical rules and guidelines, but they must also take the steps needed to build and maintain trust.
Communities must trust that research institutions will shield community members from harm or exploitation and promote community interests. Sponsors, both public and private, must trust that institutions will comply with legal and ethical requirements as well as those stated in research grants or contracts. The public expects the research enterprise to adhere to scientific and ethical standards and deliver knowledge that is useful not only for its own sake but also for society.

Research Before Regulation
Prior to World War II, there were no international ethics codes concerning research with human subjects, and only one country, Prussia (now part of Germany), had any human experimentation laws. The first known use of an informed consent form in medical research was by U.S. Army physician Walter Reed (1851–1902). From 1900 to 1901, Reed conducted experiments in
Cuba that helped to show that mosquitoes transmit yellow fever. Reed enrolled 33 healthy volunteers in his experiments; most of them contracted yellow fever, and six died. Indeed, the volunteers recognized that these experiments were very dangerous, but they agreed to participate because they wanted to help eradicate the disease, they wanted to earn $100 in gold (the payment for participation), or both. Reed asked the volunteers to sign informed consent documents that included information about the nature of the research and the benefits and risks of the experiments.

During World War II, scientists in Germany and Japan conducted horrific experiments on thousands of prisoners against their will. The victorious Allied powers prosecuted German doctors and scientists for war crimes related to human experimentation during the Nuremberg trials. The Nuremberg judges promulgated the Nuremberg Code,
the world’s first international ethical guidance for human experimentation, to serve as a source of international law for prosecuting German scientists and doctors. Unfortunately, Japanese researchers were not charged with war crimes because the United States agreed not to prosecute them in exchange for access to their data. Nonetheless, the Nuremberg Code would largely become a template for modern medical ethics guidelines.

The Advent of Regulations
By the early 1970s, many physicians, researchers, and concerned citizens were becoming increasingly aware of ethical problems in research with human subjects, but there were still no regulations governing this activity. The motivation for developing regulations came from the reaction of the public to the abuses of human beings that occurred in the Tuskegee study. In 1974, President Nixon signed the National Research Act to create the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research; in 1979, this commission released an influential document, known as the Belmont Report.
Principles of the Nuremberg Code
1. Participant Consent: Researchers are responsible for obtaining consent, and consent must be devoid of stress and force.
2. Benefits of Research: The research should benefit the good of society.
3. Justification: Based on previous results, anticipated findings should justify the experiment.
4. No Unnecessary Harm: Avoid human subjects suffering mental and physical harm.
5. Do Not Endanger: Do not kill or injure anyone.
6. Balance Risks and Benefits: Risks should not outweigh the benefits.
7. Protect Participants: Researchers should prepare to protect participants against injury or death.
8. Be Qualified: The researcher must be qualified to conduct the research.
9. Voluntary Not Mandatory: Participants can stop participating at any time.
10. Prepare to Stop: The researcher must be prepared to stop the study at any time.

Formulated in 1947 by American judges sitting in judgment of Nazi doctors accused of conducting horrendous human experiments in concentration camps, the Nuremberg Code established basic principles for protecting human subjects. It remains the most important document in the history of medical research ethics.
The Belmont Report provided a conceptual foundation for a major revision of federal research regulations. In 1981, 15 federal agencies adopted a set of regulations known as the Common Rule, which applies to research funded by those agencies. The U.S. Food and Drug Administration (FDA) then adopted regulations that apply to human research studies involving FDA-regulated products, such as drugs or medical devices, whether publicly or privately funded.

IRBs are the locus of research review and oversight. Institutions that conduct research with human subjects governed by the Common Rule or the FDA regulations must have an IRB or rely on another institution for IRB review and oversight. IRBs are composed of at least five members of varying backgrounds, including at least one member who is not a scientist and one member who is not associated with the institution. An IRB may not approve a human research study unless it determines that risks are minimized and are reasonable in relation to benefits to the subjects or society; consent will be sought from the subject or the subject’s legal representative and appropriately documented; confidentiality and privacy will be adequately protected; selection of subjects will be equitable; and there will be additional protections in place (if needed) for subjects who may be vulnerable to exploitation, such as children, mentally disabled adults, or socioeconomically or educationally disadvantaged adults. The regulations also state various requirements that apply to the informed consent process and its documentation.

Since the 1980s, dozens of other countries, and some U.S. states, have adopted their own human research laws and regulations, most of which are substantially similar to the Common Rule. Courts have made numerous rulings that have established and clarified legal liability for research with human subjects. Scientific and professional organizations, such as the World Health Organization, the Council for International Organizations of Medical Sciences, the American Medical Association, and the American Psychological Association, have adopted ethical guidelines for human research. The Association for Accreditation of Human Research Protection Programs has also developed ethical standards, which are followed by hundreds of IRBs and research institutions. But regulations must continue to evolve with changing technologies; for instance, there is much
debate about whether it is appropriate for IRBs to govern big-data biomedical studies that use large-scale databases of anonymized, aggregated data.

Scandals and Controversies
Although research with human subjects is now one of the most heavily regulated areas of science, it was a laissez-faire endeavor prior to the advent of U.S. federal regulations, and scandals and ethical controversies have continued to come to light. Many of these episodes echo the type of breakdown of trust that occurred in the Tuskegee study.

In 1994, President Bill Clinton declassified secret human radiation experiments conducted by scientists working for the U.S. Department of Energy and Department of Defense from the mid-1940s to the mid-1970s. These experiments, which were carried out for the purpose of generating knowledge to help the nation survive a nuclear attack, exposed thousands of civilians and soldiers to ionizing radiation without their consent. In one of the most egregious studies, researchers at a free prenatal clinic run by Vanderbilt University administered radioactive iron to 751 poor, white pregnant women to learn how radiation affects fetal development. In
1998, the women won a $10.3 million lawsuit against the university.

In 2010, Susan Reverby, a history professor at Wellesley College, was reviewing government documents related to the Tuskegee study and discovered an experiment funded by the U.S. Public Health Service to prevent syphilis, which took place in Guatemala from 1946 to 1948. In the experiment, investigators administered penicillin prophylactically to 696 prisoners, then exposed them to syphilis to test the efficacy of this intervention. After Reverby published her findings, President Barack Obama issued an apology to the Guatemalan government and asked a presidential bioethics commission to review U.S. laws and regulations to ensure that they provided adequate protections for human participants.

About the same time that Reverby published her findings, journalist Rebecca Skloot began investigating the origins of a widely used human cell line known as HeLa. HeLa cells were the first human cells that could be easily grown in the laboratory, indefinitely. They have been, and continue to be, used extensively in research on cancer and cell structures and functions. Skloot discovered that the cell line was named after Henrietta Lacks, an African American woman who received treatment for cervical cancer at Johns Hopkins Hospital in 1951. The doctors who cared for Lacks cultured cells from a tumor they removed from her. The investigators did not obtain consent to use the cells in research, and Lacks did not receive any compensation for the use of her tissue. Skloot wrote a book about Lacks that became a national bestseller in 2011 and was adapted into a film in 2017. In 2010, Skloot established the Henrietta Lacks Foundation and donated profits from her book and movie rights to the foundation. In 2013, the NIH reached an agreement with the Lacks family concerning researchers’ access to genomic data from the cell line and acknowledgement in scientific papers.

Who Owns Samples?
Control over biological samples has continued to be an important issue since the Belmont Report. In 1976, John Moore underwent a splenectomy as part of his treatment for leukemia at the University of California at Los Angeles Medical Center. Unbeknownst to Moore, his physician, David Golde, began growing and was in the process of patenting a cell line extracted from Moore’s spleen. The cell line had an estimated $3 billion market value because it was overproducing lymphokines—immune system proteins that have multiple uses in biomedical research and clinical medicine. Moore learned of the deception in 1983 and sued Golde and the University of California. The California Supreme Court ruled in 1990 that the University of California did not owe Moore any compensation, because Moore had relinquished any property rights when he agreed to have the tissue removed from his body and designated as medical waste. The Court did agree that Moore’s right to informed consent had been violated.

A dispute over control of biological samples took place between the Havasupai Native American tribe and Arizona State University. In 1990, researchers working for the university collected blood from 41 members of the tribe for a study on the genetics of diabetes. Although tribal leaders were under the impression that the study would focus only on diabetes, the consent forms stated that the samples and data might also be used for research on mental illness. Tribal members were outraged upon discovering that the investigators had used their samples and data to study schizophrenia and the tribe’s genetic origins, and had shared them with other researchers without permission. In 2003, the tribe barred researchers and employees from the university from the reservation and halted all research. In 2004, the tribe brought a lawsuit against the university, which was settled in 2010. Under the terms of the settlement, the university agreed to pay the participants $17,000 each and return the blood samples.

In 2002, the Navajo Nation placed a moratorium on genetic research within its territory, which it revisited in 2017 because of the potential of precision medicine. The All of Us Research Program, launched by the Obama Administration in 2015, began an intensive recruitment drive to collect minority genetic and medical data. The purpose of the drive is to bring medical insights to underserved populations, but the Navajo Nation balked at allowing researchers to access data without their oversight. In 2019, Indigenous geneticists published a critique in Nature Reviews Genetics questioning All of Us’s consent procedures, and it came to light that All of Us had recruited tribal members off of Navajo lands without completing a consultation process with tribal leaders, which the Navajo considered a breach of trust. The Navajo moratorium on genetic research has remained in place.

Science and Money
Since the 1980s, there has been increasing concern that private funding and financial interests, such as ownership of stock or intellectual property, are corrupting research by increasing the risk of bias and deliberate misbehaviors, such as data fabrication or falsification, or data suppression (such as not publishing data).
Henrietta Lacks had cells removed during a 1951 biopsy; they were then used for experimental purposes without her consent or compensation. Her cells, known as HeLa, are still widely used in biomedical research.
In 1998, British physician Andrew Wakefield and his collaborators published an article in the Lancet claiming that the measles, mumps, and rubella vaccine can increase the risk of autism in children. Members of the anti-vaccine movement (above, left) seized upon this study as proof that vaccines can cause autism. Vaccination rates in the United Kingdom and the United States declined following the publication of Wakefield’s paper, and measles cases skyrocketed. Several years after this study was published, investigative journalist Brian Deer found that Wakefield had failed to disclose funding he received from a law firm that was suing vaccine manufacturers, had not obtained ethical approval for the study, and had fabricated and falsified data. The Lancet withdrew the article and the UK’s General Medical Council revoked Wakefield’s license to practice medicine. Anti-vaccine protestors still cite the study, but receive pushback from pro-science advocates (above, right).
For example, in 1999 the FDA approved Merck’s drug rofecoxib (trade name Vioxx) as a treatment for chronic pain and inflammation. In 2000, investigators with financial ties to Merck published an article in the New England Journal of Medicine comparing Vioxx to several other competing medications and claimed that Vioxx, which had $2.5 billion in annual sales, was superior to these other drugs because it had fewer gastrointestinal side effects. However,
the editors of the journal soon discovered that Merck had underreported cardiovascular risk data in the article, and they published an expression of concern. In 2002, the FDA warned the company that it had misrepresented Vioxx’s safety profile and issued a warning for the drug. Shortly thereafter, thousands of patients—or their surviving family members—brought lawsuits against the company, claiming the drug had caused heart attacks or strokes. Merck withdrew the drug from the market in 2004 because of safety and liability concerns.

In response to data suppression by drug companies, the U.S. Congress passed legislation in 2007, which came into effect only in 2017, requiring private or public sponsors of clinical trials involving FDA-regulated products to register their trials on a public website, such as ClinicalTrials.gov. Most biomedical
journals also adopted clinical trial registration policies. To register a clinical trial, the responsible party, such as the sponsor or principal investigator, must submit critical information about the trial, such as its design, interventions, methods, population, aims and objectives, experimental sites, investigators, and endpoints. Registration does not completely prevent data suppression, but
it does allow physicians, scientists, and patients to learn about studies and helps to promote trust in clinical research.

Lack of disclosure has been a factor in clinical trials as well. The tragic death of Jesse Gelsinger, an 18-year-old patient in a Phase I gene therapy study at the University of Pennsylvania in 1999, illustrates some of the ways that financial conflicts of interest can threaten not only the integrity of research and the public’s trust in science, but also the welfare and rights of human subjects. Gelsinger had a defect in a gene that codes for an enzyme that plays an important role in protein metabolism. The goal of the study was to use an adenovirus vector to transfer a functional copy of the gene into Gelsinger’s liver cells. Unfortunately, Gelsinger succumbed to a severe immune response to the adenovirus. Investigations by the FDA and the Office for Human Research Protections found that the principal investigator for the study, James Wilson, had not fully informed Gelsinger about adverse effects of the vector observed in animal studies, nor about his financial interests related to the study. Wilson held 30 percent of the stock in Genovo, a company that he had founded. The university also held stock in the company. Wilson also had 20 patents on gene therapy methods, some of which had been transferred to the university. The consent document included a statement saying that Wilson and the university had financial interests related to the study, but it did not describe those interests in any detail. The Gelsinger case damaged the public’s trust in gene therapy research and set back the field considerably. Funders, associations, institutions, and journals responded to this incident by strengthening financial disclosure and management policies.

Promoting Trust
Trustworthiness is sometimes assumed, such as when a child trusts a parent, but more often it must be earned or validated. Trust is a relationship between and among human beings built upon shared expectations, effective communications, and ethical commitments. To establish and maintain a trusting relationship, each party must regard the other as worthy of trust. When two strangers become friends, they develop trust as they get to know each other’s values, interests, and life experiences, and as they satisfy mutual expectations. If a friend keeps a secret for me, I can trust him or her
with more secrets. However, if a friend divulges secrets without permission, the trust we have formed is broken.

In many social interactions, we must trust those with whom we do not have close personal relationships, such as doctors, bankers, auto mechanics, scientists, government officials, and so on. Trust between strangers depends on behavioral expectations defined by explicit or implicit norms, such as laws or ethical standards. For example, I can trust a doctor whom I don’t know personally because I believe that he or she is a member of a profession that follows ethical and professional norms, such as those codified in the Health Insurance Portability and Accountability Act (HIPAA), passed in 1996 and updated through 2013 with requirements for medical records privacy. I can trust that the doctor will use his or her judgment and skill to promote my wellbeing, and that he or she will keep my medical information confidential. Rules and norms can therefore play a crucial role in promoting trust in situations where people are not likely to know each other personally, such as health care or research involving human subjects.

Laws, regulations, and guidelines are essential to promoting trust in research with human subjects. Government agencies, professional organizations and associations, and scientific journals have played, and continue to play, an important role in developing research rules and norms, and revising them in response to emerging technologies, areas of research, or public concerns. For rules and norms to be effective, they must be taught, applied, practiced, and enforced. Academic institutions can help to implement research rules and norms by sponsoring educational programs in research ethics for investigators, research staff, administrators, IRB members, and community members; making ethics consultation and guidance available for investigators and research staff; providing financial and logistical support for oversight committees; developing policies and procedures for human subjects research; managing financial interests; and auditing research. Professional organizations can help with implementation by educating their members about research ethics rules and norms, and by enforcing ethical and professional standards.

Trust involves much more than following rules or norms, however.
[Figure: a network diagram whose nodes are the scientific enterprise; collaborators and other investigators; investigators and research staff; human subjects; the general public; institutional review boards and support staff; sponsors; institutions and institutional officials; and communities and populations.]
Studies involving human subjects rely on an interactive network between researchers, ethics boards, the public, and stakeholders, which aims at developing and executing informative investigations while ensuring the well-being of the participants. The links connecting each node are based on trust, communication, and regulatory affairs. (Adapted from D. B. Resnik, 2018.)
To nurture trust, it is also important to establish the kind of mutual understanding that occurs in personal or professional relationships. Public and community engagement can be instrumental in promoting trust in research with human subjects insofar as engagement activities generate mutual understanding. Engagement is a dialogue among interested parties or stakeholders that involves the sharing not only of information, but also of ideas, interests, values, worldviews, needs, and concerns. There are many different strategies researchers can use to engage the public and communities. Some of these include conducting focus groups, seminars, or forums with members of the public; surveying public opinion; meeting with community, civic, and religious organizations; and publishing information in articles, books, editorials, or websites. Engagement is like informed consent but carried out at the level of the community or larger public.

Building and maintaining trust in research with human subjects is no easy task, but it is well worth the effort. Trust is important not only for avoiding serious ethical lapses in research but also for
advancing scientific knowledge. Most of the knowledge we have obtained about human health, physiology, development, genetics, psychology, and behavior would not be possible without a web of trust uniting participants, researchers, institutions, communities, and sponsors toward the common goal of conducting research that benefits humanity.

Bibliography
Corbie-Smith, G., S. B. Thomas, and D. M. St. George. 2002. Distrust, race, and research. Archives of Internal Medicine 162:2458–2463.
Kraft, S. A., et al. 2018. Beyond consent: Building trusting relationships with diverse populations in precision medicine research. American Journal of Bioethics 18(4):3–20.
Resnik, D. B. 2018. The Ethics of Research with Human Subjects: Protecting People, Advancing Science, Promoting Trust. Cham, Switzerland: Springer.
Reverby, S. M. 2010. Examining Tuskegee: The Infamous Syphilis Study and Its Legacy. Chapel Hill, NC: University of North Carolina Press.

David B. Resnik is a bioethicist at the National Institute of Environmental Health Sciences (NIEHS). He was chair of the NIEHS Institutional Review Board from 2008 to 2019 and is a Certified IRB Professional. Email: [email protected]
Who Dares to Speak Up?
A federal agency allowed unethical experimentation on Black men for four decades before someone finally decided to blow the whistle.
Marc A. Edwards, Carol Yang, and Siddhartha Roy
Being a whistleblower can be lonely. The public opposition to authority that it requires often results in assaults on one’s reputation. Weathering such assaults is an ordeal—one that the three of us have experienced firsthand while playing prominent roles in the effort to expose unethical behavior by government agencies that led to the poisoning of public water systems in Washington, D.C., and Flint, Michigan. By July 2016—while still engaged in litigation over D.C.’s lead crisis, which he had helped to expose 12 years earlier—one of us (Edwards) had reached a breaking point. He wrote to a colleague that he needed to “find someone like me and see how it turned out, or how it ended for them.” And so he called Peter Buxtun.

Buxtun was the whistleblower who brought about the end of the infamous
1932–1972 “Tuskegee Study of Untreated Syphilis in the Male Negro,” the longest nontherapeutic clinical study in American history. The U.S. Public Health Service had recruited 600 African American men (399 with latent syphilis and 201 uninfected controls) in order to study the effects of the disease. During the long course of the study, the investigators—who did not obtain informed consent from the participants—deprived the men of the best available medical treatment and subjected them to painful spinal taps to collect bodily samples. As incentives for participation, they offered the men free meals, placebo treatments, and burial stipends in exchange for the right to autopsy their bodies after they died. The Tuskegee study was repeatedly framed by Public Health Service officials as a medical “opportunity” to observe the progression of untreated syphilis in the patients’ bodies. The
study was finally shut down only after a seven-year effort by Buxtun, a venereal disease contact tracer at a San Francisco medical clinic, who overheard a conversation about the study and felt compelled to expose its unethical atrocities. Edwards found Buxtun’s phone number through a simple Google search and called his home in San Francisco. After he introduced himself, Buxtun responded, “I know who you are, and I knew you would call. What took you so long?” That fall, we invited Buxtun to give a guest lecture to our graduate-level ethics class at Virginia Tech and to document his story for posterity. Although he had been quoted in news stories, he had never told his own story in full. Buxtun had just retired from a career in investing, and a few weeks previously his cousin had died,
leaving him with, as he described it, a sense that “I’m now an orphan [and I] want to get certain things done.” He agreed to fly out from San Francisco to speak with us about how his life experiences had informed his later decision to expose the harmful nature of the Tuskegee study. Unless otherwise noted, all quotes in this article are Buxtun’s own words, taken from his lecture or from our conversations with him.

Nurse Eunice Rivers, Dr. David Albritton, and Dr. Walter Edmondson (facing page, subjects third, fourth, and fifth from left, respectively; undated photograph) visited towns in Macon County, Alabama, to collect blood samples from participants in the U.S. Public Health Service “Tuskegee Study of Untreated Syphilis in the Male Negro.” Participants were not informed that they had syphilis and were not offered treatment, even after penicillin became widely available. The study continued for 40 years until Peter Buxtun (pictured below testifying to Congress in 1973) exposed the unethical project in 1972.

A Childhood Shaped by Atrocities
Buxtun was born in Czechoslovakia (now the Czech Republic) in 1937 to a Catholic Austrian mother and a Jewish Czech father. After hearing Adolf Hitler speak at the 1938 Munich Conference, Buxtun’s father feared it might already be too late to leave Czechoslovakia. He managed to liquidate some of his assets and smuggle gold out of the country through the American consulate (for a 10 percent fee), raising enough money to flee to the United States. The family tried in vain to convince Buxtun’s Jewish uncle to leave as well, but he refused; he eventually perished in a death march to the Auschwitz concentration camp in Nazi-occupied Poland. Buxtun took from this family history the lesson that “if you’re talking to someone about [leaving a totalitarian state] and they’re laughing at you, the key to remember is this: When the laughing stops, it’s too late.”

Buxtun’s family made their way to New York State when he was just 11 months old. They eventually moved to an Oregon ranch to take advantage of his father’s expertise in growing flax, which was in high demand during World War II as a key component of ropes, fabrics, and parachute harnesses. In their new country, his parents were Republicans, and they blamed President Franklin D. Roosevelt for policies that they believed had prolonged the Great Depression. Peter Buxtun grew up sharing his parents’ skepticism regarding liberal politics. The Civil Rights marches of the 1960s happened far from his Oregon home, but even if they had been local, Buxtun says he probably would not have participated because they “would have been largely run by the Democrat Party in California.” Growing up on the ranch, he loved guns and became a member of the National Rifle Association. With regard to modern
American media, he says, “I’m willing to listen. . . . If somebody is looking into a television camera and lying in my direction, I’ll take notice and say nasty things to the screen. But I won’t be surprised generally.” He switches between CNN and Russian Television for news.

In high school and college, Buxtun was strongly influenced by historical accounts of war crimes and human cruelty in Germany, Russia, and Japan. His interest started with stories of the Holocaust told by his uncle, a German tank officer who fought in the battle for Stalingrad, and who eventually immigrated to the United States. After hearing his uncle describe his futile attempts at stopping Nazi murders of civilians, Buxtun asked, “Wasn’t there anything that you could do?” His uncle looked at him like he was crazy and replied, “Are you kidding? I would have been shot.” Two courses that Buxtun took at the University of Oregon also left an indelible impression. One, taught by historian William O. Shanahan, covered
medical war crimes prosecuted at Nuremberg that were associated with concentration camp experiments; the other class was led by a Jewish professor of European history who recounted personal horror stories of living in Nazi Austria. These stories, which reminded Buxtun of what could have been his fate if his family had not left Czechoslovakia in time, shaped his understanding of authoritarian abuses of power.

“I Knew What I Had to Do”
In 1966, after serving in the army as a psychiatric social worker and combat medic, Buxtun responded to an ad looking for people to “work in venereal disease control in San Francisco,” because he had always wanted to live in the Golden City. His only prior relevant experience with sexually transmitted diseases had happened in college, when a group of his fraternity brothers had visited a local prostitute; he remembers them having to line up for mercury treatments after they learned that she had a venereal disease (proving, he said, the folk wisdom of “one night with Venus, a lifetime with Mercury”).

Buxtun was hired as a contact tracer for gonorrhea and syphilis at the Hunt Street Clinic. It was there that he overheard a conversation about a study being run in Macon County, Alabama, that sounded “like something the PHS shouldn’t be doing”: withholding standard penicillin treatments for the test subjects. Buxtun immediately called the U.S. Centers for Disease Control and Prevention (CDC, a division of the Public Health Service) headquarters in Atlanta for more information, and he soon received a manila envelope with 11 roundup reports detailing the progression of syphilis in African American male test subjects—results from the Tuskegee study.

“The doctors [in and] around Tuskegee knew about the study,” he says. “If somebody [they didn’t know] came in and was diagnosed with syphilis, they would check if their name was on the list for being a volunteer—a volunteer guinea pig for the CDC.” The study “was not a secret,” Buxtun notes. “The CDC published a report every time they did a roundup. Everything was out in the open.”

It was evident from the reports that the study directors did not think anything was wrong with the work they
At least 13 peer-reviewed papers were published about the “Tuskegee Study of Untreated Syphilis in the Male Negro,” including a 20-year retrospective in Public Health Reports (upper left). The Public Health Service claimed that its actions were scientifically necessary, but the study did not follow standard scientific procedures; for example, if control subjects became infected, researchers transferred them to the “syphilitic” group instead of removing them from the study (lower left). Certificates acknowledging 25 years of participation do not mention the nature of the medical research (upper right). Buxtun’s objections to the unethical study fell on deaf ears until 1972, when public outrage generated by Jean Heller’s exposé finally brought the study to a close (center).
were doing—a mindset that reminded Buxtun of the German doctors judged at Nuremberg. He felt that the CDC had learned nothing from history and was failing to see the moral implications of the Tuskegee study: “If somebody puts a hot potato in your hand, do you drop it, or do you carefully put it down? Hot potato, that is what [the study] was. I recognized that people were getting hurt, and I knew what I had to do.”

Taking on the CDC
Buxtun wrote a well-researched analysis and letter to the CDC, comparing the Tuskegee syphilis study to what occurred in Nazi Germany. Before mailing it, he shared the letter with coworkers. “It caused a scandal where I worked. People [including my boss] were coming up to me saying, ‘Well, when they fire you, for God’s sake, don’t mention my name.’” One doctor “came back, fire in his eyes, marched up to me, threw it at my chest, basically, and said ‘These people are all volunteers,’” wrongly implying the Tuskegee participants had provided informed consent.
Without support from his peers, and fully expecting to be fired, Buxtun mailed the letter to the director of the Public Health Service’s Division of Venereal Diseases, William J. Brown, in November 1966. “Nothing happened [for months]. And then suddenly [in March 1967] I got orders, ‘Come to a meeting in Atlanta,’” he recalls. After being forced to wait all day, Buxtun was led to a room with a large table. “Over here was the American flag. Over there was the flag of the CDC. All the bureaucrats, of course, sitting at the head of the table in front of the flags. . . . The door was shut and we all sat down.”
Tuskegee study lead researcher John Cutler scolded Buxtun, saying, “‘See here, young man. We are getting a lot of valuable information from the Tuskegee study. This is something that is going to be of great help to the Black race here in the United States.’ And he gave me a proper tongue-lashing.” Brown informed Buxtun that the Public Health Service had decided to continue the study, citing “medical judgment” as a reason for withholding treatment for the test subjects. Buxtun did not question their medical expertise, but he argued that the Tuskegee patients needed to at least be
compensated for not receiving treatment. Unimpressed and unfazed by the medical terminology Cutler threw around, Buxtun pulled out highlighted official CDC reports that he had examined during his research. He read aloud: “Without free transportation . . . during the time of examination, without free hot meals, treatment for other diseases and a couple of other things . . . it would have been impossible to secure the cooperation of the volunteers.” In response to that clear evidence of unethical incentives for participants, Cutler backtracked: “‘I didn’t write that. I didn’t write that. Let me see that. That must have been written by one of my colleagues.’ . . . He was shocked and he was scared a little bit, like, ‘Whoops, did I step into something? Is this going to bounce back and bite me on the nose?’ . . . They were all kind of looking at each other, you know, in the way that bureaucrats do when they’re suddenly startled by something like this that isn’t supposed to be happening. ‘Why is this happening? I’m not here. I wasn’t here.’” Buxtun was dismissed shortly afterward. “It was like, ‘We’re through with you. Go away.’ You know, ‘We’ve got business.’”

Keeping Up the Fight
Buxtun heard nothing more from the officials, and he left the Public Health Service in 1967 to attend law school. But he was haunted by his knowledge of the ongoing study. He sent a second letter to the CDC in 1968, at the height of the Civil Rights movement, seven months after Martin Luther King, Jr., had been assassinated. King had led marches from Selma to Montgomery, which is less than 40 miles from Tuskegee. Buxtun worried that “if people found out about [Black] people with [untreated] syphilis who were allowed to suffer, that would have blown out all over the country . . . and damaged other things the CDC was doing. I was going around saying, ‘We don’t do this sort of thing.’ Or, ‘We shouldn’t do this sort of thing.’”

In response to Buxtun’s second letter, an ad hoc blue-ribbon panel was convened by the CDC in 1969 to discuss the study’s future. All of the panelists were high-ranking physicians, and none were Black or had medical ethics training.
A 1945 poster produced by the U.S. Public Health Service shows the agency’s recommended response to cases of syphilis, including treatment in clinics, follow-up with the patients and their families, and education. At the same time, the Public Health Service was seven years into the “Tuskegee Study of Untreated Syphilis in the Male Negro,” in which participants were not informed of their infection or provided with treatment. Consequently, many of these men passed the disease to their partners and children.
“[Only] one doctor there was not a participant in the roundups. [Dr. Gene Stollerman] didn’t know what the Tuskegee study was. He came in, and it didn’t take him long to figure out what it was.” Stollerman pushed for getting the patients treatment, but the rest of the panel saw the issue mainly in political and public-relations terms, and not as a moral issue. They debated the scientific merits of the study, made broad assumptions that treatment might do more harm than good, and decided
that the study should continue. The panel also concluded that the men in the Tuskegee study were too uneducated to give informed consent and that therefore doctors should gain “surrogate informed consent” from the Macon County Medical Society and local doctors, so that they could continue to withhold information from the subjects.

In the 1970s, the Tuskegee study physicians once again successfully
Exposing injustice can have personal and professional consequences, as noted by author Marc A. Edwards in his response to receiving the 2017 Massachusetts Institute of Technology Disobedience Award. In 2016, having spent years fighting for clean water in Washington, D.C., and Flint, Michigan, Edwards felt isolated and exhausted by attacks on his reputation. When he called fellow whistleblower Peter Buxtun for advice, Buxtun responded, “I knew you would call. What took you so long?”
persuaded local doctors to withhold syphilis treatment, including conventional antibiotics. While the official stonewalling continued, Buxtun repeated the Tuskegee story to law professors, friends, and anyone he could find, until finally Edith Lederer, a friend and reporter at the Associated Press (AP), listened. Lederer was a fairly junior reporter, and so the AP assigned the story to investigative journalist Jean Heller. On July 26, 1972, Heller’s exposé ran on the front page of the New York Times. The story became a public sensation, and marked the beginning of the end for the Tuskegee syphilis study. Merlin DuVal, the Assistant Secretary for Health and Scientific Affairs, officially terminated the study at the end of the year.

Enduring Consequences
The Tuskegee study demonstrated that public health agencies could not be trusted to self-regulate. Its exposure by the media, and the resulting Congressional investigations, began an ever-evolving process to address concerns regarding how human subjects were used in research. The 1974 National Research Act introduced the standard
of informed consent (see Resnik, pages 232–237). As for the individuals most directly impacted by the study, the surviving test subjects eventually received lifelong Medicare coverage and treatment, and in 1997 President Bill Clinton offered them a public apology. After Tuskegee made headlines, Buxtun testified to the U.S. Congress and continued advocating for the rights of the victims. He described Tuskegee sharecropper and test subject Herman Shaw as “one of the most remarkable people I’ve ever met.” At the 1997 White House event where Clinton apologized for the Tuskegee syphilis study, Shaw gave a speech in which he described his fellow participants: “They were hardworking men. They were men deserving of respect. They put food on the table for their families the way that their parents had.” In our interview, Buxtun grew especially passionate when discussing how science could be perverted to support a totalitarian government. He compared Edwards’s multiyear battle with the CDC to those who struggled against Soviet biologist Trofim Lysenko in the 1930s and 1940s. Lysenko, who rejected genetics and
science-based agriculture, had more than 3,000 scientific opponents dismissed from their jobs, imprisoned, or executed. Buxtun said, “Lysenkoism is everywhere, wherever you look. When somebody like that is tripping up somebody like you, they should be called out for who they are.”

Buxtun later volunteered that his Myers-Briggs personality type was “Debater,” a type characterized by resistance. That trait is evident not only in his long campaign against the Tuskegee study, but also in his subsequent rejection of any attempt to paint him as atypical. He would not acknowledge that there was anything at all unusual about himself or about his seven-year effort to expose unethical behavior in the Tuskegee experiment. At the time of our interview, though, Buxtun had finally begun considering himself a whistleblower, admitting that in his quest, “I walked on many tightropes, often without realizing I was walking on a tightrope until I got to the other side. . . . I was lucky.” Buxtun says he never considered giving up, because “I knew I was doing the right thing. I knew who was involved. I knew why they were involved. I knew how they were being abused. And I didn’t agree with it.”

We can understand the ambivalence about describing people who speak out as whistleblowers (see “Flint Water Crisis Yields Hard Lessons in Science and Ethics,” May–June 2016). As one of us (Edwards) stated, “All I did is follow the facts and state the truth as an aspiring scientist. In that view, what I did is simply science.” There is possibly no better example of the benefits of speaking out than the story of Peter Buxtun and Tuskegee.

Bibliography
Elliott, C. 2017. Tuskegee Truth Teller. The American Scholar. https://theamericanscholar.org/tuskegee-truth-teller/
Jones, J. H. 1993. Bad Blood: The Tuskegee Syphilis Experiment, second edition. New York: Free Press.
Washington, H. A. 2006. Medical Apartheid: The Dark History of Medical Experimentation on Black Americans from Colonial Times to the Present. New York: Harlem Moon.
Marc A. Edwards is the Charles Lunsford Professor of civil and environmental engineering at Virginia Tech. Carol Yang is an undergraduate biosystems engineering student, and Siddhartha Roy is a research scientist, both at Virginia Tech. Email for Edwards: [email protected]
LOW AS $49 plus shipping & handling
Historic 1920-1938 “Buffalos” by the Pound
FREE Stone Arrowhead with every bag
Released to the Public: Bags of Vintage Buffalo Nickels
One of the most beloved coins in history is a true American Classic: The Buffalo Nickel. Although they have not been issued for over 75 years, GovMint.com is releasing to the public bags of original U.S. government Buffalo Nickels. Now they can be acquired for a limited time only—not as individual collector coins, but by weight—just $49 for a full Quarter-Pound Bag.
100% Valuable Collector Coins—GUARANTEED!
Long-Vanished Buffalos Highly Coveted by Collectors
Millions of these vintage Buffalo Nickels have worn out in circulation or been recalled and destroyed by the government. Today, significant quantities can often only be found in private hoards and estate collections. As a result, these coins are becoming more sought-after each day.
Supplies Limited—Order Now!
Supplies of vintage Buffalo Nickels are limited as the availability of these classic American coins continues to shrink each and every year. They make a precious gift for your children, family and friends—a gift that will be appreciated for a lifetime. NOTICE: Due to recent changes in the demand for vintage U.S. coins, this advertised price may change without notice. Call today to avoid disappointment.

Every bag will be filled with collectible vintage Buffalos from over 75 years ago, GUARANTEED ONE COIN FROM EACH OF THE FOLLOWING SERIES (dates our choice):
• 1920-1929—“Roaring ’20s” Buffalo
• 1930-1938—The Buffalo’s Last Decade
• Mint Marks (P, D, and S)
• ALL Collector Grade Very Good Condition
• FREE Stone Arrowhead with each bag
Every vintage Buffalo Nickel you receive will be a coveted collector coin—GUARANTEED! Plus, order a gigantic full Pound bag and you’ll also receive a vintage Liberty Head Nickel (1883-1912), a valuable collector classic!

30-Day Money-Back Guarantee
You must be 100% satisfied with your bag of Buffalo Nickels or return it within 30 days of receipt for a prompt refund (less s/h).
Order More and SAVE QUARTER POUND Buffalo Nickels (23 coins) Plus FREE Stone Arrowhead $49 + s/h HALF POUND Bag (46 coins) Plus FREE Stone Arrowhead $79 + s/h SAVE $19 ONE FULL POUND Bag (91 coins) Plus FREE Stone Arrowhead FREE Liberty Head Nickel and Liberty Head Nickel with One Full Pound $149 + FREE SHIPPING SAVE $47
FREE SHIPPING over $149! Limited time only. Product total over $149 before taxes (if any). Standard domestic shipping only. Not valid on previous purchases. For fastest service call today toll-free
1-877-566-6468 Offer Code VBB559-07 Please mention this code when you call.
GovMint.com • 14101 Southcross Dr. W., Suite 175, Dept. VBB559-07, Burnsville, Minnesota 55337
GovMint.com® is a retail distributor of coin and currency issues and is not affiliated with the U.S. government. The collectible coin market is unregulated, highly speculative and involves risk. GovMint.com reserves the right to decline to consummate any sale, within its discretion, including due to pricing errors. Prices, facts, figures and populations deemed accurate as of the date of publication but may change significantly over time. All purchases are expressly conditioned upon your acceptance of GovMint.com’s Terms and Conditions (www.govmint.com/terms-conditions or call 1-800-721-0320); to decline, return your purchase pursuant to GovMint. com’s Return Policy. © 2021 GovMint.com. All rights reserved.
www.americanscientist.org
Special Issue: Trustworthy Science
THE BEST SOURCE FOR COINS WORLDWIDE™
2021
July–August
243
First Person | Shirley M. Malcom
A Sea Change in STEM

Workplace power structures often reward conformity more than talent, and STEM (science, technology, engineering, and mathematics) jobs are no exception. Even when underrepresented researchers establish careers in the sciences, they are often expected to adhere to the cultural norms of a system dominated by white males. Shirley M. Malcom is working to make STEM education and careers more welcoming to people from all backgrounds. Malcom is the director of Education and Human Resources Programs at the American Association for the Advancement of Science (AAAS). She trained as an ecologist but for the past four decades has focused her efforts on building opportunities for budding scientists. Malcom says that the first hurdle to overcoming the biases in STEM workplaces is deceptively difficult: recognizing that there is a problem. To that end, she helped establish and is the director of SEA (STEMM Equity Achievement—the additional M is for medicine) Change, an AAAS initiative. SEA Change utilizes one of scientists’ strengths—data—to uncover biases that institutions might not recognize are present in their organizations. Malcom spoke with special issue editor Corey S. Powell about the heartbreaking structural problems she has seen in STEM workplaces, and why it is important for an institution to be inclusive. This interview has been edited for length and clarity.

Michael J. Colella for AAAS
Lack of diversity in the workplace has been a problem for a long time. What are the biggest successes you’ve seen, and what are the most stubborn obstacles that still frustrate you?
I do think we have made good progress with regard to women in some areas of the sciences. We’re at parity or above in the life sciences and in the social and behavioral sciences, except for economics. Women are now the majority of those in medical school. After I’ve said the good stuff, we still have issues around questions of leadership. Advancing into higher-level positions is still a challenge for women. Take, for example, medicine. Even though women have been running at least 40 percent [parity] for about 20 years, you don’t see that in division chairs, department chairs, or as deans. That’s a real problem. We can get in, but we can’t necessarily get up. The challenges around race are so much more stubborn. I keep telling people race is really different. We’ve made some halting progress. We slide back. But right now we’re on a downward slope in a lot of areas. Especially for Black women. Thinking that we’ve made more progress than we have can lead us to complacency. A lot of the progress has occurred because of intervention programs, things that you can put in place to bring people in, help prepare them, and support them. Those programs tend to be soft money–funded and highly reliant on volunteers. And in some cases, with the increased judicial and legislative scrutiny of the programs, people are gun-shy of even running them. The nature of the racism we’re talking about is not about the way an individual treats another individual. It’s systemic racism. The systemic nature of the problems that we face has not been accompanied by the systemic nature of the solutions. People want to stick a Band-Aid on a gaping wound. It’s not gonna happen. The first thing is, you have to stitch up that gaping wound. Offering that kind of aid is often seen as favoring somebody, as opposed to addressing the wound that you might have given them.

What are some ways that institutions can be more inclusive?
Historically Black colleges and universities [HBCUs] involve a fairly small proportion of all Black students, and yet they make an outsized contribution to the STEM students [see “Empowering Success,” September–October 2020]. Many of these schools are under-resourced, and they don’t necessarily have all the amenities that the predominantly white, huge research institutions have. But they’re somehow still overcontributing to the number of STEM students. It’s something about the environment that they create, and the ways they encourage the students. The culture of belonging, the community, the support, the expectations, and so on. But you can do that anywhere. You don’t have to be in an HBCU to do that. There are places that are not HBCUs that have adjusted their cultures. They’ve taken those lessons, and they have adjusted those cultures.

How can you make people more aware of these issues, and get them mobilized to create a fairer system?
There are things that we just don’t see anymore. Fish don’t see the water. We don’t see certain things that are barriers. And that means that you have to start with a self-assessment that asks you about everything, the things you see and the things you may not see. If universities are going to survive, their future better be more diverse than their past has ever been. The question then is, how do you make a welcoming, diverse future possible? I have been living in this skin as a Black woman my entire life. I have faced all of the kinds of barriers. There’s always something there to remind you that you don’t belong. There’s no one around who looks like you. I expected that life within the scientific community would be different because of a stronger reverence and reliance on evidence. But I began to understand that at the end of the day, we’re just all people. We’re in the best time in the world to look at this issue because everything is disrupted. There are things that we know from the research, about forms of instruction that are in fact more successful for all students, but especially effective for students of color and women. In the middle of a pandemic, in the middle of a reckoning on race, we have been given the opportunity to redesign, recreate, and reimagine how we educate.

How do you overcome the common attitude that fairness is a zero-sum game—that fairness toward you comes at a price in fairness toward me?
Often when people talk about unfair policies, they cite facts that are not true. Their perceptions do not necessarily match the facts. Which is one of the reasons I like SEA Change as a model, because you know what it depends on? Your own data. I’ve had people argue with me about national data. One man told me that it can’t be the case that women’s pay and equity are lower than men’s because the last woman hired in his department was offered a higher starting salary than someone else. I said, “And that’s an N of one.” This man is a scientist, but everything that he otherwise would have known to do just had fallen away. Look at your own data. Look at the data over time. Do you feel you’ve been making progress? What do the data tell you about whether you have been making progress? I am hoping, at least with the scientific and technically trained community, that evidence will matter. I’m doubtful that all of the institutions that think they’ve made progress have in fact done so, but look at your own data.

What arguments do you find are most effective in convincing institutions of the value of investing in diversity?
We have to begin to understand the positive reputational value that is going to be associated with being a safe and inclusive place. A lot of people want to pit diversity initiatives against merit, but, quite frankly, the research is telling us that creativity and innovation emerge from diversity. Bringing together these different experiences, these different moments that people have had in their lives, creates a lot more working capital—different perspectives that we can use to tackle the really big problems that we face. Energy, climate, sustainability, the next pandemic. These aren’t trivial. And we’d better get everybody off the bench and on the field. I try to understand how systemic racism has affected our populations, but also how it has interfered with real understanding. I was struck by how long the National Institutes of Health spent trying to find out what Black scientists were or were not doing that caused them to not get supported, rather than saying, “What didn’t the agency do? What message is the agency sending?” It’s this vicious cycle of blaming the victims rather than the institutions. Or, as I say sometimes, the system is the problem. People ask, “Where do I start?” And I hate to say this, but everywhere. But if you start anywhere, you will end up everywhere, which is the nature of systems. I was trained as an ecologist, so I’m very comfortable with the messiness of interconnected systems. A lot of people want linear pathways. The minute the stuff starts to deviate, folks get uncomfortable, because they can’t control all the aspects. Somebody once told me that if you can control it all, you’re not doing the work.

A lot has been written recently about the need to build “trust in science.” Do you find those discussions helpful?
Trust in science? When I hear that, I want to say, whose science? People tend to trust people that they know, and the low levels of representation of people of color within the sciences may make it harder to do the kinds of cultural translation that people appreciate. I code switch between my worlds. How do I help people understand that it is to their benefit to trust this vaccine? They don’t know anything about spike proteins. They are not going to care about the structural biology that went into understanding it; however, many of them are a lot more conversant than I would have imagined about mRNA. But to a certain extent, they take hope from the fact that somebody who looks like them is working on this stuff. Over time, diversifying our community is going to be a critical component of building trust in science. We have a lot to get over, because there have been betrayals in the past [see Resnik, pages 232–237 and Edwards, pages 238–242]. Part of the reparations needs to be a pledge that this will not happen again. A pledge that we have structures in place—such as institutional review boards, laws, and policies related to human subjects—to prevent a repeat of those betrayals. Lauren Resnick [professor of psychology at the University of Pittsburgh] once said to me, “We tend to value what we can measure and measure what we value.” We place one value over another one. For example, whether somebody can take and pass the basic science courses should not be what medical school is about. It ought to be about whether or not someone would be a capable, competent, compassionate health care professional. You can help someone overcome a bad biochem grade, but you can’t give somebody a shot of empathy. They either have it or they don’t. But the system is structured in a way so as to advantage one group and disadvantage another. I’m disadvantaged because I have not had what I need in order to show up well using the metrics that you chose to prioritize.

There’s a familiar narrative that science is purely evidence-based, and that scientists just look at things as they are. But when you look deeper, there are layers of cultural decision-making and cultural assumptions that go into science. How can we bring people to a broader, more honest perspective?
I thought it very interesting that astronomers started including the Hawaiian goddesses in their naming protocols. They branched out of Greeks and Romans. Give us more options here in terms of what we call this stuff. We’re making progress. But you have to be so careful, so mindful. I know the people of Hawai’i have their own grievances about the Thirty Meter Telescope [a proposed observatory on Mauna Kea, a traditional sacred location]. Another example is, we’re finally realizing that the greatest genomic diversity is sitting in Africa, not among the six or seven white guys who might want to contribute to genome projects [see Hilliard, pages 198–200]. And some Africans have grievances about researchers wanting to control the ability to get their genomic information. But there’s no way to do personalized medicine if you don’t have an adequate and diverse genomic base. It may be personalized for you, but it won’t be personalized for me.
A companion podcast is available online at americanscientist.org.
Michael Paul Nelson | Tackling environmental crises requires moral as well as scientific clarity.
Ground Rules for Ethical Ecology
As an environmental ethicist, I routinely sit in meetings where the word “sustainability” is uttered reflexively or employed as blanket justification for almost any program. Often, people utter the word without seriously considering its complex, multilayered meaning. One influential definition of sustainability comes from a 1987 United Nations–sponsored report: “meeting human needs in a socially just manner without depriving ecosystems of their health.” But as Michigan Tech ecologist John A. Vucetich and I pointed out in a 2010 article in BioScience, you can read this definition in many different ways depending on your assumptions of what is “good” and “bad” and how you interpret “human needs” and “ecosystem health.” Depending on your perspective, sustainability could mean anything from “exploit as much as desired without infringing on the future ability to exploit as much as desired” to “exploit as little as necessary to maintain a meaningful life.” The researchers and officials who craft environmental policies (or not) have to navigate this vast range of ideas about moral responsibility. And yet, when we talk about sustainability, we rarely clarify what assumptions we are bringing to the table. Failure to have direct conversations about what we value and why has contributed to inaction and political paralysis in confronting enormous ecological challenges such as climate change. Ethical arguments can move hearts and public opinion in a way that mere facts and data simply cannot. If we
want to try to live more sustainably, we need scientific information, yes, but we also need to decide what we value and what we consider ethically acceptable, and then to enact policies that encompass both scientific and ethical realities. Despite my background in philosophy, I spend most of my time working with scientists, specifically ecologists and conservation social scientists. My faculty home is in a college of forestry; I have participated in a long-term study of wolves and moose on Lake Superior; and I serve as the lead principal investigator of a Long-Term Ecological Research (LTER) program at Oregon State University studying a magnificent old-growth forest at the H. J. Andrews Experimental Forest in the Oregon Cascades. I move among scientists who care deeply about the natural world and understand much about how it is unraveling. Our current intertwined environmental crises—not just climate change, but also zoonotic disease pandemics, pollution, food insecurity, and biodiversity loss—are scientific problems; they are also economic and technological problems. But most notably, they are ethical problems that demand an ethical response. These environmental crises have grown out of ethical assumptions that Western nations have made over centuries about our proper role in the natural world. Recognizing and challenging these assumptions could transform our relationship with the Earth and shift public attitudes, allowing new approaches to policy decisions and motivating scientists to incorporate
ethics into how they think about, talk about, and conduct their research. Continuing to separate ethics from science, on the other hand, will likely result in more incomplete, and ultimately ineffective, responses to each crisis. To speak metaphorically, science (and, indeed, life itself) is not a dry land pursuit that sometimes requires fording a lake or stream of values and ethics; it is more like being on a raft in a sea of values and ethics. Although you cannot avoid this sea, you can navigate it with more or less success. Given the breadth of possible meanings of sustainability, for instance, nearly anyone working within conservation or natural resources management could believe their work fits under the sustainability banner. Those who advocate for clear-cutting old-growth forests can point to the renewability of trees as consistent with sustainability, whereas those who advocate for not harvesting forests can point instead to enhanced carbon sequestration. Only by engaging directly with the ethical dimensions of sustainability—what we truly value and why—can the two sides have a meaningful and productive conversation. The relationship between science and ethics is complicated, thorny, and often misunderstood. Yet, there is great power and importance in connecting these two practices. If we are to rise to the challenges of the 21st century, we will need both in equal measure.

QUICK TAKE
The overlapping ecological crises humanity currently faces—climate change, pollution, biodiversity loss, to name a few—are moral and ethical as well as scientific problems.
The relationship between science and ethics is complicated, thorny, and often misunderstood, but we need both scientific facts and ethical arguments to address these crises.
Ethical and moral arguments start with clearly stating what we value and why, and can inspire collective action and form the basis for sound policies and scientific research.

R. O. Peterson
On the remote and wild Isle Royale in Lake Superior, the isolated populations of wolves and moose are deeply interconnected. Researchers have been documenting this fascinating ecological drama for more than six decades. Their work provides a case study in conservation ethics that raises questions about the optimal role of humans in managing nature.

Common Ground
Many scientists tell me that they regard ethics as a subjective pursuit, contrasted with the objectivity of science. But science is not as objective as many of my colleagues like to believe, and ethics is not entirely subjective. The work I do with the H. J. Andrews LTER program is classified as conservation ethics. We use the tools of philosophical analysis to formulate and evaluate real-world conservation questions such as: Should we suppress one species to save or enhance another? In the Pacific Northwest, where I live, there are proposals to kill barred owls that compete with endangered spotted owls in old-growth forests, or to kill sea lions to protect dwindling salmon populations. Our work lays out and evaluates the arguments on each side of such debates. To clarify the issues involved, ethical arguments can be formulated and assessed with a logical structure called argument analysis. A logical argument contains a set of (P) premises (or evidence) and a (C) conclusion, which break complex ideas into their components. In an ethical argument, at least one of the premises will contain a value or ethical statement as well.

This is an example of a logical argument:

P1. Old-growth forests sequester huge amounts of carbon.
P2. The H. J. Andrews Experimental Forest is an old-growth forest.
C. Therefore, the H. J. Andrews Experimental Forest sequesters huge amounts of carbon.

And this is an example of an ethical argument:

P1. Old-growth forests sequester huge amounts of carbon.
P2. Sequestering carbon is critical in the effort to fight climate change.
P3. We ought to do whatever we can to fight climate change.
C. Therefore, we ought to protect old-growth forests.
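Argument analysis of this sort can even be checked mechanically. Here is a minimal sketch in Lean 4 of the first (logical) argument above; the identifiers (Forest, OldGrowth, SequestersCarbon, hjAndrews) are our own illustrative names, not anything from the article:

```lean
-- The article's logical argument rendered as a machine-checked proof.
-- P1 is a universally quantified premise, P2 a fact about one
-- particular forest, and the conclusion C follows by applying P1 to P2.
variable (Forest : Type)
variable (OldGrowth SequestersCarbon : Forest → Prop)
variable (hjAndrews : Forest)

example
    (p1 : ∀ f, OldGrowth f → SequestersCarbon f)  -- P1
    (p2 : OldGrowth hjAndrews)                     -- P2
    : SequestersCarbon hjAndrews :=                -- C
  p1 hjAndrews p2
```

The ethical argument has exactly the same logical shape; the value claim (P3) simply enters as one more premise. That is the point of the formalism: the conclusion “we ought to protect old-growth forests” is unreachable from the empirical premises alone, no matter how solid the science.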
Once an ethical argument is articulated, we can begin critiquing and possibly modifying, accepting, or rejecting that argument. In this way, ethical questions can be handled systematically, rigorously, and transparently, in much the same way that researchers approach scientific questions. The logical approach also suggests that a proposed action or policy (above, “C. Therefore, we ought to protect old-growth forests”) brings together both science and values. If you want morally sound, socially responsive, and feasible policy, you need both science and ethics. In order to take action on climate change, for example, you need to 1) acknowledge that it is happening, 2) acknowledge that it will harm future generations more than it harms those of us who have helped create the problem today, and 3) come to the conclusion that causing unnecessary and disproportionate harm to future generations is a moral wrong. The failure to see the necessity of both scientific and value premises, articulate each clearly, assess their veracity, and ensure the conclusions follow from real evidence almost guarantees failure to make headway on critical environmental issues.
[Infographic: Global carbon emissions by income category, 1990–2015, shown as shares of the global carbon budget for 1.5°C (GtCO2). From 1990 to 2015 the richest 1 percent of the global population accounted for 15 percent of cumulative emissions, the richest 10 percent for 52 percent, the middle 40 percent for 41 percent, and the poorest 50 percent for 7 percent; the remaining carbon budget will be depleted by 2030 without urgent action. Per capita income thresholds ($PPP 2011): richest 1%, $109k; richest 10%, $38k; middle 40%, $6k; bottom 50%, less than $6k. Global carbon budget from 1990 for a 33% risk of exceeding 1.5°C: 1,205 Gt. Source: Oxfam]
Over the past 25 years, the richest 10 percent of the world’s population have contributed 52 percent of the cumulative human-generated carbon emissions, whereas the poorest 50 percent contributed just 7 percent of those emissions. And yet, key impacts of climate change—including drought and rising sea levels—are expected to disproportionately affect the poorest communities.
These points may seem obvious, yet they are frequently obscured or ignored in actual discussions of policy.

Cost–Benefit Trap
I’ve spent a lot of time in my career examining why ethics is so misunderstood. When I engage scientists and conservation managers in ethical conversations about their work, they commonly reduce ethics to one type of ethic: consequentialism, a moral calculus in which the ethical value of a decision is measured by weighing the costs of doing X against the benefits of doing X. For example, in parts of the United States where wolves have returned, wildlife managers debate whether we ought to let people hunt wolves. These conversations are almost entirely framed in consequentialist terms, with managers attempting to weigh tangible costs and benefits of instituting a wolf hunt. For instance, they might consider whether adding a wolf hunting season would generate useful revenue, or whether it would erode their agency’s public support. Confusing consequentialism for ethics writ large makes sense in a Western context, given our long-standing cultural focus on cost–benefit analysis as a means to judge so much in our lives. But consequentialism is hardly the only form of ethical reasoning. Sometimes we consider what rights we believe individuals possess; sometimes we strive to manifest certain virtues, such as empathy, care, respect, integrity, and love; and sometimes we consider whether we ought to adhere to the commands of a divinity or strive to mesh our actions with what we assume is “natural.”

In our writings about wolf hunting, Vucetich and I have argued that the morality of killing a living creature depends on being able to provide a good reason to do so. A wildlife manager who fails to look beyond a cost–benefit analysis of a wolf hunting program might also fail to fully and appropriately grapple with whether they have a “good reason” for killing a wolf based on the best available ecological research. This could lead to the introduction of bad policies that don’t reflect the best possible ethical or scientific judgments.

Ethical Confusion
Another source of misunderstanding is that ethics is often confused with politics. In 2007, Vucetich and I wrote an article analyzing an ethical debate among ecologists about whether it was acceptable that a group of researchers had killed 60 to 120 black-throated blue warblers in order to observe the behavior of a remaining mating pair. Afterward, a well-known ornithologist wrote to chide us, saying our perspective represented “politics and advocacy” instead of science and ethics. We were indeed advocating for the tools of ethical analysis, but we weren’t attempting to determine policy. We didn’t take a side in the debate. Rather, we were setting out ethical principles that others could use in making their own research or policy decisions. People often conflate political or legal decisions with ethical ones. The warbler experiment met the official standards for such research, but that does not necessarily mean it was ethically appropriate. Something can be legal and unethical at the same time.

People also often conflate ethics with social science. Social science employs systematic and rigorous methods to describe some element of the human world: for example, the way that a specific group of people value wildlife, or how people attempt to explain away cognitive dissonance. Ethics, on the other hand, is a philosophical or conceptual exercise that attempts to assess and prescribe a right or good course of action: for example, whether trophy hunting is an appropriate kind of relationship with the nonhuman world, or whether reparations ought to be paid to historically oppressed communities. Social science might tell us how willing the public would be to accept a new policy, such as hunting wolves, but it cannot determine whether that policy is “right” or “wrong.” That is an exercise in ethics.

“Ought,” Not “Is”
It is common to confuse a description of what “is” with a prescription of what we “ought” to do—to say, “Here’s how we have done this in the past” and immediately jump to, “This is how we ought to do this now.” Just think how often people invoke the importance of “traditional values.” Or we might describe some condition as “natural” and imply that it is therefore also “good.” In forestry, researchers often try to determine how frequently and how severely a forest burned before European settlers arrived and modified the local fire regime. People often interpret this historic baseline as a description of what is natural and good, which can therefore be used to justify certain forestry practices. A timber company might argue, for instance, that their clear-cutting mimics a historic (and therefore “natural and good”) pattern of severe and infrequent fires. This kind of reasoning is fallacious, because it conflates a supposedly natural state of being with what is ethically right. Most of the time the conflation between is and ought seems to be unintentional, but in certain instances it is designed as an intentional manipulation. Ethicists call this the is/ought fallacy: the illogical attempt to muscle out a prescription for action based on a set of factual claims alone. To arrive at a prescription for action (such as “we ought to act to avert the sixth great extinction”), you must, as a matter of logic, bring to the table both empirical premises (“the sixth great extinction is happening”) and ethical or so-called normative premises (“causing the loss of biodiversity is morally wrong”).

Call to Action
The widespread misunderstanding and misapplication of ethics have come with a significant cost. Social science research on persuasion and messaging over the past 50 years has demonstrated repeatedly that providing people with ostensibly objective facts typically fails to elicit behavioral change. By contrast, appeals that engage ethical reasoning can have lasting effects. And yet for a long time, even as scientists warned about the impacts of climate change, our philosopher colleagues did not speak publicly or clearly about the associated ethical implications. Too many of us who are concerned about climate change have been committing the is/ought fallacy: attempting to motivate actions in response to climate change from scientific descriptions alone, without articulating what we value, what is worth saving, and what we hold dear. In response, Oregon State University professor emeritus of philosophy Kathleen Dean Moore and I edited a climate change ethics book, Moral Ground: Ethical Action for a Planet in Peril. We wrote to 100 of the world’s moral leaders and asked them, “Is it wrong to wreck the world? Why?” We received many powerful replies. It is wrong to wreck the world, some responded, because this world is a gift and that is not how you reciprocate when given a gift. It is wrong because the world is filled with beauty, and beauty should be protected. It is wrong because it inflicts harm upon and steals from future generations.
Ian Vorster/OSU via CC-BY from https://andrewsforest.oregonstate.edu
In his studies of old-growth forests at the H. J. Andrews Experimental Forest in the Oregon Cascade Mountains, the author and his colleagues use rigorous ethical analysis to guide conservation decisions. His work shows that ethics is not as subjective as many people tend to believe.
In the 11 years since our book was published, we have seen an outpouring of moral responses to climate change from scientists including Michael Mann and James E. Hansen, individuals who have benefitted from fossil fuels such as Valerie Rockefeller Wayne, moral leaders such as Pope Francis, and climate activists such as Greta Thunberg. As social activist Naomi Klein said in 2015: “[T]here is nothing more powerful than a values-based argument. We’re not going to win this as bean counters. . . . We’re going to win this because this is an issue of values, human rights, right and wrong.” I am hopeful that a shift in the way we talk about climate change could allow us to finally see this phenomenon as a moral, as well as scientific, crisis and to respond effectively.
Ideas Are Choices
In the end, ethical arguments matter because they guide action. We live in a world of contested ideas and concepts that make themselves known in the real world in real ways. These disputes are the source of our current challenges, and they are the solutions as well. Here’s the kicker: Many if not most of those ideas are choices. We choose to be anthropocentric (human centered), or not. We choose to attribute intrinsic value to nature, or not. We choose to see ourselves as part and parcel of the world and to empathize with the plight of species and ecosystems, or not.

In a recent paper, conservationists Myanna Lahsen and Esther Turnhout of Wageningen University suggested there is a logic within environmental science that works to resist rethinking and reform. They argue that the current structure of power and funding focuses on the natural sciences over much-needed sociopolitical research on urgent issues such as climate change. Shifting focus, and funding, breeds resistance and fear of losing scientific authority among those who benefit from the way things are now. This is a familiar reaction against proposed institutional change. It’s also a maladaptive logic that results in the continued exclusion of other disciplines such as ethics, and it works against critical self-reflection and perpetuates the status quo at a time when we need status quo disruption.

I urge us all to see the power and importance of ethical thinking. I urge my scientific colleagues to engage in critical self-reflection and evaluation of their own disciplines, and to be more open to ideas from other fields. Climate change, biodiversity loss, food insecurity, and pandemics pose perhaps the greatest set of challenges that we humans have ever faced. Philosophy and ethics will be a crucial part of the unbridled imagination needed to solve them.
Michael Paul Nelson holds the Ruth H. Spaniol Chair in Renewable Resources and is professor of environmental ethics and philosophy at Oregon State University. He collaborates extensively with ecologists, social scientists, writers, and artists and strives to be as “undisciplined” as possible. Twitter: @ThePoetTree3
(References are available online.)
Scientists’ Nightstand

The Scientists’ Nightstand, American Scientist’s books section, offers reviews, review essays, brief excerpts, and more. For additional books coverage, please see our Science Culture blog channel, which explores how science intersects with other areas of knowledge, entertainment, and society.

ONLINE
New on our Science Culture blog: americanscientist.org/blogs/science-culture

An Antidote to Climate Despair
Digital features editor Katie L. Burke reviews All We Can Save: Truth, Courage, and Solutions for the Climate Crisis, a collection of essays by women experts and activists.

Bettering the Lives of Animals
Book review editor Flora Taylor reviews Animals’ Best Friends: Putting Compassion to Work for Animals in Captivity and in the Wild, by anthropologist Barbara J. King (who is pictured below petting a goat at Farm Sanctuary in Watkins Glen, New York).
Cosmic Testimony
Margaret Wertheim

THE DISORDERED COSMOS: A Journey into Dark Matter, Spacetime, and Dreams Deferred. Chanda Prescod-Weinstein. 320 pp. Bold Type Books, 2021. $28.
Chanda Prescod-Weinstein is a particle cosmologist who studies dark matter and the interface between particle physics and gravity—or as she puts it in her new book, “I use math to figure out the history of spacetime.” The Disordered Cosmos: A Journey into Dark Matter, Spacetime, and Dreams Deferred occupies a unique place in the thriving field of popular physics books by elite academics, because Prescod-Weinstein is the first Black woman to hold a tenure-track position in theoretical cosmology, and trenchant reflections on race and gender are a central component of the book. She describes having had to absorb “a hard lesson” in “rarified academic settings,” the lesson “that learning about the mathematics of the universe could never be an escape from the earthly phenomena of racism and sexism.” Physics and math classrooms, she says, come “complete with all of the problems that follow society wherever it goes.” Hers is a twofold tale about embarking on the scientific quest to comprehend the evolution of the cosmos, and about doing so as a Black girl who grew up dreaming of being part of that quest in a society that historically has treated dark-skinned people cruelly and considered them to be intellectually inferior. The book’s narrative illuminates both the physics itself and the structural racism that continues to impede Black people who wish to participate fully in science as a knowledge-making endeavor.
Prescod-Weinstein refers to herself as a “griot of the universe,” comparing herself to tribal African storytellers. She takes the term cosmology to mean not only the conditions of the physical world, but also the social matrix within which any articulation of the physical unfolds. Prescod-Weinstein’s doctoral research focused on the very early universe just after its inflationary phase, when theorists believe a rapid reheating put the “bang” into the Big Bang. Understanding this heating is crucial to understanding how particles such as quarks and hadrons came into being out of what had previously been empty space. “This is a question of general interest to cosmologists, as well as people working on technical issues in quantum field theory in curved spacetimes,” she writes. “I happen to fall into both categories, so this is fun for me.” In cosmic acceleration (the speeding up of the expansion of the universe), she says, we glimpse “perhaps one of our first hints at quantum gravity.” That physicists can make pronouncements about such a distant exotic time is based on “a faith in the universality of mathematical equations”—a faith she celebrates. When she is describing her research, Prescod-Weinstein’s prose lights up with joy as she speaks of the thrill of exploration and the magic of insights won from wrangling with equations. Yet after praising the power of a mathematical approach to the formation of the universe, she pivots to a more somber mode, noting that “What goes undiscussed is that it is not the only way to understand the origins of the world”; Indigenous people have their own cosmologies and knowledges. The knowledge used by Western science, and the cosmological work that Prescod-Weinstein herself does, are allied to “settler colonialism.” As a practicing scientist, how does one deal with such a bifurcation
of consciousness? This tension is at the heart of the book, which calls on us, as readers and admirers of science ourselves, to take a deep look at both the stunning epistemological successes of science and its equally arresting institutional failures. This duality is brilliantly captured in a chapter about the physics of melanin, which gives skin its color. Although race is a social construction, the melanin determining skin color is physical, and Black skin is a material phenomenon. “What is it in my skin,” Prescod-Weinstein asks, “that absorbs and emits light such that I am this color, this shade of brown that is on a spectrum of racialized Blackness?” Astounded that she has never wondered about the physics of skin color before, she digs into the scientific research. It turns out that melanin is a fascinating molecule now of interest to biophysicists and materials technologists, in part because it could be useful in understanding potential superconductors. Not thinking about the physics of skin has consequences. In 2017, a Nigerian employee at Facebook posted a video on social media of an automatic soap dispenser failing to recognize a Black hand because the darker skin was absorbing rather than reflecting light. “The dark skin was invisible,” Prescod-Weinstein writes, “not because it wasn’t there, but because the detector hadn’t been designed with dark skin in mind. . . . [Likewise] Black people disappear from view when new, life-improving technologies are being developed.” Realizing that she could use her scientific skills to understand her own Blackness was part of Prescod-Weinstein’s awakening as a Black scientist. (It was harsh, she says, to realize that she had been conditioned by “artificial social structures” not to ask basic questions.) Her examination of the history of what scientists have learned about melanin leads her to reflect upon the ways in which science and society have “co-constructed” one another. In colonial and Enlightenment Europe, where the idea of darker-skinned “inferiority” was used to establish empires, scientists who should have confronted that view skeptically looked instead for ways of confirming it, thereby “consecrating” bias. Today, when scientists in the United States
This sketch by artist Shanequa Gay, for a painting titled We Were Always Scientists, serves as the frontispiece to The Disordered Cosmos. Prescod-Weinstein says that she commissioned the painting in part because she was searching for an answer to the question, “What does freedom look like?” She asked Gay “to envision unnamed Black women scientists under slavery.”
talk about the importance of diversity in science, too often the discussion is framed in terms of people of color being a “resource” for replenishing the STEM workforce. “In other words,” Prescod-Weinstein says, “like my enslaved ancestors, Black people—and other so-called minorities—in science are constructed as a commodity for nation-building. . . . None of this is about what society can do for people of color so much as what service people of color can provide to the national establishment.” Scientists should instead be thinking about “the ways that Black scientists can shape actual science.” What she most wants is for us to understand that “Black thoughts, like Black lives, matter.”

Prescod-Weinstein discusses at some length some recent attempts to draw analogies between Black people and dark matter. The original idea behind the comparison, she explains, “was specifically to highlight the ways in which Black people and our contributions are erased from cultural discourses.” But as she goes on to point out, the analogy “has also taken flight as a way of identifying with Black as a positive,” as something “special and unique.” She doesn’t like to discourage this impulse of Black culture to embrace science, because so many
251
Black people have been disenfranchised from positive engagement with it. “The analogy draws on their experiences as Black people to give the phrase [dark matter] meaning,” she says—but unfortunately, “this is not the meaning that dark matter should have as a scientific concept.” She delivers a scathing indictment of the analogy, noting that Black bodies aren’t invisible because they don’t intersect with light, they’re invisibilized by social systems designed not to see them. A more apt analogy, she maintains, would be one between weak gravitational lensing and systemic racism: Weak cosmological lensing requires a sophisticated capacity for pattern detection, as does the recognition of racist microaggressions. Prescod-Weinstein describes herself as “a Black child from a biracial family growing up in Latinx eastern Los Angeles.” Her father is an Ashkenazi Jew and her mother, born in Barba-
those two and her mentor Vera Rubin, the field has largely been “a tale of great white men.” Rectifying this skew will involve acknowledging not just the achievements of the women in physics but those of the nonwhite men—for example, Elmer Imes, who in 1918 became only the second Black American man to earn a doctorate in physics, and whose spectroscopic research helped affirm the correctness of quantum mechanics. Women, sadly, are not always able to count on other women for support, and Prescod-Weinstein’s descriptions of unacknowledged racism on the part of white female colleagues are distressing to read. As someone with a degree in physics who has been writing about gender and science since the 1990s, I can attest to how lonely it can feel to be working at the interface of science and social justice, only rarely encountering other women physicists with an ethos of solidarity.
Prescod-Weinstein writes with both a deep knowledge of physics and a psychic vulnerability that is a too rarely witnessed facet of the scientific persona. dos, is Margaret Prescod, a beloved Los Angeles activist and radical radio host. In part, the daughter has written this book to explain her research to her mother. It is a story lyrically told, one that entwines cutting-edge physics with personal memoir—a genre that also includes Janna Levin’s How the Universe Got Its Spots. Levin and Prescod-Weinstein write with both a deep knowledge of physics and a psychic vulnerability that is a too rarely witnessed facet of the scientific persona. It’s no coincidence that both are women. Their weaving of an assertive subjectivity into the supposed objectivities of physics heralds an exciting phase-shift in science writing. Prescod-Weinstein’s book deals explicitly with gender. She deplores the paucity of women in 20th-century cosmology. Throughout her career, she says, she has “clung to Henrietta Swan Leavitt and Cecilia Payne-Gaposchkin as two of the only women greats in the history of cosmology,” but aside from 252
American Scientist, Volume 109
It takes courage to write a book as complex, community-exposing, and self-revealing as The Disordered Cosmos, and doing so exacts an emotional price. Being one of the few Black women in physics is also taxing. Prescod-Weinstein describes the intense pressure she experiences to serve as an advisor and counselor to a stream of Black students and young Black scientists all over the country who write to her, email her, phone her, and come knocking at her door. Her moral fiber compels her to oblige. But as she notes, scant credit is given for meeting these sorts of demands—for doing what she, echoing other feminists, calls the “housework” of science—when decisions are made regarding tenure and promotions. In the cutthroat world of academic science, all focus is on publications and research. I found myself moved to tears by her frustration with this blindness. Like many other women in science, at times Prescod-Weinstein would like to have the option of just getting on with her science. But she is too committed to forging a future in which Black people can contribute to science and have science contribute to their well-being in return.

Late in the book, like a bolt from the blue comes a chapter recounting the devastating effects on Prescod-Weinstein of having been raped at a science conference by a more senior scientist while she was in graduate school. She immediately signals her ambivalence about including the chapter, worrying that her book (and herself by association) will be defined by this horrendous event. But rape, she says, is part of her “disordered cosmos,” a now-irredeemable aspect of her relationship to science. Wrestling with its aftermath, she reflects on power and the ways in which “Science has become a practice of control.” The violation still intrudes on her consciousness daily, more than a decade later, during myriad moments that accumulate into so much lost time. It’s “hard to stay deeply connected to science when a scientist violates you so intimately,” she writes. And the personal becomes political, for this rape took place in the larger framework of science, a framework in which some voices have always been heard and others suppressed. “Whose observations are taken seriously?” she asks. “Who is deemed to be a competent observer?” These are questions that ring through the annals of science.

To love science deeply and also to feel compelled to call out its systemic failings is a hard dichotomy to bear. Science is a set of conceptual enchantments to be celebrated and shared, yet it is also a culturally embedded activity with social and communal consequences, some of which should be occasions for shame and urgent targets for reform. Those of us who admire science are called on not just to convey what is wonderful about it but to lay bare its problems. Prescod-Weinstein does both with consummate grace. Her book ought to be mandatory reading for every student of science.

Margaret Wertheim is a writer, artist, and curator whose work focuses on relations between science and the cultural landscape. She is the author of several books, including Physics on the Fringe: Smoke Rings, Circlons and Alternative Theories of Everything (Walker and Company, 2011).
Sigma Xi Today
A NEWSLETTER OF SIGMA XI, THE SCIENTIFIC RESEARCH HONOR SOCIETY
July–August 2021 • Volume 30, Number 4
Sigma Xi Presents Fiscal Year 2020 Annual Report
Sigma Xi has released its Annual Report for fiscal year 2020, encompassing July 1, 2019–June 30, 2020. The Annual Report shares data and success stories that demonstrate what the Sigma Xi community has achieved in support of the research enterprise. The report showcases the Society’s programs, events, and publications, as well as chapter and member accomplishments throughout the year. Highlights include the inaugural cohort of Sigma Xi Fellows and the Society’s first-ever STEM Art and Film Festival, held in Madison, Wisconsin, as the final event of the 2019 Annual Meeting and Student Research Conference. Fiscal year 2020 was defined by an extreme shift in perspective and operations with the onset of the COVID-19 pandemic. The 12-month period ended much differently than it began, and the resiliency of the Society and its members was put to the test. Like many organizations across the globe, we now face new challenges as we pivot and adapt to a new normal. Because of the hard work, generosity, and perseverance of our leaders, members, volunteers, and supporters, we have been able to continue forward with our commitment to fostering integrity in science and engineering while promoting the public’s understanding of science for the purpose of improving the human condition. To view the report, scan the QR code or visit www.sigmaxi.org/annual-report.
Sigma Xi Today is managed by Jason Papagan and designed by Chao Hui Tu.
From the President
Of Masks and Mistrust

Registrants of the 2020 Sigma Xi Annual Meeting received a face mask sporting the Society’s gold key emblem and the phrase “Science Saves Lives.” I have seen a lot of masks with messages this pandemic year; one read simply: “Trust the science.” Unfortunately, many Americans don’t trust the science. As we all know, mask wearing became politicized. One mask for sale online declares, “This mask is as useless as Joe Biden.” But science’s astounding success in confronting COVID-19 is evidence that it doesn’t deserve such mistrust.

Within a month of the appearance in Wuhan, China, in late December 2019 of an unknown acute respiratory disease, scientists had identified a novel coronavirus—SARS-CoV-2—as the cause. By the first week of February, the CDC was distributing the isolated virus strain to academic and industry researchers. Diagnostic tests that virologists developed appeared by spring and were broadly available by summer. It took a bit longer to determine that the risk of transmission by surface contamination is low; the virus spreads mostly by airborne droplets from the breath of infected people. (Yes, mask wearing is a useful preventive measure.) Vaccines were developed in record time. New masks express appreciation for this heroic achievement: “Thanks, Science. Vaccinated.”

But science cannot save lives if people don’t trust its findings, and too many also mistrust vaccines. In a March 2020 poll, a quarter of Americans said they were unwilling to be vaccinated. That’s too high for herd immunity, the level where enough individuals have been immunized to retard contagion. We now face a situation that favors the evolution of resistant strains.

So, what can we do? For science to be trusted, it must resist politicization and neutrally present the facts. Anti-vaxxers come from both political extremes—but the virus doesn’t discriminate, nor does the vaccine. Mask wearing is useful no matter which presidential candidate you supported or opposed. One mask message expressed that sentiment bluntly: “Science doesn’t care what you believe.”

Nonpartisan messaging may help allay one source of mistrust, but we must dig deeper to the roots of what makes science trustworthy: its values. Curiosity and the search for truth provide a moral structure to science. With its focus on honor in science, Sigma Xi has long been a standard-bearer for scientific values. This year we highlight them, beginning with this themed issue of American Scientist on science and trust. In the coming months, I’ll write about Sigma Xi’s other ethics initiatives. Please join us in helping make science both trusted and trustworthy; it does save lives.
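One way to see why a quarter of the population refusing vaccination is “too high”: the standard herd-immunity threshold, a textbook approximation the column itself does not spell out; the reproduction number below is an illustrative assumption, not a figure from the piece.

$$
p_c = 1 - \frac{1}{R_0}, \qquad R_0 \approx 3 \;\Longrightarrow\; p_c \approx 0.67.
$$

If a quarter of the population declines vaccination, coverage tops out near 0.75; multiplying by a vaccine effectiveness of, say, 0.9 leaves effective immunity of about 0.68, essentially at the threshold, with no margin for more transmissible variants.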
Robert T. Pennock
PROGRAMS
Grants in Aid of Research Recipient Profile: Christine Hamadani
Grant: $1,000 (Spring 2019)
Education level at time of the grant: Graduate student
Project Description: While nanoparticles have huge potential as vehicles to improve the efficacy of drug delivery, one of the biggest challenges hindering their clinical success is surface fouling by serum proteins and premature clearance from the bloodstream.
Using this Grants in Aid of Research (GIAR) award, I was able to develop and study PAIL-surface modified PLGA nanoparticles that not only resisted opsonization and extended circulation half-life inside mice, but also surprisingly hitchhiked on red blood cells after intravenous administration to selectively accumulate in the lungs. Our findings suggest huge implications for the efficiency of delivering drugs to their designated targets in the body with our carrier design. By protecting drug integrity inside PLGA, a biocompatible FDA-approved polymer, and additionally capping the polymeric nanoparticle with PAIL, we can prevent the drug from being cleared immediately from the bloodstream and give it enough time to circulate and absorb into the target disease tissue, without inducing many side effects conventionally associated with administering the free drug. With the assistance of the GIAR, we were able to complete transmission electron microscopy imaging to better understand the morphology and assembly of the PAIL surface coating. We also completed scanning electron
microscopy in a time study to examine their binding to red blood cells in the mice after injection. This work was published in the journal Science Advances in 2020.

How has the project influenced you as a scientist?
I learned how to tell a compelling story to reviewers so scientists from different fields could understand the significance of my research and what I needed to complete the work. My GIAR award provided me a transformative opportunity, introducing me to new and diverse toolkits from physical chemistry, medicine, and bioengineering.

Where are you now?
I followed my co-advisor, Dr. Eden Tanner, from Harvard University to the University of Mississippi, where in the Tanner Lab I am continuing my PhD research, investigating the physical chemistry driving the mechanism of interactions between PAIL-coated polymeric nanoparticles and red blood cells in whole blood to encourage selective cell hitchhiking and to drive targeted biodistribution for blood-based and cancer therapeutics.
Sigma Xi Offers Emotional Support Service Through Happy Partnership
Sigma Xi recently partnered with Happy—an innovative, phone- and mobile-based mental health platform—to launch a new emotional support program for Society members. The partnership fosters the benefits of emotional support and provides access to confidential, personalized, peer-based support services at no cost to Sigma Xi active members. The Happy platform uses advanced data and a unique vetting process to provide a network of 24/7, on-demand services that connect individuals with exceptionally compassionate support givers. “Sigma Xi recognizes an immediate need to offer emotional support during the pandemic emergency, economic turmoil, and social disruption its members are currently experiencing,” said Sigma Xi CEO Jamie Vernon. “Emotional support is a fundamental human need. Because we recognize that stress and anxiety are universal challenges across the research community, Sigma Xi members from all career stages and research sectors are encouraged to use these services.” Sigma Xi is committed to raising awareness of and providing support for the growing emotional health challenges facing its members and the research community. Through the Happy partnership, Sigma Xi members will gain access to valuable tools and information to help reduce feelings of isolation and anxiety, enhance relationships, and improve academic and job performance.

Stay in Touch with Sigma Xi
Update your member, affiliate, or explorer profile today to stay informed of Sigma Xi news, ensure timely delivery of your American Scientist subscription, and get the most from your Sigma Xi benefits. In addition to contact information, please take a few minutes to complete your profile by updating chapter affiliation, research discipline, education, and employment history. This information helps Sigma Xi develop relevant programming and increase networking and mentorship opportunities. Visit www.sigmaxi.org/my-sigma-xi to update your online profile.
MEMBERS AND CHAPTERS
Remembering Former Sigma Xi President Marye Anne Fox

Sigma Xi lost one of its most distinguished and accomplished leaders on May 9, 2021, with the death of Marye Anne Fox. Her passing was announced by the University of California San Diego (UCSD), where Fox served as the seventh chancellor and was the first woman to hold the chancellorship on a permanent basis. After becoming the first woman to receive the Sigma Xi Monie Ferst Award in 1996, Fox was inducted into the Society in 1998 at the University of Texas at Austin chapter. She went on to serve as Sigma Xi's fifth woman president in 2001–2002. In addition to serving on several committees at the Society, including the Executive Committee, Committee on Finance, Committee on Strategic Planning, and Committee on Nominations, she was the 2012 recipient of the Sigma Xi McGovern Science and Society Award.

Throughout her career as an internationally renowned chemist, Fox was continually recognized for research that advanced the world's understanding of renewable energy and environmental chemistry. She received the 2005 Charles Lathrop Parsons Award from the American Chemical Society and the 2012 Othmer Gold Medal from the Science History Institute. In 2010, President Barack Obama named Fox to receive the National Medal of Science, the highest honor bestowed by the United States government on scientists, engineers, and inventors.

Fox is survived by her husband, James K. Whitesell, UCSD professor of chemistry and biochemistry, as well as three sons and two stepsons. She will be missed by many, but her legacy and contributions to science will remain an everlasting part of the fabric of Sigma Xi, the research enterprise, and society as a whole.
Chapter Spring Events

Despite the limitations of the pandemic, Sigma Xi chapters have been active throughout the spring with a diverse array of virtual events, including inductions, lecture series, award ceremonies, and more. Below are a few highlights from across the Sigma Xi chapter network:

January
Research Triangle Park Chapter: Virtual Monthly Pizza Lunch Series
Virginia Tech Chapter: Science on Tap Series
Calgary and University of New Mexico Chapters: Joint Virtual Monthly Seminar Series

February
University of Delaware Chapter: Darwin Day Celebration
University of Nebraska at Kearney Chapter: Virtual Science Café Series
Fairfield University Chapter: Sigma Xi Faculty Research Talk

March
Pennsylvania State University Chapter: Human Limits to Exploring Mars, by James A. Pawelczyk
University of Cincinnati Chapter: Spring Mixer and Young Investigator Award Lecture
Rice University-Texas Medical Center Chapter: Mimi Goldschmidt Quarterly Lecture Series

April
Northwestern Pennsylvania Chapter: Virtual Student Research Conference
Georgia Institute of Technology Chapter: Awards and Induction Ceremony
Saint Louis University Chapter: Virtual Sigma Xi Research Symposium
University of Michigan Chapter: Virtual Teacher of the Year and Science Fair Awards Ceremony
Southern Illinois University-Carbondale Chapter: SIUC Applied Psychology Colloquium
Western Connecticut State University Chapter: Sigma Xi Northeast Regional Research Conference
University of Alabama at Birmingham Chapter: Virtual Lecture: Fragmented Ecosystems (Crayfish vs. People), by Zanethia C. Barnett
Villanova University Chapter: Distinguished Lectureship: Plasticity, Epigenetics, and Evolution, by David Pfennig
College of Mount Saint Vincent Chapter: Virtual Induction Ceremony

May
Columbia-Willamette Chapter: Virtual Lecture: Understanding Motor Control: From Biology to Robotics, by Alexander Hunt
University of Puerto Rico at Mayaguez Chapter: Sigma Xi Poster Day

What do you have planned? List upcoming events on your online chapter community calendar to highlight them to all Sigma Xi members in the events section of our website. Visit www.sigmaxi.org/calendar.
CALL FOR SUBMISSIONS
Showcase Your Creative Work in the STEM Art and Film Festival

The Sigma Xi STEM Art and Film Festival will take place on November 7, the final day of the 2021 Annual Meeting and Student Research Conference. After going virtual in 2020, this year's festival will be a hybrid event, as we welcome all artists to join us in Niagara Falls, New York. The event is open to the public at no cost.

The STEM Art and Film Festival is an opportunity for artists, performers, and filmmakers to share work that is inspired by scientific principles and the integration of the arts in STEM. The call for submissions is open to artists, communicators, filmmakers, educators, scientists, and philosophers. The festival will feature several categories of visual art, including but not limited to painting, photography, data visualization, documentaries, and animated films. Especially encouraged are works that relate to this year's conference theme, "Roots to Fruits: Responsible Research for a Flourishing Humanity – How scientific virtues serve society."

In addition to showcasing their art at the festival, accepted contributors can choose to enter the STEM Art and Film Festival competition. Pieces will be evaluated by a panel of professional judges, as well as the public, and monetary prizes of up to $500 will be awarded in each of three categories: artwork, performance, and film.

Submission Deadline: Friday, July 30, 2021

To make a submission or learn more about the festival, visit www.sigmaxi.org/SAFF21.
Now Accepting Submissions for the Student Research Conference

The Sigma Xi Student Research Conference will be held this November in conjunction with the Society's 2021 Annual Meeting in Niagara Falls, New York. After going virtual in 2020, we look forward to welcoming students back together for this annual celebration of student scholarship and research excellence worldwide. The conference provides students with opportunities to share their research, connect with professionals, participate in professional development sessions, and meet school representatives at the College and Graduate School Fair.

Sigma Xi invites high school, undergraduate, and graduate students to submit abstracts for oral and poster presentations. In addition to standard disciplinary research categories, this year's conference will feature all-new sessions dedicated to interdisciplinary research projects that contribute to developing solutions to global challenges. These areas include environmental challenges, food sustainability, cybersecurity, advancing vaccines and preventing pandemics, and understanding the universe.

The top poster and oral presenters receive monetary awards, commemorative medals, and nomination for associate membership in Sigma Xi, including one year of paid dues.
Submission Deadlines
Oral presentations: August 1, 2021
Poster presentations: September 24, 2021

For more information and submission guidelines, visit www.sigmaxi.org/src.
Flashback, 1887: Sigma Xi Chapters
Cornell University, Ithaca, New York