
Social Media and the Law

This fully updated third edition of Social Media and the Law offers an essential guide to navigating the complex legal terrain of social media. Social media platforms like Facebook, Twitter, Instagram, YouTube, and TikTok have become vital tools for professionals in the news and strategic communication fields. As these services have rapidly grown in popularity, their legal ramifications have continued to develop, and students and professional communicators alike need to be aware of laws relating to defamation, privacy, intellectual property, and government regulation.

Editor Daxton R. Stewart brings together eleven media law scholars to address the key challenges for communicators, including issues such as copyright, account ownership, online impersonation, anonymity, and live streaming, and to answer key questions such as the following: To what extent do communicators put themselves at risk for lawsuits when they use these tools? What rights do communicators have when other users talk about them on social networks? How can people and companies manage intellectual property issues consistent with the developing law in this area?

This book is essential reading for students of media, mass communication, strategic communication, journalism, advertising, and public relations, as well as professional communicators who use social media in their roles.

Daxton "Chip" R. Stewart is an attorney and professor at Texas Christian University, where he teaches courses in journalism and media law. He has more than 15 years of professional experience in news media and public relations. Along with editing Social Media and the Law, he is the author of Media Law Through Science Fiction: Do Androids Dream of Electric Free Speech? and co-author of the textbook The Law of Public Communication.

Social Media and the Law

A Guidebook for Communication Students and Professionals

Third Edition

Edited by Daxton R. Stewart

Cover image: klyaksun / Getty Images

Third edition published 2023
by Routledge
605 Third Avenue, New York, NY 10158

and by Routledge
4 Park Square, Milton Park, Abingdon, Oxon, OX14 4RN

Routledge is an imprint of the Taylor & Francis Group, an informa business

© 2023 selection and editorial matter, Daxton R. Stewart; individual chapters, the contributors

The right of Daxton R. Stewart to be identified as the author of the editorial material, and of the authors for their individual chapters, has been asserted in accordance with sections 77 and 78 of the Copyright, Designs and Patents Act 1988.

All rights reserved. No part of this book may be reprinted or reproduced or utilised in any form or by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying and recording, or in any information storage or retrieval system, without permission in writing from the publishers.

Trademark notice: Product or corporate names may be trademarks or registered trademarks and are used only for identification and explanation without intent to infringe.

First edition published by Routledge 2013
Second edition published by Routledge 2017

Library of Congress Cataloging-in-Publication Data
A catalog record for this book has been requested

ISBN: 978-1-032-00487-7 (hbk)
ISBN: 978-0-367-77234-5 (pbk)
ISBN: 978-1-003-17436-3 (ebk)

DOI: 10.4324/9781003174363

Typeset in Times New Roman by Apex CoVantage, LLC

Contents

Preface
Daxton "Chip" R. Stewart

1 Free Speech in Social Media
Jennifer Jacobs Henderson

2 Defamation
Derigan Silver

3 Privacy, Surveillance, and Data Protection
Amy Kristin Sanders

4 Intellectual Property
Kathleen K. Olson

5 Commercial Speech in a Social Space
Courtney A. Barclay

6 Account Ownership and Control
Jasmine E. McNealy

7 Student Speech
Dan V. Kozlowski

8 Obscenity, Nonconsensual Pornography, and Cyberbullying
Adedayo L. Abah and Amy Kristin Sanders

9 Social Media Use in Courtrooms
P. Brooks Fuller

10 Social Media Policies for Journalists
Daxton R. Stewart

11 Social Media Policies for Advertising and Public Relations
Holly Kathleen Hall

12 The Future of Discourse in Online Spaces
Jared Schroeder

Contributors

Index

Preface

Daxton "Chip" R. Stewart
TEXAS CHRISTIAN UNIVERSITY

When my colleagues and I began work on the first edition of this book back in 2011, we anticipated that new social media platforms would launch and new issues would emerge quickly. Even so, we underestimated how accurate this prediction would be. In the months leading up to publication, the photo-sharing app Instagram became one of the world's most popular social tools, while Snapchat had just launched, soon to overtake Instagram and ultimately Twitter in daily users to become the second-most popular social media app behind Facebook.1 Livestreaming, while technologically plausible, was years away from widespread use; just five years later, Periscope and Meerkat launched to make livestreaming easily accessible, soon giving way to Facebook Live and Twitch as the dominant livestreaming tools in the market.2

With these innovations came new challenges. Periscope had to deal with copyright issues when people used it to livestream new episodes of Game of Thrones and the high-profile pay-per-view fight between Floyd Mayweather and Manny Pacquiao.3 In 2016, a woman in France livestreamed her suicide using Periscope, with nearly 1,500 viewers tuning in, just a month after an American teenager faced criminal charges for livestreaming her friend being raped.4 An Uber and Lyft driver in St. Louis was caught livestreaming hundreds of his customers on Twitch without their knowledge in 2018, leading to his firing.5 While these new technologies showed their dark sides, they also showed potential benefits as tools for shedding light on horrible abuses, as when a woman used Facebook Live to broadcast police shooting her boyfriend during a traffic stop, drawing widespread attention to the tragedy.6

The previous edition of this book, completed in the summer of 2016, preceded a seismic shift in the legal and political import of social media platforms. It seemed implausible at the time: an entertainer and media figurehead who amassed a huge following on Twitter and began his political rise with lies about the birth certificate and origin of America's first Black president would go on to ride a wave of populist discontent and white grievance to capture the Republican nomination for president in 2016, and then the presidency itself, backed by an online bot and troll army of dubious international origin. By the end of his presidency, Donald Trump would be removed from the very
social media platforms that fueled his rise, after he used them to push lies and conspiracy theories about his loss in the 2020 election – lies that manifested in a violent and disturbing assault on the US Capitol on January 6, 2021, by loyal followers who used those and other platforms to coordinate and plan their attack.

For good or ill, social media have become an essential part of modern human communications. As such, their use has proliferated among journalists and other professional communicators as a forum for engaging with their audiences. Using social media for these purposes raises several important legal questions in a variety of areas that professional communicators should be aware of as they do their jobs. These kinds of questions have become more and more prevalent as I have spoken to professional journalism and public relations groups and worked with students over the past decade. New issues brought on by social media use by media professionals and other citizens arise nearly every day, issues that are sometimes uncharted terrain for the law.

In the days I spent writing this preface in late February 2022, the role of social media in modern communication revealed itself over and over again. Former President Trump launched his own social network, a Twitter clone called Truth Social, which promised to be a platform emphasizing free speech and verified white supremacists who had been booted from other platforms. The platform also raised copyright and trademark concerns by using the logos and brands of companies such as the NFL, ESPN, and Fox Sports, seemingly without their permission, and copying content from their other social media accounts to the new network.7 Meanwhile, a federal district judge seemed hostile to Trump's First Amendment lawsuit against Twitter seeking his reinstatement to the platform, telling Trump's lawyer that "one thing that's been more or less a constant going back 20 years is that private companies like Twitter are not subject to the First Amendment."8 Iowa State University police arrested two students for posting vague threats on the anonymous gossip platform Yik Yak warning the community to stay away from campus buildings.9 A federal appeals court dismissed an effort by a copyright troll to sue a Texas high school softball team for tweeting an inspirational quote from his 1982 book, finding the tweet was fair use and awarding the school district attorney fees for having to fight the lawsuit.10

And as Russia invaded Ukraine, the most dangerous escalation of hostilities on the European continent since World War II, social media played an important role in how people around the globe experienced what was happening. Tech companies blocked Russian media and propaganda efforts, as Twitter and Facebook froze accounts of networks such as Russia Today, and YouTube and Google blocked them from monetization and ads.11 Russian leaders retaliated by blocking access to Facebook within the country, alleging "censorship" of their state media, but in doing so also limiting the ability of dissenters and critics concerned about the invasion to organize.12 Meanwhile, scammers on sites such as Instagram and TikTok were using the invasion to pose as
journalists on the ground13 or as sympathetic Ukrainians seeking donations for relief, even though they were nowhere near the conflict.14

Social media have unquestionably permeated the practice of communicators such as journalists and public relations and advertising professionals. Our audiences, our clients, and our colleagues expect that we, as professional communicators, become experts in using all available communication tools to do our respective jobs – and that we do so in a way that dodges potential legal and ethical pitfalls. Centuries of jurisprudence about media law provide a foundation for understanding the particular challenges we face when using social media. However, courts, lawmakers, and regulators continue to struggle to keep up with these challenges. The confusion from courts and legislators is understandable, of course. New communication tools emerge – and disappear – at a rapid pace, faster than the legal system can evolve to handle the particular issues each presents.

The purpose of this book is to bridge this gap, providing practical guidance for communication students and professionals as they navigate the dangers of daily use of social media tools. To what extent can we use photos users have voluntarily shared on a social media site? Who is responsible when a person's reputation is harmed or privacy is violated through social media communications? How can social media tools be used to gather information or transmit news and commercial messages? These questions and more are addressed in this volume.

What Are Social Media?

Before addressing the particular challenges social media present, it is helpful to understand exactly what social media are. Communication scholars generally begin with the definition authored by danah boyd and Nicole Ellison in 2007, which describes social networking sites as "web-based services that allow individuals to (1) construct a public or semi-public profile within a bounded system, (2) articulate a list of other users with whom they share a connection, and (3) view and traverse their list of connections and those made by others within the system."15 It is the third item – allowing users to make their social networks visible, thus permitting new connections to be made and networks to become larger – that boyd and Ellison say makes social networking sites unique.

These core commonalities between social sites are visible in the most trafficked social media sites today. Whether social tools are used primarily for social networking (such as Facebook and LinkedIn), for sharing videos and photos (YouTube, Instagram, Snapchat, TikTok), for broadcasting live to the world (Facebook Live, Twitch), or for sharing content and ideas through microblogs (Twitter, Tumblr), they have at their center a transformational form of human interaction.


Social media, as such, are tools that have changed the way people communicate, as noted by Clay Shirky. "The tools that a society uses to create and maintain itself are as central to human life as a hive is to bee life," Shirky wrote in 2008, noting how social tools enable sharing and group formation in new ways.16 The structure of the tools – individual profiles in bounded systems that can make connections public – has inexorably led to the culture of sharing and voluntariness on social networks.

But sharing and voluntariness are difficult concepts for the law, which often seeks more rigid definitions and boundaries to regulate human affairs. As such, legal definitions of social media often struggle to nail down these concepts. For example, when the Texas legislature sought to outlaw online impersonation – that is, creating a false profile online to harass or defraud another person – it chose the following definition of "commercial social networking site":

any business, organization, or other similar entity operating a website that permits persons to become registered users for the purpose of establishing personal relationships with other users through direct or real-time communication with other users or the creation of web pages or profiles available to the public or to other users. The term does not include an electronic mail program or a message board program.17

Such a definition is so broad as to encompass nearly any online activity, whether publicly available or not, as long as it is not email or a message board – though even message boards, which enable discussion among strangers or friends, have a social aspect to them.

In fact, such statutory definitions have been deemed overbroad by courts when it comes to First Amendment concerns. The US Supreme Court ruled as much in Packingham v. North Carolina in 2017, unanimously striking down a North Carolina law that prohibited sex offenders from accessing "a commercial social networking Web site where the sex offender knows that the site permits minor children to become members," finding that it was so broad it would essentially prohibit access to the entire Internet. Justice Kennedy noted in his opinion:

While in the past there may have been difficulty in identifying the most important places (in a spatial sense) for the exchange of views, today the answer is clear. It is cyberspace – the "vast democratic forums of the Internet" in general, and social media in particular. Seven in ten American adults use at least one Internet social networking service.18

US courts have repeatedly ruled that the First Amendment does not govern social media platforms' decision-making, so they are free to edit or remove content and users without crossing free speech or free press guarantees. For
example, the Ninth Circuit Court of Appeals ruled in 2020 that YouTube was not required by the First Amendment to host videos posted by the conservative website PragerU, noting that decades of US jurisprudence have established that the First Amendment applies only to government entities, and YouTube – regardless of its ubiquity as a platform and its assertions of commitment to freedom of expression – is a private entity. "The Free Speech Clause of the First Amendment prohibits the government – not a private party – from abridging speech," the court explained.19

However, while social media platforms may be operated by private companies, courts have increasingly found that government officials create a public forum when they use social media accounts such as Twitter and Facebook for government business. In 2019, for example, a federal district court ruled that three Republican members of the Wisconsin state assembly violated the First Amendment when they blocked a liberal advocacy organization called One Wisconsin Now. One legislator said he used his account "to notify the public about topics such as legislation, upcoming legislative hearings, and government reports" and considered it a "forum for constituents." Another similarly said he used the account to "communicate with his legislative constituents," and the third, the speaker of the assembly, said he used it to speak "on behalf of the Assembly and Republicans in general." The court ruled the officials were acting "under color of state law" when they created and operated the accounts, and that the interactive nature of the accounts – the ability both to speak and to listen to constituents on Twitter as a platform – made these accounts a "designated public forum"; blocking a citizen group based on its viewpoint therefore violated the group's First Amendment rights.20

The Fourth Circuit issued a similar ruling, finding that the chair of a county board of supervisors in Virginia violated the First Amendment rights of a constituent by blocking him from the chair's Facebook page. The court found that the chair used the Facebook page "as a tool of governance" and to "further her duties as a municipal officer," including providing information about public safety issues.21

The Knight First Amendment Institute at Columbia University filed the most high-profile lawsuit in this area on behalf of several journalists and other citizens who were blocked by then-President Donald Trump from his personal account (@realDonaldTrump), which he created before taking office but continued to use in an official capacity after becoming president. Trump conceded that he blocked people "after posting replies in which they criticized the President or his policies and that they were blocked as a result of this criticism." The court noted that Trump used the account "almost on a daily basis 'as a channel for communicating and interacting with the public about his administration,'" including hiring and firing staff and announcing changes to national policies. Because Trump used the account in his official capacity, the court reasoned, the account became a public forum, and because he blocked users based on their viewpoint, the blocking violated the First Amendment.22


Trump appealed the ruling to the Supreme Court, which ultimately dismissed the case as moot after Trump lost his re-election effort and was no longer in office.23

American courts have been handling disputes involving social media for almost two decades now. The first reported opinion I was able to find was in 2004, involving Classmates.com, a site that allowed high school and college acquaintances to register and reconnect. An attorney in Oregon was publicly reprimanded by the state's supreme court for posing as a former classmate who had become a teacher at their old high school and posting, "Hey all! How is it going. I am married to an incredibly beautiful woman, AND I get to hang out with high school chicks all day (and some evenings too). I have even been lucky with a few. It just doesn't get better than this."24

Since then, courts have been handling cases involving social media in greater numbers each year. Facebook has been the subject of litigation almost since its founding, with Mark Zuckerberg battling ConnectU LLC – the operation put together by Tyler and Cameron Winklevoss and Divya Narendra – over who owned what intellectual property rights to the site. Just one case, ConnectU v. Zuckerberg, was the subject of an opinion in 2006.25 The caseload has grown steadily as social media platforms have become ever-present in the 15 years since, from just 10 cases mentioning Facebook in 2010, according to a LexisNexis search, to nearly 4,300 state and federal cases in 2021 alone.

One particular challenge in social media law today is the suddenly unsteady ground beneath one of the bedrock principles of Internet law in the United States. For decades now, Section 230 of the Communications Decency Act has largely shielded social media platforms in the United States from lawsuits over their content moderation decisions, such as taking down posts or deleting user accounts, and over the activities of their users, who may use the platforms to harass or defame other people. This lack of liability for core platform activities has allowed companies like Facebook and Google to develop and flourish, leading lawyer and professor Jeff Kosseff to label Section 230 the "Twenty-Six Words that Created the Internet."26

But social media platforms have undoubtedly caused some harm, as users face harassment, defamation, and threats with little or no recourse, and communicators, companies, and even politicians can be banned from these platforms with no remedy in the law. Social media companies drew the ire of both major US political parties in 2020, as both the Democratic and Republican candidates for president said they wished to repeal Section 230 as part of their campaigns.27 Members of Congress introduced 18 bills in 2020 and another 18 in 2021 to either fully repeal or heavily reform Section 230, though for very different reasons.28 Some bills, such as the Protect Speech Act introduced by House Republicans, would undo Section 230's "Good Samaritan" protections for content moderation policies, allowing lawsuits against companies that remove content or delete user accounts for what critics viewed as political reasons; the bill would
make it harder for companies like Facebook and Twitter to remove Trump and others who promoted election falsehoods and pandemic misinformation.29 But this kind of "must carry" provision stood in direct opposition to other proposals that would make web hosts liable for hosting hate speech or misinformation. For example, the Protecting Americans from Dangerous Algorithms Act, sponsored by House Democrats, would remove Section 230 protections for platforms that amplify speech promoting terrorism or depriving people of civil rights.30 Similarly, the Health Misinformation Act introduced by Senate Democrats would remove Section 230 protections for algorithms that promote the kinds of false claims about vaccines and public health that became rampant during the COVID-19 pandemic.31 One bipartisan effort, the Platform Accountability and Consumer Transparency (PACT) Act, would require platforms to disclose and enforce their content moderation policies, including providing a place for consumers to file complaints, in order to receive Section 230 protections.32

Dozens of state legislatures also made efforts in 2021 to strip social media websites of legal protections, though the First Amendment has proven to be a hurdle. Florida passed its Stop Social Media Censorship Act, which attempted to make online companies liable for up to $250,000 a day for removing politicians from their platforms, along with other punishments aimed at forcing companies to carry content they say violates their terms and community standards. A federal court barred Florida from enforcing the law, however, finding that it came "nowhere close" to passing strict scrutiny because "balancing the exchange of ideas among private speakers" is not a legitimate state interest. Further, the court noted, the law was not narrowly tailored to protect those asserted interests but rather was "an instance of burning the house to roast a pig."33

Likewise, a federal judge enjoined enforcement of a similar law in Texas aimed at preventing censorship of users or viewpoints by tech companies, which authorized lawsuits by citizens or the state's attorney general. The court found the law discriminatory for targeting "large social media companies perceived as being biased against conservative views," that the threat of lawsuits "chills the social media platforms' speech rights," and that the law was unconstitutionally vague. These flaws meant the law could pass neither the strict nor the intermediate scrutiny standard.34

Even if the federal and state bills fail to undo the broad liability shield that Section 230 has provided to social media companies, other efforts to limit these protections seem to be gaining traction. In 2021, for example, a federal appeals court ruled that Snapchat could be found liable for its "speed filter," which documented the real-life speed of the app's users, after three teenagers died in a car accident, having driven at 123 miles per hour to show off their speed on the app.35

Around the world, these limits are being tested as well. Australia's high court ruled in 2021 that social media companies could be held liable for the comments of users posted under news stories, holding that Facebook and
two news companies could be sued by a teenager for defamatory comments posted under stories about him on the Facebook pages of The Australian and Sky News Australia.

Looking Forward

The particular challenge of a volume like this is to nail down the landscape of social media and the law at a fixed moment in time – as social tools are launching and evolving, as legislatures and regulators are trying to come up with ways to manage the impact of these tools, and as professionals are trying to maximize effective use of social tools while minimizing legal risks. All of this is being done in the shadow of the developing culture of social media, one that is rooted in voluntariness, sharing, and group formation rather than legal formalities such as contracts, property, statutes, and regulations. Nevertheless, courts have already handled thousands of cases that involve social media tools, with thousands more on the way. Online human affairs can be and certainly have been the subject of our laws, and those who would use social media tools should be aware of the legal obligations, duties, and expectations created by the law.

Fortunately, we are informed by precedent and experience. While social media may be revolutionary technologies, so were the telegraph, the telephone, the radio, the television, and the Internet. We are guided by the First Amendment and centuries of jurisprudence regarding the law of communication. And the more we understand how social tools work, and how they have fundamentally transformed human interaction, the better we should be able to understand how to use them legally and responsibly.

This volume comprises 12 chapters by media law scholars examining the way the law interacts with social media in their areas of expertise. Media professionals continue to face many of the same challenges they have in the past – defamation, privacy, intellectual property, commercial speech regulations, access to court proceedings – so this volume is organized around these particular challenges. Some chapters touch on other aspects of communication – obscenity, student speech, account control – in which the implications of social media on the law have developed, shedding light on how courts may treat these issues for communicators in the future.

Each chapter opens with an overview of the law in that area, examining how legislatures, courts, and regulators have handled the law in the digital environment. The authors go on to examine the particular challenges that social tools present and how professionals and the law have responded to them. Finally, each chapter concludes with a "Frequently Asked Questions" or "Best Practices" section, with answers to practical questions that professionals and students may most often encounter. The first nine chapters of this volume focus on areas of substantive law, while the next two examine practical consequences for professionals, offering guidance on developing social media
policies for journalism and strategic communication professionals. The final chapter looks forward to some of the potential implications of social media for the free speech and free press guarantees of the First Amendment.

Our hope in writing this book was not to provide a comprehensive, definitive volume on the law of social media as it pertains to media professionals – that would, of course, be impossible in this time of great upheaval in the media landscape. Instead, our goal is to give professional communicators a foundation of knowledge and practical guidance in what we know to be the most dangerous terrain, with an eye on what is happening now and what is to come. While the law may often seem ill-suited to adapt to the technological and social changes that have transformed the way people communicate, adapt it has, and it will continue to do so.

Notes

1. Sarah Frier, Snapchat Passes Twitter in Daily Usage, Bloomberg Technology, June 2, 2016, www.bloomberg.com/news/articles/2016-06-02/snapchat-passes-twitter-in-daily-usage.
2. Lexi Sydow, The Evolution of Social Media Apps: Live Streaming: The New Frontier for Social Media, Data AI, September 6, 2021, www.data.ai/en/insights/market-data/evolution-of-social-media-report.
3. Nicholas Thompson, Pirates Crash the Mayweather-Pacquiao Fight, The New Yorker, May 4, 2015, www.newyorker.com/business/currency/pirates-crash-the-mayweather-pacquiao-fight.
4. Caitlin Dewey, The (Very) Dark Side of Live Streaming That No One Seems Able to Stop, Washington Post, May 26, 2016; Peter Holley, "She Got Caught Up in the Likes": Teen Accused of Live-streaming Friend's Rape for Attention, Washington Post, April 19, 2016.
5. Erin Heffernan, St. Louis Uber Driver Put Video of Hundreds of Passengers Online. Most Have No Idea, St. Louis Post-Dispatch, July 24, 2018.
6. Abby Ohlheiser, Is Facebook Ready for Live Video's Important Role in Police Accountability?, Washington Post, July 7, 2016.
7. Sara Fischer, Scoop: Truth Social Verifies White Nationalist Nick Fuentes, Axios, February 27, 2022, www.axios.com/truth-social-verifies-white-nationalist-fuentes-trump-0aeddf49-5827-4dec-801e-fca54256e4d7.html.
8. Maria Dinzeo, Trump's First Amendment Lawsuit Against Twitter on Thin Ice, Courthouse News Service, February 24, 2022, www.courthousenews.com/trumps-first-amendment-lawsuit-against-twitter-on-thin-ice.
9. Vanessa Miller, Iowa State University Police Arrest Freshmen Accused of Yik Yak Threats, The Gazette, February 23, 2022, www.thegazette.com/higher-education/iowa-state-police-arrest-freshman-accused-of-yik-yak-threats.
10. Bell v. Eagle Mountain Saginaw Independent School Dist., 2022 U.S. App. LEXIS 5176 (5th Cir., February 25, 2022).
11. Shannon Bond, Facebook, Google and Twitter Limit Ads Over Russia's Invasion of Ukraine, National Public Radio, February 27, 2022, www.npr.org/2022/02/26/1083291122/russia-ukraine-facebook-google-youtube-twitter.
12. Justin Sherman, Facebook Is Going Up Against Russia, Slate, February 26, 2022, https://slate.com/technology/2022/02/russia-facebook-throttle-information-warfare.html.
13. Taylor Lorenz, Scammy Instagram "War Pages" Are Capitalizing on Ukraine Conflict, Input, February 25, 2022, www.inputmag.com/culture/ukraine-russia-war-pages-instagram-meme-scams.
14. Kat Tenbarge & Ben Collins, War in Ukraine Sparks New Wave of Misinformation, NBC News, February 25, 2022, www.nbcnews.com/tech/tech-news/war-ukraine-sparks-new-wave-misinformation-rcna17779.
15. danah m. boyd & Nicole B. Ellison, Social Network Sites: Definition, History, and Scholarship, 13 J. Computer-Mediated Communication 210, 211 (2007).
16. Clay Shirky, Here Comes Everybody 17 (2008).
17. Texas Penal Code § 33.07(f)(1) (2012).
18. Packingham v. North Carolina, 137 S.Ct. 1730, 1735 (2017).
19. Prager Univ. v. Google LLC, 951 F.3d 991, 996 (9th Cir. 2020).
20. One Wisconsin Now v. Kremer, 354 F. Supp. 3d 940 (W.D. Wisc. 2019).
21. Davison v. Randall, 912 F.3d 666, 680 (4th Cir. 2019).
22. Knight First Amendment Institute v. Trump, 928 F.3d 226 (2d Cir. 2019).
23. Biden v. Knight First Amendment Institute, 593 U.S. ___ (2021).
24. In re Carpenter, 95 P.3d 203 (Or. 2004).
25. See ConnectU LLC v. Zuckerberg, 2006 U.S. Dist. LEXIS 86118 (D. Mass. 2006); ConnectU LLC v. Zuckerberg, 522 F.3d 82 (1st Cir. 2008).
26. Jeffrey Kosseff, The Twenty-Six Words That Created the Internet (2019).
27. Brian Pietsch, Trump and Biden Both Want to Revoke Section 230, but for Different Reasons, Business Insider, May 30, 2020.
28. Meghan Anand, Kiran Jeevanjee, Daniel Johnson, Brian Lim, Ireny Ly, Matt Perault, Jenna Ruddock, Tim Schmeling, Niharika Vattikonda, Noelle Wilson, & Joyce Zhou, All the Ways Congress Wants to Change Section 230, Slate, March 23, 2021.
29. Protect Speech Act, H.R. 3827, 117th Cong. (2021).
30. Protecting Americans from Dangerous Algorithms Act, H.R. 2154, 117th Cong. (2021).
31. Health Misinformation Act of 2021, S. 2448, 117th Cong. (2021).
32. Platform Accountability and Consumer Transparency Act, S. 4066, 117th Cong. (2021).
33. NetChoice v. Moody, ___ F. Supp. 3d ___ (N.D. Fla. 2021).
34. NetChoice v. Paxton, ___ F. Supp. 3d ___ (W.D. Tex. 2021).
35. Lemmon v. Snap, Inc., No. 2:19-cv-04504-MWF-KS (9th Cir., May 4, 2021).

Chapter 1

Free Speech in Social Media

Jennifer Jacobs Henderson
TRINITY UNIVERSITY

At a time when more people in the United States use social media than vote, what happens in these spaces is becoming increasingly significant – shaping politics, economics, and history. If healthcare policies, stock market movements, and entertainment franchises are now debated in social networks rather than traditional media, understanding the boundaries of allowable speech in these spaces is essential. The sheer amount of digital content, and the ease of accessing it, means that the potential for disagreement over speech has increased exponentially, and with it the number of legal challenges regarding speech. The fundamental questions raised, however, are not so different from those posed when people with a podium and a bullhorn spoke a century ago: What kinds of speech should be allowed? How should misinformation be handled? When can government intervene? How can someone protect private speech from public eyes?

The distributed and participatory nature of social networks has left judges analyzing complex new communication patterns as they attempt to draw the boundaries of free speech, government intervention, user and consumer harms, and privacy rights. Some of those boundaries, seemingly well established in US law for a quarter of a century, have come under deeper scrutiny as social media companies have increasingly become significant players in the political sphere.

For example, a 2016 lawsuit against the social media app Snapchat alleged it encouraged reckless driving by a Georgia teenager who slammed her car into another while using a Snapchat "lens" that tracks vehicle speed; at the time of impact, her speed was clocked at 107 mph.2 A 12-year-old girl in Virginia faced criminal charges for gun, bomb, and knife emojis posted to her Instagram account in a message that read: "Killing (gun emoji) . . . meet me in the library Tuesday (gun, knife, bomb emojis)."3 And Chanel, the Paris fashion house, filed an intellectual property infringement action against an Indiana salon owner over the use of its logo on the @Chanel Instagram account, which Chanel does not own (Chanel's Instagram handle is @officialChanel).4

The meaning of emojis, @s, likes, and #s will be debated in courts for years to come, but like all of the legal battles that arise from this space, the meaning of words and symbols is both an old and a new problem. Judicial
interpretation of intention behind speech is and always has been difficult. A slip of the tongue has simply morphed into a slip of the thumb.

Access to Social Media

For almost a quarter of a century, public interest organizations have fought for physical access to the most basic Internet services, with scholars noting gaps in access based on race, ethnicity, income, and geographic location. And while battles against "information redlining" and "the digital divide" might seem to be a relic of another time, the core issue of access still remains. Many individuals in the United States and around the world still have limited access to the Internet, and by extension, to social media communities and content. Their speech in these prolific spaces is hampered even before they begin to create.

After several state legislatures enacted laws restricting access for certain kinds of users, the US Supreme Court issued a ruling that robustly protected access as a First Amendment right. North Carolina had passed a law making it a felony for a registered sex offender to use a "commercial social networking site" if the site allowed access to children. The law was used to prosecute more than 1,000 people, including Lester Packingham, who made a post on Facebook in 2010 celebrating a court's dismissal of a traffic ticket against him, including the line, "Praise be to GOD, WOW! Thanks JESUS!" A police officer saw the post and referred it to a prosecutor, who charged Packingham; he was convicted and received a suspended jail sentence. But Packingham challenged the law on First Amendment grounds, arguing it unconstitutionally restricted his free speech rights by effectively barring him from communicating on the Internet.

Justice Anthony Kennedy, writing for a unanimous court, agreed with Packingham and struck down the law. Citing the widespread prevalence and use of platforms such as Facebook, LinkedIn, and Twitter, the court recognized the role of social media as places that "for many are the principal sources for knowing current events, checking ads for employment, speaking and listening in the modern public square, and otherwise exploring the vast realms of human thought and knowledge." The court went on to say that "the State may not enact this complete bar to the exercise of First Amendment rights on websites integral to the fabric of our modern society and culture."5

Such freedom to access the Internet and social media sites varies greatly around the world. Freedom House, a nonprofit organization concerned with global speech and press freedoms that issues annual reports on Internet freedom, said free speech on the Internet was "under unprecedented strain" in its 2021 report, with 48 countries attempting new Internet regulations and with legitimate concerns such as harassment and fraud "being exploited to subdue free expression and gain greater access to private data." Freedom House ranked China as the worst country for Internet freedom for the seventh straight year, and it noted that the United States' Internet freedom score declined for the fifth consecutive year.6


It is also important to remember that governments have always attempted to control the flow of information. From the jury verdict convicting Socrates in 399 BC, to the prosecution of US socialists Charles Schenck and Eugene Debs, who spoke out against involvement in World War I,7 to the sentencing of eight young people in Iran to a combined 127 years in prison for anti-government posts on Facebook,8 governments have sought to suppress speech they find to work against the interests of the state. At this time of unrest worldwide, it is not surprising that government authorities, fearing a loss of power, are turning to access restrictions and censorship for control.

Though censorship and suppression of speech by governments are both common and historically grounded, pressure against oppressive regimes is growing. Opposition parties in Russia, dissidents in China, and insurgents in Syria have all used new technologies to secure access to social media streams. The Green Revolution in Iran became known by some as the Twitter Revolution after much of the protest and unrest in Tehran was organized on the social media platform.9 In 2011, the United Nations Human Rights Council declared that access to the Internet was a "basic human right" and that restricting it would be a violation of international law.10 And in the United States, social networking has led to widespread organizing for causes ranging from racial justice protests to rallies against pandemic protocols, all the way to the effort to assault the US Capitol while Congress was in session to stop the certification of Joe Biden's victory in the 2020 presidential election.

It is unquestionable that, today, social media have become a place where people go to connect, organize, work, play, and engage in their world. The extent to which this is regulated by the law, specifically regarding free speech and surveillance, is discussed next.

Social Media Content

Media content in the second decade of the 21st century has been characterized as "mobile," "ubiquitous," "voyeuristic," and "mean." The most common term used to describe this period, however, has been "social." Social sounds friendly enough, but the content of social media (online participatory sharing communities) has been anything but. Today, media accounts are filled with stories of cyberbullying, brash examples of defamation, and unauthorized celebrity photos. Speech in social media is being challenged – along traditional legal lines and in new ways. Obscenity, libel, and copyright cases related to social media are on the rise in many countries.

Private Speech

What were once considered private matters – topics such as sex, spousal conflict, personal habits, and finance – are now broadcast 24 hours a day, 7 days a week via streaming services, podcasts, and social media sites. Where in past
years media attention was focused mainly on those who placed themselves in the public limelight – politicians, celebrities, and athletes – people now have a near-equal opportunity to create their own public stage. While many participants in online social networks behave as if their content is private, these are not "backstage" areas where scandal and intrigue can be revealed to some and kept from others. Instead, content posted on social networking sites such as TikTok is, more often than not, searchable, sold, and shared.

Concerns regarding private speech were raised in early 2012, when Google announced its new Search Plus Your World feature, which combed through both one's personal Google+ social media content and the Internet to return results.11 In 2015, LinkedIn settled a class action lawsuit brought by users whose names and likenesses were used to grow LinkedIn's customer base. LinkedIn agreed to pay $13 million to users for the unauthorized use of a service called Add Connections, through which LinkedIn would access users' email accounts and send their contacts invitations and reminder emails to join LinkedIn. The court in this case found that users had consented (through the terms of service agreement) to LinkedIn accessing their email contacts and sending an initial invitation to join, but they had not consented to LinkedIn sending follow-up emails on their behalf.12

Privacy concerns extend beyond the collection and use of personal connections and data. Social media sites have also altered content in users' accounts. In 2012, Facebook allowed researchers to manipulate the news feeds of almost 700,000 users to see if it could alter the mood expressed in posts. Users were never notified of the study or contacted for consent. Findings from the study, published in the Proceedings of the National Academy of Sciences, showed that Facebook "could affect the content which those users posted to Facebook. More negative news feeds led to more negative status messages, as more positive news feeds led to positive statuses."13

After changing friend lists from private to public without informing users, Facebook was investigated by the Federal Trade Commission in 2012, resulting in a settlement in which Facebook made additional promises to protect user privacy. But Facebook flouted that agreement, continuing to share data beyond the reasonable expectations of users, drawing a $5 billion penalty from the FTC in 2019.14 The settlement was of little effect; in fact, Facebook's stock price on Wall Street went up on reports of the deal.15 Privacy invasion of this sort is a business model for social media companies, as well as for third parties such as Clearview AI, a facial recognition technology company that scraped millions of user photographs from Facebook and Twitter to build its database, leading the platforms to issue cease-and-desist letters in efforts to clear it from their sites.16

Instead of relying on the hit-or-miss privacy protections of many social media sites, users are beginning to take privacy into their own hands by untagging photos, deleting comments, and unfriending or blocking problematic contacts. Some apps became known as "privacy-promising technologies" by
building their business model on features such as the auto-deletion of messages on Snapchat or the protection of user anonymity on Yik Yak.17 A 2016 study found US respondents have very nuanced opinions regarding privacy and information sharing, noting that they take into account "the company or organization," "how trustworthy or safe they perceive the firm to be," "what happens to their data after they are collected," and "how long the data are retained."18

Many privacy cases hinge on whether the person had an expectation of privacy in the particular forum in which the utterance was made. The courts have long held there is no expectation of privacy in public. On the user's side, this is where confusion reigns. Private companies (or publicly traded ones) run these sites. A user must register and agree to terms of service. And when privacy promises have been violated, the Federal Trade Commission has led investigations resulting in billions of dollars in settlements from companies such as Facebook and Google.

Political Speech

Over the course of 350 years, the idea of a free and open venue for speech has percolated. John Milton wrote, "Where there is much desire to learn, there of necessity will be much arguing, much writing, many opinions; for opinion in good men is but knowledge in the making."19 It was 1644. John Stuart Mill's "marketplace of ideas," where the best ideas rise to the fore through free and open debate,20 resounded in 1859, and Justice Oliver Wendell Holmes Jr. contended in 1919 that "the best test of truth is the power of the thought to get itself accepted in the competition of the market."21 The maxim is just as applicable today. When they work well, social media encourage a wide range of often competing voices. When they work poorly, they are an echo chamber of one-sided, often offensive, commentaries rejecting all opposing ideas.

For almost 100 years in the United States – from 1920 and the dawn of radio to 2010 and the rise of social media – the biggest concern among free speech activists was how individual citizens could be heard. Due to the cost of production and distribution, media remained solidly in the hands of the few.22 Beyond the United States, ownership also resided in the hands of the few, though often in the hands of a few government officials. "The marketplace theory justifies free speech as a means to an end," wrote Professor Rodney Smolla. "But free speech is also an end itself, an end intimately intertwined with human autonomy and dignity."23 In this theoretical vein, the UN's 2011 declaration regarding access to the Internet is part of a larger report "on the promotion and protection of the right to freedom of opinion and expression" and begins by outlining the rights provided for in article 19 of the International Covenant on Civil and Political Rights: "the right to hold opinions without interference," "the right to seek and receive information and the right of access to information," and "the right to impart information and ideas of all kinds."24


These basic rights are clearly consistent with First Amendment protections; however, even in the United States, political expression on social media is being squelched. Several states, for example, have laws making it a crime to state a falsehood in an election campaign.25 Misinformation about election security and voting circulated broadly on social media platforms after the 2020 elections in the United States, creating a toxic online environment that culminated in a real-life assault on Congress on January 6, 2021, as well as threats and acts of violence aimed at state officials. In Washington, pro-Trump demonstrators, some of whom were armed, breached the gates of the governor's residence as they aired their misguided complaints. Governor Jay Inslee, who had to flee to a secure location, later supported a bill in the state legislature in 2022 that would make it a crime to lie about election results if those lies led to violence. The bill was widely criticized by First Amendment advocates, who properly noted that the US Supreme Court has very clearly ruled that lies of this sort are protected as free speech.26

An earlier law against false political statements in Ohio was used in 2012 to target Mark W. Miller, a Cincinnati resident concerned with how the city was allocating funds for a new streetcar project, who sent regular tweets voicing his complaints. For example: "15% of Cincinnati's Fire Dept browned out today to help pay for a streetcar boondoggle. If you think it's a waste of money, VOTE YES on 48." When Miller was charged with a crime under the Ohio law forbidding such speech as a lie, his nonprofit, the Coalition Opposed to Additional Spending of Taxes (COAST), sued the Ohio Election Commission, claiming the law was an unconstitutional restriction on free speech.27 Miller's tweet suit inspired a similar case challenging the same Ohio law. In 2014, the US Supreme Court unanimously ruled that the challenge to the law could proceed and remanded the case to the lower courts for reconsideration.28 In 2016, the Sixth Circuit Court of Appeals struck down the law, stating, "Ohio's political false-statements laws are content-based restrictions targeting core political speech that are not narrowly tailored to serve the state's admittedly compelling interest in conducting fair elections."29

Political speech in Facebook posts has also led to First Amendment cases. In Hampton, Virginia, the sheriff fired six employees who supported an opposing candidate during his re-election campaign. One of these workers contended in federal court that he was fired for expressing his support of the other candidate by "liking" him on Facebook – essentially, he argued, he was punished for protected political speech. The district court judge ruled that the firing could not be linked to the employee's support of the opposition candidate because clicking the like button on Facebook was not equivalent to writing a message of support; the like button, the court found, was not expressive speech. In 2013, the Fourth Circuit overruled the lower court decision, siding with the employee against the sheriff. The appeals court ruled that liking was protected by the First Amendment because it was the "Internet equivalent of displaying a political sign in one's front yard, which the Supreme Court has held is substantive speech."30


During the 2016 elections, "ballot selfies" – photos of ballots or of voters filling out ballots in polling places – became an issue. Laws governing picture-taking in voting locations and individual voting booths vary by state. In Pennsylvania, for example, taking a photo inside a voting booth can lead to a $1,000 fine and up to 12 months in jail. Showing someone else a completed ballot is a felony in Wisconsin.31 In 2015, the US District Court for the District of New Hampshire struck down a New Hampshire law that prohibited ballot selfies.32 Snapchat argued that these selfies were important to young voters and compared them to "I Voted" stickers, noting in its brief that the ballot selfie "dramatizes the power that one person has to influence our government."33

These election laws, especially as they have been applied to social media, restrict speech that has always been afforded the highest protection under the First Amendment. In challenges involving the limitation of political speech, courts rule the law constitutional only if (1) a compelling government interest is articulated, (2) the law is narrowly tailored to meet that interest, and (3) it is the least restrictive means available to address the government interest.34 This strict scrutiny test should protect political speech in social media as it does in traditional media, erecting a legal firewall between government and the opinions of the people.

Government Regulation of Media Content

Over the past decade, the debate has escalated between those who believe the Wild West of the Internet Age supports free speech and democracy and those who argue that the new "vast wasteland" of hate, violence, and porn opens it up to deeper regulation. These rumblings are growing louder as politicians are pressured to do something, and public interest organizations are preparing to defend freedom online.

It all began when the Internet first started worrying elected officials, back in the days of bulletin board systems. In 1996, Congress, concerned about the potential for the Internet to become a red-light district,35 passed the Communications Decency Act (CDA), Title V of the Telecommunications Act. Section 223 of the CDA regulated access by minors to indecent material on the Internet. In June 1997, the Supreme Court ruled on the constitutionality of the Communications Decency Act in Reno v. ACLU.36 The Court unanimously agreed that Section 223 of the CDA was unconstitutional. Justice Stevens, writing for the Court, contended that the CDA, as written, was overbroad and vague. On the issue of overbreadth, the Court found that the act's use of the term "indecency" was not consistent with the First Amendment, which protects all but "obscene" materials when adult audiences are considered. In addition, the Court said the CDA was a content-based restriction, which "raises special First Amendment concerns because of its obvious chilling effect on free speech."37

The lasting effect of Reno was the blanket of First Amendment protection the Supreme Court placed on the Internet. Eight years earlier, in the
Sable decision,38 the Court had made clear that each telecommunication medium should be considered individually when determining the breadth of First Amendment protection. In Reno, the justices gave the widest possible berth to the fledgling Internet, comparing its First Amendment rights to those of the traditional press, such as newspapers, and not to broadcasters, whose regulation had been upheld due to the passive nature of the audience in relation to the medium.

The regulation and censorship of speech in social media by government organizations remain an issue. State schools have used their power to fire college professors and high school teachers who say things on social media that draw negative attention. For example, in 2014, the University of Illinois rescinded a job offer to Professor Steven Salaita after uncovering old tweets in which he had criticized Israel's military strikes in Gaza, drawing condemnation from free speech groups such as the Foundation for Individual Rights in Education and the American Association of University Professors.39 Dozens of such cases have been documented in recent years. A Texas high school teacher was fired for tweeting at then-President Donald Trump to "remove the illegals from Fort Worth," a firing that was upheld in 2021 after an appeals court rejected her argument that this was protected speech under the First Amendment.40 Another Texas professor was fired for commenting during the 2020 vice presidential debate that former Vice President Mike Pence should "shut his little demon mouth," ultimately winning a settlement from the college.41 Cases like these send a caution to government employees such as teachers: they are being watched, and they may have to go to court to vindicate any free speech rights they assert.

Students at public colleges and universities have also been punished for commentary on social media sites. A mortuary science student at the University of Minnesota posted comments on Facebook in 2009 regarding her coursework. In one post, she wrote about the cadaver she was working on: "[I get] to play, I mean dissect, Bernie today." She also wrote of her ex-boyfriend that she would like to use an embalming tool "to stab a certain someone in the throat." In response to her Facebook posts, the University of Minnesota gave her an F in the course and required her to complete an ethics class and undergo a psychiatric exam. The student appealed the university's decision in court, arguing that these actions violated her right to free speech.42 Similarly, a nursing student at the University of Louisville was kicked out of college for violating the honor code and a course confidentiality agreement when she posted a description of a live birth on her MySpace page. The student, in turn, brought suit against the university for a violation of her First and Fourteenth Amendment rights, as well as for injunctive relief and damages under the Civil Rights Act.43 The US District Court for the Western District of Kentucky found that this was not a free speech issue but a contractual one, and that the student had not violated the honor code or the confidentiality agreement with her post. The court ruled that she must be reinstated as a student.44

Free Speech in Social Media

9

On the other hand, the Eighth Circuit Court of Appeals found that twin brothers did not have their rights violated when they were expelled for using school computers to create a blog about their Lee's Summit, Missouri, high school. On the blog, they posted "a variety of offensive and racist comments as well as sexually explicit and degrading comments about particular female classmates, whom they identify by name." In response, the brothers were expelled from school for 180 days. They filed suit against the school district, alleging that their First Amendment rights had been violated. Ruling on a preliminary injunction against the expulsion, the Eighth Circuit Court of Appeals found the school's actions to be constitutional, explaining that the speech was "targeted at" Lee's Summit North High School and "could reasonably be expected to reach the school or impact the environment."45 In sum, these cases show that courts are apt to allow school punishment of offensive student speech when the content is created on campus, using school technology, or causes disruption to student learning at a specific school.

There is currently a battle raging on college campuses regarding the kind and extent of speech encouraged and allowed. Trying to find a balance between spaces of open inquiry and discussion and hateful, derogatory comments, many private colleges and universities have come down on the side of restriction, while many public universities have walked a very fine line between civility and censorship.

Yik Yak, which made a return in 2021 after shutting down for several years, is a mobile app that allows users to post anonymous comments to anyone within a five-mile radius, based on the user's location. These posts, known as Yaks, created substantial controversy and several legal cases surrounding issues of free speech. Following protests at the University of Missouri regarding racial inequality and police brutality, three students on Missouri campuses were arrested for posting threatening comments on Yik Yak. University of Missouri police arrested one student at Missouri University of Science and Technology in Rolla for "making a terrorist threat" via Yik Yak46 after the student posted, "I'm going to stand my ground tomorrow and shoot every black person I see."47 A second student at Missouri University of Science and Technology in Rolla was prosecuted for "posting online threats to attack a college campus" after writing "I'm gonna shoot up this school" on Yik Yak. A third student, also 19, was arrested in his dormitory at Missouri State University for "making racist threats on social media."48 In cases such as these, law enforcement may seek an emergency subpoena for a social media post that "poses a risk of imminent harm."49 In these instances, Yik Yak had to turn over information they "believe would prevent the harm," which "may include a user's IP address, GPS coordinates, message timestamps, telephone number, user-agent string and/or the contents of other messages from the user's posting history."50 This was the procedure used when Texas A&M police obtained an emergency subpoena to arrest a student for
posting, "This is not a joke! Don't go to campus between 7 and 7:30. This will be my only warning!"51 Law enforcement officials, however, are having a very difficult time determining what is a true threat, what is a hoax, and what is simply a poorly executed joke on social media.

At the extreme end of social media harassment, "swatting" is also on the rise. Swatting occurs when those watching livestreaming video game players call or hack into 911 systems with a fake terror threat, sending a SWAT team to the home of the player so the reaction of the gamer is caught on webcam.52 Officials have estimated that a single swatting episode can cost more than $25,000 for law enforcement agencies.53 For these "pranks," swatters have received up to 11 years in jail.54

In response to concerns regarding social media posts, some athletic programs have either banned the use of social media or required athletes to turn over passwords to the coaching staff. While legislation in many states forbids this kind of social media monitoring, the outright ban of social media use is acceptable in athletic programs, as participation is seen as a privilege rather than a right.55 As will be seen in Chapter 7 of this book, students at public schools also have free speech rights on social media, especially in light of a 2021 Supreme Court case that vindicated the First Amendment rights of a high school cheerleader who was punished for making a rude post on Snapchat.56

Personal Threats, Revenge, and Social Media

In addition to general threats of terrorism and harm, individual threats via social media have been at the center of free speech case law in recent years. The most high-profile case involving online personal threats, Elonis v. United States, was decided by the Supreme Court in 2015. In this case, Anthony Elonis posted threats to Facebook on his personal account and under his rap persona, Tone Dougie. These comments threatened his estranged wife ("I'm not going to rest until your body is a mess, soaked in blood and dying from a thousand tiny cuts"), local schools ("Enough elementary schools in a ten mile radius to initiate the most heinous school shooting ever imagined/Hell hath no fury like a crazy man in a kindergarten class"), and a female FBI agent working on his case ("Pull my knife, flick my wrist, and slit her throat").57 For these posts and others, Elonis was convicted under federal law and sentenced to 44 months in prison. The Supreme Court ruled in favor of Elonis, reasoning that to be convicted under this law, the state must prove Elonis made his statements "for the purpose of issuing a threat, or with knowledge that the communication will be viewed as a threat."58 While this decision certainly did not limit the boundaries of free speech allowed in social media, it relied heavily on the criminal law requirements of intent rather than First Amendment arguments related to speech.

Three years earlier, courts ruled on a similar case and offered a very different solution to social media threats. Mark Byron, engaged in a bitter divorce and custody suit over his young son, wrote on his Facebook page, "If you are
an evil, vindictive woman who wants to ruin your husband's life and take your son's father away from him completely – all you need to do is say you're scared of your husband or domestic partner and they'll take him away." Based on these comments, a magistrate found Byron in contempt of a protective order. To avoid a 60-day jail sentence and a $500 fine, the magistrate said Byron could post an apology on his Facebook page every day by 9:00 a.m. between February 13, 2012, and March 16, 2012, when he returned to court. Byron contended that the apology, which stated he was placing his ex-wife in "an unfavorable light" and "attempting to mislead" his friends, was untrue.59 Many free speech advocates found this court-compelled speech to be concerning, as government-forced speech is equivalent to restricting free speech.

Social media posts that cause personal harm also include nonconsensual pornography and doxxing. Nonconsensual pornography, sometimes known as revenge porn, refers to the posting of nude or intimate photographs or videos via social media or websites. Nonconsensual pornography has increased with the rise of the Internet as a means of communicating with partners and romantic interests, and it was further exacerbated during the pandemic as people increasingly connected with others online. Photos sent to specific individuals can easily be redistributed across platforms to other individuals or audiences without the consent of the subject involved. Such actions are often used as a method of retaliation, humiliation, or manipulation of the subject, typically targeting women specifically. One harrowing example is Hoewischer v. White, in which White posted nude photographs he had received from Hoewischer to a "revenge porn website" after she refused to engage in sexual acts with him and broke up with him. White also posted her contact information and her home address to the site. In the aftermath, Hoewischer began receiving messages from Internet users who had viewed the photographs online. Many of these strangers solicited Hoewischer for sex, and many others from her community addressed Hoewischer regarding the photos as well. As a result, Hoewischer suffered severe emotional distress, and the posts inhibited her potential job opportunities. Furthermore, she was not able to remove the posts and information from the website.60 As of 2021, 48 states and the District of Columbia had passed criminal laws targeting nonconsensual pornography, which had largely survived challenges to their constitutionality on First Amendment grounds.61

Doxxing, which involves the unauthorized release of an individual's personal information on the Internet, is sometimes used as a method of retribution between individuals or as a threat following disagreements. The tactic invites targeted attention or actions, specifically those intended to humiliate, shame, or harm the subject's reputation. Federal laws detailing the ramifications of doxxing are currently limited in scope, with formal legislation only applying to specific individuals within the government, but many defendants' actions overlap with stalking offenses, which can be investigated. In addition, doxxing tends to have harmful effects when sensitive information, such
as addresses, contact information, or personal data, such as social security or credit card numbers, are published. Doxxing can also include personal messages that may cause harm to an individual. In the case of Hogan v. Weymouth, the defendant published to social media screenshots of a text message exchange between him and the plaintiff regarding a football game. The text was framed to depict the plaintiff speaking insensitively regarding a certain player on the team, and as a result, the plaintiff was fired from his job and "received harassing messages and has been subjected to scorn, hatred, ridicule, and contempt on social media," despite the original text messages being "false and misleading."62 In another case, the plaintiff was falsely identified as responsible for killing and injuring multiple attendees of the Unite the Right white nationalist rally in Charlottesville, Virginia, in 2017. The individual's personal information, such as his home address, was soon publicized and spread across social media, and he and his family received multiple threats as a result.63 In addition to concerns of safety and livelihood, victims of doxxing often report experiencing long-term effects of emotional distress and harm to other aspects of their lives.

Social media accounts can also be hacked, resulting in great harm to the people who put so much about their lives on the platforms. In the 2012 Barclay v. Haney case, the plaintiff was subject to harassment on multiple levels from the defendant, someone she was once romantically intimate with. She reported that he had faked his identity as he harassed her with over 50 calls and texts a day. Aside from this, she said he had hacked her Facebook and email accounts. The lack of privacy, she claimed, reduced her to a highly paranoid state of living, even to the point where she didn't feel safe in her own home.64

Government-Requested and Court-Ordered Information

The most high-profile government information request case in recent years involved Apple and the FBI. In 2016, the Department of Justice obtained a court order instructing Apple to unlock the security code and provide backdoor access to the iPhone of a man who had committed a terrorist attack in San Bernardino. Apple refused to comply with the order, citing privacy promises made to its users. While this case did not involve social media directly, it was one of the first to acknowledge that access to a user's phone would provide instant access to any social media accounts set to remain open. As Apple explained in its 2016 open letter to customers: "We built strong security into the iPhone because people carry so much personal information on our phones today."65

Requests for user identification and account information are regularly issued to social media companies. A very large percentage of these requests are fulfilled. For example, between January 1 and June 30, 2021, there were 18,742 US criminal legal requests involving 31,848 accounts made to Snapchat. In turn, Snapchat complied, producing at least "some data" in 78% of the cases.66

The information provided in most cases does not include message content, only user account information or account metadata. For example, law enforcement subpoenas for documentation from Snapchat allow access to basic account information such as "account name, email address, phone number and when the account was created." Getting a log of metadata, showing when messages were sent and to whom, requires a state or federal search warrant. The only time the content of snaps would be released is if the receiving party had not opened them within 30 days of the original message being sent. After 30 days, Snapchat says, unopened snaps are "wiped from their servers."67

Surveillance of Social Media

Surveillance of social media networks and the messages created in them is increasing daily. In many ways, social media sites are now more like a fish tank than a lockbox. Government agencies, employers, site operators, and advertisers are all peeking into the aquarium of social media to see what's up. Users of social networks, just now realizing that everyone is a Peeping Tom, are beginning to close their blinds.

Government Agencies

Security agencies for governments around the world are reading, watching, and listening in on social media networks. The ultimate "lurkers," these agencies are tracking conversations, looking for clues to crimes and other misdeeds. As law enforcement agents link social media to terrorist recruitment efforts and plots of violence,68 additional oversight is inevitable. For example, the National Security Agency (NSA), a long-time surveillant of private citizen information, stated in March 2012 that it would begin storing digital information collected on US citizens even when they were not under investigation. Information, once stored only for those suspected of terrorist activities and only for 18 months, would now be held for anyone caught in the net of surveillance for up to five years.69

After Edward Snowden's unauthorized release of US government surveillance program details, many Americans took action to protect their online information. The Pew Research Center reported in 2015 that "34% of those who are aware of the surveillance programs (30% of all adults) have taken at least one step to hide or shield their information from the government." These steps included "changing their privacy settings on social media," using "social media less often," and avoiding or installing certain apps to help with privacy.70

Since the revelation in December 2015 that one of the San Bernardino attackers had sent private messages on Facebook "pledging her support for Islamic jihad and saying she hoped to join the fight one day,"71 the Department of Homeland Security (DHS) has come under growing pressure from Congress to increase surveillance of the social media accounts of individuals applying
for visas, work permits, and citizenship. In January 2016, the DHS began searching for social media analytics software to aid in its investigations.72

Government agencies in many nations also monitor social media sites for material that is deemed inappropriate for their citizens. Because social media companies are private corporations, they must abide by the laws of each country in order to expand and secure customers. Reporters Without Borders' Enemies of the Internet report summed up the state of international government surveillance this way:

In practice, surveillance of communications networks continues to grow. It allows governments to identify Internet users and their contacts, to read their email and to know where they are. In authoritarian countries, this surveillance results in the arrest and mistreatment of human rights defenders, journalists, netizens and other civil society representatives. The fight for human rights has spread to the Internet, and more and more dissidents are ending up in prison after their online communications are intercepted.73

Employers

While employers have always known that applicants selectively construct resumes, with multiple data points of information displayed through social networking sites, those who make hiring decisions are now able to construct their own, more comprehensive profiles of potential employees. A 2016 survey found that 60% of employers use social media to screen job candidates, an increase of 500% over the last decade. Most often, hiring managers want to "see if the candidate has a professional persona" and are "looking for information that supports their qualifications for the job."74 Employer surveillance of applicants is commonly conducted by compiling publicly accessible information available to everyone through search engines such as Google. A separate survey found that 75% of all human resource professionals have been instructed to search the Internet for public information on applicants.75

Some firms, however, have moved into more legally dangerous territory, demanding that applicants turn over the passwords to their social media accounts. These requests have become so commonplace that in March 2012, Facebook made a formal announcement telling employers to stop demanding the passwords of users looking for a job with their organizations and reminding employers that the practice goes against Facebook's privacy policy, which forbids the exchange of account password information with a third party. Maryland became the first state to prohibit employers from requesting usernames and passwords for social media sites as a requirement for employment, and several states followed with efforts to protect passwords.76

Surveillance on the job goes far beyond the hiring process. While employers have the right to sift through email messages stored on their own servers, for example, the National Labor Relations Board (NLRB) has come down on
the side of the employees in a series of cases involving negative messages posted on blogs and personal social media accounts. In January 2012, the NLRB issued a memo outlining more than a dozen instances in which employers went too far in punishing employees for personal speech.77 In one case, a woman used her cell phone at work to post sexist comments made by a supervisor and her disapproval of a coworker's firing on her Facebook page. She continued to post Facebook messages that criticized management and ignored a supervisor's warning to "not get involved in other worker's problems" even after being asked to refrain from such activity. In this case, the NLRB found that by punishing the woman for discussing wages and working conditions with fellow employees via Facebook, the employer had unlawfully terminated her employment.78 Similarly, in 2016, an administrative law judge in Havertown, Pennsylvania, ordered Chipotle to rehire an employee who had been fired for tweeting about the company's "cheap" labor policies. The judge found the firing violated the National Labor Relations Act.79

In many of the cases outlined in the report, as in the ones mentioned previously, social media policies prohibited "discussions of wages or working conditions," a violation of Section 7 of the National Labor Relations Act. These NLRB regulations, which originally ensured face-to-face deliberations among employees and with labor union representatives regarding working conditions and salaries, are now being applied to online communications to multiple, diverse parties. What was once a whisper in the break room to a colleague is now a Twitter or LinkedIn post to hundreds or thousands of "friends" or "followers." The challenge for the NLRB and other regulatory agencies is how to best balance the protection of employee rights with the potential scope and power of social media messages.

As with many other legal questions, laws about employee termination for out-of-work speech vary from state to state. In Texas, for example, you can be fired for private conduct at home that is legal but does not align with the values of the employer. Much of this private conduct is on public display for employers to see through Facebook, Instagram, and Twitter. In recent years, many broadcasting professionals have been fired for posting inappropriate comments – often racist, sexist, or homophobic – that have nothing to do with their jobs, wages, or working conditions. For example, CNN analyst Roland Martin was suspended from his job for a homophobic tweet during the 2012 Super Bowl that read, "If a dude at your Super Bowl party is hyped about David Beckham's H&M underwear ad, smack the ish out of him!"80 In 2016, Curt Schilling, analyst for Monday Night Baseball, was fired from ESPN for a Facebook post reacting to North Carolina's transgender bathroom law.81 Similarly, ESPN suspended anchor Jemele Hill for tweets criticizing the NFL and Dallas Cowboys owner Jerry Jones in 2017, claiming that the tweets violated its social media policy.82 Arguments that these broadcasters' First Amendment rights had been violated were obviously incorrect, as CNN and ESPN are private businesses, not government entities whose actions trigger First Amendment protections for individuals.

Advertisers

Social media users may be unaware that advertisers and media companies are tracking their movements online, paying close attention to keywords, hashtags, and likes. Gone are the days of simple cookies deposited on one's computer through an Internet browser. Today, advertisers track mouse movements, widget use,83 online purchases, GIS location data, and even your latest prescription refill. They compile this information in real time and often sell it to the highest bidder.84 When users find out about these practices, they are often astonished, then appalled. For example, a USA Today/Gallup poll from December 2010 found that 67% of US Internet users surveyed believed advertisers should not be able to target messages based on "your specific interests and websites you have visited."85 A 2014 survey confirmed that social media users do not want their data aiding advertisers. It found that only 35 percent of respondents agreed with the statement "I use free services online and on smartphones/tablets and don't mind if my data is potentially also used for advertising purposes."86

Currently, there are no federal government regulations limiting the kind or amount of information that can be collected by advertisers online after multiple efforts to enact Do Not Track legislation and a Consumer Privacy Bill of Rights failed to pass Congress.87 Some states have made efforts to fill the gaps, such as California's Consumer Privacy Act, which went into effect in 2020. The law requires websites to disclose what personal data they are gathering and allows users to opt out of the sale of their personal information, giving the state's attorney general the right to sue when these promises are breached.88

Site Surveillance

Those who own, and thus control, social media sites are also key surveillants of content. For example, Facebook employs a team of user operations analysts who act upon complaints regarding content and behavior violations and determine whether users have violated the terms of the service agreement. In many ways, these Facebook staff and those of other social media sites are asked to determine the outcome of free speech issues usually decided in the courtroom.

While everyone accepts the terms of social media user agreements with little consideration for the contractual obligations set forth, each of these agreements has restrictions on free speech. At their most basic, user agreements for social media sites contain two kinds of restrictions: limits that courts have ruled are acceptable restrictions on free speech (such as copyright and defamation) and restrictions on speech, such as hateful or objectionable language, that would be ruled violations of free speech rights if imposed by the government. As an example of the first category, LinkedIn does not allow material that "infringes upon patents, trademarks, trade secrets, copyright, or other proprietary rights." In the second category of extra-legal restrictions, Facebook will not allow "hateful, threatening, or pornographic"
material or content that "contains nudity" or "graphic violence."89 LinkedIn reminds users that their accounts can be shut down if they participate in "discriminatory" discussions.90 Moderators in online gaming communities monitor real-time in-game chat. Offending remarks made in these chat logs can get players removed indefinitely for speech that would otherwise be protected on the sidewalk outside their door. Xbox Live has a Code of Conduct that bans "topics or content of a sexual nature," and immediate suspension can be applied for "severe racial remarks."91 The online social bulletin board Pinterest has an even more restrictive speech policy, banning speech that may be "racially or ethnically offensive," "profane," or "otherwise objectionable."92 Pinterest notes in its terms of service agreement that "we have the right, but not the obligation, to remove User Content from the Service for any reason."93

YouTube, one of the most popular social media sites with more than a billion users,94 has very specific limits on expression outlined in its community guidelines. YouTube will remove any videos that violate the community guidelines, including those that feature "pornography," "child exploitation," "bad stuff like animal abuse, drug abuse, under-age drinking and smoking, or bomb making," "graphic or gratuitous violence," "predatory behavior, stalking threats, harassment, intimidation, invading privacy, revealing other people's personal information, and inciting others to commit violent acts."95 YouTube also reminds users on the community guidelines website that videos are screened for unseemly content 24 hours a day.

In 2016, Twitter formed a Trust & Safety Council to "ensure people can continue to express themselves freely and safely on Twitter." The council, a response to a memo the year before in which then-CEO Dick Costolo stated that "we suck at dealing with abuse and trolls on the platform," is one of several remedies being implemented by the company.96 Twitter's policy on "Abusive Behavior" states, "In order to ensure that people feel safe expressing diverse opinions and beliefs, we do not tolerate behavior that crosses the line into abuse, including behavior that harasses, intimidates, or uses fear to silence another user's voice."97 These rules, revised and implemented by Twitter in 2015, led to the removal of Breitbart contributor and conservative rabble-rouser Milo Yiannopoulos' verified account status.98

Self-Surveillance

When Facebook was a shiny new social media platform, just out of the box, users friended with abandon. High school boyfriend? Friend. College roommate? Friend. Random woman you met at a conference? Friend. Don't really recognize the person but somehow they found you online? Friend. In recent years, however, the "friendzy" has subsided. Over the past decade, we have read thousands of status updates, viewed hundreds of cats playing piano videos, and given a big "thumbs up" to more news articles than we can count. Just as users learned Second Life and MySpace no longer contributed
positively to their online identities, users of contemporary social media platforms have become savvier through self-surveillance. Now, people are removing those they once considered "friends" from social networks. A 2012 survey on social media privacy found that 63% of those surveyed had removed someone as a friend from their social network.99 In the process, these "unfrienders" have limited the kinds of and the number of voices they are willing to hear.

The power to pick and choose whom to hear was granted by the Supreme Court in the 1943 case Martin v. City of Struthers.100 In this case, a city ordinance forbade the distribution of handbills door-to-door. The ordinance, challenged by the Jehovah's Witnesses who delivered literature door-to-door as part of their ministry, was found to be an unconstitutional restriction on free speech. Here, the Supreme Court ruled that individuals should have the right to listen to or turn away speech at their own door. It was unconstitutional, the Court concluded, for the government to take on this role of speech gatekeeper. This decision was reaffirmed almost 60 years later in Watchtower Bible and Tract Society v. Village of Stratton,101 when the Court again found the door-to-door distribution of literature constitutional. Many people who found these rulings more nuisance than fundamental right as politicians, solicitors, and religious adherents rang their doorbells are now using this power to control their personal information online. Each time a person is unfriended, it is as if a homeowner has said, "No, thank you. Please don't knock on my door again."

Legislatures and courts have extended the right not to listen to the Internet. For example, the CAN-SPAM Act of 2003 (Controlling the Assault of Non-Solicited Pornography and Marketing Act) banned false, misleading, or deceptive messages from being sent via email and gave email recipients an opt-out option for receiving commercial messages (spam) via email.102 In 2011, a federal district court in California expanded the interpretation of the CAN-SPAM Act to include social media sites such as Facebook, concluding that deceptive commercial messages posted on Facebook walls, sent as messages to "friends," or included on newsfeeds were also "electronic mail messages" and illegal under the act.103 By opting out of advertising on social media sites, users now have additional control over their participation in social media.

In an effort to protect young people from cyberbullying (a laudable interest), Facebook created a Network of Support that encourages users to "block bullies," "report harassment," "stick up for others," "think twice before posting," and "get help if you feel overwhelmed." While this is an important initiative, some of the suggested actions clearly encourage self-censorship. For example, the Network of Support guidelines suggest that "it's also important to be aware of how your own behavior can harm others, even unintentionally. Before you post a comment or a photo that you think is funny, ask yourself [if] it could embarrass or hurt someone. If in doubt, don't post it."104

Users of social media are more aware than ever that their comments and photos follow them. To help reconstruct (or rescue) their digital identities, users are now self-censoring across social media. Photos deleted from Facebook may linger on the Internet for months or years, and snaps on Snapchat,
once promised to "disappear," are no longer "ephemeral." In 2014, Snapchat settled a suit brought by the Federal Trade Commission for false, misleading, and deceptive advertising. The FTC found that users could save messages in many ways, including through third-party applications. Following the settlement, Snapchat can no longer legally claim that photos sent via its social media app are "ephemeral," that they "disappear forever," or that they "aren't saved."105

Self-surveillance also involves online reputation control, otherwise known as getting rid of past mistakes. Whether it is photos from a drunken night in Las Vegas, a sweet love note to a past boyfriend, or a heated rant in a refrigerator repair forum, past imperfections can be erased. You can now hire "cleaners" to help you lose your online memory. Just like the mafia version of cleaners from your favorite detective novel, online reputation companies such as reputation.com and several attorneys promise to eliminate what's following you, with varying degrees of success.

The rise of social media surveillance adds another dimension of concern to the narrowing boundaries of free speech in social media. While these spaces are often praised as opening up debate, they may, in fact, be on the front lines of future censorship efforts by governments, employers, and the sites themselves. To ensure a breadth of discourse in social media, all users must be aware of the limits placed on them and speak up when the walls of free speech close in.

FREQUENTLY ASKED QUESTIONS

1. Can I be fired for saying I hate my job on TikTok?

It depends. The National Labor Relations Board finds employee termination to be unlawful if the social media message discusses "working conditions or wages" and the firing is a punishment for the discussion of these issues with coworkers. So if you are criticizing your employer for the work environment, job expectations, or salary, you should keep your job. If you comment on the boss's pattern of dating very young women or repeatedly post material during working hours, probably not. Your employer can set guidelines for your use of social media related to work – whether it is time spent on sites or what information you are allowed to post. This is especially true if your employer is one to which federal regulations apply regarding the distribution of information (such as in the financial services industry). Businesses that create new products or services are also likely to have a restrictive social media policy, as leaking information regarding the newest product before the launch date can have disastrous consequences for the company's profits and image.


2. What kinds of statements on social media could get me arrested or charged with a crime?

Both website managers and law enforcement officials are concerned with what you post on social media sites. Guidelines for acceptable content are outlined on each site's Community Standards or Terms of Use pages. In general, you can have your membership in social networking sites revoked for a number of content categories that are protected under the First Amendment but not acceptable in these privately owned spaces, such as hate speech, pornography, the use of humiliating or bullying language or images, and derogatory statements dealing with race, ethnicity, sexual orientation, and disabilities. Law enforcement officials are much more concerned with illegal content: terror and other national security threats, images of drug production or use, obscenity, child pornography, stalking, or physical abuse. Several courts have accepted evidence from social media sites to support other claims. For example, a woman on probation for drinking and driving had her probation revoked after evidence was submitted showing her intoxicated in a Facebook post.106 Social media messages and images are also being used regularly in divorce and child custody cases to show infidelity or parenting ability.107 Civil suits involving injuries are now commonly challenged by evidence of activities shown on social media sites. The most common legal concern on social media sites, though, is copyright infringement. You could be notified by either the site manager or the lawyer for a copyright holder to take down copyrighted material from social media sites.

3. Can my anonymous posts on social media be tied to me in real life?

Yes. Depending on the content of the post, courts or law enforcement officials may request your account information, including username, address, and contact information, from the social media company. If a court order is issued, more specific information regarding your account, including how often and what kind of additional posts were made, may be obtained.

4. How are American laws and policies about social media different from the rules in other countries?

Many countries have laws that allow for surveillance, censorship, and restriction of content on social network sites. When using social media sites in other countries, it is very important to know the boundaries
of speech in those nations. If you are traveling or studying abroad, you should assume that you do not have the same breadth of freedom allowed here – even on the most restrictive sites. In many countries, journalists, activists, and citizens have been arrested and jailed for comments made on social media. It is also important to remember that social media companies are private corporations and set their own terms of service for users. While these vary, they all contain detailed policies regarding what is acceptable and unacceptable content on the site for each country of service. And remember that Section 230 of the Communications Decency Act is only a shield in the United States. Other countries do not offer the same protections.

5. Are social media sites private or public spaces?

To date, social network sites have been treated as private spaces rather than public ones. For example, in Noah v. AOL Time Warner, Inc.,108 a federal court ruled that AOL's online chat rooms were not places of public accommodation under Title II of the Civil Rights Act, which forbids places of public accommodation from discriminating on the basis of race, gender, or ethnicity, because the chat rooms were located in a virtual rather than a physical space. This means that social media websites are allowed to make their own rules, even if those rules are more restrictive or more offensive than allowed under the Constitution or US law. But for public officials using social media for government purposes, courts have regularly ruled that these are public forums, and citizens cannot be blocked or excluded without violating their First Amendment rights (see preface).

Notes

1. Special thanks to Jocelyn Brooks and Gage Adams, students at Trinity University, who conducted research in support of this chapter.
2. Katie Rogers, Snapchat at 107 M.P.H.? Lawsuit Blames Teenager (and Snapchat), New York Times, May 3, 2016.
3. Justin Jouvenal, A 12-year-old Girl is Facing Criminal Charges for Using Certain Emoji. She's Not Alone, Washington Post, February 27, 2016.
4. Chanel May Have Just Won a Battle for the Chanel Instagram Account, The Fashion Law, January 8, 2016, www.thefashionlaw.com/home/chanel-may-havejust-won-a-battle-for-the-chanel-instagram-account.
5. Packingham v. North Carolina, 582 U.S. _____ (2017).
6. Freedom House, Freedom on the Net 2021: The Global Drive to Control Big Tech (2021), https://freedomhouse.org/report/freedom-net/2021/global-drive-controlbig-tech.
7. Schenck v. U.S., 249 U.S. 47 (1919); Debs v. U.S., 249 U.S. 211 (1919).
8. Freedom House, supra note 6, at 3.
9. Jared Keller, Evaluating Iran's Twitter Revolution, The Atlantic, June 18, 2010.
10. United Nations General Assembly Human Rights Council, Report of the Special Rapporteur on the Promotion and Protection of the Right to Freedom of Opinion and Expression, Mr. Frank LaRue, May 16, 2011, http://www2.ohchr.org/english/bodies/hrcouncil/docs/17session/A.HRC.17.27_en.pdf.
11. Danny Sullivan, Google's Results Get More Personal with "Search Plus Your World," Search Engine Land, January 10, 2012, http://searchengineland.com/googles-results-get-more-personal-with-search-plus-your-world-107285.
12. Perkins v. LinkedIn Corp., 2016 U.S. Dist. LEXIS 18649 (N.D. Cal. 2016).
13. Robinson Meyer, Everything We Know about Facebook's Mood Manipulation Experiment, The Atlantic, June 28, 2014.
14. Lesley Fair, FTC's $5 Billion Facebook Settlement: Record-breaking and History-making, Federal Trade Commission, July 24, 2019.
15. Peter Kafka, The US Government is Fining Facebook $5 Billion for Privacy Violations, and Wall Street Thinks That's Great News, Vox, July 12, 2019.
16. Louise Matsakis, Scraping the Web is a Powerful Tool. Clearview AI Abused It, Wired, January 25, 2020.
17. Jasmine McNealy & Heather Schoenberger, Reconsidering Privacy-Promising Technologies, 19 Tulane J. Tech & Intell. Prop. 1 (2016).
18. Lee Rainie & Maeve Duggan, Privacy and Information Sharing, Pew Research Center, January 14, 2016, www.pewinternet.org/2016/01/14/privacy-and-informationsharing.
19. John Milton, The Areopagitica (1644).
20. John Stuart Mill, On Liberty (1859).
21. Abrams v. U.S., 250 U.S. 616, 630 (1919).
22. See, e.g., Robert W. McChesney, Rich Media, Poor Democracy: Communication Politics in Dubious Times (1999); Ben Bagdikian, The Media Monopoly (1997); Edward S. Herman & Noam Chomsky, Manufacturing Consent (2002).
23. Rodney A. Smolla, Free Speech in an Open Society 9 (1992).
24. United Nations General Assembly Human Rights Council, supra note 10, at 5.
25. Adam Liptak, Was That Twitter Blast False, or Just Honest Hyperbole? New York Times, March 5, 2012, at A12.
26. Joseph O'Sullivan, In Testimony WA Gov. Inslee Says Bill on Lying about Election Results "Written to Protect the First Amendment," Seattle Times, January 28, 2022.
27. Liptak, supra note 25.
28. Susan B. Anthony List v. Driehaus, 134 S. Ct. 2334 (2014).
29. Susan B. Anthony List v. Driehaus, 814 F.3d 466 (6th Cir. 2016).
30. Bland v. Roberts, 2013 U.S. App. LEXIS 19268, 46 (4th Cir. 2013).
31. Heather Kelly, Snapchat Fights Ban on "Ballot Selfies," CNN Money, April 22, 2016, http://money.cnn.com/2016/04/22/technology/snapchat-ballot-selfie.
32. Rideout v. Gardner, 123 F. Supp. 3d (D.N.H. 2015).
33. Kelly, supra note 31.
34. United States v. Carolene Products, 304 U.S. 144, note 4 (1938).
35. 141 Cong. Rec. S1953 (daily ed. February 1, 1995).
36. 521 U.S. 844 (1997).
37. Id. at 871–872.
38. Sable Communications of California, Inc. v. FCC, 492 U.S. 115 (1989).
39. Sarah Kaplan, University of Illinois Censured after Professor Loses Job over Tweets Critical of Israel, Washington Post, June 15, 2015.


40. Kaley Johnson, Firing of Fort Worth High School Teacher Georgia Clark Upheld, Fort Worth Star-Telegram, March 24, 2021.
41. Michael Vasquez, Collin College Will Pay Ousted Professor $70,000 Plus Fees in Free-Speech Case, Chronicle of Higher Education, January 25, 2022.
42. Sarah Brown, Student Free Speech Case Concerning Social Media Arises in Minnesota, The Daily Tar Heel, February 13, 2012.
43. 42 U.S.C. § 1983 (2016).
44. Yoder v. University of Louisville, 2012 U.S. Dist. LEXIS 45264 (W.D. Ky. 2012).
45. S.J.W. v. Lee's Summit R-7 School District, 696 F.3d 771 (8th Cir. 2012).
46. Edwin Rios, Everything You Need to Know about Yik Yak, the Social App at the Center of Missouri's Racist Threats, Mother Jones, November 11, 2016.
47. Robert A. Cronkleton, Jason Hancock & Ian Cummings, Student Charged with Allegedly Making Online Threat Targeting African-American Students on MU Campus, Kansas City Star, November 10, 2015.
48. Ian Cummings, Third Missouri Man Charged with Posting Yik Yak Threats against College Campus, Kansas City Star, November 12, 2015.
49. Rios, supra note 46.
50. Yik Yak, Yik Yak Guidelines for Law Enforcement (2021), https://yikyak.notion.site/Yik-Yak-Guidelines-for-Law-Enforcement-b2d3ad6112bf410e892e624c18661789 (last accessed February 28, 2022).
51. Dan Solomon, A Texas A&M Student Was Arrested for Posting an Anonymous Shooting Threat on Yik Yak, Texas Monthly, October 22, 2015.
52. Jason Fagone, The Serial Swatter, New York Times Magazine, November 24, 2015.
53. Alan Gathright, "Swatting" Hoax Cost $25,000 for Law Enforcement Response to Bogus Hostage Taking Incident in Greeley, Denver ABC 7, June 15, 2015.
54. Federal Bureau of Investigation, The Crime of "Swatting," September 3, 2013, www.fbi.gov/news/stories/2013/september/the-crime-of-swatting-fake-9-1-1calls-have-real-consequences.
55. Meg Penrose, Outspoken: Social Media and the Modern College Athlete, 12 J. Marshall Rev. Intell. Prop. L. 509, 526 (2013).
56. See Mahanoy Area School District v. B.L., 594 U.S. ____ (2021).
57. Elonis v. United States, 575 U.S. _____ (2015).
58. Id.
59. Deborah Netburn, Court Orders Man to Apologize to Estranged Wife on Facebook, Los Angeles Times, February 23, 2012.
60. Hoewischer v. White, 551 B.R. 814 (Bankr. S.D. Ohio 2016).
61. Chance Carter, An Update on the Legal Landscape of Revenge Porn, National Association of Attorneys General, November 16, 2021, www.naag.org/attorney-generaljournal/an-update-on-the-legal-landscape-of-revenge-porn.
62. Hogan v. Weymouth, 2020 U.S. Dist. LEXIS 248066 (C.D. Cal. 2020).
63. Vangheluwe v. Got News, 365 F. Supp. 3d 850 (E.D. Mich. 2019).
64. Barclay v. Haney, 2012 WL 6034070 (Ohio Ct. App. 2012).
65. Apple, Answers to Your Questions about Apple and Security (2016), www.apple.com/customer-letter/answers.
66. Snapchat, Transparency Report, November 22, 2021, www.snapchat.com/transparency.
67. Ian Hoppe, Does Law Enforcement Have Access to Your Snapchat Photos? A Simple Guide, Al.com, November 14, 2014, www.al.com/business/index.ssf/2014/11/snapchat_subpeona.html.
68. Dina Temple-Raston, Terrorists Struggle to Gain Recruits on the Web, National Public Radio, December 29, 2011.


69. Jaikumar Vijayan, Privacy Tussle Brews over Social Media Monitoring, Computerworld, February 16, 2012, www.computerworld.com/article/2501725/dataprivacy/privacy-tussle-brews-over-social-media-monitoring.html.
70. Lee Rainie & Mary Madden, Americans' Privacy Strategies Post-Snowden, Pew Research Center, March 16, 2015, www.pewinternet.org/2015/03/16/americansprivacy-strategies-post-snowden.
71. Richard A. Serrano, Tashfeen Malik Messaged Facebook Friends about Her Support for Jihad, Los Angeles Times, December 14, 2015.
72. Molly Bernhart Walker, DHS Begins Hunt for Social Media Monitoring Technology, Fierce Government IT, January 27, 2016, www.fiercegovernmentit.com/story/dhs-begins-hunt-social-media-monitoring-technology/2016-01-27.
73. Reporters without Borders, Enemies of the Internet 10 (2014), https://12mars.rsf.org/wp-content/uploads/EN_RAPPORT_INTERNET_BD.pdf.
74. Amy McDonnell, 60% Employers Use Social Media to Screen Job Candidates, The Hiring Site Blog, April 28, 2016, http://thehiringsite.careerbuilder.com/2016/04/28/37823.
75. Bidhan Parmar, Should You Check Facebook before Hiring? Washington Post, January 22, 2011.
76. Catherine Ho, Maryland Becomes First State to Prohibit Employers from Asking for Facebook Logins, Washington Post, May 3, 2012.
77. Office of the General Counsel, Division of Operations-Management, National Labor Relations Board, Report of the General Counsel Concerning Social Media, January 24, 2012.
78. Id. at 18–20.
79. An Administrative Judge Has Ordered Chipotle to Rehire a Philadelphia-Area Employee Who Was Fired after Criticizing the Company on Twitter Last Year, US News & World Report, March 16, 2016.
80. Nedra Rhone, Social Media; with Iffy Tweets, Job Can Delete, Atlanta Journal-Constitution, February 11, 2012, at 1D.
81. Richard Sandomir, Curt Schilling, ESPN Analyst, Is Fired over Offensive Facebook Post, New York Times, April 20, 2016.
82. Matt Bonesteel, ESPN's Jemele Hill: "I Deserved that Suspension," Washington Post, October 21, 2017.
83. Amir Efrati, "Like" Button Follows Web Users, Wall Street Journal, May 18, 2011.
84. Getting to Know You, The Economist, September 13, 2014.
85. Lymari Morales, U.S. Internet Users Ready to Limit Online Advertisers Tracking for Ads, Gallup.com, December 21, 2010, www.gallup.com/poll/145337/Internet-Users-Ready-Limit-Online-Tracking-Ads.aspx.
86. Jack Marshall, Do Consumers Really Want Targeted Ads? Wall Street Journal, April 17, 2014.
87. Jessica Rich, After 20 Years, It's Time for Congress to Finally Pass a Baseline Privacy Law, Brookings, January 14, 2021.
88. Rachel Myrow, California Rings in the New Year with a New Data Privacy Law, NPR, December 30, 2019.
89. Facebook, Statement of Rights and Responsibilities (Updated January 4, 2022), www.facebook.com/legal/terms.
90. LinkedIn, User Agreement (2022), www.linkedin.com/static?key=user_agreement.
91. Microsoft's Code of Conduct Explained for Xbox Live Customers, www.xbox.com/en-US/Legal/CodeOfConduct (last accessed February 28, 2022).
92. Pinterest, Acceptable Use Policy (2022), http://pinterest.com/about/use.
93. Pinterest, Terms of Service (2022), http://pinterest.com/about/terms.
94. YouTube, Statistics (2022), www.youtube.com/yt/press/statistics.html.


95. YouTube, YouTube Community Guidelines (2022), www.youtube.com/t/community_guidelines.
96. Natasha Lomas, Twitter Forms A "Trust & Safety Council" to Balance Abuse vs Free Speech, Tech Crunch, February 9, 2016, http://techcrunch.com/2016/02/09/twitter-forms-a-trust-safety-council-to-balance-abuse-vs-free-speech.
97. Twitter, The Twitter Rules (2022), https://support.twitter.com/articles/18311.
98. Emily Bell, Twitter Tackles the Free Speech Conundrum, The Guardian, January 10, 2016.
99. Mary Madden, Privacy Management on Social Media Sites, Pew Research Center, February 24, 2012, at 2, www.pewinternet.org/2012/02/24/privacy-management-on-social-media-sites.
100. 319 U.S. 141 (1943).
101. Watchtower Bible and Tract Society of New York v. Village of Stratton, 536 U.S. 150 (2002).
102. 15 U.S.C. 103 (2016).
103. Facebook, Inc. v. MaxBounty, Inc., 2011 U.S. Dist. LEXIS 32343 (N.D. Cal. 2011).
104. Facebook Safety, Facebook's Network of Support (October 19, 2010), www.facebook.com/note.php?note_id=161164070571050.
105. Jill Scharr, Snapchat Admits Its Photos Don't "Disappear Forever," Yahoo News, May 9, 2014, www.yahoo.com/news/snapchat-admits-photos-dont-disappear124756882.html?ref=gs.
106. State of Connecticut v. Altajir, 33 A.3d 193 (Conn. 2012).
107. Leanne Italie, Divorce Lawyers: Facebook Tops in Online Evidence in Court, USA Today, June 29, 2010.
108. 261 F. Supp. 2d 532 (E.D. Va. 2003).

Chapter 2

Defamation

Derigan Silver
UNIVERSITY OF DENVER

Since first according libel protection under the First Amendment in 1964, the US Supreme Court has attempted to find the "proper accommodation" between society's interest in the free dissemination of information and the state's interest in protecting an individual's reputation. However, with its roots in feudal times, the law of defamation often has trouble keeping up with both society and technology. Global networks such as the Internet have made reputation "more enduring and yet more ephemeral."1 Maintaining your reputation in a networked society, replete with anonymous postings that can be instantly updated from nearly anywhere in the world, is no easy task. In addition, viral videos are now common, and many individuals have found recordings of themselves featured on others' social media accounts that can then spread quickly.

Frequently, however, little defamation law is created specifically to address issues created by a new medium. Thus, while the invention of the Internet and the spread of online speech have not required the formulation of a new area of "cyberspace tort law,"2 the widespread use of social media has forced courts to apply old laws to new situations. Already called one of the most complicated areas of communication law, with many tenets that run counter to common sense, the law of defamation is even more complicated when online communication is at issue. For example, because of ambiguity in several Supreme Court decisions, lower courts are divided over whether there is or should be a different standard of review for media and non-media defendants, an issue much more likely to be raised in situations involving social media because of the ability of average citizens to widely spread information via sites such as Twitter and Facebook. In addition, since 1996, qualified "Internet service providers" have been immune from defamation suits under Section 230 of the Communications Decency Act3 for the postings of third-party content providers.

This chapter explains how the law for traditional media and online media is the same and explores the many ways the law has adapted to – or struggled to adapt to – the new world of social media. First, the traditional elements needed to successfully prove a libel claim and the various options available to defendants with an emphasis on the Internet and digital
media are explored. Each section also explains some of the particular challenges presented by social media. The chapter concludes by discussing how scholars have speculated the law might change and examining some issues regarding social media and defamation that courts have dealt with in the past decade.

The Law of Defamation

Traditional defamation law recognizes that our society considers reputation one of a person's most valuable possessions and an individual has an interest in preserving his good name. Defamation is a tort, or a civil wrong, that attempts to redress damages to reputation. Defamation is a communication that "harm[s] the reputation of another as to lower him in the estimation of the community."4 Written or printed defamation is libel, while spoken defamation is slander. Although the law used to treat libel and slander distinctly, the introduction of broadcasting blurred the distinction between the two over time. Some jurisdictions, however, still make the distinction, with most considering defamation on the Internet to be libel. Because they allow individuals to sue for reputational damages, libel and slander create financial risks for journalists, other professional communicators, and average people posting on sites such as Facebook and Twitter. However, defamation law also tries to balance an individual's interest in maintaining his reputation with our society's deep commitment to freedom of expression. In some situations, then, defamation law subordinates a person's reputational interest to freedom of expression.

Any living individual, business, nonprofit corporation, or unincorporated association can sue for defamation. While government organizations cannot sue, government officials may file a suit. Dead people cannot sue for reputational damage in the United States.5 In some states, there are laws designed to protect particular products from harm by false allegations, typically known as veggie libel laws. For example, Oprah Winfrey was sued in Texas for statements she made during a show about mad cow disease. Following a drastic drop in cattle prices after the show aired, Texas cattle ranchers sued Winfrey for violating the state's False Disparagement of Perishable Food Products Act, although the jury eventually ruled in Winfrey's favor.6

A person who files a defamation complaint becomes the plaintiff. The person being sued becomes the defendant. The burden of proof in defamation is typically on the plaintiff. That is, to win a defamation suit, the plaintiff must establish certain claims or satisfy individual elements. They include

1. publication,
2. identification,
3. defamation,
4. fault,
5. falsity, and
6. injury or harm.

Most plaintiffs have to satisfy all six elements of a defamation suit in order to win. Even if a plaintiff can prove all six elements, the defendant may present a defense based on the First Amendment or common law. Defamation is state common law and often varies by jurisdiction. All jurisdictions are similar in some regards, however, and it is these similarities this chapter will focus on.

Publication

Publication, in legal terms, requires at least three people – one to communicate the defamatory statement, the person being defamed, and at least one other person to hear, see, or read the defamatory statement. A false statement made directly to an individual, but no one else, cannot be the subject of a defamation claim because there has been no reputational damage. No third party thinks less of the individual because no third party received the message. Any article that appears in a printed medium or online or any story that is distributed via broadcast, cable, satellite, or the Internet is considered published. Libel can also be published in press releases, advertisements, letters, and other personal communications. Information is also considered published when it appears on a personal website, a social media website – such as Facebook or Twitter – or in an email.

You are also liable for repeating – or "republishing" – defamation if the defamatory statement does not come from a privileged source, which is explained later. Under the common law of libel, a person who repeats a libel is just as responsible for the damage caused as the original publisher. This includes a journalist who accurately quotes a source or media organizations that publish or broadcast defamatory advertisements. However, this doctrine is limited by several factors, especially when it comes to social media.

First, the republication rule is typically limited to situations where the publisher controls the content. Media companies, for example, are responsible for the republication of libelous statements because their employees write, edit, select, or are otherwise responsible for the content of the communication, even if the content did not originate with the media company. Common carriers – such as telephone companies, libraries, bookstores, and newsstands – are not typically liable for the defamatory content they distribute. Asking a bookstore to review the content of every book offered for sale for defamatory material would hinder the flow of information.

Second, every distribution of a libelous statement does not constitute a separate publication. Under the "single publication rule," a libel plaintiff may only sue once, eliminating the possibility of multiple suits across multiple jurisdictions for the same defamatory statement. Thus, a plaintiff may sue in only one
jurisdiction even if a defamatory statement published on Facebook was accessible in every state where the defendant had a friend. Third, under federal statutory law, operators of websites are not considered publishers and are thus not liable for statements posted on their sites by third parties, even if the website’s operator attempts to edit or screen material for defamatory content. This includes social media sites like Facebook and Twitter. Section 230 of the Communications Decency Act, enacted as part of the Telecommunications Act of 1996, provides a safe harbor that protects online service providers from liability for their users’ posts. Section 230 states, “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”7

You may have heard of Section 230 as it has recently been attacked by politicians on all sides. Democrats believe social media sites like Facebook are not doing enough to combat fake news on their sites. Republicans, meanwhile, claim that social media sites are biased against conservative voices. The law, however, was originally passed to encourage Internet service providers (ISPs) to restrict the flow of objectionable content and in reaction to the idea that by editing this material on the Internet, the ISP could be held liable under the republication rule. In Stratton Oakmont, Inc. v. Prodigy Services Co.,8 a New York court held that Prodigy, an online service provider that appeared to be exercising editorial control over its service by using software to screen out offensive language and a moderator to enforce content provisions, was liable for its users’ posts. Thus, in an effort to encourage interactive services to make good faith attempts to control indecency on the Internet, Congress provided ISPs with immunity from libel suits. After all, if policing content had made Prodigy liable for defamatory content posted by a third party, few providers would want to exercise editorial control over content. Thus, ISPs are granted immunity in order to encourage them to edit indecent material.

Section 230 attempts to distinguish between ISPs and Internet information content providers. Section 230 defines an interactive computer service as “an information service, system, or access software provider that provides or enables computer access by multiple users to a computer server, including specifically a service or system that provides access to the Internet and such systems operated or services offered by libraries or educational institutions.”9 The interactive computer service is protected from defamatory statements made by other information content providers, or “any person or entity that is responsible, in whole or in part, for the creation or development of information provided through the Internet or any other interactive computer service.”10 Since Section 230 was passed, courts have interpreted “interactive service providers” quite broadly. Section 230 has been applied to websites, forums, listservs, and social media sites.

In 2018, Congress passed the first major change to Section 230 since it was enacted. The Allow States and Victims to Fight Online Sex Trafficking Act (FOSTA) states that Section 230 does not apply to civil and criminal charges of sex trafficking or to conduct that “promotes or facilitates
prostitution.”11 The change revolved around websites like Backpage.com, a classified site well known for adult-services advertisements.

In 2013, numerous commentators questioned the future of Section 230 when a federal jury awarded $338,000 to a former Cincinnati Bengals cheerleader for postings made by a third party to thedirty.com. In that case, the trial judge ruled that thedirty.com was not protected by Section 230. In 2014, however, the Sixth Circuit Court of Appeals ruled thedirty.com was protected under Section 230 because the website did not materially contribute to the statements posted on the site about the former cheerleader.12 Today, Section 230 continues to provide strong protections to websites despite calls for reform. In 2020, for example, a Virginia court ruled that Twitter could not be sued under Section 230 for tweets by third parties. US Congressman Devin Nunes sued Twitter and three Twitter users – Republican consultant Elizabeth A. Mair, an anonymous user posting as Devin Nunes’ Mom, and an anonymous user posting as Devin Nunes’ Cow. Nunes argued that Twitter’s “liberal bias” transformed it into a publisher that was not protected by Section 230. The court disagreed.13 In addition, Section 230 has been invoked to bar claims of invasion of privacy and other areas of law.14

Numerous courts have also held that linking to or referencing an allegedly defamatory Internet posting is not a republication. The Fourth Circuit Court of Appeals, for example, held in 2020 that sharing a hyperlink to a Washington Post article was not a republication or new publication. Svetlana Lokhova argued that each time a link to the article – which stated she was a Russian spy involved in the alleged collusion between Russia and former President Donald Trump’s campaign – was shared or retweeted, a new publication occurred. The court noted that multiple courts have ruled that distributing a hyperlink was not a new publication or a republication.15

Not all courts have agreed, however. In 2021, in yet another suit brought by Congressman Nunes, the Eighth Circuit Court of Appeals ruled that tweeting a link to an article could constitute a republication of an article and evidence of what is known as “actual malice.” In Nunes v. Lizza,16 Representative Nunes sued Ryan Lizza and Hearst Magazine Media, Inc., for a 2018 article published in Esquire magazine about Representative Nunes and his family’s dairy farm in Iowa. The Nunes family farm has been the source of controversy for years (hence the anonymous Twitter user who goes by the name Devin Nunes’ Cow) and has led to a barrage of defamation lawsuits by Nunes. In 2006, the family sold a farm in California to buy a farm in Iowa. The article suggested Nunes was keeping this fact a “secret” and asked if the family was “hiding something politically explosive.” The article asserted that the farm used undocumented labor, suggested this was why the family was hiding the move from California to Iowa, and accused Nunes of improper conduct during his time as Chairman of the House Intelligence Committee. Nunes’ suit was initially dismissed for a variety of reasons by the trial court. The trial court ruled that as a public official, Nunes could not prove the original article was published with “actual
malice” (see the section later on “Fault”) and that Lizza was protected under the republication rule when he hyperlinked to the article in a subsequent tweet. On appeal, however, the Eighth Circuit ruled that when Lizza posted a tweet with a link to the article on November 20, 2019, Lizza was “on notice of the article’s alleged defamatory implication by virtue of the lawsuit.” Therefore, the court ruled, Lizza “republished” the article by linking to it knowing that Nunes denied knowledge of undocumented labor on the farm. This, the court ruled, could be evidence that Lizza acted with actual malice – publishing a statement with knowledge of its falsity or with reckless disregard for the truth of the statement – by republishing an article on his Twitter account he knew might be erroneous.

An additional area of concern that may arise regarding publication and Internet defamation relates to statutes of limitations. All states have a statute of limitations on defamation claims, typically ranging from one to three years from the date of publication. Courts have nearly uniformly ruled that the continuous nature of online publications does not indefinitely extend the statute of limitations for Internet libel. The date of publication is the date the allegedly defamatory material is posted.17 In addition, courts have ruled that updating a website with unrelated content or making technical changes to an already published website does not constitute a new publication unless the defamatory statement itself is altered or updated.18

Identification

Plaintiffs must establish that a published communication is about them. This requires a plaintiff to prove a defamatory statement was “of and concerning” them. That is, they must show that someone else could identify them as the subject of the defamatory statements. A plaintiff can be identified by name, picture, description, nickname, caricature, cartoon, and even a set of circumstances. Any information about the individual may identify them. Some defamation suits arise because of the naming of the wrong person. For example, reporting that Adam Smith of 123 Main Street was arrested for murder could lead to a defamation suit if the Adam Smith arrested lived at 456 First Avenue. For this reason, some media lawyers recommend identifying subjects in several ways – for example, by full name, including a middle name or initial if available, address, and occupation. Some commentators have argued, however, that the ability of Internet publications to reach a very large audience reduces the likelihood that a plaintiff can be identified by “mere commonality of a name or other shared attribute.”19 After all, if there are millions of Adam Smiths within the reach of the Internet, it is difficult to say your friends assumed you were the Adam Smith who was identified in a post on social media.

Although persons who are part of a large group are usually not able to prove identification, in some situations, members of a small group may be able to prove they have been identified by reference to the group as a whole. Under the
group libel rule, large groups, such as all college professors, could not sue for libel based on statements such as “All college professors are lazy and most are unqualified to teach.” However, members of smaller groups might be able to sue depending on the size of the group and the language used. The smaller the group, the more likely it is that a statement identifies members of the group.20

Defamatory Content

Defamation is a communication that exposes a person to hatred, ridicule, or contempt; lowers them in the esteem of others; causes the person to be shunned; or injures them in their business or profession. The New York Court of Appeals defined a defamatory statement as one which “tends to expose a person to hatred, contempt or aversion, or to induce an evil or unsavory opinion of him in the minds of a substantial number of the community, even though it may impute no moral turpitude to him.”21 Whether a statement has a defamatory meaning ultimately depends on how a community responds to it. The general rule in the United States is that a statement conveys a defamatory meaning if it would harm a person’s reputation to “a substantial and respectable minority” of the community. In every case, the court must determine the particular words or phrases that were used and determine if those words lower the individual’s reputation among a significant number of “right-minded” people in the community. Words should be considered in light of their ordinary meaning. A judge is responsible for determining as a matter of law if the words are capable of being defamatory. It is then up to the jury to determine if the words actually conveyed a defamatory meaning.

Libel per se refers to statements that are defamatory on their face. This typically includes accusations of criminal conduct or activity; attacks on one’s character traits or lifestyle, including claims of sexual promiscuity, sexual behaviors that deviate from accepted norms, and marital infidelity; allegations that a plaintiff has a communicable or loathsome disease; and allegations that tend to injure a person in his business, trade, profession, office, or calling. While the inclusion of statements regarding unchaste behavior or deviant sexual behavior may seem anachronistic because community mores about appropriate sexual conduct vary dramatically, statements relating to such conduct may be the subject of lawsuits, particularly when published on the Internet, which effectively distributes such statements internationally.22

Libel per quod refers to statements where the defamatory meaning of a statement results from extrinsic knowledge. Libel per quod is akin to libel by implication or innuendo. Libel per quod can be difficult for a plaintiff to prove. Typically, you cannot be held liable for statements that are defamatory because of facts you had no reason to know. In addition, as discussed later, in many jurisdictions, plaintiffs alleging libel per quod must prove actual monetary loss or “special” damages. In some jurisdictions, in lawsuits involving libel per se, proof of the defamation itself establishes the existence of damages.
Other jurisdictions, however, have abandoned the per se/per quod distinction. In addition, the distinction between the two was weakened by the US Supreme Court in Gertz v. Robert Welch, Inc., when the Court ruled that in matters of public concern, all plaintiffs must prove statements were made negligently or recklessly.23

Fault

Fault is perhaps the most complicated area of defamation, and the element has changed dramatically over time. As Justice Byron White noted in Gertz v. Robert Welch, Inc., the Supreme Court’s “consistent view” before New York Times v. Sullivan24 was that defamatory statements “were wholly unprotected by the First Amendment.”25 Thus, prior to 1964, a defendant was held strictly liable unless they could prove their statement was either true or privileged.26 Under strict liability, if a defendant commits an act, even if by accident or in ignorance, they are liable for damages. Today, however, based on a series of decisions by the Supreme Court, the level of fault a plaintiff must prove depends on the identity of the plaintiff, the subject matter of the defamatory statement, and the type of damages, or monetary awards, the plaintiff is seeking.

In Sullivan, the Court held that a public official could not recover damages for a defamatory falsehood relating to his official conduct unless the defendant acted with “actual malice,”27 that is, with knowledge of the statement’s falsity or reckless disregard for the truth. In addition, the Court provided added protection for defamatory speech by requiring the plaintiff to prove actual malice with “convincing clarity” rather than the normal preponderance of evidence.28 In Curtis Publishing Co. v. Butts,29 the Court extended the protection afforded by the actual malice standard to “public figures.”30 The Court reasoned that the distinction between public officials and public figures in 1960s America was artificial. In Gertz v. Robert Welch, Inc.,31 the Court ruled that while public officials and public figures must prove actual malice to win their libel lawsuits, private figure plaintiffs had to prove some degree of fault, at least negligence.32 The Court, however, also made a distinction between winning a defamation suit and the ability to recover presumed and punitive damages. Because presumed and punitive damages are typically large, the Court ruled that the states could not permit their recovery unless the plaintiff could show actual malice, regardless of their status as a public official, public figure, or private individual.33 The Court wrote that presumed damages invited “juries to punish unpopular opinion rather than to compensate individuals for injury sustained by the publication of a false fact.”34 The plurality opinion in Gertz, it is also important to note, consistently referred to the need to protect “publishers,” “broadcasters,” and “the media” from juries who might award large presumed and punitive damages.35 As discussed later, some lower courts have interpreted
these statements to mean that the Constitution requires a different standard for media and non-media defendants, a move that causes concern given the ability of non-media defendants to publish information on social media sites with relative ease.

In 1985, in Dun & Bradstreet v. Greenmoss Builders, the Court continued to refine the law of defamation. In the case, the Court ruled that the limitation on the recovery of presumed and punitive damages established in Gertz did not apply “when the defamatory statements do not involve matters of public concern.”36 Justice Lewis F. Powell Jr. wrote that Gertz’s ban on recovery of presumed and punitive damages absent a showing of actual malice appropriately struck a balance between the strong state interest in compensating individuals for reputational harm and the strong First Amendment interest in protecting speech on matters of public concern, yet nothing in Gertz “indicated that the same balance would be struck regardless of the type of speech involved.”37

In sum, public officials and public figures must prove actual malice to win their lawsuits. Private persons must prove at least negligence to win their lawsuits and collect compensatory damages. All plaintiffs, both public and private, must prove actual malice to collect punitive damages if the subject of the report is a matter of public concern. The Supreme Court has never ruled on whether private figures must prove fault if the subject matter of the defamatory statement does not involve a public concern. Different jurisdictions have reached different conclusions.38

A public official is not simply any individual who works for the government. The Supreme Court has determined that the public official category includes “at the very least . . . those among the hierarchy of government employees who have, or appear to the public to have, substantial responsibility for or control over the conduct of governmental affairs.”39 Courts look at a number of indicators to determine if a plaintiff is a public official. Courts consider whether the individual controls the expenditure of public money, has the ability to set government policy or make governmental decisions, exercises control over citizens, or is responsible for public health, safety, or welfare. Public officials do not have to wield great power or occupy lofty positions. For example, law enforcement personnel, even beat cops, are typically categorized as public officials. According to one scholar, school superintendents, a county medical examiner, and a director of financial aid for a public college have all been ruled public officials.40

In Gertz, the Court ruled there were several types of public figures. All-purpose public figures are individuals who “occupy positions of such pervasive power and influence that they are deemed public figures for all purposes.”41 This can include an individual with widespread fame or notoriety or individuals who occupy a position of continuing news value. The distinction between the fault level required of public and private figures has, in part, been justified by the distinction that public figures have the resources and media access to refute defamatory content.

In Gertz, the Court also ruled that an individual could be a limited-purpose public figure. Limited-purpose public figures are individuals who have “thrust themselves to the forefront of particular public controversies in order to influence the resolution of the issues involved.”42 Like all-purpose public figures, limited-purpose public figures are assumed to have access to the media based on their involvement in the public controversy and can rebut false statements about themselves. When considering Internet defamation, a court might consider independent chat room discussions about the same controversy or extensive or multiple postings on social media by a plaintiff to support a finding that an individual has thrust themselves into the controversy in an effort to affect public opinion.43 In 2020, in a lawsuit involving “C.M.,” a 12-year-old whose Facebook videos supporting then-candidate Donald Trump went viral, the Eleventh Circuit Court of Appeals ruled C.M. injected himself into the controversy surrounding President Trump and his critics by posting the videos online. One of C.M.’s videos attracted more than 325,000 views on Facebook.44 In his videos, the preteen endorsed Trump and called Hillary Clinton “deplorable.”

Finally, in Gertz, the Court also wrote that it was possible some individuals could become public figures inadvertently or involuntarily. Involuntary public figures are individuals who are “drawn into a particular public controversy” and “become public figure[s] through no purposeful action of their own.”45 In contrast to public officials and public figures, a private individual is an average person who has not voluntarily injected themselves into public controversy. Many individuals who allege a defamatory statement was made about them on a social media site would probably fall into this category of plaintiff.

Proving actual malice can be very difficult for a plaintiff. As noted earlier, actual malice is publishing with knowledge of falsity or reckless disregard for the truth, and plaintiffs must prove actual malice with clear and convincing evidence, a higher burden than most civil cases, which only require proof by a preponderance of the evidence, a more-likely-than-not standard. While knowledge of falsity is easily understandable – the defendant knew what they published was false – reckless disregard for the truth can be a more nuanced concept. Reckless disregard has been described by the Supreme Court as “the purposeful avoidance of the truth,”46 “serious doubts as to the truth of the publication,”47 and “a high degree of awareness of probable falsity.”48 The crucial focus of actual malice is the defendant’s state of mind or attitude about the allegedly defamatory statement at the time the statement was made. Actual malice is usually either a knowingly false statement or a combination of reckless behaviors that led someone to publish a story even if there were obvious reasons to doubt its veracity. Neither ill will nor even “extreme departure from professional standards”49 qualifies as actual malice, although along with other factors, these might contribute to a finding of actual malice. For example, in 2021, the Fourth Circuit Court of Appeals ruled that while failure to investigate a source’s story alone was not enough to prove actual malice, failure to
investigate the source’s story combined with other factors that would lead a journalist to doubt the source’s story might be considered reckless disregard.50 As noted earlier, in Nunes v. Lizza,51 the Eighth Circuit Court of Appeals ruled that retweeting a link to a story that was the subject of litigation could be an example of actual malice. In addition to inquiring into the state of mind of the defendant at the time of publication,52 courts consider the sources that were used or not used, the nature of the story – especially whether the story was “hot news” or time-sensitive – and the inherent probability or believability of the defamatory statement.

Negligence is the fault standard most states have adopted for defamation involving a private individual and a matter of public concern. Widely used in tort law, negligence is a failure to act as a reasonable person would in similar circumstances. In libel cases, courts sometimes use the “professional” standard rather than the reasonable person standard. For journalists, this standard asks if they failed to follow accepted professional standards and practices. When the professional standard is used, a journalist’s behavior is measured against what is considered acceptable behavior within the profession. While there is no definitive list of what practices are considered negligent, negligence by a journalist might include failure to check public records, failure to contact the subject of the defamatory statement unless a thorough investigation had already been conducted, failure to contact multiple sources or verify information from more than one source, or a failure to address a discrepancy between what a source says he told the journalist and what the journalist reported. As with actual malice, courts consider the sources that were used or not used, whether the story was time-sensitive, and the inherent probability or believability of the defamatory statement. Courts do not usually demand that a story be investigated exhaustively before publication, so long as a communicator contacts the subject of the defamation directly and checks all information carefully with reliable sources.

Falsity

Defamatory communication must be harmful to someone’s reputation and false. To be protected, a defamatory statement does not have to be absolutely accurate in every aspect. Minor flaws are not actionable so long as the statement is “substantially true.” Mistakes in a statement that do not harm a plaintiff’s reputation cannot be the subject of a defamation suit.

Under American law, public officials, public figures, and private persons involved in matters of public concern must prove falsity. Put another way, anytime a matter of public concern is involved, the plaintiff must prove the defamatory statement is false. Writing for the majority in Philadelphia Newspapers, Inc. v. Hepps,53 Justice Sandra Day O’Connor said Sullivan and its progeny reflected “two forces” that had reshaped the common law of libel: the plaintiff’s status and “whether the speech at issue is of public concern.”54 The
Court ruled that whenever the speech involved a matter of public concern, the Constitution required that the plaintiff bear the burden of proving falsity and fault in a suit for libel.55

Because both the fault and falsity elements turn on determining what is a matter of public concern, both raise some difficult questions regarding the subject matter of defamatory speech, particularly as it applies to Internet defamation. As online speech becomes more ubiquitous in our culture, it has replaced other means of communication. However, much of the speech on Facebook, Twitter, and other social media sites may interest a relatively small number of people, and it may be difficult to determine whether it is a matter of public or private concern. This is particularly troublesome because, in some situations, the conclusion that a statement is merely a matter of private concern may ultimately determine the outcome of a defamation suit. Further, the public nature of social media publications – unless you have protected your account, you publish to the whole world – may cause additional problems.

Complicating matters, the US Supreme Court has never defined “matters of public concern,” although the phrase appears in a wide variety of cases, including cases involving speech by government employees,56 intentional infliction of emotional distress,57 and false light invasion of privacy,58 as well as others.59 In Snyder v. Phelps, a case involving the tort of intentional infliction of emotional distress, the Court explicated the concept. After writing that the Court itself had admitted “the boundaries of the public concern test are not well defined,”60 Chief Justice John Roberts nonetheless set out to articulate some principles. Roberts began by noting that speech on a matter of public concern can “‘be fairly considered as relating to any matter of political, social, or other concern to the community’ or when it ‘is a subject of legitimate news interest; that is, a subject of general interest and of value and concern to the public.’”61 Private speech, on the other hand, “concerns no public issue.”62 Roberts said one factor that “confirmed” the credit report at issue in Dun & Bradstreet was private speech was that it “was sent to only five subscribers.”63 Roberts then contrasted that with the speech at issue in Snyder v. Phelps, which was “designed, unlike the private speech in Dun & Bradstreet, to reach as broad a public audience as possible.”64

Further complicating matters, some lower courts have ruled expression is private even when widely distributed.65 Additionally, adding audience size to the public interest calculus is especially problematic in today’s environment as it would appear to require courts to take into account how large an audience or how many “followers” a speaker might have in a forum like Facebook or Twitter. In addition, as noted earlier, while some users have elected to enact privacy settings on Twitter, others have not, which makes publication on a public Twitter account in effect publication to the whole world regardless of the number of followers you have. Blogs also might become problematic forums for expression. Would, for example, a blog post by an unknown writer be treated differently than a blog post on the widely read Volokh Conspiracy66 even if the topic of both posts was similar? Finally,
what does this say about email? The First Circuit Court of Appeals upheld a lower court’s determination that an email sent to approximately 1,500 employees detailing the firing of a fellow employee for violating company policies was a matter of private concern involving a private plaintiff.67 Surely 1,500 people is a fairly large “intended audience.” And what if the expression is meant for a more limited audience but the content goes viral? In the age of the Internet, every video, picture, Facebook post, or tweet can quickly have an audience of thousands, if not millions. For example, two days after a Twitter user posted a photo of a Target employee with the hashtag #AlexFromTarget, the hashtag had over a million Twitter hits. Before long, the employee had 250,000 followers, appeared on the Ellen show, began receiving death threats, and was the subject of fabricated stories.68 Before boarding a flight to Cape Town, South Africa, Justine Sacco tweeted, “Going to Africa. Hope I don’t get AIDS. Just kidding. I’m white.” After the tweet went viral, the hashtag #HasJustineLandedYet started trending and was the number one worldwide trend by the time Sacco landed. Sacco’s story was picked up by major media outlets, and Sacco lost her job.69 As discussed later, this is further complicated by the fact that some courts treat media and non-media defendants differently.

Injury or Harm

In libel suits, plaintiffs must prove damages that go beyond embarrassment or being upset. Plaintiffs must show harm, sometimes called injury to reputation. Harm can be intangible: loss of reputation, standing in the community, mental harm, emotional distress, and so on. However, sometimes harm may include loss of income. Not all defamatory statements bring about harm, while with other statements, harm to reputation may be “presumed.” For states that still recognize the difference, damage may be presumed in cases of libel per se but not in libel per quod. In some states, in cases of libel per quod, plaintiffs must prove the special circumstances of the defamation – that the audience understood the defamatory connotation – and actual monetary loss before a plaintiff can recover for damages based on emotional distress, damage to reputation, or other intangible harm.

Thus, libel plaintiffs may sue for presumed damages, or harm that loss of reputation is assumed to cause. They may also sue for compensatory damages, awards designed to compensate for the proven loss of a good name or reputation (called actual damages in libel law) or awards designed to compensate for lost revenue or other monetary loss (called special damages in libel law). Finally, plaintiffs may sue for punitive damages, awards imposed to punish the defendant rather than compensate the plaintiff.

The system for calculating damage awards is complex, even before courts begin to consider harm as it applies to Internet communications.70 Public officials and public figures can only be awarded damages if they prove actual
malice. Private figures in matters of public concern must show actual malice if they wish to collect presumed or punitive damages and at least negligence if they wish to collect compensatory damages. Punitive damages are typically larger sums than other damages and thus require a showing of actual malice when the statement involves a matter of public concern. States vary greatly in how they approach the defamation of private individuals who are not involved in a matter of public concern. In some states, they are allowed to collect presumed and punitive damages without a showing of actual malice or are allowed to collect compensatory damages without a showing of negligence. Some jurisdictions have even ruled that defamation involving a private plaintiff and a matter of private concern is completely governed by state common law and does not implicate the Constitution whatsoever.71

Defenses to Defamation

In a defamation suit, defendants have a number of defenses they can actively assert. It is widely recognized in American law that truth is an absolute defense to libel claims.72 As noted earlier, in all cases involving a matter of public concern, the plaintiff has the burden of proving falsity. In these cases, if a plaintiff fails to prove a statement was false, the defendant will win without having to prove the statement was true. However, defendants can also use truth as a proactive defense. The truth of a statement rests on the overall gist of a statement or story. Minor inaccuracies will not destroy a truth defense, and “substantially” true statements will be considered true for a defamation suit. In addition, the US Supreme Court has ruled that intentionally changing quotes in a story does not equate to actual malice unless the changes substantially increase damage to the plaintiff’s reputation or add to the defamatory meaning of the words.73

Opinion, Hyperbole, and Fair Comment and Criticism

Some statements are incapable of being true or false. Thus, while in 1990, in Milkovich v. Lorain Journal Co.,74 the US Supreme Court declined to establish separate constitutional protection for opinion, the Court has ruled that some statements are not actionable in a libel suit because they are not false statements of fact. First, exaggerated, loose, figurative language; rhetorical hyperbole; and parody are all protected by the First Amendment. For example, calling a worker who crossed picket lines “a traitor to his God, his country, his family and his class” does not literally mean the worker was guilty of treason, and no one would take the statement to imply it.75 Second, statements incapable of being proven true or false – such as imprecise evaluations like good, bad, or ugly – are not actionable. However, this does not mean there is a wholesale exemption for anything labeled
“opinion,” nor will using the phrase “in my opinion” (or IMO for those familiar with Twitter) automatically protect a speaker. Writing “In my opinion, John Jones is a child molester” on Facebook or tweeting “IMO, John Jones is a liar when he says he doesn’t know where his wife’s body is” would most certainly not be a protected opinion. Furthermore, even unverifiable statements of opinion can lose their protection if they imply the existence of false, defamatory, but undisclosed facts; are based on disclosed but false or incomplete facts; or rest on an erroneous assessment of accurate information.

Statements of opinion are also protected under the common law defense of fair comment and criticism. This privilege protects non-malicious statements of opinion about matters of public interest. The opinion must be “fair,” and many courts have stated this means the opinion must have a factual basis that has either been provided by the plaintiff, is generally known to the public, or is readily available to the public. Thus, food critics, movie reviewers, and other commentators who might post a scathing review of a new restaurant, film, or music album are generally protected. For example, posting that a restaurant’s new sauce tasted “like rotten garbage” is not an actionable statement, whereas “in my opinion, the restaurant regularly uses expired food products” would be actionable because it implies a statement of fact – that the restaurant uses unsafe food in the preparation of its dishes. This is particularly important given the proliferation of websites on which individuals can share their opinions about everything from the restaurant down to the street,76 the reliability of a plumber,77 or the quality of a tour guide78 to the suitability of miniature ponies as guide animals for the vision-impaired.79

Privileges

Some false, defamatory statements of fact are still protected by law. An absolute privilege protects a speaker regardless of the speaker’s accuracy or motives. In other situations, a speaker may have a qualified privilege. A qualified privilege protects a statement only when certain conditions are met that vary from jurisdiction to jurisdiction. The US Constitution provides an absolute privilege from libel litigation for members of Congress when making remarks on the floor of either house.80 Federal legislators are also protected when communicating in committee hearings, legislative reports, and other activities. Statements made beyond the legislative process, however, are not protected.81 The official statements of executive branch officers and all comments made during judicial proceedings – whether by judges, lawyers, or witnesses – also have absolute privilege from libel litigation. Similarly, the fair report privilege, sometimes called the reporter’s privilege, is a common law qualified privilege that protects reports of official government proceedings and records if the reports are (1) accurate, (2) fair or balanced, (3) substantially complete, and (4) not motivated by malice or ill will.
Under common law, courts recognize that the public needs access to information regarding the workings of government, and reporters should be free to communicate what happens at public meetings and the contents of public documents. Thus, someone who reports a defamatory comment made in an official government proceeding or a defamatory statement from an official government document cannot be sued for libel as long as the conditions of the privilege are met. Courts in some states have said the report must be attributed to the official record or meeting for the privilege to apply, while other states have ruled the privilege can only be claimed by a member of the press. In addition, the privilege does not extend to comments made by government officials outside of official proceedings. The reporter’s privilege also extends to any and all statements made during official judicial proceedings by judges, witnesses, jurors, litigants, and attorneys and any information obtained from most official court records and documents filed with a court. But the defense has its limits. In 2021, for example, a federal court ruled that CNN was not protected by the defense when it was sued for $300 million by law professor and legal commentator Alan Dershowitz. The court ruled that CNN used an edited and misleading version of comments Dershowitz made to Congress during former President Trump’s impeachment trial.82

Privileges for communications of mutual interest protect communication between two individuals when those individuals have a common or shared interest. In these situations, a statement is privileged if (1) it is about something in which the speaker has an interest or duty, (2) the hearer has a corresponding interest or duty, (3) the statement is made to protect that interest or performance of that duty, and (4) the speaker honestly believes the statement to be true. Thus, an email between business partners or corporate employees that defamed a third party might be protected under this privilege. The privilege also protects members of religious organizations, fraternities, sororities, and educational institutions.

Some jurisdictions recognize a privilege known as neutral reportage. This privilege allows third parties to report statements by reliable sources even if the third party doubts the accuracy of the statement. The privilege is based on the idea that the accusation itself is newsworthy, even if the accusation might not be true. The privilege has not been widely recognized by courts and has never been recognized by the US Supreme Court.83 Those courts that have recognized the privilege typically require that the charges be (1) newsworthy and related to a public controversy, (2) made by a responsible person or organization, (3) made against a public official or public figure, (4) accurately reported with opposing views, and (5) reported impartially. In addition to being limited to only those jurisdictions where it has been recognized, like the reporter’s privilege, this defense also has its limits. In 2021, a Delaware judge denied a motion by Fox News to dismiss Dominion Voting Systems’ $1.6 billion defamation lawsuit against the network.84 The judge in the case noted that the neutral reportage defense requires a defendant to show
they “accurately and dispassionately” reported a newsworthy event. The judge wrote that it was conceivable that Fox’s reporting was inaccurate and unbalanced. For example, the judge noted the network refused to report evidence that the election was fair and didn’t report information provided to the network by Dominion or the Department of Justice. The judge concluded there was evidence that “Fox intended to keep Dominion’s side of the story out of the narrative” and that “Fox failed to report the issue truthfully or dispassionately by skewing questioning and approving responses in a way that fit or promoted a narrative in which Dominion committed election fraud.”

SLAPP Lawsuits

In some situations, libel suits can be filed in an attempt to silence criticism or stifle political expression. These suits are referred to as Strategic Lawsuits Against Public Participation, or SLAPP suits, a term coined by two University of Denver professors in articles that described lawsuits to silence opposition to a developer’s plan to cut trees or suits meant to stifle reports of police brutality.85 The purpose of a SLAPP suit is not necessarily to win damages. Rather, the goal is to discourage criticism because libel suits can be time-consuming and costly. As Professor Robert D. Richards wrote, “[T]he SLAPP filer does not have to win the lawsuit to accomplish his objective. Indeed, it is through the legal process itself – dragging the unwitting target through the churning waters of litigation – that the SLAPP filer prevails.”86 A proliferation of SLAPP suits led many states to pass anti-SLAPP laws. According to one source, 31 states and the District of Columbia have anti-SLAPP statutes designed to protect political speech and criticism by stopping SLAPP suits.87

In 2014, professional boxer Floyd Mayweather posted on Facebook and Instagram that his ex-girlfriend had an abortion and “killed our twin babies.” When the ex-girlfriend sued him for defamation, the boxer filed a motion to strike the claim under California’s anti-SLAPP statute. In 2017, on appeal, a California state appellate court ruled Mayweather’s social media posts were covered by the law.88

In recent years, SLAPP suits have become an increasingly popular way for businesses and professionals to silence criticism on the Internet or made via social media.89 For example, in 2010, a woman was sued for defamation over negative comments she made on Yelp.com about her dentist,90 while another man was sued for negative reviews of his chiropractor.91 Other suits have come from postings on Twitter92 or Facebook.93 In addition, SLAPP suits can also be designed simply to unmask anonymous and pseudonymous posters to social networking sites, blogs, and consumer gripe sites.94

Fortunately, online commentators have several remedies for SLAPP suits. First, citizens can countersue and contend a lawsuit was filed from spite or maliciousness rather than on legitimate legal concerns. Second, defendants in states with anti-SLAPP statutes can have the cases dismissed and, in some states, can recover court costs and attorneys’ fees. Anti-SLAPP statutes can
also shift the burden of proof from the defendant to the plaintiff, who must then show that the case should not be dismissed. In addition, some anti-SLAPP statutes allow defendants to recover compensatory damages if they can show a suit was filed against them to harass, intimidate, punish, or inhibit free speech. Unfortunately, the scope of protection provided by anti-SLAPP laws can vary widely, and different laws can have very different mechanics. The key to determining if an anti-SLAPP law applies to social media is whether the expression is covered by a particular state’s statute. According to Professors Matthew Bunker and Emily Erickson, in several cases involving social media, courts have found the expression is covered by anti-SLAPP statutes.95 In 2017, for example, a Texas appellate court considered a blog post critical of a company that sold dietary supplements that suggested the company might be a scam. The court found that the post was an exercise of free speech about a matter of public concern, and thus, the Texas anti-SLAPP statute applied.96 In 2018, a federal judge ordered Stephanie Clifford (better known as adult film actress Stormy Daniels) to pay former President Trump’s legal costs under Texas’ anti-SLAPP law. Clifford sued Trump over a tweet suggesting she lied when she stated that a man had threatened her on Trump’s behalf after she agreed to speak with In Touch Magazine about an affair she had with Trump.97

Social Media Challenges

In addition to the many ways defamation law has adapted to the Internet, discussed earlier, there are a number of ways the law has adapted to – or struggled to adapt to – social media specifically. For over a decade, lawyers and scholars have speculated how the courts would treat allegedly defamatory social media posts. One author, for example, suggested social media’s characteristics should lead courts to treat all posts on these media as opinion rather than fact. Noting that courts have consistently considered the context of a statement when determining if a statement was fact or opinion, attorney William Charron argued the nature of tweets should cause courts to mitigate the otherwise libelous nature of a tweet. Charron wrote, “Twitter is a ‘buyer beware’ shopping mart of thoughts, making it an ideal public forum to spark imagination and further discussion.”98

Others have suggested the unique nature of Twitter calls for new statutory or common law approaches to libel via Twitter. Legal commentator Julie Hilden, for example, wrote that current libel law is not a good fit for social media because much of it was crafted for a time when newspapers reigned supreme. Hilden predicted many, many more defamation suits based on Twitter.99 In 2013, attorney Ellyn Angelotti Kamke wrote that traditional legal approaches to libel law did not translate well to online speech, and she proposed numerous solutions to help libel law deal with the novelties presented by sites like Twitter. Angelotti Kamke focused on non-legal remedies to online defamatory
speech, including focusing on counter-speech (or fighting “bad speech” with “good speech”) and private sector reputation management tools. She also suggested online dispute resolution was a less expensive and more convenient option than litigation.100 David Lat and Zachary Shemtob argued that the “Internet has rendered Gertz not only obsolete but legally incoherent.”101 Lat and Shemtob wrote that digital media has “blurred, if not eliminated, the entire public/private distinction” Gertz relied on. Ashley Messenger and Kevin Delaney questioned whether being “Internet famous” was enough to be deemed a public figure and if all Internet users would soon be public figures.102

Other scholars have focused on how social media regulates itself. Professors Thomas E. Kadri and Kate Klonick examined the issue by comparing remedies available in the legal system with remedies available via content moderation systems, focusing on Facebook. Kadri and Klonick noted that although private companies are now platforms that host vast amounts of content and have great power over online discourse, as private companies, they are not bound by the First Amendment. Kadri and Klonick concluded, however, that Facebook’s approach to regulating speech has much in common with traditional judicial approaches, noting that “[b]oth have developed rules that seek to regulate harmful speech while protecting robust public discourse that is essential to self-governance.”103

One of the biggest issues facing non-media users of social media is the distinctions some courts have created between the constitutional protections afforded the media and those afforded to average individuals. As noted earlier, in effect, lower courts are removing a wide range of speech from constitutional protections at the very time new communication technologies, such as email, Facebook, and Twitter, are giving non-media individuals the power to reach wider and wider audiences.104 From the outset, it is important to note that the Supreme Court has never explicitly stated there should be a difference between media and non-media defendants. The confusion comes from a series of cases in which the Court appeared to be making the distinction without directly saying so and the Court’s unwillingness to directly answer the question since then.

While clarifying the rules about which plaintiffs would have to prove which level of fault to win their libel cases, the majority opinion in Gertz also introduced uncertainty as to whether the fault rules applied to all defendants. Justice Powell defined the issue in Gertz as “whether a newspaper or broadcaster that publishes defamatory falsehoods about an individual who is neither a public official nor a public figure may claim a constitutional privilege against liability for the injury inflicted by those statements.”105 In addition, when limiting presumed and punitive damages to showings of actual malice, Powell repeatedly referred to the need to protect “publishers,” “broadcasters,” and “the media” from juries,106 words that led some courts to conclude that constitutional limits did not apply in cases involving non-media defendants.107 Indeed, it was
the Vermont Supreme Court’s decision that Gertz did not apply to non-media defendants that brought Dun & Bradstreet v. Greenmoss Builders108 to the Court in the first place. After acknowledging lower court confusion over the media–non-media issue and citing six state supreme court decisions to illustrate the disagreement,109 Justice Powell’s plurality opinion ignored the issue altogether, even though it would have been the perfect opportunity to settle the question. The following year, in Hepps, while Justice Sandra Day O’Connor cited only “two forces” determining constitutional requirements in libel cases – the plaintiff’s status and “whether the speech at issue is of public concern”110 – her opinion served to further fuel confusion over whether the defendant’s status was a third force to be considered. In discussing various issues the Court was not required to address, O’Connor stated, “Nor need we consider what standards would apply if the plaintiff sues a nonmedia defendant.”111 Finally, in 1990, in Milkovich v. Lorain Journal Co., the Court continued to keep alive the possibility of a media–non-media distinction by declaring that provable falsity was required “at least in situations, like the present, where a media defendant is involved.”112 That statement was followed by a footnote stating, “In Hepps the Court reserved judgment on cases involving nonmedia defendants, and accordingly we do the same.”113

Based on these statements, and lower courts that have definitively held them to mean non-media defendants are not entitled to the same level of protection as media defendants, private individuals posting on social media sites need to be particularly wary of what they post. Numerous lower courts have been willing to treat media and non-media defendants differently in defamation cases.114 In 2011, the question of whether non-media libel defendants enjoy constitutional protection drew renewed attention – especially in the online world – when the US District Court for the District of Oregon held that a blogger did not qualify as a media defendant and, therefore, was not entitled to the protections of Gertz v. Robert Welch, Inc.115 because the blogger was “not media.”116 The judge’s ruling led to a jury verdict of $2.5 million against blogger Crystal Cox, whom a jury found had libeled Obsidian Finance Group and its cofounder, Kevin Padrick, in one of her posts.117 Ultimately, in 2014, the Ninth Circuit Court of Appeals overturned the verdict and ordered a new trial, ruling that Cox was covered by the First Amendment protections of the Gertz standard.118

In 2012, when this book was first published, it was unclear how the law would handle cases involving social media and defamation. In the last decade, however, numerous defamation lawsuits involving social media have made it to court. In 2012, for example, a Texas jury awarded $13.78 million in damages to a Texas couple, Mark and Rhonda Lesher, for statements made by multiple individuals on Topix.com, self-described as “the world’s largest community news site.” The case involved 1,700 separate posts – which began after the Leshers were accused of sexual assault – that described the couple as sexual deviants, molesters, and drug dealers.119 Even though the verdict was ultimately overturned by the trial
judge, such a large award directed at multiple defendants raised numerous questions about social media and damages related to the number of negative posts, how spread out the posts are over time, and how audience reaction and the mob mentality of some social media sites should factor into damages.

In 2016, political strategist and public relations consultant Cheryl Jacobus sued then-presidential candidate Donald Trump over tweets he made about her. In response to comments Jacobus made about Trump on CNN, Trump tweeted, “@cherijacobus begged us for a job. We said no and she went hostile. A real dummy!” Two days later, Trump tweeted, “Really dumb @CheriJacobus. Begged my people for a job. Turned her down twice and she went hostile. Major loser, zero credibility!”120 The judge in the case ruled that Trump’s tweets about Jacobus were “loose, figurative, hyperbolic” and thus not capable of verification. Trump’s tweets, according to the judge, were vague and simplistic insults not worthy of serious consideration.121 Similarly, as noted earlier, in 2018, Stephanie Clifford, better known as adult film star Stormy Daniels, sued Trump over a tweet Trump posted in response to statements made by Daniels that related to her alleged affair with Trump.122 In 2020, the Ninth Circuit Court of Appeals upheld a lower court’s decision that Trump’s tweet about Stormy Daniels was an “extravagant exaggeration employed for rhetorical effect”123 and was thus not a false statement of fact that could be the subject of a defamation lawsuit. As noted, Clifford was also ordered to pay Trump’s legal fees.

As these cases demonstrate, courts are not creating brand-new laws to deal with defamation via social media. As noted, the law of defamation is centuries old, yet typically, little of it has been created specifically for a new medium. Although Section 230 stands out in contrast, most defamation law has simply had to adapt to new technologies over time. To date, the same can be said of social media.

FREQUENTLY ASKED QUESTIONS

1. Can I get in trouble for posting a vicious lie about somebody on Facebook?
Yes, if the lie damages the person’s reputation. Posting something on Facebook is considered “publishing.” If you post a vicious lie about somebody on Facebook and, as a result, people think less of them, you’ve damaged their reputation with a false statement of fact. You can be sued for defamation for a Facebook post, a tweet, or a video posted on the Internet.

2. If I repeat something I thought was true but turned out to be wrong on Twitter, will I be responsible for it?
It depends. Generally, anyone who repeats someone else’s statements is just as responsible for the defamatory content as the original speaker.
This is sometimes described as “The bearer of tales is as guilty as the teller of tales.” However, simply posting a link to a defamatory statement or retweeting someone else’s defamatory tweet usually will not make you liable. As noted, most – though not all – courts have ruled that sharing a link to defamatory content is not considered a “republication” or a new publication. On the other hand, if you create your own tweet based on a false, defamatory tweet, modify the original tweet in a defamatory way, or add your own defamatory statement to the tweet, you can be liable. And at least one court has ruled you might be liable if you have reason to believe the tweet was defamatory and you seek to share it with a new audience.

3. Is it libelous to record something in public and post it on YouTube?
Not if it’s true. Defamation is a false statement of fact. Recording a real event can’t be defamatory unless you misidentify the individuals in the video, suggest the video is something that it is not, or add some sort of false connotation to the video. For example, in 2019, an interaction near the Lincoln Memorial between Nicholas Sandmann, a Park Hills, Kentucky, Covington Catholic High School student, and Nathan Phillips, a Native American activist, was videoed and then uploaded to social media platforms Instagram, Twitter, and YouTube, receiving millions of views. Several media outlets caricatured Sandmann as a smirking teenage racist, something that turned out to be questionable when one viewed the entire unedited video. Based on media coverage of the incident, Sandmann’s family filed lawsuits against The Washington Post, CNN, and other media outlets. The suits against The Washington Post and CNN were settled for undisclosed sums.

4. What should I do if somebody sues me for libel for posting an opinion on a site such as Yelp?
You should consult a lawyer who is familiar with your state’s laws regarding defamation. It’s possible your statement will be protected because it is constitutionally protected opinion, because it is considered fair comment and criticism under common law, or because the lawsuit is actually designed to stifle your speech rather than win damages against you. Many states have anti-SLAPP statutes that are designed to protect people who are sued just to keep them quiet. If you qualify for protection under an anti-SLAPP statute, you can have the case dismissed before trial, and in some states, you can be awarded attorney fees from the plaintiff.
In addition, in 2016, Congress passed the Consumer Review Fairness Act. This law banned so-called gag clauses in contracts. These clauses, often included in the fine print of a contract, are designed to prevent consumers from publicly criticizing the companies they do business with. Businesses that violate the law can be fined $5,000, and consumers can receive up to $10,000 if they can prove a business acted recklessly in violating the law.

5. Why can’t I sue a social media site like Facebook or Twitter for libel if someone defames me on the site?
In 1996, in an effort to get Internet service providers (ISPs) to police indecent sexual content, Congress passed a law that granted immunity to websites that contain defamatory content posted by third parties. Prior to this law, ISPs would not edit the content on their websites because doing so made them liable for the defamatory postings of third parties. In the years after the law was passed, courts interpreted its protections broadly, granting blanket immunity to almost all websites that host the content of others, including Facebook and Twitter. This also has allowed websites like College ACB, Yik Yak, and AutoAdmit to host vicious, untrue, defamatory statements without having to worry about being sued.

Notes
1. David S. Ardia, Reputation in a Networked World: Revisiting the Social Foundations of Defamation Law, 45 Harv. C.R.-C.L. L. Rev. 261, 262 (2010).
2. See Frank H. Easterbrook, Cyberspace and the Law of the Horse, U. Chi. Legal F. 207 (1996) for a discussion of the application of real space laws to cyberspace situations.
3. 47 U.S.C. 230 (1996).
4. Restatement (Second) of Torts § 559 (1977).
5. Reputation is a personal right and cannot be inherited. Friends, relatives, or associates of a person may only sue for defamation if they have been defamed personally.
6. Texas Beef Group v. Winfrey, 11 F. Supp. 2d 858 (1998), aff’d, 201 F.3d 980 (5th Cir. 2000).
7. 47 U.S.C. 230 (1996).
8. 1995 WL 323710 (N.Y. Sup. Ct. 1995).
9. 47 U.S.C. 230 (f) (2).
10. 47 U.S.C. 230 (f) (3).
11. Pub. L. No. 115–164 (April 11, 2018).
12. Jones v. Dirty World Entertainment Recordings, LLC, 755 F.3d 398 (6th Cir. 2014).
13. Nunes v. Twitter et al., Case No. CL19–1715–00 (Va. Cir. Ct. June 24, 2020).
14. But see Fair Housing Council of San Fernando Valley v. Roommates.com, LLC, 521 F.3d 1157 (9th Cir. 2008) (holding immunity under Section 230 did not apply to an interactive online operator whose questionnaire violated the Fair Housing Act); Federal Trade Commission v. Accusearch, 570 F.3d 1187 (10th Cir. 2009) (holding that Section 230 did not grant a website immunity from prosecution under federal wiretapping law for providing access to phone records by virtue of the fact that the information was provided by third parties).
15. Lukhova v. Halper, No. 20–1368 (4th Cir. April 15, 2020). See also Sundance Image Technology, Inc. v. Cone Editions Press, Ltd., No. 02-CV-2258 (S.D. Cal. March 7, 2007).
16. 12 F.4th 890 (2021).
17. Firth v. State, 98 N.Y.2d 365 (2002). See also Nationwide Biweekly Administration, Inc. v. Belo Corp., 512 F.3d 137 (5th Cir. 2007).
18. In re Philadelphia Newspapers, LLC, 690 F.3d 161 (3d Cir. 2012).
19. Madeleine Schachter & Joel Kurtzberg, Law of Internet Speech 424 (2008).
20. See, e.g., Fawcett Publications, Inc. v. Morriss, 377 P.2d 42 (Okla. 1962) (holding that a statement that “members” of a university football team used an amphetamine nasal spray to increase aggressiveness libeled all 60 members of the team).
21. Nichols v. Item Publishers, Inc., 309 N.Y. 596, 600–601 (1956).
22. For example, some courts continue to hold that a false statement that an individual is gay is defamatory, and some still view this characterization as defamatory per se. Other courts have held that changing moral attitudes toward homosexuality have made such rulings outdated, and some recent cases have questioned whether an allegation of homosexuality should ever be construed as defamatory. See Matthew D. Bunker, Drew E. Shenkman, and Charles D. Tobin, “Not That There’s Anything Wrong with That: Imputations of Homosexuality and the Normative Structure of Defamation Law,” 21 Fordham Intell. Prop. Media & Ent. L.J. 581 (2011).
23. 388 U.S. 130, 347–348 (1967).
24. 376 U.S. 254 (1964).
25. 418 U.S. 323, 384–385 (1974) (White, J., dissenting).
26. Nat Stern, Private Concerns of Private Plaintiffs: Revisiting a Problematic Defamation Category, 65 Mo. L. Rev. 597, 599 (2000).
27. Sullivan, 376 U.S. at 279–280 (1964).
28. Id.
29. 388 U.S. 130 (1967).
30. Id. at 164 (Warren, C.J., concurring).
31. 418 U.S. 323 (1974).
32. Id. at 347–348.
33. Id. at 349.
34. Id.
35. See, e.g., id. at 350.
36. 472 U.S. 749, 763 (1985).
37. Id. at 756–757.
38. See Ruth Walden & Derigan Silver, Deciphering Dun & Bradstreet: Does the First Amendment Matter in Private Figure-Private Concern Defamation Cases? 14 Comm. L. & Pol’y 1 (2009).
39. Rosenblatt v. Baer, 383 U.S. 75, 85 (1966).
40. Bruce Sanford, Libel and Privacy § 7.2.2.2, at 260–264 (2d ed. 1999).
41. Gertz v. Robert Welch, Inc., 418 U.S. 323, 345 (1974).
42. Id.
43. See, e.g., Ellis v. Time, Inc., WL 863267 (D.D.C. 1997) (holding a plaintiff’s multiple postings on CompuServe about a public controversy made him a limited-purpose public figure).
44. McCafferty v. Newsweek Media Group, Ltd., 955 F.3d 352 (3d Cir. 2020).
45. 418 U.S. at 345.
46. Harte-Hanks Communications, Inc. v. Connaughton, 491 U.S. 657, 692 (1989).
47. St. Amant v. Thompson, 390 U.S. 727, 730 (1968).
48. Garrison v. Louisiana, 379 U.S. 64, 74 (1964).
49. Harte-Hanks Communications, Inc., 491 U.S. at 664.
50. Fairfax v. CBS Corp., Case No. 20–1298 (4th Cir. June 23, 2021).
51. 12 F.4th 890 (2021).
52. In Herbert v. Lando, 441 U.S. 153 (1979), the Supreme Court ruled that the First Amendment did not bar examining the editorial process and a reporter for 60 Minutes could be asked how he evaluated information prior to publication of a story about a controversial retired Army officer. The Court held that the actual malice standard made it important for public officials and public figures to know both the actions and state of mind of defendants.
53. 475 U.S. 767, 771 (1986).
54. Id. at 775.
55. Id. at 776.
56. See, e.g., Connick v. Myers, 461 U.S. 138 (1983); Pickering v. Bd. of Educ., 391 U.S. 563 (1968).
57. See, e.g., Snyder v. Phelps, 562 U.S. 443 (2011).
58. See, e.g., Time, Inc. v. Hill, 385 U.S. 374 (1967).
59. See, e.g., Bartnicki v. Vopper, 532 U.S. 514 (2001).
60. 131 S. Ct. 1207, 1216 (2011) (quoting San Diego v. Roe, 543 U.S. 77, 83 (2004)).
61. Id. (quoting Connick, 461 U.S. at 146; and San Diego v. Roe, 543 U.S. at 83–84).
62. Id. (quoting Dun & Bradstreet, 472 U.S. at 762).
63. Id.
64. Id. at 1217.
65. See, e.g., Roffman v. Trump, 754 F. Supp. 411, 418 (E.D. Pa. 1990) (holding statements published in the Wall Street Journal, Business Week, Fortune, and the New York Post were “of no concern to the general public”); Katz v. Gladstone, 673 F. Supp. 76, 83 (D. Conn. 1987) (statements made in book reviews that appeared “in a number of periodicals” lacked “public concern”).
66. See https://reason.com/volokh/ (last visited October 22, 2021).
67. Noonan v. Staples, 556 F.3d 20, 22 (1st Cir. 2009) (holding that an email sent to 1,500 Staples employees regarding the firing of a fellow employee for violating the company’s travel and expense policy and code of ethics was a matter of private concern).
68. Nick Bilton, Alex From Target: The Other Side of Fame, New York Times, November 12, 2014.
69. Jon Ronson, How One Stupid Tweet Blew Up Justine Sacco’s Life, New York Times Magazine, February 12, 2015.
70. For a discussion of harm and the unique nature of defamation in American tort law, see David Anderson, Reputation, Compensation & Proof, 25 Wm. & Mary L. Rev. 747 (1984). For a discussion of the concept of harm in the context of Internet-based defamation, see Amy Kristin Sanders, Defining Defamation: Evaluating Harm in the Age of the Internet, 3 UB J. Media L. & Ethics 112 (2012).
71. See Walden & Silver, supra note 38, for a discussion of how different jurisdictions have handled the Supreme Court’s decision in Dun & Bradstreet v. Greenmoss Builders.
72. There is at least one anomaly to this statement. In Noonan v. Staples, 561 F.3d 4, 7 (1st Cir. 2009) the US Court of Appeals for the First Circuit refused to hold unconstitutional a Massachusetts law that allowed liability for true defamatory statements. It is important to note, however, the First Circuit apparently never addressed the constitutionality of the 1902 Massachusetts statute in its opinion because the defendant’s attorneys never raised the issue.
73. Masson v. New Yorker Magazine Inc., 501 U.S. 496 (1991).
74. 497 U.S. 1 (1990).
75. National Ass’n of Letter Carriers v. Austin, 418 U.S. 264 (1974).
76. See, e.g., www.yelp.com.
77. See, e.g., www.angieslist.com/AngiesList.
78. See, e.g., www.tripadvisor.com.
79. See, e.g., Burleson v. Toback, 391 F. Supp. 2d 401 (M.D.N.C. 2005).
80. U.S. Const., art. 1, § 6.
81. Hutchinson v. Proxmire, 443 U.S. 111 (1979).
82. Dershowitz v. Cable News Network, No. 20–61872-CIV-SINGHAL (S.D. Fla. 2021).
83. Several states and the District of Columbia have accepted neutral reportage in some form, as have several US Circuit Courts of Appeal. The inconsistent manner in which it has been applied, however, makes it an unreliable defense.
84. US Dominion, Inc. v. Fox News Network, LLC, No. N21C-03–257 EMD (Del. Super. Ct. December 16, 2021).
85. See Penelope Canan & George W. Pring, Strategic Lawsuits Against Public Participation, 35 Soc. Prob. 506 (1988); George W. Pring & Penelope Canan, SLAPPs: Getting Sued For Speaking Out (1996).
86. Robert D. Richards, A SLAPP In the Facebook: Assessing the Impact of Strategic Lawsuits Against Public Participation on Social Networks, Blogs and Consumer Gripe Sites, 21 DePaul J. Art Tech. & Intell. Prop. L. 221, 231 (2011).
87. Austin Vining & Sarah Matthews, Overview of Anti-SLAPP Laws, Reporters Committee for Freedom of the Press, June 2021, www.rcfp.org/introduction-anti-slapp-guide.
88. Jackson v. Mayweather, 10 Cal. App. 5th 1240 (2017).
89. Richards, supra note 86 at 221–242.
90. Want to Complain Online? Look Out. You Might Be Sued, USA Today, June 9, 2010, at 8A.
91. Elinor Mills, Yelp User Faces Lawsuit over Negative Review, CNET News, January 6, 2009, http://news.cnet.com/8301-1023_3-10133466-93.html.
92. See, e.g., Lisa Donovan, Tenant’s Twitter Slam Draws Suit, Chicago Sun-Times, July 28, 2009; Dan Frosch, Venting Online, Consumers Can Land in Court, New York Times, June 1, 2010, at A1.
93. Rex Hall Jr., Western Michigan University Student Sued in Battle with Towing Company: Facebook Group Airing Complaints about T & J Towing Takes Off, Kalamazoo Gazette, April 14, 2010.
94. See Richards, supra note 86 at 242–253.
95. Matthew D. Bunker & Emily Erickson, Ain’t Turning the Other Cheek: Using Anti-SLAPP Law as a Defense in Social Media, 87 UMKC L. Rev. 801 (2019).
96. MacFarland v. Le-Vel Brands, LLC, No. 05–16–00672-CV (Tex. App. March 23, 2017).
97. Clifford v. Trump, 339 F. Supp. 3d 915 (C.D. Cal. 2018).
98. Michael Charron, Twitter: A “Caveat Emptor” Exception to Libel Law, 1 Berkeley J. Ent. & Sports L. 57, 58 (2012).
99. Julie Hilden, Should the Law Treat Tweets the Same Way it Treats Printed Defamation? Verdict, October 3, 2011, http://verdict.justia.com/2011/10/03/should-the-law-treat-defamatory-tweets-the-same-way-it-treats-printed-defamation.
100. See Ellyn M. Angelotti, Twibel Law: What Defamation and Its Remedies Look Like in the Age of Twitter, 13 J. High Tech. L. 430, 487–500 (2013).
101. David Lat & Zach Shemtob, Public Figurehood in the Digital Age, 9 J. Telecomm. & High Tech. L. R. 404 (2011).
102. Ashley Messenger & Kevin Delaney, In the Future We Will All Be Limited-Purpose Public Figures, 30 Comm. Law. 4, 5 (2014).
103. Thomas E. Kadri & Kate Klonick, Facebook v. Sullivan: Public Figures and Newsworthiness in Online Speech, 93 S. Cal. L. Rev. 37 (2019).
104. Rebecca Phillips, Constitutional Protection for Nonmedia Defendants: Should There be a Distinction between You and Larry King? 33 Campbell L. Rev. 173 (2010).
105. 418 U.S. 323, 332 (1974) (emphasis added).
106. See, e.g., id. at 340–341.
107. See, e.g., Rowe v. Metz, 579 P. 2d 83, 84 (Colo. 1978); Harley-Davidson Motorsports, Inc. v. Markley, 568 P. 2d 1359, 1363 (Ore. 1977); Greenmoss Builders v. Dun & Bradstreet, 461 A.2d 414, 417–418 (Vt. 1983); Denny v. Mertz, 318 N.W.2d 141, 153 (Wis. 1982).
108. 472 U.S. 749 (1985).
109. Id. at 753 n.1.
110. Id. at 775.
111. Id. at 779 n.4.
112. 497 U.S. 1, 19–20 (1990).
113. Id. at 20 n.6 (citation omitted).
114. For further discussion, see Walden & Silver, supra note 38; Phillips, supra note 104.
115. Obsidian Finance Group LLC v. Cox, 812 F. Supp. 2d 1220 (D. Oregon 2011).
116. Id.
117. Douglas Lee, Troubling Rulings Paved Way for Blogger’s Libel Conviction, First Amendment Center, December 19, 2011, www.firstamendmentcenter.org/troubling-rulings-paved-way-for-bloggers-libel-conviction.
118. Obsidian Finance Group v. Cox, 740 F.3d 1284 (9th Cir. 2014).
119. Debra Cassens Weiss, Judge Overturns $13.8 Million Award to Lawyer and Wife for Online Libel, ABA Journal, June 14, 2012.
120. Jacobus v. Trump, 51 N.Y.S. 3d 330 (NY Sup. Ct. 2017).
121. Id.
122. Clifford v. Trump, No. 18–56351 (9th Cir. 2020).
123. Id.

Chapter 3

Privacy, Surveillance, and Data Protection
Amy Kristin Sanders1
UNIVERSITY OF TEXAS AT AUSTIN

Introduction

Pick up any newspaper, turn on any 24-hour news channel, or scroll through your timeline on the social media platform of your choice, and you will no doubt come across the discussion of how Big Tech is capitalizing on users’ personal information. Concerns about digital privacy permeate conversations about today’s social media landscape. Listen to Mark Zuckerberg or Jack Dorsey speak, and they’ll tell you their companies provide a valuable service – albeit a service that collects user data at every turn.2 Some of this data is voluntarily disclosed by users, while other pieces are collected behind the scenes as we browse, scroll, and post. But the disclosure of this information can leave users vulnerable – both in the online world and their physical world – to violations of their privacy.

Given the massive collection of this data, from the features of our faces to the everyday items we buy from Amazon, some scholars have begun to ask whether it’s too late to turn back the clock. In fact, tech reporter Sara Morrison dubbed 2020 “the year we gave up on privacy.”3 Meanwhile, lawmakers have been proposing drastic – and sometimes constitutionally suspect – measures purportedly aimed at protecting users without great success.4

Certainly, in the United States, the absence of a comprehensive data protection scheme has left users vulnerable to tech companies’ collection of information. Instead, users are at the mercy of a patchwork of legal protections assembled under common law, statutory law, and administrative rules and regulations. Unlike citizens in a growing number of countries, Americans have little control over how their data is collected, stored, processed, or transferred – particularly when it’s being gathered through their voluntary use of social media platforms. Quite simply, they are subject to the adhesion contracts presented by tech companies and other websites. These contracts typically offer users little option other than to agree to the stated terms if they want to use the service or site.

In 2021, Security.org released a report rating the various tech companies’ privacy policies. Google earned an F, Twitter earned a C−, Facebook earned a C, Amazon earned a B−, and Apple led the way with an A+ when evaluated for their data collection, storage, and sharing practices.5

Not surprisingly, social media companies draft these contracts, typically referred to as terms of service or user agreements, using the language most favorable to their business practices – business practices that require the continual inflow of user data. Written by the companies’ attorneys, the densely worded agreements, loaded with jargon and boilerplate language, are often so onerous that average users – and even many attorneys – just click through them without a second thought.

As the Internet and social media platforms have matured, they have become a nearly essential aspect of our lives. Once lauded by the US Supreme Court as a “new marketplace of ideas,”6 the Internet of the 2020s has interwoven itself into our work, our education, and our entertainment – a fact that became abundantly clear during the COVID-19 pandemic when even the youngest schoolchildren learned to read and write from teachers livestreaming classes into our homes.7 Nine out of ten American adults surveyed by Pew Research Center reported they believed the Internet was important to them personally during the pandemic. More than half (58%) said it was “essential.” Although the “digital divide” between underprivileged members of society and those with widespread and affordable access to computers and the Internet remains a serious concern around the world, global lockdowns demonstrated the value of online technologies in providing a semblance of normalcy. But as we worked, learned, communicated, and even shopped online, the companies behind these services were scooping up valuable personal data.

The legal landscape is struggling to keep up with the technological changes happening in cyberspace. Lawmakers, judges, and even privacy scholars are grappling with how to define and protect “private information.” Internationally renowned privacy scholar Daniel Solove has defined privacy as including “freedom of thought, control over one’s body, solitude in one’s home, control over personal information, freedom from surveillance, protection of one’s reputation, and protection from searches and interrogations.”8 Historically, American privacy law has been grounded in the concept of a person’s “reasonable expectation of privacy.”9 But can you have a reasonable expectation of privacy with regard to self-disclosed information? Does it matter if you know whom you are disclosing it to? If you do have an expectation of privacy, how should we protect it? Do you ever have a right to demand the removal of information that is already public? This chapter will endeavor to answer those key questions by outlining the various legal protections that might be employed to protect users’ privacy.

The Aging Privacy Torts

Not long after Louis Brandeis and Samuel Warren published their influential Harvard Law Review article in 1890, lawmakers began to consider the best legal remedies for violations of “the right to be let alone.”10 Despite most Americans’ belief in their “right to privacy,” the word “privacy” never
appears in the US Constitution. Instead, our privacy rights are largely tied to the US Supreme Court’s jurisprudence recognizing an implied right based on the rights granted by the First, Third, Fourth, and Ninth Amendments.11 Privacy advocates point to the First Amendment’s protection of both anonymous speech and assembly as a basis for an implied right to privacy. Similarly, the Third Amendment prohibits requiring citizens to open their homes to troops. The Fourth Amendment’s protections against unlawful search and seizure suggest rights to privacy in both your person and your effects. The Ninth Amendment makes clear that the mere exclusion of a right from the Constitution should not be used as evidence that such a right does not exist.

Our Constitution, though, was primarily concerned with protecting citizens from intrusions by the government rather than private individuals or corporations. At the outset, then, the civil law system largely protected our privacy rights from invasion by non-government actors through the development of four privacy torts, defined and recognized on a state-by-state basis. Statutory law also comes into play, including the patchwork of sector-specific laws that govern the collection, storage, and use of particular types of information, including education, health, and financial records.

Although a discussion of privacy torts, such as appropriation, trespass, false light, or intrusion upon seclusion, is largely outside the scope of this chapter, one of them is particularly germane. Publication of private facts, originally conceived in the era of newspapers and print media, protects individuals who have found matters concerning their private lives publicized in a way that would be highly offensive to a reasonable person.12 Generally, plaintiffs cannot recover if the information published is of a legitimate concern to the public. But over time, this distinction between what is reasonably expected to be private and what the public has a right to know has become increasingly blurry. As a result, the privacy tort is largely meaningless for the types of privacy harms occurring on social media because they often involve voluntary disclosures of information. Many scholars and privacy advocates have argued against the idea that information cannot be considered private if there’s been any public disclosure made.13

The Internet alone is not solely responsible for the waning scope of the private facts tort; it was in peril long before we joined Facebook and Twitter. Even before the Internet was commonplace, Joseph Elford wrote, “The private facts tort is a mess. It has disappointed those who hope it would enhance individual privacy while it has exceeded all estimations of its chilling effect on speech.”14 The US Supreme Court’s 1989 decision in Florida Star v. B.J.F. paved the way for significant erosion of tort protections, with the Court ruling that the First Amendment protected a news organization that published lawfully obtained information on a matter of public concern unless the state could prove the publication interfered with a state interest of the highest order. Fast forward 30 years, and the courts have only further constrained the tort.

The modern development of social media – designed to encourage the sharing of information with the goal of expanding a person’s network – is squarely at odds with the private facts tort’s desire to prevent the disclosure of information. Further, the tort’s requirement that the information sharing be “highly offensive to a reasonable person” demands judges and juries subjectively evaluate what society considers outside the bounds of decency – a standard that has clearly been affected by the wide array of content shared on the Internet. As envisioned by Warren and Brandeis, the privacy right protected individuals from the prying eyes of the media who sought to publish the private moments of well-known people. The information shared on social media, in nearly every instance, is disclosed by users themselves – rendering the private facts tort largely ineffective.

Waning Contractual Protections

In many instances, contract law steps in to protect the privacy of an individual or entity through the execution of nondisclosure agreements or other documents designed to limit the flow of information. But as mentioned earlier, the parties drafting these contracts often wield significant power over the parties they intend to bind. This allows them to control the terms of the agreement, ensuring themselves a more favorable position.

More than a century ago, Warren and Brandeis foresaw the challenges posed by relying on contract law to protect privacy, noting “modern devices afford abundant opportunities for the perpetuation of [harm] without any participation by the injured party” and asserting the need for another legal approach. Discussing the emerging technology of the late 19th century, they wrote,

While, for instance, the state of photographic art was such that one’s picture could seldom be taken without consciously “sitting” for the purpose, the law of contract or trust might afford the prudent man sufficient safeguards against the improper circulation of his portrait; but since the latest advances in the photographic art have rendered it possible to take pictures surreptitiously, the doctrines of contract and of trust are inadequate to support the required protection, and the law of tort must be resorted to.15

Given their concerns about the crude art of still photography at the time, one only wonders what Warren and Brandeis would have said about the prevalence of cellphone cameras, the noxiousness of up-the-skirt photography, or the myriad other nearly undetectable surveillance practices that have become commonplace in the 2020s.

Patchy US Statutory Protection and Movement in the States

In the US, personal data is primarily protected through the enactment of statutory laws such as the Health Insurance Portability and Accountability Act
(HIPAA),16 the Family Educational Rights and Privacy Act (FERPA),17 and the Fair Credit Reporting Act (FCRA)18 that cover limited types of personal data.

One important federal law of note for anyone using social media is the Children’s Online Privacy Protection Act (COPPA).19 Dating back to 1998 and revised more recently in 2013, the law requires the FTC to regulate the collection of children’s data online. If a website or online service is primarily directed at users under the age of 13, it must comply with extensive notice and parental consent requirements to collect personally identifiable data, including photos, geolocation data, audio and video files with children’s images or voices, and any persistent identifiers, such as cookies or IP addresses. The law also regulates the sharing and retention of children’s information. Not long after COPPA was amended, the FTC took action against two app developers, LAI Systems and Retro Dreamer, for violating its terms. They were fined more than $350,000.20

Unlike in Europe, no comprehensive federal data privacy law or central data protection authority exists in the United States. Because of this, a number of states have taken action individually to pass statutory data protection mandates. By 2022, every state had enacted some form of data breach notification law requiring that consumers be notified if their personal data has been compromised through a security breach. More than half of the states have enacted data disposal laws that establish rules for the retention and removal of personal data, and a growing number of states have data protection laws. The International Association of Privacy Professionals tracks these initiatives on its website.21

In many areas, California has led the way, and its California Consumer Privacy Act (CCPA) is perhaps the most well-known data protection mandate in the United States.22 Passed in 2018, the CCPA took effect in January 2020. The law gives users the right to know what information is being collected and how it is being used and shared, the right to delete information that’s been collected, and the right to opt out of the sale of personal information. A new measure, titled the California Privacy Rights Act of 2020, received overwhelming support from voters when it appeared on the November 2020 ballot. It creates a state-level agency – the California Privacy Protection Agency – that is responsible for implementing and overseeing the state’s data protection laws. The new law further expands the type of data protected to include sensitive information, including race, ethnicity, religion, genetic data, sexual orientation, and specified health information. The law will take effect on January 1, 2023, but it applies to all information collected after January 1, 2022. Further, it triples the potential fines for violations that involve data from users under the age of 16.

Two other states – Colorado and Virginia – have also passed comprehensive data protection statutes. Maryland, Massachusetts, Minnesota, Nevada, New York, and Rhode Island have some form of a data privacy law, though they aren’t as comprehensive in nature as California’s law. Given the nature of California’s law, it’s likely that most companies in the United States will seek to follow it because of the significance of doing business
in California. However, like the European Union’s privacy regulations detailed in what follows, this comprehensive data privacy law is likely to create compliance challenges for small-business owners or businesses that do a majority of their work in a small number of states and do not have the resources of a national corporation.

Lagging Enforcement Efforts

Even though the United States relies on a set of sector-specific data protection laws, the Federal Trade Commission has taken action against companies for deceptive practices related to the use of cookies. One of the first cases involved the online advertiser ScanScout, which claimed consumers could opt out of receiving targeted ads by setting their browsers to block cookies, but its practices still permitted some data collection. As part of a 2011 settlement with the FTC, ScanScout had to take better steps to disclose its data collection practices. It was even required to place a notice on its website.

More recently, the focus has been on enforcement efforts against big tech companies, such as Facebook and Google. Early on, Google faced a significant backlash around the world from its Street View application. The company intercepted personal data, including usernames, passwords, and emails, from unencrypted Wi-Fi networks and captured images that invaded people’s privacy with its Street View cameras. Nearly 40 US states brought a case against Google for Street View data collection. But the 2013 settlement was a mere $7 million. The FTC also investigated Google’s Street View program, but it took no action. It did, however, levy its largest-ever civil penalty (at the time) against Google – $22.5 million – for bypassing privacy settings on the Safari web browser. Moreover, Google agreed to be audited for 20 years as a result of deception in the launch of its short-lived Google Buzz social platform.

More recently, tech company leaders have been called before Congress on several occasions. Although lawmakers on both sides of the aisle seem unified in their concerns about Big Tech, no major legislation has successfully passed as of early 2022. In part, that is because the parties cannot agree on the regulatory priorities. Law professor Rebecca Allensworth noted,

Everyone has a bone to pick with Big Tech, but when it comes to doing something that’s when bipartisanship falls apart. At the end of the day, regulation is regulation so you will have a hard time bringing a lot of Republicans on board for a bill viewed as a heavy-handed aggressive takedown through the regulation of Big Tech.23

Although US regulators have long taken a light touch with American tech companies, Big Tech has received greater scrutiny in Europe. Throughout Europe, courts have required Google to take more privacy-protecting actions with Street View, including obscuring faces and license plates and prohibiting
the publication of certain photos unless the owners consented. Google is also required to anonymize the images of people caught on Street View if they request it. Google must notify the public in Europe before sending its Street View cars out on the roads.

The General Data Protection Regulation Leads the Way Globally

The European Union adopted extensive privacy regulations even before the Internet and social media became pervasive. In 1995, the EU enacted its Data Protection Directive (95/46/EC) with the goal of harmonizing data collection and processing laws. At the outset, EU member states had to comply with specific measures to protect individuals’ personal data. A major update to these regulations took place in 2018. The General Data Protection Regulation, or GDPR, removed some national control over data protection in favor of a unified approach, particularly in the area of enforcement.24 Although the GDPR permits the collection of personal data, it places limits on the reasons that data can be collected.

The gap between the US data protection laws and the EU scheme has become even more apparent as e-commerce has internationalized. But the strength of the GDPR and other countries’ laws that have been modeled after it often creates challenges for US businesses. One of the most controversial – and litigated – issues that has been raised involves the transfer of personal data from the EU to countries – like the United States – whose data protection schemes are not deemed “adequate” by EU standards.

In 2000, the US and the EU enacted a safe harbor that allowed US firms to continue to operate under the EU requirements without major changes. Under the program, the Commerce Department required US organizations to annually self-certify in writing that they agreed to adhere to the safe harbor’s requirements, including notice, choice, onward transfer, access, security, data integrity, and enforcement. But in 2013, Edward Snowden revealed the National Security Agency had been involved in large-scale surveillance, including programs that collected information about EU citizens. In response, a well-known privacy activist named Max Schrems filed a complaint with the Irish Data Protection Authority (DPA) about the transfer of his data between Facebook servers in Ireland and the United States. In Schrems I, the European Court of Justice invalidated the safe harbor agreement, and US firms operating internationally scrambled to ensure business continuity.25

One part of the solution was the 2016 Judicial Redress Act, which allowed non-US citizens to take legal action in American courts if US companies disclosed their private data to the US government. The European Commission also announced a new Privacy Shield around the same time. It was designed to replace the invalidated safe harbor agreement to allow US businesses to again
collect and transfer personal data from Europe. But many in the privacy community did not believe the protections were adequate. Schrems led the challenge to the Privacy Shield. In July 2020, the EU Court of Justice invalidated the Privacy Shield, saying in Schrems II that the protections were not “essentially equivalent” to those in the GDPR. US surveillance programs were chief among the court’s concerns.26

Since then, European privacy advocates have lodged hundreds of complaints with 30 EU regulatory bodies, targeting companies operating in the e-commerce, telecommunication, banking, and higher education spaces. The use of Google Analytics has also drawn scrutiny. In August 2020, Facebook was ordered to halt trans-Atlantic data flows to the United States. Yet digital news outlet TechCrunch reported that when it surveyed 30 major companies, they seemed to be “burying their head in the sand and hoping the legal nightmare goes away.”

To be sure, the collection, transfer, and processing of EU citizens’ data represent a particularly thorny issue. American organizations would be wise to work with legal counsel to ensure compliance with the GDPR if their business model relies at all on a European customer base. Moreover, a number of other countries have passed comprehensive data protection schemes similar to the GDPR – some even adopt its provisions wholesale. As a result, organizations engaged in multinational transactions must be particularly cautious about what data they collect, how they use it, and with whom they share it.

Other Attempts to Regulate Privacy on Social Media Platforms

One recent phenomenon that has lawmakers up in arms is doxxing, or harassment that stems from information that has been posted online. As an example, a troll will publish someone’s photo, phone number, address, social media handle, or even information about their children or spouse online with the goal of encouraging a large number of people to harass the person named. These “troll storms” might include nonstop phone calls, vulgar name-calling, and even threats of physical violence. By 2021, the Anti-Defamation League estimated that nearly 10% of Americans had been doxxed.27

Sometimes the goal is to force people to disengage from online life and activity, and those who have been doxxed frequently find it necessary to delete all their social profiles. Other times the intent is even more severe, requiring the intervention of law enforcement when threats of physical violence are received. Unfortunately, the release of such information is often not illegal. Those who have been targeted might be able to invoke harassment or cyberstalking laws, but they have typically been without legal recourse. Not surprisingly then, many states have begun to enact specific doxxing laws aimed at punishing the release of this type of personal information. In 2021 alone, at least 11 states either enacted doxxing laws or amended their cyberstalking
provisions to cover doxxing. Additional states are still considering similar provisions. Generally speaking, the laws that currently exist typically take one of three forms:

• Laws that criminalize the act of doxxing.
• Laws that provide those who have been doxxed with a civil right to sue.
• Laws that protect a specific group of people (jurors, for example) from targeted online harassment.

Civil rights organizations, including the ACLU (American Civil Liberties Union), have expressed worries that doxxing laws could be abused as a means of chilling freedom of expression. After all, some of the information that doxxing laws cover is publicly available elsewhere. As one example, could a police officer use a doxxing law against a protestor who recorded a video of the officer that included the officer’s name or badge number? The act of recording police in public is protected by the First Amendment. What if the video led to the officer being targeted and threatened even though that wasn’t the protestor’s intent? Should intent matter? How could the protestor prove their actions were not meant to endanger? In particular, whistleblowers may be targeted by anti-doxxing laws, but some supporters argue a carefully crafted law will prevent injustice. In an interview with The Markup, Anti-Defamation League attorney Lauren Krapf noted, “To the extent that these people are publishing information to share facts – and not acting with a level of intent that the information posted will be used to carry out criminal conduct such as death, bodily injury, or stalking – the Nebraska anti-doxxing law would not apply.”28 Others remain skeptical. Renowned privacy scholar Bruce Schneier noted, “Any of these laws could be subverted by the powerful.” Because these laws are quite new, few cases have made their way past the trial courts, so it remains to be seen just how many anti-doxxing laws the courts will consider to be constitutional.

Social Media Platform Privacy Policies

In addition to the legal regulations that govern privacy on social media platforms, users must also adhere to the site’s privacy policy. Privacy policies inform users about how the site or platform will use any personal information.29 Some of these policies were developed voluntarily, while others were drafted in response to legal mandates.30 The Federal Trade Commission plays an important role in overseeing social media platforms through its authority over unfair and deceptive trade practices. Companies that do not live up to the promises in their privacy policies or terms of service may find themselves in the FTC’s crosshairs.

Privacy policies can take many forms. Some are standalone policies that are presented separately from terms of service, while others may be demarcated as a section in the terms of use. Initially, these agreements took the form of “browsewrap” agreements, where the use of the site signaled a person’s intent to be bound by the agreements. Many privacy advocates bemoan the use of browsewrap agreements, arguing that users should have to actively manifest their intent to be bound through the act of checking a box that says they have read the policies and agree to the terms. Some countries’ data protection schemes now require the use of these “clickwrap” agreements. Typically, laws require that these policies be posted on the website or platform, that the organization abide by the policy, and occasionally that users be notified of any changes or updates to policies. These policies, particularly in the European Union, often go further because of legal requirements. They may require the company to tell you what information is collected, how it is stored, who it is shared with, and how it is used. This is particularly true when a platform wants to share user information with third parties, which is a common practice.

Terms of Service Agreements and Unauthorized Access

Breaching a platform’s terms of service agreement could lead to criminal charges in addition to a civil claim for breach of contract. Many statutes targeting cybercrime have provisions that address fraud and unauthorized use. As a result, many of these provisions have the effect of protecting users’ personal information by actively discouraging and punishing actions that compromise data protection.

One federal law in particular, the Computer Fraud and Abuse Act (CFAA), has played a prominent role in instances where terms of service agreements have been violated. The CFAA makes it a crime to engage in unauthorized access or to exceed the authority of one’s access.31 It permits a social media platform in its terms of service agreement to “spell out explicitly what is forbidden” or what constitutes unauthorized access on its platform.32 By virtue of breaching the terms of service, a person could then be considered to have violated the CFAA. The statute proscribes the “intentional access of a computer without authorization or exceed[ing] authorized access and thereby obtains . . . information from any protected computer if the conduct involved an interstate or foreign communication.”33 Early on, some federal appellate courts had construed the statute quite broadly, but a recent US Supreme Court case – the first time the high court has addressed the statute – suggested a narrower construction would be appropriate.

Before the Court decided Van Buren v. United States,34 many legal scholars debated what might count as unauthorized access. Clearly, hacking would be unauthorized, but what about merely using a computer in a manner the owner
disapproved? Luckily, it seems we now have a clearer idea of how to apply the CFAA.

The facts of Van Buren are quite illustrative. Nathan Van Buren was a police officer in Georgia. As a part of his job, he had access to an FBI database that was for work-related use only. The FBI conducted a sting operation, during which Van Buren accessed the database to look up a woman at the request of a third party outside law enforcement. Van Buren was indicted and subsequently convicted on two charges relating to his access to the FBI database. On appeal, the Eleventh Circuit upheld his conviction under the CFAA based on existing precedent in the circuit.

Writing for the Supreme Court in a 6–3 decision, Justice Amy Coney Barrett said that an individual exceeds authorized access when they use their authorization to obtain information located in parts of the computer or system that are supposed to be off-limits. This is a narrower reading than some federal appellate courts had previously given the statute. In those cases, the courts had ruled that a person violated the CFAA simply by obtaining available information with improper motives. The Court noted that such a broad interpretation “would attach criminal penalties to a breathtaking amount of commonplace computer activity.”35

Surveillance and Social Media

One major concern that has arisen given the amount of data we share on social media is that we may be surveilled based on that data, whether by the government, corporations, or even our own employers. The idea that others could access our data – even when we set our settings to private – is troubling, and lawmakers have taken action to place some limits on that access.

Schools and employers, in particular, became quite interested in accessing private social media accounts. Often this provided a way to evaluate a person’s behavior away from the classroom or worksite. One of the earliest cases involved a cheerleading coach who asked team members to provide their usernames and passwords.36 Not only did this allow school officials to see team members’ activity on the platform, it also allowed them to observe what other students in their networks were posting. Soon after, professional organizations and employers followed suit – despite platform terms of service agreements that often prohibit users from sharing login credentials. Although no federal law in the United States prevents employers or professional organizations from asking for this information, at least half the states have passed laws targeting this behavior. Arkansas passed a statute that prevents employers from requiring, suggesting, or requesting prospective employees to disclose login credentials.37 Other provisions prohibit employers from requesting that employees connect with them on social platforms or change their privacy settings so employers can access their profiles.38

The increase in employees working from home has brought the issue of employer surveillance to the forefront. Typically, employers must get employee
consent to monitor employees’ emails, phone calls, text messages, and other online activity. But many employers have chosen to infer consent based on employees’ use of company-provided technology, including laptops, tablets, and phones, combined with explicit notice of the company’s acceptable use policy. The pandemic, however, has raised a host of issues related to privacy, including whether employers can track employee productivity through keystroke monitoring or monitor ambient noise through microphones and speakers.

Generally speaking, employees have very little protection from employers when using workplace-provided technology or platforms.39 The Supreme Court signaled as much in City of Ontario v. Quon in 2010, holding that an employer’s review of text messages a police officer sent on a pager his employer provided and paid for did not violate his Fourth Amendment rights.40 You should expect that your employer could be monitoring your email, your productivity, your browser activity, your use of company collaborative communication tools (Slack, Teams, etc.), and even your physical surroundings. Cybersecurity advocates encourage employees to adjust their behavior on employer-owned or provided technology with surveillance in mind.

But employers are not the only ones snooping around social media profiles. Government entities might also want access to users’ information.41 Even though several federal laws regulate surveillance, users’ affirmative decision to click their agreement to privacy policies and terms of service agreements might be enough to constitute consent. For example, the Wiretap Act – part of the Electronic Communications Privacy Act – does not protect against surveillance if one of the parties has consented.42

FREQUENTLY ASKED QUESTIONS

1. If a social media user posts information to their profile, can I assume it is not private information and thus free to use?
There is no simple answer to this question. Whether you can legally use the information depends on a number of factors, including the nature of the information, how the user obtained the information, and your relationship with the user, among others. Terms of service agreements and privacy policy terms also often govern whether you can make use of the information posted on the platform by other users. It is also important to recall that you are subject to defamation law, intellectual property law, and other generally applicable laws that apply to content whether it lives online or offline.

2. What are some of the best ways to protect my privacy on social media platforms?
Nearly all social media platforms have privacy settings that give users some control over who can see their information. Although these aren’t foolproof, they do offer an opportunity to limit the reach of your social sharing. If you are concerned about privacy, you’ll want to consider keeping your profile private or semi-private rather than public. You may also want to limit who can see your posts or contact you. Most importantly, though, users need to remember that information they share with contacts in their networks can unwittingly escape the confines of those networks. Someone could take a screenshot of a post you share and reshare it publicly. Hackers or others looking to do harm could target your account and share your information. As a rule of thumb, you should be judicious about the information you share because once it is public, it becomes very hard to get it taken down.

3. If someone posts my information or photo on a social media platform, what are my rights?
Your right to legal recourse varies dramatically. This is an area in which the law is rapidly changing. The GDPR, for instance, gives EU citizens some rights to request content be de-listed from search engines. A new California law gives minors some rights to request social platforms remove their information or photos. In recent years, a few states have passed doxxing laws that cover the posting of personal information with the intent to harass or intimidate. Other states have enacted laws that address the unwanted/nonconsensual release of sexually explicit photos. Many of these laws have been the subject of legal challenges claiming they violate First Amendment rights. Copyright law might also be helpful in getting information taken down, assuming you own the copyright to the photos or content. You always have the opportunity to contact the social media platform and ask that the content be removed. Unless the content violates the platform’s terms of service, though, you may not have legal recourse to force the company to remove the content.

4. The terms of service and privacy policies of social media platforms are so long and difficult to understand that I don’t bother to read them. Can I still be required to comply?
Yes. Although courts may be willing to find specific aspects of these agreements are unconscionable and unenforceable, you are generally bound by them even if you don’t read them. No court will recognize
“I didn’t understand this document” as a legal defense for violating its terms.

5. The landscape here seems to be changing pretty rapidly. How can I keep up-to-date on what is happening with regard to data protection and privacy in the United States?
A number of organizations are monitoring these issues closely and providing near real-time updates in language the average person can understand. The International Association of Privacy Professionals tracks data protection legislation in the United States on its website (iapp.org). The Electronic Privacy Information Center has robust resources related to international data protection developments on its website (epic.org). The Electronic Frontier Foundation’s Surveillance Self-Defense site is perfect for anyone looking to increase the security of their online communication (ssd.eff.org).

Notes
1. A sincere thank you to Professor Woodrow Hartzog, whose previous iterations of this chapter provided a strong foundation and whose ideas still animate sections of this chapter. Any errors and omissions remain solely my own.
2. When asked about his plans to bring the Internet to the farthest corners of the planet, Mark Zuckerberg told Time, “We were thinking about the first decade of the company, and what were the next set of big things that we wanted to take on, and we came to this realization that connecting a billion people is an awesome milestone, but there’s nothing magical about the number 1 billion. If your mission is to connect the world, then a billion might just be bigger than any other service that had been built. But that doesn’t mean that you’re anywhere near fulfilling the actual mission.” See Lev Grossman, Inside Facebook’s Plan to Wire the World, Time.com, December 15, 2014.
3. Sara Morrison, The Year We Gave Up on Privacy, Vox, December 23, 2020, www.vox.com/recode/22189727/2020-pandemic-ruined-digital-privacy.
4. The International Association of Privacy Professionals has a useful tool for anyone interested in tracking data privacy legislation. It can be found at: https://iapp.org/resources/article/us-state-privacy-legislation-tracker.
5. Aliza Vigderman & Gabe Turner, The Data Big Tech Companies Have on You, Security.org, August 23, 2021, www.security.org/resources/data-tech-companies-have.
6. See Reno v. ACLU, 521 U.S. 844, 885 (1997) (striking down sections of the Communications Decency Act that attempted to regulate indecent content transmitted via the Internet).
7. See Colleen McClain, Emily A. Vogels, Andrew Perrin, Stella Sechopoulos, & Lee Rainie, The Internet and the Pandemic, Pew Research Center, September 1, 2021, www.pewresearch.org/internet/2021/09/01/the-internet-and-the-pandemic.
8. See Daniel Solove, Understanding Privacy 1 (2008). 9. See Katz v. United States, 389 U.S. 347 (1967) (holding that where a person has a reasonable expectation privacy, the Fourth Amendment forbids warrantless search and seizure except in very limited circumstances). 10. Samuel D. Warren & Louis D. Brandeis, The Right to Privacy, 4 Harv. L. Rev. 193 (1890). 11. See T. Barton Carter, Marc A. Franklin, Amy Kristin Sanders & Jay B. Wright, The First Amendment and the Fourth Estate: The Law of Mass Media 183–185 (13th ed. 2021). 12. See Restatement (Second) of Torts §652D. “One who gives publicity to a matter concerning the private life of another is subject to liability to the other for invasion of his privacy, if the matter publicized is of a kind that (as) would be highly offensive to a reasonable person, and (b) is not of legitimate concern to the public.” Id. 13. See Woodrow Hartzog, The Public Information Fallacy, 98 Boston U. L. Rev. 459 (2019). 14. Joseph Elford, Trafficking in Stolen Information: A “Hierarchy of Rights” Approach to the Private Facts Tort, 105 Yale L. J. 727 (1995). 15. Warren & Brandeis, supra note 10, at 211. 16. Health Insurance Portability and Accountability Act of 1996, Pub. L. No. 104–191 (1996). 17. Family Educational Rights and Privacy Act of 1974, Pub. L. No. 93–380; see 20 U.S.C. § 1232g. 18. Fair Credit Reporting Act, Pub. L. No. 91–508 (1977); see 15 U.S.C. § 1681 et. seq. 19. Children’s Online Privacy Protection Act of 1998, Pub. L. No. 105–277; see 15 U.S.C. § 6501–6506. 20. Federal Trade Commission, Two App Developers Settle FTC Charges They Violated Children’s Online Privacy Protection Act, December 17, 2015, www.ftc.gov/ news-events/press-releases/2015/12/two-app-developers-settle-ftc-charges-theyviolated-childrens. 21. Taylor Kay Lively, US State Privacy Protection Tracker, IAPP (2022), https://iapp. org/resources/article/us-state-privacy-legislation-tracker/ (last accessed February 28, 2022). 22. California’s Office of the Attorney General Has Published an FAQ Covering the California Consumer Privacy Act, https://oag.ca.gov/privacy/ccpa (last accessed February 28, 2022). 23. Cecilia King & David McCabe, Efforts to Rein in Big Tech May Be Running Out of Time, New York Times, January 20, 2022. 24. Full text of the GDPR can be found at https://eur-lex.europa.eu/eli/reg/2016/ 679/oj. 25. Schrems v. Data Protection Commissioner, Opinion of Advocate General, C-362/14 (September 23, 2015), https://curia.europa.eu/juris/liste.jsf?num=C-362/14. 26. Court of Justice of the European Union, The Court of Justice invalidates Decision 2016/1250 on the adequacy of the protection provided by the EU-US Data Protection Shield, July 16, 2020, https://curia.europa.eu/jcms/upload/docs/application/ pdf/2020-07/cp200091en.pdf. 27. Online Hate and Harassment, Anti-Defamation League, March 2021, www.adl.org/ media/16219/download (last accessed February 28, 2022). 28. Should Doxxing be Illegal? The Markup, September 3, 2021, https://clinics.law. harvard.edu/blog/2021/09/should-doxing-be-illegal. 29. Allyson Haynes, Online Privacy Policies: Contracting Away Control over Personal Information? 111 Penn. St. L. Rev. 587, 594 (2007).


30. Id.
31. 18 U.S.C. § 1030(a)(2) (2001).
32. EF Cultural Travel BV v. Zefer Corp., 318 F.3d 58, 63 (1st Cir. 2003).
33. 18 U.S.C. § 1030 (2001).
34. 593 U.S. ___ (2021).
35. Id.
36. Student Files Lawsuit After Coach Distributed Private Facebook Content, Student Press Law Center, July 22, 2009, https://splc.org/2009/07/student-files-lawsuit-after-coach-distributed-private-facebook-content/.
37. AR Code § 11–2–124 (2014).
38. Id.
39. Danielle Abril & Drew Harwell, Keystroke Tracking, Screenshots, and Facial Recognition: The Boss May Be Watching Long after the Pandemic Ends, Washington Post, September 24, 2021.
40. City of Ontario v. Quon, 560 U.S. 746 (2010).
41. Victoria Kim, Who's Watching? How Governments Used the Pandemic to Normalize Surveillance, Los Angeles Times, December 9, 2021.
42. 18 U.S.C. § 2511(2)(c) (2008).

Chapter 4

Intellectual Property

Kathleen K. Olson
LEHIGH UNIVERSITY

Social media users engage with content in new ways: They create it, share it, curate it, remix it, and collaborate with others to create new works from it. Questions about authorship and ownership challenge traditional conceptions of intellectual property and have led to a re-examination of the law to determine ways to cope with these and future challenges. At its most basic, however, intellectual property law generally applies to content on social media sites in much the same way as in other contexts, both with regard to the original content and to the use of material that belongs to others. While the billions of pieces of content copied and distributed on sites such as Facebook, Twitter, Instagram, Pinterest, and YouTube may make it impossible for copyright owners to fully enforce their rights, those rights are not forfeited. Because social media users are both creators and users of works, they need to understand both the rights and the limitations of intellectual property law.

This chapter discusses the basic rules of copyright, "hot news," trademark, and publicity rights and describes the current state of the law governing content on social media platforms.

Copyright

Most content on social media sites is governed by copyright law. Copyright refers to ownership rights given to those who produce creative expression, whether it is text, photographs, video, music, software, works of art, or other creative works. Among these ownership rights are the right to reproduce the work; to prepare derivative works based on the original, such as a translation or a movie version; to distribute copies; and to perform or display the work. Copyright protection begins as soon as the work is "fixed in a tangible medium of expression," which includes posting material online.

Copyright is governed exclusively by federal law and has as its source the Copyright Clause of the US Constitution, which gives Congress the power to grant authors exclusive rights in their works, for a limited time, in order to "promote the progress of science and useful arts."1 Copyright provides creators with a limited monopoly right in their works so they can profit economically from them. This helps promote the progress of science and the arts because it gives authors the incentive to create and disseminate political, social, and artistic expression that adds to the free flow of information and ideas. In return, their works will eventually become part of the public domain, free for others to use and build upon as material for new creative works.

Copyright protects original works of authorship, created from an individual's own efforts and not by copying existing works, although originality requires only a minimal level of creativity. Facts are not copyrightable, so tweeting a piece of information from a news story is not an infringement (although breaking news may be protected by the "hot news" doctrine, which is described in more detail later in this chapter). Recipes – a popular item on sites such as Pinterest – are not copyrightable if they consist mainly of a list of ingredients. Copyright also does not protect ideas. The law limits property rights to the particular expression of an idea, but not the idea itself. In some cases, the idea and the expression can't be separated. Under the "merger doctrine," if an idea can be expressed only in a limited number of ways, the idea and its expression "merge" and the expression also will not be protected.

Whether a post on a social media platform is protected by copyright depends, therefore, on the level of original expression it contains. Posts that consist of personal updates or other statements of fact or that express an opinion or idea without the minimal level of creative expression necessary to escape the merger doctrine are not copyrightable and are free for others to copy and share. The length of the post may also preclude copyright protection because short posts are less likely to fulfill the originality requirement. Historically, titles and short phrases or expressions have not been copyright-protected (they may be protected by trademark if used commercially, however). Given Twitter's character limit, tweets may not rise to the required level of creativity. Still, no bright-line rule exists: A haiku or other short work that is sufficiently creative may be protected. Compilations of tweets – such as the New York Times bestseller Sh*t My Dad Says, derived from the popular Twitter account – can be copyright-protected if there is creativity in the selection or arrangement of the tweets and if there is additional text that meets the originality requirements for copyrightability.

If content shared on social media is copyrightable, it is automatically protected as soon as it is "fixed" by being posted on a site. Written expression, sound recordings, photographs, and videos retain their copyright when uploaded to a social media or photo-sharing site and do not become part of the public domain just by being freely available online. An artist named Richard Prince was sued for copyright infringement in 2015 for artwork that consisted of blown-up screenshots of other people's Instagram posts. A federal judge denied his motion to dismiss based on fair use, and in early 2022 the case had still not been resolved.2

Most social media sites' terms of service specify that you must be the copyright owner of the content you post or have permission to post it. While the terms of service generally give those sites a blanket license for the use of your content, that license applies only to that site and does not give others the right to use your content without permission. A federal court recognized this in 2013 when it ruled in favor of professional photographer Daniel Morel in his copyright infringement lawsuit against the French wire service Agence France Presse and Getty Images for copying and distributing photos of the Haitian earthquake he had taken and uploaded to Twitter. The defendants argued that their use of Morel's photos was authorized because Twitter's terms of service granted third parties a license to use the site's content, but the judge ruled that the license extended only to Twitter and its partners.3 After appeal, a federal jury awarded Morel $1.2 million in damages, the maximum statutory penalty available under the Copyright Act.

Using copyrighted material to create something new may be an infringement if it is not considered fair use (more on that later in this chapter). Even music heard in the background of user-generated videos is copyright-protected, as beauty blogger Michelle Phan found out when she was sued in 2014 over songs used in her makeup tutorials on YouTube.4 Since then, platforms such as Instagram and TikTok have arranged to pay royalties to copyright owners to make some of their songs freely available for use by content creators. To avoid a takedown, users should upload their own original music or make use of the platform's music library.

To win a copyright infringement suit, plaintiffs must prove they hold a valid copyright in the work and that the defendant copied the original elements in the work without permission. If there is no direct proof that the defendant copied the work, the plaintiff must prove that the defendant had access to the work and that the two works are "substantially similar." Access may be proven by showing the defendant had a reasonable opportunity to see the original work – if it was posted on the web, that may be enough. The similarity standard is based on whether an average observer would see the two works as similar enough to recognize that the alleged infringing work was copied from the original.

Most copyright owners concerned about possible infringement will begin by contacting whoever posted their content and asking them to take it down. Such cease-and-desist letters should never be ignored because the next step may be a lawsuit in federal court. While copyright infringement may be prosecuted as a criminal offense, most cases are civil proceedings. Remedies in civil infringement lawsuits may include an injunction to stop the infringement, attorney's fees, and actual damages and profits. Statutory damages may also be available, and they can be as high as $30,000 per work in ordinary cases and $150,000 per work for willful infringement.5 In the early 2000s, juries awarded substantial statutory damages in lawsuits brought by record companies against individuals who downloaded songs from file-sharing sites. In the most well-known case, a college student named Joel Tenenbaum was found liable in 2009 for statutory damages of $22,500 per song for each of the 30 songs at issue, a total of $675,000. Although the judge reduced the damages award, it was later reinstated and upheld by the court of appeals.6


Social media users who make use of others' copyrighted works should be aware of so-called copyright trolls: individuals or firms who make a business out of suing for copyright infringement, usually by using algorithms to search or "troll" for infringement online, including on social media. One company, Righthaven LLC, filed more than 200 infringement suits against bloggers and other nonprofit websites in 2010 and 2011 for copying news stories without issuing the customary cease-and-desist letters or takedown notices. In each case, Righthaven sought the maximum statutory damages, which were many times higher than the value of a license to use the material. One New York lawyer, Richard Liebowitz, has filed over 1,000 copyright infringement suits since 2017 and has been sanctioned repeatedly by courts for misconduct in connection with these suits.

One method used by copyright trolls is to set up "free" stock photo sites that include terms of use that are easy for users to violate inadvertently. Those who do are sent "litigation settlement" demand letters that threaten costly lawsuits if the recipient doesn't pay a fee. To avoid this, posted content should be original or come from established photo sites such as Shutterstock or Wikimedia Commons, and all terms and conditions should be followed, including those for material from Creative Commons, a source of copyright-free material that is discussed later in the chapter.

Copyright owners will soon have another way to combat infringement by social media users, although damage awards will be limited. The Copyright Alternative in Small-Claims Enforcement Act of 2020 (CASE Act) gave the US Copyright Office until June 2022 to set up a Copyright Claims Board to hear cases involving less than $30,000. Because use of the board is voluntary, procedures are streamlined, and parties do not need an attorney, small businesses and individual creators will be able to protect their intellectual property without the expense of suing in federal court. Critics of the board fear it will also cause corporate copyright holders to go after individuals on social media for infringement that would have been tolerated in the past.

Because individuals generally use social media for personal sharing rather than for commercial purposes, copyright owners have often ignored practices that may infringe their rights, such as using copyrighted images as avatars, creating and sharing Internet memes, or adding copyrighted music to a user's original content. In the past, a social media user may at most have received a cease-and-desist letter from an attorney asking for infringing material to be removed. Under the CASE Act, a copyright owner may instead start with a notice of infringement that, if ignored, could swiftly result in a default judgment against the user for up to $30,000 in damages, with limited recourse for appeal.

It remains to be seen whether corporations will use the Copyright Claims Board to target social media users for infringement on a large scale. The draconian legal action that Joel Tenenbaum experienced was based in part on the direct and significant economic harm illegal file-sharing caused the music industry – harm not caused by most individuals' social media uses of copyrighted content. In the end, however, the record companies found that suing individual file sharers was both ineffectual, given the scope of the problem, and a public relations disaster. Instead of trying to control social media sharing, many copyright owners have embraced it as a marketing and promotional tool. Still, social media users should be aware of what is infringement and what is permissible and be ready to respond to any takedown notices or cease-and-desist letters they receive.

Linking and Embedding Content

If copying without permission is infringement, what about linking to copyrighted material without permission? A simple hyperlink to content on another site does not generally raise intellectual property concerns. When content from one site is "framed" – embedded on a second site and displayed through a scrollable window or frame – claims of unfair competition and trademark infringement or dilution may arise if the defendant attempts to "pass off" the embedded content as its own or otherwise causes confusion as to its source.

A finding of copyright infringement is unlikely if the framing or embedding is done by inline linking, the most commonly used method to display images or embed videos on a site. It allows a video to be played and viewed on a website or social media post while the actual video file remains on another server, such as YouTube. Visitors viewing the video on the website or post have no indication that it is not part of that site, but since no copying is involved – only linking – the reproduction right of the copyright owner is not infringed.

The US Court of Appeals for the Ninth Circuit affirmed that position in a 2007 ruling in a case involving Google's visual search engine. In Perfect 10 v. Amazon.com, Google was sued for copyright infringement because the search engine used inline links to display the images that matched search terms while the actual image files remained on their original servers. The court ruled in favor of Google and adopted the "server test," which turned on whether a copy of the infringing content was actually stored on the defendant's server. If it was, the defendant had copied the original and therefore infringed its copyright. If it was not and the defendant had merely embedded the content on its site using inline linking, there was no infringement of either the copyright owner's reproduction rights or its display rights.7

In practice, the server test allows websites to embed and display others' tweets, photographs, and videos without having to worry about getting permission or whether fair use applies. While many regarded it as well-settled law, it has been challenged in both the Ninth Circuit and the Second Circuit in New York. In 2021, a district court judge in the Ninth Circuit applied Perfect 10 to reject a challenge to the server test made by photographers suing Instagram over its embedding tool. The plaintiffs had claimed that the tool helps and encourages third-party online publishers to infringe their works, but the judge ruled that because the publishers did not store copies of the images, there was no infringement.8
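For readers who want to see the distinction in concrete terms, the short sketch below contrasts the two behaviors the server test separates. It is an illustration only – the URL is a hypothetical placeholder, and no platform's actual embed code is shown – but it captures why inline linking leaves no copy on the embedding site's server while re-hosting does.

```python
import urllib.request

# Hypothetical placeholder URL; the file stays on the original host's server.
REMOTE_VIDEO = "https://videos.example.com/clip.mp4"

def inline_embed(src: str) -> str:
    """Inline linking: emit markup that merely points at the remote file.
    No copy lands on our server, so under Perfect 10's server test the
    embedding site makes no reproduction."""
    return f'<video controls src="{src}"></video>'

def rehost(src: str, local_path: str) -> None:
    """Copying: download the file and store it on our own server. A copy
    now resides locally, which is what the server test treats as a
    potentially infringing reproduction."""
    urllib.request.urlretrieve(src, local_path)

print(inline_embed(REMOTE_VIDEO))
# rehost(REMOTE_VIDEO, "clip.mp4")  # left commented out: the URL is a placeholder
```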


In the Second Circuit, two district court judges in Manhattan have declined to adopt the server test, raising questions about its applicability in that jurisdiction. In 2018, after a photo of NFL quarterback Tom Brady went viral, several news sites embedded tweets displaying the photo in their stories online. The photographer sued, claiming that the news organizations had violated his exclusive right to publicly display the photo. In 2021, several news organizations were sued in federal court in New York for embedding a video of an emaciated polar bear from Facebook and Instagram. In both cases, the judges declined to adopt the Ninth Circuit's server test. One judge found it was contrary to the definition of "to display" in the Copyright Act, which is "'to show a copy of' a work, not 'to make and then show a copy of the copyrighted work.'"9

The end of the server test would substantially alter the way social media posts and other content are shared online. Still, many instances of embedding content may be permissible even without its protection. The fair use doctrine, explained in detail later in this chapter, would protect the embedding of a photograph or tweet if done for protected uses, such as commenting on the photo or tweet itself. For example, in 2020, professional tennis player Caroline Wozniacki announced her retirement in a post on Instagram that included a copyrighted photo. A judge ruled that a sports site that embedded her post was protected by fair use against an infringement claim by the photographer because the post itself was news.10

Limitations on Copyright Protection

Along with the threshold copyrightability requirements, other rules further the goals of copyright by putting limits on the monopoly copyright owners hold over their works. Copyright terms ensure that their monopoly does not last forever, for example, and the fair use doctrine ensures that their control over the use of their works is never absolute.

Because copyright terms limit the period during which a copyright owner has exclusive rights to the work, they foster the growth of the public domain by making material available for future authors to draw on to create new works. Congress has the power to give authors the exclusive right to their writings "for limited times," and this has been consistently interpreted to prohibit perpetual copyrights. In 1998, the Copyright Term Extension Act extended copyright terms by 20 years so that today, the copyright term for a work of individual authorship is the life of the author plus 70 years, and for a work of corporate authorship, it is 120 years after its creation or 95 years after its publication, whichever is shorter.11
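Because the term rules reduce to simple arithmetic, they can be illustrated in a few lines of code. The sketch below reflects only the post-1978 rules just described and simplifies considerably – it ignores pre-1978 works, joint authorship, and the fact that terms actually run through the end of the calendar year – and its function and parameter names are invented for illustration.

```python
def public_domain_year(created: int, published: int | None = None,
                       author_died: int | None = None) -> int:
    """Estimate the year a work's term expires under the simplified
    post-1978 rules sketched above. Illustration only."""
    if author_died is not None:
        # Individual authorship: life of the author plus 70 years.
        return author_died + 70
    # Corporate authorship: 120 years from creation or 95 years from
    # publication, whichever expires first.
    terms = [created + 120]
    if published is not None:
        terms.append(published + 95)
    return min(terms)

# An individually authored photo whose creator died in 2000 is protected
# through 2070; a corporate work created in 1980 and published in 1985 is
# protected until 1985 + 95 = 2080, which beats 1980 + 120 = 2100.
print(public_domain_year(created=1995, author_died=2000))  # 2070
print(public_domain_year(created=1980, published=1985))    # 2080
```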

The fair use doctrine is one of the most important tools in copyright law to maintain the proper balance between an individual's property rights and the social benefits that come from a free flow of information. By allowing the use of material without the permission of the copyright owner, fair use helps resolve some of the potential conflicts that may arise between copyright and free speech. Fair use is an exception to the exclusive rights of a copyright owner; it allows creative works to be used by others for certain purposes that are socially beneficial. To determine fair use, courts examine and weigh four different factors, which are found in Section 107 of the 1976 Copyright Act. These are

1. the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes;
2. the nature of the copyrighted work;
3. the amount and substantiality of the portion used in relation to the copyrighted work as a whole; and
4. the effect of the use upon the potential market for or value of the copyrighted work.12

The first factor requires an examination of the purpose of the secondary use. News reporting, comment, criticism, research, scholarship, and teaching are all listed in Section 107 as protected purposes under the fair use doctrine, although this is not an exclusive list. While nonprofit uses are favored, a commercial use does not preclude a finding of fair use; the Supreme Court has said that other characteristics, such as "transformativeness," may be more important. This factor is an important one for social media users, so it will be discussed in more detail in what follows.

The second factor looks at whether the original work is highly creative or primarily factual; a factual work is given less copyright protection, while artistic works such as fiction or music have more protection.

The third factor looks at how much of the original work was taken and is measured in proportion to the entire work – secondary users should take only as much of the original as is needed for their purpose. Generally, only a small portion of the original work needs to be borrowed: When commenting on a news story, for example, excerpts may be quoted, but not the entire article. Sometimes using the entire work is necessary, however, as when commenting on a photograph, and this will not prevent a finding of fair use.

The fourth factor, the effect of the use upon the value of or potential market for the original work, is regarded by some as the most important factor. It measures the economic harm that may be caused by the secondary use of the work, which is often closely tied to the purpose of that use – if a secondary use serves mainly as a replacement for the original in the marketplace, for example, neither the first nor the fourth factor will weigh in favor of fair use. Secondary uses that go beyond merely substituting for the original work and instead create something new are less likely to harm the market for the original, and they also do more to further the goals of copyright. This concept of transformativeness has therefore come to have added weight in fair use analysis, particularly with regard to the first factor.


The Purpose and Character of the Use

While news reporting, comment, criticism, research, scholarship, and teaching are listed as protected purposes under the fair use doctrine, even they may not be fair use if a secondary work serves the same purpose as the original. Instead, the law favors a use that transforms the original or "adds something new, with a further purpose or different character, altering the first with new expression, meaning, or message."13 The protected categories found in Section 107 are often themselves transformative, however; examples include copying a short video clip for a movie review and quoting excerpts from a political speech in a scholarly biography or news story. Parody is generally considered transformative as a humorous form of comment or criticism, as the Supreme Court ruled in a landmark case in 1994 involving a parody of Roy Orbison's hit "Oh, Pretty Woman" by 2 Live Crew.14

The importance of transformativeness to the Court in that case led lower courts to make it a central issue in subsequent fair use cases, even as they retained the four-factor analysis required by Section 107. Thus, in North Jersey Media Group, Inc. v. Pirro, a 2015 case involving Fox News' Facebook pages, the court denied the network's motion for summary judgment after an extensive analysis of transformativeness. North Jersey Media Group was the copyright owner of the iconic 9/11 photo depicting firefighters raising the American flag at Ground Zero. On the anniversary of 9/11, the photo was posted on the Facebook pages of two Fox News shows next to a photo of Marines raising the flag at Iwo Jima during World War II and the hashtag #neverforget. The court said that even though the photo had been cropped, resized, and combined with the other elements, it could not conclude as a matter of law that it was transformed sufficiently to be fair use.15

To be transformative, then, the use must do more than simply repackage or republish the original work – it must change it in some way. At the same time, the change may be less in the appearance of the original work than in its use for a different purpose. In the Perfect 10 case discussed earlier in the chapter, the Ninth Circuit ruled that thumbnail-sized copies of the images in question that were also made by Google's visual search engine were protected by fair use because those images were used for a different purpose – indexing the web, rather than aesthetics – that benefitted the public.16 Courts that adopt an expanded reading of what is transformative are more likely to find fair use.

How does all of this apply to social media? While no bright-line rules apply, a variety of fair use defenses may be available for those who use the works of others. Material is frequently posted on social media sites for purposes of comment or criticism, whether it's a news story commented on in a blog or the latest Avengers movie trailer posted and critiqued on Facebook. Parodies, wikis, and creative remixes of works, such as mashups and certain kinds of fan fiction, may be highly transformative. Compilations of photos or other copyrighted material on social curation sites like Pinterest may be protected by fair use, depending on the degree to which the aggregation of individual pieces of content transforms the material into something new or uses it for a different purpose, such as comment or criticism, or for a different aesthetic. Curating material just to compile it or share it, without adding something extra that transforms it or otherwise gives it a new purpose, is not fair use.

While the purpose or character of the use is central to a finding of fair use, the other fair use factors are still part of the analysis, and they may outweigh the first factor, even when the use is transformative. Posts that include a comment or criticism of a work should copy only the amount of material needed to make the point. Fan fiction is more likely to be fair use if it borrows relatively little material and does not harm the market for the original. Noncommercial uses are favored under both the first and fourth factors. Finally, practices such as attribution and linking back to the original source may be helpful to demonstrate good faith on the part of the secondary user. Although neither necessary nor sufficient for a finding of fair use, these practices may have some bearing on a judge's determination of the equities of the particular case. A lack of good faith on the part of a copyright plaintiff may also affect the outcome. Some judges have sided with secondary users in infringement cases brought by copyright trolls based on their view that such suits are an "abuse" of the copyright law.17

Because fair use analysis accounts for the individual facts and equities of each case, specific guidelines for what is permissible are difficult to provide. The American University's Center for Media and Social Impact has created codes of best practices regarding fair use in different contexts, including one for online video, that may be helpful.18 A use that is noncommercial, that transforms the original work or otherwise has some public benefit, that doesn't take more than is needed, and that doesn't replace the original in the market has a plausible claim of fair use. Still, generalities such as these cannot predict the outcome in a particular case. The fair use doctrine, because it is meant to be flexible, can also be frustratingly unclear. As the use and sharing of content through social networking continues to grow, standards will develop for acceptable practices in the social media context. The law, especially the fair use doctrine, reflects contemporary social norms, so as these continue to change, so will the legal standards for judges trying to make equitable decisions in fair use cases.

Liability for Infringement by Others

If an individual using a social media platform infringes copyright, the site itself may also be liable on the theory of contributory infringement, which occurs when the defendant has knowledge of the infringing activity and induces, causes, or materially contributes to the infringement. Social media sites receive statutory protection from secondary liability for content they host under the Digital Millennium Copyright Act of 1998 (DMCA), a federal statute that was aimed at addressing the unique copyright concerns of content online. The act's primary purpose was to combat online piracy, and the statute raised the penalties for copyright infringement online and made it illegal to gain unauthorized access to a copyrighted work by circumventing anti-piracy protections.

For online service providers such as social media platforms, the DMCA provides protection against secondary liability for copyright infringement when they act simply as hosts of the content of others and do not play a role in determining its content. Without this protection, Web hosts would be forced to actively monitor their sites for possible infringement, but the vast amount of material makes it impossible to do this effectively. Instead, sites might simply bar or severely limit the ability of users to post content to their sites. Without DMCA protection, the explosion of social networking and user-generated content never would have occurred.

Section 512 of the statute provides a safe harbor for online service providers and other web-based hosts of content against liability for contributory infringement if they follow certain procedures to safeguard copyrights on their sites. In order to be protected, an online service provider providing a forum for material to be posted by its users must not have "actual knowledge" that the material is infringing or, in the absence of such knowledge, must not be aware of "facts or circumstances from which infringing activity is apparent." The host site must also establish certain "notice and takedown" procedures, including designating an agent to receive notices of copyright infringement in order to remove infringing material when notified of its existence on the site by a copyright owner. While the DMCA does not require an online service provider to actively monitor its service for infringing activity, it must act "expeditiously" to remove or disable access to the material once it is made aware of it.19
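As a rough illustration of the notice-and-takedown flow just described, the sketch below models a host's handling of a takedown notice. It is a schematic under our own assumptions – the record fields, URLs, and response messages are invented for illustration and are not the statutory language or any platform's actual system.

```python
from dataclasses import dataclass

@dataclass
class TakedownNotice:
    # Illustrative fields loosely modeled on what a Section 512 notice
    # conveys; the names are our own, not statutory terms.
    claimant: str
    copyrighted_work: str       # identification of the work claimed to be infringed
    infringing_url: str         # location of the allegedly infringing material
    good_faith_statement: bool  # claimant's good-faith belief the use is unauthorized

hosted_content = {"https://host.example/post/42": "user-uploaded clip"}

def handle_notice(notice: TakedownNotice) -> str:
    """Schematic safe-harbor response: a complete notice triggers prompt
    removal; an incomplete one is returned to the sender."""
    if not notice.good_faith_statement:
        return "rejected: notice lacks the required good-faith statement"
    if notice.infringing_url in hosted_content:
        del hosted_content[notice.infringing_url]  # act "expeditiously"
        return "removed; uploader notified and may counter-notify"
    return "no action: material not found on this service"

notice = TakedownNotice("Example Records", "song X",
                        "https://host.example/post/42", True)
print(handle_notice(notice))  # removed; uploader notified and may counter-notify
```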

Although the act requires that a copyright owner provide a statement of a "good faith belief" that the material is infringing when it issues a takedown notice, critics contend that copyright owners can overreach by issuing takedown notices that are unwarranted or when the use of their content is fair use. One user fought back when YouTube, after receiving a takedown notice from Universal Music, removed a 30-second home video of her toddler dancing to the Prince song "Let's Go Crazy." Stephanie Lenz filed a counter-notification to challenge the removal but then went further and sued the copyright owner in federal court for misrepresenting its claim of infringement under the act. The trial court's denial of Universal's motion to dismiss was affirmed on appeal by the Ninth Circuit, which held that a copyright owner must consider fair use before it issues a takedown notice and must have a subjective good faith belief that the use was not protected or it may be held liable for damages under the DMCA.20

Litigation focusing on secondary liability under Section 512 has sometimes centered on what level of knowledge of infringing activity is sufficient for an online service provider to be disqualified from safe harbor protection. In general, a site will lose its protection if it has actual knowledge of specific instances of infringing material on its site or if it is aware of facts and circumstances that raise a red flag that would indicate to a reasonable person that specific infringing activity has occurred on the site. Courts have held that sites cannot turn a blind eye to such red flags: In 2012, the Second Circuit ruled in a case involving YouTube that the exercise of "willful blindness" by sites to avoid knowing about specific infringing clips on its site would jeopardize its immunity under the DMCA.21

Some sites have implemented technological measures to supplement their notice and takedown procedures and limit their liability for user infringement. Since 2007, YouTube has used Content ID, a content management tool that allows the site not only to identify copyrighted material but also to take action at the request of the copyright owner: to block it, to track it and provide viewing statistics, or to monetize it by linking to an official website or inserting advertising. Critics charge that the identifications are not always accurate, however, and that the system does not sufficiently protect fair use.
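Content ID's matching technology is proprietary, but the block/track/monetize choice described above can be pictured as a policy lookup keyed to a registered fingerprint. Everything in the sketch below – the hash-based "fingerprint," the registry, and the policy names – is a toy stand-in for illustration; real systems use perceptual fingerprints that survive re-encoding, which a simple hash does not.

```python
import hashlib

# Toy "fingerprint": a hash of the file bytes. This matches only exact
# copies, unlike the perceptual fingerprinting real systems rely on.
def fingerprint(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Rights holders register a reference file and the action they chose.
registry: dict[str, str] = {
    fingerprint(b"reference recording of song X"): "monetize",
    fingerprint(b"reference cut of film Y"): "block",
}

def on_upload(data: bytes) -> str:
    """Apply the owner's chosen policy when an upload matches a
    registered work: block it, track it, or monetize it."""
    policy = registry.get(fingerprint(data))
    if policy is None:
        return "publish normally (no match)"
    return {"block": "reject the upload",
            "track": "publish and report viewing statistics to the owner",
            "monetize": "publish with the owner's ads attached"}[policy]

print(on_upload(b"reference recording of song X"))  # publish with the owner's ads attached
print(on_upload(b"original home video"))            # publish normally (no match)
```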

While technological tools may alleviate some of the uncertainty about the extent of safe harbor protection, they do not replace statutory protection. Some online service providers fear that the "willful blindness" standard has made it harder for them to enjoy the protection of the DMCA's safe harbor and requires them to be more proactive about monitoring possible infringement on their sites. At the same time, copyright owners have grown tired of the "whack-a-mole game" they must play in dealing with the widespread posting of their content.22

Legislation introduced in Congress in 2011 would have significantly changed the safe harbor provision of the DMCA in a way that critics feared would have discouraged innovation on the web. The Stop Online Piracy Act (SOPA) and the PROTECT IP Act (PIPA)23 were companion bills introduced in the House and Senate in 2011 that opponents said would require online service providers to become responsible for their users' content. The purpose of the legislation was to target foreign websites that exist primarily to distribute pirated content such as movies and music, but analysts said the language of the bills was so broad that it would undo the safe harbor protections of the DMCA and force online service providers into the role of "content police," chilling free expression. Opposition led by Internet and technology companies culminated in the biggest online protest in history on January 18, 2012, and the House and Senate quickly tabled consideration of the bills.

In 2020, the Copyright Office completed a study of Section 512 and concluded that the safe harbor system was out of sync with the original intent of Congress, calling for changes to the law that would include lowering the knowledge standards for online service providers. Other proposals would require sites to crack down on repeat infringers and make efforts to ensure that infringing material that is taken down is not uploaded again. Critics of the proposed changes say they would overly burden online service providers and upset the balance that has been struck between them and copyright holders.

Creative Commons

Some copyright owners would like to voluntarily give up some or all of their property rights with regard to their creative works. In 2001, in order to promote "universal access to research, education, and culture," Lawrence Lessig and other copyright activists founded Creative Commons, a nonprofit organization that encourages the sharing of information online by providing "a free, public, and standardized infrastructure that creates a balance between the reality of the Internet and the reality of copyright laws."24

Creative Commons sets up a "some rights reserved" system of licenses that allows copyright owners to permit certain uses of their work by others without forfeiting the entire bundle of rights that come with copyright. The system lets copyright owners indicate up front whether they will permit others to use their works and for what purposes. Copyright owners can waive all of their rights and indicate that they have chosen to place the work in the public domain, or they can choose from a number of different licenses represented on their websites by symbols indicating different levels of permission.

The most permissive license is an "Attribution" (CC BY) license, which allows others to copy, distribute, remix, or "tweak" the work, even for commercial purposes, as long as the original author is credited. The "Attribution-ShareAlike" (CC BY-SA) license adds the requirement that anyone who creates a derivative work by changing or building upon the original must make the new work available under the same licensing terms. This type of license is used by open-source software projects and by Wikipedia and its affiliated Wikimedia sites.

Flickr, the online photo-sharing community, was one of the first social media sites to make Creative Commons licensing available as part of its user interface. In 2011, YouTube added Creative Commons licensing as an option for users who upload videos, and it created a library of CC BY–licensed videos from sources such as C-SPAN and Al Jazeera. Today more than 1.4 billion works have been shared through Creative Commons licenses on platforms such as these.

A Creative Commons license presumes, of course, that the licensor actually owns the copyright in the first place. Derivative works that are created out of copyrighted works cannot be licensed under a Creative Commons license unless the permission of the original copyright owner is obtained, either expressly or through an existing Creative Commons license, unless the derivative work is protected by fair use.

The most restrictive license reserves all rights other than the right to download the work and to share it by copying, distributing, and transmitting it, with attribution and for noncommercial purposes only. Indicating even this level of permission is useful, however, because otherwise sharing it would most likely be an infringement.

The utility of the Creative Commons licensing system is that it eliminates the often-difficult task of identifying and contacting a copyright owner in order to ask permission and the uncertainty that comes with guessing if something is protected by fair use. It allows the free use of copyrighted works for everything from simple sharing on social media sites to incorporating images or music into new creative works without the fear of liability for infringement.
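The licensing tiers described above can be summarized compactly. The mapping below paraphrases, for three common licenses, the permissions discussed in this section; it is an orientation aid only, under our own simplifications, and the license deeds at creativecommons.org control.

```python
# Simplified summary of the permissions discussed above. Consult the
# actual license deeds before relying on any of this.
CC_LICENSES = {
    "CC BY":       {"copy/share": True,  "remix": True,  "commercial use": True,
                    "share-alike required": False},
    "CC BY-SA":    {"copy/share": True,  "remix": True,  "commercial use": True,
                    "share-alike required": True},
    "CC BY-NC-ND": {"copy/share": True,  "remix": False, "commercial use": False,
                    "share-alike required": False},  # the most restrictive tier
}

def allows(license_code: str, use: str) -> bool:
    """Check whether a given use is permitted; attribution is required
    under every license listed here."""
    return CC_LICENSES[license_code][use]

print(allows("CC BY-SA", "commercial use"))  # True
print(allows("CC BY-NC-ND", "remix"))        # False
```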

The "Hot News" Doctrine

Social media has become a significant source of news. When Osama bin Laden was killed in May 2011, it was perhaps the first time that more people learned about breaking news through social networking than from a news site.25 What happens when breaking news is reported – do news organizations have a legal right to control access to their scoop? Since the facts of the news report are not copyrightable, any such right must come from another area of the law. The Supreme Court created such a right in 1918 and based it on the tort of unfair competition. Called "hot news misappropriation," it was developed to prevent competitors from free-riding on the time and effort required to report time-sensitive news and other information and preserve the economic incentive for companies to invest in those efforts.26

No one who tweeted the news of bin Laden's death was liable for repeating that fact, of course – the tort is available only in very limited circumstances. The public has the right to tell others about breaking news on Facebook or to live-tweet a basketball game. In some circumstances, however, if done repeatedly by a competitor for commercial advantage, those activities may be actionable.

The 1918 Supreme Court case that produced the "hot news" doctrine, INS v. Associated Press, featured a battle between competing wire services during World War I. The Associated Press sued the International News Service for rewriting AP dispatches and calling them their own. The Court granted the AP an injunction against this practice and created a limited quasi-property right in the news – in facts – based on a theory of unfair competition to protect the labor and expense required to gather and disseminate the news.27

Hot news misappropriation still exists, but only in the common law of a handful of states, and its scope was narrowed to avoid overlapping with and being preempted by federal copyright law. One important limitation is that the property right is strictly limited by time – the news must still be "hot" to be protected – in order to avoid First Amendment concerns. In 1997, the Second Circuit outlined the elements that are generally required in order for a hot news case to avoid copyright preemption. NBA v. Motorola centered on whether the transmission of real-time scores and statistics from NBA games via pagers constituted hot news misappropriation. The district court granted the NBA an injunction, but the Second Circuit reversed, holding that while the NBA and other sports leagues may own the copyright in the broadcast descriptions of their games, they do not own the underlying facts, which include statistical information and updated scores. The court held that, unlike INS, the pager service did not act as a substitute or direct competitor with the league's "product," which the court defined as the experience of watching the game in person or on TV. The pager service also did not free-ride on the effort required to provide the information because the service hired its own freelancers to watch the games and send out the score updates.

The court set out some general guidelines for what would constitute an actionable hot news case. Although state laws differ to some degree, hot news misappropriation has generally required that

1. a plaintiff generates or gathers information at a cost,
2. the information is time-sensitive,
3. a defendant's use of the information constitutes free-riding on the plaintiff's efforts,
4. the defendant is in direct competition with a product or service offered by the plaintiff, and
5. the ability of other parties to free-ride on the efforts of the plaintiff or others would so reduce the incentive to produce the product or service that its existence or quality would be substantially threatened.28

Using these guidelines, fans or even commercial ventures that live-tweet a sporting event would not be liable under the "hot news" doctrine as long as they monitor the games themselves and do not pose direct competition sufficient to threaten the viability of the event itself. In the news context, the doctrine had something of a revival as news organizations in the early 2000s fought aggregators, such as Google News, that reuse headlines and lead paragraphs or summaries on the web. The Associated Press entered into a licensing agreement with Google in 2006, but it and other media companies may continue to pursue hot news claims against other news aggregators.

In 2011, the Second Circuit affirmed the continuing survival of hot news claims but limited their future applicability in a case involving an online newsletter that regularly reported investment banks' time-sensitive analysis and stock recommendations. In Barclays Capital v. TheFlyOnTheWall.com, the court ruled the banks could not state a hot news claim because TheFly did not free-ride on the banks' efforts by making its own recommendations – it reported the banks' recommendations as news, not as its own product, the way INS did. The court also indicated that the guidelines in the NBA case were not binding and that future hot news claims would need to involve facts that closely parallel those in the INS case in order to be successful.29 Twitter and other Internet companies submitted amicus briefs urging the court to throw out hot news as obsolete. Public interest groups urged the court to renounce the doctrine because it restricts free speech, but the court did not reach the First Amendment issue.


Calls by news organizations for federal hot news legislation have so far been unsuccessful. Any such legislation would have to be narrowly drawn to avoid restricting free speech, and it should retain most of the restrictive elements from the NBA case in order to limit claims to those that would have a direct and substantial anticompetitive effect and lead to long-term harm to the business of news- and information-gathering.

Trademarks

A trademark can be a word, phrase, symbol, or design or a combination of words, phrases, symbols, or designs that identifies and distinguishes the source of the goods of one party from those of others. The purpose of a trademark or a service mark, which protects services rather than goods, is to protect businesses from the unauthorized use of their corporate or product names, slogans, or logos for commercial purposes. Unlike copyright, which is exclusively federal law, trademarks may be governed by both the federal Lanham Act and state trademark laws. To qualify for protection, a trademark cannot be deceptive or confusingly similar to another mark, and it must be distinctive – it must be capable of identifying the source of a particular good or service.

Infringement occurs when a trademark is used in a commercial context in a manner that is likely to cause consumer confusion as to the source of the product or service. Trademark dilution occurs when the distinctive quality of the mark is diminished, even though consumer confusion is not likely. A trademark may be diluted by blurring, when it becomes identified with goods that are dissimilar to what made it famous, or by tarnishment, when the mark is portrayed negatively, such as being associated with inferior or disreputable products or services. To bring a dilution claim under federal law, a trademark must be "famous" – one that is widely known, such as Xerox or Exxon.

As with copyright, free speech considerations require that a company's trademark may be used for descriptive purposes if necessary to accurately identify the product or service. As a result, a Facebook post that describes a celebrity's outfit at an awards ceremony using trademarks ("Zooey Deschanel wore Prada to the Golden Globes") infringes neither Prada's nor the Golden Globes' trademark. Parodies of trademarks are generally protected as critical commentary, although the degree to which the parody involves a commercial use of the mark may determine whether the use crosses the line into infringement or tarnishment. Editorial uses are favored over uses in commercial products, such as posters or T-shirts, so a trademark parody posted on a social networking site would most likely be protected, although no bright-line rules apply.

In 2013, the US Patent and Trademark Office (PTO) began to allow hashtags to be registered as trademarks, and today they are an integral part of social media marketing. Hashtags are words or phrases preceded by the hash or pound sign (#) that are used to label or tag the messages they accompany. The PTO allows the registration of a hashtag as a trademark as long as it functions, like any trademark, as an identifier of the source of goods or services. Company and product names can be registered as hashtags (e.g., #IBM, #Coke), as can slogans or taglines, such as Nike's #justdoit. The hashtag must be shown to have been used in commerce to be registrable.

Use of another's trademarked hashtag in commerce may be an infringement if it is likely to confuse the public about the source of the goods or services. In 2015, clothing maker Fraternity Collection sued its former designer for trademark infringement over her use on social media of the tags #fratcollection and #fraternitycollection. The court denied the defendant's motion to dismiss, ruling that the inclusion of the hashtag of a competitor's name or product in a social media post could lead to consumer confusion.30 At the same time, the free speech protections given for descriptive purposes and parodies for trademarks also extend to hashtags, so that adding a hashtag to words or phrases that are merely descriptive of goods or services is not enough. In another 2015 case involving a settlement dispute between two vaping companies, the court ruled that the hashtag #cloudpen was "merely descriptive" and functional and not a trademark in and of itself.31

The use of corporate trademarks in fake social media accounts has resulted in controversy, but not much litigation to date, in part because social media platforms police the practice and have takedown procedures in place to deal with impostors. Twitter, for example, allows users to sign up under another's name for purposes of parody as long as the account profile makes clear that the account is an impersonation and they do not pretend to be someone else "in order to mislead or deceive" – if they do, their account will be suspended.32 The guidelines also advise users to differentiate their account name from that of their parody subject; accounts such as Google Brain, AT&T Parody Relations, and The Fake CNN make clear to consumers that the account is meant as an exercise in free speech, not commerce.

Fake accounts on social media sites may be actionable if they use trademarks for commercial purposes and if consumer confusion is likely to occur. Some have argued that confusion may be more likely on fast-moving and decontextualized Twitter feeds than on a static website. The fake Twitter account @BPGlobalPR brought wide attention to the practice when it satirized the oil company's attempts at public relations after the massive 2010 Gulf Coast oil spill (example: "The ocean looks just a bit slimmer today. Dressing it in black really did the trick! #bpcares"). Although the account did not contain a disclaimer, a spokesman for the real BP indicated the company would not take legal action, saying, "People are entitled to their views on what we're doing and we have to live with those."33

Celebrity Impersonation and Trademark

Impersonation is not limited to corporate identities on social media sites. In 2009, St. Louis Cardinals manager Tony La Russa sued Twitter over a fake account that used his name and image. Tweets were posted in his name that referred to team-related incidents such as the death of pitcher Josh Hancock and La Russa's own DUI arrest: "Lost 2 out of 3, but we made it out of Chicago without one drunk driving incident or dead pitcher," read one tweet. The only disclaimer was in the user profile, which stated, "Bio parodies are fun for everyone." Although other celebrities had also had their names "twitterjacked," this was the first reported instance of a celebrity suing Twitter over a fake account. The complaint listed a number of claims against Twitter, including trademark infringement and dilution.34

The case is interesting for what it reveals about the limits federal law has put on plaintiffs hoping to hold online service providers responsible for harm caused by posts on social media sites. La Russa's real complaint about the fake account was that the tasteless statements in the tweets would be ascribed to him and would hurt his reputation. La Russa could not easily sue the actual impersonator because the person who set up the account was unknown, and Twitter deleted the fake account the same day the suit was filed. Although his harm had more to do with false light or defamation, he also would have been barred from suing Twitter on those claims because the federal Communications Decency Act (CDA) immunizes online service providers from liability for damages caused by offensive or harmful content posted by its users. Section 230 of the CDA gives sites like Twitter a safe harbor against secondary liability for a variety of state claims, which different courts have held to include defamation, misappropriation, obscenity, invasion of privacy, and state trademark dilution claims.35 Federal intellectual property claims are not included in the CDA's safe harbor provision but are covered by Section 512 of the DMCA and by a similar provision in the Lanham Act that protects online service providers from secondary liability for federal trademark infringement claims.36

La Russa had little choice, then, but to sue Twitter for direct trademark infringement. He did so by claiming that the fake account falsely implied a personal endorsement of Twitter because of statements that said "Tony La Russa is using Twitter" and "Join today to start receiving Tony La Russa's updates." To win, he would have had to show that the statements were likely to confuse consumers into thinking he had personally endorsed the site and that, as a result, he had been harmed. This may have been especially difficult to prove – according to news reports, the fake account had only four followers.37

Several states have passed laws making it a crime to impersonate someone online in order to defraud or cause harm. California's law also provides for civil actions to be filed for an injunction or damages. While the stated purpose of such laws is to combat fraud and cyberbullying, critics charge that they undermine First Amendment rights and chill protected speech, such as parody and satirical commentary.38 Today, Twitter protects celebrity accounts with a verification policy that authenticates the identities of public figures and attaches a blue checkmark next to the profile name on verified accounts, although in at least one case it has authenticated the wrong account – due to a copyediting error, an account spoofing Wendi Deng Murdoch, then-wife of News Corp. CEO Rupert Murdoch, was authenticated instead of the real account.39 And in 2019, Twitter founder Jack Dorsey found his own Twitter account compromised, although it was quickly discovered and secured.

The Right of Publicity

Twitterjacking may also violate a celebrity's right of publicity if the account is used for commercial purposes. The right of publicity protects against the commercial use of a person's name, likeness, or other aspects of his identity, including voice, signature, or persona. Most lawsuits concern unauthorized use of a person's name or likeness in advertising or by putting a celebrity's name or image on commercial products, such as T-shirts or coffee mugs, without permission. The publicity right, which is governed by state law, is rooted in both privacy law – to safeguard a person's privacy from being exploited commercially – and intellectual property law – to protect the economic value a person has established in his name, likeness, and persona.

First Amendment concerns arise when a person's name or likeness is used for descriptive purposes or for criticism, parody, news reporting, or art. A number of tests have been developed by the courts to balance free speech and publicity rights, with First Amendment protections yielding when the primary purpose of the use is commercial exploitation. This is not always clear: Can a restaurant report on its Twitter feed that a celebrity was spotted there, or does that imply endorsement? What about tweeting a photo? In 2014, actress Katherine Heigl sued the New York drugstore chain Duane Reade for $6 million for tweeting a paparazzi photo of her carrying a Duane Reade shopping bag with the comment "Love a quick #DuaneReade run? Even @KatieHeigl can't resist shopping #NYC's favorite drugstore." While Heigl quickly settled with Duane Reade without going to trial, the case served to warn companies of the potential for future litigation in this area.

Finally, although most cases involve celebrities, it is not necessary to be well-known to win a right of publicity lawsuit. In 2013, Facebook settled a class action lawsuit over its inclusion of users' names and likenesses in advertisements it called "sponsored stories." A user's likes triggered the creation of a sponsored story – an ad – that included the user's name and image. In 2016, the Ninth Circuit upheld the settlement, which awarded class members $15 each and imposed changes to Facebook's disclosure policies.40

Intellectual Property: The Need for Reform

Intellectual property is a complex issue and an increasingly important one in the digital age. The tension between protecting property rights in works and promoting creativity and the free flow of ideas is evident in the controversies and cases discussed in this chapter. Social networking highlights this tension because traditional copyright rules, which emphasize exclusive authorship and ownership, are not well suited to fostering a system based on collaborative and interactive creation.

While activists and copyright scholars have called for changes in the law to encourage this new means of creative production and reduce the uncertainty surrounding social media sharing, others have created their own guidelines. The Center for Media and Social Impact's development of fair use standards in their Codes of Best Practices is just one example of the kind of bottom-up rulemaking practiced by creative communities today. Self-regulation can be effective because it reflects pre-existing social norms and therefore has automatic moral authority and buy-in from the community and because it can adapt to changes that occur in community practices and norms. At the same time, it risks tilting too much toward the interests of the community at the expense of copyright owners' property rights or the public interest.

Technology also provides a way for copyright owners to control their works online, as examples like YouTube's Content ID system show. These tools also have the capability to create an imbalance of interests, however. In 1999, Lawrence Lessig contrasted the two types of "code" that regulate cyberspace: "East Coast Code," or the code that Congress enacts, and "West Coast Code," which is "the code that code writers 'enact' – the instructions imbedded in the software and hardware that make cyberspace work."41 Lessig warned that as copyright owners increasingly turned to regulation by computer code – cheaper, more efficient, and more flexible than East Coast Code – society must ensure that fundamental values like fair use are protected.

Finally, private ordering through contract can give social media sites the means to regulate their own sites with the flexibility needed to respond to changing circumstances – Twitter's terms of service, for example, give it the authority to regulate the use of its site and shut down a user's account without resorting to legal action. Here, too, however, default legal rules that try to strike a fair balance give way to whatever realignment of rights the private parties have bargained for. In the case of most terms of service agreements on social media sites, these rights are not truly bargained for and tend to favor the site rather than its users or the public interest.

Legislation – East Coast Code – can be slow, cumbersome, and complex. At the same time, while private ordering through self-regulation, technology, or contract can fill in some of the gaps in the law, it can never completely replace it. When it comes to regulating the diverse and fast-changing social media environment, the challenge for Congress and the courts will be to strike the proper balance between private interests and public benefits so that intellectual property rights continue to be protected without limiting the future possibilities for expression that social networking represents.


For more information

Berkman Center for Internet and Society: http://cyber.law.harvard.edu
Center for Democracy and Technology: www.cdt.org
Center for Media and Social Impact: https://cmsimpact.org
Copyright Clearance Center: https://www.copyright.com
Creative Commons: http://creativecommons.org
The Fair Use Project at Stanford Law School's Center for Internet and Society: http://cyberlaw.stanford.edu/focus-areas/copyright-and-fair-use
Electronic Frontier Foundation: https://www.eff.org
Motion Picture Association of America: http://mpaa.org
US Copyright Office: https://www.copyright.gov
US Patent and Trademark Office: https://www.uspto.gov

FREQUENTLY ASKED QUESTIONS

1. Can I use a photo I find on Facebook? It depends. The absence of a copyright notice does not mean the photo isn't copyrighted, and you should generally get permission unless you believe your use would be protected by the fair use doctrine. Noncommercial uses that are transformative or otherwise productive, such as comment, criticism, news reporting, research, scholarship, or teaching, may be fair use. Copying the photo for the same purpose as or to substitute for the original is not fair use. Attribution without permission ("Photo courtesy of CNN.com") is still infringement.

2. Can I embed a video from someone else's website on my Facebook page? Probably. The Ninth Circuit has ruled that because embedding works by linking to a file that resides on another server, rather than by making a copy, it does not infringe copyright, although the "server test" has recently come under attack. Trying to "pass off" the video as your own content may violate the rights of the video owner, however. Commenting on the video in the post would also make a fair use defense possible.

3. Can I be sued if someone posts a copyrighted photo on my blog? You could be, but generally, you will not be if you take the photo down when notified by the copyright owner. Because you host content on your blog that you do not control, you may be protected from secondary liability as an online service provider under Section 512 of the Digital Millennium Copyright Act. In order to take advantage of the

Intellectual Property

89

safe harbor provision, you should include a notice to your readers not to post infringing content and register with the Copyright Office (at https://copyright.gov/dmca-directory) to designate an agent for takedown notification.

4. Can I be sued for trademark infringement for creating a parody Twitter account? Maybe, but you would most likely be protected by the First Amendment. In addition, trademark infringement requires commercial use of a trademark and the likelihood of consumer confusion. If you made clear your account was a parody, it would be hard for the subject of your parody to show it caused consumer confusion or that it caused any real harm to its trademark.

5. What is a Creative Commons license? A Creative Commons license gives copyright owners a way to give permission in advance for certain uses of their content. Most of the licenses include some restrictions on use, such as requiring attribution or forbidding commercial use. Look for the CC logo on sites that use the licenses.

Notes

1. U.S. CONST. art. I, § 8.
2. Mahita Gajanan, Controversial Artist Richard Prince Sued for Copyright Infringement, The Guardian, January 4, 2016; Motion for Oral Argument, Graham v. Prince, No. 15-cv-10160 (SHS) (S.D.N.Y. 9/15/2021) (requesting oral argument to consider recent Supreme Court cases in support of summary judgment motion).
3. Agence France Presse v. Morel, 769 F. Supp. 2d 295 (S.D.N.Y. 2011).
4. Eriq Gardner, YouTube Star Michelle Phan Settles Dispute with Dance Label Ultra Records, The Hollywood Reporter, August 13, 2015.
5. 17 U.S.C. §§ 502–506.
6. Sony BMG Music Entertainment v. Tenenbaum, 719 F.3d 67 (1st Cir. 2013).
7. Perfect 10, Inc. v. Amazon.com, Inc., 508 F.3d 1146 (9th Cir. 2007).
8. Hunley v. Instagram LLC, 2021 WL 4243385 (N.D. Cal. September 17, 2021).
9. Opinion and Order, Nicklen v. Sinclair Broadcast Group, No. 20-cv-10300 (S.D.N.Y. July 30, 2021).
10. Boesen v. United Sports Publications, Ltd., No. 20-cv-1552 (ARR) (SIL) (E.D.N.Y. November 2, 2020).
11. 17 U.S.C. § 302.
12. 17 U.S.C. § 107.
13. Campbell v. Acuff-Rose Music, Inc., 510 U.S. 569, 579 (1994).
14. Id.
15. 74 F. Supp. 3d 605 (S.D.N.Y. 2015).

90

Kathleen K. Olson

16. Perfect 10 v. Amazon.com, Inc., 508 F.3d 1146 (9th Cir. 2007).
17. Righthaven, LLC v. Wolf, 813 F. Supp. 2d 1265, 1273 (D. Colo. 2011).
18. Codes of Best Practices, Center for Media and Social Impact, https://cmsimpact.org/report-list/codes/ (last accessed February 28, 2022).
19. Digital Millennium Copyright Act, 17 U.S.C. § 512 (1998).
20. Lenz v. Universal Music Corp., 801 F.3d 1126 (9th Cir. 2015), opinion amended and superseded on denial of reh'g, 815 F.3d 1145 (9th Cir. 2016).
21. Viacom Int'l, Inc. v. YouTube, Inc., 676 F.3d 19 (2d Cir. 2012).
22. Section 512 of Title 17: Hearing Before the House Judiciary Committee, Subcommittee on Courts, Intellectual Property, and the Internet, 113th Cong., 2d Sess. (2014) (opening remarks by Chairman Goodlatte).
23. Stop Online Piracy Act, H.R. 3261, 112th Cong. § 2(a)(2) (2011); Protect IP Act of 2011, S. 986, 112th Cong. § 6(b) (2011).
24. What We Do, Creative Commons, http://creativecommons.org/about (last accessed February 28, 2022).
25. Public "Relieved" by bin Laden's Death, Obama's Job Approval Rises, Pew Research Center for the People & the Press, May 3, 2011, http://pewresearch.org/pubs/1978/poll-osama-bin-laden-death-reaction-obama-bush-military-cia-creditfirst-heard-news.
26. See Victoria Smith Ekstrand, Hot News in the Age of Big Data (2015).
27. Int'l News Serv. v. Associated Press, 248 U.S. 215, 236 (1918).
28. National Basketball Association v. Motorola, 105 F.3d 841 (2d Cir. 1997).
29. Barclays Capital, Inc. v. TheFlyOnTheWall.com, Inc., 650 F.3d 876 (2d Cir. 2011).
30. Fraternity Collection, LLC v. Fargnoli, 2015 WL 1486375 (S.D. Miss. March 31, 2015).
31. Eksouzian v. Albanes, 2015 WL 4720478 (C.D. Cal. August 7, 2015).
32. Impersonation Policy, Twitter, https://support.twitter.com/articles/18366#
33. Paulina Reso, Imposter BP Twitter Account That Parodies Oil Giant Left Untouched, New York Daily News, May 27, 2010.
34. Complaint, La Russa v. Twitter, Inc., No. CGC09488101 (Cal. Sup. Ct. May 6, 2009).
35. Mark A. Lemley, Rationalizing Internet Safe Harbors, 6 J. on Telecomm. & High Tech. L. 101, 101 n.2 (2007).
36. 15 U.S.C. § 1114(2).
37. William McGeveran, Celebrity Impersonation and Section 230, Info/Law, June 25, 2009, http://blogs.law.harvard.edu/infolaw/2009/06/25/impersonation-and-230.
38. Cal. Penal Code § 528.5 (West 2011).
39. Adam Clark Estes, How Twitter Accidentally Verified the Wrong Wendi Deng, The Atlantic Wire, January 4, 2012.
40. Fraley v. Batman, 638 Fed. Appx. 594 (9th Cir. 2016).
41. Lawrence Lessig, Code and Other Laws of Cyberspace 53 (1999).

Chapter 5

Commercial Speech in a Social Space

Courtney A. Barclay
JACKSONVILLE UNIVERSITY

Historically, commercial speech – speech that promotes financial transactions – received no protection under the First Amendment. However, this approach shifted in the 1970s, when the US Supreme Court recognized that society "may have a strong interest in the free flow of commercial information."1 The protection the Court recognized is limited, classifying commercial speech, such as advertising and public relations communications, as deserving lesser protection than other forms of protected expression. In 1980, the Court established a four-part test for reviewing government restrictions on commercial speech.2 In Central Hudson Gas & Electric Corporation v. Public Service Commission, the Court ruled that for a commercial speech regulation to be upheld, a court must first find that the speech is eligible for First Amendment protection; false or misleading advertising or the promotion of an illegal product or service is not protected expression. If the speech is a lawful and truthful commercial expression, the government must prove that there is a substantial government interest in restricting the speech and that the regulation directly advances that interest. Finally, the government must prove that the regulation is narrowly tailored and thus not so broad as to prevent valuable speech. This test demonstrates the lower status afforded to commercial speech. Even truthful speech can be strongly regulated to serve an important government interest.

A variety of federal agencies have been granted the authority to regulate commercial speech in an effort to protect consumers. In 1914, the federal government passed the Federal Trade Commission Act, which charges the Federal Trade Commission (FTC) with protecting consumers from unfair and deceptive practices.3 The FTC has exercised its authority on a variety of issues, including monopolistic practices, advertising directed to children, health-related claims, and consumer privacy.

Americans are using social media sites in rising numbers. As consumers have turned more attention to social media, advertisers have increased spending to reach them in this new space. Social advertising provides new methods to target and analyze messages. In 2019, revenues from digital marketing reached $124.6 billion.4 Social media advertising revenues grew to $41.5 billion.5 Throughout this growth of online and mobile brand communications, existing laws have been extended to these paid messages, shared content, and social media interactions. However, the growth of online and mobile communications, including social media, has posed new challenges for lawmakers and exacerbated existing enforcement issues. This chapter will discuss key developments in the rules and regulatory actions that impact commercial speech and brand communication.

Truth in Advertising and Unfair Practices

The FTC is the federal agency primarily responsible for protecting consumers from deceptive or misleading messages and unfair business practices. In discharging this duty, the FTC has maintained that advertisers must have a reasonable basis for all claims, express or implied, conveyed to reasonable consumers.6 The FTC works to ensure truthful information in advertising through the requirement of disclosures, substantiation, and enforcement actions. The FTC has applied these rules to online communications in the same way they have been applied to traditional media in print, radio, and television. For example, in 2020, the FTC settled deceptive marketing charges with Teami, a wellness-centered brand that sells teas and detox products.7 Teami, through a social media campaign employing celebrities and influencers, claimed that its products had certain health benefits, including causing weight loss, treating migraines, fighting cancer, and preventing the flu. Teami had no reliable scientific evidence to substantiate any of these claims. The settlement ordered payment of more than $15 million – the equivalent of Teami product sales.

Although the FTC has held that online communications will be subjected to the same standards as traditional media, the development of new technologies, delivery methods, and user-generated content has pushed the FTC to focus on certain issues specific to protecting consumers online.

Deceptive Endorsements

Perhaps the most pervasive deception in online brand communications is the deceptive endorsement. Whether it is an undisclosed relationship between the brand and a social media influencer or fake product reviews posted by a brand's employees, these practices create a false impression of credibility for products and brands. The FTC has provided specific guidelines for brands and influencers engaging in online endorsements. In 1975, the FTC issued the first Guides Concerning Use of Endorsements and Testimonials in Advertising, which set out rules for advertisements using testimonials and endorsements. Specifically, the Guides define endorsement relationships, require that certain standards be met in all endorsement communications, and mandate that all "material connections" between an advertiser and an endorser be disclosed to the public.


An endorsement is defined as "any advertising message that consumers are likely to believe reflects the opinions, beliefs, findings, or experiences of a party other than the sponsoring advertiser, even if the views expressed by that party are identical to those of the sponsoring advertiser." In general, the FTC Endorsement Guides require that endorsements "reflect the honest opinions, findings, beliefs, or experience of the endorser."8 The Guides specifically address rules for endorsements by consumers, organizations, and experts. In these cases, the Guides require that where the audience would not reasonably expect it, any material connection between the endorser and the advertiser must be "fully disclosed."9 The public generally expects that celebrity statements endorsing a product or service represent a relationship between the advertiser and the celebrity; no disclosure needs to be made in that instance. However, individuals unknown to the general public do not raise similar expectations for the audience. Therefore, a clear disclosure must be made regarding payment or other compensation received for these types of endorsements. These word-of-mouth campaigns, centered on endorsements and testimonials, were expensive and required a great deal of effort before the technological advancements of interactive media. Social media "amplifies" consumer reviews, celebrity statements, and expert endorsements, as well as improving the processes for monitoring and promoting consumer opinions.10 As brands and individuals began leveraging these powers, it was unclear how sponsored campaigns online would be regulated under the Deceptive Trade Practices Act.11 Since social media have emerged as a major channel for consumers and marketers over the past decade, the FTC has, through investigations, guidelines, and rulemakings, addressed several key issues surrounding deceptive endorsements: fake reviews, influencer relationships, and deceptive formats.

Fake Reviews

Fake product reviews have been particularly problematic in the e-commerce and social media spaces. In one of the first actions against brands engaging in this type of deception, the FTC found that Reverb Communications, Inc., had engaged in misleading advertising when it failed to disclose its relationships with reviewers.12 Reverb provided marketing and public relations services to clients selling gaming applications via the Apple iTunes store. As part of these marketing efforts, employees of Reverb posted public reviews on iTunes with account names that did not disclose the relationship they had with the game developers. All of the reviews provided four- and five-star ratings and written accolades. The FTC found that these reviews misled consumers into believing independent users of the applications wrote them. Reverb was ordered to remove all reviews that violated the disclosure requirements.13 Nearly ten years later, the FTC banned Sunday Riley Skincare from posting fake reviews of the company's products.14 The FTC complaint in the case revealed that Sunday Riley Skincare managers, for two years, posted product reviews from fake accounts on the website of a major cosmetics retailer, Sephora. Once Sephora identified the fake reviews and removed them, Sunday Riley attempted to post more reviews using a VPN to hide its identity and mask the activity. Attempts to pay third parties to review products are just as concerning as using employees to post reviews. In 2019, the FTC settled a case with Cure Encapsulations, a manufacturer of weight-loss products. Cure Encapsulations contracted with a third-party website for Amazon product reviews, with the specific intent of keeping the product a five-star listing.15

Influencer Relationships

Increasingly, brands are including popular personalities – celebrities and social media influencers – in their communication strategies. When these brand affiliates or partners post reviews or information on products to their own social media accounts, FTC guidelines require that the relationship be disclosed. It is the brand's responsibility to ensure that disclosure happens in a clear and conspicuous way. FTC guidelines about the use of these kinds of commercial messages have specifically addressed online and social communications. In one example provided in the Guides, the FTC considers a tennis player who provides information on her social networking site about the results of her laser eye surgery. In the post, the tennis player names the clinic where the surgery was performed. Because her followers on the social network may not realize that she is a paid endorser for the clinic, the relationship needs to be disclosed.16 In the revised Guides, the FTC was careful to include examples of online endorsements by consumers, as well. One such example discusses a college student blogger who posts a review of a video game and system he has received for free from the hardware manufacturer. The FTC states that "the blogger should clearly and conspicuously disclose that he received the gaming system free of charge." Further, the FTC requires that the manufacturer advise the blogger that the relationship be disclosed and that the manufacturer monitor the blogger's posts for compliance.17 Teami, the weight-loss and wellness-centered company cited by the FTC for false health claims, also ran afoul of the agency's rules on endorsements when it partnered with well-known social media influencers, including Grammy-winning artist Cardi B. In addition to concerns about unsubstantiated health claims in these influencers' social media posts, the posts did not consistently disclose the brand partner relationship. Although the FTC historically placed responsibility on brands for this kind of omission, the celebrities and influencers in this case also received warning letters from the agency. The FTC requires relationships like that between Teami and Cardi B to be "clearly and conspicuously" disclosed, using "unambiguous language that consumers would easily notice and understand" before the "more" link in a social media post.18
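Because disclosure lapses like Teami's are easy to miss across dozens of influencer posts, part of a brand's compliance review can be automated. The following sketch is a hypothetical first-pass filter, not an FTC tool: the hashtag list, the flag_undisclosed helper, and the 125-character "visible caption" cutoff standing in for a platform's "more" truncation are all assumptions made for illustration, and flagged posts would still need human review.

```python
# Hypothetical first-pass disclosure check for influencer campaign posts.
# Assumption: a disclosure only "counts" if it appears in the portion of
# the caption visible before the platform truncates it behind a "more" link.

DISCLOSURE_TAGS = {"#ad", "#paid", "#sponsored"}  # illustrative, not an FTC list

def flag_undisclosed(posts, visible_chars=125):
    """Return posts with no disclosure tag in the assumed visible window."""
    flagged = []
    for post in posts:
        visible = post[:visible_chars].lower()
        if not any(tag in visible for tag in DISCLOSURE_TAGS):
            flagged.append(post)
    return flagged

if __name__ == "__main__":
    campaign_posts = [
        "Obsessed with this detox tea! #ad Link in bio",                      # compliant
        "My new favorite tea. " + "Morning routine details... " * 6 + "#ad",  # tag buried
    ]
    for post in flag_undisclosed(campaign_posts):
        print("NEEDS REVIEW:", post[:50], "...")
```

A check like this mirrors the concern in the Teami warning letters: the second post is flagged because its disclosure sits past the assumed truncation point, even though the hashtag is technically present.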


The FTC has ruled that a brand imposing a contractual requirement for compliance is not enough to preclude an advertiser from being held responsible for the nondisclosures of its advertising affiliates. In 2011, the FTC fined Legacy Learning Systems $250,000 for failing to disclose material connections with online reviewers.19 Legacy Learning Systems manufactures and sells instructional courses on DVD through a website. An affiliate program is the primary advertising method for these courses. Affiliates of the program work to direct traffic to Legacy's website and then receive between 20 and 45% commission for each course sold. Some of the affiliates are Review Ad affiliates, who place positive reviews in articles, blogs, and other online content. The reviews include hyperlinks to Legacy's website. Many of these reviews give the impression that they have been written by ordinary consumers. Although the contracts for the program mandated that the affiliates "comply with the FTC guidelines on disclosures," the FTC found that Legacy had not instituted a compliance-monitoring program, resulting in consumers receiving deceptive and misleading information. Moreover, agencies or publications that aid in the deception are held accountable.20 In 2016, the Creaxion agency managed promotion and endorsement deals for a mosquito repellant. The PR firm worked with Inside Gymnastics to promote the product in the magazine and connect with Olympic gold medalists to endorse the product on social media. The athletes failed to disclose the payments they received in exchange for the positive mentions. The magazine shared the athletes' posts with no disclosure of the relationship to the brand or the PR firm. The FTC entered a consent agreement with both the PR firm and the publisher of Inside Gymnastics, prohibiting future misrepresentations and ordering compliance monitoring for any future endorsement relationships.

One step that advertisers can take to promote this kind of compliance is to develop an established social media policy that clearly addresses under what circumstances disclosures must be made, as well as a system for compliance checks.21 In 2011, the FTC determined not to pursue an enforcement action against Hyundai Motor America, in part because such a policy was in place.22 The FTC staff reviewed a blogging campaign executed in the lead-up to the 2011 Super Bowl broadcast. Some of the bloggers were given gift certificates, but not all of them disclosed that information. The question was whether any of the bloggers were instructed not to disclose the gift certificates. The FTC staff determined that any such actions were the efforts of an individual working for a media firm hired by Hyundai to conduct the blogging campaign. Those actions conflicted with the social media policies at both Hyundai and the media firm, which specifically directed bloggers to disclose any compensation. Therefore, the FTC closed the investigation with no action against the media firm or Hyundai.23 However, cases like Creaxion indicate the FTC expects more active monitoring in addition to clear directives and policies.

A particular concern in social media has been the amount of space that may be required to make these disclosures. For example, the FTC Bureau of Consumer Protection (BCP) received questions about how to make disclosures effectively within the character limits on Twitter's platform. The BCP has suggested that users may disclose an endorsement relationship by appending a hashtag identifier, such as #paid or #ad. This should provide readers with sufficient information to evaluate the message without overburdening the advertiser.24

Deceptive Advertising Formats

The FTC has found that one measure of deceptive or misleading advertising is the format it takes. Native advertising is content that "bears similarity to the news, feature articles, product reviews, entertainment, and other material that surrounds it online."25 For example, in its 2015 Guide, the FTC considers a feature story in an online lifestyle magazine that features and promotes the sponsor's products.26 Advertising in news format is not unique to online media. In the 1960s, the FTC found that what appeared to be a newspaper column offering unbiased reviews of local restaurants was, in fact, a misleading advertisement.27 Although the column did not include pricing information, the FTC determined that this would not change its effect on readers. Instead, the FTC recommended clear labels for advertising messages. Since then, the FTC has continued to find advertising messages disguised as independent, editorial content to be misleading.28 In 2019, the FTC settled with Inside Publications, which published the online magazine Inside Gymnastics. In partnership with a public relations firm, the magazine published what appeared to be independent journalistic content on the 2016 Zika outbreak and mosquito repellant, with features on the brand promoted by the PR firm.29 The magazine included the product in two feature "articles" and designated it the "Official Mosquito Repellant" of the company. Employees were encouraged to use and promote the brand during the 2016 Olympic Games. Inside Publications was paid for all of these activities but failed to disclose that relationship to its audiences and ran the articles as independent content. Despite statements that the product had been provided free of charge to the athletes, the content was still found to be misleading because the payments to the athletes and the magazine were omitted.

Similarly, in 2016, the FTC found that not only had a retailer failed to disclose paid relationships with social media posters, but it also engaged in misleading native advertising. Lord & Taylor paid Nylon, an online fashion magazine, to publish an article about a new Lord & Taylor apparel line, including a feature photo of a single piece of apparel in the new line. The retailer reviewed the article prior to publication but did not instruct Nylon to include any disclosure statement. The FTC further defined the disclosure necessary for native advertising in the consent agreement reached with Lord & Taylor. Such disclosure must be "clear and conspicuous," meaning (1) it must be made in the same medium as the overall message, (2) visual disclosures must stand out so that they are easily noticed, (3) audible disclosures must be easily heard and understood, and (4) interactive media messages must include unavoidable disclosures. Additionally, the disclosure must generally be in "close proximity" to the triggering message, meaning it must be simultaneously viewed. A link or popup is not considered in "close proximity."30 In 2013, the FTC issued a guidance document providing specific application of the "clear and conspicuous" disclosure requirements, as well as illustrative examples.31 The FTC Enforcement Policy Statement on Deceptively Formatted Advertising underscores the importance for audiences to understand the source of promotional messages.32 Any format that may be misleading to audiences must be accompanied by a disclosure of the brand's relationship to the publication.

Social Contests and Sweepstakes

To build and maintain relevancy on social media, brands are engaging in a variety of promotions to increase followers, engagement rates, and sales. One strategy brands rely on to boost engagement is to employ social media promotions in the form of contests and sweepstakes. Not only does this marketing strategy build brand awareness and affinity, but it also provides brands with consumer data and user-generated content. However, there are legal considerations for brands engaging in this promotional strategy. Contests are generally defined as competitions measured by some standard of performance, while sweepstakes are won by chance and cannot require any sort of legal consideration in exchange for entry or increased odds.33 These distinctions were important to differentiate these promotions from lotteries, which have a turbulent past in the United States, periodically being legally prohibited or severely restricted. The FTC has regulated against unfair contests and sweepstakes, mandating disclosures of rules, processes, types, and number of prizes, as well as the odds of winning.34 Sweepstakes and contests have consistently been among the most common issues about which the federal agency receives complaints. In addition to the FTC rules, there are other federal and state laws governing various aspects of sweepstakes and contests. Of particular concern for sponsors of these promotions are the definition of legal consideration and the telecom and email marketing laws. Additionally, sponsors should pay close attention to any usage rules of the social media platforms that will host the promotions.

Legal Consideration and Chance in Contests and Sweepstakes on Social Media

Social media has significantly eased the logistics of contests and sweepstakes. Instead of mailing in a form, entries can be submitted digitally – perhaps as easily as clicking the like button. Tracking the entries is automated through social media analytics tools. And the benefits include valuable user data and user-generated content. Deploying contests or sweepstakes as an engagement strategy on social media requires careful planning. Brands should clearly identify the type of promotion being used to ensure compliance with the relevant laws and regulations. For a sweepstakes promotion, the brand must ensure that the public can enter without any required consideration. In some states, the ease of participation and data collection may count as legal consideration, shifting the legal framework from sweepstakes to lottery and triggering stricter regulations or even prohibitions. In these jurisdictions, consideration is not limited to cash entry – intellectual property licensing, such as submission of a photo, or even liking and sharing a brand's post may be legal consideration. Therefore, social media sweepstakes must have an additional method of entry, allowing the public to bypass any of these entry requirements. If the promotion is a contest, the brand must clearly define the criteria for a winning entry, avoiding any element of chance. Contests traditionally have been distinguished from lotteries and sweepstakes by involving skill or some other measurement of performance. Common examples on social media are photo and video challenges, which have the added benefit of producing user-generated content that engages audiences. Sponsors of these kinds of contests must be careful to ensure the winner is not chosen by chance or arbitrarily by an employee. Even contests that are decided by a voting public – using likes on posts or votes on a website – are not always considered exclusive of chance. Best practices include a mix of public voting and official judges applying a set of published criteria.35

Regardless of the type of prize promotion – sweepstakes or contest – brands must clearly disclose the official rules for the promotion. These disclosures should include the identity of the contest host, rules for and methods of entry, eligible dates of entry, how winners are selected, odds of winning (if chance is a factor), and an accurate description of the prizes. Every social post related to the promotion should include a link to these disclosures.36 In addition to promotional rules disclosures, the brand must ensure all participants are following FTC endorsement disclosures. Posts for the purpose of winning a contest or sweepstakes are considered endorsements and should be labeled that way to avoid consumer confusion. The FTC has suggested the use of #contest or #sweepstakes be part of the entry instructions and eligibility requirements.37

Laws on Text and Email Marketing

Spam generally refers to unsolicited commercial email.38 Since the mid-1990s, the FTC has worked to protect consumers from spam-related unfair practices.39 In 2003, the US Congress granted the FTC special jurisdiction over spamming practices when it passed the Controlling the Assault of Non-Solicited Pornography and Marketing Act (CAN-SPAM Act). The CAN-SPAM Act places certain requirements on marketers to protect consumers against unfair practices. For example, marketers may not use deceptive subject lines, they must include a valid return address, and the messages must include a method for the recipient to opt out of future messages. The FTC has been particularly vigilant against spam that contains fraudulent content, malware, and links to phishing websites – a form of identity theft that uses emails to get consumers to provide personal and financial information.40 In FTC v. Hill, Zachary Hill was charged with – and pleaded guilty to – fraudulently acquiring credit card numbers and Internet account information through email ostensibly sent to users from AOL and PayPal.41

One of the difficulties of controlling spam and related fraudulent practices is the constant development of new delivery methods. For example, botnets, or networks of hijacked computers, allow spammers to send bulk messages anonymously and remotely. The FTC continues to pursue these cases. In 2010, a district court judge permanently enjoined the operation of an internet service provider (ISP) that shielded and assisted botnet spammers.42 The ISP, which the FTC estimated controlled nearly 5,000 malicious software programs, also was ordered to pay more than $1 million to the US Treasury for disgorgement of ill-gotten gains. In 2011, the FTC filed the first spam action for deceptive short message service (SMS) ads sent to mobile phones.43 In the complaint, the FTC charged Phil Flora, a California resident, with sending "at least 5 million unsolicited commercial electronic text messages." Many of the messages advertised mortgage and debt relief programs, referring recipients to a website, loanmod-gov.net. The inclusion of "gov" as part of the website address misled consumers to believe that these programs were government-sponsored. The FTC settled the case for a judgment of nearly $60,000. The settlement also includes a prohibition on Flora sending any unsolicited commercial text messages or making any misrepresentations that he, or anyone else, is affiliated with a government agency.44 The FTC also enforces spam actions against companies that mislead message recipients as to the source of the emails. In 2016, for example, it investigated a marketing firm that was paid to send emails from hacked email accounts, misleading consumers to believe the emails were sent from their friends or family members. The emails linked to fake news stories about a weight-loss product and included fake testimonials about weight loss.45

In addition to case-by-case enforcement, the FTC and Congress have explored preventative options to protect consumers on a larger scale. Most significantly, Congress charged the FTC with establishing a national Do-Not-Email Registry.46 Upon investigation, the FTC reported that such a registry would be largely ineffective at reducing the incidence of spam. Rather, a registry of email addresses may pose privacy and security risks.47 However, the FTC does work with a variety of federal agencies, industry associations, and consumer groups to lead consumer education efforts through an interactive site launched in 2005.48 OnGuardOnline.gov provides tips for consumers in the form of articles, games, and videos.49 Additionally, the FTC hosts a spam-reporting service that allows consumers to forward unsolicited messages, including phishing scams, to be stored in a law enforcement database for further investigation. However, the FTC continues to urge the industry to develop technological solutions, such as authentication systems, to combat these issues.50

Platform License Agreements

Sponsors of contests and sweepstakes must craft the promotion to ensure it comports with all state and federal laws, as well as platform license agreements. Each social media platform has its own set of user guidelines that may directly or indirectly address promotions by brands or influencers. For example, Meta-owned platforms, including Facebook and Instagram, require a release by participants that acknowledges the promotion is not in any way associated with the platform.51 Twitter's terms of service allow brands to conduct contests and sweepstakes but specifically require the rules to make ineligible anyone creating multiple accounts to enter more than once.52 YouTube does not allow any contests or sweepstakes to be conducted through ad units, but brands can use their content channels as long as they publish and abide by clear rules for the contest or sweepstakes. These guidelines have changed over time, so brands must be aware of any new limits or requirements to comply with the platform terms of service.53

Audience Profiling

Social media companies have come under fire for the extent to which they target users for advertising purposes. Conspiracy theories have cropped up wondering – is Facebook hijacking my microphone and listening to my conversations?54 These accusations generally follow on the heels of paid promotions that seem a little too coincidental. CEOs at Facebook and Instagram have denied the charges, explaining that the ads are likely just effective profile targeting. But this effective targeting involves the collection, use, and – sometimes – sharing of vast amounts of data, including individuals' online activity. The FTC has urged stringent self-regulation to address the particular consumer privacy concerns raised by this type of behavioral advertising. The necessary data collection involves using tracking devices, such as cookies, to scan web activity in real time to assess a user's location, age, income, and in some instances, medical conditions. Additional information collected and used to target ads and content includes political affiliation, racial or ethnic affinity, religion, hobbies, and interests. And despite the seeming ubiquity of these trackers, most Americans don't realize the scope of information being collected, stored, and used to serve them commercial messages. In a survey of Facebook users, the Pew Research Center found 74% did not know Facebook collected this kind of data, and approximately half were uncomfortable with the practice.55

Advertisers engaging in behavioral tracking are able to analyze a person's web viewing habits "to predict user preferences or interests to deliver advertising to that computer or device based on the preferences or interests inferred from such Web viewing behaviors."56 Approximately 80% of all online advertisements are served as a result of behavioral targeting.57 Behavioral advertising revenues for 12 advertising networks were approximately $598 million.58 The FTC has long recognized the serious concerns these practices raise for consumers, including loss of privacy, fraud, and deceptive marketing.59 In 1996, the FTC staff recommended that the FTC continue to monitor issues of online privacy but concluded that self-regulation and technological solutions may be sufficient to protect consumers' privacy in the marketplace. However, in 2000, the FTC reported to Congress on online profiling and recommended that Congress legislate online profiling to mandate compliance with established fair information practices.60 Although the FTC praised industry efforts at self-regulation, it noted that not all advertisers and website owners were allied with the organizations issuing these guidelines.61 Federal legislation would mandate compliance for all websites and advertising networks and provide an agency with authority to enforce privacy protections.62 Congress has failed to pass any such law, and regulation continues to rest on FTC enforcement, which largely relies on self-regulation.

Nearly ten years after those initial efforts in Congress, the FTC released a statement supporting a policy of industry self-regulation, including proposed principles to guide the industry's efforts.63 These principles focused on transparency, data security, changes in privacy policies, and sensitive data.64 In response, the industry, represented by two key groups – the Network Advertising Initiative (NAI) and the Digital Advertising Alliance (DAA) – released a series of guidelines and tools to advance privacy protections for online consumers.65 These guidelines have focused on improving the transparency and clarity of privacy notices, educating consumers about behavioral or interest-based advertising techniques, and providing consumers with user-friendly opt-out tools.66 The NAI conducts annual compliance reviews of member companies, examining data collection, use, and retention policies, as well as disclosure practices.67 These reports indicate that member companies improved their notices in various ways, including providing specific data retention period information, increasing the visibility and readability of notices, using more prominent "privacy" or "opt-out" labels, and using an industry-developed advertising option icon on served advertisements.68

However, in December 2010, the FTC issued a report stating that these efforts "have been too slow and up to now have failed to provide adequate and meaningful protection."69 The report called for a do-not-track mechanism that would allow consumers to opt out of data collection for ad targeting with one simple, easy-to-use tool.70 Following this report, privacy became a hot topic in Congress, with more than ten privacy-related bills introduced in the 112th Congress in 2011. Among them was the Do Not Track Me Online Act, which would have authorized the FTC to adopt regulations requiring companies to use an online opt-out tool.71 Despite these efforts, no federal law has been passed to regulate data collection for targeted advertising practices.

The FTC has engaged in enforcement actions against companies for failing to protect consumers' personal information. For example, in 2011, the FTC settled complaints against social networking sites Facebook and Twitter that focused on the failure of those companies to protect users' personal information. A settlement with Google over its Buzz service alleged inadequate privacy policies and ordered the company to implement a "comprehensive privacy policy." The settlement agreement also orders the company to submit to independent privacy audits for 20 years. The agency also has pursued cases specifically dealing with behavioral targeting.72 Chitika, an online advertising company, engages in behavioral tracking and reportedly delivers three billion ad impressions a month. In a complaint against the ad company, the FTC alleged that Chitika deceptively offered consumers an opt-out mechanism that stored the consumer's preference not to be tracked for only ten days. After ten days had passed, Chitika would begin tracking those consumers again until they opted out again. The settlement agreement requires that every targeted ad served by Chitika include a hyperlink to a clear opt-out tool that allows consumers to opt out of tracking for at least five years.73

The FTC has continued to focus on consumer choice, specifically broad opt-out schemes, especially as marketers have focused more on cross-device tracking and messaging. According to former FTC commissioner Julie Brill, the FTC found that much of the cross-device tracking was done without notifying consumers. Brill noted that consumers do not have "adequate" ability to opt out of tracking, cross-device or otherwise, despite nearly two decades of the FTC pushing for more consumer control.74

Children's Privacy Online

Since Congress and the FTC began monitoring behavioral advertising, a key concern has been protecting information collected on minors using online services. In 1998, Congress passed the Children's Online Privacy Protection Act (COPPA).75 The act delegated to the FTC the authority to regulate websites targeting children under the age of 13. COPPA requires that these websites limit collection to information that is necessary for the child to use the websites' functions. Additionally, these sites must notify parents about the type of information being collected from children and get consent from the parents before collection, use, or disclosure begins. These sites must develop procedures to reasonably secure the personally identifiable data and allow parents access to any data collected about their children. The FTC has engaged in enforcement actions against smartphone app developers and social network sites for their deceptive practices in collecting information from children. For example, in 2011, the FTC settled with a mobile app developer for failing to provide notice and gain consent before collecting information from children using games, such as Emily's Girl World, which included games and a journal for users to record "private" thoughts.76 In response to children's increasing use of mobile technology and social networks, the FTC made several key changes to the COPPA rule in 2013, including expanding the definition of personal information to include IP addresses and geolocation information, as well as persistent identifiers placed on a computer for tracking purposes, requiring websites to obtain parental consent for these practices.77

The Internet of Things

Smart devices have provided brands potential not only for additional data collection and analysis but also for consumer interactions through chatbots and artificial-intelligence-powered conversation. Advertisers use artificial intelligence (AI) not only to help target ads but to create the most effective ads. This has largely been limited to crafting effective headlines for email marketing and calls-to-action statements for social ads, but the potential for more directed conversations is fast becoming a reality. Chatbots are now a common tool for first-level customer service interactions, with many companies working to achieve a more "humanlike" experience.78 And AI "personal assistants," such as Alexa and Siri, are becoming more integrated into consumers' daily lives. Brands can take advantage of this by creating content specifically for these engagements. For example, brands can create skills that Alexa can draw from in her responses to voice searches. Tide, marketed by P&G, created a skill for when people need to remove a stain, and Purina created a skill to help new dog owners clean up puppy messes.79 As these types of interactions increase and expand, there are concerns about how they will transform commercial speech. One concern is the ubiquity of these communications: integrated into daily life, they may not seem like advertising at all. In 2018, California became the first state to pass a law regulating the deceptive use of this type of communication. Specifically, the California Bot Act makes it unlawful to use a chatbot to talk with a person in California with the intent to hide or misrepresent the fact that the "speaker" is not a live person.80 Presumably, the content delivered by these chatbots and AI devices would be regulated by existing federal and state false advertising laws. However, some scholars have questioned how liability would be imposed if the false or misleading statements originate from an AI.81 Brands, and the advertising industry at large, must develop ethical fail-safes for any AI systems deployed in consumer interactions.
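To make the Bot Act's disclosure requirement concrete, here is a minimal sketch of a brand chatbot that identifies itself as automated before engaging. The BOT_NOTICE wording, the open_conversation helper, and the placement of the notice at the start of the exchange are illustrative assumptions, not statutory language or a compliance guarantee.

```python
# Minimal sketch: a customer-service bot that identifies itself up front,
# so the "speaker" is never misrepresented as a live person.
# All names and wording here are hypothetical illustrations.

BOT_NOTICE = "Heads up: you're chatting with an automated assistant, not a live person."

def open_conversation(user_greeting):
    """Start an exchange with the disclosure first, then the substantive reply."""
    return [
        BOT_NOTICE,
        f"Thanks for reaching out! You asked: {user_greeting!r}. How can I help?",
    ]

if __name__ == "__main__":
    for line in open_conversation("Do you have this jacket in medium?"):
        print(line)
```

Placing the notice at the top of the exchange, rather than behind a link to a policy page, parallels the "clear and conspicuous" disclosure logic the FTC applies to endorsements.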


Social Advertising in Heavily Regulated Industries

Although the FTC is the agency that protects consumers in the general marketplace, other federal agencies offer specific rules and guidance related to advertising in heavily regulated industries. The Food and Drug Administration (FDA) polices the labeling, advertising, and public relations efforts for prescription medications and medical devices.82 The Securities and Exchange Commission (SEC) ensures communications from corporations comply with the regulation of publicly traded securities.83 Brand managers operating in these industries have had to address specific regulatory concerns and restrictions.

Pharmaceutical Advertising

In addition to general requirements of truth and accuracy, the FDA requires that commercial messages for pharmaceuticals include the generic or "established" name of the drug, a detailed list of ingredients, and a summary of side effects and contraindications.84 These mandates commonly are enforced through notices of violation and warning letters sent to pharmaceutical companies, describing the offending conduct, asking for remediation (such as removal or corrective advertising), and threatening further legal action. Despite significant delays in issuing guidance for online communications, the FDA in 2014 began issuing draft guidance documents for pharmaceutical companies engaging with the public on online and mobile platforms, including company websites, online videos, online advertising, and social media. Generally, the concerns expressed by the FDA are that pharmaceutical companies either overstate the efficacy of a drug, omit or minimize the risks associated with the drug, or fail to correct inaccurate third-party information about a drug. These common concerns are the subject of the first two draft guidance documents on social media. Additional guidance documents provide direction for specific concerns, including how pharmaceutical companies can comply with the requirement to include the established drug name in all marketing materials and under which circumstances the companies must report adverse drug effects reported on the Internet. Although these draft guidances had not been finalized as of 2022, the FDA has initiated a study of the use of social influencers by pharmaceutical companies85 and has issued warning letters for misleading communication in social media posts.86

Investor Relations

Investors are increasingly turning to the Internet as a source of information. Brokerage firms, such as TD Ameritrade and Charles Schwab, are devoting resources to provide their clients with social media access and tools. As online communications have become increasingly important to investors, the SEC has encouraged companies to use these new platforms while underscoring the importance of compliance with existing securities regulations. The SEC was established in 1934 to ensure the public receives timely, complete, and truthful information about publicly traded securities.87 This is accomplished through the requirement for and oversight of specific financial filings, such as corporate registration forms, annual and quarterly filings, annual reports to shareholders, and other corporate financial communications.88 An industry self-regulatory agency – the Financial Industry Regulatory Authority (FINRA) – works both independently and in coordination with the SEC to offer guidance to securities firms on a variety of regulatory issues and to penalize firms that violate these regulations. Both the SEC and FINRA have released documents addressing the application of existing financial communications regulations to social media messaging. These regulations allow social media platforms to be used by companies to announce "key information . . . so long as investors have been alerted about which social media will be used to disseminate such information."89 However, FINRA has developed guides that specifically address the source and type of information disseminated. Additionally, US law and SEC rules regulate the use of social media for raising capital investments.

Online Communications to the Public

Publicly traded companies are required to disclose to the public information relevant to investment decisions. Additionally, companies must ensure that the information disclosed is truthful and accurate.90 The SEC has ruled that these disclosure requirements apply equally to online disclosures and to those made through traditional print and broadcast media. The SEC has found that company websites, as well as social media platforms, can be used as official channels of communication.91 A 2008 SEC guidance letter provides that when a company's website is recognized by the public as a channel of information, posting material information on the site satisfies the requirement of disclosing information to the public at large, as opposed to a more selective group.92 The SEC has specifically noted that any online communications must be truthful and accurate, avoiding any fraudulent or misleading messaging. For example, the SEC recommends carefully explaining the context of any link to third-party content that the company is not adopting. This will ensure that consumers do not rely on the information found on the linked page as information directly from the company.

The SEC recognized the contribution that certain interactive features of a website can make to the "robust use" of online communication. The SEC encourages the use of these technologies but cautions companies to educate any brand representative using blogs, or other forums, that they are acting "on behalf of the company." Companies should, therefore, develop procedures for monitoring these communications. The SEC guidance provides that this responsibility extends to statements made on behalf of the company via third-party websites, as well. This indicates that companies should apply the SEC recommendations to sponsored social media accounts, as well as to traditional, owned website content.

In 2010, FINRA released guidance more specific to the features and use of social media channels. FINRA has categorized online communications tools as advertisements, correspondence, sales literature, and public appearances.93 For example, information posted on publicly available websites and feeds, such as a Twitter profile, is considered an advertisement. However, password-protected websites and social media feeds are considered sales literature. Content posted in real time that allows for consumer interaction, such as a chat or a live feed, is considered a public appearance. FINRA's 2010 guidance document addressed the recordkeeping and pre-approval requirements for these various communications to the public.94 Advertisements, or static content on social media sites, such as profile information, require pre-approval by a principal of the company. Interactive elements, or public appearances, do not require pre-approval, but companies should archive all communications through these media for record retention. This should be considered as the company determines which technologies and social networks to use for public communication. Additionally, companies must develop a method to supervise these communications to ensure content regulations are not violated. FINRA also has addressed third-party statements. Generally, FINRA does not consider a post by a consumer to represent the company; therefore, approval and recordkeeping rules would not apply. However, if a company republishes the comment through a Facebook share or a retweet on Twitter, the company can be found to have endorsed the statement.95 Some statements online may constitute investment fraud, as individuals have used social media to manipulate the stock market by posting false or misleading information about a company. For example, in 2015, the SEC filed charges against James Craig for tweeting false statements that publicly traded companies were under investigation, allowing Craig to purchase stock in the companies when their prices dropped, while in another case, posters illegally published false information to increase a stock's price in order to profit from the sale of their shares.96

Crowdfunding

Social media tools also have enabled companies to raise funds in new ways. For example, crowdfunding allows companies to solicit small investments from thousands of investors.97 In 2011, the SEC settled with two entrepreneurs over an "experiment" to raise funds online to purchase Pabst Brewing Company. The SEC stated that the entrepreneurs violated securities law when they launched a website seeking pledges of money in exchange for ownership shares. This online offering triggered requirements for security registration and financial information disclosures.98 The case demonstrated the fundraising opportunities available to small companies via online communication tools. Other crowdfunding websites, such as Kickstarter, allow entrepreneurs to raise capital without triggering an SEC investigation because the capital is considered a donation; investors do not receive an ownership share of the company. Pebble, a startup company developing a digital watch that links to smartphones to run and control apps, set a record at the time for money raised through Kickstarter. The company offered returns including exclusive updates, prototypes of the watch, and the opportunity to vote on the fourth color option for the watch. The company raised more than $10 million from more than 65,000 backers. In 2012, the JOBS Act created an exception to traditional securities requirements for small companies going public via online offerings.99 The JOBS Act exception allows companies to raise up to $1 million within a 12-month period through the online issuance of stock shares. In 2015, the SEC adopted new rules to address the impact of this law on existing crowdfunding rules. These rules permit companies to engage in equity crowdfunding without registering with the SEC as brokers, meaning that individual investors can now invest in startups in exchange for a share of the company rather than a nonmonetary return.
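The $1 million, 12-month ceiling works like a rolling-window sum, which the hypothetical helper below illustrates. The within_cap function, the sample dates, and the treatment of the limit as a strict cap are assumptions made for this example; actual eligibility under the SEC's crowdfunding rules turns on far more than this arithmetic.

```python
# Simplified sketch of the JOBS Act's $1 million / 12-month online-issuance
# ceiling discussed above. Illustrative only; not legal or compliance advice.
from datetime import date, timedelta

CAP = 1_000_000  # the ceiling cited in the text

def within_cap(raises, new_amount, today):
    """True if new_amount plus raises from the trailing 12 months stays at or under the cap."""
    window_start = today - timedelta(days=365)
    trailing = sum(amount for raised_on, amount in raises if raised_on >= window_start)
    return trailing + new_amount <= CAP

if __name__ == "__main__":
    history = [(date(2021, 9, 1), 400_000), (date(2020, 2, 1), 700_000)]
    # The 2020 raise falls outside the trailing window, so 400,000 + 550,000 fits.
    print(within_cap(history, 550_000, date(2022, 3, 1)))  # True
```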

FREQUENTLY ASKED QUESTIONS

1. Can social media influencers endorse products?
Yes. The FTC defines an endorsement as “[a]ny advertising message . . . that consumers are likely to believe reflects the opinions, beliefs, findings, or experiences of a party other than the sponsoring advertiser, even if the views expressed by that party are identical to those of the sponsoring advertiser.” Bloggers and social media users can endorse commercial products and services through online and social media posts. However, if the poster has any relationship with the company that offers the product or service for sale, that relationship must be disclosed in a clear and prominent way. For example, if a blogger receives a free gaming system to test, he or she must disclose in any blog post about the performance of that system that he or she received the product for free. The FTC’s Bureau of Consumer Protection has noted that such disclosures can be made via social media shortcuts. For example, on
Twitter, users were concerned that the limit of 140 characters to a post would make the required disclosures too difficult. The FTC Bureau of Consumer Protection has suggested that including such hashtags as #paidad, #paid, or #ad may be sufficient to help consumers evaluate the usefulness of the sponsored statements.

2. Are there any restrictions on social media giveaways?
Yes. There are federal and state law considerations when conducting sweepstakes and contests, as well as license agreements for each social platform. Brands should avoid promotions that would be considered lotteries, as these are more heavily regulated or even prohibited throughout the United States. Lotteries are characterized by (1) legal consideration, (2) chance, and (3) prizes. Social media promotions or giveaways should instead focus on sweepstakes (no consideration required) or contests (no chance involved). Brands must clearly disclose all official rules and criteria for winning. Developing these rules requires careful planning, as chance and consideration are not always straightforward or standardized concepts.


3. Does engaging in behavioral targeting online run afoul of FTC regulations?
Currently, the only FTC rules on behavioral targeting are principles for self-regulation. These principles address the need to protect consumer privacy and data security. Companies collecting or maintaining consumer data should ensure reasonable data security measures are in place. Additionally, online consumers should receive adequate notice of the data collection and use, as well as a meaningful opportunity to opt out of targeting.


4. Are pharmaceutical companies liable for third-party comments on Facebook about off-label uses of drugs?
Pharmaceutical companies are not liable under FDA regulations for unsolicited comments regarding off-label uses of drugs. However, companies can respond to individual questions about these uses in a truthful and accurate way using one-on-one communications. Companies may also respond publicly by providing non-promotional materials that clarify the FDA-approved uses of a drug. For example, the company could post to a public forum a statement that the drug is not approved for the related use with a link to the FDA-approved drug packaging or patient information sheet.


5. Do social media posts, and other interactive online communications, trigger SEC scrutiny?
If the posts are on behalf of a publicly traded company, they are subject to securities law requirements. Companies must ensure that the communications are truthful and accurate. Additionally, companies should develop methods to archive these communications to the public under SEC recordkeeping requirements.
6. Can companies use online websites and social media platforms to seek investments for startup companies?
Yes, but these efforts are subject to securities regulations. In 2012, President Obama signed into law the JOBS Act, which allows companies to raise up to $1 million within a 12-month period through the online issuance of stocks without triggering many of the existing SEC requirements. The SEC adopted rules implementing this provision in 2015, and additional requirements may be triggered by the practice. All investments exceeding $1 million per year trigger traditional SEC registration and disclosure requirements.

Notes

1. Virginia State Board of Pharmacy v. Virginia Citizens Consumer Council, Inc., 425 U.S. 748, 764 (1976).
2. Central Hudson Gas & Elec. Corp. v. Public Serv. Comm’n, 447 U.S. 557, 564 (1980).
3. 15 U.S.C. §§ 41 et seq.
4. Internet Advertising Revenue Report, IAB 3 (2020), www.iab.com/wp-content/uploads/2020/05/FY19-IAB-Internet-Ad-Revenue-Report_Final.pdf.
5. Press Release, IAB Internet Advertising Revenue Report for 2020 Shows 12.2% Increase in Digital Advertising, Despite COVID-19 Economic Impacts, PR Newswire, April 7, 2021.
6. Advertising Substantiation Policy Statement, appended to Thompson Med. Co., 104 F.T.C. 648, 839 (1984), aff’d, 791 F.2d 189 (D.C. Cir. 1986).
7. Teami, LLC, et al., (U.S. M.D. Fla. 2020) (No. 8:20-cv-518-T-33TGW).
8. 16 C.F.R. § 255.1.
9. 16 C.F.R. § 255.5.
10. William McGeveran, Disclosure, Endorsement, and Identity in Social Marketing, 2009 Univ. Ill. L. Rev. 1105, 1112 (2009).
11. 15 U.S.C. § 45 (2016).
12. Reverb Communications, Inc., F.T.C. Docket No. C-4310 (November 22, 2010).
13. Id.
14. Sunday Riley Modern Skincare, LLC, F.T.C. Docket No. C-4729 (November 6, 2020).

15. Cure Encapsulations, Inc., (E.D. N.Y. 2019) (No. 19-cv-982).
16. 16 C.F.R. § 255.5, Example 3.
17. Id. at Example 7.
18. Teami, LLC, et al., (U.S. M.D. Fla. 2020) (No. 8:20-cv-518-T-33TGW).
19. In the Matter of Legacy Learning Systems, Inc., F.T.C. Docket No. C-4323 (June 1, 2011).
20. Creaxion Corp., et al., F.T.C. Docket No. C-4668 (January 31, 2019), www.ftc.gov/system/files/documents/cases/c-4668_172_3066_creaxion_decision_and_order_2-8-19.pdf.
21. See, e.g., In re: Hyundai Motor America, FTC File No. 112–3110 (November 16, 2011); In re: AnnTaylor Stores Corp., F.T.C. File No. 102–3147 (April 20, 2010).
22. Hyundai Motor America, F.T.C. File No. 112–3110.
23. Id.
24. Bureau of Consumer Protection, Fed. Trade Comm’n, Revised Endorsement Guides: What People Are Asking (June 2010), http://business.ftc.gov/documents/bus71-ftcs-revised-endorsement-guides-what-people-are-asking.
25. Fed. Trade Comm’n, Native Advertising: A Guide for Businesses (2015).
26. Id.
27. Fed. Trade Comm’n, Enforcement Policy Statement on Deceptively Formatted Advertisements (2016) (citing Statement in Regard to Advertisements That Appear in Feature Article Format, FTC Release (November 28, 1967); Advisory Opinion No. 191, Advertisements which appear in news format, 73 F.T.C. 1307 (1968)).
28. Id. at 3.
29. Creaxion Corp., et al., F.T.C. Docket No. C-4668 (January 31, 2019).
30. In Re Lord & Taylor, F.T.C. Docket No. C-4576 (May 20, 2016).
31. Fed. Trade Comm’n, Dot Com Disclosures: How to Make Effective Disclosures in Digital Advertising (March 2013).
32. Fed. Trade Comm’n, Enforcement Policy Statement on Deceptively Formatted Advertisements (2016).
33. James B. Egle & Jeffrey A. Mandell, Sweepstakes and Contests in the Digital Age, 37 Franchise L.J. 43, 48 (2017).
34. Id. at 50–51.
35. Id.; see also Kelly L. Galligan & Michael A. Hellbusch, Navigating Social Media Sweepstakes and Contests, 61 Orange County Lawyer 48 (2019).
36. Galligan & Hellbusch, supra note 35.
37. Press Release, Fed. Trade Comm’n, FTC Staff Reminds Influencers and Brands to Clearly Disclose Relationship: Commission Aims to Improve Disclosures in Social Media Endorsements (April 19, 2017), www.ftc.gov/news-events/press-releases/2017/04/ftc-staff-reminds-influencers-brands-clearly-disclose.
38. See Black’s Law Dictionary 1430 (8th ed. 2004).
39. Division of Marketing Practices, Fed. Trade Comm’n, Spam Summit: The Next Generation of Threats and Solutions, Federal Trade Commission (November 2007).
40. Id.
41. CV No. H 03–5537 (S.D. Tex. Filed December 3, 2003), www.ftc.gov/os/caselist/0323102/0323102zkhill.htm.
42. Fed. Trade Comm’n v. Pricewert, No. C-09-CV-2407 (N.D. Cal. April 8, 2010).
43. Fed. Trade Comm’n v. Flora, No. 11-CV-00299 (C.D. Cal. August 16, 2011).
44. Id.
45. Fed. Trade Comm’n v. Tachht, No. 8:16-cv-01397 (M.D. Fla., filed June 2, 2016).
46. 15 U.S.C. § 7708.
47. Report to Congress, Fed. Trade Comm’n, National Do Not Email Registry (June 2004).


48. Fed. Trade Comm’n, Protecting Consumer Privacy in an Era of Rapid Change, Preliminary Staff Report 14 note 32 (December 2010).
49. See generally, OnGuardOnline.gov.
50. Spam Summit, supra note 39 at 19.
51. Pages, Groups, and Events, Facebook Terms of Service, www.facebook.com/policies_center/pages_groups_events.
52. Guidelines for Promotions on Twitter, Twitter Platform Use Guidelines, https://help.twitter.com/en/rules-and-policies/twitter-contest-rules.
53. Egle & Mandell, supra note 33.
54. See, e.g., Kaitlyn Tiffany, The Perennial Debate about Whether Your Phone Is Secretly Listening To You, Explained, Vox, December 28, 2018, www.vox.com/the-goods/2018/12/28/18158968/facebook-microphone-tapping-recording-instagram-ads.
55. Pew Research Center, Facebook Algorithms and Personal Data, January 16, 2019, www.pewresearch.org/internet/2019/01/16/facebook-algorithms-and-personal-data.
56. American Ass’n of Advertising Agencies, the Ass’n of National Advertisers, the Better Business Bureau, the Direct Marketing Ass’n, and the Interactive Advertising Bureau, Self-Regulatory Principles for Online Behavioral Advertising (July 2009) [hereinafter Coop Principles], www.iab.net/media/file/ven-principles-07-01-09.pdf.
57. IAB Tells Congress Privacy Bills May Harm Business and Consumers, Interactive Advertising Bureau, www.iab.net/public_policy/1296039 (last accessed March 28, 2011); see also Omar Tawakol, Forget Targeted Ads – I’d Rather Pay for Content, MediaPost.com, February 15, 2011, www.iab.net/public_policy/1296039.
58. Howard Beales, The Value of Behavioral Targeting, Network Advertising Initiative, March 24, 2011, www.networkadvertising.org/pdfs/Beales_NAI_Study.pdf. The report states that total ad revenues for the 12 companies equaled $3.32 billion; 17.9% of that was attributed to behavioral targeting.
59. Fed. Trade Comm’n, Staff Report: Public Workshop on Consumer Privacy on the Global Information Infrastructure (1996).
60. Fed. Trade Comm’n, Privacy Online: Fair Information Practices in the Electronic Marketplace: A Report to Congress (2000).
61. Id. at 35.
62. Id. at 36.
63. Fed. Trade Comm’n, Online Behavioral Advertising: Moving the Discussion Forward to Possible Self-Regulatory Principles (December 2007), www.ftc.gov/os/2007/12/P859900stmt.pdf.
64. Id.
65. See generally, NAI website, www.networkadvertising.org; The Self-Regulatory Program for Online Behavioral Advertising, www.aboutads.info (last accessed March 2, 2012).
66. See, e.g., Interactive Advertising Bureau, Understanding Online Advertising, www.iab.net/privacymatters/; Self-Regulatory Principles for Online Behavioral Advertising, July 2009, www.aboutads.info/resource/download/seven-principles-07-01-09.pdf; Self-Regulatory Principles for Online Behavioral Advertising Implementation Guide, October 2010, www.ftc.gov/sites/default/files/documents/reports/federal-trade-commission-staff-report-self-regulatory-principles-online-behavioral-advertising/p085400behavadreport.pdf; NAI, Opt Out of Behavioral Advertising, www.networkadvertising.org/managing/opt_out.asp.
67. NAI, 2009 Annual Compliance Report, December 30, 2009, www.networkadvertising.org/pdfs/2009_NAI_Compliance_Report_12-30-09.pdf; NAI, 2010 Annual Compliance Report, February 18, 2011, www.networkadvertising.org/pdfs/2010_NAI_Compliance_Report.pdf.


68. NAI, 2010 Annual Compliance Report, 13–15. For more information on the icon, see Advertising Option Icon Application, www.aboutads.info/participants/icon.
69. Fed. Trade Comm’n, Protecting Consumer Privacy in an Era of Rapid Change: A Proposed Framework for Businesses and Policymakers, Preliminary FTC Staff Report iii (December 2010).
70. Id. at 63.
71. H.R. 654, 112th Cong. (2011).
72. In the Matter of Chitika, Inc., F.T.C. File No. 1023087 (Complaint filed March 14, 2011).
73. In the Matter of Chitika, Inc., F.T.C. Docket No. C-4324 (June 7, 2011).
74. Mike Shields, FTC Commissioner Urges Ad Industry to Let Consumers Opt Out of Tracking; Julie Brill Also Warns Publishers and Advertisers to Make Sure Native Ads Are Clearly Labeled, The Wall Street Journal, January 21, 2016.
75. 15 U.S.C. §§ 6501–6506.
76. U.S. v. W3 Innovations, CV11–03958 (N.D. Cal. 2011).
77. Children’s Online Privacy Protection Rule, 16 C.F.R. Part 312, 78 Fed. Reg. 3972 (January 17, 2013).
78. Cammy Crolic, When Humanlike Chatbots Work for Consumers – and When They Don’t, The Wall Street Journal, November 22, 2021.
79. Nick Chasinov, The Ultimate Guide to Alexa Skills Marketing, HubSpot.com, April 11, 2019, https://blog.hubspot.com/marketing/alexa-skills-marketing.
80. California Business and Professions Code, §§ 17940–17943.
81. Kate Tokeley, The Power of the “Internet of Things” to Mislead and Manipulate Consumers: A Regulatory Challenge, 2 Notre Dame J. Emerging Tech. 111, 129–140 (2021).
82. Food, Drug and Cosmetic Act, 21 U.S.C. §§ 352–353.
83. Sec. & Exchange Comm’n, The Investor’s Advocate: How the SEC Protects Investors, Maintains Market Integrity, and Facilitates Capital Formation, www.sec.gov/about/whatwedo.
84. Prescription Drug Advertisements Rule, 21 C.F.R. § 202.1 (2008).
85. Notice, Agency Information Collection Activities; Proposed Collection; Comment Request; Endorser Status and Explicitness of Payment in Direct-to-Consumer Promotion, 85 FR 4994 (January 28, 2020).
86. See, e.g., Press Release, FDA, FTC Take Action to Protect Kids by Citing Four Firms That Make, Sell Flavored e-liquids for Violations Related to Online Posts by Social Media Influencers on Their Behalf, FDA, June 7, 2019, www.fda.gov/news-events/press-announcements/fda-ftc-take-action-protect-kids-citing-four-firms-make-sell-flavored-e-liquids-violations-related; Naja Rayne, Kim Kardashian West Warned by FDA over Promoting Morning Sickness Drug, Time, August 13, 2015.
87. 15 U.S.C. § 78gg.
88. Sec. & Exchange Comm’n, The Investor’s Advocate: How the SEC Protects Investors, Maintains Market Integrity, and Facilitates Capital Formation, www.sec.gov/about/whatwedo.
89. Sec. & Exchange Comm’n, Report of Investigation Pursuant to Section 21(a) of the Securities Exchange Act of 1934: Netflix, Inc., and Reed Hastings, Release No. 69279, April 2, 2013.
90. 17 C.F.R. § 240.10b-5.
91. Commission Guidance on the Use of Company Web Sites, 73 Fed. Reg. 45,862 (August 7, 2008), codified at 17 C.F.R. pt. 241 (2008).
92. Id.
93. FINRA, Guide to the Web for Registered Representatives, www.finra.org/industry/issues/advertising/p006118.


94. FINRA, Guidance on Blogs and Social Networking Web Sites, Reg. Notice 10–06 (2010).
95. Id.
96. See SEC v. Craig, 15-cr-00517 (N.D. Cal. filed November 5, 2015); SEC v. McKeown, Civil Action 10–80748-CIV-Cohn (S.D. Fla. June 23, 2010).
97. Jean Eaglesham & Jessica Holzer, SEC Boots Up for Internet Age, The Wall Street Journal, April 9, 2011 at B1.
98. Michael Migliozzi, II & Brian William Flatow, SEC No. 3–14415 (June 8, 2011).
99. Jumpstart Our Business Startups Act, Pub. L. No. 112–106 (2012).

Chapter 6

Account Ownership and Control
Jasmine E. McNealy
UNIVERSITY OF FLORIDA

Social capital is a concept with many definitions. Some scholars have called it the goodwill, fellowship, sympathy, and social intercourse that happens in families or close-knit groups. Others have likened it to the function of networked relationships or social structures.1 However defined, nowhere is the concept of social capital more exemplified than on social networking sites. From followers to friends to likes and shares, the connections inherent in social media are valuable to both individuals and businesses.

Odds are, you have created an account on some social media platform. Actually, you probably have not stopped after creating the account but have gone about the business of creating a presence on that platform. On Twitter, this would involve choosing an account handle and associated name, uploading a photo or other avatar, choosing a background image, and curating your timeline. Timeline curation is the most time-consuming facet of your social media use because it requires careful (or not so careful) selection of other accounts to follow or friend. Although timeline curation may be the most time-consuming activity, arguably the most important activity is posting. An important decision to make before posting concerns the types of posts that will come from your account. On the most basic level, this decision may be as simple as whether the account will be personal or professional or whether you will separate the two purposes at all. You must also decide the kinds of things you will post. Will it be strictly news, personal thoughts, topic-specific information, etc.? Whatever your choices, in making these decisions – from which avatar to use to what things to post – you are constructing an online identity, or persona, connected with that social media account (or accounts) and ultimately yourself.

Now, imagine that your account becomes popular, perhaps not Charli D’Amelio levels of popularity, but you amass a large number of followers and your posts are shared often. Your account becomes so popular that it helps you land a job, a job that requires you to post information about your position to your social media account and to maintain and increase the number of followers. Once the account becomes a part of the job, what are the rules about who
may access, control, or otherwise use the account? And what happens to the account when you leave that job?

The need for answers to these and related questions has become more urgent as the use of social media as a tool for marketing continues to grow. Corporations, civil society, and governmental organizations have discovered the benefits of leveraging online personal networks and relationships. Several professional communication industries use the number of social media connections as a measure of the qualifications of prospective job applicants. Certainly, the use and management of social media accounts, both personal and business-related, are now important parts of corporate culture. Businesses believe that the corporate brand and consumer base may grow with the addition of employees with a strong social media presence. This chapter examines the legal issues that may emerge in connection with the creation and use of personal and business social media accounts.

Bring Your Own Persona

As many businesses once told employees to “bring your own device” to the workplace, the phenomenon at issue in the use and ownership of social media accounts can be thought of as “bring your own persona” (BYOP), an evocation of the marketing practices from which business social media gets its foundation. Within the context of a social media account, BYOP describes two possibilities. First is a requirement that employees have and maintain an online presence. An example of this would be when a company hires a “(whatever social media site)-famous” person and requires that they post about company products and/or content. Beauty and style influencers, for instance, may be hired to promote the products of a particular company. In this situation, brands are attempting to capitalize on the established and potential following of a known social media personality. With millions of followers across many social media sites, including Instagram, Twitter, and Snapchat, the D’Amelio sisters (and their current and former relations) are a good example of this. Brands recognize the influence that the family has on certain demographics and hire them to use/promote products on social media.

A second BYOP scenario is that of an organization hiring an individual and placing them in charge of a social media account, though this may not be the firm’s official account. An example of this is when an individual specifies their affiliation in the account name. An account created for an employee of a news station may be, for instance, @janedoe_WXXX. If not as explicit, the account may also indicate affiliation in the description and/or biography. In both cases, the expectation is that the employee use social media in such a way as to attract an audience of current and potential consumers. To do this, the employee creates an online persona. Before leaving the New York Times, then-assistant managing editor Jim Roberts, for example, used @NYTJim as his Twitter handle. When Roberts joined Reuters in
2013, he changed his handle to @NYCJim, indicating that he was no longer with the newspaper.

The creation of a persona, for business or other purposes, is not uncommon at all. One of the first things that occur when a person begins using a social networking site is the creation of a persona – the curated personality or the identity to be shared with friends, followers, and the rest of the world. From Instagram to Snapchat, TikTok to Twitter, whenever a new user signs up, the social media site prompts them to create a profile, to tell a little about themselves, to tell others what makes them unique. But a persona is not a profile, although the profile certainly is a part of the persona, along with the avatar or profile pictures and background images, among other things. The persona is made up of all of the items from which an identity or reputation, online or otherwise, is constructed.

The word “persona” has a Latin origin and refers to the face the individual wears in public or during social interaction. Carl Jung described the persona as a kind of mask, designed on the one hand to make a definite impression upon others and, on the other, to conceal the true nature of the individual.2 The persona was how the individual represented themselves to other members of society.

Businesses have used what’s called persona marketing since the 1990s.3 Persona marketing’s foundations are in customer or market segmentation, the practice of dividing a pool of potential customers based on demographic information (e.g., race, gender, education, and spending habits). Used also in user experience design, persona marketing takes the customer segmentation concept further and constructs and tells the story of a fictional character, providing their name, age, race, gender, and lifestyle characteristics. The fictionalization also establishes that persona’s motivation for purchasing and/or using certain products or services and the times at which they do so. Personas are thought to be beneficial because they visualize the humans behind the data, letting marketers and user experience designers understand the needs and desires of their product customers. Fujitsu, for example, was able to improve its Fujitsu for kids site after creating three different personas, including that of Misaki Sato, a virtual nine-year-old. A case study of the Fujitsu for kids site by Fujitsu brand designer Yumi Hisanabe found that the Misaki persona allowed designers to ask and answer questions about the usability of the site from the perspective of a child, a specific child.4

A similar rationale is at play when individuals decide which portions of their personality or professional identity to display on social media. The goal is to gain as many consumers – friends, followers, connections – as possible using points of interest attractive to a selected audience. This is how a social media account becomes a tool for marketing a personality and the connected organization.

Who Owns the Persona?

But when an account gains popularity, who owns that persona? Though created and maintained on third-party platforms, it is easy to think of social media
personas and accounts as personal property. Designating something as “property” implies that the owner is provided with a “bundle of rights” that allows her to control its use and access. As intangible objects, social media property ownership issues could fall into the area of intellectual property, specifically in the areas of copyright, trade secret, and trademark.

Copyright

Copyright law protects original, creative works, including fictional or fictionalized characters. Copyright protection for fictional persons was established in 1930 by Judge Learned Hand in Nichols v. Universal Pictures Corp., in which the court found that the more developed or distinctive a character, the more readily it can be copyrighted. This is not to say that online personas are the same as characters like Samurai Jack or the Powerpuff Girls. But the combination of a social media account profile, timeline curation, and aggregation of tweets from the account may be enough to reach the originality requirement for copyright. Copyright protects the expression of ideas. It is not a stretch to conclude that copyright protects the combination of choices mentioned earlier, all of which are made to express the user’s ideas about her social media identity.

Assuming that the combination of elements that create a persona is enough for copyright protection, the issue of copyright ownership arises. Along with the eight categories of creative works that copyright protects, the law protects certain specialized works as well. More specifically it protects “work[s] made for hire.” Section 101 of the Copyright Act details two kinds of works made for hire: (1) a work made by an employee within the scope of his employment and (2) a work made by a freelancer for a specific purpose recognized in the Copyright Act and memorialized in a contract.5

The US Supreme Court considered the works-for-hire doctrine in Community for Creative Non-violence v. Reid.6 At issue in CCN was ownership of a sculpture commissioned from an artist by a nonprofit organization, as the parties failed to create a contract. As a preliminary issue, authorship and copyright usually rest with the creator of a work. When an employee creates a work within the scope of her employment, the employer holds the copyright. CCN argued that Reid, the artist, was an employee when he created the sculpture. The Court disagreed, enumerating the characteristics of an employer-employee relationship within the scope of works for hire, none of which Reid fit. In Aymes v. Bonelli,7 a works-for-hire case focused on the ownership of computer software, the Second Circuit refined these factors to include (1) the hiring party’s right to control the manner and means of creation; (2) the skill required; (3) the provision of employee benefits; (4) the tax treatment of the hired party; and (5) whether the hiring party has the right to assign additional projects to the hired party.8


The works-for-hire doctrine, then, has possible implications for those using social media in connection with their work. The scenario of an organization hiring an employee who uses their own social media would only give rise to the works-for-hire doctrine when there is a written contract between the parties. In the second situation, when social media is used as part of employment duties, the doctrine would dictate that if the employee were to leave the organization, the organization would retain the account, including the friends or followers with which it is associated.

Some journalists, so as not to involve themselves in the struggle over account ownership, have abandoned the accounts connected to their former employers. Sports journalists Brian Windhorst and Adam Rubin, for example, abandoned their former newspaper sports beat-connected accounts when they started working for ESPN. Others like Pat Forde and Michelle Beadle simply changed their Twitter account name while retaining their same followers. But organizations have argued for ownership of Twitter accounts in court. The most famous case to date is Phonedog v. Kravitz, which involved a technology reporter who left his job at one publication to work for another, taking his corporate account with him but changing the related Twitter handle. This case was not, however, brought under copyright law but trade secret, discussed in the next section.

Trade Secret

Ownership of friends lists or followers on a social media account has been the subject of more than one lawsuit. In all of these cases, employees separated from their companies, intending to take their social media accounts with them. In suing their former employees for ownership of these accounts and account connections, the organizations have claimed that the employee has violated trade secret law. Trade secret law, predominantly governed by each individual state, is aimed at protecting the competitive advantage organizations have in secret, or secretive, information. This information can be embodied in chemical processes, machinery, and even client lists. Within the context of social media, friend lists and followers have been likened to client lists. For example, in Phonedog v. Kravitz,9 a case that the parties settled in 2012, a company sued its former employee over ownership of a Twitter account under the California Uniform Trade Secret Act.10 The company claimed that it owned both the employee’s Twitter account name and the 17,000 followers connected to that account. The Phonedog court never reached the question of whether the list of followers should actually be protected under trade secret law. But other courts have considered whether social media connections should be thought of as trade secrets, to differing results.

In Christou v. Beatport,11 a federal district court in Colorado found that a man’s MySpace profile and the friends list connected to it could be considered
trade secrets. The case involved a former nightclub talent promoter who left his job and started his own nightclub and music promotion business using his former employer’s online list of friends. The ex-employee argued that the social media list could not be a trade secret because it was public. The court was unpersuaded, instead deciding the case based on eight factors: (1) whether proper and reasonable steps were taken by the owner to protect the secrecy of the information; (2) whether access to the information was restricted; (3) whether employees knew customers’ names from general experience; (4) whether customers commonly dealt with more than one supplier; (5) whether customer information could be readily obtained from public directories; (6) whether customer information is readily ascertainable from sources outside the owner’s business; (7) whether the owner of the customer list expended great cost and effort over a considerable period of time to develop the files; and (8) whether it would be difficult for a competitor to duplicate the information.12

In contrast, a federal district court in Pennsylvania in Eagle v. Morgan found that a publicly available list of connections was not protectable as a trade secret.13 Linda Eagle and a colleague founded an online banking services company that was later bought by another organization. Her LinkedIn account was used for marketing the company. A year after the purchase, Eagle and other company officers were fired from the organization, which maintained control over the LinkedIn account. Eagle regained access to the account, and the company sued for trade secret misappropriation. The court found that the company had not shown how LinkedIn connections and their information were a competitive advantage.

Although it is not yet settled as to whether social media accounts and connections are trade secrets, the implications of trade secret law for social media profiles could be significant. If a company could demonstrate that the names and contact information in its social media connections were of value, it may be able to persuade a court to recognize trade secret. In fact, before the Phonedog case settled, the company was asking for $340,000 – the 17,000 followers valued at $2.50 each per month, multiplied by the eight months that Kravitz had used the Twitter account after leaving the company (17,000 × $2.50 × 8 = $340,000).14

Trademark and Unfair Competition

Trademark is the third area of intellectual property implicated in the use and ownership of social media accounts. A trademark is “any word, name, symbol or device, or any combination thereof” used to identify the origin of goods or services.15 Social media personas and accounts could, therefore, be considered akin to trademarks. Section 43(a) of the Lanham Act, along with state law,
protects trademarks from false descriptions, designations of origin, and dilution. Public relations professional Jill E. Maremont, for instance, filed a lawsuit against her former employer, Susan Fredman Design Group (SFDG), claiming the company violated the Lanham Act when SFDG employees made posts to her personal social media accounts, as well as posted to the business social media account in her name.16 Maremont, who ran the social media accounts for SFDG, was unable to return to work after being struck by a car. During the time she was recovering, she claimed that the business’ employees posted to both her personal social media account and the corporate account in her name. The court found that Maremont could go forward with her Lanham Act claims against her former employer.17

Plaintiffs must prove three things to successfully win a trademark infringement or unfair competition suit under the Lanham Act: (1) a protectable mark exists, (2) the plaintiff owns the trademark, and (3) a consumer is likely to be confused.18 The likelihood of confusion is a question of whether someone would believe that the good, service, or communication was coming from a particular source when it actually originated elsewhere. In her case against her former employer, Linda Eagle claimed that when she was locked out of her LinkedIn account, the organization changed the name and photograph. Her current and potential customers were, therefore, rerouted to the changed profile. Eagle lost her Lanham Act case, however, because she failed to show confusion among her customers.

In 2020, a fired college baseball coach was sued by his former employer, which claimed that he created a rival Twitter account and tricked users into believing it was the official account. The former coach has denied the claims, stating that he did not use the official logos of his former employer and that it was not disparaging to retweet comments about the school from others.19

Patent

Patent, too, may provide a claim for those embroiled in a conflict over a social media account. Patents can be granted to anyone who invents or discovers “any new and useful process, machine, manufacture, or composition of matter.”20 Social media accounts would, seemingly, not fall under the rubric of patentable subject matter, although the actual platform and tools may be the subject of patent. Account users are not inventing or discovering anything new but using the possibly patented inventions of others.

Patent law does, however, offer a way of thinking about the relationship between employees and organizations with respect to ownership of social media accounts. Shop rights are a well-established doctrine within patent law.21 Under the shop rights doctrine, if an employee develops a patentable invention within the scope of their employment or by using the materials or other resources from their employer, their employer has an implied license to use that patent. This does not transfer
ownership of the patent from the employee to the employer;22 the employing organization cannot sell or license the patent to other organizations. Within the context of social media accounts, then, an organization could argue that it has a shop right in the account of an employee. If recognized, this would allow the organization to access and use the account. Employers may also attempt to assert rights under the doctrine of “hired to invent,” a state law doctrine similar to copyright’s works for hire.23

Appropriation and Right of Publicity

Similar to the property rights mentioned earlier, individuals may have common law rights to protect their social media personas. Two related claims of injury – appropriation and right of publicity – may provide remedies for individuals seeking to assert ownership or access claims to social media accounts.

Appropriation, also called misappropriation or commercialization, is one of the four common law privacy torts enumerated by Professor Prosser in 1960.24 In general, appropriation is the use of someone’s name or likeness for a commercial purpose without consent. Like the other invasion of privacy claims, appropriation is aimed at protecting or remedying an individual’s shame and humiliation. In White v. Ortiz, for example, the mother of mixed martial arts association owner Dana White sued a woman who posted under the Twitter handle @RealJuneWhite.25 The court found Ortiz’s use of the handle was meant to capitalize on White’s name to make it appear as though the tweets from the account were real. Further, the nature of the tweets from the account, which were made to appear like true confessions about White’s relationship with her children and other relatives, her mental health, and demeaning comments about other people, exposed her to shame and humiliation. Of course, White is not a case in which someone has had their account taken over by another individual or organization. It does demonstrate, however, the kinds of appropriation claims possible with social media. To be sure, an account like @RealJuneWhite is much different from the many Twitter accounts that attempt to be humorous takes on celebrity personalities and well-known business accounts. The account @BoredElonMusk, for example, posts innovations supposedly from the mind of the founder of SpaceX and Tesla Motors. Twitter allows these kinds of accounts under its rules, so long as the bio indicates that it is a parody. And parody is a defense against appropriation claims.

Unlike appropriation, which focuses on the emotional and/or psychic harms of shame and humiliation, the right of publicity aims at protecting an individual’s economic interest in their name or likeness. It is a property-style right like trademark. Unlike appropriation, which allows any plaintiff to sue for unauthorized use of their name or likeness, the right of publicity applies only
to those who have attained a certain level of fame or notoriety. An obvious example of this would be when an organization uses a celebrity’s name or likeness in an advertising campaign without their permission. This happened in Midler v. Ford Motor Co.,26 where the automobile manufacturer employed a Bette Midler sound-alike singer to perform for its commercial.

Right of publicity can also inhere in nicknames27 and catchphrases.28 In Hirsch v. SC Johnson & Son, Inc., for example, a famous football player was able to claim a right of publicity in the use of his nickname, Crazylegs. In Carson v. Here’s Johnny Portable Toilets Inc., the famous comedian was able to protect his well-known catchphrase from a company attempting to use the economic value in the phrase and connected references to sell a new brand of commode.

As in the nickname and catchphrase cases mentioned earlier, the right of publicity can also be found in images or likenesses that call a person to mind. In Motschenbacher v. RJ Reynolds Tobacco,29 for instance, the court found a valid right of publicity in a racecar driver’s claim that a tobacco advertisement used his likeness. Although Motschenbacher’s face was not visible, the use of a car that mirrored the one he drove was enough to be considered a use of his image. A better-known case involved Vanna White of Wheel of Fortune fame. In White v. Samsung Elec. Am. Inc.,30 White sued the electronics manufacturer for using her likeness in an advertisement. The ad contained a robot, wearing a blond wig and an evening gown, standing in front of a Wheel of Fortune word board. The court reasoned that those viewing the ad would think of White when they saw it. Similarly, in Wendt v. Host Int’l,31 actors George Wendt and John Ratzenberger, otherwise known as Norm and Cliff on Cheers, were able to pursue right of publicity claims when a Cheers-themed bar used Norm- and Cliff-resembling robots.

Right of publicity claims like these would seemingly not be an issue for regular people using social media. But cases involving social media accounts may come out differently. In the Maremont case mentioned earlier, the court found that Jill Maremont had demonstrated that she had a significant social media following and was well known in the public relations community. She could, therefore, sustain a right of publicity claim, although she would later fail to prove the injury. And the court in Eagle found that Linda Eagle’s right of publicity had been violated when her former employer changed the password to her LinkedIn account and altered the account to reflect the new CEO’s information.32

Both appropriation and right of publicity are state law claims and not federal. Not all states, however, recognize either or both of these claims. Therefore, it is important to be aware of the law in your state.

The CFAA and Employee Duties

When an organization hires an individual, that employee is endowed with certain responsibilities. Many employees are hired under a contract. As a general
matter, an employment contract is an agreement between parties detailing not only their assigned tasks but also terms of employment, salary, and expectations related to intellectual property, among other things. For those using social media as part of their employment, the contract may explain how and to what extent the employee may use the social media account. The contract may also explain who controls and/or owns the social media account.

Even though not always explicitly stated within a contract or terms of employment, employees have duties related to their employer. Of special concern is the duty of loyalty, the requirement that an employee not act in a way that places the organization’s interests below the employee’s own. Along with the traditional methods of enforcing contracts and employment duties, organizations turn to the Computer Fraud and Abuse Act (CFAA)33 as a remedy for breach of the duty of loyalty. The CFAA, known as an anti-hacking statute, prohibits accessing a computer without authorization and punishes unauthorized access undertaken with the intent to commit fraud. For either action, individuals and organizations can sue in a civil case to recover damages. Within the context of employees and social media, an employee may at one time have had permission to access a social media account; once employment ended, so too did authorization.

A CFAA claim may also be used in situations in which the positions are reversed, where an individual files a claim against their former employer for accessing a social media account. In Eagle v. Morgan, Linda Eagle claimed that her former employer violated the CFAA when it took over her LinkedIn account. But because Eagle could not demonstrate recognizable damages, she could not recover under the CFAA. To win a case under the CFAA, a plaintiff must demonstrate actual harm. The Eagle court found that the kind of injury recognized under the law is limited to only those related to damage to a computer system.34 The damage threshold for the CFAA is $5,000;35 an organization or individual must prove damages in at least this amount to win their CFAA claim. The law also includes provisions for criminal penalties of up to 20 years of jail time for each separate violation. Those asserting the CFAA in a case related to a social media account will have to prove the worth of the account, its related materials (including friends/followers/connections), and the potential connections. Though marketing and advertising organizations can offer estimated values for certain related social media materials, it may be difficult to prove damages in an exact amount. It would be more difficult to estimate the value of potential connections because they do not yet exist, and there is no guarantee that such connections will actually happen.

Best Practices

Though many of the claims may not be successful against a former employee or organization, the act of having to mount a defense against the claims may be
just as, if not more, professionally and financially devastating. A few best practices may be useful for helping to avoid many of the legal conflicts mentioned here.

First, employees may want to keep their personal social media accounts separate from those used in connection with their job or within the scope of their employment. It is easy and convenient to use one account for connecting to others and distributing information on the Internet. The conflicts detailed here, however, indicate that there can be negative consequences when the personal and professional are mixed. Second, it is best that both employees and employers have a meeting of the minds as to ownership and control of social media accounts. This information can be defined in the employment contract or some other employee-employer agreement and should be as descriptive as possible to avoid confusion in the event of separation or change in relationship.

Of course, taking either or both of these measures is not a guarantee against a conflict over ownership and access to a social media account. These practices do, however, allow for organizations and individuals to discuss the implications of the use of social media for employment purposes. As the conflicts mentioned earlier indicate, because of a lack of legal bright-line rules related to social media, this area is unsettled and continues to evolve with professional and financial consequences.

As a final matter, although the rhetoric surrounding social media accounts points to personal or organizational ownership, the terms of service for most platforms indicate that the social media site retains property rights to individual accounts. These rights include the ability to boot usage policy violators from the site and to allow users to regain access to their accounts after being hacked. In fact, in Eagle v. Morgan mentioned earlier, it was LinkedIn that allowed Linda Eagle to regain access to her profile after her former employer changed the password. Courts have also treated platforms as the actual owners of individual accounts. In New York v. Harris,36 a New York state court ruled that Twitter had to turn over the tweets connected to the account of Malcolm Harris, an activist being prosecuted for his role in Occupy Wall Street activities. Twitter had argued that users had ownership interests in their accounts. The court was unpersuaded. More importantly, certain members of the US Supreme Court have recognized that although a person and even the government might claim to own a social media account, what they might actually have is more akin to a license to use the system. The platform retains the true power. Twitter provided an example of this power after the 2020 US presidential election when, as a part of the transition to the new administration, the platform reset the followers for US executive branch accounts like @POTUS and @WhiteHouse. This was a change in policy from that of the 2016 election, when the new administration received access to the accounts, including the same number of followers, while the tweets from the previous holders were archived.37

State legislatures have taken up the cause of employees, both actual and prospective, wishing to maintain control over their social media accounts.
In 2012, Maryland became the first state to pass a law banning organizations from requesting or requiring that employees turn over passwords. Louisiana, New Hampshire, and Wisconsin, among others, followed suit, and similar laws are pending in many other states. The laws, framed as protecting employee privacy, indicate a recognition of personal ownership of social media accounts.

FREQUENTLY ASKED QUESTIONS

1. Are social media accounts considered property?
Yes. Social media accounts are considered property, although it is not always clear to whom, aside from the platform, the accounts belong. The accounts and the personalities connected to them have the attributes of many of the different kinds of intellectual property, including copyright, trademark, trade secret, and patent. No court has said definitively whether a social media account and its related materials are a particular kind of intellectual property, but courts have noted the similarities. A ruling from a federal circuit court of appeals, in the absence of a ruling from the Supreme Court, would provide a more definitive statement on whether and what kind of intellectual property exists in social media accounts.


2. May I use my social media account for work purposes?
It depends. Your employer may permit you to use your personal social media account for work purposes. You should, however, be careful of mixing personal and professional communications and responsibilities. It may be best to obtain a written list of expectations for the use of social media in your professional capacity.


3. May my employer access my social media accounts?
Maybe. If the social media account that you are using was created before or without respect to your employment, you may be considered the account owner. If your employer has placed you in charge of an account that the company has created for you to use within the scope of your employment, that account belongs to the organization. Because the account belongs to the organization, it can access it freely and without your permission. It may, therefore, change passwords and transfer control of the account to another employee. To leave little room for conflict, information about accessing social media accounts should be detailed in the employment contract or agreement.


4. What happens to my social media account if I leave my job?
This depends on to whom the account belongs. As mentioned earlier, if the account was created by the organization for use for organizational purposes, the account usually remains with the organization. If, however, you created the account outside the scope of your employment, you would retain the use of the account. Again, this detail may be described within the text of an employment contract or agreement.


5. How do I protect my social media from takeover by my employer?
Although it is convenient and easy to use your personal social media accounts to communicate information related to your professional position, it may be best to keep separate personal and professional accounts. Although this may take more work on your part, there can be little conflict about the scope and purpose of each account. This does not mean that there will be no overlap in content. It should be helpful, however, in sorting out to whom an account belongs.

Notes

1. Alejandro Portes, Social Capital: Its Origins and Applications in Modern Sociology, in Eric L. Lesser, Knowledge and Social Capital 43–76 (2000).
2. Carl Jung, Basic Writings, Vol. 300 (1959).
3. See Derek Armstrong & Kam wai Yu, The Persona Principle: How to Succeed in Business with Image Marketing (1997).
4. Yumi Hisanabe, Persona Marketing for Fujitsu Kids Site, 45(2) Fujitsu Scientific and Technical Journal 210–218 (2009).
5. Copyright Act, 17 U.S.C. § 101 et seq.
6. 490 U.S. 730 (1989).
7. 980 F.2d 857 (2d Cir. 1992).
8. Id. at 861.
9. 2011 U.S. Dist. LEXIS 129229 (2011).
10. California Civil Code section 3426.1.
11. 849 F. Supp. 2d 1055 (D. Colo. 2012).
12. Id. at 1075 (quoting Hertz v. Luzenac Group, 576 F.3d 1103, 1115 (10th Cir. 2009)).
13. Eagle v. Morgan, Civil Action No. 11–4303 (E.D. Pa. December 22, 2011).
14. Phonedog v. Kravitz, No. C 11–03474 MEJ (N.D. Cal. November 8, 2011).
15. 15 U.S.C. § 1127 (2006).
16. Maremont v. Susan Fredman Design Group, Ltd., 772 F. Supp. 2d 967 (N.D. Ill. 2011).
17. Maremont’s claim would later be dismissed for failure to provide proof of financial injury. Maremont v. Susan Fredman Design Group, Ltd., No. 10 C 7811 (N.D. Ill. March 3, 2014).
18. A&H Sportswear, Inc. v. Victoria’s Secret Stores, Inc., 237 F.3d 198, 210–211 (3d Cir. 2000).


19. Celeste Bott, Fired University Coach Wants Out of Twitter Account Suit, Law360, December 18, 2020, www.law360.com/articles/1339077/fired-university-coach-wants-out-of-twitter-account-suit.
20. 35 U.S.C. § 101.
21. See Courtney J. Mitchell, Keep Your Friends Close: A Framework for Addressing Rights to Social Media Contacts, 67 Vand. L. Rev. 1459 (2014).
22. U.S. v. Dubilier Condenser Corp., 289 U.S. 178, 198 (1933).
23. Id. at 187. See also Paul M. Rivard, Protection of Business Investments in Human Capital: Shop Right and Related Doctrines, 79 J. Pat. & Trademark Off. Soc’y 753, 754–755 (1997).
24. William Prosser, Privacy, 48 Calif. L. Rev. 383, 389–392 (1960).
25. White v. Ortiz, No. 13-cv-251-SM (D.N.H. September 14, 2015).
26. 849 F.2d 460 (9th Cir. 1988).
27. Hirsch v. SC Johnson & Son, Inc., 280 N.W.2d 129 (Wis. 1979).
28. Carson v. Here’s Johnny Portable Toilets Inc., 698 F.2d 831 (6th Cir. 1983).
29. 498 F.2d 821 (9th Cir. 1974).
30. 971 F.2d 1395 (9th Cir. 1992).
31. 1995 U.S. App. LEXIS 5464, 1995 WL 115571 (9th Cir. 1995).
32. Eagle v. Morgan, Civil Action No. 11–4303 (E.D. Pa. March 12, 2013).
33. 18 U.S.C. § 1030.
34. Eagle v. Morgan, Civil Action No. 11–4303 (October 4, 2012) (citing Fontana v. Corry, No. Civ. A. 10–1685, 2011 WL 4473285, at *7 (W.D. Pa. August 30, 2011)).
35. 18 U.S.C. § 1030(a)(4).
36. Docket No. 2011NY080152 (N.Y. Crim. Ct. June 30, 2012). See also Biden v. Knight First Amendment Institute at Columbia University, 141 S. Ct. 1220, 1221–1227 (2021) (Thomas, J., concurring).
37. Jacob Kastrenakes, As Promised, Twitter Made Joe Biden’s @Potus Account Start with Zero Followers, The Verge, January 20, 2021.

Chapter 7

Student Speech
Dan V. Kozlowski
DEPARTMENT OF COMMUNICATION, SAINT LOUIS UNIVERSITY

Ninety-five percent of teens have access to a smartphone, and 45% say they are online “almost constantly,” according to a 2018 study from the Pew Research Center.1 The study found that YouTube, Instagram, and Snapchat were among the most popular online platforms for teens.2 By 2021, TikTok had emerged, and its usage had spiked among young Americans, with more than 60% of US youth between the ages of 12 and 17 saying they used TikTok on at least a weekly basis.3 It is fair to say that social media pervade the lives of students. And while these media give students the power to connect and learn and influence, social media also represent, in the words of one commentator, a school’s “new bathroom wall” – a place where a student’s “online postings . . . can destroy reputations, end relationships and intensify negative feelings.”4

School officials’ attempts to regulate that virtual “bathroom wall” and to punish students for publishing on it constitute the all-important but still largely unresolved quandary facing student speech law: How far does school authority reach? When can schools constitutionally punish students for speech they create and post online, even if they do so off school grounds (which is typically the case)? For years, lower courts issued an array of often inconsistent, incongruent rulings as they wrestled with those questions. As one legal scholar wrote, school authorities filled the “judicial vacuum”5 by seizing the opportunity for censorship, punishing students for social media posts, for example, or banning social media use by athletes entirely. In 2021, the US Supreme Court finally ruled on a case involving a student’s social media speech. Ruling in favor of a high school student in Mahanoy Area School District v. B.L.,6 the Court provided guidance but still left many questions unanswered, likely inviting more court cases to come.

Instances of schools doling out discipline for social media speech occur constantly. In 2021, for instance, high school students in California faced punishment after they posted a racist video on TikTok.7 That same year, a principal in Hawaii punished a student for cyberbullying after the student posted on Instagram that the principal was unqualified for her job. The school lessened the student’s punishment and downgraded his “offense” after community
members rallied to support him.8 And a pharmacy graduate student at the University of Tennessee was expelled after a disciplinary panel determined that her social media posts were “‘vulgar,’ ‘crude’ and not in keeping with the mores of her chosen profession.”9 After the student appealed, the dean of the college reversed the expulsion. The student, though, sued anyway, arguing that the university violated her constitutional rights by policing her off-campus speech on social media.

The list of examples could go on and on, which is why scholars such as Clay Calvert have called the issue of how to treat students’ online speech a “pervasive and pernicious First Amendment problem.”10 This chapter will trace existing cases and developing issues as the legal and school communities both wrestle with how to handle student speech in a culture that is defined by its always-evolving technology and media use.

Student Speech and the US Supreme Court

Before the US Supreme Court’s 2021 ruling in Mahanoy Area School District v. B.L., the Court had decided four cases that governed the First Amendment rights of public school students11 within a roughly 50-year period. In its first ruling, the 1969 case Tinker v. Des Moines Independent Community School District, the Court broadly supported student expression, famously declaring that “it can hardly be argued that either students or teachers shed their constitutional rights to freedom of speech or expression at the schoolhouse gate.”12 In the case, school officials suspended three students for wearing armbands in school to protest the Vietnam War. The Court ruled that students’ speech could not be punished unless school officials reasonably conclude that the speech did, or would, “materially and substantially disrupt the work and discipline of the school.”13 And the Court indicated that standard should be interpreted stringently. Even though the armbands caused “comments, warnings by other students, the poking of fun at [the students],” and even though a teacher said his lesson was “practically wrecked”14 that day, the Court said the disruption caused by the armbands was not enough to justify censorship.

The Court’s next three student speech cases scaled back Tinker’s protections. In Bethel School District v. Fraser, the Court upheld the suspension of a high school student who gave a “lewd”15 speech laced with “pervasive sexual innuendo”16 at a school assembly. Emphasizing that “schools must teach by example the shared values of a civilized social order,”17 the Court sided with the school, concluding that the “First Amendment does not prevent the school officials from determining that to permit a vulgar and lewd speech such as [the student’s] would undermine the school’s basic educational mission.”18 Two years later, the Court held Tinker was inapplicable again, this time in a case involving a school-sponsored student newspaper. In the 1988 case Hazelwood School District v. Kuhlmeier, the Court ruled that educators could regulate
school-sponsored student speech – curricular speech that bears “the imprimatur of the school”19 – so long as their actions are “reasonably related to legitimate pedagogical concerns.”20

In 2007, in Morse v. Frederick, the Court carved out another exception to Tinker: speech that school officials “reasonably regard as promoting illegal drug use.”21 The Court in Morse acknowledged that “there is some uncertainty at the outer boundaries as to when courts should apply school-speech precedents.”22 But the Court did nothing to resolve that uncertainty in Morse. Instead, the Court ruled that, even though the student in the case was across the street from the school, off school grounds, he was participating in a “school-sanctioned and school-supervised event”23 when he held up his controversial BONG HiTS 4 JESUS banner as the Olympic Torch Relay passed in front of his high school.

Those four cases thus did not involve what the Court considered off-campus speech, and – with that lack of guidance from the Court – lower courts offered differing rulings as social media emerged and cases required judges to interpret how far administrators’ arms could reach to punish students for their online, off-campus speech.

Confusion in the Lower Courts

The dilemma over whether schools can punish students for online speech they create off campus isn’t new. The first federal court opinion to deal with the issue was decided back in 1998.24 But the rise and ubiquity of social media escalated both the number of disciplinary sanctions for such speech and the number of legal cases challenging punishment. Students, of course, have badmouthed teachers, school officials, and other students for generations. Whereas before, though, students might have kept a diary or shared conversations involving those topics on the bus, over landline phones, or at the shopping mall, now students can use social media tools to reach a wide audience rapidly.25 And courts have struggled with how to respond.

Tracing the path of one case nicely illustrates the degree of confusion. In J.S. v. Blue Mountain School District, a middle school student and her friend – on a weekend and on a home computer – created a fake MySpace profile as a parody of their principal. It included his photograph as well as “profanity-laced statements insinuating that he was a sex addict and pedophile.”26 In the words of the en banc Third Circuit Court of Appeals, “[t]he profile contained crude content and vulgar language, ranging from nonsense and juvenile humor to profanity and shameful personal attacks aimed at the principal and his family.”27 The profile initially was accessible to anyone, but the day after they created it, one of the students, known in the court proceedings as J.S., set the profile to “private” and limited access to about 20 students to whom she had granted “friend” status. The school’s computers blocked access to MySpace, so no student ever viewed the profile while at school. Another student, however, told the principal about the profile and who had created it, and – at the
principal’s request – that student printed out a copy of the profile and brought it to him. The principal then suspended J.S. and her friend for ten days. J.S. sued, sending the case on a meandering journey through the federal courts. At the district court level, she lost. US district judge James Munley ruled that, even though the profile arguably did not cause any Tinker-level disruption, it was “vulgar, lewd, and potentially illegal speech that had an effect on campus.”28 The judge relied on Fraser and Morse and said that J.S. could be punished. In 2010, a panel of the Third Circuit agreed – but for different reasons. The Third Circuit panel said that Tinker did govern the case and that the suspension was constitutional because the school could forecast disruption. “We hold that off-campus speech that causes or reasonably threatens to cause a substantial disruption of or material interference with a school need not satisfy any geographical technicality in order to be regulated pursuant to Tinker,” Judge D. Michael Fisher wrote for the court. Because the profile accused the principal of having “interest or engagement in sexually inappropriate behavior and illegal conduct,”29 the court said it was foreseeable that the profile would disrupt his work as a principal and encourage others to question his demeanor and conduct. Two months later, however, the Third Circuit vacated the panel opinion and ordered en banc review. And in 2011, a divided en banc court of the Third Circuit ruled in favor of J.S. This time, an eight-judge majority ruled that it did not need to decide definitively if Tinker controlled the case because, even if it did, the profile caused no actual disruption, nor could the school reasonably forecast disruption. Although the court acknowledged that the profile was “indisputably vulgar,” it ultimately ruled the speech “was so juvenile and nonsensical that no reasonable person could take its content seriously, and the record clearly demonstrates that no one did.”30 The court also categorically ruled that Fraser does not apply to off-campus speech. Five judges concurred in the case and wanted to go much further, though. The concurrence argued that Tinker should not apply to students’ off-campus speech at all because “the First Amendment protects students engaging in off-campus speech to the same extent it protects speech by citizens in the community at large.”31 Judge Fisher wrote in dissent this time, joined by five other judges. He argued again that Tinker applied and that disruption from the profile was reasonably foreseeable. Moreover, he chided the majority for adopting a rule that he thought was unworkable for schools. “[W]ith near-constant student access to social networking sites on and off campus, when offensive and malicious speech is directed at school officials and disseminated online to the student body, it is reasonable to anticipate an impact on the classroom environment,” he argued.32 J.S. thus perfectly exemplifies disagreements the judiciary has had over when school authority begins, what standard to apply, and how to apply it. Judges, first, have differed about how or when speech even becomes eligible for on-campus punishment. Some courts confronting the issue have used what
Mary-Rose Papandrea has called a territorial approach33 – asking whether a student either used school computers or servers to create, share, or view the speech in question or whether someone actually brought hard copies of the speech to school.34 Other courts, on the other hand, have said that schools have jurisdiction to regulate off-campus speech if it is “reasonably foreseeable”35 that the speech will come to the attention of school officials, whether because the speech is directed or targeted at students or school officials or because it is about school generally. Still other courts, though, have generally skipped over the threshold question of whether speech amounts to on-campus or off-campus expression and instead have just directly applied the Court’s school speech precedents.36 Courts have been especially willing to apply Tinker in cases involving off-campus social media speech, with outcomes hinging on whether schools can persuade courts that speech either did or would create a material and substantial disruption at school. And different courts have seemingly required different levels of disruption, similar to what occurred in J.S.: In that case, a panel of the Third Circuit said that disruption was reasonably foreseeable, while the en banc majority disagreed. Social media obviously allow speech to spread fast, to a wide audience, at virtually any time, making the speech both easier for school officials to monitor than traditional oral or written speech and also easier for a whistleblower student to find and bring in.37 “[W]ireless internet access, smart phones, tablet computers, social networking services like Facebook, and stream-of-consciousness communications via Twitter give an omnipresence to speech,” one federal judge wrote.38 School officials argue that, given that “omnipresence,” online speech inevitably finds its way to school and can thus be punished under Tinker because school officials can forecast disruption. Some courts also have loosely interpreted what amounts to an actual disruption at school. Those courts that have interpreted Tinker’s standard so leniently have thus made it easy for school officials to extend their reach off campus. That was arguably the case in Doninger v. Niehoff, a Second Circuit Court of Appeals decision involving a student blog post that disparaged school officials and encouraged readers to contact the superintendent in order “to piss her off more.”39 The student who wrote the blog, Avery Doninger, was frustrated about the scheduling of a band contest at school known as Jamfest. Her blog post, which she wrote from home, called school administrators “douchebags” for canceling the event and encouraged students to contact the superintendent.40 When the principal discovered the post, two weeks after it was written, she punished Doninger by barring her from running for senior class secretary. Doninger sued, seeking an injunction that prevented her discipline. A panel of the Second Circuit held that Tinker applied and that the post created a risk of substantial disruption. The court first said that the language Doninger used in her post was “not only plainly offensive, but also potentially disruptive of efforts to resolve the ongoing controversy.”41 The school, moreover, argued that the post was misleading because school officials had
informed Doninger that the event would be rescheduled rather than canceled. The Second Circuit thus said that Doninger used “at best misleading and at worst false” information in an effort to solicit more calls and emails.42 And because rumors were already swirling at school about the status of the event when Doninger wrote on her blog, the court deferred to the school and concluded that “it was foreseeable in this context that school operations might well be disrupted further by the need to correct misinformation as a consequence of [the] post.”43

Courts have been especially likely to rule against students when applying Tinker in cases involving threatening speech.44 In the 2015 case Bell v. Itawamba County School Board,45 for instance, a high school student posted to Facebook and YouTube a rap song he recorded that alleged two coaches at his school had improper contact with female students. The song included the lyrics “looking down girls shirts/drool running down your mouth/you fucking with the wrong one/going to get a pistol down your mouth.”46 The student insisted the allegations about inappropriate behavior were true and that he wrote the song to “increase awareness of the situation,”47 but school officials interpreted the lyrics as “threatening, harassing, and intimidating,”48 and they thus suspended the student for seven days and transferred him to an alternative school for the remaining five weeks of the grading period. The student sued.

The Fifth Circuit Court of Appeals ruled49 that because the student intended for his song to reach school, Tinker applied. “Tinker governs our analysis . . . when a student intentionally directs at the school community speech reasonably understood by school officials to threaten, harass, and intimidate a teacher, even when such speech originated, and was disseminated, off-campus without the use of school resources,” the court wrote.50 And the court ruled that the student’s punishment was constitutional because “a substantial disruption reasonably could have been forecast” by school officials since the rap “pertained directly to events occurring at school, identified the two teachers by name, and was understood by one to threaten his safety and by neutral, third parties as threatening.”51

In 2011, the Fourth Circuit Court of Appeals faced a case involving speech not about, nor directed at, a teacher or school official – instead, the incident at issue resulted from student speech targeting another student, amounting to what the court said was impermissible cyberbullying on MySpace.52 In the case, high school senior Kara Kowalski created a discussion group on MySpace titled S.A.S.H., which she claimed was an acronym for Students Against Sluts’ Herpes. The comments and pictures posted to the group, though, targeted one student, named Shay N., and one of the nearly two dozen students who joined the group at Kowalski’s invitation said S.A.S.H. was actually an acronym for Students Against Shay’s Herpes.53 Shay and her parents went to school officials the next day to report the site and file a harassment complaint. For creating a “hate website” in violation of a school policy against bullying, Kowalski received out-of-school suspension for five days and a 90-day social
suspension, which prevented her from attending school events in which she was not a direct participant.54 The Fourth Circuit upheld the punishment. The court said Kowalski knew talk about the MySpace group would reach school and that thus the “nexus of Kowalski’s speech to [the school’s] pedagogical interests was sufficiently strong to justify” the school’s actions under Tinker.55 “Given the targeted, defamatory nature of Kowalski’s speech, aimed at a fellow classmate, it created ‘actual or nascent’ substantial disorder and disruption in the school,” Judge Paul Niemeyer wrote for the court.56 He pointed out that Shay missed school in order to avoid further abuse. Moreover, the court ruled, “had the school not intervened, the potential for continuing and more serious harassment of Shay N. as well as other students was real.”57 The court said that “harassment and bullying is inappropriate” and that the First Amendment does not “hinder school administrators’ good faith efforts to address the problem” of bullying generally.58

Not all students who have faced punishment for social media speech have lost their cases under Tinker, however. J.S., of course, offers one example. In another case, a district court ruled in favor of two high school students who were barred from participating in a quarter of their fall extracurricular activities – including volleyball games and a show choir performance – as punishment for posting pictures of themselves simulating sex acts with phallic-shaped lollipops. The students took the pictures during a slumber party over the summer and posted them on MySpace, Facebook, and Photobucket. A parent brought printouts of the pictures to the school superintendent and said they were causing “divisiveness” among the girls on the volleyball team.59 The principal subsequently said the students violated a code of conduct that forbade students participating in extracurricular activities from bringing “discredit or dishonor upon yourself or your school.”60

The students sued, and in T.V. v. Smith-Green Community School, US district judge Philip Simon struck down the punishment. He assumed without deciding that Tinker controlled but ruled that the actual disruption in the case did not “come close” to meeting Tinker’s standard.61 “In sum, at most, this case involved two complaints from parents and some petty sniping among a group of 15 and 16 year olds. This can’t be what the Supreme Court had in mind when it enunciated the ‘substantial disruption’ standard in Tinker,” Simon wrote.62 He concluded that the school also did not demonstrate any factual basis to justify a reasonable forecast of disruption. Even though the speech at issue in the case amounted to “crass foolishness,”63 Simon said the First Amendment forbids any attempt at line drawing between worthy and unworthy speech. Moreover, Simon also ruled that the school’s code of conduct was unconstitutionally vague and overbroad. A school’s code cannot nullify students’ First Amendment rights, and Simon said that the code at issue here poorly defined the subjective terms “discredit” and “dishonor,” and the
code was overbroad because it potentially could be applied to a variety of constitutionally protected out-of-school student conduct.64

Mahanoy Area School District v. B.L.

Given the just-described confusion in the lower courts, student speech advocates clamored for years for the US Supreme Court to rule on a case involving students’ online speech, hopeful the Court would provide clarity. That case finally arrived in 2021. But even though the Court did rule in favor of a student in Mahanoy Area School District v. B.L.,65 the Court’s opinion left many questions unanswered, and more litigation ahead seems likely.

Mahanoy involved a Pennsylvania high school student, identified by the initials B.L., who was punished for a Snapchat message. The student, frustrated that she failed to make her high school’s varsity cheerleading team (she made the junior varsity team instead) and frustrated that she did not get her preferred position on a private softball team, used Snapchat while away from school, over a weekend, to post a picture of herself with her middle finger raised and accompanied by the caption “Fuck school fuck softball fuck cheer fuck everything.”66 At least one of B.L.’s Snapchat friends took a picture of the snap and shared it with other members of the cheerleading team, and one of the students who received the picture then showed her mother, who was a cheerleading team coach. After talking with the school’s principal, the cheerleading coaches decided that because the snap “used profanity in connection with a school extracurricular activity,” it violated team and school rules.67 As a result, the coaches punished the student by suspending her from the junior varsity team for the year.

In an 8–1 decision, the Supreme Court ruled that the punishment violated the student’s First Amendment rights. Importantly, the Court noted that B.L.

did not identify the school in her posts or target any member of the school community with vulgar or abusive language. [She] also transmitted her speech through a personal cellphone, to an audience consisting of her private circle of Snapchat friends. These features of her speech, while risking transmission to the school itself, nonetheless . . . diminish the school’s interest in punishing [her].68

Moreover, the Court also rejected the school’s argument that punishment was necessary as a way to prevent disruption. The only evidence of any disruption was minor, the Court said: There was a “discussion of [the snaps that] took, at most, 5 to 10 minutes of an Algebra class ‘for just a couple of days’ and . . . some members of the cheerleading team were ‘upset’ about the content of [the] Snapchats.”69 The Court concluded those alleged disturbances did not “meet Tinker’s demanding standard.”70 Finally, the Court also ruled that the
student’s use of profanity, in and of itself, did not necessitate punishment. The Court said the school “presented no evidence of any general effort to prevent students from using vulgarity outside the classroom.”71 Here, the student sent the snap while she was off campus over the weekend. Given that context, the Court concluded, “sometimes it is necessary to protect the superfluous in order to preserve the necessary.”72 The Court’s decision, though, was framed narrowly and left larger questions unanswered. Although the Court did recognize that a school’s interest in regulating off-campus student speech is “diminished”73 compared to its interest in regulating students’ speech on campus – in part because off-campus speech normally falls “within the zone of parental, rather than school-related, responsibility”74 – the Court made it clear that it was not creating any sort of bright-line rule that said schools could never punish students for their offcampus, online speech. “We do not believe the special characteristics that give schools additional license to regulate student speech always disappear when a school regulates speech that takes place off campus,” Justice Stephen Breyer wrote for the Court. “The school’s regulatory interests remain significant in some off-campus circumstances.” Those circumstances could include, the Court ruled, “serious or severe bullying or harassment targeting particular individuals; threats aimed at teachers or other students; the failure to follow rules concerning lessons, the writing of papers, the use of computers, or participation in other online school activities; and breaches of school security devices, including material maintained within school computers.”75 And Justice Breyer emphasized that other circumstances might be possible too, writing that the Court hesitates “to determine precisely which of many school-related off-campus activities belong on such a list.” Thus, the Court concluded, “[W]e do not now set forth a broad, highly general First Amendment rule stating just what counts as ‘off campus’ speech and whether or how ordinary First Amendment standards must give way off campus to a school’s special need to prevent, e.g., substantial disruption of learning-related activities or the protection of those who make up a school community.” While B.L., then, won her First Amendment lawsuit because of the specific circumstances of her case, legal battles over online, off-campus speech will continue to be fought in the years ahead given the ambiguity of the Court’s ruling.

Social Media and College Students

The court cases discussed so far have involved middle or high school students, but college students also have found themselves embroiled in legal disputes over their social media speech. In Yoder v. University of Louisville,76 for instance, a federal judge ruled that a former nursing student, Nina Yoder, effectively waived her First Amendment rights when she signed a consent form as part of a childbearing course that required her to follow a mother through the
birthing process. The form, which was signed by both Yoder and the mother she followed, provided that any information shared with the student “will be presented in written or oral form to the student’s instructor only.”77 After Yoder witnessed the mother’s labor and delivery, though, she wrote a blog post on her MySpace page about the experience. Although she did not reveal any information that specifically identified the mother or her family, Yoder’s post described, “in intimate detail,” what took place, including “medical treatment the birth-mother received, such as an epidural,” along with other health-related issues.78 One of Yoder’s classmates told the course’s instructor about the post, and the instructor then told School of Nursing administrators, who decided to dismiss Yoder from school. US district judge Charles Simpson III upheld the punishment, ruling that the school had legitimate reasons for having patients and students sign the consent form, and “because Yoder herself agreed not to publicly disseminate the information she posted on the internet, she is not entitled to now claim that she had a constitutional right to do so.”79 In another example, in a much-publicized case, Minnesota courts ruled that a college student could be punished for her Facebook posts. In Tatro v. University of Minnesota, Amanda Tatro, a student in the mortuary science department, was punished for a series of Facebook posts she made to her wall over two months in late 2009. In one post, she wrote that she “gets to play, I mean dissect, Bernie today” – which was the name she had given to the donated cadaver on which she was working.80 In another post, she wrote that she wanted to use an embalming tool “to stab a certain someone in the throat. . . . [P]erhaps I will spend the evening updating my ‘Death List #5’ and making friends with the crematory guy.”81 The comments worried a fellow mortuary science student, who reported them to university officials. The officials notified university police, who conducted an investigation but determined no crime had been committed and Tatro could return to class. The university, though, instead filed a formal complaint against Tatro, alleging she violated the school’s student conduct code by engaging in “threatening, harassing, or assaultive conduct” and by engaging in conduct “contrary to university rules related to the mortuary-science program.”82 A panel of the campus committee on student behavior agreed that Tatro violated the code. As punishment, she was given a failing grade in the course and required to enroll in an ethics course, write a letter of apology, and complete a psychiatric evaluation. She also was placed on academic probation for the remainder of her undergraduate career. Tatro’s attorney argued that Tinker should not apply to a university student at all, particularly in a case involving off-campus speech. The Minnesota Court of Appeals disagreed, though. Citing a recent Third Circuit Court of Appeals decision that applied Tinker in the university setting, the Minnesota court reasoned that Tinker controlled the case, but “what constitutes a substantial disruption in a primary school may look very different in a university.”83 The court nevertheless arguably leniently applied Tinker and sided with the school. Tatro said she jokingly was referring to her ex-boyfriend when she wrote about
stabbing someone, and police determined that she posed no threat. Even so, the court said, “[T]he fact that the university’s concerns were later assuaged does not diminish the substantial nature of the disruption that Tatro’s conduct caused or the university’s need to respond to the disruptive expression.”84 Tatro then appealed to the Minnesota Supreme Court, which also upheld her punishment – but on a different ground. The state supreme court concluded that Tatro’s posts violated the rules for her mortuary science program. “[W]e hold that a university does not violate the free speech rights of a student enrolled in a professional program, when the university imposes sanctions for Facebook posts that violate academic program rules that are narrowly tailored and directly related to established professional conduct standards,” the court wrote.85 The court emphasized what it said were the unique circumstances of a case that involved “a program that gives students access to donated human cadavers and requires a high degree of sensitivity.”86 Tinker protected a college student from discipline for his online speech in Murakowski v. University of Delaware. There, a student’s “sophomoric, immature, crude and highly offensive” essays – some of which referenced violence and sexual abuse – that he posted on a website he created led to his suspension.87 The website, maintained on the university’s server, came to the attention of school authorities after two students complained about the student and said they had visited his website and felt “uneasy and fearful around him.”88 US magistrate Judge Mary Pat Thynge ruled that, although the writings contained “graphic descriptions of violent behavior,” they did not “evidence a serious expression of intent to inflict harm” to specific individuals.89 The court also ruled that the complaints did not amount to a substantial disruption, nor had the university presented evidence that demonstrated it reasonably could forecast disruption. The suspension was thus unconstitutional.

Privacy and Social Media

Another area of controversy involves student privacy on social media. Issues surrounding the monitoring of student-athletes have been especially visible. In one high-profile incident, for instance, a top high school football player saw an elite university withdraw its scholarship offer to him after the school discovered the student had used racial and sexual slurs in his tweets.90 Schools say such offensive speech risks damaging both students’ reputations and the school’s image, so to curb any controversy some colleges adopt policies that, among other things, require their student-athletes to “friend” and/or to provide their social media handles to either a coach or a compliance officer.91 For example, a University of North Carolina social media policy required that “each team must identify at least one coach or administrator who is responsible for having access to and regularly monitoring the content of team members’ social networking sites and postings.”92 The policy emphasized to students that playing for the school “is a privilege, not a right.”93

Along the way, other schools have relied on software packages that automate the monitoring of athletes’ social media use.94 (A minimal, hypothetical sketch of the keyword flagging at the core of such tools appears below.) And some schools make it explicit that student-athletes face content restrictions on what they can post. A University of Kentucky policy, for instance, emphasized that the school had “zero tolerance for athletes who speak negatively about their university.”95 In another example, the University of Georgia’s recent handbook for student-athletes outlined that, while on social media, student-athletes “have the right to express opinions outside of [their] participation in athletics,” but they may be subject to discipline if the speech occurs “in any situation in which [they] are acting as a representative of the Athletic Association” or “holding [them]self out as a UGA student-athlete.”96

Other college coaches have alarmed student speech advocates by instituting outright bans on social media use – banning social media generally, at least in season, or focusing specifically on Twitter. In recent seasons, coaches at Boise State, Clemson, Iowa, Kansas, Louisville, Mississippi State, New Mexico State, South Carolina, and Texas Tech, among other places, have barred student-athletes from using Twitter or other social media platforms.97 The bans have frequently involved football or men’s basketball programs, but in 2012, the University of North Carolina’s women’s basketball coach also instituted a Twitter ban for her players,98 and women’s basketball coaches at the University of Connecticut and South Carolina, among other places, have banned social media use during their seasons. To date, none of the bans has been challenged in court. Commentators, though, have questioned their legality.99 Rather than punishing students for specific tweets – critical comments about the coaching staff, for instance, which the coach might argue could cause a disruption100 – the outright bans impose a blanket prior restraint that extends into the athletes’ private lives, encompassing speech about their activities and interests off the field and away from school.

More recent years, however, have seen some programs gradually loosen restrictions. As just one example, in June 2020, the University of Iowa’s football program lifted its longtime ban on players using Twitter. The head coach said that he made the change after multiple players approached him about using the platform to join the conversation then going on nationally about the death of George Floyd.101 Moreover, in July 2021, the National Collegiate Athletic Association (NCAA) officially changed its rules to allow college athletes to profit from their name, image, and likeness (referred to as NIL). The new rules permit student-athletes to, among other things, sign with agents or other legal representatives to get help obtaining endorsement or licensing deals and to also create their own businesses or participate in advertising campaigns.102 This newfound ability thus should allow student-athletes to promote those activities on their social media platforms, changing the way student-athletes’ speech is regulated.
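Vendors’ internals are proprietary and vary, but the core of the automated monitoring described above is typically simple: collect an athlete’s public posts and flag any that match a configurable watch list for human review. The following sketch is a hypothetical illustration of that idea only; the watch list, the Post record, and the sample handles are invented for this example and are not drawn from any actual product.

```python
# Hypothetical sketch of an automated social media monitor of the kind
# schools license. Illustrative only: the flagged-term list, Post record,
# and sample posts below are invented, not taken from any real product.

from dataclasses import dataclass

# Terms a compliance office might route for human review (invented).
FLAGGED_TERMS = {"gambling", "slur", "beer", "fight"}

@dataclass
class Post:
    author: str  # athlete's public handle
    text: str    # text of a public post

def flag_posts(posts):
    """Return (post, matched_terms) pairs for posts that need review."""
    results = []
    for post in posts:
        # Normalize each word: lowercase, strip common punctuation.
        words = {w.strip(".,!?#@'\"").lower() for w in post.text.split()}
        matched = words & FLAGGED_TERMS
        if matched:
            results.append((post, matched))
    return results

if __name__ == "__main__":
    sample = [
        Post("athlete_one", "Great practice today. Ready for Saturday!"),
        Post("athlete_two", "Post-game beer with the squad #celebrate"),
    ]
    for post, terms in flag_posts(sample):
        print(f"flag for review: @{post.author} matched {sorted(terms)}")
```

Real services layer more on top, such as image scanning and reviewer dashboards, but routing keyword matches to a compliance officer is the basic design.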
Issues other than those surrounding college athletes have also raised privacy concerns. For example, in 2012, a sixth-grade student in Minnesota sued her school district, claiming, among other things, that her Fourth Amendment rights were violated when school officials demanded that she give them her Facebook password, and they then searched her account – without a warrant – while she was in the room.103 In that same month, a school district investigation revealed that a Florida high school teacher undertook a series of questionable actions aimed at determining whether students had made disparaging comments about her on Facebook. The teacher reportedly called a student to the front of the classroom and told her to sign into her Facebook account on the teacher’s personal cell phone.104 According to news coverage of the incident, the teacher also “gave a small group of students a list with red marks next to the names of those suspected of making comments. She asked them to review Facebook accounts and write ‘ok’ next to those who did not write anything negative.”105 The school superintendent found the teacher’s behavior “very troubling” and suspended her.106

Another privacy controversy involves attempts to regulate social media interactions between students and teachers. School officials and legislators have insisted guidelines and restrictions serve to keep students safe from educators who misuse social media. Although administrators recognize that the vast majority of teachers use social media properly, examples do exist of teachers engaging in inappropriate contact that blurs teacher-student boundaries.107 And in extreme cases, educators have been arrested for sexual misconduct – for relationships that law enforcement officials say began with online communication.108 But strict regulations banning social media interactions entirely have faced pushback from teachers. Missouri, for instance, passed a state law in 2011 that provided, in part, “No teacher shall establish, maintain, or use a nonwork-related Internet site which allows exclusive access with a current or former student.”109 The law, among other things, presumably would have thus forbidden teachers from “friending” students on Facebook. Teachers criticized the law, saying they used social media as a valuable pedagogical tool and to engage students. The state teachers union sued, and a judge issued an injunction that barred the law from going into effect. The judge said the breadth of the prohibition was “staggering” and that it would have “a chilling effect on speech.”110 On the heels of the injunction, legislators dropped the ban and instead ordered school districts to develop their own policies.

Guidance and Best Practices

Confusion and uncertainty about social media and students’ rights still prevail as the law struggles to catch up to the technology around it. Educators, though, would be wise not to be so quick to censor and punish. Such overreactions invite negative attention and miss an opportunity to teach – about free speech, about tolerance, and about civility.

In 2011, for instance, a Kansas high school senior on a field trip to the state capitol tweeted, “Just made mean comments at gov. brownback and told him he sucked, in person #heblowsalot.” The student, Emma Sullivan, was frustrated at Brownback, the Kansas governor, for cutting arts funding in schools, but the tweet was untrue – she had not actually spoken to him. Members of the governor’s social media staff saw the tweet and alerted program managers leading the Youth in Government trip Sullivan was on. The program managers then told the student’s principal, who demanded that she apologize. Sullivan refused, and the incident quickly became a national story. Her Twitter account had around 60 followers at the time of the controversial tweet; once the story attracted national attention, her followers rocketed to more than 11,000.111 Within a week, the principal backed off the demand that she apologize. Ken Paulson of the First Amendment Center wrote at that time, “[E]fforts to punish her for her free expression backfired on every adult involved.”112 Brownback even issued a statement, apologizing for his staff’s overreaction. “Freedom of speech is among our most treasured freedoms,” he said.113

Sullivan’s tweet itself – occurring at an off-campus, school-sanctioned event – certainly did not cause any Tinker-level disruption. The controversy that ensued instead was brought on by the social media staff’s monitoring and then by the principal’s readiness to punish a student for criticizing her governor. That temptation to monitor and punish is hard for school officials to resist. Western Kentucky University, for instance, attracted media attention in 2012 for its aggressive monitoring of social media.114 The school persuaded Twitter to briefly shut down an account parodying the university, and the university’s president used Facebook to scold students for social media etiquette, telling them that employers “can and will track ways in which prospective employees have used social media. We, at WKU, track such things as well.”115 One student who said school officials had rebuked her friends for their social media posts told a reporter, “I don’t ever criticize the school on Twitter because I don’t want an ordeal made.”116 But discouraging criticism of the government and encouraging self-censorship rather than voicing complaints are not lessons we want schools to embrace.

School officials instead should handle off-campus social media speech with the legal remedies already available for off-campus speech generally: If the speech is libelous, for instance, school officials can sue. If the speech communicates a serious intent to commit violence against a particular person or group, it can be punished as a true threat. If a school becomes aware of cyberbullying that is “sufficiently severe, pervasive, or persistent as to interfere with or limit a student’s ability to participate in or benefit from the services, activities or opportunities offered by a school,” the speech may constitute harassment.117 But if the speech is criticism or parody or inflammatory in general – whether crude or sophisticated, juvenile or mature – schools are better off resisting punishment. Obviously, no school officials will be thrilled at discovering a vulgar social
media profile parodying them. Arguably the best approach to social media, though, is not to punish or ban but instead to educate, to teach social media literacy, talking with students – and students’ parents – about the positives and negatives of social media and about the possible consequences and impact of social media speech (for students and for those whom they write about). Schools, in other words, can emphasize the importance of civility, treating others – even those with whom they disagree – respectfully, and using social media and First Amendment rights responsibly.118

Fear of social media instead has led school officials to impulsively punish students, monitor their social media use, and ban social media entirely inside many K–12 schools. When used appropriately, though, social media tools can help educators reach students on their own terms. Armed with that perspective, in 2012, a coalition of education and technology advocates put forth a series of recommendations “aimed at rebooting school technology policies.”119 The recommendations, in part, called for K–12 schools to move away from blanket bans on cell phones and social networking sites inside schools and instead to adopt a “responsible-use policy . . . that emphasizes education and treats the student as a person responsible for ethical and healthy use of the Internet and mobile devices.”120

Student Media

Given social media’s popularity for students, student media advisers at both the high school and college levels should encourage their staff to embrace social media as a reporting tool. Presumably, how school officials could regulate a Facebook page created as a way for a high school student newspaper to distribute news, for instance, or a tweet by a student reporter breaking news for that publication would depend on the legal categorization of the student media outlet. School-sponsored media are governed by Hazelwood and can be censored if school officials point to a legitimate pedagogical concern. Student media designated – either by policy or practice – as public forums for student expression, where student editors make their own content decisions, are instead governed by Tinker. It remains an open question whether Hazelwood applies at all to college media. A controversial 2005 Seventh Circuit Court of Appeals decision said it did,121 but other courts have ruled that college media are free from Hazelwood’s constraints. No matter the outlet, advisers should push their staff to consider developing social media guidelines that encourage responsible use and professionalism when using social media to represent their publication.122

Research conducted by the Knight Foundation has shown that students who work on student media are more aware and more supportive of First Amendment rights. That research has also now found that as students’ social media use grows generally, so too does their support for free expression.123 By championing social media, then, schools are better preparing students for civic life.

FREQUENTLY ASKED QUESTIONS

1. Can K–12 public schools ban students’ social media use at school?

Yes. Although outright bans on cell phones and all social media use during the school day might not be the best pedagogical policy, schools can constitutionally bar students from accessing social media while at school. The bans, schools say, are reasonable content-neutral policies put in place to protect the learning environment from distractions. (A minimal, hypothetical sketch of the domain-filtering logic such bans typically rely on appears after this list.)

2. Can a school punish students for speech they create using social media off campus, away from school?

The US Supreme Court addressed this question in its 2021 decision in Mahanoy Area School District v. B.L., providing guidance but leaving many questions still unanswered. Given the particular facts of the case – a student posting vulgarity, which did not identify the school or target any member of the school community, on a Snapchat message over the weekend while off campus – the Court ruled in the student’s favor. But the Court framed its decision narrowly, making it clear the case did not “set forth a broad, highly general First Amendment rule stating just what counts as ‘off campus’ speech and whether or how ordinary First Amendment standards must give way off campus to a school’s special need to prevent, e.g., substantial disruption of learning-related activities or the protection of those who make up a school community.” More litigation thus seems likely in the years ahead.

3. Is a college student’s off-campus social media speech treated with a different legal standard than speech from a middle or high school student?

Not necessarily, according to some recent court decisions. Although the Supreme Court ruled in 1972 that the First Amendment applies with “[no] less force on college campuses than in the community at large,”124 lower courts have applied Tinker to cases involving college students’ social media speech. And in Yoder, a federal judge ruled that a college student effectively waived her First Amendment rights when she signed a consent form as part of a childbearing course, which meant that she could thus be dismissed from school for her MySpace blog that described the labor and delivery she witnessed.

4. Can schools ban student-athletes from using social media entirely?

That’s unresolved. Coaches have instituted social media bans at several universities. None of the bans has been challenged in court. In a 1995 case involving student-athletes’ privacy, the Supreme Court did say that students “who voluntarily participate in school athletics have reason to expect intrusions upon normal rights and privileges.”125 The outright bans on social media use, though, impose a blanket prior restraint that extends far into the athletes’ private lives. More recent years have seen some athletic programs gradually loosen social media restrictions. And the NCAA’s 2021 change to allow student-athletes to profit from their name, image, and likeness arguably shifts the way student-athletes’ speech is regulated.
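As a technical footnote to FAQ 1, the bans schools describe there are usually enforced at the network level by a web proxy or DNS filter that refuses requests to listed services. The snippet below is a minimal, hypothetical sketch of that matching rule; the domain list is invented for illustration and is not any district’s actual configuration.

```python
# Hypothetical sketch of the domain-matching rule a school web filter
# might apply. The blocklist is invented for illustration.

BLOCKED_DOMAINS = {"facebook.com", "instagram.com", "tiktok.com"}

def is_blocked(host: str) -> bool:
    """Return True if host is a blocked domain or one of its subdomains.

    The rule is content-neutral in the sense the FAQ describes: it turns
    on which service is requested, not on what any student says there.
    """
    host = host.lower().rstrip(".")
    return any(host == d or host.endswith("." + d) for d in BLOCKED_DOMAINS)

# Example: the filter would refuse "www.tiktok.com" but allow a news site.
assert is_blocked("www.tiktok.com")
assert not is_blocked("news.example.org")
```

Production deployments put this test inside a proxy or DNS resolver rather than a standalone script, but the content-neutral character of such a policy is visible in the logic itself: the rule looks only at the requested domain, never at the speech.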

Notes

1. Monica Anderson & Jingjing Jiang, Teens, Social Media & Technology 2018, Pew Research Center, May 31, 2018, www.pewresearch.org/internet/2018/05/31/teens-social-media-technology-2018.
2. Id.
3. Salvador Rodriguez, TikTok Usage Surpassed Instagram This Year among Kids Aged 12 to 17, Forrester Survey Says, CNBC, November 18, 2021.
4. Evie Blad, Networking Web Sites Enable New Generation of Bullies, Ark. Democrat-Gazette, April 6, 2008, at A1.
5. Clay Calvert, Punishing Public School Students for Bashing Principals, Teachers and Classmates in Cyberspace: The Speech Issue the Supreme Court Must Now Resolve, 7 First Amend. L. Rev. 210, 219 (2009).
6. 141 S. Ct. 2038 (2021).
7. See David Rodriguez, Disciplinary Action Taken Against Salinas High School Students Involved in Racist Social Media Posts, The Californian, August 26, 2021.
8. See Chelsea Davis, School’s Harsh Punishment of Star Student on Maui Turns into Test of Free Speech, Hawaii News Now, October 8, 2021.
9. See Students Punished for “Vulgar” Social Media Posts Are Fighting Back, The New York Times, February 5, 2021.
10. Calvert, supra note 5 at 211.
11. The First Amendment does not apply to private schools because they are not government agencies. Private school students could possibly find relief, though, either in state constitutions or state laws or in claiming school censorship amounts to a breach of the guidelines or rules established by the private school itself – if those guidelines or rules promise protections for free speech. See generally Legal Guide for the Private School Press, Student Press L. Ctr., December 1, 2012, www.splc.org/article/2002/12/legal-guide-for-the-private-school-press_1201.
12. 393 U.S. 503, 506 (1969).
13. Id. at 513.
14. Id. at 517 (Black, J., dissenting).
15. 478 U.S. 675, 677 (1986).
16. Id. at 683.
17. Id.
18. Id. at 685.
19. 484 U.S. 260, 271 (1988).

20. Id. at 273.
21. 551 U.S. 393, 408 (2007).
22. Id. at 401.
23. Id. at 396.
24. See Beussink v. Woodland R-IV Sch. Dist., 30 F. Supp. 2d 1175 (E.D. Mo. 1998).
25. See generally Mary-Rose Papandrea, Student Speech Rights in the Digital Age, 60 Fla. L. Rev. 1027, 1036–1037 (2008).
26. 593 F.3d 286, 290 (3rd Cir. 2010), rev’d en banc, 650 F.3d 915 (3rd Cir. 2011).
27. 650 F.3d 915, 920 (3rd Cir. 2011).
28. 2008 U.S. Dist. LEXIS 72685, 18 (M.D. Pa. 2008).
29. 593 F.3d 286, 308 (3rd Cir. 2010), rev’d en banc, 650 F.3d 915 (3rd Cir. 2011).
30. 650 F.3d 915, 929 (3rd Cir. 2011).
31. Id. at 936 (Smith, J., concurring).
32. Id. at 950–951 (Fisher, J., dissenting).
33. Papandrea, supra note 25 at 1056.
34. See, e.g., J.S. v. Bethlehem Area Sch. Dist., 569 Pa. 638, 668 (Pa. 2002) (“We hold that where speech that is aimed at a specific school and/or its personnel is brought onto the school campus or accessed at school by its originator, the speech will be considered on-campus speech”).
35. Wisniewski v. Weedsport Cent. Sch. Dist., 494 F.3d 34, 39 (2nd Cir. 2007) (“[I]t was reasonably foreseeable that the IM icon would come to the attention of school authorities and the teacher whom the icon depicted being shot”). See also D.J.M. v. Hannibal Pub. Sch. Dist., 647 F.3d 754, 766 (8th Cir. 2011) (“[I]t was reasonably foreseeable that D.J.M.’s threats about shooting specific students in school would be brought to the attention of school authorities and create a risk of substantial disruption within the school environment”); J.C. v. Beverly Hills Unified Sch. Dist., 711 F. Supp. 2d 1094, 1108 (C.D. Cal. 2010) (“Several cases have applied Tinker where speech published or transmitted via the Internet subsequently comes to the attention of school administrators, even where there is no evidence that students accessed the speech while at school”).
36. Papandrea, supra note 25 at 1064.
37. See Calvert, supra note 5 at 235.
38. Layshock v. Hermitage Sch. Dist., 650 F.3d 205, 220–221 (3rd Cir. 2011) (Jordan, J., concurring). The judge continued, “Modern communications technology, for all its positive applications, can be a potent tool for distraction and fomenting disruption.” Id. at 222.
39. Doninger v. Niehoff, 527 F.3d 41, 45 (2nd Cir. 2008).
40. Id.
41. Id. at 50–51.
42. Id. at 51.
43. Id. After being denied the injunction, Doninger continued her case in court and argued that she deserved monetary damages because she was denied her First Amendment right to criticize school officials. In 2011, a different Second Circuit panel ruled that it did not need to decide whether her punishment in fact violated the First Amendment because the law surrounding off-campus speech is so unsettled that the school officials were entitled to qualified immunity. See Doninger v. Niehoff, 642 F.3d 334 (2nd Cir. 2011).
44. This is generally true in all student speech cases, not just those involving social media. See generally Calvert, supra note 5 at 243–244.
45. 799 F.3d 379 (5th Cir. 2015).
46. Id. at 384.
47. Id. at 386.
48. Id. at 383.
49. A Fifth Circuit panel had actually ruled in favor of the student in 2014, but the full Fifth Circuit subsequently vacated that panel opinion and ordered en banc review.
50. 799 F.3d at 396.
51. Id. at 398–399. See also O.Z. v. Bd. of Tr. of the Long Beach Unified Sch. Dist., 2008 U.S. Dist. LEXIS 110409, 11 (C.D. Cal. 2008) (upholding the transfer of a student as punishment for a slide show she posted to YouTube that depicted the killing of her English teacher because “although O.Z. created the slide show off-campus, it created a foreseeable risk of disruption within the school”).
52. Kowalski v. Berkeley County Sch., 652 F.3d 565 (4th Cir. 2011). In extreme instances in recent years, students have committed suicide as a result of bullying from their peers, some of which occurred online. See, e.g., Martin Finucane, DA Defends Light Sentences in Phoebe Prince Case, Boston Globe, May 5, 2011.
53. For example, one student uploaded a picture of himself and a friend holding their noses while displaying a sign that read “Shay Has Herpes.” 652 F.3d at 568.
54. Id. at 568–569.
55. Id. at 573.
56. Id. at 574 (internal quotations and citations omitted).
57. Id.
58. Id. at 577. But see J.C. v. Beverly Hills Unified Sch. Dist., 711 F. Supp. 2d 1094, 1122 (C.D. Cal. 2010) (striking down a student’s punishment for posting a video to YouTube that demeaned another student because the court “cannot uphold school discipline of student speech simply because young persons are unpredictable or immature, or because, in general, teenagers are emotionally fragile and may often fight over hurtful comments”).
59. 807 F. Supp. 2d 767, 5 (N.D. Ind. 2011).
60. Id. at 7.
61. Id. at 35.
62. Id. at 38.
63. Id. at 41.
64. Id. at 51–56.
65. 141 S. Ct. 2038 (2021).
66. Id. at 2043.
67. Id.
68. Id. at 2047.
69. Id. at 2047–2048.
70. Id. at 2048.
71. Id. at 2047.
72. Id. at 2048.
73. Id. at 2046.
74. Id. at 2045.
75. Id.
76. 2012 U.S. Dist. LEXIS 45264 (W.D. Ky. 2012).
77. Id. at 3.
78. Id. at 17.
79. Id. See also Snyder v. Millersville University, where a judge upheld the removal of a university student, Stacey Snyder, from a student teaching placement at a local high school. The punishment kept her from obtaining teacher certification. The judge ruled that Snyder was more akin to a teacher – a public employee – than a student given that she did not take university classes during the placement and instead was responsible for curriculum planning, grading, and attending faculty meetings at the high school. She could thus be punished, in part, for her MySpace postings that referenced the school and her students and for a picture that showed her drinking since the posts did not discuss matters of public concern but instead “raised only personal matters.” 2008 U.S. Dist. LEXIS 97943, 42 (E.D. Pa. 2008).
80. 800 N.W.2d 811, 814 (Minn. Ct. App. 2011).
81. Id.
82. Id. at 815.
83. Id. at 821. The court cited DeJohn v. Temple University, 537 F.3d 301 (3rd Cir. 2008).
84. 800 N.W.2d at 822.
85. Tatro v. Univ. of Minnesota, 816 N.W.2d 509, 521 (Minn. 2012).
86. Id. at 524.
87. 575 F. Supp. 2d 571, 590 (D. Del. 2008).
88. Id. at 578.
89. Id. at 590.
90. See Nick Glunt, Expulsion Over Raunchy Tweets May Cost High School Football Star His College Dream, Student Press Law Center, January 23, 2012, www.splc.org/wordpress/?p=3109. The student subsequently accepted a scholarship offer from the University of Colorado. For an even more recent example, see Walter Smith-Randolph, Xavier Revokes Student Athlete’s Admission after Racist Social Media Posts, WKRC, June 3, 2020, https://local12.com/news/local/xavier-revokes-student-athletes-admission-after-racist-social-media-posts-cincinnati.
91. At least a dozen states have passed laws that limit colleges’ ability to gain access to their students’ social media postings. The laws generally forbid colleges “from requiring students or prospective students to provide passwords or otherwise allow access to private social media accounts.” The laws do not prohibit schools from monitoring publicly available social media information, though. Denielle Burl, Youndy C. Cook, & Margaret Wu, Social Media, Anonymous Speech and When Social Media becomes the Crisis, The National Association of College and University Attorneys (2015), www.nacua.org/securedocuments/programs/June2015/8E_15_6_63.pdf.
92. The University of North Carolina at Chapel Hill, Department of Athletics, Policy on Student-Athlete Social Networking and Media Use (2011).
93. Id.
94. See, e.g., Kathleen Nelson, Services Monitor Athletes on Facebook, Other Sites, St. Louis Post-Dispatch, February 1, 2012.
95. Jon Hale, Don’t Feed the Trolls: Highlights from Colleges’ Social Media Policies for Athletes, Courier Journal, January 11, 2018.
96. Emily Garcia, To Tweet or Not to Tweet: How Social Media Policies Burden Student-Athletes’ Free Speech Rights, Foundation for Individual Rights in Education, July 13, 2021, www.thefire.org/to-tweet-or-not-to-tweet-how-social-media-policies-burden-student-athletes-free-speech-rights.
97. See generally Eric Robinson, Intentional Grounding: Can Public Colleges Limit Athletes’ Tweets? Citizen Media Law Project, November 9, 2010, www.citmedialaw.org/blog/2010/intentional-grounding-can-public-colleges-limit-athletes-tweets.
98. See Michael Lananna, Sylvia Hatchell Bans UNC Women’s Basketball Team’s Twitter Use, The Daily Tar Heel, January 27, 2012, www.dailytarheel.com/index.php/article/2012/01/sylvia_hatchell_bans_womens_teams_twitter_use.
99. See, e.g., Robinson, supra note 97. But see Mary Margaret “Meg” Penrose, Free Speech Versus Free Education, 1 Miss. Sports L. Rev. 1, 94 (2011) (arguing that “athletic departments should feel confident in regulating or banning their student-athletes’ use of social media”).
100. See, e.g., Terry Hutchens, IU DB Andre Kates Now Suspended Indefinitely, Indystar.com, October 31, 2010, http://blogs.indystar.com/hoosiersinsider/2010/10/31/iu-db-andre-kates-now-suspended-indefinitely/ (Indiana University football player suspended for tweets critical of the coaching staff). In the high school setting, courts have upheld punishment of athletes under Tinker for speech that criticized coaches and sought to undermine their authority. See, e.g., Lowery v. Euverard, 497 F.3d 584 (6th Cir. 2007).
101. Robert Read, Iowa Football Players Take to Twitter to Address Current State of Hawkeye Program, The Daily Iowan, June 8, 2020, https://dailyiowan.com/2020/06/08/iowa-football-players-take-to-twitter-to-address-current-state-of-hawkeye-program.
102. NCAA to Allow College Athletes to Profit on Name, Image, and Likeness, JD Supra, July 2, 2021, www.jdsupra.com/legalnews/ncaa-to-allow-college-athletes-to-1825678.
103. See Emily Summars, Lawsuit Claims Minn. School Officials Demanded Sixth-grader’s Facebook Password, Student Press Law Center, March 7, 2012, www.splc.org/news/newsflash.asp?id=2344.
104. See Jeffrey Solochek, Pasco Teacher Accused of Policing Students’ Facebook Comments, Tampa Bay Times, March 22, 2012.
105. Id.
106. Id.
107. See Jennifer Preston, Rules to Stop Pupil and Teacher From Getting Too Social Online, New York Times, December 17, 2011.
108. Id. (“In Sacramento, a 37-year-old high school band director pleaded guilty to sexual misconduct stemming from his relationship with a 16-year-old female student; her Facebook page had more than 1,200 private messages from him, some about massages”).
109. S.B. 54, 96th General Assembly, 1st Regular Sess. (Mo. 2011).
110. Mo. State Teachers Ass’n v. Mo., No. 11AC-CC00553 (Cir. Ct. Cole County August 2011).
111. See Nicole Hill, School District Won’t Make Kan. Student Apologize for Tweet Against Governor, Student Press Law Center, November 28, 2011, www.splc.org/news/newsflash.asp?id=2302.
112. Ken Paulson, Tweet Backlash: Kan. Officials Learn Lesson about Free Speech, First Amendment Center, November 28, 2011, www.firstamendmentcenter.org/tweet-backlash-kan-officials-learn-lesson-about-free-speech.
113. Id.
114. See, e.g., Ky. School Aggressively Fights Twitter Criticism, Fox News.com, February 27, 2012, www.foxnews.com/us/2012/02/27/ky-school-aggressively-fights-twitter-criticism.
115. Id.
116. Id.
117. See Department of Education, “Dear Colleague” Letter Re Bullying (October 26, 2010), www.nacua.org/documents/DearColleagueLetter_Bullying.pdf.
118. See, e.g., Denielle Burl, From Tinker to Twitter: Managing Student Speech on Social Media, 9 NACUA Notes (2011), www.studentaffairs.uconn.edu/docs/risk_mgt/nacua5.pdf (advising that the best way to prevent student misuse of social media is to “fight speech with speech”); Neal Hutchens, You Can’t Post That . . . Or Can You? Legal Issues Related to College and University Students’ Online Speech, 49 J. of Student Aff. Res. and Prac. 1, 13 (2012) (arguing that “colleges and universities should engage students more broadly and deeply regarding issues related to their online expression”); Papandrea, supra note 25 at 1098 (arguing that “the primary approach that schools should take to most digital speech is not to punish their students, but to educate their students about how to use digital media responsibly”).

Student Speech

149

119. Frank LoMonte, In “Making Progress” Report, Education Leaders Call for a Reboot of Schools’ Restrictive Technology Policies, Student Press Law Center, April 11, 2012, www.splc.org/wordpress/?p=3508.
120. Making Progress: Rethinking State and School District Policies Concerning Mobile Technologies and Social Media (2012), www.splc.org/pdf/making_progress_2012.pdf.
121. Hosty v. Carter, 412 F.3d 731 (7th Cir. 2005).
122. See, e.g., Nick Dean, Living Social: College Newsrooms Revisiting Ethics Policies for the Twitter Generation, 32 Student Press Law Center Rep. 30 (2011), www.splc.org/news/report_detail.asp?id=1611&edition=56.
123. Knight Foundation, Future of the First Amendment (2011), www.knightfoundation.org/media/uploads/article_pdfs/Future-of-the-First-Amendment2011-full.pdf.
124. Healy v. James, 408 U.S. 169, 180 (1972).
125. Vernonia Sch. Dist. v. Acton, 515 U.S. 646, 657 (1995).

Chapter 8

Obscenity, Nonconsensual Pornography, and Cyberbullying

Adedayo L. Abah
WASHINGTON & LEE UNIVERSITY

Amy Kristin Sanders
UNIVERSITY OF TEXAS

As social media has grown, so too has its power to spread harm. Nonconsensual pornography, which traces its roots to the notorious website IsAnyoneUp.com, has made headlines worldwide as victims grapple with the consequences of having their sexually explicit images spread across the Internet. Similarly, recent research suggests that online harassment – from nonconsensual and deepfake pornography to cyberbullying – has become a way of life in cyberspace. According to a September 2020 study by the Pew Research Center, roughly two-thirds of adults under 30 have experienced some form of online harassment, and 41% of all Americans have personally experienced online harassment in at least one of the six ways the study measured. The number of people who reported they had been sexually harassed online rose from 6% in 2017 to 11% in 2020. Seventy-five percent of all adults who have been personally harassed online say their most recent experience occurred on social media.1

The law and the courts have grappled with how to respond legally to the increased hostility that social media has made a regular part of our everyday lives. The development of electronic communication technology and social media tools opened the door for criminal and civil litigation in a variety of areas, including defamation, privacy, and intellectual property, as this book has suggested. Many of the early developments in cyberspace law related to the availability and regulation of sexually explicit content on the Internet. In 2004, one researcher testified before Congress, likening Internet pornography to illegal drugs:

The internet is a perfect drug delivery system because you are anonymous, aroused and have role models for these behaviors. To have [a] drug pumped into your house 24/7, free, and children know how to use it better than grown-ups know how to use it. [I]t’s a perfect delivery system if we want to have a whole generation of young addicts who will never have the drug out of their mind.2

Congressional interest in regulating sexually explicit content – including obscenity and pornography – on the Internet predates the development of social media and even the aforementioned 2004 legislative hearing.3 However, those early efforts largely inform current attempts to proscribe cyberbullying and the transmission of sexually explicit content via social media. After a basic introduction to the regulation of obscenity and indecency, this chapter provides an overview of early legislation that attempted to stem the tide of sexually explicit content on the Internet. Particular attention is paid to early legislative efforts that produced litigation culminating at the US Supreme Court. Next, the chapter explores recent attempts to regulate nonconsensual pornography (the practice of maliciously posting sexually explicit photographs of another person) and deepfake pornography. Deepfake pornography is created by using AI technology to digitally manipulate video, swapping one person’s face for another’s. Often, though not always, it is the face of a celebrity or public figure that is swapped onto the body of a person engaged in a pornographic act. The discussion focuses on several prominent cases, including what is believed to be the first conviction under a law specifically aimed at revenge pornography. The chapter then addresses cyberbullying, including an update on the status of legislation designed to curtail the practice. Finally, the chapter concludes by highlighting related areas of legal concern that are likely to develop in the future.

Regulating Sexually Explicit Speech Offline and Online

The US Supreme Court set the stage for the regulation of obscene and indecent expression long before the Internet arrived in our homes. Several decades ago, in Roth v. United States,4 Justice William Brennan, writing for a six-justice majority, clearly enunciated the Court’s belief that obscene speech fell outside the protection of the First Amendment:

The dispositive question is whether obscenity is utterance within the area of protected speech and press. Although this is the first time the question has been squarely presented to this Court, either under the First Amendment or under the Fourteenth Amendment, expressions found in numerous opinions indicate that this Court has always assumed that obscenity is not protected by the freedoms of speech and press.5

He went on, noting that as far back as 1942, the Court acknowledged in Chaplinsky v. New Hampshire the permissible regulation of sexually explicit speech rising to the level of obscenity.6 Shortly after the Roth decision, however, consensus broke down on the Court as to what standard should be used to determine whether speech was obscene.

It was during this time period that Justice Potter Stewart, when describing obscenity, made his oft-repeated remark:

I shall not today attempt further to define the kinds of material I understand to be embraced within that shorthand description; and perhaps I could never succeed in intelligibly doing so. But I know it when I see it, and the motion picture involved in this case is not that.7

Obscenity law remained in a state of disarray, with the Court deciding cases on an almost ad hoc basis until 1973.8 In that year, a 5–4 majority in Miller v. California agreed upon the standard currently used to determine whether expression should be considered obscene and outside the scope of First Amendment protection.9 The Supreme Court vacated a California criminal conviction under the state’s obscenity statute, ruling that the First Amendment required a three-part showing for speech to be considered obscene:

(a) whether the average person, applying contemporary community standards would find that the work, taken as a whole, appeals to the prurient interest; (b) whether the work depicts or describes, in a patently offensive way, sexual conduct specifically defined by the applicable state law; and (c) whether the work, taken as a whole, lacks serious literary, artistic, political, or scientific value.10

The ruling served to cement the Court’s approach to obscenity regulation, and the Miller test has returned to the forefront as Congress has attempted to legislate expression occurring on the Internet and via social media.

Legislative efforts to regulate sexually explicit expression on the Internet have not solely been based on obscenity’s status outside the scope of First Amendment protection. In fact, a number of legislative attempts have tried to curtail indecent speech and pornography as well. Such a regulatory approach would have limited the rights of Internet speakers in much the same way the Court limited the rights of broadcasters in its 1978 ruling in FCC v. Pacifica.11 There, a majority of the Court upheld the FCC’s right to sanction broadcasters for airing indecent content, in part based on the notion that broadcast television and radio were pervasive and uniquely accessible to children.12 As a result, the FCC has the authority to fine broadcasters who air indecent content outside the Safe Harbor hours of 10:00 p.m. to 6:00 a.m.

Indecency, however, remained a concept reserved solely for broadcast media throughout much of the 1980s and 1990s. Print publishers and cable operators were not subject to the FCC’s regulations, which prohibited the broadcast of “language that describes, in terms patently offensive as measured by contemporary community standards for the broadcast medium, sexual or excretory
activities and organs, at times of the day when there is a reasonable risk that children may be in the audience.”13 However, that changed when President Bill Clinton signed the Telecommunications Act of 1996 into law.

The Telecommunications Act of 1996 represented the first comprehensive overhaul of telecommunication policy since the 1934 Communications Act, and it attempted to solidify the Internet’s place within the FCC’s regulatory authority.14 Contained within the act’s 128 pages was Title V, targeting obscenity and violence in various telecommunication media, including the Internet. Title V, also known as the Communications Decency Act (CDA), was of particular interest to those concerned about freedom of expression because Section 223 criminalized the knowing transmission of obscene or indecent material to recipients under 18.15

On the day President Clinton signed the law, the American Civil Liberties Union (ACLU) and 19 additional plaintiffs challenged Section 223 of the Communications Decency Act in federal court,16 laying the groundwork for the US Supreme Court’s first decision17 in the area of cyberspace law. In the case, the ACLU argued that two of the CDA’s provisions designed to protect minors from harmful material on the Internet were unconstitutional.18 After the issuance of a temporary restraining order, a three-judge panel in a US district court made a lengthy finding of fact before issuing a preliminary injunction that prevented the government from enforcing the provisions against material claimed to be indecent under §223(a)(1)(B) or any material under §223(d). Based on a special provision in the statute, the government appealed the district court decision directly to the US Supreme Court.

Justice Stevens, writing for the majority in Reno v. ACLU, penned an opinion that laid the foundation for the Court’s subsequent jurisprudence in the realm of cyberspace and electronic communication. Observing that the rules of traditional media should not automatically be applied to the Internet, Justice Stevens opined,

Neither before nor after the enactment of the CDA have the vast democratic fora of the Internet been subject to the type of government supervision and regulation that has attended the broadcast industry. Moreover, the Internet is not as “invasive” as radio or television.19

Distinguishing the Internet from the regulations permitted in broadcast, Justice Stevens noted that the Court need not be bound by the Pacifica ruling.20 Section 223(a)(1)(B), the opinion asserted, was more akin to regulations the Court had struck down in a 1989 case involving the prohibition of sexually oriented commercial telephone messages.21 In Sable Communications, the Court ruled that a provision in the Communications Act that prohibited the transmission of both obscene and indecent dial-a-porn messages was unconstitutional as to the indecent content, which was entitled to some First Amendment protection.22 Relying heavily on Pacifica’s reasoning that broadcast is
uniquely pervasive, the Court distinguished commercial messages transmitted via the telephone:

In contrast to public displays, unsolicited mailings and other means of expression which the recipient has no meaningful opportunity to avoid, the dial-it medium requires the listener to take affirmative steps to receive the communication. There is no “captive audience” problem here; callers will generally not be unwilling listeners. . . . Unlike an unexpected outburst on a radio broadcast, the message received by one who places a call to a dial-a-porn service is not so invasive or surprising that it prevents an unwilling listener from avoiding exposure to it.23

Justice Stevens also observed that the scarcity rationale used by Congress to justify the regulation of broadcast and embraced by the Court in Red Lion Broadcasting v. FCC24 simply did not apply in the Internet context. The Court likened the Internet more to the print medium, establishing a jurisprudential approach that would make content regulation extremely difficult.25

In addition to the hands-off regulatory approach that seemed to be embraced by the Court in Reno, the majority took issue with the breadth of the CDA’s restrictions on speech. Foreshadowed by Justice Stevens’ discussion of Sable, the majority held both provisions to be unconstitutionally overbroad. The majority found troubling the statute’s restriction on “indecent” content and its prohibition on “patently offensive” material – neither of which were defined in the legislation. Both terms, Justice Stevens wrote, could be interpreted to include within their sweep content that adults possessed a First Amendment right to access:

We are persuaded that the CDA lacks the precision that the First Amendment requires when a statute regulates the content of speech. In order to deny minors access to potentially harmful speech, the CDA effectively suppresses a large amount of speech that adults have a constitutional right to receive and to address to one another. That burden on adult speech is unacceptable if less restrictive alternatives would be at least as effective in achieving the legitimate purpose that the statute was enacted to serve.26

The Court further noted the breadth of application of the CDA’s provisions.27 As written, the statute applied to all transmissions of material harmful to minors – commercial or otherwise. No exceptions were made for nonprofit entities, individuals, or educational institutions, leaving within the provisions’ scope non-pornographic materials that may have educational or other social value.

On the heels of the loss in Reno, legislators returned to the drawing board in an attempt to craft a new statute that could address the Court’s concerns. As a result, the Child Online Protection Act28 (COPA) was enacted in October 1998.29 It prohibited any person from
knowingly and with knowledge of the character of the material, in interstate or foreign commerce by means of the World Wide Web, mak[ing] any communication for commercial purposes that is available to any minor and that includes any material that is harmful to minors.30

Taking a cue from the Court’s decision in Reno, legislators attempted to draft COPA in a narrower fashion, making three distinct changes. First, the new provision applied only to World Wide Web content, whereas the CDA provisions applied to all expression via the Internet, including email.31 Second, COPA was limited to communication for commercial purposes, meaning educational or other nonprofit communication would be exempted from its criminal penalties. Finally, lawmakers reworked the CDA prohibition on “indecent” and “patently offensive” speech – terms that had doomed the legislation in the Supreme Court – by substituting in the phrase “material harmful to minors,” which was drawn from a previous Supreme Court case, New York v. Ferber,32 and defining the term using a test similar to the Miller test.

One month prior to COPA taking effect, free speech advocates, including the ACLU, challenged the legislation in court, claiming it violated the First Amendment.33 After winning an initial injunction against COPA, the ACLU was victorious when the Third Circuit ruled the use of “community standards” to define “material harmful to minors” was overbroad.34 However, the US Supreme Court, in 2002, rejected this reasoning, and the case returned to the Third Circuit.35 Hearing the case a second time, the Third Circuit once again struck down the law as unconstitutional, finding it limited adults’ rights to access content to which they had a constitutional right.36 In 2004, the US Supreme Court again heard Ashcroft v. ACLU, ruling this time that the law was likely to be unconstitutional and noting the ability of filtering software to serve as an alternative to harsher restrictions.37 After a rehearing at the district court, where a permanent injunction was issued, the Third Circuit upheld that decision, effectively killing COPA.38

After the ACLU’s victories in both cases, it seemed the government was at a loss to regulate sexually explicit content on the Internet, adding support to the colloquial references to the Internet as the Wild West. On the heels of the government’s early losses in Reno and Ashcroft, Congress was busy drafting additional legislation aimed at protecting children from sexually explicit Internet content. The Children’s Internet Protection Act (CIPA), signed into law in 2000, took a different approach than its predecessors. Instead of tackling all Internet use, CIPA focused on two key providers of Internet access to children: schools and libraries. Although Congress could not force all school and library boards to require the use of filtering software on their Internet stations, it could condition funding through its E-Rate program on their installation. Given the large number of schools and libraries that were dependent on the funding, the legislation swept broadly across the country.

In response, the American Library Association sued, claiming CIPA required librarians to violate patrons’ constitutional rights by blocking access to protected speech. In 2002, a three-judge panel in a US district court agreed with the librarians’ group, ruling that less restrictive means – including supervision by librarians – would better achieve the government’s objective without infringing on patrons’ First Amendment rights. The US Supreme Court, in a 6–3 vote, overruled the lower court’s decision, characterizing CIPA not as a speech restriction but instead as a condition of participating in a government funding program – which libraries were not required to do.39

The government’s victory in U.S. v. American Library Association empowered lawmakers, who have introduced numerous pieces of legislation involving minors and sexually explicit speech on the Internet since the Court’s 2003 decision. Legislation that would require schools and libraries to protect children from sexual predators when using social networking and chat sites has been introduced in Congress several times – though it has never been passed by both chambers. Known as the Deleting Online Predators Act (DOPA), the legislation would have limited access to a wide range of websites based on its broad definition of “social networking site.”40

Several states, including Georgia, North Carolina, and Illinois, have attempted to take similar actions to protect children using the Internet. Typically, these laws take one of two forms. The first group of laws – similar to the proposed Illinois Social Networking Prohibition Act – imposes restrictions on libraries and schools, mandating limitations on what students can access or prohibiting access to certain sites altogether. The second type – more akin to the North Carolina Protect Children from Sexual Predators Act enacted in 200841 – places the onus on social networking sites by punishing those that allow minors to create profiles without parental consent and requiring the sites to give parents full ability to monitor their children’s profiles. The constitutionality of these types of laws is questionable. The controlling case on banning sex offenders from accessing websites frequented by minors is Packingham v. North Carolina.42 In that 2017 decision, the Supreme Court ruled that a North Carolina law prohibiting sex offenders from accessing websites that might be frequented by minors, and on which minors might have accounts, violated the First Amendment regardless of whether the sex offender directly interacted with a minor.

One of the biggest issues social media sites contend with is the desire to attract young users while balancing the risks facing those users. In October 2013, Facebook announced it was loosening privacy restrictions that had previously been placed on teens aged 13–17. Under its new policy, it has allowed teens to share their posts publicly with anyone on the Internet. Previously, teen content would only be shared within friends’ circles. Critics denounced this decision, citing instances of cyberbullying and online harassment and claiming that Facebook had made the decision to lure in new sources of revenue based on advertisers’ ability to reach these young consumers and see their data.

Facebook, however, defended its actions by saying that it was allowing teens the opportunity to be heard more broadly in the world. Facebook continues its under-13 ban, largely out of a desire to steer clear of the Children’s Online Privacy Protection Act (COPPA), which was enacted in the United States in 1998 and strengthened in 2013.43 Facebook also announced in October 2021 that it was pausing the development of an Instagram Kids service that it was tailoring for children 13 years old and younger due to increasing criticism of the app’s effects on the mental health of young people. COPPA requires commercial websites that are directed at children under 13 and wish to collect personal information to engage in several privacy protection measures. These measures include posting privacy policies, providing parents with notice of their site’s information practices, and verifying parental consent before collecting personal information from children. COPPA’s requirements are far-reaching, not only affecting websites and services that directly target children under 13 but also covering sites and services aimed at a general audience where the site has “actual knowledge” it is collecting information from children under 13. Facebook has been unrelenting in its desire to build an Instagram product targeted at children 13 years old and younger. The tech giant argues that this product would provide a more “age appropriate experience” but was willing to postpone the plans due to criticism.

The Public’s Revenge – Cracking Down on Nonconsensual Pornography Websites

Studies suggest adults and children alike are using their mobile devices to take, share, or receive sexually explicit messages and photos. A 2013 study by McAfee found that half of the respondents admitted to such practices, and 30% admitted to cyberstalking their former lovers on social media.44 The same study suggested that one in ten exes threatened to reveal intimate photos and that 60% of those who threatened to do so actually followed through by posting the images online. A report released in December 2016 by Data & Society stated that one in 25 Americans had been a victim of revenge porn,45 also known as nonconsensual pornography.

Legal scholars have suggested numerous approaches, both civil and criminal, to combatting nonconsensual pornography. Victims have tried to pursue civil litigation based on a theory of invasion of privacy, but that approach has been largely unsuccessful in the courts. Similar attempts based on a contract law theory have also come under scrutiny because these relationships often are not based on explicit agreements. Although a copyright-based claim is more likely to succeed, research suggests that at least one-fifth of revenge porn photos are not “selfies,” meaning that the copyright would not be held by the victim. As a result, several legislatures in the United States and around the world have been applying criminal law theories to quell nonconsensual pornography.

Notorious nonconsensual porn king Hunter Moore, of IsAnyoneUp.com fame, spent 30 months in federal prison after being sentenced in December 2015. His sentence, which included three additional years of supervised probation and a $2,000 fine, was part of a negotiated plea deal in which Moore pleaded guilty to two charges, unauthorized access to a private computer and aggravated identity theft, after several years of running his notorious website.46 Moore’s case demonstrates the challenges associated with taking down purveyors of nonconsensual porn using criminal law. He and other well-known defendants often plead guilty to or are convicted of crimes not covered by the nonconsensual porn laws that have been passed by more than half of US states. Instead, they often are sentenced under fraud, identity theft, or other laws unrelated to the sexually explicit nature of the photos they post. Even in states that have nonconsensual porn laws – such as California – routine offenders like Kevin Bollaert, who operated revenge porn site uGotPosted as well as another website where users had to pay to get their photographs removed, are prosecuted under other laws. Then-California attorney general Kamala Harris said the minor penalties available under California’s nonconsensual porn law were not enough to get justice for what Bollaert had done. As a result, he was found guilty on six counts of extortion as well as 21 counts of identity theft, for which he was sentenced to 18 years in prison and restitution payments totaling $450,000.47

Other concerns remain as well. Chief among them are the paltry sentences often associated with the crime, which in California and many other states amounts to a mere misdemeanor. In what was heralded as the first conviction under California’s nonconsensual porn law – which took effect in October 2013 and criminalizes the distribution of nude or explicit photos or videos of someone without their consent – the defendant, Noe Iniguez, posted a topless photo of an ex-girlfriend to her employer’s Facebook account and posted messages urging that she be fired because she was a “drunk” and a “slut.”48 For his actions, Iniguez was sentenced to a year in jail. Additional prosecutions have also occurred against website operators who attempted to extort money from nonconsensual porn victims.

As of 2021, 48 states and the District of Columbia have some form of law banning the distribution of nonconsensual pornography.49 Many of these laws impose criminal liability on those who distribute nonconsensual pornography, which serves as a deterrent, but several do not provide victims an effective remedy in the form of removal of those pictures from circulation. Former representative Katie Hill, Democrat from California, continues to fight for a federal law against sharing nude pictures without consent under any circumstance. Hill resigned from Congress when nude pictures of her with a campaign aide were published without her consent by a British tabloid and a conservative blog. She sued the Daily Mail, her estranged husband, and a journalist involved in the photos’ publication, arguing that publishing her nude pictures without her consent violated the California
nonconsensual pornography law. Judge Yolanda Orozco of the Los Angeles County Superior Court dismissed the lawsuit under an exemption in the law that allows the publication of provocative pictures if it is in the “public interest.” At the federal level, the House passed the Stopping Harmful Image Exploitation and Limiting Distribution Act of 2021 (the SHIELD Act) as an amendment to HR 1620, the reauthorization of the Violence Against Women Act. If it were to become law, the SHIELD Act would ensure that the Department of Justice has an appropriate and effective tool to address these serious privacy violations; narrowly establish federal criminal liability for individuals who share private, sexually explicit, or nude images without consent; and strike an effective balance between protecting the victims of these serious privacy violations and ensuring that vibrant online speech is not burdened. Two key sticking points are efforts to expand existing gun restrictions to those convicted of abuse or stalking in dating relationships and to provide protection for the LGBTQ community.

TLK 2 Me Durty – Stemming the Tide of Sexting

A Pennsylvania high school math teacher made headlines after he was charged with sending a teenage student sexually explicit pictures, followed by a video of himself performing a sex act. What started out as a Facebook friendship between a 26-year-old teacher and a 17-year-old student resulted in the teacher facing charges of sending sexual materials to a minor and corruption of a minor, both misdemeanors. Court records revealed a history of two-way contact, and the student admitted to sending sexually explicit nude photos to the teacher after receiving some from him. When the man pleaded guilty to a charge of corrupting minors, his punishment included six months of house arrest, and he lost his teaching certification.50

The case in Pennsylvania represents only the tip of the sexting iceberg – albeit one of the most concerning types of sexting situations. With the increased availability of sexually explicit content on the Internet, it should come as no surprise that individuals quickly began sending and receiving pornographic emails, text messages, and photos using their mobile devices. Programs like Snapchat, where messages “disappear” within seconds, only furthered the clandestine practice of sharing sexually explicit content. Four distinct situations emerge within the law in relation to sexting: sexting between consenting adults, sexting between two adults where one does not consent, sexting between an adult and a minor child, and sexting between two minor children.

Traditionally, prosecutors have used a number of existing laws to address sexting in which one adult sends another, unwilling adult pornographic or nude messages. These could include harassment, stalking, and similar criminal statutes. In a most interesting development, attorneys and legal scholars are divided about whether criminal statutes could be used to prosecute a person who records the audio of his neighbors engaged in sexual relations and then posts the sound file on the Internet. A post to the user-generated
news site Reddit drew significant attention after the poster mentioned a friend had recorded his neighbors’ noisy encounter and uploaded the audio to SoundCloud.51 Prosecutors looking to bring a case might be able to turn to the Electronic Communications Privacy Act (ECPA), a federal law that is designed to protect the transfer of information through wire, radio, electromagnetic, photoelectronic, or photo-optical communications systems. Although the chance of a successful federal prosecution is slim, it is not out of the question. Even setting aside ECPA, a number of state laws, which vary from state to state, could apply. These would include wiretapping laws, communication privacy statutes similar to ECPA, and possibly any statutory privacy protections. To further complicate the issue, the aggrieved couple might be able to sue civilly based on the state’s common law privacy torts or a trespass claim if the recorder entered onto private property. Whether a court would allow the use of these existing statutes and common law torts in this way is unclear.

Courts have been willing to use existing statutes to deal with sexting that occurs between an adult and a minor child – whether the child consented or not. Often these might include prostitution, child sexual abuse, or misconduct statutes. More disturbing than sexual indiscretions involving adults is the rise in the practice of sexting among “consenting” minors – a practice the law has struggled to combat. A 2014 study conducted by researchers at Drexel University found that 54% of college students reported sending sexually explicit text messages or photos before they were 18 years old.52 These reports, along with stories in the media about “collect the sexts” challenges going on in schools, suggest the need for additional education to make students aware of the risks of sexting. Research suggests that students who are made aware that sexting often falls under child pornography laws – and carries harsh sanctions – are less likely to engage in the practice.

Additionally, lawmakers across the United States have responded to these media reports as well, often attempting to pass legislation prohibiting various forms of sexting. According to the Cyberbullying Research Center, 26 states had passed sexting laws by July 2019.53 Only nine of the state laws include the word “sexting.” All but one of the 26 laws address minors sending sexually explicit messages, and 23 of them address minors receiving the messages. Some states have even made it a felony for minors to engage in sexting, akin to how sexting is often handled using existing child pornography laws. Although most legislation aims to prohibit the sending of sexually explicit, nude, or partially nude text and photo messages, the penalties each state imposes to achieve that goal vary. Sexting is a felony in seven of the 26 states with a law and a misdemeanor in 17 states. About 12 states have informal penalties for sexting. In Arkansas, for example, if a minor pleads guilty or nolo contendere or is found guilty of violating the law against sexting, the minor would be sentenced to eight hours of community service if it is a first offense for
the minor. However, adults who induce explicit content from a child could be found guilty of a felony. If school districts or other employers try to restrict personal communication among employees and other consenting adults, other challenges will likely arise. Given the emphasis on freedom of speech in the United States, courts are likely to find policies that prohibit communication completely to be unconstitutional, even for teachers who deal with minors. Policies related to texting and other social media will have a better chance of withstanding judicial scrutiny if they are limited in scope – addressing only sexually explicit communication, for example. Further, policies that attempt to restrict communication between parties over the age of 18 will no doubt face higher scrutiny. Employers who attempt to block access during working hours are likely to have more success with their policies than those who attempt a blanket prohibition that would include employees’ personal time. Regardless of the effort taken by employers, any social media policy put in place by a government body will face significant obstacles based on its potential to chill speech.

Banishing the Bully from the Bedroom

The fact pattern has become all too familiar. A young person – often a preteen or teenager – commits suicide after being the target of online harassment. Cyberbullying – the name for online harassment most often used when minors are the victims – rose to public attention in a 2006 case involving a mother, Lori Drew, who prosecutors alleged created a MySpace profile posing as a teenage boy, “Josh Evans,” in order to befriend one of her daughter’s female classmates.54 Prosecutors argued that Drew, acting under her cyber persona, deceived 13-year-old Megan Meier into believing Evans liked her and then told her the world would be a better place without her. Afterward, Meier hanged herself in her bedroom. Although state prosecutors wanted to charge Drew with a crime, there was no federal cyberbullying statute to rely on. Drew was instead charged with four felony counts of unauthorized computer access under the federal Computer Fraud and Abuse Act (CFAA), and a jury found her guilty of three lesser charges. In July 2009, a federal judge overturned the jury verdict, acquitting Drew of all charges.

Although teasing and other forms of bullying have been around as long as there have been schoolyards for the bullies to occupy, cyberbullying cases, like the one involving Lori Drew, present an entirely new set of challenges. Initially, one of the most difficult challenges posed was a definitional one. Even if lawmakers were to come to a consensus as to what constituted cyberbullying, regulating expressive activities without violating the First Amendment presented a further challenge. Add to that the complex nature of regulating on-campus/off-campus activities, and the plot thickens. Finally, even with laws in place, cyberbullying that goes unreported cannot be prohibited. All of these
factors combine to create an environment in which most undesirable behaviors go unpunished.

Child psychologists and educators in the United States often have little difficulty defining cyberbullying, and even though they may not all agree on the exact terms of a single definition, the similarities are striking. One of the first definitions of cyberbullying is attributed to Canadian teacher Bill Belsey, who started Bullying.org. Belsey defines it as “the use of information and communication technologies to support deliberate, repeated, and hostile behavior by an individual or group, that is intended to harm others.” Most definitions include a number of these requirements, including the desire to harm, the use of technology to communicate, and deliberate contact. Other organizations limit cyberbullying to cases involving victims who are minors. StopCyberbullying.Org has one of the narrowest definitions of cyberbullying:

Cyberbullying is when a child, preteen or teen is tormented, threatened, harassed, humiliated, embarrassed or otherwise targeted by another child, preteen or teen using the Internet, interactive and digital technologies or mobile phones. It has to have a minor on both sides, or at least have been instigated by a minor against another minor.

Although there may be minor differences in the various definitions, those in education and healthcare largely agree on the primary characteristics of the behavior. Despite that near-universal agreement among health and education professionals as to the types of undesirable behavior that constitute cyberbullying, greater disagreement has emerged among lawmakers, who are tasked with defining the type of conduct their states desire to punish. Given the expressive nature of the conduct at issue, lawmakers have had a particularly difficult task. This is because the First Amendment to the US Constitution provides a certain amount of protection for expression – and even children attending school are not completely without these protections.55 Thus, when drafting statutes to curtail cyberbullying, lawmakers must walk the fine line between prohibiting the conduct they seek to regulate and running afoul of the First Amendment.

The constitutional challenges have not stopped lawmakers in their efforts. By 2021, all 50 states had laws in place to address bullying, and 48 of those include cyberbullying/electronic-harassment-specific provisions, according to the Cyberbullying Research Center. Of the laws in place, 44 state laws provide for a criminal penalty while also allowing schools to sanction students. The protections in various states run the gamut. Nearly all states require the school district to establish a policy but also allow the school district to sanction students. Far fewer allow the regulation of off-campus activity or criminal penalties. Nearly all of the states have enacted or updated their laws in the past five years, and they will not come without legal challenges.

In addition to state laws, legislation has been proposed at the federal level. At the time of publication, though, no federal law specifically prohibited cyberbullying. To date, most legal challenges have resulted when school districts have attempted to sanction students for assaultive speech aimed at teachers and administrators. In those cases, courts are less likely to consider the conduct cyberbullying or harassment and often uphold the students’ free speech rights to maintain their websites or make their online postings.

Deepfakes and Pornography

Deepfake is the use of AI to manipulate videos by splicing one person’s face onto another’s. This kind of deep learning technology can also be used to make it appear that an individual has said or done things they never actually said or did. While deepfakes first emerged as tools for culture jamming through the lampooning of politicians for laughs, they became notorious in 2018 when a group of Redditors used the technology to transplant the faces of their favorite actors onto porn movies. Nina Schick said that “the most malicious use of deepfakes so far is the use of nonconsensual pornography . . . it is not only celebrities and famous people, but normal women including minors.”56 Henry Ajder, who studies deepfakes, did a survey in 2019 and found that 96% of deepfake content was pornographic. However, the use of deepfakes goes beyond porn. Manoj Tiwari, president of the Delhi unit of India’s ruling party (BJP), used deepfake technology to create a video in which he gave a speech in Haryanvi (a Hindi dialect) when in the original video he spoke in English.

In the United States, First Amendment protection for speech covers the use of celebrities’ and politicians’ faces and likenesses, especially in the cases of satire and parody. However, there are state laws against nonconsensual pornography. Danielle Citron, vice president of the Cyber Civil Rights Initiative, says the effect of seeing oneself in a deepfake video cannot be overestimated:

When you see a deepfake sex video of yourself, it feels viscerally like it’s your body. It’s an appropriation of your sexual identity without your permission. It feels like a terrible autonomy and body violation, and it rents space in your head.57

Citron believes that people engage in this form of nonconsensual pornography because they mistakenly believe that it is a victimless crime. Lawmakers are responding to this form of nonconsensual pornography at both state and federal levels. The Commonwealth of Virginia officially expanded its nonconsensual pornography ban to include fake videos, photos, and deepfakes in 2019. Since 2014, Virginia has banned the spreading of nude images with “the intent to coerce, harass, or intimidate” another person. The amendment added “falsely created videographic or still image” to the ban. A violation of this law is a class 1 misdemeanor, which carries up to 12 months in
prison and up to $2,500 in fines. Texas has also banned political manipulation using deepfakes, though it has not directly banned their use for nonconsensual pornography. New York proposed a bill that would ban “digital replicas” of people without their consent, but the Motion Picture Association of America has warned that the law “would restrict the ability of our members to tell stories about and inspired by real people and events.” At the federal level, the 2021 National Defense Authorization Act (NDAA) included provisions that address deepfakes. The law requires the Department of Homeland Security (DHS) to provide an annual report on deepfakes for the next five years. This report will cover all potential harms of deepfakes, including those against different populations. The law also charges DHS with studying deepfake creation as well as detection and mitigation solutions. A concern in the law is the ability of potential adversaries of the United States to create deepfake content that depicts US military personnel or their families, and the security implications of such depictions. In 2020, Congress passed the Identifying Outputs of Generative Adversarial Networks Act, which requires the National Science Foundation to research deepfake technology and authenticity measures and instructs the agency to work with the private sector on deepfake identification capabilities.58 The larger risk deepfakes pose is to our continued ability to trust what we see and hear.

Predicting the Future of Liability

As social media proliferates as part of our everyday lives, so too will the legal changes resulting from electronic communication and online conduct. Courts have struggled with whether to apply traditional laws to new forms of communication or wait for lawmakers to create specific laws addressing social media and other emerging technologies. In general, the most successful prosecutions have occurred using traditional laws that target conduct instead of expression, but that hasn’t stopped legislatures from attempting to craft bills addressing revenge pornography, sexting, cyberbullying, and deepfakes. Particularly when the harms involve children, members of the public initially seem more willing to curtail constitutional free speech rights in the name of protecting minors. However, the calculus grows more complicated when restrictions target consenting adults or create sweeping prohibitions on all forms of online communication.

Whether and how to regulate electronic communication and social media use in the workplace and schools also raises a number of constitutional issues, including the First Amendment. What is clear is that social media policies dealing with sexually explicit communication or bullying/harassment must be tightly worded to clearly define the types of conduct to be regulated. The policies should differentiate between conduct involving minors and conduct involving adults, given that courts would be more likely – though not guaranteed – to support incidental restrictions on speech aimed at protecting minors. Policies aimed at adults should clearly address professional versus
personal conduct, including appropriate use of employer-provided communication devices. Enforcement of these types of policies should be done uniformly and with a high level of documentation. When possible, action should be taken under existing laws aimed at curtailing undesirable conduct – stalking, harassment, and sexual misconduct statutes – that have already withstood constitutional scrutiny.

Not only must Congress and employers worry about outcomes in the court of law, but they also must be concerned with an even more unpredictable jury: the court of public opinion. As news stories surface of employers spying on employees and inappropriately using social media to screen applicants, employers must be cautious about the effects of the publicity on the next generation of employees and consumers. Policies made by executives who came of age in the pre-Internet era are likely to strike prospective employees as antiquated and overbearing.

Further, the gaps in law surrounding social media and its uses suggest it is incumbent upon sites like Facebook, Instagram, Twitter, and TikTok to set their own standards of conduct. Although they have largely attempted to do so through terms of service and privacy policy agreements, the language in those agreements remains obscure and tedious for the average, non-legal mind. The challenge for all of these sites is not only to lay out what is impermissible in concrete enough terms that users can understand but also to do so in a manner that creates a policy that is adaptable as the technology and its uses change.

FREQUENTLY ASKED QUESTIONS

1. Can I get in trouble for posting sexually explicit photographs on a social media site?

Facebook – like most social media sites – expressly prohibits the sharing of content it labels “pornographic.” In its Community Standards section, you will find language that suggests users can have their content removed or be blocked from using the service if they violate the standards, which include posting pornographic content and violating copyright terms, among other things. Further, because Facebook is not a government entity, the First Amendment will not protect your right to post content to the site – pornographic or otherwise. The law does not require Facebook to allow you to use its site to post any or all of the content you wish. In addition, any sexually explicit content that rises to the level of obscenity, as defined by state law, could be regulated because obscene speech falls outside the scope of First Amendment protection.

2. Does the First Amendment protect the use of foul language on Twitter?

Like Facebook, Twitter also need not follow the First Amendment. Therefore, it has the legal right to make rules about permissible content on its site. Interestingly, although Twitter’s rules make mention of a number of restricted areas, including threats, copyright, trademark, and pornography, they make no mention of profane language – suggesting that your taste in language is for you to decide.

3. Can my employer examine my mobile device or email for indecent or obscene materials?

In light of the Quon case, it would seem that employers have the right to examine employer-issued devices, including cell phones and laptops – particularly if they are doing so with probable cause or in accordance with a standard procedure. A smart employee should think twice about using an employer-issued device to engage in any conduct that would violate the employer’s rules. Although it might seem extreme, employees would be wise to act as though they have no right to privacy with regard to employer-provided devices, email, and landline phone service. Similarly, an employer would be wise to establish rules and regulations for the acceptable use of technology both at and away from the workplace to inform their employees as to what is permissible.

4. If a user makes an offensive post about me or tags me in an inappropriate photograph on Facebook, can I demand that the post or photograph be removed?

Under Section 230 of the Communications Decency Act, you are unlikely to be able to demand that Facebook remove unwanted content about you. However, you may well have a cause of action against the specific user who posted the content – if it is defamatory or violated your privacy, for example. Even if you succeed in getting the content removed from one site, there is no guarantee that it has not been archived, or it may reappear on another site. The soundest course of action is often simply to ask the poster to remove the offending content.

5. What kinds of social media activities are most likely to lead to criminal charges?

Users of social media can face criminal charges for their expression, particularly if the speech is not protected by the First Amendment. In most states, this means speech that amounts to obscenity, true threats, incitement, fighting words, and false advertising. Additionally, the rise of livestreaming apps has increased the likelihood that someone who shares live content of a criminal act could be held liable as an accessory to that crime. In April 2016, an 18-year-old woman was charged with four felony counts, including rape and kidnapping, for livestreaming her friend’s rape on Periscope.59 However, the US Supreme Court overturned the conviction of a Pennsylvania man who posted violent messages on Facebook after a split with his wife.60 In Elonis v. United States, the Court ruled a conviction could not stand based on negligence alone. It ruled the government needed to show a higher level of intent for criminal conviction. As a result, the Court has clearly indicated that social media does not merit a lower level of intent than similar activities carried out offline.

Notes

1. The State of Online Harassment, Pew Research Center, January 13, 2021, www.pewresearch.org/internet/2021/01/13/the-state-of-online-harassment.
2. The Science behind Pornography Addiction: Hearing before the Subcommittee on Science, Technology and Space of the S. Comm. on Commerce, 108th Cong. (2004) (statement of Mary Anne Layden, co-director of the Sexual Trauma and Psychopathology Program at the University of Pennsylvania’s Center for Cognitive Therapy).
3. See Communications Decency Act of 1996, 47 U.S.C. §223 (1997).
4. 354 U.S. 476 (1957).
5. Id. at 481.
6. Id. at 485 (quoting Chaplinsky v. State of New Hampshire, 315 U.S. 568, 571–572 (1942)).
7. Jacobellis v. Ohio, 378 U.S. 184, 197 (1964) (Stewart, J., concurring).
8. See, e.g., Jacobellis, 378 U.S. at 184; Memoirs v. Massachusetts, 383 U.S. 413 (1966); Interstate Circuit v. Dallas, 390 U.S. 676 (1968).
9. 413 U.S. 15 (1973).
10. Id.
11. 438 U.S. 726 (1978).
12. Id. at 748–749.
13. Id. at 732 (quoting 56 F.C.C. 2d, at 98).
14. Telecommunications Act of 1996, Pub. L. No. 104–104 (104th Congress).
15. 47 U.S.C. §223(1)(a).
16. 929 F. Supp. 824 (E.D. Pa. 1996).
17. Reno v. ACLU, 521 U.S. 844 (1997).
18. The lawsuit specifically challenged §223(a)(1)(B), which criminalizes the “knowing” transmission of “obscene or indecent” messages to any recipient under 18 years of age, and §223(d), which prohibits the “knowin[g]” sending or displaying to a person under 18 of any message “that, in context, depicts or describes, in terms patently offensive as measured by contemporary community standards, sexual or excretory activities or organs.”
19. Reno, 521 U.S. at 868–869.
20. Id.
21. Sable Communications of Cal., Inc. v. FCC, 492 U.S. 115 (1989).
22. Id. at 128.
23. Id.
24. Red Lion Broadcasting v. FCC, 395 U.S. 367 (1969).
25. Id.
26. Id.
27. Id.
28. Pub. L. No. 105–277, 112 Stat. 2681–2736 (1998).
29. Child Online Protection Act, Pub. L. No. 105–277 (105th Congress), see 47 U.S.C. §231.
30. 47 U.S.C. §231(a)(1). “Whoever knowingly and with knowledge of the character of the material, in interstate or foreign commerce by means of the World Wide Web, makes any communication for commercial purposes that is available to any minor and that includes any material that is harmful to minors shall be fined not more than $50,000, imprisoned not more than 6 months, or both.”
31. Id.
32. 458 U.S. 747 (1982).
33. American Civil Liberties Union v. Reno, 31 F. Supp. 2d 473 (E.D. Pa. 1999).
34. American Civil Liberties Union v. Reno, 217 F.3d 162 (3d Cir. 2000).
35. Ashcroft v. American Civil Liberties Union, 535 U.S. 564 (2002).
36. American Civil Liberties Union v. Ashcroft, 322 F.3d 240 (3d Cir. 2003).
37. Ashcroft v. American Civil Liberties Union, 542 U.S. 656 (2004).
38. American Civil Liberties Union v. Mukasey, 534 F.3d 181 (3d Cir. 2008).
39. U.S. v. American Library Ass’n, Inc., 539 U.S. 194 (2003).
40. Deleting Online Predators Act of 2006, H.R. 5319.
41. Senate Bill 132 / S.L. 2008–218 (signed into law August 16, 2008).
42. Packingham v. North Carolina, 137 S. Ct. 1730 (2017).
43. Federal Trade Commission, Children’s Online Privacy Protection Rule: Not Just for Kids’ Sites (April 2013), www.ftc.gov/tips-advice/business-center/guidance/childrens-online-privacy-protection-rule-not-just-kids-sites.
44. McAfee, Love, Relationships & Technology, November 17, 2014, http://promos.mcafee.com/offer.aspx?id=605436&culture=en-us&affid=0&cid=140624.
45. Amanda Lenhart, Michele Ybarra & Myeshia Price-Feeney, Nonconsensual Image Sharing, Data & Society, December 12, 2016, https://datasociety.net/library/nonconsensual-image-sharing.
46. Abby Ohlheiser, Revenge Porn Purveyor Hunter Moore Is Sentenced to Prison, Washington Post, December 3, 2015.
47. Allie Conti, Will Giving Kevin Bollaert 18 Years in Prison Finally End Revenge Porn? Vice, April 6, 2015.
48. Veronica Rocha, Revenge Porn Conviction Is First under California Law, Los Angeles Times, December 4, 2014.
49. See Cyber Civil Rights Initiative, www.cybercivilrights.org/revenge-porn-laws/ (last accessed February 28, 2022).
50. Liz Zemba, Ex-Connellsville High School Staffer Sentenced in Text Case, Trib Live, July 17, 2013.

Obscenity, Pornography, and Cyberbullying

169

51. Helen A.S. Popkin, Is It Illegal to Record and Post Noisy Neighbors Having Sex? MSNBC.com, May 19, 2012. 52. Randye Hoder, Study Finds Most Teens Sext before They’re 18, Time, July 3, 2014. 53. Sameer Hinduja & Justin W. Patchkin, State Sexting Laws, Cyberbullying Research Center (2015), http://cyberbullying.org/state-sexting-laws.pdf. 54. Kim Zetter, Judge Acquits Lori Drew in Cyberbullying Case, Overrules Jury, Wired.com, July 2, 2009. 55. Tinker v. Des Moines Indep. Sch. Dist., 393 U.S. 503, 506 (1969). 56. As Quoted in Chris Stokel-Walker, What to Say about Deepfakes: A Guide for Beginners, Sunday Times, March 11, 2021. 57. E.J. Dickson, TikTok Stars Are Being Turned Into Deepfake Porn without Their Consent, Rolling Stone, October 26, 2020. 58. Scott Briscoe, U.S. Laws Address Deepfakes, Security Management, January 12, 2021, www.asisonline.org/security-management-magazine/latest-news/today-insecurity/2021/january/U-S-Laws-Address-Deepfakes/ 59. Aarti Shahani, Live-Streaming of Alleged Rape Shows Challenges of Flagging Video in Real Time, National Public Radio, April 19, 2016. 60. Ariane De Vogue, SCOTUS Rules in Favor of Man Convicted of Posting Threatening Messages on Facebook, CNN.com, June 1, 2015.

Chapter 9

Social Media Use in Courtrooms
P. Brooks Fuller
ELON UNIVERSITY

Introduction

Attorneys in the United States swear an oath not to mislead the court. So when attorney Rod Ponton joined a civil forfeiture hearing conducted over Zoom donning the digitally filtered visage of a fluffy white cat and promised the judge, "I'm here live, I'm not a cat,"1 perhaps we had no choice but to believe him. Ponton's (also known as "lawyer cat") humorous struggle to turn off the cat filter on his private Zoom account during trial showed just how "pervasive and insistent [a] part of daily life"2 smartphones and social media have become, even in courtrooms. Social media offers users such "unlimited, low-cost capacity for communication of all kinds"3 that courts have had no choice but to adapt to the rise of new technologies, even if that means the judicial process is "memeified" along the way.

In general, government-imposed limits on Internet and social media access raise serious First Amendment implications.4 However, despite the ubiquity of mobile devices and social media among American adults, courts do not allow unfettered use of electronic devices and social media inside courtrooms. Judges constantly grapple with whether and how to control jurors', attorneys', and journalists' use of smartphones during judicial proceedings. In doing so, they must balance defendants' fair trial rights against the public's right to observe and engage with criminal and civil judicial proceedings. These considerations have produced a patchwork of local rules that vary across jurisdictions. If journalists and members of the public understand the basic contours of these rules and how they apply, they can become more capable watchdogs of the justice system and better inform the public about newsworthy events in American courtrooms.

This chapter outlines some of the key Supreme Court precedents that govern the right to record judicial proceedings and then provides an overview of the current state of the local rules that govern electronic devices in courtrooms. Along the way, it discusses how courts have dealt with fair trial concerns and ethical considerations raised by social media use in court. The chapter concludes with a hopeful discussion of the future of social media technology in courts, specifically efforts to promote transparency in the judicial process by modernizing norms of information access and accepting the proliferation of low-cost digital technologies as a blessing for open courts.

Access to Courtrooms: A First Amendment Right

Members of the press and the general public have a constitutional right to access most courts and an overwhelming interest in sharing information about court proceedings. But reporters' access to courtrooms is not absolute. A few narrow categories of judicial proceedings, such as grand jury hearings and certain juvenile or family court proceedings, carry no strong right of access because of the sensitive nature of the process.

Nevertheless, courtrooms are essential to the fair administration of justice, especially in criminal matters, where the government may deprive a criminal defendant of their freedom or – in death penalty cases – take their life. Because citizens cede so much power to the government in our democratic system, open and transparent courts are crucial to a healthy relationship between government and citizenry. As political philosopher Jeremy Bentham wrote in the essay "Rationale of Judicial Evidence," "Without publicity, all other checks are insufficient: in comparison of publicity, all other checks are of small account."5

The Supreme Court enshrined these principles in Richmond Newspapers v. Virginia (1980), one of the strongest opinions in defense of the public's presumptive right to attend criminal trials:

[T]he right to attend criminal trials is implicit in the guarantees of the First Amendment; without the freedom to attend such trials, which people have exercised for centuries, important aspects of freedom of speech and "of the press could be eviscerated."6

Although Richmond Newspapers dealt solely with a criminal matter, the justices overwhelmingly acknowledged that civil proceedings are also presumptively open "absent an overriding interest" to justify their closure.7 Similarly, the press has a profound interest in reporting on criminal proceedings, and that presumption of openness can be overcome only by an extraordinarily heavy showing that press coverage would jeopardize the right to a fair criminal trial and that no reasonable, less speech-restrictive alternative would protect the accused's fair trial rights.8

Some federal courts have interpreted the presumption of openness from Richmond Newspapers as a constitutional guarantee that journalists may inconspicuously use analog reporting tools such as notebooks, writing utensils, and sketchpads in the courtroom. But what do these cases tell us, if anything, about limits on modern technology in courtrooms? Every smartphone is equipped with an unobtrusive note-taking app and sound recording capabilities, so journalists are left to wonder whether courts will treat smartphones more like notepads or like television cameras.

Several federal courts have held that the presumption of access to criminal trials does not grant the press a right to record or livestream those trials.9 However, the Supreme Court has not settled the issue, and there is no consensus among lower courts on the status of social media in courtrooms. The various jurisdictional splits trace their origins to Sheppard v. Maxwell10 and Chandler v. Florida,11 two landmark Supreme Court decisions that dealt with whether televised trials presented an unreasonable risk to a defendant's fair trial rights.

Sheppard v. Maxwell involved the high-profile murder trial of Sam Sheppard, a physician accused of viciously bludgeoning his wife to death inside the couple's suburban Ohio home.12 The trial was a bona fide media circus. Before the trial, the prosecution questioned Sheppard on three separate days before spectators and live cameras in a packed school gymnasium.13 The trial judge, Edward J. Blythin, never intervened. After Judge Blythin's death, a reporter claimed that he had told her during the trial that Sam Sheppard was "guilty as hell."14 The circus continued during the trial, and Judge Blythin claimed that he had no power to prevent the press from disseminating prejudicial information. Sheppard was convicted of murder and eventually appealed his conviction to the Supreme Court.

The Supreme Court overturned Sheppard's conviction, citing multiple instances of irreparable prejudicial publicity and undue influence on the jurors' ability to render an impartial verdict. Justice Tom C. Clark, writing for an eight-justice majority, commented on the press's contributions to the prejudicial conditions:

Bearing in mind the massive pretrial publicity, the judge should have adopted stricter rules governing the use of the courtroom by newsmen. . . . The number of reporters in the courtroom itself could have been limited at the first sign that their presence would disrupt the trial. They certainly should not have been placed [adjacent to the jurors].15

No doubt cognizant of the high bar required to restrain the press, the Court declined to discuss specific measures the trial judge could have taken to control reporters. The Court reaffirmed that "[f]reedom of discussion should be given the widest range compatible with the essential requirement of the fair and orderly administration of justice"16 and reiterated that criminal defendants have recourse when press coverage diverts the presentation, interpretation, and assessment of evidence from the courtroom to the public sphere. The First Amendment protects the dissemination of information, but it does not give the press the right to run roughshod over the courtroom.

As the broadcast television industry grew rapidly after World War II, states began to allow cameras inside courtrooms. Some states, including Florida, changed their codes of judicial conduct to expressly permit video and still photography inside courts to improve transparency in the criminal justice system. Chandler v. Florida (1981) dealt squarely with whether cameras in the courtroom create inherent distractions or prejudices that justify their limitation or exclusion from criminal trials. Chandler involved the trial of two police officers accused of burglarizing a Miami restaurant.17 The officers argued that they were deprived of a fair trial because media were allowed to broadcast the proceedings.18 The Supreme Court upheld their convictions over objections that media involvement had violated their Sixth Amendment rights. Although the Court in Chandler did not recognize an unequivocal right for reporters to use cameras inside courtrooms, it opened the door to considerably more coverage of criminal trials by the booming broadcast industry:

The risk of juror prejudice in some cases does not justify an absolute ban on news coverage of trials by the printed media; so also the risk of such prejudice does not warrant an absolute constitutional ban on all broadcast coverage.19

The Court made it clear that earlier cases that barred the use of technology in courtrooms did not "stand for an absolute ban on state experimentation with evolving technology, which, in terms of modes of mass communication, was in its relative infancy in 1964, and is, even now, in a stage of continuing change."20

Today, all states allow cameras in at least some of their courtrooms. Federal courts have been far more reluctant to permit access. From 2011 to 2015, 14 federal district courts participated in a pilot program to study the use of cameras in courtrooms.21 At the conclusion of the program, the District of Guam, the Northern District of California, and the Western District of Washington continued to allow cameras. Federal appellate courts in the Ninth, Second, and Third Circuits currently allow cameras in their courtrooms, citing positive experiences with the Judicial Conference pilot program. Despite these incremental advances in access, the Supreme Court famously prohibits audio and video recording inside its courtroom. As will be discussed later, however, technological advancements and the global COVID-19 pandemic prompted the Supreme Court to experiment with livestreamed audio for the first time, with positive results.

Courts have been slow to adapt to the age of social media, and many courts have resisted calls for more permissive rules for social media in courtrooms. For example, in 2017, a federal judge prohibited journalists, including New York Times tech reporter Mike Isaac, from live-tweeting Mark Zuckerberg's testimony during an intellectual property dispute involving Facebook's then-recently acquired virtual reality subsidiary Oculus.22 In Clackamas County, Oregon, which includes southwest Portland, local judicial district rules instruct journalists to ask the presiding judge for advance written permission to tweet or live-blog.23 In 2015, the Reporters Committee for Freedom of the Press identified no broad consensus regarding social media usage in courtrooms.24 That remains true today. Decisions are usually made on a case-by-case basis subject to federal rules of civil and criminal procedure, state and local court rules, and judicial discretion.

When trial participants – judges, jurors, attorneys, litigants, and defendants – want to tweet, blog, stream, or record information related to an ongoing matter, judges have even more authority to limit social media use. The legal problems, however, are different from the usual clashes involving the First Amendment rights of journalists to engage in press activities. The threats of prejudicial publicity, jury taint, and disruption caused when trial participants engage with social media do not necessarily trigger the countervailing First Amendment values that are at play when journalists want to use social media for reporting.

When jurors learn information from outside sources about the court proceedings in which they are involved, the parties in court are presumptively prejudiced. In Remmer v. United States, a case involving improper communications with a juror, the Supreme Court said:

In a criminal case, any private communication, contact, or tampering directly or indirectly, with a juror during a trial about the matter pending before the jury is, for obvious reasons, deemed presumptively prejudicial, if not made in pursuance of known rules of the court and the instructions and directions of the court made during the trial, with full knowledge of the parties.25

The Supreme Court vacated the judgment in Remmer and ordered the lower courts to conduct an evidentiary hearing on whether the ex parte discussions with the juror had violated the defendant's fair trial rights.26

In light of Remmer, the ubiquity of social media technology has put pressure on judges to control access to social media, and judges have done so. In March 2021, a federal judge banned Gina Bisignano from accessing social media while she awaited trial on charges stemming from her participation in the insurrectionist riot at the United States Capitol.27 Prosecutors argued that the judge should limit alleged rioters' social media access because social media was integral to extremist groups' criminal conspiracies to violently obstruct Congress's certification of the 2020 US presidential election.

Likewise, judges have attempted to ban civil trial participants from social media when they fear that a participant's social media messages could taint the jury pool. In 2017, a federal judge banned trial participants from posting social media messages during a trademark dispute. San Diego Comic-Con International, the company that organizes the world's largest comics and pop culture festival, had sued the organizers of then-named Salt Lake Comic Con over the confusingly similar "Comic Con" moniker.28 The Ninth Circuit struck down the trial court's gag order because it amounted to a prior restraint on the litigants' First Amendment rights to repost public documents and to express their views related to the "comic-con" dispute.29

These concerns and disputes are not new. In the 1965 case Estes v. Texas, involving a high-profile trial of a conman in Tyler, Texas, the Supreme Court warned that "ever-advancing techniques of public communication and the adjustment of the public to its presence may bring about a change in the effect of telecasting upon the fairness of criminal trials."30 Justice Potter Stewart, however, warned of courts' knee-jerk reactions to the "continuous and unforeseeable change" in communications technology:

[I]t is important to remember that we move in an area touching the realm of free communication, and, for that reason, if for no other, I would be wary of imposing any per se rule which, in the light of future technology, might serve to stifle or abridge true First Amendment rights.31

When judges take drastic measures to prevent trial participants from accessing social media, they must ensure that the restrictions are narrowly tailored to uphold the fair administration of justice or prevent irreparable harm.

Social Media in Federal Courts

Electronic media coverage of federal criminal proceedings has been expressly prohibited since the Federal Rules of Criminal Procedure were adopted in 1946.32 Specifically, Rule 53 bans photographs or "broadcasting of judicial proceedings from the courtroom" unless coverage is authorized under a separate statute or elsewhere in the federal rules. Although the Federal Rules of Civil Procedure do not include a broadcast ban, federal courts have resisted allowing broadcasts of civil proceedings. In 1973, the Judicial Conference, a group of senior federal judges that crafts policy for federal courts, incorporated a ban on cameras and broadcasting in civil courts into its Code of Conduct for United States Judges.33 Despite repeated attempts to ease the restrictions on recording and broadcast imposed by federal rules – and public opinion in favor of using broadcast technology to improve judicial transparency – the ban on cameras and broadcasting persists in most federal courts. Furthermore, no codified rules provide guidance for journalists to use social media inside federal courtrooms for non-broadcast purposes. As in many areas where the law is slow to adapt to technological progress, judges have had to experiment with social media in the courtroom.

Federal Criminal Courts

Even if the Judicial Conference repealed or substantially altered Rule 53, federal district judges would still have the discretion to regulate the conduct of courtroom observers and trial participants under Rule 57(b) of the Federal Rules of Criminal Procedure. Rule 57(b) provides, "A judge may regulate practice in any manner consistent with federal law, these rules, and the local rules of the district."34 Coupled with Rule 53 authority, Rule 57(b) gives judges wide latitude to limit social media usage in the courtroom.

In 2007, a federal judge permitted bloggers to cover the trial of former Vice President Dick Cheney's chief of staff, Scooter Libby, who was indicted for obstruction of justice and lying to the FBI during an investigation into the unmasking of a CIA employee whose husband criticized US intelligence in the run-up to the invasion of Iraq under President George W. Bush.35 The case generated considerable media coverage, and members of the nascent blogosphere wanted the same access as the mainstream press pool. A federal judge in the District of Columbia permitted the bloggers to live blog the proceedings from an overflow room alongside reporters from the New York Times and Washington Post, signaling that federal courts might be open to accommodating emerging media outlets and reporting styles.

In 2009, federal judge Mark Bennett allowed reporters to live blog during the federal tax fraud trial of an Iowa landlord, provided that the reporters sat far enough back in the courtroom to minimize distraction.36 Judge Bennett later urged his federal district to adopt clear guidelines for permitting reporters to live blog during hearings "so it's just not an ad hoc decision."37 Just a month later, a federal judge in Kansas, for the first time, used Rule 57(b) to permit a reporter to use a smartphone to cover a federal criminal trial after The Wichita Eagle sought a court order allowing a reporter to cover a federal racketeering trial over Twitter.

One common denominator in these cases is a presiding judge who believed that the benefits of open courts and social media outweighed the potential for prejudice. But for every tech-savvy judge willing to permit journalists to blog (and tweet) from court, there is a bearish judge who questions the benefits of social media inside the courtroom. In United States v. Shelnutt, Judge Clay D. Land of the Middle District of Georgia interpreted Rule 53 broadly to prohibit a reporter from tweeting inside the courtroom:

[T]he Court finds that the contemporaneous transmission of electronic messages from the courtroom describing the trial proceedings, and the dissemination of those messages in a manner such that they are widely and instantaneously accessible to the general public, falls within the definition of "broadcasting" as used in Rule 53.38

Ultimately, whether journalists or members of the public can post from the courtroom in federal criminal trials will largely depend on how the presiding judge classifies a particular mobile technology. If a judge views smartphones and tweets as the 21st-century equivalent of a journalist's pen and notepad – quiet and unobtrusive – then the judge will be more likely to find that the use of the smartphone in court falls outside Rule 53, and the judge may use their powers under Rule 57(b) to accommodate mobile technology.39 If not, the judge will almost certainly prohibit its use. Without clear guidance from federal appellate courts, journalists must appeal to federal judges on a trial-by-trial basis to cover federal criminal trials over social media.

Federal Civil Courts

Although cameras, recording, and broadcast technologies are not prohibited in federal civil court, their use is fairly rare. By local rule, the Second Circuit and Ninth Circuit appellate courts permit journalists to record and broadcast oral arguments in civil cases.40 However, judges may still drastically limit the use of cameras. For example, the Second Circuit requires news media to pool their resources if more than one person requests the right to broadcast a hearing via radio or television.41

Since 1996, the Judicial Conference has permitted federal courts to decide for themselves whether and how journalists can use electronic devices in civil court. This has produced mixed results. Many federal courts' local rules explicitly permit observers, including journalists, to bring smartphones, computers, and other electronic devices into certain types of civil proceedings, but these rules vary dramatically and remain subject to the procedural rules that give judges the final say in how journalists report from their courtrooms. Although the First Amendment clearly grants journalists a presumptive right of access to trials, it does not grant a corresponding right to preferred mobile reporting tools. The Supreme Court has flatly rejected the argument that the First Amendment right of access creates a right to broadcast.42

Local Federal Court Rules and Their Application

Each federal district has adopted local rules that cover whether electronic devices are allowed in the courtroom and how those devices can be used. Generally, experimentation across federal districts has produced fairly restrictive rules, though the particulars differ in some important ways. For example, the Western District of North Carolina totally prohibits all photographic and audio recording in court.43 The ban extends to adjacent corridors, presumably to prevent journalists from gathering outside a courtroom and using sensitive microphones to record and transmit audio of proceedings on the other side of the door. In 2018, the rule was revised to explicitly ban transmittal by social media. But the rules go further. Only specific persons, such as attorneys, paralegals, and law enforcement officers, may bring electronic devices into the courtroom at all, and all devices except those permitted by the local rules must be powered off while court is in session. Merely switching to "Do Not Disturb" could still land a reporter in contempt of court in the Western District of North Carolina.

The Eastern District of Michigan has a somewhat more permissive rule that exempts "bona fide members of the press" who obtain satisfactory credentials from the general rules prohibiting electronic devices in court facilities.44 While members of the public are not permitted to use electronic devices anywhere in the courthouse, journalists are able to operate laptops and other "personal computing devices" in designated areas, such as the Media Center of the federal courthouse in Detroit, and inside the courtroom with the judge's permission. Unless the Judicial Conference adopts a contrary resolution, broadcast equipment is strictly prohibited except in ceremonial proceedings.

Many federal courts have issued orders that adapt decades-old rules to emerging technologies. Citing the "evergrowing number of wireless communication devices that have the capability of recording and/or transmitting sound, pictures, and video," the US District Court for the Northern District of New York issued a standing order prohibiting anyone not directly engaged in court business from bringing an electronic device into federal court.45 Practically speaking, the Northern District of New York treats any electronic device capable of transmitting a signal or recording audiovisual content, even wearable tech like the Apple Watch, just as it would a bulky television camera. The district's local rule also limits the use of electronic devices to officers of the court, such as attorneys.

In addition to restrictive standing rules regarding social media use, judges often limit media coverage in high-profile cases. For example, when Dennis Hastert, a former speaker of the United States House of Representatives, was indicted for lying to the FBI during an investigation into alleged "hush money" payments Hastert made to cover up sexual misconduct, the presiding judge issued an order permitting media in an overflow room to use laptops and smartphones to write and file stories.46 Because the rules vary by district and case, journalists should always consult local rules and check with the clerk of court if they are unsure whether the court permits live blogging, tweeting, or recording of a proceeding.

Congressional Efforts to Improve Access to Federal Courts

Efforts to improve transparency in the courts have failed to gain significant traction in Congress despite successful periods of reform following the Watergate and Pentagon Papers scandals and growing concerns over government secrecy during the Cold War. Congress has occasionally targeted the limitations that Rule 53 imposes on reporting from federal criminal courts, but with minimal success. Legislation to allow video broadcasts from federal courts was introduced in the 116th Congress (2019–2021), but it languished. The bipartisan Cameras in the Courtroom Act would have required the Supreme Court to broadcast its proceedings unless a majority of the justices agreed that televising proceedings would constitute a due process violation for one of the parties.47 The Sunshine in the Courtroom Act would have authorized broadcasting from the federal circuit and district courts. Neither bill made it out of committee.48

Some congressional leaders have tried to support transparency infrastructure through expenditures rather than legislative mandates, but these efforts have similarly failed to move the political needle. The House Appropriations Committee's fiscal year 2020 report encouraged allocations for installing broadcast technology in the Supreme Court. Although no major legislative initiatives appear poised to open up the federal courts to new technologies anytime soon, reporters and the public should watch for progress among small bipartisan coalitions of transparency-minded representatives. For now, most of the action arises in state courts.

Social Media in State Courts

The Supreme Court has provided little clarity to state courts when it comes to applying pre-digital doctrines to social media. Only a few cases have raised First Amendment free speech and free press challenges to broad restrictions on the use of electronic devices in courtrooms. In a case arising out of the Sixth Circuit, Robert McKay, a Michigan resident, filed a lawsuit arguing that a broad ban on "electronic communication devices" inside the Saginaw County Governmental Center (including common areas outside the courtrooms) violated his First Amendment free speech rights, Fifth Amendment due process rights, and Fourteenth Amendment equal protection rights.49 The Sixth Circuit Court of Appeals dismissed the case in 2016, holding that McKay lacked standing to sue. When McKay petitioned the Supreme Court to resolve the procedural matter and the substantive question of whether the policy abridged his constitutional rights, the Court declined, leaving unresolved the question of whether there is a constitutional right to possess communication devices inside a court building or courtroom.

Unsurprisingly, state court regulations are mixed with respect to the types of devices allowed in courtrooms and the persons permitted to use them. Scholarly commentators persistently point to the democratizing effect that the free flow of information has on the public's understanding of the opaque and esoteric judicial process as a reason to adopt more liberal rules for social media technology.50 Since the Supreme Court laid the constitutional groundwork for welcoming cameras into courtrooms in Chandler v. Florida, each state has adopted its own rules governing the use of recording technology in court. Most states, like the federal civil courts, give judges broad discretion to determine whether and how cameras may be used. Most courts limit the number of cameras in a courtroom, restrict their placement, or make provisions for pool reporting during high-profile trials.51 Some allow cameras in civil matters only with the consent of the parties. The Radio Television Digital News Association publishes a running list of camera policies for state courts, a helpful reference for journalists who have to navigate state-by-state deviations in the rules.52

Policies governing cell phones and personal electronic devices in state court are similarly varied and (sometimes frustratingly) subject to the discretion of the presiding judge. During the mid-2010s, judges seemed to be increasingly open to allowing Internet-connected devices in courtrooms. The Conference of Court Public Information Officers conducted annual surveys from 2011 to 2014 and found that judges were increasingly likely to allow reporters to send text and social media messages during court.53 At the time, approximately 37% of judges surveyed had adopted social media policies for their courtrooms, up from 29% the year before.54 During the same period, the percentage of judges who considered it "inappropriate" to use social media in a courtroom dropped from 66% to 46%.55 These findings signaled a potentially significant shift in policy preferences by those in power. The CCPIO has not updated its findings since that survey, perhaps because of the growing consensus among judges to allow devices on a case-by-case basis so long as they do not disrupt the proceedings.

Although that consensus is reflected in a review of state-level social media guidelines, jurisdictions vary, sometimes by county or municipality. Electronic devices are currently banned from the Honorable George N. Leighton Criminal Court Building in Cook County, Illinois, which encompasses Chicago and many of its surrounding suburbs.56 State courts in Idaho, on the other hand, broadly permit the use of electronic devices for notetaking and transmission of messages from inside courtrooms.57 The National Center for State Courts publishes a repository of state court guidelines related to the use of personal electronic devices in court.58 While not exhaustive, the list provides an excellent starting point for understanding the policies journalists might confront when using social media technology to report from state courthouses.

Despite some improvement in judicial attitudes toward social media, judges have punished reporters who violate standing orders by using social media to cover trials. In 2019, a reporter for the St. Louis Post-Dispatch was held in contempt for live-tweeting psychiatrist testimony during a mental competency hearing that was closed to the public to protect the accused's personal health information.59 The reporter, Joel Currier, said he "[w]ouldn't have bothered tweeting had [he] been allowed to stay."60 Currier was ordered to write apology letters to the defendant, the victim, and their attorneys.

Other judges have ordered harsher penalties in similar cases. Circuit court judge Brad Karren of Bentonville, Arkansas, ordered a reporter jailed for three days after she used her cell phone to record portions of a 2019 capital murder case in violation of standing rules.61 Nkiruka Omeronye, a television reporter for KNWA/Fox 24 of Fayetteville, Arkansas, was sentenced to ten days in jail (with seven days suspended) even after asking Judge Karren for leniency, claiming she had missed posted signs prohibiting recording devices and had failed to read a standing order prohibiting audio and video recording inside the courtroom. Even after acknowledging that Omeronye had shown "proper remorse," Judge Karren took the unusual step of trespassing Omeronye from the courtroom, meaning she would no longer be allowed to cover the murder trial, with or without a cell phone.


Prior Restraint

In the 1998 film The Big Lebowski, fictional bowler-philosopher Walter Sobchak summarized, "The Supreme Court has roundly rejected prior restraint."62 Indeed, the Supreme Court has said that any attempt to prevent publication of lawfully acquired information "comes bearing a heavy presumption against its constitutional validity."63 Nevertheless, given social media's ability to facilitate the rapid flow of information from the courtroom to the public, some judges have responded in knee-jerk fashion to stifle digital reporting that reveals potentially prejudicial information. Such instances deserve attention because even time-limited restraints on publication can do real damage to the free press, even when journalists know the orders will almost inevitably be overturned. When a judge seeks to censor or augment coverage of courtroom proceedings, there is a very real threat of unconstitutional, but nevertheless chillingly effective, prior restraint.

When Alene Tchekmedyian, a Los Angeles Times reporter, learned that a Glendale, California, narcotics detective had pleaded guilty to charges of obstruction of justice and lying to federal law enforcement about ties to organized crime, she worked diligently to prepare the story for the Times' digital platforms. Tchekmedyian found the plea deal posted on PACER, a publicly accessible website for federal court records, and filed a story detailing the charges and the plea arrangement on the Los Angeles Times website the next day.64 However, the officer's attorneys rapidly intervened, arguing that the plea information should be made secret because the plea deal had been filed under seal and posted publicly by mistake. In a shocking turn of events, federal judge John F. Walter ordered the Times to remove information about the plea deal from its website, essentially installing the court as editor of a major American newspaper. Three days later, while the newspaper's appeal was pending before the US Court of Appeals for the Ninth Circuit, Judge Walter reversed himself.65 However, the Los Angeles Times was left vulnerable to serious legal sanctions throughout the intervening period if it violated the judge's order. First Amendment scholar Jason M. Shepard said of this and other similar prior restraints on digital publications: "They serve as a reminder that fundamental principles of American press freedom need vigilant defense, even when it comes to reporting truthful, lawfully obtained information about public affairs."66

In 2019, a Cook County, Illinois, judge ordered the digital investigative reporting organization ProPublica to censor the "names and/or any other information that would permit the ready identification" of children involved in a child welfare proceeding. The judge's order specifically prohibited the publication of identifying information such as the name of the children's school and other demographic details. Judge Patricia M. Martin ultimately narrowed the order after litigation by the Reporters Committee, several friend-of-the-court filings, and public outcry. However, in her modified order, Judge Martin cautioned, "Nevertheless, if ProPublica publishes the children's names or likenesses, it would be the actions of ProPublica, and ProPublica alone that would be responsible for inflicting a gratuitous harm on these children."67

The throughline of prior restraint cases in the digital age is this: although decades of First Amendment jurisprudence weigh heavily in favor of a free and unencumbered press (digital or otherwise), judges still act out. Some judges attempt to limit the digital publication of lawfully acquired, newsworthy information, arguing that censorship can stem harms created by the reach and speed of digital platforms. Thankfully, although judges are sensitive to the costs and potential harms posed by the rapid flow of private, proprietary, or prejudicial information, the presumption against prior restraints almost always prevails.

Jurors’ Use of Social Media

Decades ago, concerns over prejudicial publicity, the fair trial rights of defendants, and media-related mistrials focused almost exclusively on preventing major broadcast and print outlets’ coverage from influencing criminal proceedings. Nowadays, scholarly and popular commentary on the role social media plays in the criminal justice system focuses considerably more attention on rogue jurors’ use of social media to access potentially prejudicial information while serving on juries, often against the admonition of the presiding judge. When jurors conduct independent research on the parties in a lawsuit, the defendant in a criminal case, or the relevant legal issues, they risk forcing a case into mistrial if the juror’s digital misadventure comes to light. In rare and troublesome cases, “stealth jurors”68 might attempt to deceive attorneys and the court during jury selection to get chosen for a high-profile case so that they can leak information, hijack deliberations, and use their anonymity on certain social media apps, like Reddit or Twitter, to sway public opinion and undermine the trial.

Whether it involves merely a curious juror’s social media scrolling or the destabilization of a jury by a nefarious stealth actor, juror engagement with social media can carry high stakes. If the court discovers genuine juror misconduct, a judge may order a new trial or the prosecution may dismiss the case. These outcomes waste tremendous time and resources and impose heavy opportunity costs on the justice system, as prosecutors must start again at square one.

Concerns over jurors’ digital activities often revolve around jurors researching the material facts involved in litigation or a specific prosecution and sharing those facts, or their impressions of the case, with other jurors outside of authorized deliberations. Both scenarios raise serious concerns about jury impartiality. If a juror gathers information (or misinformation) from social media rather than from the evidence admitted during trial, the juror may base their deliberations on false, outdated, or prejudicial information that might never have been presented in open court. If a juror communicates these facts to other jurors, then the effects of misinformation or prejudicial evidence may totally subvert impartial deliberations.


Mark Zimny, the principal of an education consulting company, was convicted on multiple counts of fraud for unlawfully accepting hundreds of thousands of dollars from parents attempting to secure admission for their children at elite boarding schools. After trial, his conviction hung in limbo while the court investigated allegations of juror misconduct through blogging and social media use.69 During Zimny’s trial, the defense learned that a person claiming to be a juror had posted a comment on a blog known as Shots in the Dark, where posters and commenters had discussed the charges against Zimny and ridiculed him personally.70 A commenter claimed that a member of the jury had been “spouting off” about the blog since the beginning of the trial, which suggested that unauthorized juror communications had potentially prejudiced the jury from the start. The First Circuit concluded that a defendant need only present a “colorable claim of juror misconduct” to prompt an investigation. Thus, a time-consuming and costly investigation proceeded. At scale, such investigations can be disastrous for otherwise proper criminal prosecutions.

Judges often try to rein in jurors’ digital meandering through clear and forceful instructions. In 2020, the Judicial Conference Committee on Court Administration and Case Management issued a revised set of proposed jury instructions for federal judges to communicate social media guidelines at significant stages of the trial process: during jury selection, immediately before trial after the jury is empaneled, immediately before deliberations, and at the close of each day in court. The model instructions recommend that judges instruct jurors not to communicate about a case with anyone and that they list the various forms of digital communication (including the comments sections of social media platforms) that are prohibited.71

Legal and criminal justice scholars have attempted to capture the nature and scope of juror misconduct, but results have been mixed, thanks largely to jurors’ lack of candor in surveys about their misbehavior.72 Judges’ and attorneys’ reports of suspected and actual juror misconduct do not track with admitted instances of unauthorized Internet and social media use among jurors.73 A 2014 study based on informal survey responses from more than 500 jurors in federal and state courts in Chicago suggested that a small but significant number of jurors (about 8%) were tempted to communicate information learned during the trial using social media.74 Most refrained, attributing their restraint to judges’ clear social media guidance and reminders about the role and importance of the impartial jury in the criminal justice system.

Commentators have not reached a meaningful consensus on which solutions effectively discourage jurors’ social media misconduct. Recommendations range from fairly draconian punishments, such as criminal contempt,75 to enhanced juror education and more open dialogue between jurors and the court.76 Many have recommended increasing the frequency and detail of jury instructions. Even with strictures in place, there is no guarantee that jurors will refrain from tweeting or posting about criminal trials, especially trials involving high-profile claims or parties, or trials that lack newsworthiness but nevertheless involve salacious details.


Social Media Use by Judges, Attorneys, and Court Personnel

Since the early 1900s, attorneys and judges have mostly self-governed according to model rules of professional conduct promulgated by the American Bar Association (ABA). Since the 1908 Code of Professional Ethics77 (for attorneys) and the 1924 Canons of Judicial Conduct78 (for judges), model rules for both groups have cautioned against engaging in behavior that presents real or apparent conflicts of interest, constitutes self-dealing, yields to outside influences, violates client confidences, or invites bias or differential treatment by a judicial officer in favor of a particular attorney or party. In 1973, the Judicial Conference also adopted a code of conduct for federal district court judges, most recently updated in 2019.79

The modern codes impose on attorneys a duty of loyalty to their clients and to the justice system. Attorneys are prohibited from exerting undue influence or capitalizing on personal relationships to engage in ex parte (out-of-court) communications with judges. Judges similarly accept a solemn obligation to administer justice impartially. The reality today is that the professional mandates governing judges’ and attorneys’ social interactions are murkier because of social media. While social media usage by judges and attorneys is not inherently problematic, the platforms invite users to mix personal and professional business in ways that could run afoul of ethics rules. Are judges and attorneys allowed to be friends on Facebook? Despite the commonplace disclaimer that “RT ≠ endorsement,” would an attorney violate the rules of professional conduct by retweeting a judge’s campaign announcement on Twitter? Can a judge endorse an attorney on LinkedIn for particular skills in mediation or trial advocacy? The answers to these questions turn on the model rules and canons adopted by national organizations and on state-by-state guidance. As in other areas discussed in this chapter, some states are clearer than others in their guidance to attorneys and judges who must navigate the precarious terrain of social media engagement.

For judges, the propriety of a particular type of engagement on social media will depend on whether the judge is engaging in inappropriate commentary on the merits of a case that is before the court or likely to come before the court. The Code of Judicial Conduct says judges can be disqualified from proceedings if they engage in commentary that “commits or appears to commit the judge to reach a particular result or rule in a particular way in the proceeding or controversy.”80 These rules do not extend to the judge’s official public duties or to public comments made for purposes of legal education or scholarly presentations. However, these exceptions do not appear to cover a judge’s legal commentary on social media platforms.

In 2013, the ABA published a formal advisory opinion urging judges to refrain from using social media to obtain information about matters that come before them, similar to the social media guidance judges often give jurors. The ABA guidance also instructed judges to conduct a case-by-case analysis of their social media connections with attorneys or litigants who have business in their courtrooms. Judges should consider the frequency and nature of their communications with attorneys and litigants when deciding whether those relationships rise to the level of a conflict of interest or a personal relationship that should be disclosed and might prompt the judge’s recusal.81 The ABA is more forceful in its guidance to judges related to partisan political activity on social media. Federal judges are prohibited from engaging in political activity, including publicly endorsing candidates for political office.82

Attorneys similarly face strict limitations on their use of social media in circumstances that might affect ongoing cases or potentially jeopardize attorney-client confidentiality. A 2018 ABA advisory opinion warned attorneys to avoid using social media to comment about any client matters. According to the ABA, the Model Rules of Professional Conduct prohibit an attorney from discussing any “information related to the representation, no matter its source,” even if the information is known by others or publicly available elsewhere.83 This likely prevents attorneys from retweeting or sharing stories about their clients’ legal issues if the subject matter is germane to ongoing litigation or criminal prosecution. Even the client’s identity could be protected information. Because social media platforms are highly networked, adept users may be able to mine an attorney’s seemingly innocuous anonymized post to discern facts about the representation in ways that were not possible before social media became so commonplace.

Lawyers should also be careful when taking public positions on issues that might be adverse to their clients’ interests. The DC Bar has opined that an attorney who publishes personal positions on legal, social, or political issues on social media may create positional conflicts of interest that preclude future representation of clients whose interests conflict with those previous public statements.84 At a minimum, the attorney would most likely have to disclose the conflict and obtain the client’s waiver of the conflict to continue the representation. Given the ease of posting on Facebook and Twitter, attorneys should exercise restraint when commenting on public events on social media.

The ABA also admonishes attorneys not to criticize judges’ rulings on social media. In 2015, a Louisiana lawyer was disbarred after she went on social media accusing judges in child custody cases of failing to admit evidence and encouraging concerned citizens to urge the judges to reconsider the sealed domestic cases.85 The attorney asserted that the First Amendment protected her public statements, but she was disbarred nonetheless.

Social media algorithms play a significant role in promoting content that is more likely to engage users and keep them glued to the platform.86 An attorney’s off-color tweet about a high-profile opposing party or a judge’s indignant Facebook post about a political candidate might, if the environment is just right, create a firestorm fueled by the platforms’ recommendation and trending functions. Material attributes of social media, embedded in platform architecture, raise the specter of improper influence over the justice system in new and startling ways. Judges and attorneys need to take special care when communicating and sharing information on digital platforms.

Looking Forward: New Forays in Social Media Use in Courts

Many court watchers hoped that judges and their rulemaking bodies would adapt and begin universally allowing journalists and courtroom observers to bring personal electronic devices into the courtroom. That hasn’t happened. With few exceptions, rules governing the use of electronic devices and social media inside state and federal courtrooms remain quite restrictive. Despite the ubiquity of electronic devices, many courts prohibit them entirely, regardless of whether the sensitivities of a particular case invite the risk of disruption or interference. Journalists and concerned citizens should watch local courts closely for rule changes that affect their ability to communicate about court proceedings and should make their voices heard when judges are asked to revisit the rules or decide cases that implicate the First Amendment right to attend and cover trials. In jurisdictions where judges are elected, judicial transparency may be a ripe political issue where voters have some degree of influence at the ballot box.

For years, commentators have urged courts to embrace modern modes of communication and communication strategies, including social media, to improve public understanding of the court system and its role in a democratic society.87 Some Supreme Court justices have openly decried the idea of cameras and livestreaming at the highest court in the land. They argue that these technologies might tempt egotistical litigants and justices to perform for the audience during oral arguments88 or stoke existing misconceptions about the role of the justices.89 At one point in time, Justice Samuel Alito simply didn’t think anyone would tune in.90 For a range of reasons, transparency advocates’ calls for more and better streaming technology in courtrooms have gone unanswered.

Fortunately, political and social pressures are not the only means of getting courts to embrace modern technologies. Even for a cautious Supreme Court, necessity is still the mother of invention. Legislation that would require the Supreme Court to livestream oral arguments and opinion readings has languished in Congress.91 But in 2020, the United States saw much of its critical infrastructure strained by a global pandemic. Courtrooms across the country were shuttered to the public, and recalcitrant judges everywhere were forced to shift hearings onto online platforms such as Zoom and WebEx.92

Results with courtroom livestreams, however, were mixed. In an asbestos litigation case in California, defendants moved for a mistrial after prospective jurors were reported working out, sleeping, and even talking with opposing counsel in a separate video chat during remote jury selection.93 In Michigan, by contrast, the state’s chief justice recognized that the pandemic had “laid bare the lack of technology and the archaic rules and processes embedded in our justice system,” and she vowed to invest in livestreaming the court’s proceedings and recommitted to growing Michigan’s digital dispute resolution platform.94

For transparency advocates, livestreamed court proceedings revealed some silver linings around the deeply threatening clouds of COVID-19. Digital technologies may help level the playing field between parties who have easy access to transportation, childcare, and flexible work schedules and those who do not. However, research suggests that while Internet technologies were crucial for conducting public business consistent with public health guidance, remote hearings did not necessarily achieve equitable access to justice. Persons with disabilities, non-English speakers, and participants in immigration and eviction proceedings still face structural and systemic obstacles that may be exacerbated when hearings are held via livestream. What good is an innovative livestreaming platform or online mediation program in the face of digital redlining and the systematic denial of modern Internet infrastructure to marginalized groups?95

New technologies often prompt fear of change and promise of progress. Judges, attorneys, and journalists across the country are no strangers to social media platforms and digital technologies in their personal lives. Many are digital natives. But legal rules on social media in the courtroom have adapted slowly, if at all. New cases involving smartphones and wearable digital technology are still governed by Supreme Court precedents and judicial guidance from pre-digital times. In states where courts have experimented with digital permissiveness, little has surfaced to suggest that updated rules have diminished the sanctity or security of the justice system. The question now is whether any influential institutions or individuals can muster the political or cultural capital to prompt broad-scale adaptation to new technologies by courts. Recent experience suggests that the digital canaries in the coal mine are doing just fine. We can even hear them tweeting.

FREQUENTLY ASKED QUESTIONS 1.

Can judges prohibit journalists, the public, and trial participants from using smartphones and social media apps inside courtrooms? Generally, yes. Judges have broad authority to control how courtrooms operate and can limit the use of electronic devices. Most courts have rules that prohibit the use of smartphones and social media inside courtrooms. Journalists do not have special rights to use electronic devices.These rules are somewhat relaxed for attorneys, but only when

188

P. Brooks Fuller

attorneys must use their electronic devices for courtroom business. The rules also vary by jurisdiction, so it’s critical to know the local rules and to ask the judge’s clerk for permission to take notes, record, or use an electronic device. 2.

So how do I report on a trial if I can’t record the proceedings, live tweet, or take notes on my phone? You should use public records laws and local court rules to get recordings of proceedings as quickly as possible.You might consider asking the judge for permission to quietly record the proceeding for notetaking purposes if the judge (or local rules) prohibits livestreaming from your phone.You should take the best notes you can and then find the nearest location, perhaps a media room in the courthouse, where tweeting is allowed. If you violate the judge’s orders regarding electronic devices, you could be held in contempt of court.

3.

Would criminal contempt of a journalist violate the First Amendment freedom of the press? It depends. If a judge issues an order governing the use of electronic devices in court and a journalist violates that order, the First Amendment probably does not protect the journalist. However, if a judge attempts to hold a journalist in contempt because of the content of their reporting or tries to order the journalist to delete tweets or other information posted online, the judge is likely engaging in an unconstitutional restraint of protected speech.

4. Are courts livestreaming their own hearings to improve access?

Some are, especially as streaming technology has become easier to use, cheaper, and more accessible. Many courts began livestreaming hearings, trials, and appellate arguments to cope with restrictions imposed during COVID-19, and some courts have decided to make some form of livestreaming permanent. Congress is still considering legislation that would require the Supreme Court to livestream the audio of oral arguments and opinion announcements.

Notes

1. Daniel Victor, “I’m Not a Cat,” Says Lawyer Having Zoom Difficulties, New York Times, February 9, 2021.
2. Riley v. California, 573 U.S. 373, 382 (2014).
3. See Reno v. ACLU, 521 U.S. 844, 870 (1997).


4. See Packingham v. North Carolina, 137 S. Ct. 1730 (2017).
5. Jeremy Bentham, Rationale of Judicial Evidence 524 (1827).
6. Richmond Newspapers v. Virginia, 448 U.S. at 570–571 (internal citations omitted).
7. Id. at 581.
8. Nebraska Press Assoc. v. Stuart, 427 U.S. 539 (1976).
9. See, e.g., United States v. Moussaoui, 205 F.R.D. 183 (E.D. Va. 2002); United States v. Hastings, 695 F.2d 1278 (11th Cir. 1983).
10. 384 U.S. 333 (1966).
11. 449 U.S. 560 (1981).
12. 384 U.S. at 335–336.
13. Id. at 339.
14. Id. at 358 n.11.
15. Id. at 358.
16. Id. at 350 (citing Pennekamp v. Florida, 328 U.S. 331 (1946)).
17. 449 U.S. 560, 567 (1981).
18. Id. at 567–568.
19. Id. at 575.
20. Id. at 573–574.
21. Sarah J. Eckman, Video Broadcasting from the Federal Courts: Issues for Congress, Congressional Research Service, October 28, 2019, https://fas.org/sgp/crs/misc/R44514.pdf.
22. Mike Isaac, A Trial and a Twitterstorm: On Live-Tweeting From a Federal Courthouse, New York Times, January 24, 2017.
23. Guidelines for Media Coverage, Going to Court: Information about Going to Court in Clackamas County, www.courts.oregon.gov/courts/clackamas/go/Pages/mediaguidelines.aspx (last accessed February 28, 2022).
24. Tom Isler, Tweeting from Courts Still Slow in Catching on, The News Media & the Law (Spring 2015), www.rcfp.org/wp-content/uploads/2019/01/Spring_2015.pdf.
25. Remmer v. United States, 347 U.S. 227, 229 (1954).
26. Id. at 230.
27. Evan Halper, “Uncharted Waters.” Judges Are Banning Some Capitol Riot Suspects from the Internet, Los Angeles Times, March 25, 2021.
28. In re Dan Farr Prod., 874 F.3d 590, 591–592 (9th Cir. 2017).
29. Id.
30. Estes v. Texas, 381 U.S. 532, 551–552 (1965).
31. Id. at 603–604 (Stewart, J., dissenting).
32. United States Courts, History of Cameras in Courts, www.uscourts.gov/about-federal-courts/judicial-administration/cameras-courts/history-cameras-courts (last accessed February 28, 2022).
33. Eckman, supra note 21 at 8.
34. Fed. R. Crim. P. 53.
35. Andy Sullivan, Bloggers Gain Access to “Scooter” Libby Trial, Reuters, January 21, 2007.
36. Debra Cassens Weiss, Judge Explains Why He Allowed Reporter to Live Blog Federal Criminal Trial, ABA J., January 16, 2009.
37. Id.
38. United States v. Shelnutt, 2009 U.S. Dist. LEXIS 101427 (M.D. Ga. 2009).
39. Cathy L. Packer, Should Courtroom Observers Be Allowed to Use Their Smartphones and Computers in Court: An Examination of the Arguments, 36 Am. J. Trial Advoc. 573 (2013).
40. United States Courts, Federal Court: Media Basics – Journalist’s Guide, www.uscourts.gov/statistics-reports/federal-court-media-basics-journalists-guide#recording (last accessed February 28, 2022).


41. Cameras in the Courtroom – Second Circuit Guidelines (October 1, 2019), www.ca2.uscourts.gov/docs/CE/Cameras.pdf.
42. Nixon v. Warner Commc’ns, Inc., 435 U.S. 589, 610 (1978).
43. Local Rule 83.3 (U.S. Dist. Ct. for W.D.N.C.), www.ncwd.uscourts.gov/sites/default/files/local_rules/Revised_Local_Rules_0.pdf (last accessed February 28, 2022).
44. Local Rule 83.32 (U.S. Dist. Ct. for E.D. Mich.), www.mied.uscourts.gov/altindex.cfm?pagefunction=localruleView&lrnumber=LR83.32 (last accessed February 28, 2022).
45. In the Matter of Courthouse Security and Limitations on the Use of Electronic Devices within United States Courthouses in the N.D.N.Y., Order 26 (N.D.N.Y., September 19, 2019).
46. United States v. Hastert, 15 CR 315 (N.D. Ill., June 8, 2015).
47. Cameras in the Courtroom Act, S.B. 822, 116th Cong. (2019).
48. The Cameras in the Courtroom Act was reintroduced in June 2021, but it has not moved to the floor for a vote. A related House bill, the Transparency in Government Act of 2021, which would mandate broadcast of Supreme Court hearings and instantaneous audio uploads of Supreme Court oral arguments, also languishes between committees. See Transparency in Government Act of 2021, H.B. 2055, 117th Cong.
49. McKay v. Federspiel, 823 F.3d 862 (6th Cir. 2016).
50. Tamara A. Small & Kate Puddister, Play-by-Play Justice: Tweeting Criminal Trials in the Digital Age, 35 Canadian J. Law & Soc. 1 (2020).
51. See Stacy Blasiola, Say Cheese: Cameras and Bloggers in Wisconsin’s Courtrooms, 1 Reynolds Ct. & Media L.J. 197 (2011).
52. Cameras in the Courts: State by State Guide, www.rtdna.org/content/cameras_in_court (last accessed February 28, 2022).
53. New Media Survey, Conference of Court Public Information Officers, August 6, 2014, https://ccpio.org/wp-content/uploads/2014/08/CCPIO-New-Media-survey-report_2014.pdf.
54. Id.
55. Id.
56. Cook County Cell Phone and Electronic Device Ban, www.cookcountycourt.org/HOME/Cell-Phone-Electronic-Device-Ban#3513282-are-all-courthouses-under-the-ban (last accessed February 28, 2022).
57. Idaho Ct. Admin. Rule 49, Electronic Devices in Court Facilities (November 29, 2012), https://isc.idaho.gov/icar49.
58. Cell Phone and Electronic Device Policies – Social Media and the Courts State Links, Nat’l Center for State Cts., www.ncsc.org/topics/media/social-media-and-the-courts/state-links5 (last accessed February 28, 2022).
59. Sarah Fenske, Post-Dispatch Reporter Found in Contempt after Live-Tweeting Closed Hearing, Riverfront Times, June 4, 2019.
60. Id.
61. Max Brantley, Judge Orders Reporter Jailed for Three Days for Recording Court Hearing, Ark. Times, November 19, 2019, 2:44 PM.
62. The Big Lebowski (Working Title Films, 1998).
63. Bantam Books, Inc. v. Sullivan, 372 U.S. 58, 70 (1963).
64. Jason M. Shepard, Prior Restraint Still Makes Important Stories a Hassle, Calif. Publisher (Winter 2019), https://jasonmshepard.medium.com/prior-restraint-still-makes-important-stories-a-hassle-e8471abb7603.
65. Alene Tchekmedyian, Glendale Police Detective Pleads Guilty to Obstruction, Lying to Feds about Ties to Organized Crime, Los Angeles Times, July 17, 2018.
66. Shepard, supra note 64.


67. Mick Dumke & Steve Mills, Cook County Judge Loosens Unusual Restrictions on Publishing Details of Child Welfare Case, ProPublica, April 15, 2019, www.propublica.org/article/propublica-illinois-prior-restraint-court-case-order-cook-county-patricia-martin.
68. Bennett L. Gershman, Contaminating the Verdict: The Problem of Juror Misconduct, 50 S.D. L. Rev. 322, 345 (2005).
69. United States v. Zimny, 846 F.3d 458 (1st Cir. 2017).
70. Id. at 462.
71. Proposed Model Jury Instructions, The Use of Electronic Technology to Learn or Communicate about a Case, Judicial Conference (June 2020), www.uscourts.gov/sites/default/files/proposed_model_jury_instructions.pdf.
72. Thaddeus Hoffmeister & Ann Charles Watts, Social Media, the Internet, and Trial by Jury, 14 Annual Rev. Law & Soc. Sci. 259 (2018).
73. Id.
74. Amy J. St. Eve, Charles P. Burns & Michael A. Zuckerman, More from the #Jury Box: The Latest on Juries and Social Media, 1 Duke L. & Tech. J. 65 (2014).
75. Matthew Aglialoro, Criminalization of Juror Misconduct Arising from Social Media Use, 28 Notre Dame J.L. Ethics & Pub. Pol’y 102 (2015).
76. Thaddeus Hoffmeister, Google, Gadgets, and Guilt: Juror Misconduct in the Digital Age, 83 Univ. Colo. Law Rev. 409 (2012).
77. Code of Professional Ethics, American Bar Assoc. (1908).
78. Canons of Judicial Conduct, American Bar Assoc. (1924).
79. Code of Conduct for United States Judges, in 2A Guide to Judiciary Policy (2019), www.uscourts.gov/sites/default/files/code_of_conduct_for_united_states_judges_effective_march_12_2019.pdf.
80. Id.
81. American Bar Association, Judge’s Use of Electronic Social Networking Media, Formal Op. 462 (2013).
82. Code of Conduct for United States Judges, supra note 79.
83. Confidentiality Obligations for Lawyer Blogging and Other Public Commentary, Am. Bar Assoc. Op. 480 (2018), https://bit.ly/2pPrfB3.
84. See D.C. Bar Op. 370 (2016) (discussing lawyers’ use of social media and advising that “[c]aution should be exercised when stating positions on issues, as those stated positions could be adverse to an interest of a client, thus inadvertently creating a conflict.”).
85. Debra Cassens Weiss, Lawyer Is Disbarred for “Social Media Blitz” Intended to Influence Custody Case and Top State Court, ABA Journal, July 8, 2015.
86. Luke Munn, Angry by Design: Toxic Communication and Technical Architectures, 7 Humanities & Soc. Sci. Comm. 53 (2020).
87. Mary-Rose Papandrea, Moving Beyond Cameras in the Courtroom: Technology, the Media, and the Supreme Court, 2012 B.Y.U. L. Rev. 1901 (2012).
88. Robert Kessler, Why Aren’t Cameras Allowed at the Supreme Court Again?, The Atlantic, March 28, 2013.
89. Moses Merakov, Keeping Cameras (and Cat Filters) Out of the Supreme Court, Wash. J. L. Tech. & Arts, March 8, 2021.
90. Justice Alito: Few Would Watch High Court Arguments on TV, First Amendment Center, October 22, 2007, http://www.firstamendmentcenter.org/justice-alito-few-would-watch-high.
91. Twenty-First Century Courts Act, H.R. 6017, 116th Cong. (2020).
92. Courts Deliver Justice Virtually Amid Coronavirus Outbreak, USCourts.gov, April 8, 2020, www.uscourts.gov/news/2020/04/08/courts-deliver-justice-virtually-amid-coronavirus-outbreak.


93. Janna Adelstein, Courts Continue to Adapt to Covid-19, Brennan Center for Justice, September 10, 2020, www.brennancenter.org/our-work/analysis-opinion/courts-continue-adapt-covid-19.
94. Katelyn Kivel, How the Coronavirus Revolutionized Michigan’s Courts, The Gander, July 14, 2020, https://gandernewsroom.com/2020/07/14/coronavirus-revolutionized-courts.
95. Ernesto Falcon, The FCC and States Must Ban Digital Redlining, Electronic Frontier Foundation, January 11, 2021, www.eff.org/deeplinks/2021/01/fcc-and-states-must-ban-digital-redlining.

Chapter 10

Social Media Policies for Journalists

Daxton R. Stewart
TEXAS CHRISTIAN UNIVERSITY

The tweet seemed harmless enough. The day before President Joe Biden was inaugurated in January 2021, freelance New York Times editor Lauren Wolfe grabbed a screenshot of his plane landing and tweeted, “Biden landing at Joint Base Andrews now. I have chills.” The tweet drew the ire of conservative critics, who asserted that the “chills” were evidence of the New York Times’ political bias. Within days, Wolfe was no longer working for the Times, and not by choice. This action drew even more heat as journalists and other observers came to Wolfe’s defense, noting the bad faith of her attackers – a right-wing mob that centers anti-media sentiment as part of its ideology – and the lack of spine shown by Times leadership, which said the tweet violated the company’s editorial standards barring “anything that damages the Times’s reputation for strict neutrality in reporting on politics and government” but also denied that Wolfe was fired for the tweet.1

It was an incident that has become sadly common, as journalists try to do their jobs and use social media but find themselves at risk both of losing their jobs and of opening themselves to harassment and threats by people who oppose them. The Washington Post suspended reporter Felicia Sonmez for tweeting about Kobe Bryant’s 2003 rape case on the day he died in a plane crash in 2020, saying she had violated the company’s social media policy, only to walk back the suspension the next day after widespread outrage for punishing a journalist for writing truthful things about sexual assault.2 After a smear campaign by conservative activists and politicians complaining about reporting intern Emily Wilder’s pro-Palestinian tweets as a college student, the Associated Press fired her in 2021, also drawing condemnation from journalists who saw the heavy-handed response as an overreaction to bad-faith critics and leading to reconsideration of the AP’s social media guidelines later that year.3

But the consequences for social media violations were sometimes disproportionate based on the power and identity of the journalists involved. In 2021, when it was revealed that popular columnist David Brooks at the New York Times was secretly receiving a second salary from the Weave project largely funded by Facebook while also writing positive things about Weave and Facebook in his columns, he
faced no punishment despite clear violations of the Times’ conflict of interest policies.4

These kinds of dilemmas date back to the earliest days of journalists using social media. One of the early social media blunders in newsrooms came in 2009 when President Obama was in what he thought was an off-the-record discussion with a pool of White House reporters. Less than a week before, Kanye West had famously interrupted country music star Taylor Swift’s speech during the Video Music Awards, and a reporter from CNBC casually asked what President Obama thought about West’s outburst. An ABC employee, listening on a shared live feed of the discussion, circulated Obama’s slightly crude response, and soon after, Nightline co-anchor Terry Moran sent out the following on Twitter: “Pres. Obama just called Kanye West a ‘jackass’ for his outburst at VMAs when Taylor Swift won. Now THAT’s presidential.” Before ABC officials could respond or make a decision regarding whether this should be published, the damage was done. Moran had more than one million followers on Twitter, and even though Moran later deleted the tweet, the word was out. ABC was widely condemned for its lack of professionalism in the matter, and the network soon issued an apology, noting that its “employees prematurely tweeted a portion of (Obama’s) remarks that turned out to be from an off-the-record portion of the interview. This was done before our editorial process had been completed. That was wrong.”5

Social media tools present great opportunities for communicators, including news media professionals, to engage with the audience in ways impossible just two decades ago. However, the benefits social media allow communicators are tempered by the risks inherent in tools that allow messages to be sent immediately and spread rapidly. Further, laws and professional ethics policies drafted with a 20th-century understanding of mass media may not be in tune with communication tools that emerge, develop, spread, and change constantly.

What steps can media organizations take to prevent embarrassing, unprofessional, or even illegal behavior when their employees use social media tools? Over the past decade, most news media organizations have developed guidelines and policies for employee use of social media. The earliest of these came in 2011, when the American Society of News Editors issued its “10 Best Practices for Social Media,” at a time when Facebook and Twitter were dominant in the field, but before Instagram had developed much at all, and well before the launch of tools like Snapchat and TikTok. It also included guidance that has largely been abandoned and seems quaint in hindsight, like “Break news on your website, not on Twitter.”6 When the Society of Professional Journalists (SPJ) revised its Code of Ethics in 2014, it attempted to make it clear that the principles applied to all forms of communication by journalists, encouraging “use in its practice by all people in all media.”7 The Online News Association created a Newsgathering Ethics Code to help aid journalists in using user-generated content shared on social platforms for their reporting.8


A study by Jacob Nelson from the Tow Center for Digital Journalism in 2021 made several recommendations after interviewing dozens of journalists. More than a decade of social media experience in newsrooms showed that journalists feared being fired or scapegoated for harmless posts and that reporters – especially women and people of color – were overly exposed to harassment and threats from bad-faith attackers. Journalists, Nelson reported, find themselves walking a “Twitter tightrope,” where they feel compelled to use social media platforms to build awareness for and trust in their work, yet simultaneously apprehensive about the professional pitfalls and “dark participation” they encounter while doing so.9 Meanwhile, newsroom leaders come under fire for overreacting to tweets while also trying to cater to critics who argue they should stick to classic notions of journalistic objectivity, neutrality, and balance – concepts frequently misunderstood to mean that journalists should be unthinking, opinionless automatons with a “view from nowhere,” but that expose many reporters and news outlets to blistering criticism, especially women and journalists of color.10

Despite such risks, journalists cannot avoid engaging their audiences through social media. News media companies have incorporated social media into their plans, trying to build online followings as print circulation and broadcast audiences have dwindled. Facebook and Twitter have become essential publishing platforms for journalists, with nearly every newsroom employing a social media editor of some kind.11 As Snapchat emerged as a popular platform, particularly among younger audiences, journalists began to explore the possibilities for news, with the New York Times, Buzzfeed, ESPN, the Wall Street Journal, CNN, National Public Radio and more establishing a presence.12 In 2019, video producer and editor Dave Jorgenson launched The Washington Post’s account on TikTok, posting hundreds of goofy videos and amassing nearly a million followers since, while also showing young audiences an inside look at how the Post operates, bringing on Pulitzer Prize winners like Bob Woodward and Robin Givhan, or inviting food reporter Maura Judkis on to talk about pumpkin spice flavored Spam.13

The widespread use of social media by journalists has triggered most news companies and professional organizations to develop guidelines for best practices, either in their codes of ethics or in stand-alone social media policies. For this chapter, several policies of news media companies and professional journalism organizations were reviewed to examine how they define social media tools and determine how guidelines about them should apply, the main topics the policies addressed, and themes regarding the way these organizations advised practitioners to handle the particular challenges of social media. The chapter concludes with best practices for designing social media policies for journalists in light of the policies discussed and the previous chapters of this book.


Major Themes from Journalism Social Media Policies

When the American Society of News Editors Ethics and Values Committee issued its “10 Best Practices for Social Media” in 2011, it referenced 18 other social media policies from news organizations and included guidelines such as the following:

• Traditional ethics rules still apply online.
• Assume everything you write online will become public.
• Use social media to engage with readers, but professionally.
• Beware of perceptions.
• Independently authenticate anything found on a social networking site.
• Always identify yourself as a journalist.
• Social networks are tools, not toys.
• Be transparent and admit when you’re wrong online.
• Keep internal deliberations confidential.

These have largely held as news organizations have drafted and refined their social media policies in the years since, though the tool-versus-toy distinction has been less widely embraced as journalists show their human side on social networks such as TikTok and Instagram, building rapport with sources and the community. Major themes present in current social media policies, reviewed by the author in early 2022, include transparency, friending matters, clearance and review, sourcing, balancing personal and private matters, confidentiality, and intellectual property concerns. Each is discussed briefly in the following sections.

Transparency

The SPJ Code of Ethics calls for journalists to identify sources when possible and to “avoid undercover or other surreptitious methods of gathering information” in most situations.14 This call for openness in reporting methods is reflected in the social media policies as well, most of which demand that journalists identify themselves as journalists in two particular circumstances. First, they should always identify themselves as journalists who are representing a particular organization before posting comments or updates on social media sites or blogs or while commenting on other news stories. As NPR notes in its social media guidelines, revised in 2019, honesty is a core aspect of their social media use: “If as part of our work we are doing anything on social media or other online forums, we do not hide the fact that we work for NPR. We do not use pseudonyms when doing such work.”15 The Radio Television Digital News Association (RTDNA) extends this principle of transparency to avatars and forbids anonymous blogging.16


Second, journalists should also be transparent about who they are when they contact potential sources for reporting purposes. The Wall Street Journal requires that its employees never “us(e) a false name when you’re acting on behalf of your Dow Jones publication or service” and always self-identify as a reporter for the Journal when gathering information for a story.17

Friending and Following

Journalists are called to “act independently” under the SPJ Code of Ethics, in particular by avoiding conflicts of interest, “real or perceived.” This concern is at the heart of statements in nearly every news organization’s social media policy reviewed for this chapter, reflected by specific guidelines for who can be added to a list of “friends” or what organizations or movements journalists can become a “fan” or “follower” of. Journalists are warned to be careful in who they associate with online for fear of compromising their appearance of independence and neutrality.

First, becoming a “friend” of a source or subject of coverage invites risk. A 2017 update of the New York Times social media guidelines expects that everything one of the company’s journalists does on social media draws scrutiny and is “likely to be associated with the Times,” including what you like and who you friend or follow. The Times policy tells journalists to “avoid joining private and ‘secret’ groups on Facebook and other platforms that may have a partisan orientation” or “registering for partisan events on social media” and urges caution even if the journalists are joining the group or event for reporting purposes.18 NPR says that following political parties or advocacy groups as part of reporting a beat is permissible, though they urge that reporters “should be following those on the other sides of the issues as well.”19 The Wall Street Journal requires approval by an editor before a source who may demand confidentiality can be added as a friend. “Openly ‘friending’ sources is akin to publicly publishing your Rolodex,”20 according to the Journal’s policy.

Issues can also arise in newsrooms between managers and employees who may be friends on social media. In its 2013 guidelines, the AP says that managers “should not issue friend requests to subordinates, since that could be awkward for employees. It’s fine if employees want to initiate the friend process with their bosses.”21

Second, becoming a friend of a person involved in a controversial issue, or becoming a fan of a movement, may present issues. When news organizations began creating social media policies a decade ago, many forbade their reporters from becoming entangled in political causes or advocacy. That is still the expectation at the New York Times, which explicitly says that on social media, “our journalists must not express partisan opinions, promote political views, endorse candidates, make offensive comments or do anything else that undercuts The Times’s journalistic reputation.”22


The Roanoke Times advised caution and consistency in its guidelines updated in 2015:

You may sign up for a group or become a “fan” of something, perhaps even to get story ideas, but others could construe that as bias toward a business or organization that the newspaper covers. If you follow a group or account that represents one side of a controversial issue, seek out the group that represents the other side and follow them as well. . . . Manage your friends carefully. Having one source on your friends list but not another is easily construed as bias. Be consistent: Accept no sources or people you cover as friends, or welcome them all.23

Clearance and Review

News organizations, which a decade ago were more likely to require journalists to receive clearance from editors or managers before engaging in social media or releasing news items publicly, have relaxed these guidelines in recent years. As an example, in its earliest drafts of social media guidelines in 2011, ESPN had required employees to get approval from supervisors before posting or blogging about anything involving sports. When the guidelines were revised in 2017, this was softened, so employees are now “strongly encouraged to seek advice from a trusted colleague or supervisor before tweeting or posting something that may conflict with our guidelines and damage your reputation.”24

Today, most policies are less formal, seeking more guidance and feedback for difficult circumstances rather than needing clear preapproval of all content. As NPR advises, “when in doubt, consult the Social Media Team,” a group made up of experienced journalists used to sorting through information from a variety of sources online.25 Similarly, the Roanoke Times requires employees who blog to “notify their immediate supervisor that they have or regularly participate on/contribute to a blog, and talk through any potential conflicts of interests or complications.”26

The British Broadcasting Corporation (BBC), however, still expects “editorial oversight and responsibility for all our activity in BBC spaces,” including social media, in guidelines it revised in 2019. Before starting an activity on a new platform, the BBC says journalists “should consider carefully what the editorial purpose is behind that activity” and “whether we have the resources to manage the account appropriately.”27

Sourcing

The SPJ Code of Ethics requires journalists to “test the accuracy of information from all sources,” a demand that can be challenging when reporters use social


media tools to engage with sources. A healthy skepticism of sources contacted or uncovered through social media tools is built into many of the news organizations’ social media policies. The RTDNA treats information found on social media sites as similar to “scanner traffic or phone tips,”28 which must be confirmed independently. Similarly, the Roanoke Times notes that “Facebook and MySpace are not a substitute for actual interviews by phone or in person, or other means of information gathering, and should not be solely relied upon,” instead requiring offline confirmation and verification of claims made through these sites.29 The Associated Press and the Los Angeles Times specifically extended requirements of verification and authentication to retweeting items found on Twitter. The AP notes, “Sources discovered [on social networks] should be vetted in the same way as those found by any other means.”30

In the Online News Association (ONA) Social Newsgathering Ethics Code, the organization set forward several best practices standards in verifying and safely using user-generated content found on social networks. These include the following:

• Trying to verify the authenticity of such content before publishing or distributing it and holding it to standards equal to information gathered through other means.
• Being transparent with the audience about the extent to which the content has been verified.
• Getting informed consent from the creator of the content.
• Giving proper credit to the owner of the content, presuming it is safe to identify them.31

This concern about intellectual property rights was not common in the social media policies for newsrooms. In fact, in its updated 2019 guidelines, the BBC removed previous guidance it had included in 2012.32 One example of an intellectual property policy was in the NPR guidelines, which remind employees to respect the company’s copyrights, saying that linking to stories on NPR.org is fine, but employees “should not copy the full text or audio onto a personal site or page.”33

Personal versus Professional

The primary concern expressed in social media policies of news organizations was the blurring of the line between a journalist’s personal life and his or her professional life. Several policies suggest that journalists assume that there is no divide between one’s professional life and one’s personal life. “We know that everything we write or receive on a social media site is public,” as NPR notes.34 The Los Angeles Times, in a policy articulated in 2014, noted, “No


matter how careful staff members might be to distinguish their personal work from their professional affiliation with The Times, outsiders are likely to see them as intertwined.”35 The Washington Post acknowledged that while journalists may “express ourselves in more personal and informal ways to forge better connections with our readers,” it should not be done in a way that jeopardizes the “reputation of The Washington Post for journalistic excellence, fairness and independence.”36

NPR and the New York Times extend this caution to reporters expressing personal opinions in a similar manner to concerns about following or becoming a “fan” of a political person or movement mentioned earlier. The AP notes that expressions of opinion “may damage the AP’s reputation as an unbiased source of news,” and thus, employees should avoid “declaring their views on contentious public issues in any public forum,” such as social media.37

NPR revised its company policies on advocacy in 2021 to allow more participation in “civic and cultural events that do not pose conflicts of interest,” such as serving on community advisory boards or on boards of educational institutions and religious organizations, as long as they don’t engage in lobbying. The bar against political participation – such as running for office or endorsing or campaigning for political candidates or causes – remained, as did the bar on advocating for political issues or joining political groups on social media. However, the guide was more permissive on attending marches and rallies that “express support for democratic, civic values that are core to NPR’s work,” such as “the freedom and dignity of human beings” (for example, attending a Pride parade) and “the rights of a free and independent press.”38

The BBC was a bit more forgiving on how staff should use official BBC social media accounts, saying that they should be used “primarily in an official capacity, although they may choose to include some personal detail where they and their line managers are in agreement.”39

Confidentiality

Several social media policies demand that journalists avoid revealing confidential information. The AP forbids “[p]osting AP proprietary or confidential material,”40 while the Wall Street Journal advises journalists to avoid discussing “articles that haven’t been published, meetings you’ve attended or plan to attend with staff or sources, or interviews that you’ve conducted.”41

NPR also strives to keep internal deliberations and discussions internal. In its “how we treat each other” section, NPR encourages employees not to take criticism of colleagues public on social media platforms. “We also treat each other with respect when using social media platforms such as Slack to communicate internally,” the policy notes.42


Best Practices for Developing Social Media Policies

John Paton, CEO of Digital First Media, once stated his three employee rules for using social media as follows:

1.
2.
3.43

This minimalist approach – one that trusts journalists to make responsible decisions while using social media – has not been the reality for news organizations. In general, news media social media guidelines for employees seem to be quite restrictive, both in terms of what kinds of social media tools are typically used and how they should be used. Critics have rightly assailed these policies as damaging to the essential nature of social media tools. However, the policies from both perspectives so far seem to avoid addressing the legal challenges presented by these tools.

Journalism organizations mostly focus on Facebook and Twitter, and the policies about these seem largely concerned with protecting the organization’s status as an objective, neutral reporter of the news. This is to be expected considering the ethical demands of the field. However, it can also be unnecessarily limiting. One of the great benefits of social media tools is enhancing interconnectivity with the audience, and the journalism organization policies seem to inhibit the ability of journalists to engage the audience in this manner. When organizations do not allow journalists besides those in the business of providing opinions to blog about personal or political matters, it limits how the audience understands who journalists are and what they do. This policy may detract from, rather than enhance, transparency. If journalists cannot publicly “friend” some people or become fans or followers of their organizations, the audience may be left in the dark as to their motivations and affiliations.

Beyond transparency matters, the social media policies for journalists reviewed in this chapter have several weaknesses. For one, they do not specifically address several very important concerns of professionals. For journalism organizations, the rogue tweet of President Obama’s off-the-record aside still seems likely to occur, and staffers cannot help but acknowledge major moments and movements in society touching on political elections, civil rights, war and peace, and other cultural touchstones. While the policies mention using social media posts as sources of information and seeking clearances for breaking news, handling informal comments and items perhaps not suited for publication may fall between the cracks of these policies.

Further, journalism organizations should approach social media in a more expansive and inclusive manner, recognizing sites beyond Facebook and Twitter. Over the past decade, social media apps with other primary features have emerged, such as location-based check-ins (Foursquare), ephemeral or disappearing


content (Snapchat, WhatsApp), anonymous gossip (Yik Yak), and livestreaming (Facebook Live, Twitch). As social media tools develop, social media policies should adapt to handle them. Broad statements of principles that guide engagement through social media tools can help practitioners, but specific advice for different sites is of value as well. These policies should be constantly updated.

Overall, the social media policy debate amongst journalists shows that while individual news organizations have developed social media policies that provide guidance to practitioners, there is much more work to be done to ensure that communicators understand the benefits and risks of the broad array of social media tools. Professional organizations such as SPJ and RTDNA should continue to revise and update their guidelines to make sure they are in line with current technology and the best practices in the field.

FREQUENTLY ASKED QUESTIONS

What Are the Five Things Every Social Media Policy for Journalists Should Address?

1. Transparency

Transparency is a hallmark of journalism. Professional standards require journalists to be honest about who they are and their methods, and deception is strongly discouraged. As such, journalism organizations should require employees to use their real names and to disclose their affiliations when using social media for work purposes. For example, a Twitter account used by J. Jonah Jameson for the Daily Bugle should be something along the lines of “JJJameson_DB” or should otherwise include a note that Jameson works for the Bugle in his profile information. He should not skulk about using pseudonyms, either on Twitter or while commenting on stories on Facebook or elsewhere, even in communities built on anonymity such as Reddit.

One of the great strengths of social tools is that they allow interaction with citizens. While citizens may hide behind false profiles or comment anonymously, journalists should not respond in kind; instead, they should promote honest communication and accountability to the public.

2. Friending and Following

Journalism organizations should make clear what the rules are for journalists who use social media accounts, both in their professional and in their private activities. However, such guidelines should provide some


flexibility for journalists to maintain a private life in which they can participate meaningfully in democracy, culture, and relationships. While fairness and objectivity are noted professional standards for journalists, these concepts have flaws, as noted by Bill Kovach and Tom Rosenstiel in their manifesto, Elements of Journalism.44 More important, they argue, is independence from factions and avoiding conflicts of interest. As such, journalists should be able to friend or follow whomever they wish, as long as they remain independent from those friends or causes and are transparent about any connections they may have.

One possible policy would be urging journalists to maintain separate professional and private accounts – one for business, one for personal connections. However, even then, skeptical members of the public or subjects of coverage may uncover the journalist’s private account, leading to potential embarrassment for his or her organization. To avoid the appearance of a conflict of interest or bias, once a journalist follows or friends one side of a cause, he or she should look to follow/friend other sides as well. And perhaps most essentially, journalists should make clear in their profiles that personal statements are their own and not representative of their employer’s thoughts.

3. Intellectual Property

Intellectual property matters – particularly copyright – present some of the greatest challenges for journalists using social tools. Journalism organizations should ensure that employees are of the mindset that anything not created by the organization needs permission from the copyright holder before it can be republished. This means that hosting photographs, YouTube videos, and texts from sites other than your own is potentially dangerous. True, news reporting is one of the categories protected by the fair use doctrine (see Chapter 4), but because news is a commercial use and because photographs and videos are typically used in full, there is a strong likelihood that republishing them for news purposes does not qualify as fair use. Using trademarks is another matter – logos and such used for news reporting purposes have stronger protection under the Federal Trademark Anti-Dilution Act – but still, journalism organizations should be cautious of such uses without permission.

Therefore, journalism organizations should get in the habit of asking permission to use the works of others. Social media guidelines should establish a process for seeking permission and confirming that it has


been granted. And when in doubt, journalists should seek the aid of their attorneys before publishing something that could cost the organization damages for infringement.

4. Sharing and Retweeting

The culture of social media is one of sharing, and journalists should recognize this for the effective use of social tools. However, the culture of sharing does not automatically mean sharing has strong legal protections. First, before posting the video, audio, or words of another person, journalists should consider potential intellectual property and copyright issues (see the third answer, “Intellectual Property,” and Chapter 4). Then journalists should provide proper attribution for the source of the shared material. If a reporter hears a news tip or breaking story from another organization, he or she should note the source in the social media post – for example, by using the HT (“hat tip”) notation on Twitter.

The easiest and most widely accepted form of acknowledgment is the hyperlink. Journalism organizations should take advantage of linking to provide both background to their stories and credit where it is due. Under Section 230 of the Communications Decency Act, hyperlinking also is largely protected. If the source turns out to be defamatory or to cause other harms, users are not liable for the hyperlinked material.

Another very easy way to share information gathered by or stated by others on Twitter is the retweet, which has caused headaches for several news organizations such as the Associated Press, which generally discourages retweeting as a form of reporting. Retweeting is Twitter’s form of sharing – either a link, a photo, a video, or even a tip or snippet of information provided by citizens. The culture of social media makes it clear that retweeting is not an endorsement or even a statement that the underlying information is truthful. It’s more of a heads-up to the audience – though if a journalist has doubts about the veracity of a statement or if it is yet to be independently confirmed, the journalist certainly should make this clear in the process of sharing.

5. Protecting Journalists from Harassment

The most concerning development over the past decade is how social media platforms have become essential tools for journalists but have also drastically increased the ability of people to have direct access to journalists and their platforms, making them more open to threats and


harassment than ever before. As noted in the opening of this chapter, partisan activists know how to manipulate newsroom leaders and management by targeted attacks on journalists for innocuous social media activity. That these attacks are successful is an indictment of journalism leaders and the human resources departments that set out to regulate the activities of newsroom staff, even in their personal lives.

Journalism organizations should proactively expect this kind of harassment, which has especially targeted women and journalists of color in recent years, as noted by Dr. Michelle Ferrier in her research in the area.45 Taking personnel disputes public and firing or disciplining journalists for social media posts only fuels the trolls and harassers further, showing them that their campaigns can be successful. One positive example would be the New York Times, which in 2021 came to the defense of reporter Taylor Lorenz after she became the target of Fox News host Tucker Carlson, who attacked her on his program based on a tweet, leading to a torrent of hateful and dangerous comments aimed at her on her social media accounts. The Times responded with a statement calling out the behavior and standing behind Lorenz:

In a now familiar move, Tucker Carlson opened his show by attacking a journalist. It was a calculated and cruel tactic, which he regularly deploys to unleash a wave of harassment and vitriol at his intended target. Taylor Lorenz is a talented New York Times journalist doing timely and essential reporting. Journalists should be able to do their jobs without facing harassment.46

Acknowledgment of these risks and dangers and consistent guidance for how newsrooms will respond to threats of this kind should be made clear in social media policies at every journalism institution.

Notes

1. Jeremy Barr, The New York Times Says It Didn’t Part Ways with Editor over Biden “Chills” Tweet, Washington Post, January 24, 2021.
2. Lindsey Ellefson, Washington Post Reinstates Reporter, Now Says Her Tweet Didn’t Violate Rules, The Wrap, January 29, 2020, www.thewrap.com/washington-post-reinstates-reporter-now-says-her-kobe-bryant-tweet-didnt-violate-rules.
3. Charlotte Klein, Ousted AP Journalist Says She Was “Hung Out to Dry” by the News Agency, Vanity Fair, May 23, 2021, www.vanityfair.com/news/2021/05/emily-wilder-ap-journalist-fired.
4. Craig Silverman, Facebook Helped Fund David Brooks’s Second Job. Nobody Told the Readers of The New York Times, BuzzFeed News, March 3, 2021, www.buzzfeednews.com/article/craigsilverman/david-brooks-nyt-weave-facebook-bezos.


5. Clint Hendler, Pinning Down the “Jackass” Tale, Columbia Journalism Review, September 18, 2009, www.cjr.org/transparency/pinning_down_the_jackass_tale.php.
6. James Hohmann & the 2010–2011 ASNE Ethics and Values Committee, 10 Best Practices for Social Media: Helpful Guidelines for News Organizations 3 (May 2011), http://asne.org/portals/0/publications/public/10_best_practices_for_social_media.pdf.
7. Society of Professional Journalists, SPJ Code of Ethics (2014), www.spj.org/ethicscode.asp.
8. ONA Social Newsgathering Ethics Code, Online News Association, https://journalists.org/tools/social-newsgathering/ (last accessed February 25, 2022).
9. Jacob L. Nelson, A Twitter Tightrope without a Net: Journalists’ Reactions to Newsroom Social Media Policies, Columbia Journalism Review, December 2, 2021, www.cjr.org/tow_center_reports/newsroom-social-media-policies.php.
10. Matthew Ingram, Objectivity Isn’t a Magic Wand, Columbia Journalism Review, June 25, 2020, www.cjr.org/analysis/objectivity-isnt-a-magic-wand.php.
11. Melanie Stone, Social Media Editors in the Newsroom: What the Job Is Really Like, MediaShift, March 17, 2014, http://mediashift.org/2014/03/social-media-editors-in-the-newsroom-what-the-job-is-really-like/.
12. See Talya Minsberg, Snapchat: A New Mobile Challenge for Storytelling, New York Times, May 18, 2015, www.nytimes.com/times-insider/2015/05/18/snapchat-a-new-mobile-challenge-for-storytelling/?_r=0; Joseph Lichterman, Snapchat Stories: Here’s How 6 News Orgs Are Thinking about the Chat App, NiemanLab, February 23, 2015, www.niemanlab.org/2015/02/snapchat-stories-heres-how-6-news-orgs-are-thinking-about-the-chat-app.
13. Nicole Gallucci, Dave Jorgenson Chats about Life as the Washington Post TikTok Guy, His Love of Spam, and More, Mashable, March 22, 2021, https://mashable.com/article/dave-jorgenson-washington-post-tik-tok-guy-interview; Alex Mahadevan, How The Washington Post’s TikTok Guy Dave Jorgenson Gets Millions of Views by Being Uncool, Poynter, October 2, 2019, www.poynter.org/reporting-editing/2019/how-the-washington-posts-tiktok-guy-dave-jorgenson-gets-millions-of-views-by-being-uncool/.
14. Society of Professional Journalists, supra note 7.
15. Mark Memmott, Wright Bryan & Lori Todd, NPR Ethics Handbook, Special Section: Social Media, National Public Radio, February 11, 2019, www.npr.org/about-npr/688418842/special-section-social-media.
16. Radio Television Digital News Association, Social Media & Blogging Guidelines, www.rtdna.org/article/social_media_blogging_guidelines (last accessed February 25, 2022).
17. News Leaders Association, The Wall Street Journal Policies for Employees of the News Departments of the Wall Street Journal, Newswires, and MarketWatch, https://members.newsleaders.org/resources-ethics-wsj (last accessed February 25, 2022).
18. Social Media Guidelines for the Newsroom, The New York Times, October 13, 2017, www.nytimes.com/editorial-standards/social-media-guidelines.html.
19. Memmott et al., supra note 15.
20. News Leaders Association, supra note 17.
21. Associated Press, Social Media Guidelines for AP Employees (May 2013), www.ap.org/assets/documents/social-media-guidelines_tcm28-9832.pdf (last accessed February 25, 2022).
22. The New York Times, supra note 18.


23. Professional Standards and Content Policies, The Roanoke Times, July 21, 2015, www.roanoke.com/site/professional_standards.html.
24. John Skipper, ESPN’s Social Media Guidelines, ESPN, November 2, 2017, www.espnfrontrow.com/2017/11/espns-social-media-guidelines/amp.
25. Memmott et al., supra note 15.
26. Roanoke Times, supra note 23.
27. British Broadcasting Corporation, Guidance: Social Media (July 2019), www.bbc.com/editorialguidelines/guidance/social-media/ (last accessed February 25, 2022).
28. Radio Television Digital News Association, supra note 16.
29. Roanoke Times, supra note 23.
30. Associated Press, supra note 21.
31. ONA Social Newsgathering Ethics Code, supra note 8.
32. Previous BBC guidance told staffers to be aware of the “necessary rights to any content we put on a third-party site” and that the company is “aware of, and comfortable with, the site’s own terms and conditions,” which may limit uses to personal or noncommercial purposes. See British Broadcasting Corporation, Social Networking, Microblogs and other Third Party Websites: BBC Use (2012).
33. Memmott et al., supra note 15.
34. Id.
35. Ethics Guidelines, The Los Angeles Times, June 26, 2014, www.latimes.com/la-times-ethics-guidelines-story.html.
36. Ethics Policy, The Washington Post, www.washingtonpost.com/policies-and-standards/ (last accessed February 25, 2022).
37. Associated Press, supra note 21.
38. NPR Ethics Handbook: Impartiality, National Public Radio, July 7, 2021, www.npr.org/templates/story/story.php?storyId=688413430 (last accessed February 25, 2022).
39. BBC, supra note 27.
40. Associated Press, supra note 21.
41. News Leaders Association, supra note 17.
42. Memmott et al., supra note 15.
43. John Paton, JRC Employee Rules for Using Social Media, Digital First, April 30, 2011, http://jxpaton.wordpress.com/2011/04/30/jrc-employee-rules-for-using-social-media.
44. See Bill Kovach & Tom Rosenstiel, Elements of Journalism (3rd ed. 2014).
45. Michelle Ferrier, Social Media Policies Put Journalists at Risk, Columbia Journalism Review, December 3, 2021, www.cjr.org/tow_center_reports/social-media-newsroom-policies-responses.php.
46. @NYTimesPR, Twitter (March 10, 2021, 2:30 p.m.), https://twitter.com/NYTimesPR/status/1369747504565256193.

Chapter 11

Social Media Policies for Advertising and Public Relations

Holly Kathleen Hall
ARKANSAS STATE UNIVERSITY

Organizations are increasingly discovering they need to utilize social media strategies and tactics in their strategic communication plans and campaigns. Before the advent of social media, defined as “a group of Internet-based applications that . . . allow the creation and exchange of user-generated content,”1 organizations more often communicated through controlled, one-way channels. The interactivity and instantaneous manner of social media present some unique challenges. Particular areas of attention for strategic communication professionals include Federal Trade Commission (FTC) regulations on endorsements and testimonials, sweepstakes and contests, native advertising, intellectual property concerns, and issues regarding privacy and the boundaries of labor laws. This chapter will address those concerns and provide recommendations for drafting effective and dynamic social media guidelines. Every company and campaign is unique and will differ on what is appropriate, in some respects. Yet there are some standard principles that can be applied universally.

Uses of Social Media in the Strategic Communication Industry

The appeal of social media in public relations and advertising is multifaceted. Social media can provide instantaneous feedback from consumers or potential consumers and an unparalleled level of engagement with audiences. Social media is less about selling products and more about helping solve consumers’ problems, whereby the organization’s brand and reputation are enhanced, and the organization positions itself as the trusted expert in its particular area.

Organizations can also use social media in unique ways to connect with audiences during difficult times, such as the global COVID-19 pandemic. Steak-umm, for example, demonstrated the benefits of using a multidimensional approach in its tone to resonate with audiences. Nathan Allebach, social media manager at Allebach Communications and the person behind Steak-umm’s tweets, maintains, “It’s not just funny or just serious. . . . It’s a combination of memes and humor and then a stream of consciousness.”2


When social media programs are executed well, the benefits to the brand can be exponential. When implemented incorrectly, the consequences range from no translation to the bottom line to big-time losses and a brand that is irreparably damaged; from the loss of one or two customers to the loss of thousands of customers; from the loss of one or two ill-thinking employees to a major lawsuit against the entire organization. And that can mean money damages and attorneys’ fees in addition to a tarnished reputation. A proactive, effective social media policy can mitigate these situations.

There is a litany of social media mishaps that provide solid proof of the need for social media policies. Take, for example, the instances of failing to disclose certain information, such as the FTC consent order filed against California-based online entertainment network Machinima, Inc., in 2016 for the lack of disclosures when it paid influencers to post YouTube videos or other online product endorsements for Microsoft’s Xbox One system and several games. The influencers paid by Machinima failed to adequately disclose that they were being paid for their opinions. The FTC barred Machinima from misrepresenting in any influencer campaign that the endorser is an independent user of the product. The agency also compelled Machinima to make certain that all of its influencers were aware of their responsibility concerning disclosures.3

There are differing philosophies for handling customer complaints on social media. Some brands exercise an open strategy where complaints are discussed using public messaging on platforms like Twitter. Others prefer a closed strategy, encouraging customers to converse about issues using less public methods. Delta Airlines, for example, employs the open strategy, which increases transparency but also places a spotlight on a catalog of negative experiences. In contrast, McDonald’s uses a closed strategy, providing a survey link in response to negative tweets, which moves the conversation to a private channel instead of allowing such complaints and negativity to overshadow its Twitter presence.4

Professional organizations such as the Page Society and the Public Relations Society of America (PRSA) emphasize transparency in their principles and code of ethics. The Page Principles explicitly advocate telling the truth and listening to stakeholders. Social media platforms can be used by brands to promote these values.5

Some organizations have found their social media policies violate the National Labor Relations Act (NLRA). Restaurant chain Chipotle Mexican Grill, for example, was sued by an employee after he was fired for commentary made on social media that was critical of Chipotle. The National Labor Relations Board (NLRB) found the policy contained language that was both overbroad and likely to chill employee speech.6

Without a guiding framework, some kind of harm is very probable. We need only look at a few more examples to see the kind of damage a single tweet can do. In March 2011, an employee of New Media Strategies, the agency


responsible for Chrysler’s consumer-facing Twitter account, tweeted, “I find it ironic that Detroit is known as the #motorcity and yet no one here knows how to f***ing drive.” Chrysler dropped the agency, fearing the resulting firestorm from the tweet would impair their relationship with the Motor City.7

Around the same time, attention turned to the aftermath of the earthquake and tsunami in Japan. Gilbert Gottfried, the voice of the Aflac Duck, posted a stream of jokes on his Twitter feed related to the tragedy. In addition to displaying incredible insensitivity, Gottfried’s actions had the potential to destroy 75% of Aflac’s revenue, which came from Japan. The company soon announced the duck would be voiced by a new actor.8

Technology company LG attempted to poke fun at Apple in 2014 when complaints poured into the iPhone maker regarding the “bendability” of its new iPhone 6. The tweet from the LG France account stated, “Our smartphones don’t bend, they are naturally curved;).” That seems innocuous enough. The problem? The tweet was sent from an iPhone, and the joke, therefore, backfired on LG.9

The inability to control the message in the marketplace is the major reason some organizations cite for forbidding their employees from using social media altogether. This is an impractical approach for companies who need the level of engagement and the data that social media can provide. Instead, strategic communicators should draft social media policies that, as Lansons head of digital Simon Sanders noted, “set the boundaries of what can be said and offer guidelines on how it can be said. As much as they restrict, they also enable and empower, giving freedom within a framework.”10

In a post-COVID-19 world, as personal and professional online boundaries are more blurred than ever, social media places employees’ personal beliefs and opinions into the public space. Separating these two domains is increasingly difficult. Social media policies must balance issues of privacy, free expression, and protecting an organization’s reputation.

So are you at risk? What is the likelihood of a Chipotle, Chrysler, or Aflac type of incident in your organization? The key is to participate in the social media conversation. Provide employees the opportunity to contribute to those spaces. Give them the freedom, but provide them with the tools and principles to guide them in this new environment.

Legal Cases and Considerations

One of the aspects making social media policy development so challenging is that the drafters often feel compelled to touch on every aspect of law or regulation that might be implicated. This is, obviously, an impossible task. Some of the most pressing areas of law to consider in the policy development phase include when and how disclosures are required, such as under the FTC endorsement and sponsorship regulations; sweepstakes and contest rules; intellectual property considerations; labor, union, and workplace issues; defamation; harassment; privacy; and obscenity.


One of the most interesting legal issues to watch in recent years is how the US courts have dealt with privacy (see Chapter 3). Three cases in particular are viewed as groundbreaking.

In Pietrylo v. Hillstone Restaurant Group, a federal district court in New Jersey found the restaurant group liable for violating the Stored Communications Act.11 Two employees of the restaurant group developed a password-protected MySpace page on which they aired their grievances about their employment. A manager learned of the site and asked for the log-in ID and password. One of the employees provided the information, and the two creators of the site were fired “for damaging employee morale and for violating the restaurant’s ‘core values.’”12 During the trial, the employee stated she felt she had been coerced into providing the ID and password. The court found the restaurant group at fault; the managers had not been authorized to view the site. Had this been a non-password-protected site, the outcome might have been different.

In Stengart v. Loving Care Agency, a home health employee used her company-provided laptop to access her Yahoo! mail account to communicate with her attorney regarding issues with her work situation.13 Again, this was a personal, password-protected account, and the court found this employee had a certain expectation of privacy in emails to her attorney. This case also incorporated the aspect of attorney-client privilege. So does this mean companies cannot monitor workplace computers? Not necessarily. The New Jersey Supreme Court opinion stated:

Our conclusion that Stengart had an expectation of privacy in emails with her lawyer does not mean that employers cannot monitor or regulate the use of workplace computers. Companies can adopt and enforce lawful policies relating to computer use to protect the assets, reputation, and productivity of a business and to ensure compliance with legitimate corporate policies. . . . But employers have no need or basis to read specific contents of personal, privileged, attorney-client communications in order to enforce corporate policy.14

Perhaps the most fascinating decision was City of Ontario v. Quon. In this case, the city of Ontario, California, combed through an employee’s text messages from his city-issued pager to see how many messages were personal or work-related because overage fees were being assessed to the city. The legal issue at hand was whether this search violated the Fourth Amendment. The Supreme Court declined to decide whether Quon had a reasonable expectation of privacy. While the Court held the city did not violate Quon’s privacy, the justices acted with an abundance of restraint regarding privacy expectations and new technologies, urging prudence concerning emerging technology and stating:

Cell phone and text message communications are so pervasive that some persons may consider them to be essential means or necessary instruments for self-expression, even self-identification.
That might strengthen the case for an expectation of privacy. On the other hand, the ubiquity of those devices has made them generally affordable, so one could counter that employees who need cell phones . . . could purchase and pay for their own. And employer policies concerning communications will of course shape the reasonable expectations of their employees, especially to the extent that such policies are clearly communicated.15

While the absence of a sudden and perhaps inflexible decision from the Court was welcome, the case also leaves many questions unanswered for employers and employees alike. The key takeaway from these privacy cases seems to be this: Employers need a policy in place that specifically and clearly outlines the level of privacy employees can expect in their workplace, both for email and for social media accounts. And any employer searches of employee sites or content should be conducted for legitimate business reasons. When courts are determining whether an employee had a reasonable expectation of privacy, any employer policy or terms-of-service-type document will likely be examined to assist in characterizing what is reasonable.16

Somewhere in the middle, the right of the employer to protect his or her enterprise collides with the right of an employee to exercise speech that might very well be protected by the NLRA, which applies to union and non-union employees. Under the act, employees should be allowed to discuss online “wages, hours, or terms or conditions of employment.”17 So employers have to determine the fine line between working-condition discussions, for example, and disparaging the company’s leaders.

The first groundbreaking case relating to workers and social media dealt with an employee of a Connecticut ambulance service who criticized her employer on Facebook, using several vulgarities to ridicule her supervisor. Regardless, the NLRB placed her speech in the “working conditions” discussion category.18 In February 2011, the parties settled, leaving open the question of the range of speech social media policies must allow employees. The company did agree to revise its rules, recognizing they were too broad. The case does demonstrate, however, that the NLRB is willing to fight for employees’ rights in this area, despite the vulgarities and mocking that may be involved in an employee’s postings.19

Hundreds of actions have been brought by employees under the heading of unfair labor practices regarding the enforcement and maintenance of social media policies. Most of these cases did not go to trial and were settled after the NLRB issued a complaint and the company agreed to modify its social media policy.20

The aforementioned Chipotle case was perceived by some as a “wild expansion” of protected concerted activity under the NLRA.21 One employee posted tweets about working at Chipotle. One tweet, containing a news article about hourly employees having to work on snow days, was aimed at Chipotle communications director Chris Arnold. The employee’s other tweets were reactions to customer tweets. For example, one customer tweeted, “Free chipotle
is the best thanks.” The employee’s retort was, “nothing is free, only cheap #labor. Crew members only make $8.50hr how much is that steak bowl really?” The employee’s manager requested the employee delete the tweets, and he complied. The NLRB found the tweets involved the employee’s work conditions, but they were not directed to his fellow employees, so the tweets did not amount to concerted activity protected by the NLRA.

Chipotle lost on every other issue, however. The NLRB took issue with Chipotle’s social media policy, which barred posting “incomplete, confidential, or inaccurate information” and “disparaging, false, [or] misleading” information about the company. The NLRB found this language would likely chill employee speech because the terms “confidential” and “disparaging” were not adequately explained and could be read to include statements protected by the NLRA. Similarly, it found the ban on false or misleading statements overly broad; the ban could have been adequately narrowed by a provision limiting it to false and misleading statements made with malice.22

In 2004, the NLRB’s Lutheran Heritage decision held that employers could violate Section 7 of the NLRA by keeping handbook policies and rules that might “reasonably be construed” by an employee to “chill” protected activity under the NLRA. Until 2017, this “reasonably construe” standard was utilized to invalidate many employer social media policies.23

In December 2017, the NLRB’s decision in The Boeing Company case was significant in that it enunciated a new standard for facially neutral workplace policies. Boeing had a policy prohibiting employees from using camera-enabled devices without a permit. The NLRB upheld the policy, saying it would now examine “(1) the nature and extent of the rule’s potential impact on NLRA rights and (2) an employer’s legitimate justification associated with the rule”24 in an effort to strike a balance between the business’s rationale and the intrusion on employee rights. While the Boeing decision is clearly more employer-friendly, there are signs this could be a short-lived trend due to the new composition of the NLRB after a change in presidential administrations.25

In Union Tank Car Company (2020), the NLRB found that the employer unlawfully maintained a non-disparagement rule that barred communication meant to damage the organization’s reputation with customers and/or employees. The NLRB found that while it is acceptable for employers to keep employees from making statements that injure the company’s reputation, it was improper to keep employees from making statements about the company to other employees.26

One conclusion we can draw about social media and labor issues is that employers simply cannot have a blanket policy stating employees cannot talk about their organizations. Companies have to be specific about the type
of speech that is and is not protected. Speech whose purpose is improving working conditions, including critique of supervisors, should be protected in order to comply with labor laws, including the NLRA and federal and state laws such as Sarbanes-Oxley and whistleblower statutes that protect employees who complain about working conditions or bring potential fraud to light. Speech that simply defames or insults supervisors would most likely not be protected. Policies need to clearly define vague terms such as “disparaging” and include a provision explaining that false and misleading statements must be made with malice to fall outside the protection of the NLRA.

When it comes to social media disclosure requirements, the FTC continues to clarify standards related to native advertising and endorsements/sponsorships. In both situations, Section 5 of the FTC Act prohibits “unfair or deceptive acts or practices in or affecting commerce,” and deception is defined as “a representation, omission, or practice . . . likely to mislead consumers acting reasonably under the circumstances and is material to consumers – that is, it would likely affect the consumer’s conduct or decisions with regard to a product or service.”27

Native advertising messages can often be impossible to differentiate from news or similar informational subject matter. Advertisements can be formatted to match the style and layout of the content into which they are incorporated. To ensure native advertising is not deceptive, the FTC recommends the following in order to promote transparency with the consumer:

An advertisement or promotional message shouldn’t suggest or imply to consumers that it’s anything other than an ad. Some native ads may be so clearly commercial in nature that they are unlikely to mislead consumers even without a specific disclosure. In other instances, a disclosure may be necessary to ensure that consumers understand that the content is advertising. If a disclosure is necessary to prevent deception, the disclosure must be clear and prominent.28

An FTC settlement in 2020 with two Michigan companies, Physician’s Technology, LLC, and Willow Labs, LLC, and two individuals affiliated with both businesses, Dr. Ronald Shapiro and David Sutton, was the result of ads airing on national TV, satellite radio, online, and on social media in which the defendants claimed that for people suffering from rheumatoid arthritis, diabetic neuropathy, and other serious conditions, the light-emitting Willow Curve would provide “real, lasting, proven pain relief, without the costs, risks or inconvenience associated with elective surgery, prescription drugs or office visits.” The defendants also employed native advertising. In one online magazine, for example, what appeared to be an article, “Understanding the Gender Gap in Sports Injuries and Treatment,” was actually advertising for the Willow Curve. The complaint alleged consumers would not be able to determine that
the articles were actually advertisements. The settlement included a judgment of $22 million.29

The FTC’s Guides Concerning the Use of Endorsements and Testimonials in Advertising were enacted in 1980 and amended in 2009. The Guides provide direction ensuring that advertising using endorsements or testimonials complies with the requirements of the FTC Act. Included in the Guides is a provision that “if there is a connection between an endorser and a seller of an advertised product that could affect the weight or credibility of the endorsement, the connection must be clearly and conspicuously disclosed.”30 The Guides were revised in 2013 to provide examples and more detail for digital endorsements.31 While the FTC’s general philosophy maintains that online communications are subject to the same regulations and principles as traditional media, the agency finds itself having to step in and carve out specific policies and recommendations for online commercial speech. In 2019, the FTC published a brochure providing tips on when and how to make a good disclosure, such as “Disclose when you have any financial, employment, personal, or family relationship with a brand,” “Place it so it’s hard to miss,” and “Use simple and clear language,” and offering platform-specific recommendations.32

A 2018 report showed that among 800 Instagram influencers studied, 71.5% attempted to disclose their relationships, but only one in four actually did so in a way that complied with FTC regulations.33 The FTC does monitor the accounts of popular influencers and has a history of enforcement actions. For example, in March 2020, Teami, a marketer of teas and skincare products, settled with the FTC after being accused of using endorsements by well-known social media influencers who did not adequately disclose that they were being paid to promote its products.34 The key lesson is this: Be clear, transparent, and obvious with disclosures, and you’ll stay in compliance with FTC regulations on endorsements and sponsorships.

Tighter restrictions in this area could be forthcoming. In 2020, after a vote to update the current endorsement guides, FTC commissioner Rohit Chopra issued a statement suggesting that the commission would reevaluate its approach to influencer marketing. Speculation is that the FTC will focus on major companies in this round of revisions – both social media organizations and brands.35

Another area of significance to address in social media policies for strategic communication organizations involves consumer reviews. The Consumer Review Fairness Act, effective in March 2017, “protects consumers’ ability to share their honest opinions about a business’s products, services, or conduct in any forum – and that includes social media.”36 The act passed after some questionable business practices surfaced, such as contract provisions that permitted suing or penalizing consumers who post negative reviews. The act makes the following contract provisions illegal: those that keep someone from reviewing a company’s products, services, or conduct; those that enforce a penalty against someone who gives a review; and those that ask consumers to part with their intellectual property rights in the content of their reviews.37 Organizations need to make sure contracts, as well as online terms of service, do not contain any provision that keeps consumers from sharing truthful reviews and/or punishes those who do.
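Disclosure obligations like the endorsement rules described above lend themselves to a simple automated first pass before human review. The following Python sketch is purely illustrative: the marker list and sample posts are hypothetical assumptions, not FTC rules, and a flagged (or unflagged) post still requires human and legal review, because the FTC also weighs placement and prominence, not just the presence of a hashtag.

```python
# Illustrative first-pass screen for endorsement disclosures. The marker list
# and sample posts are hypothetical; a "clean" result is not legal clearance.

DISCLOSURE_MARKERS = {"#ad", "#sponsored", "#paidpartnership", "paid partnership"}

def missing_disclosure(text: str, sponsored: bool) -> bool:
    """Flag sponsored posts that contain no recognizable disclosure marker."""
    if not sponsored:
        return False
    lowered = text.lower()
    return not any(marker in lowered for marker in DISCLOSURE_MARKERS)

posts = [
    {"text": "Obsessed with this serum! #ad", "sponsored": True},
    {"text": "This tea changed my mornings. Link in bio!", "sponsored": True},
    {"text": "Beautiful day for a run.", "sponsored": False},
]

for post in posts:
    if missing_disclosure(post["text"], post["sponsored"]):
        print("Flag for review:", post["text"])
# Only the second post is flagged: it is sponsored but carries no disclosure.
```

A screen like this can catch the most obvious omissions across a large influencer campaign, but it cannot judge whether a disclosure is “clear and conspicuous” in context; that remains a human call.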


Sweepstakes and contests are an effective way to engage consumers and collect useful data. It’s vital to understand the rules and regulations in this area, which can be bewildering, as there are regulations at the federal and state levels, as well as platform-specific rules. In 2014, retailer Cole Haan received a warning letter from the FTC because of the lack of disclosures from entrants in its Pinterest-based contest. Participants were directed to create a Pinterest board and pin five photos using the hashtag #wanderingsoles. But the instructions did not inform entrants to disclose that their posts were part of a contest to win a $1,000 shopping spree. In essence, the posts were undisclosed endorsements.38 In 2015, the FTC updated the FAQs to its Guides Concerning Use of Endorsements and Testimonials in Advertising regarding social media contests, advising that contest-type posts include the hashtags #sweepstakes, #contest, or #giveaway and/or the words “enter for a chance to win.”39 Contest instructions need to use clear language such as “No Purchase Necessary,” visibly identify the entity sponsoring the contest, detail entry procedures and eligibility requirements, accurately describe the awards, and explain the method of choosing a winner. In addition, any publicity rights regarding winners and participants should be mentioned, and if written consent can be obtained, that is preferable. Some aspects, such as prize eligibility, may be governed by the state in which the entrant dwells.40

There are clear advantages for brands and strategic communicators when it comes to using social media to promote products, services, and causes, as well as to engage audiences. These tactics, as noted, are not free from risk. So how can organizations protect themselves? A strong social media policy should detail how your organization and employees should use social media responsibly in order to protect you from a range of issues. Yet recent statistics show many organizations still are not utilizing this valuable tool. A 2016 report from the Pew Research Center noted that 63% of Americans said their employer has no social media policy.41 This leaves them vulnerable to a variety of difficulties, from security threats to legal liability to just plain bad publicity.

According to the social media management platform Hootsuite, these are the minimal elements every policy should include from a security standpoint (a simple way to track these elements is sketched after the examples below):

- Brand guidelines that explain how to talk about your company on social.
- Rules related to confidentiality and personal social media use.
- Social media activities to avoid, like Facebook quizzes that ask for personal information.
- Which departments or team members are responsible for each social media account.
- Guidelines related to copyright and confidentiality.
- Guidelines on how to create an effective password and how often to change passwords.
- Expectations for keeping software and devices updated.
- How to identify and avoid scams, attacks, and other security threats.
- Who to notify and how to respond if a social media security concern arises.42

Hootsuite also highlights some policies it considers “best in class”:

Adidas Group

This policy is fairly brief, easy to read, promotes employee engagement online, and prompts employees to remember their responsibilities.43

FedEx

The strength of FedEx’s policy is the “question and answer” illustrations within each section, highlighting specific issues employees may face. The following is an example:

Question: I have been asked to write a recommendation for a former subordinate on LinkedIn. Is it okay to write the recommendation?

Answer: No. As a FedEx manager, you may not write a social media recommendation if the basis for your knowledge is their work for you at FedEx.44

Mayo Clinic

The Mayo Clinic knowingly focuses its policy on its core service interests: patient interactions and privacy. Patient confidentiality and privacy are listed in the first section of the policy.45

General Motors

GM has both an abbreviated and a full version of its social media policy. The full version is built around values such as honesty, clarity, carefulness, respect, awareness, and responsibility. It also recognizes differences in local and regional laws in other nations, such as India, Germany, and Austria.46

These are examples of principles that can be universally incorporated into any social media policy. Every organization is unique and will have to decide what kind of social media usage will be appropriate and effective. Organizations can begin the policy formulation process by bringing the right team together to set the stage.
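Before that team convenes, the security checklist above can serve as a simple coverage test for any existing draft. The Python sketch below is purely illustrative; the element labels and draft section names are hypothetical placeholders, not a standard, and real policy review requires legal judgment, not a script.

```python
# Illustrative coverage check of a draft policy against the security checklist
# above. Element and section names are hypothetical labels for this sketch.

REQUIRED_ELEMENTS = [
    "brand guidelines",
    "confidentiality and personal use",
    "activities to avoid",
    "account ownership",
    "copyright",
    "password practices",
    "software updates",
    "scam awareness",
    "incident response",
]

# Sections a hypothetical draft policy already covers.
draft_sections = {"brand guidelines", "password practices", "incident response"}

gaps = [element for element in REQUIRED_ELEMENTS if element not in draft_sections]

print(f"{len(gaps)} checklist elements missing from the draft:")
for element in gaps:
    print(" -", element)
```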


Mario Sundar, formerly of LinkedIn, advocates bringing in “your most active social media employees to collaborate,” which accomplishes two objectives: you gain the knowledge base of these social-media-savvy employees, and you gain a set of social media evangelists to encourage appropriate social media use.47 Think about including people from representative areas, such as human resources, technology, public relations, and marketing.

Once the actual content-crafting begins, consider the can-dos rather than the thou-shalt-nots in order to nurture good social media practices. If further clarification is needed on acceptable use, provide brief illustrations and examples of what is appropriate rather than tacking on additional don’ts or augmenting the policy with legalese. Ad agency Spark Foundry is also an advocate of simplicity. The essence of its policy is “Be careful discussing any proprietary work as related to the agency or its clients.”48 The clear and direct approach of the policy encourages employees to be authentic on social media and reveals the tone of the company’s culture and brand.49

Sherer, McLellan, and Yantis suggest the following steps for developing organizational social media policies (the audit step is sketched in code after this list):

- Do an audit to find every platform where your brand/organization is featured.
- Identify what goals social media can help the organization accomplish.
- Specify who is responsible for monitoring social media (including “a strategy to avoid genericide and related copyright considerations”).
- Determine areas of responsibility for certain policies the organization must comply with on social media, such as the Americans with Disabilities Act, FTC disclosure requirements, and privacy rights (i.e., the use of photographs with identifiable people).
- Consider what level of acceptable employee social media use the organization is comfortable with.50

In addition to the actual policy development, regular training sessions are recommended, tailored to employees’ skills and talents. Again, make sure the policy is communicated in a way that is clear and easily comprehended, using real-life scenarios and visually engaging pieces such as infographics.51

Drafting policies with a global perspective is increasingly necessary, although this can be a complex endeavor. Differences in privacy, labor, data protection, and free speech laws between countries, for example, mean policies should be adaptable and sometimes will need to address with precision the countries and regulations involved.52
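The audit step from the list above can be made concrete with a small sketch. Everything in the following Python example is a hypothetical illustration (the platforms, handles, and team names are invented): the idea is simply to inventory every account where the organization appears and flag any account with no assigned monitor.

```python
# Illustrative version of the audit step: inventory every account where the
# organization is featured and flag accounts with no assigned monitor.
# All handles and owners here are hypothetical.

accounts = [
    {"platform": "Twitter", "handle": "@example_brand", "monitor": "PR team"},
    {"platform": "Instagram", "handle": "@example_brand", "monitor": None},
    {"platform": "TikTok", "handle": "@example_brand_hq", "monitor": None},
]

for account in accounts:
    if account["monitor"] is None:
        print(f"No monitor assigned: {account['platform']} {account['handle']}")
```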

Conclusion

Changes in the social media realm happen swiftly, and organizations always seem to be in a race to keep up with new platforms and new regulations. Effective social media policies are critical to mitigating risks. No matter the policy or its content, it is worthless if an organization does not adequately train and educate its staff and seek numerous opportunities to communicate the policy’s substance. The policy needs to be highly visible, dynamic, understandable, and employee-centric. Laws in this area will continue to evolve; so will social
media policies. Is there a risk for an organization that uses social media? Yes. Is there potential liability? Yes. Will a good social media policy protect an organization from every possible harm? No. Social media policies are not a cure-all. They are, however, essential. And they need to be well-constructed and administered. Every organization will differ on what it believes is appropriate. But all organizations should have guidelines, principles, goals, or statements – something that provides a framework but also freedom.

FREQUENTLY ASKED QUESTIONS

Six Things Every Social Media Policy for Strategic Communication Should Address

1. Transparency

Every policy needs to address certain facets of the transparency principle: that you are who you say you are and that the content you write is your opinion. Use the simple statement “I work for X, and this is my personal opinion.” It is also a necessity that you clearly disclose if you are being given any money, products, or services by an organization you may choose to write about. The statement can be as straightforward as “Company X gave me this product to try.”

2. Privacy

Employers can monitor employee social media use – to a degree. Employees will have an expectation of privacy in passwords and in areas such as their communications with their attorney. It is vital that employers specifically and clearly state the level of privacy employees can expect in their workplace and as they are working with clients. Something as simple as a Facebook check-in for a meeting at a client’s workplace could contradict the “Safeguarding Confidences” provision of the PRSA Code of Ethics by revealing the name of a client who may wish to remain private.

3. Employee Control

The NLRB seems very willing and eager to step in and fight for employees’ right to express themselves freely on social media sites if the speech relates to working conditions and terms of employment. It is important for employers to realize their policies should not overly restrict speech, such as by including a blanket statement asking employees not to post content about their work, or else they potentially run afoul of the NLRA, Sarbanes-Oxley, and other federal and state laws. With this freedom also comes a responsibility for employers to set clear employee expectations about any monitoring of the conversations taking place. Define the depth of monitoring taking place.

4. Intellectual Property

While materials are increasingly available and simple to copy online, it is important to give credit where it is due and to ask permission before using someone else’s content, such as photographs, videos, and logos. Strategic communicators working for a specific client should also actively monitor the web for suspicious uses of the client’s brand for nefarious purposes, such as online impersonation or the setting up of fake accounts (see Chapter 4).

5. FTC Regulations

The FTC has stepped up its enforcement of regulations concerning deception in online advertising to include areas such as native advertising and endorsements/sponsorships. Any business that regularly works with influencers, bloggers, affiliate marketers, and the like should pay meticulous attention to this area.

6. Tone

When the actual policy-crafting begins, approach the discussion with a “here’s what we can do” viewpoint rather than creating a catalog of thou-shalt-nots, in order to cultivate appropriate social media practices. Provide illustrations, examples, and visuals such as infographics to demonstrate acceptable online behavior and discourage a negative tone of voice and online battles or fights, which can be brand-damaging.

Notes

1. Andreas M. Kaplan & Michael Haenlein, Users of the World, Unite! 61 Business Horizons (2010).
2. Diana Bradley, How Wendy’s, Denny’s and Steak-umm Are Approaching Twitter Amid a Pandemic, PR Week, April 8, 2020, www.prweek.com/article/1679821/wendys-dennys-steak-umm-approaching-twitter-amid-pandemic.
3. Federal Trade Commission, FTC Approves Final Order Prohibiting Machinima, Inc. from Misrepresenting that Paid Endorsers in Influencer Campaigns are Independent Reviewers, March 17, 2016, www.ftc.gov/news-events/press-releases/2016/03/ftc-approves-final-order-prohibiting-machinima-inc.
4. Alireza Golmohammadi, Taha Havakhor, Dinesh Gauri & Joseph Comprix, Why You Shouldn’t Engage with Customer Complaints on Twitter, Harvard Business Review, April 29, 2021, https://hbr.org/2021/04/why-you-shouldnt-engage-with-customer-complaints-on-twitter.
5. The Arthur W. Page Center, Page Principles and PRSA Guidelines, www.pagecentertraining.psu.edu/public-relations-ethics/transparency/lesson-2-corporate-social-responsibility-and-regulation/page-principles-and-prsa-guidelines/ (last accessed February 28, 2022).
6. William Comcowich, Is Your Employee Social Media Policy Outdated? Glean, August 26, 2016, https://glean.info/is-your-employee-social-media-policy-outdated/.
7. Stuart Elliot, When the Marketing Reach of Social Media Backfires, New York Times, March 16, 2011.
8. Id.
9. Rebecca Borison, The Top 10 Social Media Fails of 2014, Inc., December 10, 2014, www.inc.com/rebecca-borison/top-10-social-media-fails-2014.html.
10. Social Media – What’s Your Policy? PR Week, November 19, 2010, www.prweek.com/article/1041541/digital-social-media-policy-whats-policy.
11. Pietrylo v. Hillstone Restaurant Group d/b/a Houston’s, 2009 U.S. Dist. LEXIS 88702 (D. N.J. 2009).
12. Brian Hall, Court Upholds Jury Verdict in Pietrylo v. Hillstone Restaurant Group, Employer Law Report, October 19, 2009, www.employerlawreport.com/2009/10/articles/workplace-privacy/court-upholds-jury-verdict-in-pietrylo-v-hillstone-restaurant-group/#axzz24I04aHO1.
13. Stengart v. Loving Care Agency, Inc., 990 A.2d 650 (2010).
14. Id.
15. City of Ontario v. Quon, 560 U.S. 746 (2010).
16. Lukowski v. County of Seneca, 2009 WL 467075 (W.D.N.Y. 2009) (finding that the “terms of service agreements between customers and businesses have been considered relevant to characterization of privacy interests”).
17. 323 N.L.R.B. 244 (1997).
18. Steven Greenhouse, Company Accused of Firing Over Facebook Post, New York Times, November 8, 2010.
19. Philip L. Gordon, Settlement in NLRB’s AMR/Facebook Case Contains Message for Employers about Social Media Policies, Workplace Privacy Counsel, February 8, 2011, http://privacyblog.littler.com/2011/02/articles/social-networking-1/settlement-in-nlrbs-amrfacebook-case-contains-message-for-employers-about-social-media-policies.
20. Proskauer, Social Media in the Workplace around the World 3.0, April 29, 2014, www.proskauer.com/files/uploads/social-media-in-the-workplace-2014.pdf.
21. Comcowich, supra note 6.
22. Mark G. Jeffries, NLRB Finds Chipotle’s Policies Unlawful, The National Law Review, January 3, 2017, www.natlawreview.com/article/nlrb-finds-chipotle-s-policies-unlawful.
23. Stefanie M. Renaud, The Boeing Company: In a Win for Employers, NLRB Dumps the “Reasonably Construe” Standard for Determining Whether Employee Handbooks’ Violate NLRA Rights, Hirschfeld Kraemer, December 15, 2017, www.hkemploymentlaw.com/blog/boeing-company-win-employers-nlrb-dumps-reasonably-construe-standard-determining-whether-employee-handbooks-violate-nlra-rights.
24. Carolyn S. Toto, Boeing Decision Forges New Balance Between NLRA Rights and Social Media Policies, Pillsbury Winthrop Shaw Pittman, January 17, 2018, www.internetandtechnologylaw.com/boeing-nlra-rights-social-media-policies.
25. James Korte, Boeing: The NLRB Decision That Keeps on Giving, JD Supra, December 16, 2020, www.jdsupra.com/legalnews/boeing-the-nlrb-decision-that-keeps-on-62059/.
26. Faegre Drinker Biddle & Reath LLP, NLRB Expands Employer Options for Social Media and Non-Disparagement Rules, JD Supra, August 27, 2020, www.jdsupra.com/legalnews/nlrb-expands-employer-options-for-89398/.
27. Federal Trade Commission, Enforcement Policy Statement on Deceptively Formatted Advertisements, Federal Register (2016), www.ftc.gov/system/files/documents/federal_register_notices/2016/04/160418enforcementpolicyfrn.pdf.
28. Federal Trade Commission, Native Advertising: A Guide for Businesses (December 2015), www.ftc.gov/tips-advice/business-center/guidance/native-advertising-guide-businesses.
29. Lesley Fair, Lights Out on Unsubstantiated Pain Relief Claims and Deceptive Native Advertising, Federal Trade Commission, June 25, 2020, www.ftc.gov/news-events/blogs/business-blog/2020/06/lights-out-unsubstantiated-pain-relief-claims-deceptive.
30. Federal Trade Commission, FTC Seeks Public Comment on its Endorsement Guides, February 12, 2020, www.ftc.gov/news-events/press-releases/2020/02/ftc-seeks-public-comment-its-endorsement-guides.
31. Federal Trade Commission, How to Make Effective Disclosures in Digital Advertising, March 2013, www.ftc.gov/sites/default/files/attachments/press-releases/ftc-staff-revises-online-advertising-disclosure-guidelines/130312dotcomdisclosures.pdf.
32. Federal Trade Commission, Disclosures 101 for Social Media Influencers, October 2019, www.ftc.gov/system/files/documents/plain-language/1001a-influencer-guide-508_1.pdf.
33. Jim Tobin, Ignorance, Apathy Or Greed? Why Most Influencers Still Don’t Comply With FTC Guidelines, Forbes, April 27, 2018.
34. Federal Trade Commission, Teami, LLC, March 20, 2020, www.ftc.gov/enforcement/cases-proceedings/182-3174/teami-llc.
35. Eric Dahan, The FTC Plans for a New Review of Influencer Marketing Disclosures: Four Things Brands Need to Know, Forbes, March 27, 2020.
36. Federal Trade Commission, Consumer Review Fairness Act: What Businesses Need to Know, February 2017, www.ftc.gov/tips-advice/business-center/guidance/consumer-review-fairness-act-what-businesses-need-know.
37. Id.
38. Kerry Gorgone, But Everyone Else is Doing It: Why There are So Many Illegal Social Media Contests, Schaefer Marketing Solutions, February 11, 2015, https://businessesgrow.com/2015/02/11/social-media-contests.
39. Carolyn Wilman, Why Social Media Contests Must Always Have Official Rules, CFA, December 5, 2017, www.cfapromo.com/why-social-media-contests-must-always-have-official-rules.
40. Jim Belosic, Social Media Contests and the Law: How to Keep Things Legal, Inc., August 7, 2015, www.inc.com/jim-belosic/social-media-contests-and-the-law-how-to-keep-things-legal.html.
41. Cliff Lampe & Nicole B. Ellison, Social Media and the Workplace, Pew Research Center, June 22, 2016, www.pewresearch.org/internet/2016/06/22/social-media-and-the-workplace.
42. Christina Newberry, Social Media Security Tips and Tools to Mitigate Risks, Hootsuite, May 20, 2020, https://blog.hootsuite.com/social-media-security-for-business.
43. Id.
44. FedEx Social Media Guidelines (2018), www.fedex.com/content/dam/fedex/us-united-states/cic/Social-Media-Guidelines-and-FAQ.pdf.
45. Mayo Clinic, Sharing Mayo Clinic, https://sharing.mayoclinic.org/guidelines/for-mayo-clinic-employees/ (last accessed February 28, 2022).
46. General Motors Social Media Policy (2018), www.gm.com/full-social-media-policy.html.
47. Tiffany Black, How to Write a Social Media Policy, Inc., May 27, 2010, www.inc.com/guides/2010/05/writing-a-social-media-policy.html.
48. Steve Heilser, Who Speaks for Your Company on Social? AMA, October 1, 2019, www.ama.org/marketing-news/who-speaks-for-your-company-on-social.
49. Id.
50. James A. Sherer, Melinda L. McLellan & Brittany A. Yantis, The (Social) Media Is the Message: Navigating Legal and Reputational Risks Associated with Employee Social Media Use, The Computer & Internet Lawyer (2019).
51. Justin Walden, Guiding the Conversation: A Study of PR Practitioner Expectations for Nonnominated Employees’ Social Media Use, Corporate Communications (2018).
52. Alonzo Martinez, What Employers Should Consider When Drafting a Social Media Policy, Forbes, February 6, 2020.

Chapter 12

The Future of Discourse in Online Spaces

Jared Schroeder
SOUTHERN METHODIST UNIVERSITY

The conversation about social media and the law changed abruptly on January 6, 2021, when rioters, mostly armed with their phones, attacked the US Capitol. As the frenzied mob shoved and attacked police officers and damaged the iconic building, concerns about social-media-enabled radicalization, extremism, the power of algorithms, and the strength and prevalence of online misinformation and disinformation campaigns transformed from generally hypothetical concerns into a real-life, social-media-infused furor.

Like Instagram posts from a spring break trip, the riot, much of which was planned on social media, was documented in every possible way for online consumption.1 The attack was a made-for-streaming event, with rioters, many dressed in costumes or military-like gear, documenting their every action.2 Some rioters livestreamed the entire attack.3 People posted selfies while sitting in the Speaker of the House’s office and streamed their friends attacking police and threatening to harm politicians. Others recorded themselves berating journalists and posing beside a noose set up near the building.4 Days later, when federal officials started making arrests, many of the rioters attributed their behavior to conspiracy theories and seemed shocked that their extremely well-documented crime spree would be used as evidence against them and that they would face consequences for their behavior.

The riot’s impact on how we think about social media and the law was not complete, however. Social media firms such as Twitter and Facebook took the unprecedented step of banning President Trump from their spaces since many rioters had gone directly from his speech to attacking the Capitol.5 Google and Apple removed Parler, a social media platform that had become popular with conspiracy theorists and those involved with the riot, from their stores, stopping future downloads of the app.6 Soon after, Amazon informed Parler it would no longer host its content, meaning the social media firm would, at least temporarily, disappear.7 Parler sued Amazon, asking the government to force Amazon to continue hosting it, but eventually dropped the lawsuit.8 The tech companies’ decisions gave new life to discussions about how powerful these firms have become when it comes to who can speak, whose messages are
seen, and whose voices can be blocked or removed. In many ways, they have become the new gatekeepers, managing the flow of information in society.

The attack on the Capitol changed the nature of how we talk about social media regulation and the various harms social media pose to democracy, but it did not provide any solutions. The attack provided a stark, tangible, and difficult-to-forget image of the types of unresolved law-and-policy challenges we face. The riot moved what was once a fear, or something seen in generally ethereal ways, into an inescapable reality – social media are having a substantial effect on democracy.

This chapter examines important and unresolved frontiers within social media and the law, most of which were starkly displayed in the Capitol riot. It first considers social-media-related radicalization and extremism before exploring the growing power of AI and bots to determine our worlds and the influence of false and misleading information campaigns on democratic discourse, all with an eye toward the unique legal and policy challenges they pose. These concerns share a common thread – there are no easy remedies for protecting society from these harms.

The Extremist Next Door

Don’t believe Mark Zuckerberg. During his testimony before a House committee in March 2021, Facebook’s creator and CEO placed the blame for increasing levels of extremism elsewhere, generally away from social media firms. He told Congress, “Polarization was rising in America long before social networks were even invented, and it’s falling or stable in many other countries where social networks are popular.”9 His statement was only partially correct. While the 24-hour news cycle, brought about by cable news firms beginning in the 1990s, increased polarization, social media supercharged it.10

Think about it this way: The information we consume creates a world for us. Most of us, for example, have never met the celebrities we see on Instagram, TikTok, and YouTube and in magazines, but we still have opinions and views about them. It seems lots of people have dating and career advice for Taylor Swift. How are these opinions about her and others possible if we don’t have any direct knowledge of them? They are possible because we use the information we consume to draw conclusions. If the information we consume influences how we understand the world, whether it concerns celebrities’ lives or political matters, then how and where we get those building blocks of reality matters.

Social media have fundamentally changed the nature of our information diets, which either nourish or poison the way we see the world, depending on their nutrition levels. This does not mean all social media are a concern for democracy, but the potential for radicalization and extremism increases substantially when tens of millions of people are encountering, and even being funneled toward, all manner of conspiracy theories and false and misleading information.11 Unlike previous eras, where much of the information people used to construct their worlds came from
reporters and editors who did their best to fact-check information before sharing it with the public, little of the information on social media is vetted before it is published.12 Algorithms sort people and ideas, encouraging us to engage with like-minded individuals and information sources.13 After the algorithms do their work, people, faced with an overwhelming, information-rich environment, pick which reality they prefer.14

The choice-rich online environment contributes to another concern regarding extremism and radicalization. Social media have also supercharged these concerns because, when given a choice, people generally align themselves with those who share similar opinions and beliefs.15 Each subsequent wave of the Internet’s evolution has provided more choices regarding the people, sources, and ideas we encounter. On the surface, there’s nothing wrong with people gathering in like-minded communities. In fact, the Internet forces us to make choices in a way that previous information eras did not. Previous generations shared relatively similar reports from a daily newspaper and a few TV stations rather than a multiverse of publishers on a variety of forums. The choices we make regarding the ideas, people, and sources of information we engage with become a concern only when we realize that, when we mostly encounter ideas that align with our pre-existing beliefs, we can only become more extreme and never more open-minded.

Constitutional scholar Cass Sunstein concluded, “Members of a democratic public will not do well if they are unable to appreciate the views of their fellow citizens, or if they see one another as enemies or adversaries in some kind of war.”16 Similarly, sociologist Manuel Castells explained, “When communication breaks down, when it does not exist any longer, . . . social groups and individuals become alienated from each other, and see the other as a stranger, eventually a threat.”17 Ultimately, both scholars’ ideas link social media and extremism. As a worldwide network for communication, the Internet has provided those who were once generally physically isolated in their extreme and potentially dangerous views with like-minded individuals. In other words, they were alone in their geographic communities because of their extreme views, but they found confidence and reinforcement regarding those views online, in places like Facebook and YouTube. Social media have, unintentionally, helped solve a problem for extremists. They have given them a place to create solidarity and increased the likelihood that people will act upon their extremist views. Indeed, the increasing use of social media spaces as gathering points for hate, militia, and conspiracy groups can help explain the growing number of crimes in which the victim was harmed because of their race, religion, or nationality.18

It’s easy to wonder, as the repercussions of such extremism grow, how this problem can be addressed. There is no simple answer. All of the easy avenues are blocked by foundational First Amendment principles that protect our freedom of expression. The government generally cannot halt communication because its content is unpopular or distasteful.19 As the Supreme Court determined in the Texas v. Johnson flag-burning case in 1989, “If there is a bedrock
principle underlying the First Amendment, it is that the government may not prohibit the expression of an idea simply because society finds the idea itself offensive or disagreeable.”20 This includes hateful speech. Justices reinforced this idea in 2011 in Snyder v. Phelps, which dealt with the Westboro Baptist Church’s generally hateful, cruel, homophobic protests at military funerals.21 Chief Justice John Roberts explained, “Such speech cannot be restricted simply because it is upsetting or arouses contempt.”22 Thus, the government cannot simply penalize extreme, potentially dangerous ideas that do not include plans for immediate, lawbreaking danger.23 The government also lacks the power to censor publishers, which blocks another potential solution: laws that would give the government the power to halt online accounts and groups from publishing.24

Furthermore, long before social media existed, the Supreme Court established in Reno v. ACLU in 1997 that First Amendment rights not only apply to the Internet, but the Internet is a powerful tool for democratic discourse.25 In other words, the Supreme Court didn’t begrudgingly extend First Amendment protections to the Internet. Justices enthusiastically concluded networked communication was a democratizing force that deserves substantial protection.26 Thus, the obvious approaches to criminalize or censor hateful, extremist speech online would run into important precedents that support crucial First Amendment protections.

What about the corporations that provide forums for extremists, hate speech, and threats? Can they be held accountable for the content that moves through their spaces? Generally not. The Supreme Court has extended humanlike protections to corporations, meaning laws that require Big Tech firms, such as Facebook and Google, to take down certain content would, for the most part, violate the First Amendment.27 The Court reasoned in the First National Bank v. Bellotti corporate speech case in 1978 that the nature of the speaker cannot determine whether it has First Amendment rights. Justices reasoned, “We thus find no support in the First or Fourteenth Amendment, or in the decisions of this Court, for the proposition that speech that otherwise would be within the protection of the First Amendment loses that protection simply because its source is a corporation.”28 When corporations, including those that provide online forums, are considered people when it comes to First Amendment protections, it becomes difficult to limit their expression or the spaces they provide for others to communicate.

Online forum providers are also protected by Section 230 of the Communications Decency Act, a small part of the same law that was the focus in Reno.29 The law protects online forum providers, including social media firms, from liability for how people use their services.30 Thus, generally, if dangerous groups are planning crimes on social media or a person defames another online, the forum providers are not liable. Lawmakers have consistently criticized Section 230, with many proposing bills that would revise it or create more exemptions, but none of the bills have succeeded.31 Missouri senator
Josh Hawley proposed the Limiting Section 230 Immunity to Good Samaritans Act in 2020, which would have opened the door for citizens to sue tech firms that block or take down political speech.32 The bill would have required tech companies to operate in “good faith” when moderating content.33 Like all the other Section 230 repeal or reform bills proposed in 2020, it died in committee. Importantly, as a potential solution to problems social media firms create, the bill would have violated the First Amendment because it gave the government power to force firms to publish what they otherwise would not publish.

It’s important to remember that the First Amendment not only protects us from government censorship but also protects us from compelled speech.34 The Court ruled in Miami Herald Publishing v. Tornillo, which dealt with a Florida law that required newspapers to publish responses from politicians who were criticized in their pages, that “[t]he clear implication has been that any such a compulsion to publish that which ‘reason tells them should not be published’ is unconstitutional.”35 In the wake of the January 6 riot and social media firms banning Trump and others, lawmakers in Texas and Florida passed laws that would force companies like Facebook and Twitter to provide forums for certain speakers. Federal judges stopped both laws from going into effect because they violated the First Amendment.36 Both are being appealed in the federal court system.

To recap all the things the government can’t do to address dangerous, extreme speech on social media: The government generally cannot tell citizens what they can and cannot publish, censor people or publishers, make tech firms liable for what people publish, or force these companies to leave up content they would take down. So who can limit dangerous, extremist content on social media in the United States? So far, only the companies that provide the forum.

This has had mixed results. Social media firms have been inconsistent in what they remove and the accounts they block and ban.37 This has led some politicians to argue more must be done to limit dangerous content.38 At the same time, others have decried social media firms’ content moderation decisions, contending they are politically biased.39 As mentioned, Texas and Florida, using ideas similar to Senator Hawley’s bill, passed laws that require social media firms to leave certain content online, even if the firm concludes it violates community guidelines.40 These laws would seem to violate the First Amendment, since it protects us, including corporations, from compelled speech, but as of June 2022, courts were continuing to consider whether they were constitutional.41

Absent any obvious regulatory avenue, we are dependent on social media firms to regulate themselves. The self-regulation, however, is inconsistent within and across companies. Twitter and YouTube, via its parent company, Google, have created policies regarding hate speech and violent organizations in their spaces.42 The policies include content deletion, account suspensions, and, if violations continue, account terminations. Researchers still contend YouTube users regularly encounter extremist content.43 Twitter has suspended or blocked
hundreds of thousands of accounts for hateful or threatening expression in recent years and produces detailed reports about its decisions.44

Facebook created an Oversight Board in 2018 to help the company make decisions about content and expression on the platform. The board is an international body of scholars and leaders who are charged with reviewing the social media giant’s content decisions.45 The board is unprecedented in every way. It focuses on just one corporation rather than an entire industry. It also creates a quasi-judicial branch of a corporation that is allegedly independent of its parent. The board appears to have the power to overrule Facebook’s content decisions, having overturned the company’s decisions in four of its first six cases.46 In May 2021, the board declined to overturn Facebook’s indefinite ban on Trump but concluded the company’s community guidelines are unclear and must be revised.47 Skeptics contend the Oversight Board, because it is appointed by Facebook and does not serve the entire industry, will remain ineffective.48 Still, the board represents an unprecedented experiment in industry self-regulation that might just be the best, most plausible avenue through which social media extremism and radicalization can be addressed.

Our AI Overlords

Every time we open a social media app, powerful algorithms make dozens of decisions on our behalf, determining the ideas and people we encounter based on the immeasurable amounts of data the companies have collected about us.49 The AIs presort us and predetermine our online worlds.50 In this sense, AIs have become the overlords of the world-building information we consume. This does not exactly promote them to the status of robot overlords like the Terminator or Ultron from the Avengers movies, at least not yet. They are not physically endangering us. Instead, they are controlling our information universes, which should not be dismissed as inconsequential. Rather than malevolent robots, perhaps AIs can be understood more as puppets that are set in motion by human programmers with a variety of motives.

The algorithms used by firms such as YouTube, Instagram, and Snapchat play powerful roles in predetermining the people and ideas users encounter. If we understand the information we encounter as shaping how we make sense of the world, then the decisions by a computer program to sort people or present certain ideas but not others, before a user can make any affirmative choices on their own, can have a dramatic influence on democracy. YouTube’s viewing suggestions, found in the “up next” sidebar and in the videos that appear on the home page when people log in, account for more than 70% of its traffic.51 That means the videos YouTube thinks we want to see, its algorithmically selected videos, carry a disproportionate ability to influence people, particularly in regard to reinforcing a viewer’s existing beliefs.52 If a person watches a clip about the Capitol riot, for example, and they have
watched videos about Donald Trump before, the algorithm might suggest they watch a Trump speech. After that speech, the algorithm might suggest a video that relates to a conspiracy theory Trump suggested. The next suggestion might be an extremist group. Before long, a viewer, thanks to suggestions made by algorithms, has been inundated with false and misleading radicalizing information. The algorithm, in other words, can lead people down a rabbit hole, which is often characterized by more and more extreme information. Does this hypothetical sound too far-fetched? Information scholar Zeynep Tufekci documented a similar experience on YouTube, where she started watching Trump rallies and ended up watching “white supremacist rants, Holocaust denials, and other disturbing content.”53 A similar process happens on Facebook. The social media giant’s algorithm emphasizes interactions, thus giving more prominence to users who post, comment, and like often, as well as to messages that receive more attention.54 Combine this with a 2018 study that found “falsehoods diffused significantly farther, faster, deeper, and more broadly than truth in all categories of information,”55 and Facebook’s algorithm becomes a recipe for super-charged extremism and radicalization. Facebook’s algorithm-based recommendation tool accounted for 64% of the growth of extremist groups on the platform.56 By 2018, Facebook was aware its “algorithms exploit the human brain’s attraction to divisiveness.”57 With this knowledge in hand, the company decided to do nothing, according to a Wall Street Journal report.58 Why? The same question can be asked of other social media services. Why don’t they do more to adjust their algorithms to limit hate, extremism, and other harms to democracy? The shortest possible answer – money. Again, we run into a contradiction between Zuckerberg’s words and Facebook’s actions. During his testimony before Congress in spring 2021, Zuckerberg contended, “Others claim that algorithms feed us content that makes us angry because it’s good for business, but that’s not accurate, either.”59 Facebook’s decision to do nothing, in its actions rather than its words, about internal data that the site was super-charging extremism support another conclusion. The foundation of social media firms’ business models indicates that there is a substantial motivation for the social media giant to do nothing about these problems. Let’s spend a minute on economics. According to a report in 2020, Facebook made about $178 per North American user per year in revenue, mostly from targeted advertising.60 Twitter made about $17.20 per user, Snapchat $11, and Pinterest $4.61 Any adjustments to the algorithms, especially if the changes limit ideas that receive substantial traffic, such as lies about the election, will lead to a loss in profit. Each user who leaves Facebook takes about $178 a year in revenue away. If Facebook limits recommendations to extremist groups and if 100,000 of Facebook’s 2.7 billion active users stop interacting on the service, they lose $17.8 million a year. The same can be said about Instagram (also owned by Facebook) and Snapchat. Changes that limit engagement


Dipayan Ghosh, who leads the Digital Platforms & Democracy Project at Harvard, came to a similar conclusion. He explained:

    At the very center of the consumer internet – beyond the superficial manifestations of harm such as the disinformation problem or the spread of hate speech or the encouragement of persistent algorithmic bias – is a silent mechanism that works against the will of the very people whose attention, desires, and aspirations it systematically manipulates with a cold technical precision.62

AI overlords indeed. Before examining potential solutions and unresolved legal questions that surround these concerns, we must first examine one other concern regarding AI entities and human discourse – bots.

ELIZA, Miquela, and Friends

In 1966, Joseph Weizenbaum, a computer scientist at MIT, created the first humanlike computer program and named it ELIZA.63 The program was capable of conducting the type of conversation a person might have with a therapist, responding to the information people provided with prompts such as "tell me more about that" or "I see."64 Sherry Turkle, who experimented with the program as a student and now leads the MIT Initiative on Technology and Self, noticed early on that people enjoyed speaking with ELIZA.65 She explained, "People used the program as a projective screen on which to express themselves."66
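ELIZA's trick can be imitated in a few lines of modern code. The Python sketch below is hypothetical and far cruder than Weizenbaum's program, but it shows the underlying technique: match a keyword pattern and reflect the user's own words back as an open-ended prompt.

    import re

    # A tiny, hypothetical imitation of ELIZA's pattern-and-reflect technique.
    # Weizenbaum's 1966 program was far more elaborate than these two rules.
    RULES = [
        (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
        (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    ]

    def respond(message):
        for pattern, template in RULES:
            match = pattern.search(message)
            if match:
                return template.format(match.group(1))
        return "Tell me more about that."  # the famous fallback prompt

    print(respond("I feel anxious about the news"))  # Why do you feel anxious about the news?
    print(respond("Something happened today"))       # Tell me more about that.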

Until the widespread growth of networked technologies, programs like ELIZA were cordoned away, stuck on dark, gloomy hard drives in research labs like Weizenbaum's. Then, with the development and massive adoption of the Internet, AI communicators suddenly had a space to flourish and interact with humans. As the famous New Yorker cartoon from the early 1990s joked, "On the Internet, nobody knows you're a dog."67 In other words, the Internet does not require a human form, like a conversation on the street does, making it a perfect place for AI entities to join humans in conversations about a variety of topics.

Perhaps Miquela Sousa, the musician, model, and influencer with more than 3.1 million Instagram followers as of December 2021, has leveraged this transition to online spaces best. Miquela, it turns out, has no human form. She is a computer program that has music videos on YouTube, has attended the CFDA Fashion Awards, and has even broken up with a real-life boyfriend, which has to be the ultimate heartbreak since not even a computer program wants to be with him.68

Why does this matter when it comes to major questions about free expression and extremism, radicalization, and social media? Miquela might seem harmless since she's breaking hearts, not democracies, but bots – modern, far more complex versions of ELIZA – have been dispatched by all manner of groups to join human discourse about matters of public concern.

Researchers in 2020 examined 200 million tweets that discussed COVID-19 and the pandemic.69 They found 82% of the most influential retweeting accounts were bots, and 62% of the top 1,000 retweeters were bots.70 Similarly, in fall 2018, as a caravan of immigrants moved north toward the US-Mexico border, bots, many of them spreading false and misleading information, made up about 60% of the discourse about the matter on Twitter.71

Bots, as nonhuman speakers, have proven to be influential participants in 21st-century democratic discourse, opening a new frontier of ethical and legal questions. Bots do not become emotional, grow tired, have personal ethics, or invest in family or faith, yet they often move seamlessly through human interactions about matters of public concern. Since thousands of bots can be programmed and set working by a single person in one day, they can overwhelm the marketplace of ideas with a certain conclusion, creating the false impression that most people generally agree about the matter, when in reality most of the communicators are not human; they just appear to be.72 Bots can also trick social media algorithms and artificially elevate ideas, making them more likely to be seen.73 In 2018, for example, a company that sells grilled lamb paid a firm to boost its brand on Twitter in Saudi Arabia. For about $200, the firm used an army of bots to push "grilled lamb delivery" into Twitter's "trending topics."74 The ploy helped draw attention to the company, but it also showed the growing power of bot puppeteers to pull the strings and create artificial winners and losers in the marketplace of ideas.

Finally, bots can make it impossible for humans to communicate because they drown out people's voices by publishing countless messages about an issue or hashtag.75 While the 17,000 bot-based tweets about grilled lamb in Saudi Arabia probably didn't push many human speakers out of the discussion, consider a gun control debate or an election. In another example, a study found that 45% to 60% of all Twitter accounts discussing COVID-19 in 2020 were bots.76 Many of the accounts argued America should reopen rather than keep pandemic safety measures in place.77 That volume of bot accounts communicating a certain idea regarding a major health crisis had the power to influence discourse and drown out human voices. Suddenly, in scenarios such as these, human communicators' messages are lost in a sea of bot-based babble. Human discourse becomes a needle in a haystack.
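Some invented arithmetic illustrates how a small share of automated accounts can swamp a discussion. The numbers in this Python sketch are hypothetical, chosen only to mirror the pattern researchers have documented:

    # Hypothetical figures showing how a minority of bot accounts can produce
    # a majority of the messages in a discussion.
    human_accounts = 9_000
    posts_per_human = 2     # a typical person weighs in a couple of times
    bot_accounts = 1_000    # bots are only 10% of the accounts . . .
    posts_per_bot = 30      # . . . but each one posts relentlessly

    bot_posts = bot_accounts * posts_per_bot        # 30,000
    human_posts = human_accounts * posts_per_human  # 18,000
    bot_share = bot_posts / (bot_posts + human_posts)
    print(f"Bots produce {bot_share:.0%} of all posts")  # about 62%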


All these concerns about the power of algorithms and the role of bots in democratic discourse raise questions about what can be done to protect human expression in the AI era. The courts have made few rulings specifically dealing with free expression and AI. Without clear direction from the courts, we have at least two ways to approach the question. First, it could be argued AI is not human and therefore does not have First Amendment rights. Second, it could be argued AI is created by humans, so its rights should be based on existing precedents about human expression. Both paths, however, lead to the same place. When the Supreme Court concluded First Amendment safeguards extend to corporations, it did so by emphasizing that the nature of the speaker cannot be determinative when it comes to free expression rights.78 Justices explained in the Citizens United campaign finance case in 2010:

    The identity of the speaker is not decisive in determining whether speech is protected. Corporations and other associations, like individuals, contribute to the "discussion, debate, and the dissemination of information and ideas" that the First Amendment seeks to foster. . . . The Court has thus rejected the argument that political speech of corporations or other associations should be treated differently under the First Amendment simply because such associations are not "natural persons."79

Thus, while the Court has not made an AI-specific ruling, it has emphasized, rather emphatically, that the nature of the speaker should have nothing to do with a speaker's free-expression protections. Similarly, if we assume AI is merely a tool created by humans, then, like a person who uses a digital music file or a Word document to express themselves, AI is merely a vehicle for delivering human speech. Under both lines of thought, the First Amendment appears to protect AI expression from government regulation. This means there are few legal or policy approaches, absent creating an exception for AI, that can address the problems of algorithmic determinism or the potential for bots to dominate the marketplace of ideas.

Essentially, we are again left to trust social media firms to create a functional space for human interaction. As noted, this is a big ask because social media firms face substantial financial incentives to emphasize engagement over a safe, inclusive environment for interaction. Thus, these companies are unlikely to change their algorithms unless there is a financial incentive to do so. When it comes to bots, however, social media firms have made efforts to delete accounts and change their policies to limit them. Instagram, in an effort to ensure content comes "from real people, not bots," in August 2020 started requiring more data before a new account could be created.80 In 2021, the streaming site Twitch deleted about 7.5 million bot-based accounts to limit artificially inflated viewer and follower tallies.81 Facebook reported blocking 4.5 billion accounts, most of which were bot-based, in the first nine months of 2020.82 That number, equivalent to about 60% of the world's population, illustrates the growth of AI-based accounts online. Since we rely on these firms, rather than traditional public spaces, for our public debate, removing bot accounts can help, as do the policies social media firms have crafted against bad bot-based behavior.

Broadly, algorithms and bots occupy increasingly influential roles in the information people encounter and represent an entirely new type of communicator in the marketplace of ideas. Along with these new actors have come increasing amounts of false and misleading information, which represents another area of concern regarding extremism, radicalization, and threats within democratic discourse.

234

Jared Schroeder


True or False?

Mark Zuckerberg's appearance on CBS News in the summer of 2019 was a complete lie. The lie was not in what he said but in the fact that the appearance never happened. The clip in which Zuckerberg claimed, "Imagine this for a second: One man, with total control of billions of people's stolen data, all their secrets, their lives, their futures," was a deepfake.83 A deepfake is an audio or video clip that portrays a person saying or doing something they never said or did.84 Deepfakes are created by advanced adversarial AI networks.85 One deep-learning network generates the deepfake using audio and video provided by the creator. The other deep-learning network identifies flaws in the generator's version until, together, the two networks refine the deepfake into a believable representation of reality.86 Except it's not.
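The adversarial back-and-forth can be sketched in miniature. The toy Python example below is not a real deep-learning system – the "real" data is reduced to a single number, and both networks are reduced to a few lines – but it illustrates the generate, critique, and refine loop that deepfake tools rely on.

    import random

    # A toy stand-in for adversarial training. Real deepfake systems pit two
    # deep neural networks against each other over audio and video; here the
    # "real" data is just the number 42.0.
    REAL_VALUE = 42.0

    def discriminator(sample):
        # Scores how detectably fake a sample is (0 = indistinguishable).
        return abs(sample - REAL_VALUE)

    fake, step = 0.0, 8.0
    for _ in range(300):  # generate, critique, refine, repeat
        candidate = fake + random.uniform(-step, step)  # generator's new attempt
        if discriminator(candidate) < discriminator(fake):
            fake = candidate  # keep whichever version fools the critic best
        step = max(step * 0.99, 0.05)  # refine in progressively smaller steps

    print(round(fake, 2))  # typically lands very close to 42.0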

Deepfakes are perhaps the most complex vehicles for misinformation and disinformation in online environments that are increasingly filled with intentional, misleading falsehoods. They harness audio and video, forms of media audiences are used to trusting because they see and hear the action with their own senses, to create convincing lies.87 While deepfakes have primarily been used to edit celebrities into pornography, they continue to evolve into a threat to the world-building value of information.88

Lesser versions of deepfakes, sometimes called "cheap fakes," have been more common. In May 2019 and August 2020, groups posted edited audio clips that made it sound as if Speaker of the House Nancy Pelosi's speech was slurred.89 Similarly, a manipulated clip made it appear that President Biden wasn't sure what state he was in during the 2020 presidential campaign.90

Deepfakes and cheap fakes are more complex versions of the carefully calculated falsities that have increasingly circulated online. Partisans have been misusing images, taking them out of context to mislead people, for more than a decade. Turning Point USA, a right-wing organization with more than 2.5 million followers on Facebook and nearly half a million followers on Twitter, posted an image of empty grocery store shelves in 2020. The photo included the words #SocialismSucks.91 The empty shelves were not the result of socialism; the photo was taken after a massive earthquake in Japan in 2011.92 Similarly, in 2019, people started sharing photos of piles of trash, contending the garbage was left in London's Hyde Park after an environmental group met there. One of the pictures was from Mumbai, India, and another from a different event at the park.93 Despite the images being false, they tend to spread and be believed because they reinforce personal beliefs, as well as the beliefs that define ideologically homogenous spaces.94

Bots also have a role to play in the increasingly falsity-infested online information environment. Researchers, for example, found bots are communicating substantial amounts of false information about climate change. In an analysis of nearly a million tweets about climate change, researchers found about 10% of the accounts involved in the discussion were bots.95 The kicker, though, was that those accounts produced 25% of the tweets. The bots made false statements about what is known about climate change, including falsely attributing disparaging comments about climate science to respected scientists.96 Of course, the bot accounts don't say they are bot accounts. They generally blend in with human accounts and have become more and more difficult to pick out.

Anyone can publish online, so good, old-fashioned lies, absent complex deepfake technologies or armies of bots, often flow through like-minded online communities. Spreading false information online, where traditional news media do not act as gatekeepers regarding what is published, has become part of the authoritarian playbook in recent years. Indian Prime Minister Narendra Modi tweeted photos about the massive crowds his rally drew in 2021. The only problem: the photos were from a 2019 rally for the opposition party.97 By the time fact-checkers got around to telling people the tweets were false, the lie had already traveled around the world. The truth cannot keep up. Brazilian President Jair Bolsonaro has also manipulated social media to spread false information. Facebook took down fake accounts in which Bolsonaro staffers, posing as journalists and regular citizens, posted false information.98

The United States has seen plenty of false information online. After the 2020 election, social media users posted that 40,000 pro-Biden ballots were flown into Arizona from China and stuffed in ballot boxes in Maricopa County.99 The lie, as such falsehoods often do, filtered up through communities until, even months after the election was decided, Republican officials were fruitlessly searching Arizona ballots for bamboo fibers to prove the votes were from China. In another example, at least 800 people around the world died after drinking bleach or other household cleaning products to avoid getting COVID-19 in 2020.100 The deaths and countless hospitalizations followed Facebook, Twitter, Instagram, and TikTok posts claiming that bleach, as well as cocaine, could cure COVID-19.101 Similar misinformation has been blamed for higher-than-expected vaccine hesitancy, as people, confused by false information they saw online, have decided not to get a COVID-19 vaccine.102

What is to be done about the eroding quality of information online? The threats to the information environment grow more complex as deepfake technologies, bots, and humans seamlessly communicate false information, which is often readily received by communities that are primed to accept it as truth.103 This emerging information environment places more pressure on people to discern truth from falsity than has been expected in any previous era.

What can be done? Very little. False information that is not defamatory is generally protected by the First Amendment.


The Supreme Court ruled in United States v. Alvarez, a case in which a public official falsely claimed he had earned the highest military honors, that the government should not regulate what is true or false. The Court reasoned, "The remedy for speech that is false is speech that is true. This is the ordinary course in a free society."104 As noted in the preceding section, false information that is spread by AI is generally understood as protected via the programmer's First Amendment rights. Lawmakers could try to create laws that specifically limit AI-based falsities, on the theory that computer programs themselves don't have First Amendment rights, but the Court has emphasized the nature of the speaker should not determine whether expression is protected.105 With this in mind, it is easy to contend that the way we have come to interpret the First Amendment, and our rationales for why speech is protected, have become out of date in the 21st-century networked environment. Deepfakes, manipulated media, and intentional falsities spread in the knowledge that the truth will struggle to catch up pose a real danger to democracy and people's ability to participate in the marketplace of ideas.106

Absent new laws or new rationales for free expression, we again find ourselves at the mercy of social media firms' interests in safeguarding the communication environments they have created. Facebook, Instagram, Twitter, TikTok, and YouTube have banned deepfakes.107 Deepfakes will likely still appear in these spaces but will be taken down. The question is, how quickly? Social media firms have also developed manipulated content policies, which cover cheap fakes and misleading photos. Again, this does not guarantee falsities will not circulate for a time before they are caught and removed. Finally, firms, particularly during the 2020 presidential election cycle, started labeling false and misleading content. Facebook and Twitter began adding labels indicating content might be false or providing links to accurate information.108 Ultimately, all these steps place substantial trust in social media firms to police an information environment that is otherwise toxic, false, and misleading, and that played a substantial role in the Capitol riot. Aside from what social media firms do, the information environment requires increasingly strong media literacy and skepticism from citizens.

DISCUSSION QUESTIONS

1. Does it violate the First Amendment when social media firms ban speakers or take down content?

Generally not. The First Amendment applies to government limitations on freedom of expression. As private corporations, social media firms do not have to follow the First Amendment. In fact, it would violate the First Amendment if the government forced social media firms to leave content up that they would otherwise take down. For more information about this, see Miami Herald v. Tornillo (1974) and New York Times v. United States (1971). Together, the cases set the precedent that the government can neither censor publishers nor force them to publish.



2. Does AI have First Amendment rights?

Sort of. While the Supreme Court has never addressed whether AI has free expression rights, it has concluded corporations should receive the same or similar First Amendment protections as humans. Key cases in this area are First National Bank v. Bellotti (1978) and Citizens United v. Federal Election Commission (2010). In these cases, the Court extended humanlike protections to artificial, human-made entities – corporations. The Supreme Court has emphasized the nature of the speaker should not matter when it comes to deciding whether free expression is protected.


Why can’t the FCC regulate the Internet like it does broadcast media? The FCC was created because the unique nature of broadcast communication. TV and radio broadcasts use public airwaves, which come from a limited spectrum of frequencies. Since there are limitations on the number of broadcast frequencies, the FCC was created to maintain these spaces. Over time, the Supreme Court established the government, via the FCC, can compel broadcasters to publish content they otherwise would not publish and that indecent content, while protected by the First Amendment, can at times be regulated on broadcast. It’s easy to see parallels between social media and broadcast.The Supreme Court, however, rejected the government argument in Reno v. ACLU in 1997 that the Internet should be regulated like broadcast. Instead, the Court concluded the Internet should receive the highest level of potential First Amendment protection. The Court concluded, though it might seem naïve now, that the Internet is a realization of the democratic dream that every citizen can take part in self-government.


Isn’t everything Section 230 of the Communications Decency Act’s Fault? No. It’s a shame that many politicians and pundits have chosen to misrepresent what Section 230 does and means. Section 230 protects forum providers from liability for how people use their services. Section 230 does not keep the government from censoring social media; the First Amendment does. Section 230 does not protect social media firms when


Section 230 expedites more than it protects. If Section 230 disappeared, social media firms would still have substantial First Amendment protections against liability claims and regulation attempts. They would almost always win the cases against them. Section 230 merely provides online forum providers security in knowing they will not constantly face long, drawn-out, and expensive lawsuits over the millions of ideas that are posted in their spaces each day. Most of the time, when people complain about Section 230, they are actually upset at the First Amendment, the foundation of democracy.

Notes

1. Sara Morrison, The Capitol Rioters Put Themselves All over Social Media. Now They're Getting Arrested, Vox, January 19, 2021, www.vox.com/recode/22218963/capitol-photos-legal-charges-fbi-police-facebook-twitter; Sheera Frenkel, The Storming of the Capitol Was Organized on Social Media, New York Times, January 6, 2021.
2. Vanessa Friedman, Why Rioters Wear Costumes, New York Times, January 7, 2021; Elana Sheppard, Pro-Trump Capitol Rioters Like the "QAnon Shaman" Looked Ridiculous – by Design, NBC News, January 13, 2021.
3. Julia Alexander, Jacob Kastrenakes & Bijan Stephen, How Facebook, Twitch, and YouTube Are Handling Live Streams of the Capitol Mob Attack, The Verge, January 6, 2021; Alex Woodward, Neo-Nazi Conspiracist "Baked Alaska" Arrested after Live-Streaming from Capitol Insurrection, The Independent, January 16, 2021.
4. Rebecca Shabad, Noose Appears near Capitol; Protestors Seen Carrying Confederate Flags, NBC News, January 7, 2021; David Bauder, Journalists Recount Harrowing Attacks Amid Capitol Riot, The Associated Press, January 8, 2021.
5. Amy Sherman, A Timeline of What Donald Trump Said before the Capitol Riot, Poynter, February 11, 2021, www.poynter.org/fact-checking/2021/a-timeline-of-what-donald-trump-said-before-the-capitol-riot/; Charlie Savage, Incitement or Riot? What Trump Told Supporters before Mob Stormed Capitol, New York Times, January 10, 2021.
6. Jack Nicas & Davey Alba, Amazon, Apple and Google Cut Off Parler, an App that Drew Trump Supporters, New York Times, January 13, 2021; Brian Fung, Parler Has Now Been Booted by Amazon, Apple and Google, CNN, January 11, 2021, www.cnn.com/2021/01/09/tech/parler-suspended-apple-app-store/index.html.
7. Bobby Allyn, Judge Refuses to Reinstate Parler after Amazon Shut It Down, NPR, January 21, 2021; Cameron Peters, Why Conservatives' Favorite Twitter Alternative Has Disappeared from the Internet, Vox, January 11, 2021, www.vox.com/2021/1/10/22223250/parler-amazon-web-services-apple-google-play-ban.
8. Russell Brandom, Parler Drops Federal Lawsuit Against Amazon, Files Another in State Court, The Verge, March 3, 2021, www.theverge.com/2021/3/3/22310873/parler-amazon-aws-lawsuit-antitrust-hosting-free-speech.


9. Mark Zuckerberg Opening Statement Transcript: House Hearing on Misinformation, Rev, March 25, 2021, www.rev.com/blog/transcripts/mark-zuckerberg-opening-statement-transcript-house-hearing-on-misinformation.
10. Gregory J. Martin & Ali Yurukoglu, Bias in Cable News: Persuasion and Polarization, National Bureau of Economic Research, December 2014, www.nber.org/system/files/working_papers/w20798/w20798.pdf.
11. Nabeel Gillani, Ann Yuan, Martin Saveski, Soroush Vosoughi & Deb Roy, Me, My Echo Chamber, and I: Introspection on Social Media Polarization, Proceedings of the 2018 World Wide Web Conference 823, 829–830 (2018); Katherine J. Wu, Radical Ideas Spread Through Social Media. Are the Algorithms to Blame? PBS, March 28, 2019, www.pbs.org/wgbh/nova/article/radical-ideas-social-media-algorithms.
12. Pamela J. Shoemaker & Tim P. Vos, Gatekeeping Theory 3–7 (2009); Mark Verstraete & Derek E. Bambauer, Ecosystem of Distrust, 16 First Amendment Law Review 129, 135–137 (2017).
13. Eytan Bakshy, Solomon Messing & Lada A. Adamic, Exposure to Ideologically Diverse News and Opinion on Facebook, 348 Science 1130 (2015); Philip M. Napoli, What If More Speech Is No Longer the Solution, 70 Fed. Comm. L.J. 55, 77–79 (2018).
14. Clay Shirky, Here Comes Everybody 77–79 (2008); Manuel Castells, The Rise of the Network Society 3 (2000).
15. Itai Himelboim, Stephen McCreery & Marc Smith, Birds of a Feather Tweet Together: Integrating Network and Content Analyses to Examine Cross-Ideology Exposure on Twitter, 18 Journal of Computer-Mediated Communication 154, 167, 171 (2013).
16. Cass Sunstein, #Republic: Divided Democracy in the Age of Social Media 44 (2017).
17. Manuel Castells, The Rise of the Network Society 3 (2000).
18. Michael Balsamo, Hate Crimes in U.S. Reach Highest Level in More Than a Decade, Associated Press, November 16, 2020; Southern Poverty Law Center, 2019 Year in Hate and Extremism Report Release, March 18, 2020, www.splcenter.org/news/2020/03/18/2019-year-hate-and-extremism-report-release.
19. Texas v. Johnson, 491 U.S. 397, 414 (1989).
20. Id.
21. 562 U.S. 443 (2010).
22. Id. at 458.
23. Brandenburg v. Ohio, 395 U.S. 444, 447–448 (1969).
24. Near v. Minnesota, 283 U.S. 697 (1931); New York Times v. United States, 403 U.S. 713 (1971).
25. 521 U.S. 844, 870 (1997).
26. Id.
27. First National Bank of Boston v. Bellotti, 435 U.S. 765, 777–778 (1978); Citizens United v. Federal Election Commission, 558 U.S. 310, 346–347 (2010).
28. 435 U.S. at 784.
29. The Communications Decency Act was part of the massive Telecommunications Act of 1996. Reno v. ACLU dealt with two provisions of the act, which sought to limit the availability of indecent content to children. The Court struck down both provisions, but Section 230, a relatively unheralded passage in the massive law, was not addressed. See Reno v. ACLU, 521 U.S. 844, 857–859 (1997).
30. Communications Decency Act, 47 U.S.C. § 230 (1996). The crucial passage reads: "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider."
31. Kiran Jeevanjee, Brian Lim, Irene Ly, Matt Perault, Jenna Ruddock, Tim Schmeling, Niharika Vattikonda & Joyce Zhou, All the Ways Congress Wants to Change Section 230, Slate, March 23, 2021, https://slate.com/technology/2021/03/section230-reform-legislative-tracker.html.
32. Limiting Section 230 Immunity to Good Samaritans Act, S. 3983, 116th Cong. (2020).
33. Id.
34. See Miami Herald v. Tornillo, 418 U.S. 241, 256 (1974); West Virginia State Board v. Barnette, 319 U.S. 624, 633–634 (1943).
35. Miami Herald, 418 U.S. at 256.
36. Sara Morrison, Florida's Social Media Free Speech Law Has Been Blocked for Likely Violating Free Speech Laws, Vox Recode, June 1, 2021, www.vox.com/recode/2021/7/1/22558980/florida-social-media-law-injunction-desantis; James Pollard, Federal Judge Blocks Texas Law That Would Stop Social Media Firms from Banning Users for a "Viewpoint," Texas Tribune, December 1, 2021, www.texastribune.org/2021/12/01/texas-social-media-law-blocked.
37. Kyle Langvardt, Regulating Online Content Moderation, 106 Geo. L. Rev. 1354, 1355–1358 (2018); Faiza Patel and Laura Hecht-Felella, Facebook's Content Moderation Rules Are a Mess, Brennan Center, February 22, 2021, www.brennancenter.org/our-work/analysis-opinion/facebooks-content-moderation-rules-are-mess.
38. Musadiq Bidar, Lawmakers Vow Stricter Regulations on Social Media Platforms to Combat Misinformation, CBS News, March 25, 2021; David Klepper, Dems to Facebook: Get Serious about Misinformation, Hate, Associated Press, September 28, 2020.
39. Adam Gabbatt, Claim of Anti-Conservative Bias by Social Media Firms is Baseless, Report Finds, The Guardian, February 1, 2021; Ben Brody, Cruz, Hawley Want the FTC to Probe Social Media Content Curation, Bloomberg, July 15, 2019, www.bloomberg.com/news/articles/2019-07-15/cruz-hawley-want-the-ftc-to-probe-social-media-content-curation.
40. Shawn Mulcahy, Texas Senate Approves Bill to Stop Social Media Companies from Banning Texans for Political Views, Texas Tribune, April 1, 2021, www.texastribune.org/2021/03/30/texas-soical-media-censorship/; Kim Lyons, Florida Bill Would Fine Social Media Platforms for Banning Politicians – With Exemption for Disney, The Verge, May 1, 2021, www.theverge.com/2021/5/1/22413934/florida-bill-restrict-social-media-politicians-twitter-facebook-disney-trump.
41. Miami Herald v. Tornillo, supra note 34; West Virginia State Board v. Barnette, 319 U.S. 624, 633–634 (1943); Citizens United v. Federal Election Commission, 558 U.S. 310, 346–347 (2010).
42. Violent Organizations Policy, Twitter, October 2020, https://help.twitter.com/en/rules-and-policies/violent-groups; Hate Speech Policy, YouTube, June 5, 2019, https://support.google.com/youtube/answer/2801939?hl=en.
43. Brendan Nyhan, YouTube Still Hosts Extremist Videos. Here's Who Watches Them, Washington Post, March 10, 2021.
44. Rules Enforcement, Twitter, January 11, 2021, https://transparency.twitter.com/en/reports/rules-enforcement.html#2020-jan-jun.
45. Expertise from around the World, Oversight Board, https://oversightboard.com/meet-the-board/ (last accessed February 28, 2022).
46. Elena Debra, The Independent Facebook Oversight Board Has Made Its First Rulings, Slate, January 28, 2021.
47. Oversight Board Upholds Former President Trump's Suspension, Finds Facebook Failed to Impose Proper Penalty, Oversight Board, May 2021, https://oversightboard.com/news/226612455899839-oversight-board-upholds-former-president-trump-s-suspension-finds-facebook-failed-to-impose-proper-penalty.
48. Dipayan Ghosh & Jared Schroeder, Facebook's Oversight Board Needs Greater Authority, Protego Press, May 20, 2021, https://protegopress.com/facebooks-oversight-board-needs-greater-authority/; Will Facebook's Oversight Board Actually Hold the Company Accountable? Washington Post, May 17, 2021.
49. Katherine J. Wu, Radical Ideas Spread Through Social Media. Are the Algorithms to Blame? PBS, March 28, 2019.
50. Alexandra Giannopoulou, Algorithmic Systems: The Consent Is in the Detail? 9 Internet Policy Review 1, 10–11 (2020); Urbano Reviglio & Claudio Agosti, Thinking Outside the Black-Box: The Case for Algorithmic Sovereignty in Social Media, 6 Social Media + Society 2, 5–6 (2020).
51. Kevin Roose, The Making of a YouTube Radical, New York Times, June 8, 2019.
52. Karen Ho, YouTube Is Experimenting With Ways to Make Its Algorithm Even More Addictive, MIT Technology Review, September 27, 2019, www.technologyreview.com/2019/09/27/132829/youtube-algorithm-gets-more-addictive.
53. Zeynep Tufekci, YouTube, the Great Radicalizer, New York Times, March 10, 2018.
54. Luke Darby, Facebook Knows It's Engineered to "Exploit the Human Brain's Attraction to Divisiveness," GQ, May 27, 2020.
55. Soroush Vosoughi, Deb Roy & Sinan Aral, The Spread of True and False News Online, Science, March 9, 2018.
56. Jeff Horwitz & Deepa Seetharaman, Facebook Executives Shut Down Efforts to Make the Site Less Divisive, Wall Street Journal, May 2, 2020.
57. Id.
58. Id.
59. Mark Zuckerberg Opening Statement Transcript: House Hearing on Misinformation, Rev, March 25, 2021, www.rev.com/blog/transcripts/mark-zuckerberg-opening-statement-transcript-house-hearing-on-misinformation.
60. Michelle Gao, Facebook Makes More Money Per User Than Rivals, but It's Running Out of Growth Options, CNBC, November 3, 2020.
61. Id.
62. Dipayan Ghosh, Terms of Disservice 44 (2020).
63. See Joseph Weizenbaum, ELIZA, A Computer Program for the Study of Natural Language Communication Between Man and Machine, 9 Comm. of the ACM 36 (1966), which discusses the original computer chat program.
64. To try chatting with ELIZA, go to http://psych.fullerton.edu/mbirnbaum/psych101/eliza.htm.
65. Sherry Turkle, Alone Together 23–35 (2011).
66. Id. at 24.
67. Michael Cavna, "Nobody Knows You're a Dog": As Iconic Cartoon Turns 20, Creator Peter Steiner Knows the Joke Rings as Relevant as Ever, Washington Post, January 1, 2013.
68. Claire Lampen, CGI Influencer "Consciously Uncoupling" from Human Boyfriend, The Cut, March 5, 2020, www.thecut.com/2020/03/lil-miquela-cgi-influencer-breaks-up-with-human-boyfriend.html; Nicky Campbell, Miquela on Attending Her First CFDA Awards, CFDA, June 7, 2019. See its Instagram account, @lilmiquela, www.instagram.com/lilmiquela/?hl=en.
69. Virginia Alvino Young, Nearly Half of the Twitter Accounts Discussing "Reopening America" May Be Bots, Carnegie Mellon Computer Science, May 20, 2020, www.cs.cmu.edu/news/nearly-half-twitter-accounts-discussing-reopening-america-may-be-bots.
70. Id.


71. Issie Lapowsky, Here's How Much Bots Drive Conversation During News Events, Wired, October 30, 2018.
72. Molly McKew, How Twitter Bots and Trump Fans Made #ReleaseTheMemo Go Viral, Politico, February 4, 2018.
73. How Much to Fake a Trend on Twitter? In One Country, about £150, BBC, March 2, 2018.
74. Id.
75. Jared Schroeder, Marketplace Theory in the Age of AI Communicators, 17 First Amendment Law Review 22, 60–61 (2018).
76. Karen Ho, Nearly Half of Twitter Accounts Pushing to Reopen America May Be Bots, MIT Technology Review, May 21, 2020, www.technologyreview.com/2020/05/21/1002105/covid-bot-twitter-accounts-push-to-reopen-america.
77. Id.
78. Citizens United v. Federal Election Commission, 558 U.S. 310, 343 (2010).
79. Id. at 342–343 (quoting First National Bank of Boston v. Bellotti, 435 U.S. 765, 783 (1978)).
80. Introducing New Authenticity Measures on Instagram, Instagram, August 13, 2020, https://about.instagram.com/blog/announcements/introducing-new-authenticity-measures-on-instagram.
81. Natalie Clayton, Twitch's Bot Crackdown Cuts Millions of Followers from Some of the Site's Top Streamers, PC Gamer, April 16, 2021, www.pcgamer.com/twitchs-bot-crackdown-cuts-millions-of-followers-from-some-of-the-sites-top-streamers.
82. Jack Nicas, Why Can't the Social Networks Stop Fake Accounts? New York Times, December 8, 2020.
83. Rachel Metz & Donie O'Sullivan, A Deepfake Video of Mark Zuckerberg Presents a New Challenge for Facebook, CNN, June 11, 2019.
84. John Villasenor, Artificial Intelligence, Deepfakes, and the Uncertain Future of Truth, Brookings, February 11, 2019, www.brookings.edu/blog/techtank/2019/02/14/artificial-intelligence-deepfakes-and-the-uncertain-future-of-truth.
85. Robert Chesney & Danielle Citron, Deepfakes and the New Disinformation War, 98 Foreign Affairs 147, 148 (2019).
86. Id.
87. See id.; Jared Schroeder, Free Expression Rationales and the Problem of Deepfakes within the E.U. and U.S. Legal Systems, 70 Syracuse Law Review 1171, 1180–1182 (2020).
88. Dave Lee, Deepfakes Porn Has Serious Consequences, BBC, February 3, 2018. See also Danielle Citron, How Deepfakes Undermine Truth and Threaten Democracy, TedSummit 2019, July 2019, www.ted.com/talks/danielle_citron_how_deepfakes_undermine_truth_and_threaten_democracy?language=en, for an overview of deepfakes' dangers.
89. Sarah Mervosh, Distorted Videos of Nancy Pelosi Spread on Facebook and Twitter, Helped by Trump, New York Times, May 24, 2019; Donie O'Sullivan, Another Fake Pelosi Video Goes Viral on Facebook, CNN, August 3, 2020.
90. Richard Luscombe, Manipulated Video of Biden Mixing Up States Was Shared 1.1M Times before Being Removed, The Guardian, November 2, 2020.
91. Lisa Fazio, Out-of-Context Photos Are a Powerful Low-Tech Form of Misinformation, PBS, February 18, 2020.
92. Dan Evon, Is This Shopping Aisle Empty Due to Socialism? Snopes, September 17, 2019, www.snopes.com/fact-check/shopping-aisle-empty-socialism.
93. Dan Evon, Were Piles of Rubbish Left in Hyde Park by Global-Warming Protestors? Snopes, April 23, 2019, www.snopes.com/fact-check/protesters-hyde-park-rubbish.


94. Manuel Castells, The Rise of the Network Society 3–4 (2000); W. Lance Bennett & Shanto Iyengar, A New Era of Minimal Effects? The Changing Foundations of Political Communication, 58 Journal of Communication 707, 720 (2008); Itai Himelboim, Stephen McCreery & Marc Smith, Birds of a Feather Tweet Together: Integrating Network and Content Analyses to Examine Cross-Ideology Exposure on Twitter, 18 Journal of Computer-Mediated Communication 154, 166–171 (2013).
95. Corbin Hiar, Twitter Bots Are a Major Source of Climate Disinformation, Scientific American, January 22, 2021.
96. Id.
97. Lauren Frayer, How India Is Confronting Disinformation on Social Media Ahead of Elections, NPR, April 22, 2021.
98. Jack Stubbs & Joseph Menn, Facebook Suspends Disinformation Network Tied to Staff of Brazil's Bolsonaro, Reuters, July 8, 2020.
99. Jeremy Stahl, Arizona's Republican-Run Election Audit Is Now Looking for Bamboo-Laced "China Ballots," Slate, May 5, 2021.
100. Alistair Coleman, "Hundreds Dead" because of Covid-19 Misinformation, BBC, August 12, 2020.
101. Christina Capatides, Coronavirus Cannot Be Cured by Drinking Bleach or Snorting Cocaine, Despite Social Media Rumors, CBS News, March 9, 2020.
102. Krista Conger, How Misinformation, Medical Mistrust Fuel Vaccine Hesitancy, Stanford Medicine, September 2, 2021, https://med.stanford.edu/news/all-news/2021/09/infodemic-covid-19.html.
103. 567 U.S. 709, 727 (2012).
104. Id.
105. 558 U.S. at 342–343 (quoting First National Bank of Boston v. Bellotti, 435 U.S. 765, 783 (1978)).
106. Jared Schroeder, Information, Community, and Change: A Call for a Renewed Conversation about First Amendment Rationales, 18 First Amendment Law Review 123, 157–162 (2020).
107. Nick Statt, TikTok Is Banning Deepfakes to Better Protect against Misinformation, The Verge, August 5, 2020; David McCabe & Davey Alba, Facebook Says It Will Ban "Deepfakes," New York Times, January 7, 2020; Lauren Feiner, Twitter Unveils New Rules to Tackle Deepfakes Ahead of the 2020 Election, CNBC, February 4, 2020.
108. Donie O'Sullivan & Marshall Cohen, Facebook Begins Labeling, but Not Fact-Checking, Posts from Trump and Biden, CNN, July 21, 2020; Yoel Roth & Nick Pickles, Updating Our Approach to Misleading Information, Twitter, May 11, 2020, https://blog.twitter.com/en_us/topics/product/2020/updating-our-approach-to-misleading-information.html.

Contributors

Adedayo L. Abah, PhD, is a professor in the Department of Journalism and Mass Communications at Washington and Lee University, Lexington, Virginia. She teaches courses in law and communications, news media and society, strategic communication, and global communication. Her research interests include the First Amendment, digital media and the law, and media and pop culture in Africa. Her work has appeared in media and communications journals, such as Communication, Culture & Critique, Media Culture & Society, Journal of Media Law and Ethics, International Communication Gazette, Interactions: Studies in Communication and Culture, and edited books.

P. Brooks Fuller, JD, PhD, is an assistant professor of journalism in the Elon University School of Communications and the director of the North Carolina Open Government Coalition. His research and teaching focus on media law and policy issues ranging from freedom of expression and political extremism to obscenity and open government. He has a doctorate from the University of North Carolina at Chapel Hill and a law degree from the University of South Carolina.

Holly Kathleen Hall, JD, is a professor of strategic communication at Arkansas State University, teaching classes in communication law and ethics, privacy law, intellectual property, and data protection. She has published in Visual Communications Quarterly, Communication Law and Policy, Journal of Media Law and Ethics, and First Amendment Studies and contributed chapters regarding social media to four books. Prior to joining the faculty at Arkansas State, Hall worked in public relations for ten years. She holds an Accreditation in Public Relations (APR) from the Public Relations Society of America and earned a PGCert. in data protection and information governance in 2020 from Northumbria University.

Jennifer Jacobs Henderson, PhD, is the associate vice president for academic affairs (student success) and a professor of communication at Trinity University in San Antonio, Texas. In that role, she serves as an advocate for students in their academic experiences, with her strategic efforts focused on belonging, retention, and graduation. When she has her professor hat on, her classes and research address issues of media law, the ethics of media, and the use of participatory cultures for political and social action.


Dan V. Kozlowski, PhD, is an associate professor and chair of the Department of Communication at Saint Louis University, where he teaches free expression and a variety of journalism and media courses. He also holds a secondary appointment in SLU's School of Law. His work has appeared in Communication Law and Policy, Journalism & Mass Communication Quarterly, Free Speech Yearbook, The International Encyclopedia of Communication, and other journals. He's also a co-author of the textbook Mass Media Law. His research interests include student speech rights, judicial decision-making, comparative law, and journalism and culture. He received his master's degree from Saint Louis University and his doctorate from the University of North Carolina at Chapel Hill. Before entering academia, he worked professionally as a copyeditor and page designer for a community newspaper in Missouri and as a sports producer and production assistant for a local TV news station in New York City.

Jasmine E. McNealy, JD, PhD, is an associate professor of media production, management, and technology at the University of Florida College of Journalism and Communications, where she is the associate director of the Marion B. Brechner First Amendment Project. She studies information, communication, and technology with a view toward influencing law and policy. Her research focuses on privacy, online media, communities, and culture. She has been published in both social science and legal journals, including Computers in Human Behavior, First Amendment Law Review, Newspaper Research Journal, and Communication Law & Policy.

Kathleen K. Olson, JD, PhD, is a professor at Lehigh University in Bethlehem, Pennsylvania. She has worked as an attorney and copyeditor and helped create the online version of the Austin American-Statesman in Austin, Texas. Her research focuses on intellectual property issues, including copyright and the right of publicity.

Amy Kristin Sanders, JD, PhD, is an associate professor of journalism and law at the University of Texas at Austin. Along with T. Barton Carter and Marc A. Franklin, she co-authors the widely recognized casebook First Amendment and the Fourth Estate: The Law of Mass Media. A licensed attorney and award-winning journalist, Sanders regularly serves as an expert witness in legal proceedings and a consultant for Fortune 500 companies, international governments, and telecommunications regulators. In addition, she has authored more than two dozen scholarly articles in numerous law and mass communication journals. Before joining the professoriate, Sanders worked as a copyeditor and page designer for the Gainesville Sun (Florida), a New York Times Co. newspaper. She earned a PhD in mass communication law from the University of Florida. Her MA in professional journalism and her law doctorate are from the University of Iowa.


Jared Schroeder, PhD, is an associate professor at Southern Methodist University. His research focuses on legal questions that have emerged regarding democratic discourse and the marketplace of ideas during a time when AI, algorithms, social media, and other network-based tools have become crucial to how individuals communicate and understand the world around them. He is the author of The Press Clause and Digital Technology's Fourth Wave, co-author of Emma Goldman's No Conscription League and the First Amendment, and a frequent contributor to news outlets regarding tech policy and free expression. Schroeder was a journalist for several years before earning his doctorate at the University of Oklahoma. He teaches courses in communication law.

Derigan Silver, PhD, is an associate professor and chair in the Department of Media, Film, and Journalism Studies at the University of Denver. He teaches graduate and undergraduate courses on freedom of expression, media law, and Internet law and culture. He has published in peer-reviewed journals and law reviews on government secrecy and national security law, originalism, defamation, social architecture theory, access to the judiciary, copyright law, commercial speech, and student expression. He is also the co-author of the popular media law textbook Mass Media Law, with Dr. Clay Calvert and Dr. Dan V. Kozlowski.

Daxton R. Stewart, JD, PhD, LLM, is a professor of journalism in the Bob Schieffer College of Communication at Texas Christian University. He worked as a sportswriter, public relations assistant, news editor, food columnist, and magazine editor for several years in Texas and Missouri and practiced law in Texas before returning to the newsroom and, ultimately, the classroom. He was the founding editor of the peer-reviewed online journal Community Journalism and currently serves as associate editor of the journal Communication Law & Policy. His other books include The Law of Public Communication (12th ed.), published in 2022 by Routledge, which he co-authored, and Media Law Through Science Fiction: Do Androids Dream of Electric Free Speech?, published in 2019 by Routledge.

Index

actual malice see defamation
advertising 91–93; disclosure requirements 92–98, 101, 105, 107; false or misleading advertisements 96–97; Federal Trade Commission regulations 19, 91–102; Food and Drug Administration 103–104, 108
algorithms 13, 72, 185, 224, 226, 229–230
American Broadcasting Co. (ABC) 194
American Civil Liberties Union 7, 61, 153, 155
American Society of News Editors 194, 196
anonymity viii, 5, 9, 20, 26, 30, 42, 55, 82, 196, 202
Apple 12, 53, 93, 178, 210, 224
applications 4, 13, 107, 167, 182, 201
apps see applications
archives 106, 109, 124, 166
artificial intelligence 103, 229, 231–233, 236, 237
Associated Press (AP) 81, 82, 193, 199, 204
athletes 4, 10, 95, 96, 128; student-athletes 138–139, 143–144
Biden, Joseph 3, 193, 234
bots 103, 225, 229, 231–233, 235
boyd, danah ix
Brady, Tom 74
breaking news 70, 81, 201
British Broadcasting Corp. (BBC) 198, 199, 200
Bush, George W. 176
Cable News Network (CNN) 15, 41, 46, 47, 84, 195
California Privacy Rights Act 16, 57

CAN-SPAM Act see email
Cardi B. 94
cell phones see mobile devices
Center for Media and Social Impact 77, 87
chatbots see bots
Child Online Protection Act 154–155
Children's Online Privacy Protection Act 57, 102–103, 157
Citron, Danielle 163
Classmates.com xii
Clearview AI 4
Clinton, Bill 153
Clinton, Hillary 35
Communications Decency Act (CDA) 7, 21, 153–155; Section 230 safe harbor xii, 21, 26, 29–30, 46, 85, 166, 204, 227–228, 237–238
Computer Fraud and Abuse Act (CFAA) 62–63, 122–123, 161
confidentiality 8, 185, 196–197, 200, 213, 216–217
contempt of court 177, 180, 183, 188
copyright vii–viii, 3, 16, 20, 69–83, 86–89, 199, 203–204, 216, 218; and account control 117–118, 125; copyrightability 70, 74; Copyright Alternatives in Small-Claims Enforcement (CASE) Act 72; Copyright Claims Board 72; Copyright Term Extension Act 74; Digital Millennium Copyright Act 77–79, 85, 88–89; and fair use viii, 70–71, 73, 74–81, 87, 88, 203; infringement 20, 70–73, 77–79, 88–89, 203–204; linking and embedding 73–74, 88; and photographs 65, 69–74, 88, 203–204, 216, 220; and privacy 65, 157, 165; reform 86–87; transformative works 75–77, 88; and video 74, 76–78, 80, 88, 203–204, 220


courtrooms 170, 186–188; and access 171–173; and fair trial 170–174, 182; federal courts 175–179; federal criminal courts 175–177; and jurors 182–183; state courts 179–181
COVID-19 xiii, 11, 54, 64, 173, 187–188, 208, 210, 232, 235
Creative Commons 72, 80–81, 89
criminal law vii, 1, 10, 11, 12, 20, 61–62, 150, 163–164, 167, 226, 227; Computer Fraud and Abuse Act 62–63, 123; copyright 71; cyberbullying 162; elections law 6; impersonation 85; nonconsensual pornography 11, 157–159, 163; obscenity 152–153; and sex offenders; sex trafficking 29–30; sexting 159; and trials (see courtrooms)
crowdfunding 106–107
cyberbullying 3, 18, 85, 150–151, 156, 160–163, 164; and students 128, 133, 141
deepfakes 151, 163–164, 234–236
defamation xii, xiv, 3, 16, 25–48, 64, 85, 150, 210; actual malice 30–31, 33–36, 39, 44; anti-SLAPP (see SLAPP); Communications Decency Act 26, 29–31; damages 38–39; defamatory content 32–33; defenses 39–42; falsity 36–38; fault requirements 33–36; group libel 31–32; identification requirement 31–32; injury 38–39; libel per quod 32, 38; libel per se 32, 38; and opinion 39–40; privileges 40–42; publication requirement 28–29; republication 28–31, 47; single publication rule 28–29; SLAPP 42–43, 47; statute of limitations 31
Digital Millennium Copyright Act (DMCA) see copyright
disinformation 26, 29, 99, 224, 231, 234–235
Dorsey, Jack 56, 83
doxxing 11–12, 60, 61
Drew, Lori 161
Electronic Communication Privacy Act 160
Ellison, Nicole ix

email x; and defamation 28, 38, 41, 44; and employers 64, 166, 211–212; and marketing 97–99, 103; and privacy 4, 12, 14, 58; spam 18, 98–99
employers 13–15, 19, 63–64; account ownership 117–121, 123–124, 125; copyright 220; firing employees for social media use 211–212; National Labor Relations Board 14–15, 209, 212–213, 219–220; restricting social media use 161, 165, 166; and passwords 122, 125, 126
endorsements 86, 92–96, 98, 107, 208–210, 214–216, 220
e-personation see impersonation
ESPN viii, 18, 118, 195, 198
European Union 57–60
Facebook viii–xi, 2–4, 18, 44, 225–226, 228, 230, 233; and children 156–157; and copyright 74, 76, 81; defamation 28–29, 42; disinformation 26, 234–235; employee use 15, 212, 219; extremism 230; Facebook Live vii, ix, 202; and Federal Trade Commission 5, 100, 102; and First Amendment 6; and Food and Drug Administration; and journalism 193, 194, 195, 197, 199, 201; lawsuits xii–xiii, 6, 46, 86; obscenity 165; Oversight Board 229; passwords 14, 140; and photographs 88; privacy 53, 58, 60, 219; and Securities and Exchange Commission 104; and students 8, 133, 134, 137–138, 140, 141–142; surveillance and monitoring 13, 16–17, 100; terms of use 100, 166; threats 10–11, 167, 227; and trials 173, 184–185
fair use see copyright
fake accounts see impersonation
fake news see disinformation
Federal Trade Commission (FTC) 4, 5, 19, 91, 208; and deceptive advertising 19, 92–97, 214; and endorsements 92–97, 107–108, 209, 210, 215–216, 218; and online reviews 95; and privacy 57–58, 61, 100–103, 108; and spam 98–100; and sweepstakes 97–98
file-sharing 71–72
Financial Industry Regulatory Authority (FINRA) 105–106
First Amendment viii, x–xv, 2, 6–11, 15, 20, 21, 151, 171, 226–228, 232, 233, 235–238; and commercial speech 91; court access 171–172, 174–175, 177, 179, 181–182, 185–186, 188; and defamation 26, 28, 34, 39, 44, 45; and government official accounts xi–xii; and hot news 81–82; and humor 85, 89; and obscenity 151–156, 161–164, 165–167; and privacy 55, 61, 65; and right of publicity 85; students 9–10, 129, 131, 134–137, 141–142, 143; and threats 10
Flickr 80
Floyd, George 139
Food and Drug Administration (FDA) 104, 108
Foursquare 201
Fox News 41, 76, 205
free speech see First Amendment
games 10, 17, 93, 94, 103, 209
global issues see international
Google viii, xii, 4, 5, 14, 21, 102, 224, 227–228; and copyright 73, 76, 82, 84; and privacy 53, 58–60
Harris, Kamala 158
Health Insurance Portability and Accountability Act (HIPAA) 56–57
Hill, Jemele 15
hot news doctrine 69–70, 81–83
humor 39, 76, 83–84, 85–86, 89, 121, 130, 141, 163
impersonation x, 84–85, 161, 220
indecency 7, 29, 151–153
influencers 92–94, 100, 104, 107, 115, 209, 215, 220, 231
Instagram vii, viii, ix, 1, 15, 42, 70, 71, 73–74, 100, 116, 128, 165, 194, 196, 215, 224, 229, 231, 233, 236; Instagram Kids 157
insurrection see Trump, Donald
intellectual property see copyright; hot news doctrine; impersonation; trademark
international 2–3, 3, 5, 14, 21, 32, 57–60, 218
iPhone see smartphones
iTunes 93
Jorgenson, Dave 195
journalists viii–ix, xi, 14, 21, 224, 235; and account ownership 118; covering trials 170–171, 173–181, 186–188; and defamation 27–28, 36; social media policies and guidelines 193–205; Society of Professional Journalists 194


jurors see courts
Kickstarter 107
Kosseff, Jeff xii
Lanham Act 83, 85, 119–120
LaRussa, Tony 84–85
Lessig, Lawrence 80, 87
libel see defamation
libraries 28–29, 155–156
LinkedIn ix, 4, 15–17, 119–120, 122, 123–124, 184, 217
livestreaming vii, 10, 54, 167, 172, 173, 186–187, 188, 202, 224
location-based apps 9, 201
Los Angeles Times 181, 199–200
Mahanoy Area School District v. B.L. 128, 129, 135–136, 143
Martin, Roland 15
Meta see Facebook
mobile devices 12, 16, 107, 210; and commercial speech 103, 107; and courts 170–171, 176–178, 187, 188; employers 166; and obscenity 157, 159, 166; and students 128
MySpace 8, 17, 118, 130, 133–134, 137, 143, 161, 211
National Labor Relations Act 15, 209, 212–214, 220
National Labor Relations Board 14–15, 19, 209, 212–213, 219
National Public Radio 195–200
National Security Agency 13, 59
native advertising 96, 208, 214, 220
New York Times 33, 115, 173, 176; social media policy 193, 195, 197, 200, 205
nonconsensual pornography 11, 65, 150–151, 157–159, 163–164
Nunes, Devin 30–31, 36
Obama, Barack 109, 194, 201
obscenity 3, 7, 20, 150–153, 165–167
Packingham v. North Carolina x, 2, 156
Parler 224
parody see humor
Paton, John 201


Pence, Mike 8
Periscope vii, 167
phone see mobile devices
photographs ix, 4, 7, 18–19, 56, 98, 130, 134, 234–236; and copyright 65, 69–74, 88, 203–204, 216, 220; in courtrooms 172, 175, 177; nonconsensual pornography 11, 157–159; and privacy 56, 59–60, 65, 151, 163, 165, 218; selfies 7; sexting 159–160; tagging 165
Pinterest 17, 69, 70, 76, 216, 230
pornography see nonconsensual pornography; obscenity
PragerU xi
Prince 78
Prince, Richard 70
prior restraint 181–182, 188
privacy ix, xiv, 1, 4–5, 12, 13, 16–18, 30, 53–58, 63, 64–66, 99, 157–160, 208, 210, 218; and children 57, 91, 102, 156–157; data 57–60; doxxing 11–12, 60, 61; employees 121, 125, 165, 210–212, 217, 219; expectation of privacy on social media 54, 61–62; Facebook 4, 14; false light 37; Federal Trade Commission regulations 57–58, 61, 100–103, 108; Fourth Amendment; password 14, 63; photographs 56, 59–60, 65, 151, 163, 165, 218; privacy torts 54–56; and students 138–140, 143
public relations ix, 46, 73, 91, 93, 95–96, 120–122, 208–209, 218
Public Relations Society of America (PRSA) 209
Radio Television Digital News Association 179, 196, 199, 202
Reddit 160, 163, 182, 202
Reporters Committee for Freedom of the Press 173, 181
revenge pornography see nonconsensual pornography
Righthaven, LLC 72
Schilling, Curt 15
schools see students
search engines 14, 21, 65, 73, 76
Section 230 see Communications Decency Act (CDA)
Securities and Exchange Commission 104–107, 109
sex offenders x, 2, 156
sexting 159–160, 164

Shirky, Clay x
slander see defamation
SLAPP see strategic lawsuits against public participation (SLAPP)
smartphones 12, 16, 107, 170–171, 176–178, 187, 188, 210
Smolla, Rodney 5
Snapchat ix, xiii, 1, 5, 7, 12–13, 18–19, 115, 116, 128, 135, 143, 159, 194, 195, 202, 230
Snowden, Edward 13, 59
Solove, Daniel 54
spam 18, 98–99
Spam 195
Stored Communications Act 211
strategic lawsuits against public participation (SLAPP) 42–43, 47
student-athletes see athletes
students 47, 128–129, 143, 193; athletes 134, 135–136, 138–139, 143–144; college students 8–9, 136–138, 143; cyberbullying 133–134, 162–163; First Amendment protection 9–10, 129, 131, 134–137, 141–142, 143; off-campus speech 130–133, 135–136, 141, 143; passwords 139–140; privacy 63, 138; sexting 159–160; student media 142; threats viii, 9, 133
swatting 10
teachers 8, 54, 129–130, 133, 136, 140, 159, 161, 163
terms of service see terms of use
terms of use xiii, 16–17, 20–21, 54, 62, 63, 64, 100, 124, 165; Amazon 53; anonymity; Apple 53; copyright 70–71, 72, 87; enforceability 53, 65–66; expectation of privacy 212; Facebook 53; Federal Trade Commission scrutiny 5, 57, 61–62; Google 53; LinkedIn 4; Pinterest 16–17; privacy policies 5, 53–54, 56; Twitter 87, 100
text messages 12, 64, 98–99, 180, 203, 211; and sexting 159–161
threats viii, xii, 6, 9–12, 17, 20, 60–61, 133, 136–137, 138, 141, 162, 166, 167, 193, 195, 204–205, 216, 224, 226–227, 229
TikTok viii, ix, 4, 19, 71, 128, 165, 194–195, 225, 235–236
tracking 13, 16, 97, 100–103
trademarks viii, 70, 83–85, 89, 203; and account ownership 117, 119–121, 125

Index trials see courts Trump, Donald vii–viii, 228–229, 230; and defamation 30, 35, 43, 46; and insurrection 6, 8, 224, 228; Twitter account vii–viii, xi–xii, 46 Tumblr ix Twitch vii, ix, 202, 233 Twitter vii–viii, ix, xiii, 3–4, 185, 194–195, 201, 204, 209–210, 224, 228–229, 230; account ownership 114–115, 118–119, 120, 124; bad language 166; bots 232; breaking news 194, 199; commercial speech 96; and copyright 70–71; deepfakes 235–236; and defamation 30–31, 37–38, 40, 42, 43–44, 46, 48; fake accounts 84–85; Federal Trade Commission 102, 107–108; and FINRA 106; hot news doctrine 82; government accounts xii; retweeting 30, 36, 47, 106, 120, 184–185, 199, 204, 232; right of publicity 86, 121; and students 132, 139, 141; terms of use 17, 87, 100, 165; and trademark 89; and trials 176 unfair competition 73, 81, 91, 119–120 United Nations 3 user agreements see terms of use


veggie libel laws 27
video games see games
videos ix, xi, 10, 17, 98, 104, 128, 209, 229–230, 231, 234; and copyright 69–71, 73–74, 76, 78, 80, 88, 203–204, 220; in courtrooms 172–173, 178, 180; deepfakes (see deepfakes); and defamation 26, 46, 47; obscene 11, 151, 158–159, 163; privacy 57, 61; TikTok (see TikTok); YouTube (see YouTube)
Wall Street Journal 195, 197, 200, 230
West, Kanye 194
word-of-mouth campaigns 93
Wozniacki, Caroline 74
Xbox Live 17
Yahoo! 211
Yelp 42, 47
Yik Yak viii, 5, 9–10, 48, 202
YouTube ix, xi, 17, 47, 100, 128, 209, 225–226, 228, 229–230, 236; and copyright 69, 71, 73, 78–80, 87, 203
Zoom 170, 186
Zuckerberg, Mark xii, 53, 225, 230, 234