Digital Dominance: The Power of Google, Amazon, Facebook, and Apple
Edited by Martin Moore and Damian Tambini
Oxford University Press is a department of the University of Oxford. It furthers the University's objective of excellence in research, scholarship, and education by publishing worldwide. Oxford is a registered trade mark of Oxford University Press in the UK and certain other countries.

Published in the United States of America by Oxford University Press, 198 Madison Avenue, New York, NY 10016, United States of America.

© Oxford University Press 2018
Some parts of this publication are open access. Except where otherwise noted, chapters 1, 4, 6, 7, 11, 13, and the Conclusion are distributed under the terms of a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International licence (CC BY-NC-ND), a copy of which is available at http://creativecommons.org/licenses/by-nc-nd/4.0/. Enquiries concerning use outside the scope of the licence terms should be sent to the Rights Department, Oxford University Press, at the above address.

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, without the prior permission in writing of Oxford University Press, or as expressly permitted by law, by license, or under terms agreed with the appropriate reproduction rights organization. Inquiries concerning reproduction outside the scope of the above should be sent to the Rights Department, Oxford University Press, at the address above. You must not circulate this work in any other form and you must impose this same condition on any acquirer.

Library of Congress Cataloging-in-Publication Data
Names: Moore, Martin, 1970– editor. | Tambini, Damian, editor.
Title: Digital dominance : the power of Google, Amazon, Facebook, and Apple / edited by Martin Moore and Damian Tambini.
Description: New York, NY : Oxford University Press, [2018] | Includes bibliographical references and index.
Identifiers: LCCN 2017052008 (print) | LCCN 2018004455 (ebook) | ISBN 9780190845131 (updf) | ISBN 9780190845148 (epub) | ISBN 9780190845117 (pbk. : alk. paper) | ISBN 9780190845124 (cloth : alk. paper)
Subjects: LCSH: Internet industry—Political aspects—United States. | Corporate power—United States. | Information society—United States. | Big data—United States. | Mass media—United States.
Classification: LCC HD9696.8.U62 (ebook) | LCC HD9696.8.U62 D54 2018 (print) | DDC 303.48/330973–dc23
LC record available at https://lccn.loc.gov/2017052008

9 8 7 6 5 4 3 2 1

Paperback printed by Sheridan Books, Inc., United States of America
Hardback printed by Bridgeport National Bindery, Inc., United States of America
CONTENTS
List of Contributors vii

Introduction 1
Martin Moore and Damian Tambini

SECTION 1: Economy

1. The Evolution of Digital Dominance: How and Why We Got to GAFA 21
Patrick Barwise and Leo Watkins

2. Platform Dominance: The Shortcomings of Antitrust Policy 50
Diane Coyle

3. When Data Evolves into Market Power—Data Concentration and Data Abuse under Competition Law 71
Inge Graef

4. Amazon—An Infrastructure Service and Its Challenge to Current Antitrust Law 98
Lina M. Khan

SECTION 2: Society

5. Platform Reliance, Information Intermediaries, and News Diversity: A Look at the Evidence 133
Nic Newman and Richard Fletcher

6. Challenging Diversity—Social Media Platforms and a New Conception of Media Diversity 153
Natali Helberger

7. The Power of Providence: The Role of Platforms in Leveraging the Legibility of Users to Accentuate Inequality 176
Orla Lynskey
8. Digital Agenda Setting: Reexamining the Role of Platform Monopolies 202
Justin Schlosberg

9. Free Expression? Dominant Information Intermediaries as Arbiters of Internet Speech 219
Ben Wagner

10. The Dependent Press: How Silicon Valley Threatens Independent Journalism 241
Emily Bell

SECTION 3: Politics

11. Social Media Power and Election Legitimacy 265
Damian Tambini

12. Manipulating Minds: The Power of Search Engines to Influence Votes and Opinions 294
Robert Epstein

13. I Vote For—How Search Informs Our Choice of Candidate 320
Nicholas Diakopoulos, Daniel Trielli, Jennifer Stark, and Sean Mussenden

14. Social Dynamics in the Age of Credulity: The Misinformation Risk and Its Fallout 342
Fabiana Zollo and Walter Quattrociocchi

15. Platform Power and Responsibility in the Attention Economy 371
John Naughton

Conclusion: Dominance, the Citizen Interest and the Consumer Interest 396
Damian Tambini and Martin Moore

Index 409
LIST OF CONTRIBUTORS
Patrick Barwise (www.patrickbarwise.com) is emeritus professor of management and marketing at London Business School and former chairman of Which?, the UK's leading consumer organization. He joined the London Business School in 1976 after an early career at IBM and has published widely on management, marketing, and media.

Emily Bell is founding director of the Tow Center for Digital Journalism at Columbia's Graduate School of Journalism and a leading thinker, commentator, and strategist on digital journalism. The majority of Bell's professional career was spent at Guardian News and Media in London as editor-in-chief across Guardian websites and director of digital content for Guardian News and Media. She is coauthor of a number of publications on the transformation of journalism, a trustee on the board of the Scott Trust, the owners of The Guardian, and an adviser to Tamedia Group in Switzerland.

Diane Coyle is the Bennett Professor of Public Policy at the University of Cambridge. She is a member of the Natural Capital Committee and also a fellow of the Office for National Statistics. She specializes in the economics of new technologies, markets and competition, and public policy. Her most recent book was GDP: A Brief but Affectionate History. Coyle was a BBC trustee for over eight years, and was formerly a member of the Migration Advisory Committee and the Competition Commission.

Nicholas Diakopoulos is assistant professor at Northwestern University School of Communication, where he directs the Computational Journalism Lab.

Robert Epstein is senior research psychologist at the American Institute for Behavioral Research and Technology (AIBRT), as well as the former editor-in-chief of Psychology Today magazine. A PhD of Harvard University,
he has published 15 books on artificial intelligence, creativity, and other topics, as well as more than 250 scientific and popular articles, including scientific reports in Science, Nature, and the Proceedings of the National Academy of Sciences.

Richard Fletcher is a research fellow at the Reuters Institute for the Study of Journalism, University of Oxford. He is lead researcher and coauthor of the annual Reuters Institute Digital News Report (2017, with Nic Newman, Antonis Kalogeropoulos, David Levy, and Rasmus Kleis Nielsen).

Inge Graef is assistant professor at Tilburg Law School with affiliations to the Tilburg Institute for Law, Technology, and Society (TILT) and the Tilburg Law and Economics Center (TILEC). The focus of Graef's research is on competition enforcement in digital markets. She is particularly interested in the interface between competition law and other regimes such as data protection and intellectual property law.

Natali Helberger is professor of information law at the Institute for Information Law (IViR), University of Amsterdam. She is a European Research Council (ERC) laureate, a member of the board of directors of the Institute for Information Law, and cofounder of the Personalised Communication lab, a joint research initiative between communication scientists (ASCoR), information law scholars, ethicists, and data scientists to study the societal implications of AI and data-driven communication.

Lina M. Khan is director of legal policy at the Open Markets Institute and a visiting fellow with Yale Law School. She was previously at the Open Markets Project at New America, where she researched consolidation across sectors, including in tech, media, finance, retail, and commodities.

Orla Lynskey is an assistant professor in law at the London School of Economics (LSE). Her research focuses on data protection and digital rights. Her monograph The Foundations of EU Data Protection Law (OUP) was published in 2015.
Lynskey is an editor of International Data Privacy Law and the Modern Law Review and a member of the European Commission's Multistakeholder Expert Group on the General Data Protection Regulation (GDPR).

Martin Moore is director of the Centre for the Study of Media, Communication and Power at King's College London, and a senior research fellow at King's. His research focuses on political communication during election and referendum campaigns, and on the civic power of technology platforms. Prior to King's he was director of the Media Standards Trust. He is the author of The Origins of Modern Spin (Palgrave Macmillan, 2006) and
Tech Giants and Civic Power (2016), and publishes frequently on the media and politics.

Sean Mussenden is a lecturer at the University of Maryland Philip Merrill College of Journalism.

John Naughton is a senior research fellow at the Centre for Research in the Arts, Social Sciences and Humanities (CRASSH) at Cambridge, emeritus professor of the Public Understanding of Technology at the Open University, and the technology columnist of the London Observer. His most recent book, From Gutenberg to Zuckerberg: What You Really Need to Know about the Internet, is published by Quercus.

Nic Newman is a research associate at the Reuters Institute for the Study of Journalism and lead author of its annual Digital News Report. He is also a digital media consultant and product strategist working with media companies around the world. He was a founding member of the BBC News website and former head of product and technology for BBC News.

Walter Quattrociocchi heads the Laboratory of Data Science and Complexity at the University of Venice. His research interests include data science, cognitive science, and dynamic processes on complex networks. His research, based on an interdisciplinary approach, focuses on information and misinformation diffusion and the emergence of collective narratives in online social media, as well as their relation to the evolution of opinions. He has published more than 60 scientific papers in major peer-reviewed conferences and journals. His results on misinformation spreading served to inform the Global Risk Report of the World Economic Forum (2016 and 2017) and have been covered intensively by the media (The Economist, The Guardian, Washington Post, New Scientist, Bloomberg, Salon, Poynter, New York Times, Scientific American).

Justin Schlosberg is a senior lecturer at Birkbeck, University of London, and current chair of the Media Reform Coalition.
His most recent book—Media Ownership and Agenda Control (Routledge, 2016)—focuses on critical questions of media ownership, gatekeeping, and agenda setting in the new media economy. He has been interviewed regularly by, among others, BBC Newsnight, Al Jazeera English, and Radio 4's Today program.

Jennifer Stark is a research scientist at the University of Maryland Philip Merrill College of Journalism.

Damian Tambini is associate professor at the London School of Economics. He has served as an advisor and expert in numerous policymaking roles for
the European Commission, the Council of Europe, the UK Government, and the UK media regulator, Ofcom. Dr. Tambini was previously director of the Programme in Comparative Media Law and Policy at Oxford University. Before that, he was director of the Media Policy Project at the Institute for Public Policy Research; postdoctoral fellow at Nuffield College, Oxford; lecturer at Humboldt University, Berlin; and a researcher at the European University Institute, Florence, Italy (where he took his PhD in 1996). He has published numerous articles and books on the topics of communication, policy, and politics.

Daniel Trielli is a PhD student in the Media, Technology, and Society (MTS) program at Northwestern University.

Ben Wagner is an assistant professor and director of the Privacy & Sustainable Computing Lab at Vienna University of Economics and Business and a senior researcher at the Centre of Internet & Human Rights (CIHR). His research focuses on communications technology at the intersection of rights, ethics, and governance.

Leo Watkins is a research analyst in the media team at Enders Analysis, contributing to work on tech, digital news, and public policy.

Fabiana Zollo is a postdoctoral researcher at Ca' Foscari University of Venice, where she is a fellow member of the Laboratory of Data Science and Complexity coordinated by Walter Quattrociocchi. She is currently adjunct professor for the course "Data Management and Business Intelligence" in the Innovation and Marketing program. Her research is based on an interdisciplinary, cross-methodological approach and focuses on information and misinformation spreading, social dynamics, and the evolution of collective narratives on online social media. She has published several papers on the topic, with both national and international coauthors. Her results have been widely covered by the media (e.g., New York Times, Washington Post, The Economist, Bloomberg View, The Guardian, Phys.org, Le Scienze, Pour la Science, El Pais).
She has been an invited speaker at, among others, the CAV Conference (2017), Wissenswerte (2016), and the International Journalism Festival (2016).
Introduction

MARTIN MOORE AND DAMIAN TAMBINI
Concerns about corporate size and dominance have never been purely economic. When Senator John Sherman (Republican) introduced his antitrust bill to the US Senate in 1890, he said that too powerful private combinations were equivalent to "a kingly prerogative, inconsistent with our form of government." "If we will not endure a king as a political power," Sherman told other Senators, "we should not endure a king over the production, transportation and sale of any of the necessaries of life" (cited in US Congress 1890, 2,457). Reading the debate in the Senate, it is clear that Sherman—along with his contemporaries—was most animated about the power of companies like Standard Oil and other Trusts, and that the Act itself was a way "to deal with the great evil that now threatens us" (US Congress 1890, 2,456). The economic arguments that were made by Senators—and there were some—were limited in scope and based chiefly on straightforward classical economic principles, taken almost directly from Adam Smith. Sherman, for example, said that without competition the aim of the monopolist "is always for the highest price that will not check the demand" (US Congress 1890, 2,460), just as Adam Smith had written, "[t]he price of monopoly is upon every occasion the highest which can be got" (Smith 1776). While there were some specific economic anxieties, therefore, these were in the context of deeper concerns about the dangers of unchecked power. The lawyer—and later Supreme Court Justice—Louis Brandeis, who took on and extended the critique of corporate dominance after the Sherman
Act had passed, based his criticisms of overly large companies chiefly on moral and political objections, rather than economic ones (Urofsky 2009, 300). For Brandeis, individual opportunity and political liberty were intimately linked to industrial independence, entrepreneurialism, and small business. "Size, we are told, is not a crime," Brandeis wrote in 1914, "[b]ut size may, at least, become noxious by reason of the means through which it was attained or the uses to which it is put" (Brandeis 1914). He even had a term for the problems associated with corporate size; he called it the "curse of bigness" (Brandeis 1914). The original Sherman Antitrust Act, and the complementary Clayton Act of 1914, were, in other words, only partly driven by commercial and economic concerns. Fear of emergent, undemocratic political power was as great a motivation. Legislation was introduced in order to challenge the concentrated power of the Trusts, and to prevent what Brandeis and others saw as the inevitable misuse of dominance. Brandeis even framed the challenge to the Trusts as the basis for a "New Freedom" (Rosen 2016). Yet, over time, as businesses grew and some of the anxieties about corporate size diminished, the economic aspects of antitrust came increasingly to the fore. This was, in part, because using economic criteria to identify and challenge dominance proved more practical than using political criteria. Richard Hofstadter, writing in 1964, showed how antitrust law was neglected in the 1920s and early 1930s, when it was seen as politically motivated. It was revived later in the 1930s, when Franklin Delano Roosevelt saw its economic potential as part of the New Deal.
Such was its transformation into a practical economic tool of government from then on, that by 1964 Hofstadter was able to write, “once the United States had an antitrust movement without antitrust prosecutions; in our time there have been antitrust prosecutions without an antitrust movement” (Hofstadter 2008, 189). It was in this context, and building on both US antitrust law and the manner in which it had evolved, that competition law became one of the founding articles of the European Union (EU). Article 85 of the 1957 Treaty of Rome determined that “[a]ny agreements between enterprises . . . which are likely to affect trade between the Member States and which have as their object or result the prevention, restriction or distortion of competition within the Common Market” would be deemed illegal. This closely resembled Section 1 of the Sherman Act, which stated, “[e]very contract, combination in the form of trust or otherwise, or conspiracy, in restraint of trade or commerce among the several States, or with foreign nations, is hereby declared to be illegal.”
The shift of antitrust law away from its political roots toward a narrower economic instrument of government accelerated in the late 1960s and 1970s. This emerged from criticisms of particular aspects of antitrust cases made by economists in the Chicago School. Aaron Director, Ward Bowman, John McGee, and others took issue with prevailing assumptions about pricing and potential consumer harms (Posner 1979). These criticisms were then brought together in a broader critique of contemporary approaches to antitrust and an argument that actions should only be pursued where there was clear evidence of harm. "[T]he only legitimate goal of antitrust," Robert Bork wrote in The Antitrust Paradox, "is the maximization of consumer welfare" (Bork 1978). These arguments then helped frame interpretations of the law in the United States throughout the 1980s and beyond. By the late 20th century, a series of legal precedents had consolidated the Bork approach such that corporate size was only thought to be a problem—in the United States at least—if you could point to specific instances where consumer welfare had been damaged (Khan 2016, 720; also this volume). US legislation had, in other words, been stripped of its political purpose, and consequently also of its direct relevance to the citizen, in order to focus solely on the consumer. Even this focus was itself concentrated on a particular facet of consumer welfare—pricing. European competition law, though based on the foundations of US antitrust law, was less influenced by the arguments of the Chicago School. It was nonetheless constrained in its drafting and also favored a focused economic approach, though more for institutional than ideological reasons. In addition to the prevention of agreements between dominant firms—contained in the antitrust pillar of EU competition policy—the EU does intervene to prevent dominant positions emerging, and has extensive powers to review and even prevent large mergers.
It also has duties to introduce new competition and to ensure that states do not favor their own companies through state aid. But EU institutions often find themselves without the power to deal with dominance that results from the endogenous growth of foreign companies that do not abuse that dominance—for example, by raising prices. While the European social model has favored more regulation of powerful companies in the public interest, this has not been reflected in EU competition policy: attempts to introduce protection of media pluralism into EU merger regulations, for example, have failed, largely because member states have been keen to retain the more sensitive political aspects of competition law at member state level in the name of "subsidiarity" (Harcourt 2015; Valcke 2012). The legitimacy deficit of the EU and its technocratic nature (Zielonka 2014) have led to a largely "politics-neutral" application of competition law, similar to that in the United States, and a reliance on
technocratic tests of concentration, market definition, and price elasticities to evaluate complaints and mergers. The underlying assumption is that citizen interests can be captured in simple economic notions of consumer utility, independent of any wider political philosophy. The arrival of the Internet—particularly Web 2.0—introduced new questions and dimensions to the issue of bigness. It introduced, for example, a new type of organization—the digital platform—the success of which relied heavily on scale (as a consequence of direct and indirect network effects). It raised new questions as to the economic effects of bigness and the role of price—for example, how to account for "free" services paid for not with money but with personal data. Perhaps most importantly, it revived political and social questions about bigness. This was in part because the political and social aims of the large digital corporations that grew to maturity in the first two decades of the 21st century were more explicit than those of their 19th-century industrial predecessors. Larry Page and Sergey Brin, the founders of Google, wanted their search engine to "organise the world's information and make it universally accessible and useful" (Google 2017). Mark Zuckerberg wanted Facebook to "make the world more open and connected" (Coren 2017). For John D. Rockefeller, the founder of Standard Oil, industrial ambitions were separate from political or social ones. Rockefeller sought to dominate the oil industry and build an economic empire. After this empire was broken up in 1911, Rockefeller turned his attentions to philanthropy. Nor is it just the aims of the new digital corporations that are political and social. They also perform civic functions that make them distinct not just from their 19th-century counterparts but also from large companies in other industries.
These functions include, for example, enabling people to connect with one another (sometimes for the purpose of collective action), providing access to news and information, giving people a digital voice, and informing people about how they might vote (Moore 2016). Some of these functions are also performed by news and media organizations that, in recognition of their political and social importance, benefit from legal and constitutional protections. These companies are therefore different from what has come before. They are also bigger. As Barwise and Watkins note in the first chapter of this volume, the five most valuable public companies in the world in mid-2017—by market capitalization—were all digital companies based on the West Coast of North America: Apple, Alphabet (Google's parent company), Amazon, Microsoft, and Facebook. In addition to their financial value, these companies have colonized much of the global digital space. In June 2017 Facebook announced it had two billion monthly active users (Nowak
and Spiller 2017). Since the service was banned in China, and prohibited or restricted in a number of other countries, this meant that by late 2017 the majority of the remaining digital world had active Facebook profiles. Similarly, more than nine in ten smartphone owners used one of only two operating systems—Android (Google/Alphabet) or iOS (Apple). Many of the rewards flowing from the shift to digital were also going to these select few companies. In digital advertising, for example, almost 60% of spending went to Google and Facebook in 2016 (GroupM 2017, 40). Amazon accounted for 43% of e-commerce sales in the United States that year (BI Intelligence 2017). Prior to 2016 many democratic governments treated these organizations much as they would other corporations. However, during and after 2016, concerns about the political and social implications of their size and dominance grew. Most of these concerns centered on the two pure digital players, Facebook and Google—particularly the former. In May of that year Facebook was accused of suppressing conservative news in its Trending Topics (Nunez 2016). After the UK Brexit vote in June, anxieties were raised about whether people were cocooned in digital filter bubbles (Viner 2016). In July, Diamond Reynolds filmed the death of her partner, Philando Castile, on Facebook Live. Facebook temporarily suspended the service shortly afterward and later clarified its policy on live video (FB Newsroom 2016). In September, there was international outrage about Facebook's censorship of Nick Ut's renowned photograph from the Vietnam War (Ross and Wong 2016). This culminated in global concern about fake news online in the aftermath of the US election on November 8. Certain governments have become so concerned about these issues that they have introduced punitive legislation or threatened to force companies to open their applications to the security services.
Germany passed a law—the Netzwerkdurchsetzungsgesetz—threatening fines of up to €50 million if platforms fail to remove illegal content, including hate speech or fake news, within a specified time. The UK Home Secretary said that police and intelligence agencies should be given access to encrypted messaging services like WhatsApp (Sparrow 2017). These rapid government responses are likely to have unintended repercussions, some of which may cause as much harm as good. They may, for example, lead Facebook and other platforms to become more conservative and interventionist, choosing to remove any controversial material rather than risk being fined. In so doing they may turn the platforms into "arbiters of truth", a role that Mark Zuckerberg said he explicitly wanted to avoid (Zuckerberg 2016). Rapid government reactions may also lead to the removal of certain digital services from countries, to privatized regulation,
and to higher rather than lower barriers to entry for smaller companies and new entrants. Rather than respond in haste and elicit unintended, and potentially harmful, consequences, governments need to better understand these issues, acknowledge where digital dominance may be a problem and where it is not, and recognize the implications of their reactions for the broader digital environment. To do this they need to better understand these companies, examine how these companies are different, and identify the risks and implications of their dominance. This is the purpose of this volume. Necessarily, with a subject so large, the volume cannot cover every area. It does, however, explore the implications of digital dominance more broadly than much of the existing literature. Given how recent the phenomenon of this dominance is, it is not surprising that much of the literature is similarly recent, though growing fast. Initial books about the platforms were essentially company biographies, focusing on a single organization and the story of its birth and early life (Auletta 2009; Battelle 2006; Dormehl 2012; Isaacson 2011; Kirkpatrick 2010; Stone 2013; Wallace and Erickson 1992). It was the economic distinctiveness of the companies that first elicited particular academic attention, following from the work of Jean-Charles Rochet and Jean Tirole (Rochet and Tirole 2003). With notable exceptions, critical examinations of the role platforms play in society and politics did not arrive until after the first decade of this century (Howard 2006). Since then there has been a rising number of general critiques of the political and civic roles of media and technology companies (Morozov 2011, 2013; Wu 2012; Moore 2016; Taplin 2017) that complement studies of the effects of digital dominance in specific spheres.
In economics, for example, Ariel Ezrachi and Maurice Stucke have teased out the problem of "virtual competition", while Timothy Wu has shown how the platforms have come to dominate our individual and collective attention (Ezrachi and Stucke 2016; Wu 2016). In media, Natali Helberger (Helberger, Kleinen-von Königslöw, and van der Noll 2014), Kari Karppinen (2012), and others have explored what digital convergence means for media pluralism. In technology, Frank Pasquale (2015) and Bruce Schneier (2015) have investigated the dangers of opaque algorithms and big data. Few studies to date have sought to take a wide look at the implications of digital dominance for legislation and regulation across the economy, society, and politics. Adapting legal and regulatory frameworks to new technological paradigms is particularly difficult where the scale of change challenges existing legal definitions and where new harms and apt responses have not yet achieved widespread support as key public policy objectives. It is becoming increasingly clear that platform power—the forms of dominance that involve the
economic features of multisided markets, and network effects combined with various forms of externalities—has at least some of the features of a paradigmatic shift that will require a wider and deeper debate, not only about its economic tendencies but also about how best to describe its political and social impacts in such a way that policymakers can respond. The first section of this book looks at the economic implications and risks of digital dominance. It sets the dominance of these companies in a historical and industrial context, explores how digital platforms are different from other large corporations, and suggests reasons why they may be more difficult to displace. Each of the authors in this section questions the adequacy of existing law and regulation to deal with these organizations. The second section assesses some of the social implications of these companies' control of the digital sphere. It presents data about the extent to which people are relying on platforms for their news, illustrates how platforms are challenging existing definitions of media diversity, questions whether algorithms might increase discrimination or exacerbate inequality, and explores the extent to which they now shape digital free expression. The third and final section focuses on the implications of the tech platforms for politics. It outlines the degree to which political campaigns are already dependent on digital delivery via Facebook, shows how political opinions can be influenced by Google search results, and presents new evidence about political polarization on social media. The final chapter asks whether we need to reconceptualize our interpretations of power in order to understand the nature of the tech giants.
ECONOMY
In chapter 1, Patrick Barwise and Leo Watkins provide context for the subsequent chapters and explain how five US technology companies came to be the most valuable public companies in the world. The authors show how current market concentration is the most recent stage of a 60-year pattern within the IT sector, in which organizations like IBM and Microsoft previously dominated. At each stage, it proved very difficult to displace the dominant player, who was instead eclipsed by a rising challenger in an adjacent market. A consistent characteristic of these markets is their "winner-takes-all" nature. This, the authors explain, is a consequence of direct and indirect network effects, enhanced by economies of scale and the value derived from the collection of data. Once these companies have achieved critical mass, they are then able to sustain and protect their positions by habituating users to their services,
embedding customers within their networks, and heightening switching costs (particularly nonmonetary costs). They can even, Barwise and Watkins point out, identify forthcoming competitive opportunities and threats through their cloud computing services. Of the five, the authors conclude, none appears to be under immediate threat of displacement. The closest comparable competitors are in China, but though they may represent a strategic threat, they currently remain focused on the domestic market. The most likely challenge may therefore come first from within Silicon Valley itself, though the travails of Snapchat indicate how tough such a challenge will be.

What are tech platforms, what characteristics do they share with traditional businesses, and how do they create value? These are some of the questions Diane Coyle addresses in chapter 2. Platforms are necessarily difficult to define, Coyle writes, because they perform multiple different functions, often in parallel. Amazon is both a retailer and a marketplace. Yet, it is a market unlike those in the analog world. Buyers and sellers do not need to go to the same physical location to trade. Nor do they have to make transactions simultaneously. If the number of sellers on a platform increases, then the benefits to the buyer increase too—and vice versa. This so-called indirect network effect helps explain why platforms, once they reach a certain size, then scale up rapidly. Yet, in fostering this growth, the platforms have introduced the "problem of free." People find it very hard to refuse "free" services, even when they know that they must be paid for somehow. As a consequence, Coyle suggests, we are seeing "a welfare-destroying arms race between advertisers (via platforms) and consumers," the chief winners of which are the leading digital advertising platforms—Google and Facebook.
The problem of free also means potential competitors have a significant—if not prohibitive—investment to make if they are to woo customers away from one of the existing platform services. Standard competition tools are not adequate for assessing the nature of these organizations, their behavior, or their direct and indirect economic impact, Coyle concludes. Even more importantly, standard tools are inadequate for analyzing not just the economic but also the social welfare effects of these platforms.

In chapter 3, Inge Graef examines the growing importance of personal data in maintaining or increasing digital market power, and the capacity of competition law to challenge mergers or acquisitions on the basis of personal data. Up until 2014 the European Commission assumed that datasets collected by dominant digital platforms were substitutable, and that unless the markets of merging companies clearly overlapped, the personal data
they held should not be a reason to prevent or set hurdles for a merger or takeover. It was for this reason that the Commission waved through Google's takeover of DoubleClick (2007) and Facebook's takeover of WhatsApp (2014). The Commission's approach has evolved since 2014, thanks in part to inquiries by the UK Competition and Markets Authority (CMA), and by the French Autorité de la Concurrence and the German Bundeskartellamt. These identified indicators that would signal whether additional personal data would enhance market power (notably the value of the data itself and the availability of alternative data). In the case of Microsoft/LinkedIn (2016) the Commission recognized that combining the two companies' data may increase market power or raise barriers to entry for competitors. It came to a similar conclusion in the Verizon/Yahoo case (2016). Competition law is, in principle, able to address some of the concerns around the use of personal data in maintaining and increasing market power, Graef concludes, yet it cannot address them all. If personal data is considered critical to digital markets, it may be necessary to introduce further regulation and policy that explicitly enables and encourages the exchange and use of personal data.

Due to the way in which antitrust law has evolved, particularly in the United States, there is evidence to suggest it is now unable to recognize or deal with digitally dominant companies. Lina Khan demonstrates this in chapter 4, with reference to Amazon (the chapter is an edited version of an article Khan originally wrote for the Yale Law Journal). Antitrust law has, over the last five decades, become centered on consumer welfare and price. If consumer prices are low, the law assumes that dominance is not a problem. Moreover, antitrust assumes that companies will rarely pursue predatory pricing, since it will eat into their profits, and that in competitive environments firms will naturally seek to put one another out of business.
Amazon challenges all of these assumptions. Amazon maintains low prices at the expense of profits so that it can pursue massive scale and long-term dominance. By acting at the same time as a retailer, a marketplace, and a delivery company, Amazon aims not to put its competitors out of business but to make them dependent on it. Khan shows how Amazon has managed to gain structural dominance through discriminatory prices and fees, by providing fulfillment infrastructure, and by operating an ever-expanding marketplace. These infrastructural assets also mean that Amazon is building digital obstacles that competitors will find increasingly hard to surmount. Before antitrust became fixated on price, competition authorities would have been able to take into account factors that went beyond this narrow aspect of consumer
welfare. They would have been able, for example, to look at the health and diversity of the book market. They could have examined how vertical integration allows an organization like Amazon to leverage its dominance in one sphere in order to gain advantage in another, and how it raises questions of conflict of interest. “It is as if,” Khan writes, “Bezos charted the company’s growth by first drawing a map of antitrust laws, and then devising routes to smoothly bypass them.”
SOCIETY
In chapter 5, Nic Newman and Richard Fletcher explore the extent to which people are relying on digital platforms for their news, and what implications this has for news publishers. Building on data they have collected from 36 different countries as part of their 2017 Digital News Report, Newman and Fletcher find distinct differences in media consumption across countries. While Facebook and its subsidiaries (WhatsApp, Messenger, Instagram) are dominant in most major markets, the popularity of other platform services differs considerably. Google News, for example, reaches 30% of digital users in Mexico, but just 3% in Denmark. Why, the authors ask, have many users come to rely on platforms for their news? It is partly because stories find the user, rather than the user needing to seek out the stories. News is also aggregated and packaged in such a way as to make it convenient, easy to navigate, and quick to consume. This does not mean users are exposed to fewer news sources via the platforms. Indeed, the authors present data that goes against prevailing beliefs about a lack of exposure to diverse news on platforms, suggesting people consume more news sources, not fewer. Yet, the emergence of new technologies for accessing news, most notably voice-controlled digital assistants such as the Amazon Echo, may be further enhancing the gatekeeping function of platforms.

In chapter 6, Natali Helberger shows how our understanding of media plurality and diversity needs to change in this brave new platform world, and how this then has to be reflected in the evolution of media policy. Policymakers and regulators are, Helberger notes, struggling with how to evaluate the effects of information intermediaries on media diversity. This is because tech platforms challenge our existing conceptions of media diversity.
These conceptions focus on traditional media organizations' provision of diverse content (their internal diversity) and on media ownership rules (to ensure external diversity, or structural plurality). Yet, as Helberger explains, this fails to recognize the role social platforms play, particularly
in enabling, channeling, framing, or inhibiting exposure to diverse media. A single platform may, for example, enable access to a far wider range of content than a traditional media organization. The same platform may, however, through its architectural framework and prioritization of certain content, reduce or preclude exposure to diverse media. The guidelines for Facebook's Trending Topics, for example, did not contain any criteria for promoting media diversity.

If societies want media diversity to remain a public policy objective, they need to better understand the role platforms play, and more clearly define the nature of platform power. From the perspective of internal diversity, platform power derives chiefly from the ability to organize, sort, and prioritize data, and then use this to determine how to present content to users. From the perspective of external diversity, platform power derives from platforms' relationships with traditional media organizations, and the terms and conditions they set on these relationships. For this reason, Helberger calls for "a new social-cooperative understanding of media diversity" that recognizes both the importance of the relationship between the user and the platform, and the relationship between the platform and traditional media providers.

In chapter 7, Orla Lynskey examines the extent to which dominance by the platforms may enable or accentuate discrimination and inequality. Building on Barocas and Selbst (2016), Lynskey points to steps that may make discrimination more likely. The classification of particular groups based on target variables and labels can, for example, inadvertently differentiate protected groups, and enable prejudicial profiling of them (on the basis of race, ethnicity, or gender, for example). Similarly, the use of training data based on limited or skewed samples can lead to unfair treatment of particular socioeconomic groups. Dominance by the platforms may also exacerbate or intensify inequality.
Those unable to afford certain devices or apps may, as a consequence, need to provide more—and more intrusive—personal data. This exacerbates the information asymmetry between the user and commercial providers, and enables providers to target users when they may be most susceptible to exploitation (with offers of high-interest loans at specific times, for example). Existing competition and data protection laws are, Lynskey suggests, limited in their capacity to deal with these issues. Competition law is not structured to address problems of data differentiation or discrimination. Equally, while data protection law may appear more applicable, the lack of transparency around data and algorithms precludes its effective use.
In chapter 8, Justin Schlosberg questions the generally held view that, as the power of Silicon Valley rises, so the power of mainstream media falls. On the contrary, Schlosberg argues, what we are seeing is "not the demise of concentrated news 'voice,' but its reconstitution within a more integrated, complex, and less noticeable power structure." To demonstrate this, the author assesses the evolution of gatekeeping theory, and illustrates how the arrival of digital led many to downgrade professional journalists and editors to a supporting—rather than lead—role. In their stead, we find algorithms and the self-selection of media by users themselves. Yet, as we gain more insight into how these algorithms work, we see that they rely heavily on traditional editorial signs and signals. Equally, users tend—even when presented with limitless options—to gravitate to a handful of regular sources. The evidence, Schlosberg concludes, suggests that the agenda-setting power of traditional media remains in the new media environment, even if its distribution and consumption have become more complicated.

As these information intermediaries have come to dominate our digital space, so they have become the leading platforms by which and on which many of us express ourselves in this space. This includes self-publication (through posts, tweets, vlogs, blogs, photographs, and podcasts), distribution, and access (notably via social media feeds and search). Such dominance has implications for free expression, an issue that Ben Wagner explores in chapter 9. Many of the risks to digital free expression, Wagner argues, are a consequence not of information intermediaries per se, but of dominant information intermediaries like Facebook and Google. It is their dominance that means their systems, guidance, and governance direct and constrain our speech rights.
Yet, their systems were not established with this in mind, and have evolved in response to corporate, cultural, and commercial factors as well as to legislation and human rights. They have created their own rules and norms, based predominantly on US free speech protections (actual and perceived), coupled with ad hoc reactions to problems identified internally and externally. The consequence is that, for much of our speech online, we now adhere to rules and governance that are an unplanned mixture of cultural norms, personal preferences, corporate imperatives, legal requirements, and improvised exceptions. This has unintended political and social repercussions.

The section ends with Emily Bell's chapter on the growing dependence of the press on Silicon Valley, a chapter that raises serious concerns about the future of democracy and the Fourth Estate. The dominance of tech companies is so significant, and the commercial news environment changing
so fundamentally, Bell writes, that it "brings into question whether an independent and pluralistic free press can survive outside the embrace of vast social networks and search engines." News publishers have already become so heavily reliant on these platforms as a means of hosting and distributing their content—and, increasingly, reaching their audiences—that they have ceded much of their previous autonomy. Some news publishers, like Buzzfeed, built their success on the foundations of the social media giants Facebook, Twitter, and Snapchat. As a consequence, they now have to shape, edit, and promote their news to the standards of those services. Yet, though the platforms have taken over many of the press's functions over the last decade (and much of news outlets' advertising revenue), they do not share the same purpose or motivations. This raises critical democratic questions about the future of journalism and how social democracies will function in the future.
POLITICS
Damian Tambini opens the final section on politics by examining the charges leveled at social media in recent elections, particularly whether the dominance of these platforms—most notably Facebook—jeopardizes electoral legitimacy. Looking in detail at the 2016 UK EU Referendum and the 2017 UK General Election, Tambini presents evidence from interviews with the campaigners themselves, from election spending returns, and from data captured by the software WhoTargets.Me in 2017. The author finds evidence to support the centrality of Facebook to contemporary campaigns—"almost nothing went on traditional advertising," the chief executive of Vote Leave tells the author—indeed, the platform has become a "one-stop shop for fundraising, recruitment, profiling, segmentation, message targeting and delivery."

Taking a lead from the infamous Sun newspaper headline after the 1992 UK general election, Tambini therefore asks, "Was it Facebook Wot Won It?" The data—from the UK at least—is not conclusive, partly because much of the data that might prove the Facebook effect one way or another is inaccessible. Yet there are numerous aspects of the role of Facebook in elections that raise justifiable public concerns and risk undermining electoral legitimacy: the increasing importance of a foreign organization in the electoral process, the extent of knowledge the same organization has about voters, and how it uses or profits from that knowledge. None of these concerns would be so pressing, the author concludes, were Facebook not so dominant.
The search engine is, Robert Epstein argues, "the most powerful mind-control device ever invented," and this may be seen in its potential power to influence people's voting choices. The author cites two particular means of influence: the search engine manipulation effect (SEME) and the search suggestion effect (SSE). Both were identified by the author and his colleague Ronald E. Robertson, based on a series of research studies they have been conducting since 2010. In these studies, research participants were given the opportunity to learn more about political candidates using a search engine called "Kadoodle" (created for the purposes of the study). For the initial experiments, with US participants, the candidates were from Australia, in order to avoid political preconceptions. In these, the researchers found participants' views of candidates shifting by over 48% after one search. A later study took place in India, during the 2014 Lok Sabha national election, with real voters and current search results. Even in these circumstances, search results led to a 24.5% shift in the views of participants. Moreover, almost all the participants were unaware they were looking at biased search results.

As the author outlines, the influence of search engine results on opinions is particularly powerful due to factors such as operant conditioning: search engine users are trained, through repeated use, to believe that the top search results are the most accurate and trustworthy. The author then goes on to describe his subsequent research on Google's autocomplete function, and its potential to influence search choices.
His findings, he concludes, make him increasingly concerned that "our online environment is not only dominated by a very small number of players but also that these players have at their disposal new means of manipulation and control that are unprecedented in human history."

In chapter 13, Nick Diakopoulos, Daniel Trielli, Jennifer Stark, and Sean Mussenden also examine the role of search results in influencing people's votes. The chapter details four case studies, in each of which the authors have reverse-engineered Google search results about US presidential candidates in order to assess whether there may be biases in results. The first case study looks at the list of search results themselves and finds that official campaign sources tend to lead results, followed by news articles. On average, search results were more favorable to Democratic than Republican candidates. The second analysis assesses the Google "Issue Guide"—a table to the right of organic search results, which presented a list of 16 (later 17) political issues, and an associated statement about each issue by the candidate. In addition to discovering a distinct disparity in the number of statements associated with candidates (Democrats had an average of 258 compared to 147 for Republicans), the authors found a variety
of other biases, some of which appeared to be a consequence of editorial choices. The final two case studies look at the "In the News" box at the top of the results page, and the visual framing of the candidates through the choice of images featured. The chapter not only presents important insights into potential structural biases in search but also shows how methodologically challenging it is to conduct audits of this kind. Given how central Google is becoming to our access to information, including political information, studies such as this may become integral to our ability to question whether the information provided is fair and adequate.

Fabiana Zollo and Walter Quattrociocchi explore online echo chambers on Facebook and YouTube in their chapter. Using a large-scale dataset from both platforms, they examine the extent and behavior of echo chambers for different topics, in this case science and conspiracy theories, and for different issues, in this case the Brexit referendum and the Italian constitutional referendum. The authors apply a range of analyses to test the degree to which echo chambers exist, their homogeneity and polarization, the way in which information cascades within them, and how those within the chambers respond to new—particularly dissenting—information. Based on data from Italian, British, and American social media users, they provide not only empirical evidence for echo chambers but also evidence of the peculiar and distinctive dynamics between users within different chambers.

In the final chapter, John Naughton reflects on what the dominance of the two "pure digital" tech giants means in terms of shifting power. The size and reach of these companies gives them certain conventional powers with which we are familiar, such as economic clout and resources to invest in lobbying. However, these companies' scale and dominance also give them powers with which we are much less familiar—and that we are only starting to conceptualize.
These powers are, Naughton writes, byproducts of their dominance—powers that they did not, for the most part, aspire to or prepare for. They have enabled, for example, highly targeted and relatively invisible political campaigning. They have created a situation in which companies can virtually disappear from the digital space—should they sink too far down the search results or be deliberately obscured by Google (as a consequence of the legal "right to be forgotten").1
1. Many of these issues were discussed in a symposium organised by the LSE Media Policy Project in June 2016. The editors are grateful to the participants in this meeting, several of whom contributed chapters to this book, for comments and inspiration, and to the London School of Economics Department of Media and Communications for acting as hosts.
Part of the power of these tech giants derives from their successful co-option of our attention. Since we now live in an attention economy, those who successfully capture that attention necessarily gain the rewards of that economy. We are only now beginning to recognize that this includes political and social as well as financial rewards. The benefits to consumers are relatively apparent—in terms of our ability to communicate, to gain access to information and news, and to shop. The costs to society are becoming clear more slowly, though they may include harm to our political systems and to our social cohesion. How we respond, and the extent to which the harms might be reparable, we have yet to discover.
REFERENCES

Auletta, Ken. 2009. Googled: The End of the World as We Know It. London: Virgin Books.
Battelle, John. 2006. The Search: How Google and Its Rivals Rewrote the Rules and Transformed Our Culture. London: Nicholas Brealey.
BI Intelligence. 2017. "Amazon Accounts for 43% of US Online Retail Sales." Business Insider UK, February 3, 2017. http://uk.businessinsider.com/amazon-accounts-for-43-of-us-online-retail-sales-2017-2.
Bork, Robert. 1978. The Antitrust Paradox: A Policy at War with Itself. New York: Basic Books.
Brandeis, Louis. 1914. Other People's Money and How the Bankers Use It. New York: Stokes, 163. https://archive.org/details/otherpeoplesmone00bran.
Coren, Michael J. 2017. "Facebook's Global Expansion No Longer Has Its Mission Statement Standing in Its Way." Quartz, June 22, 2017. https://qz.com/1012461/facebook-changes-its-mission-statement-from-sharing-making-the-world-more-open-and-connected-to-build-community-and-bring-the-world-closer-together/.
Dormehl, Luke. 2012. The Apple Revolution: Steve Jobs, the Counterculture and How the Crazy Ones Took Over the World. London: Virgin Books.
Ezrachi, Ariel, and Maurice E. Stucke. 2016. Virtual Competition: The Promise and Perils of the Algorithm-Driven Economy. Cambridge, MA: Harvard University Press.
FB Newsroom. 2016. "Community Standards and Facebook Live." Facebook Newsroom, July 8, 2016. https://newsroom.fb.com/news/h/community-standards-and-facebook-live/.
Google. 2017. "Our Story: From the Garage to the Googleplex." https://www.google.com/intl/en/about/our-story/.
GroupM. 2017. "Interaction: Preview Edition for Clients and Partners." February 2017. https://groupmp6160223111045.azureedge.net/cmscontent/admin.groupm.com/api/file/2521.
Harcourt, A. 2015. "Media Plurality: What Can the European Union Do?" In Media Power and Plurality: From Hyperlocal to High-Level Policy, edited by S. Barnett and J. Townend. London: Palgrave.
Helberger, Natali, Katharina Kleinen-von Königslöw, and Rob van der Noll. 2014. Convergence, Information Intermediaries and Media Pluralism—Mapping the Legal, Social and Economic Issues at Hand. Amsterdam: Institute for Information Law.
Hofstadter, Richard. 2008. The Paranoid Style in American Politics: And Other Essays. New York: Random House.
Howard, Philip N. 2006. New Media Campaigns and the Managed Citizen. Cambridge: Cambridge University Press.
Isaacson, Walter. 2011. Steve Jobs. New York: Simon and Schuster.
Karppinen, Kari. 2012. Rethinking Media Pluralism. New York: Fordham University Press.
Kirkpatrick, David. 2010. The Facebook Effect: The Real Inside Story of Mark Zuckerberg and the World's Fastest-Growing Company. London: Virgin Books.
Moore, Martin. 2016. Tech Giants and Civic Power. London: Centre for the Study of Media, Communication & Power, King's College London.
Morozov, Evgeny. 2011. The Net Delusion: How Not to Liberate the World. London: Allen Lane.
Morozov, Evgeny. 2013. To Save Everything, Click Here. New York: Public Affairs.
Nowak, Mike, and Guillermo Spiller. 2017. "Two Billion People Coming Together on Facebook." Facebook Newsroom, June 27, 2017. https://newsroom.fb.com/news/2017/06/two-billion-people-coming-together-on-facebook/.
Nunez, Michael. 2016. "Former Facebook Workers: We Routinely Suppressed Conservative News." Gizmodo, May 9, 2016. http://gizmodo.com/former-facebook-workers-we-routinely-suppressed-conser-1775461006.
Pasquale, Frank. 2015. The Black Box Society: The Secret Algorithms That Control Money and Information. Cambridge, MA: Harvard University Press.
Posner, Richard A. 1979. "The Chicago School of Antitrust Analysis." University of Pennsylvania Law Review 127: 925–48.
Rochet, Jean-Charles, and Jean Tirole. 2003. "Platform Competition in Two-Sided Markets." Journal of the European Economic Association 1, no. 4: 990–1029.
Rosen, Jeffrey. 2016. Louis D. Brandeis: American Prophet. New Haven, CT: Yale University Press.
Ross, Alice, and Julia Carrie Wong. 2016. "Facebook Deletes Norwegian PM's Post as 'Napalm Girl' Row Escalates." Guardian, September 9, 2016. https://www.theguardian.com/technology/2016/sep/09/facebook-deletes-norway-pms-post-napalm-girl-post-row.
Schneier, Bruce. 2015. Data and Goliath: The Hidden Battles to Collect Your Data and Control Your World. New York: W. W. Norton and Company.
Smith, Adam. 1776 (1998). An Inquiry into the Nature and Causes of the Wealth of Nations: A Selected Edition. Edited by Kathryn Sutherland. Oxford: Oxford University Press.
Sparrow, Andrew. 2017. "WhatsApp Must Be Accessible to Authorities, Says Amber Rudd." Guardian, March 26, 2017. https://www.theguardian.com/technology/2017/mar/26/intelligence-services-access-whatsapp-amber-rudd-westminster-attack-encrypted-messaging.
Stone, Brad. 2013. The Everything Store: Jeff Bezos and the Age of Amazon. New York: Back Bay Books/Little, Brown.
United States Congress. 1902. "Bills and Debates in Congress Relating to Trusts, Fifty-First Congress." Record 2,455–2,474, US Government Printing Office, March 21, 1890. http://www.appliedantitrust.com/02_early_foundations/3_sherman_act/cong_rec/21_cong_rec_2455_2474.pdf.
Urofsky, Melvin. 2009. Louis D. Brandeis: A Life. New York: Schocken Books.
Valcke, P. 2012. "Challenges of Regulating Media Pluralism in the European Union: The Potential of Risk-Based Regulation." Quaderns del CAC 38, 15, no. 1: 25–36.
Viner, Katherine. 2016. "How Technology Disrupted the Truth." Guardian, July 12, 2016. https://www.theguardian.com/media/2016/jul/12/how-technology-disrupted-the-truth.
Zielonka, Jan. 2014. Is the EU Doomed? Cambridge: Polity Press.
Zuckerberg, Mark. 2016. Facebook post, November 13, 2016. https://www.facebook.com/zuck/posts/10103253901916271.
SECTION 1
Economy
CHAPTER 1
The Evolution of Digital Dominance
How and Why We Got to GAFA

PATRICK BARWISE AND LEO WATKINS
Competition is for losers. If you want to create and capture lasting value, look to build a monopoly.
—Peter Thiel, cofounder of PayPal and Palantir
Apple, Alphabet (Google), Microsoft, Amazon, and Facebook are now the five most valuable public companies in the world by market capitalization.1 This is the first time ever that technology ("tech") companies have so dominated the stock market—even more than at the end of the 1990s' Internet bubble. They are a large part of everyday life in developed economies and increasingly elsewhere. They wield enormous power, raising difficult questions about their governance, regulation, and accountability. This chapter is about how and why this came about.

These tech giants vary in many ways. For instance, Apple is primarily a hardware company and Amazon has a huge physical distribution network, while Google, Microsoft, and Facebook are mainly "weightless" online businesses. Nevertheless, they share several features:
1. As of June 28, 2017 (see Table 1.1). A public company’s market capitalization is its value to its shareholders (share price times number of shares).
• A US West Coast base;
• Dominant founders: Steve Jobs (Apple), Larry Page and Sergey Brin (Google), Bill Gates (Microsoft), Jeff Bezos (Amazon), and Mark Zuckerberg (Facebook) (Lex 2017);
• Significant control of the digital markets on which consumers and other companies depend;
• A business model to "monetize" this market power by charging users and/or others, such as advertisers, leading to sustained supernormal profits and/or growth;
• A hard-driving, innovative corporate culture epitomized by Facebook's former motto "Move fast and break things."

They have combined annual revenue of over $500bn, net income of over $90bn, and market capitalization of over $2.8 trillion (Table 1.1). Microsoft has been one of the world's most valuable companies since the 1990s, but the other four—"GAFA" (Google, Apple, Facebook, Amazon)—are relative newcomers to the list.
A 60-YEAR PATTERN: DOMINANT TECH PLAYERS CAN BE ECLIPSED, BUT NOT DISPLACED
This is the latest stage of a 60-year pattern, with the emergence of increasingly important new technology markets. These typically start as highly contested but soon become dominated by one (or two) US companies: • 1960s mainframes (IBM) • 1980s PCs (Microsoft and Intel) • 1990s the World Wide Web, creating multiple new online markets including search (Google), e-commerce (Amazon), and social networking (Facebook) • 2010s the mobile Internet (Apple and Google/Android) plus numerous mobile apps/services (GAFA and others, mostly based in the United States and China). These companies operate in markets with important winner-take-all features such as cost and revenue economies of scale, scope, and learning, and often high switching costs, locking users in. Their strategies typically include creating proprietary standards and platforms; gathering and exploiting vast quantities of user data; product bundling; building large- scale infrastructure, some of which is then rented to other companies; strategic acquisitions; branding and intellectual property (trademark and,
[ 22 ] Economy
Table 1.1. THE BIG FIVE US TECHNOLOGY COMPANIES

| Company | Founded | Based | Main Product | Revenue (2016) | Market Capitalization (28.6.17) | After-Tax Profit (Net Income) (2016) | P/E Ratio |
|---|---|---|---|---|---|---|---|
| Apple | 1976 | Cupertino, CA | Hardware | $216bn | $749bn | $45.7bn | 16 |
| Alphabet (Google) | 1998 | Mountain View, CA | Search | $90bn | $656bn | $19.5bn | 34 |
| Microsoft | 1975 | Redmond, WA | PC Software | $85bn | $534bn | $16.8bn | 32 |
| Amazon | 1994 | Seattle, WA | E-commerce | $136bn | $467bn | $2.4bn | 195 |
| Facebook | 2004 | Menlo Park, CA | Social network | $28bn | $436bn | $10.2bn | 43 |
| Total | – | – | – | $555bn | $2,843bn | $94.6bn | 30 |
Source: Company reports and Dogs of the Dow (2017). P/E (Price/Earnings) ratio = share price/latest earnings per share = market capitalization/net income.
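The identity in the source note can be used to sanity-check the table. A minimal sketch in Python (figures as printed in Table 1.1, in $bn):

```python
# Recomputing Table 1.1's P/E column from the identity in the source note:
#   P/E = market capitalization / net income.
# Figures (market cap, net income, both in $bn) are copied from the table.
companies = {
    "Apple":     (749, 45.7),
    "Alphabet":  (656, 19.5),
    "Microsoft": (534, 16.8),
    "Amazon":    (467, 2.4),
    "Facebook":  (436, 10.2),
}

for name, (cap, income) in companies.items():
    print(f"{name}: P/E = {cap / income:.0f}")

total_cap = sum(cap for cap, _ in companies.values())
total_income = sum(inc for _, inc in companies.values())
print(f"Total: P/E = {total_cap / total_income:.0f}")  # matches the table's 30
```

Each rounded ratio matches the printed column (16, 34, 32, 195, 43, and 30 for the total), confirming the table's internal consistency.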
especially, patent) litigation; regulatory and tax arbitrage; and political lobbying. The result is dominance of at least one product or service category—in some cases several—leading to sustained high profits, which are then invested in (1) protecting and enhancing the core business and (2) high-potential new markets, especially where the company can use the same technology, infrastructure, or brand.

Because of these markets’ winner-take-all features, it is extremely hard to displace a dominant, well-managed tech business from its leadership of its core product market. Instead, the greater risk is that it will be eclipsed by another company dominating a new, eventually bigger, adjacent market with similar winner-take-all qualities. The new market may then overshadow the previous one, without necessarily destroying it (Thompson 2014). For instance, IBM still dominates mainframes and Microsoft still dominates PC software, but these are both mature markets that have been surpassed by newer, larger ones for online, mobile, and cloud-based services.

To head off this threat and exploit the new opportunities, dominant tech companies invest heavily in high-potential, emerging product markets and technologies, both organically and through acquisitions. Current examples include the augmented and virtual reality (AR/VR) platforms being developed by Apple, Google, and Facebook; the race between Google, Apple, Uber, Tesla, and others to develop self-driving car technology; and the creation of connected, voice-activated home hubs such as Apple’s HomePod, the Amazon Echo, and Google Home.

The rest of the chapter is in three sections: the theory, the five company stories (Microsoft and GAFA), and the question: will the market end the tech giants’ digital dominance?
THE THEORY: WHY TECH MARKETS ARE WINNER-TAKE-ALL
Traditional economics goes some way toward explaining these companies’ market dominance. In particular, most tech markets exhibit extreme economies of scale. Software and digital content have high fixed development costs but low-to-zero marginal (copying and online distribution) costs. Unit costs are therefore almost inversely proportional to sales volume, giving a big competitive advantage to the market leader. Digital products are also (1) “nonrivalrous”—unlike, say, pizzas, cars, or haircuts, they can be used simultaneously by a limitless number of people—and (2) “experience goods”—users need to try them and learn
about them (from personal experience, experts, and peers) to judge their quality.2 Their nonrivalrous nature often leads to business models based on advertising (free services, maximizing reach) and/or continuing customer relationships rather than one-off sales. The fact that these products are “experience goods” (1) increases the value of strong, trusted brands to encourage trial and (2) creates switching costs for existing users, further benefiting the market leader. The tech giants have some of the most valuable brands in the world: leading marketing company WPP now ranks Google, Apple, Microsoft, Amazon, and Facebook, in that order, as its top five global brands, with a combined value of $892bn (Kantar Millward Brown 2017).3 These estimates are of the shareholder value of consumer brand equity. These companies also have significant employee brand equity, helping them attract the best technical, managerial, and commercial talent—another winner-take-all factor.

Crucially, however, digital markets also have two other important characteristics that further encourage market concentration:

1. Many digital services serve communication or linking functions, generating both direct (within-market) and indirect (cross-market) network effects. These also occur in other markets but are especially prevalent and important in digital markets.
2. Digital technology enables large-scale real-time collection and automated analysis of usage data, which can be exploited both tactically and strategically, especially through continuous product improvement and personalization. The result is a recursive relationship between adoption and usage, product/service quality, and further adoption and usage, further reinforcing the winner-take-all dynamic.
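The extreme scale economies discussed above can be made concrete with a toy calculation: with a high fixed development cost F and near-zero marginal cost m, average unit cost F/q + m falls almost inversely with sales volume q. The cost figures below are invented for illustration, not drawn from the chapter.

```python
# Illustrative only: hypothetical cost figures, not from the chapter.
F = 100_000_000  # fixed development cost in dollars (hypothetical)
m = 0.05         # marginal (copying/online distribution) cost per unit

def unit_cost(q: int) -> float:
    """Average cost per unit at sales volume q: F/q + m."""
    return F / q + m

# A firm selling 100x more units has roughly 100x lower unit costs —
# the market leader's structural cost advantage.
for q in (1_000_000, 10_000_000, 100_000_000):
    print(f"{q:>11,} units -> ${unit_cost(q):,.2f} per unit")
```

At these hypothetical figures, unit cost falls from about $100 per unit at one million units to about $1 at one hundred million, which is why the market leader can undercut or out-invest any smaller rival.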
Tech companies’ strategies aim to exploit these winner-take-all market characteristics as well as classic sources of competitive advantage: product quality and design; marketing and branding; brand extensions and bundling; and various forms of customer lock-in. Increasingly, the companies also operate in multiple product markets, often with products and services offered free or below cost as part of a wider strategy to protect and extend 2. Economic analysis of these features predates the Internet: the literature on nonrivalrous (and, in the first instance, nonexcludable) “public goods” like defense and free-to-air broadcasting goes back to the 1950s (Samuelson 1954; Coase 1959) and the pioneering paper on experience goods is Nelson (1970). 3. The other two main valuation companies, Interbrand (2016) and Brand Finance (2017), also value them all in their top ten apart from Interbrand’s #15 ranking for Facebook in 2016.
The Evolution of Digital Dominance [ 25 ]
their core market dominance and capture more data. Examples include Amazon’s Kindle and Google’s Maps and Gmail. We now discuss these distinctive winner-take-all characteristics of digital markets in more detail under four headings: direct network effects; indirect network effects (“multisided markets”); big data and machine learning; and switching costs and lock-in.
Direct Network Effects
In 1974, Jeffrey Rohlfs, an economist at Bell Laboratories, published a seminal paper, “A Theory of Interdependent Demand for a Communications Service.” Bell Labs’ then-owner AT&T was contemplating the possible launch of a video telephony service, and Rohlfs was researching how this should be priced if it went ahead. His mathematical model was based on the key qualitative insight (Rohlfs 1974, 16) that “[t]he utility that a subscriber derives from a communications service increases as others join the system,” enabling each person to communicate with more others (although some adopters are more influential than others in driving network externalities, see Tucker [2008]). Economists call this effect a direct network externality (Katz and Shapiro 1985).4 In the context of Rohlfs’s paper and this chapter, the relevant network effects are positive (“revenue economies of scale”), but they can be negative, as with congestion in transport and communication networks. There can also be both positive and negative “behavioral” direct network effects if other consumers’ adoption of a product makes it either more, or less, acceptable, fashionable, or attractive.
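Rohlfs's core insight, that each subscriber's utility rises with the number of other subscribers, implies a critical mass below which adoption unravels. The following toy model is in that spirit only; the uniform-valuation setup and numbers are ours, not Rohlfs's actual 1974 specification:

```python
# Toy model in the spirit of Rohlfs's insight (not his actual 1974
# specification): a user with private valuation v adopts when v times the
# current adoption share f exceeds the price p. With valuations uniform on
# [0, 1], the share of adopters given f is 1 - p/f (or 0 when f <= p).
# Iterating to a fixed point shows why a service seeded below critical
# mass collapses while one seeded above it snowballs.
def equilibrium_share(seed: float, p: float, steps: int = 200) -> float:
    f = seed
    for _ in range(steps):
        f = max(0.0, 1.0 - p / f) if f > p else 0.0
    return f

print(equilibrium_share(seed=0.095, p=0.09))  # below critical mass: collapses to 0
print(equilibrium_share(seed=0.50, p=0.09))   # above it: snowballs toward 0.9
```

Seeded just below the unstable equilibrium, adoption collapses to zero; seeded above it, adoption snowballs to the high equilibrium — the winner-take-all dynamic in miniature, and the reason Rohlfs was asked about launch pricing.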
Indirect Network Effects (“Multisided Markets”)
Most tech companies are, at least to a degree, “platform” businesses, creating value by matching customers with complementary needs, such as software developers and users (Microsoft’s MS-DOS and Apple’s App Store), publishers and book buyers (Amazon), drivers and potential passengers (Uber), and, in many cases including Google and Facebook, advertisers and consumers. These network effects are called “indirect” because—unlike with the direct, single-market externalities discussed previously—the value to 4. “Externality” because it involves external third parties in addition to the individual firm and customer. We interchangeably use the less technical term “network effect.”
participants in each market (e.g., diners) depends on the number of participants in the other market (e.g., restaurants), and vice versa. Once a platform dominates the relevant markets, these network effects become self-sustaining as users on each side help generate users on the other. Most indirect network effects are, again, positive, although they too can be negative for behavioral reasons if some participants are antisocial or untrustworthy, for example, posting malicious reviews on TripAdvisor or fake news on Facebook, or overstating the size and quality of their homes (or, conversely, throwing a noisy, late-night party as a guest) on Airbnb. Platforms often incorporate governance processes to limit these behaviors (Parker, Van Alstyne, and Choudary 2016, Chapter 8). The need to appeal to both buyers and sellers simultaneously has been known since the first organized markets. But there was no formal modeling of two-sided markets until the late 1990s, when Rochet and Tirole (2003) noted structural similarities between the business models of payment card businesses, telecommunication networks, and computer operating systems. All exhibited network effects under which the value of the service for one group (e.g., payment card users) depended on how many members of the other group (e.g., merchants) were in the system, and vice versa.5 More recent work uses the term “multisided”—rather than “two-sided”—markets because some platforms facilitate interaction between more than two types of participant. For instance, Facebook connects six distinct groups: friends as message senders, friends as message receivers, advertisers, app developers, and businesses as both message senders and receivers (Evans and Schmalensee 2016a, 110).
Digital devices with compatible software, such as Microsoft’s Xbox video games player, exhibit indirect network effects because (1) each device’s installed user base constitutes an addressable market for software developers and (2) the range and quality of software available for the device are key to its user appeal (Nair, Chintagunta, and Dubé 2004; Lee 2013). Similarly, automated online marketplaces such as Amazon, Airbnb, and Uber operate in multisided markets with indirect network effects. All businesses that depend on indirect network effects face the “chicken-and-egg” challenge of achieving critical mass in both or all the key markets simultaneously. Until the business reaches this point, it will need to convince investors that early losses will be justified by its eventual dominance of a large and profitable multisided market. Most start-up tech businesses, 5. These effects were also modeled independently by Parker and Van Alstyne (2005), who had noticed that most successful 1990s Internet start-ups had a two-sided market strategy.
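The chicken-and-egg problem can be sketched as a pair of mutually dependent participation equations; the functional forms and constants below are invented purely for illustration:

```python
# Illustrative two-sided market model (invented functional forms, not
# from the chapter): each side's participation share depends on the
# other side's size, so growth on one side recruits the other until the
# platform reaches a joint equilibrium.
def two_sided_equilibrium(buyers: float, sellers: float, steps: int = 100):
    for _ in range(steps):
        buyers = sellers / (sellers + 0.5)   # buyers join as sellers grow
        sellers = buyers / (buyers + 0.25)   # sellers join as buyers grow
    return round(buyers, 3), round(sellers, 3)

print(two_sided_equilibrium(0.0, 0.0))   # no seed on either side: stuck at zero
print(two_sided_equilibrium(0.0, 0.01))  # a tiny subsidized seed tips it to equilibrium
```

With no participants on either side the platform is stuck at zero, but even a tiny subsidized seed on one side tips it into the self-sustaining equilibrium — which is why platforms so often launch with one side free or paid to participate.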
such as Twitter, Uber, Snapchat, and Pinterest, are heavily loss-making for years and the casualty rate is high. Achieving critical mass is easier if the product or service offers immediate benefits independent of network effects. For instance, at its 2007 launch, the iPhone already offered 2G mobile (voice, texts, e-mail, and web browsing) and music, with a market-leading touch-screen interface, driving rapid adoption. The App Store (2008) then created a virtuous circle of further adoption and app development. Hosting a large digital platform requires massive infrastructure—servers, data storage, machine learning, payment systems, and so forth. Most of these have marked economies of scale and scope, enabling the business to take on other markets and to rent out capacity to other firms, further increasing its efficiency and profitability. The preeminent example is Amazon—both its logistics arm and its cloud computing business Amazon Web Services (AWS). Google, too, sells cloud storage, machine learning, data analytics, and other digital services that have grown out of, or complement, its core search business, while Microsoft is building its cloud services business, Azure.
Big Data and Machine Learning
The Internet enables tech companies to collect extensive, granular, real- time usage data at low cost. The resulting “big” datasets are challenging for traditional software to process because of their size, complexity, and lack of structure, but new data analytics techniques, increasingly automated (“machine learning”), can use big data to drive relentless improvement in products, services, pricing, demand forecasting, and advertising targeting. For instance, Netflix constantly analyzes viewing and preference data to inform its content purchases and commissions and to automate its personalized recommendations to users. The more detailed the data, the wider the range of transactions, the bigger the user sample, and the greater the company’s cumulative analytics experience, the better: quantity drives quality. Data and machine learning therefore offer both cost and revenue economies of scale, scope, and learning, encouraging digital businesses to offer free or subsidized additional services, at least initially, to capture more data. The business benefits of big data are both tactical (continuous improvement) and strategic. These are interlinked: over time, continuous improvement can give the dominant provider an almost unassailable strategic advantage in service quality, customization, message targeting, and cost
reduction. Subject to privacy regulations (currently being loosened in the United States, see Waters and Bond [2017]), the data can also be sold to other, complementary companies, enabling them to obtain similar benefits. Finally, data can be analyzed at a more aggregate level to provide strategic insight into market trends. An important example is AWS’s and other cloud companies’ access to aggregate data on their many start-up clients, giving early intelligence on which are doing well and might be a competitive threat and/or investment opportunity. Big data and machine learning can powerfully reinforce network effects, increasing the dominant companies’ returns to scale and helping to entrench incumbents and deter market entry. However, economic theory has not yet caught up with this. For instance, Evans and Schmalensee (2016a) do not mention big data, analytics, algorithms, or machine learning. Parker, Van Alstyne, and Choudary (2016, 217–20) do list leveraging data as one of the ways in which platforms compete, but their discussion of it is barely two pages long and gives no references, reflecting the lack of relevant economic research to date. There has been some broadly related work. Chen, Chiang, and Storey (2012) edited a special issue of MIS Quarterly on the use of big data analytics in business intelligence, while George, Haas, and Pentland (2014) and Einav and Levin (2014) explore its potential in management and economics research, respectively. But overall, although data and machine learning are key drivers of the tech giants’ market and civic power, existing economic theory provides an insufficient framework for making this power accountable and regulating it to sustain effective competition (Feijoo, Gomez-Barroso, and Aggarwal 2016; Khan 2017).
Switching Costs and Lock-In
Finally, all these companies use multiple ways to lock users in by increasing the cost or effort of switching to a rival product or service. As already noted, it takes time and effort to learn how to use unfamiliar systems and software. The greater the amount of such learning (“brand-specific consumer human capital”), the greater is the switching cost (Klemperer 1987; Ratchford 2001; Huang 2016). Often, there are also incompatibility issues locking users into a particular company’s ecosystem (Iansiti and Levien 2004) or “walled garden”: for instance, apps bought on iOS cannot be carried over to an Android device. Similarly, users’ personal data archives may not be portable to another platform. Some services’ utility also increases with use by allowing for customization by the user (e.g., creating playlists on iTunes or Spotify) and/or the
company (based on the individual’s usage data) or enabling the user to accrue, over time, a reputation or status (e.g., Amazon marketplace ratings) or to accumulate content they do not want to lose (e.g., Facebook message histories), all of which reinforces lock-in.
Conclusion: Digital Markets Are Winner-Take-All and the Winners Are Hard to Dislodge
In this section, we have discussed several structural reasons why digital markets tend to be winner-take-all: economies of scale; important user and employee brands; direct and indirect network effects; big data and machine learning; and other factors that enable strategies based on switching costs and lock-in. The tech giants’ market dominance is strengthened by their corporate cultures. They are all ambitious, innovative, and constantly on the lookout for emerging threats and opportunities, exemplifying Grove’s (1998) view that “only the paranoid survive.” This makes them tough competitors. Finally, their tax avoidance further increases their net income and competitive advantage. Given all these factors, once a tech platform dominates its markets, it is very hard to dislodge. For a rival to do so, it would need to offer a better user experience, or better value for money, in both or all the markets connected by the platform, in a way that the incumbent could not easily copy, and over a sufficient timescale to achieve market leadership. For example, Google dominates both user search and search advertising. To dislodge it—as several have tried to do—a rival would need to offer users better searches and/or a better overall experience than Google, or some other incentive to switch to it (since Google searches are free, it cannot be undercut on price), long enough to overcome their habitual “googling” for information. Only by attracting more high-value users than Google would the challenger then be able to overtake it in search advertising revenue, although it could perhaps accelerate this (at a cost) by offering advertisers lower prices to compensate for its lower reach until it overtook Google.
The overall cost would be huge—tens of billions—and with a high risk of failure, given Google’s alertness and incumbency advantages: search quality, superior user interface, brand/habitual usage, dominant reach and scale in search advertising, leadership in big data and machine learning, and deep pockets. However, competitive platforms can coexist if: (1) users can “multihome,” that is, engage with more than one platform (for instance, many consumers use several complementary social networks) and/or
(2) developers can create versions of their products for several platforms at little incremental cost. Having discussed the drivers of tech market concentration in generic and theoretical terms, we now turn to the five company stories and the extent to which some combination of these factors has, in practice, enabled each of them to achieve market dominance.
THE FIVE COMPANY STORIES
We here summarize the five companies’ individual histories, strategies, business models, and current market positions and concerns. Their stories have been much more fully documented elsewhere, for example, Wallace and Erickson (1992), Isaacson (2011), Auletta (2009), Kirkpatrick (2010), and Stone (2013).
Microsoft
Microsoft was founded by Bill Gates (19) and Paul Allen (22) in 1975 as a supplier of microcomputer programming language interpreters.6 Its big break came in 1980, when IBM gave it a contract to supply an operating system for the forthcoming IBM PC. Microsoft bought the software for $75,000 from another company, hired the programmer who wrote it, branded it MS-DOS, and licensed it to IBM and all the PC clone manufacturers, receiving a licence fee on every sale. It then acquired and developed a series of PC software products: Word (1983), Excel (1985), Windows—MS-DOS with a graphical user interface emulating that of the Apple Mac (1985), PowerPoint (1987), and Office—combining Word, Excel, PowerPoint, and other applications (1989). In 1995, Windows 95, a major upgrade using faster Intel processors, was bundled with Internet Explorer, which soon eclipsed Netscape as the dominant web browser. Users familiar with both the Apple Mac and the Windows/Intel PC generally preferred the Mac. But the PC, widely marketed by IBM and multiple clone manufacturers, outsold the Mac and soon became the standard, first in the corporate world and then across the whole market apart from niche segments such as desktop publishing, where the Mac’s superiority won out. Every PC came with MS-DOS and, later, Windows and
6. Allen left in 1983 after being diagnosed with Hodgkin’s lymphoma.
Office, making Microsoft the dominant PC software supplier. Shapiro and Varian (1999, 10–11) described the Microsoft-Intel approach as a classic strategy based on network effects, contrasting it with Apple’s strategy of controlling and integrating both the hardware and the software: “In the long run, the ‘Wintel’ strategy of strategic alliance was the better choice.” Today, Microsoft remains the dominant PC software supplier with a global market share of 89%, versus 8% for Apple’s OS X and 3% for all others (Netmarketshare 2018). However, Microsoft has struggled to replicate this success elsewhere. Efforts under Steve Ballmer (CEO 2000–2014) to extend Windows to mobile devices repeatedly foundered, especially after the launch of Apple’s iPhone and iOS (2007) and Google’s Android mobile operating system (2008). Microsoft tried again to create a Windows mobile ecosystem based around Nokia’s handset division, acquired for $7.9bn in 2013, but this too failed. Only 15 months later, under new CEO Satya Nadella, it took a $7.5bn impairment charge on the acquisition plus $2.5bn in restructuring costs.7 Ballmer’s resignation caused Microsoft’s stock price to jump over 7% (Reisinger 2013). Since the 2008 launch of Google Chrome, Microsoft has also lost share in the web browser market, despite bundling Internet Explorer with Windows since 1995. In search, its estimated cumulative losses were $11bn by 2013 (Reed 2013). However, its Bing search engine finally turned a profit in 2015 (Bright 2015), mainly as the default for Windows 10, iOS, Yahoo!, and AOL. Historically, Microsoft’s most successful move away from PC software was into video game consoles. This was initially a defensive move prompted by fears that Sony’s PlayStation 2 would lure games players and developers away from the PC, but Microsoft’s Xbox, launched in 2001, succeeded in its own right. Since 2012, Microsoft has also marketed PCs, laptops, and other devices under the Surface brand name, with some success. 
Microsoft’s challenge today is that the PC is no longer most users’ main device—and Apple Macs and Google Chromebooks are also eating into its installed PC base. In response, it has set about transforming itself into a major player in cloud computing and office productivity services. It bought Skype in 2011 for $8.5bn, giving it a communications tool to integrate with other products like Office 365, the Lync enterprise phone platform, and real-time translation software (Bias 2015; Tun 2015). With this 7. Microsoft does, however, receive an estimated $2bn a year in patent royalties from Android device manufacturers (Yarow 2013), the only positive legacy of its expensive 15-year effort to build a significant mobile business.
combination (Skype for Business), it aims both to shore up its core PC software business and to create new office service opportunities, especially in the enterprise market. Its biggest gamble to date is the $26.2bn acquisition of the loss-making professional networking site LinkedIn in June 2016. Nadella claimed that the main aim was to exploit the data on LinkedIn’s 433m users to “reinvent business processes and productivity” (Waters 2016). More prosaically, salespeople using Microsoft software could download LinkedIn data on potential leads to learn about their backgrounds, interests, and networks. Another aim may be to improve Microsoft’s reputation and network in Silicon Valley (Hempel 2017). Microsoft remains a powerful, highly profitable force and is undergoing rapid change under Satya Nadella. Nevertheless, since the millennium it has been increasingly overshadowed by the GAFA companies.
Apple
Apple began as a personal computer company, but, as discussed earlier, lost out to Microsoft and Intel in that market. Its subsequent success, making it the world’s most valuable public company today, stems from its mobile devices and ecosystem, especially the iPod and iTunes (2001), iPhone and iOS (2007), App Store (2008), and iPad (2010). The launch of the App Store created a classic two-sided market. Consumers bought iPhones because iOS had the best apps, and developers prioritized iOS because it offered the best addressable market: compared with users of other platforms, iOS users spent more on apps and the devices they owned were more uniform, reducing app development costs.8 Underpinning all this was Apple’s aesthetic and technical design edge, distinctive branding, and positioning as user-friendly rather than nerdish. The iPhone is also a personal device, not aimed at companies, as PCs were initially, increasing the scope for premium pricing. Since 2010, Apple has sustained and extended its ecosystem by constantly adding new products (e.g., Siri and Watch) and features, driving repeated user upgrades to the latest device version. The breadth and quality of the user experience is also encouraging some PC users to switch to Macs. Finally, Apple’s store network gives it a direct route to market, protects
8. Also, because iOS was based on the Mac operating system, Mac developers were able to write software for it with minimal retraining.
it from being squeezed by other retailers, boosts its brand exposure, and enables it to provide a superior, walk-in customer service. Neither the iPod, nor the iPhone, nor the iPad was the first product in its category, but each met real consumer needs and delivered a much better user experience than the competition. Together with Apple’s design edge and relentless incremental innovation (Barwise and Meehan 2011, 99–100), this has enabled the company to charge premium prices and turn its products into status symbols. Some, such as the Watch, have struggled to justify their premium prices, but the recent addition of contactless technology to the iPhone is encouraging retailers to adopt contactless payment terminals: Apple aims to use the scale of iPhone ownership to create an interactive environment for the Watch, justifying its high price, as the iPod and iTunes prepared the ground for the iPhone. Apple is the world’s most profitable public company and still dominates the premium end of the smartphone and tablet markets. However, as the rate of iPhone improvements slows and it runs out of new markets to conquer, it is increasingly turning toward its services to drive profits, including its commissions on app sales and in-app purchases in free-to-play games (Thompson 2017a). Meanwhile, it is constantly fighting the threat of hardware commoditization. The main company behind that threat is Google.
Google
Because the Internet is unimaginably vast, its value depends crucially on users’ ability to find what they are looking for. In the early 1990s, the number of websites became too large for a simple index. By 1994, there were dozens of commercial search engines aiming to meet this growing need, using the relative “density” of the search terms (keywords) on different sites—a simple measure of relevance—to rank the results. They had a range of business models, all directly or indirectly based on display advertising. Google began in 1996 as a research project by Stanford PhD students Larry Page and Sergey Brin. Page and Brin’s key insight was that, from a user perspective, search results should be ranked by each site’s importance as well as its relevance, reflected in the number and importance of other sites that linked to it. The resulting PageRank technology (named after Larry Page) was a big driver of their subsequent success, but far from the whole story. Page and Brin incorporated Google in 1998 with funding from angel investors including Amazon founder Jeff Bezos. In early 1999, Excite
turned down an offer to buy it for $750,000, but by June that year, it had attracted $25m in venture capital (VC) funding. Its initial business model was based on sponsorship deals sold by sales reps on Madison Avenue. The breakthrough came in October 2000, when it started selling search advertising using its AdWords system, with advertisers bidding for keywords in real time. This auction, combined with cookie-based personalization, still determines which adverts each user sees and their ranking on the page.9 From the launch of AdWords in 2000, Google was a textbook success based on network externalities—literally: that same year it hired as chief economist Hal Varian, who coauthored the key book, Shapiro and Varian (1999). It succeeded by meeting the needs of both markets better than the competition. Users received the most relevant and important search results quickly and at no cost, on an attractive, uncluttered page with no distracting pop-up or banner ads. The only advertisements were short, text-based, relevant, and clearly distinguished from the natural search results. Meanwhile, advertisers received an efficient, highly targeted way of reaching potential customers actively looking for information using specific keywords. They could pay per click or even per customer acquired, increasing accountability and reducing risk. Marketing investment rapidly shifted from other media like print classifieds, leading to dramatic revenue and profit growth. Page and Brin hired Eric Schmidt as CEO in 2001. Three years later, Google’s initial public offering raised $1.67bn for about 7% of the company, giving it a market capitalization of over $23bn. Big data and machine learning lie at the heart of Google’s strategy. The more data it has about each user, the better it can understand the context and intention behind every search and serve relevant results and well-targeted advertising.
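The keyword auction described above can be sketched as follows. AdWords has historically ranked ads by bid combined with an ad quality measure and charged each winner just enough to hold its position (a generalized second-price rule); the exact mechanics have changed repeatedly over the years, so this simplified model is illustrative rather than a description of Google's actual system:

```python
# Simplified generalized second-price keyword auction (illustrative,
# not Google's actual ranking rules).
def rank_ads(bids, slots):
    """bids maps advertiser -> (bid per click, quality score).

    Ads are ranked by bid * quality; each winner pays the minimum bid
    that would keep it ahead of the next-ranked ad.
    """
    ranked = sorted(bids.items(), key=lambda kv: kv[1][0] * kv[1][1], reverse=True)
    results = []
    for i, (name, (bid, quality)) in enumerate(ranked[:slots]):
        if i + 1 < len(ranked):
            _, (next_bid, next_q) = ranked[i + 1]
            price = next_bid * next_q / quality  # just enough to outrank the next ad
        else:
            price = 0.01  # reserve price (hypothetical)
        results.append((name, round(min(price, bid), 2)))
    return results

print(rank_ads({"A": (2.0, 0.9), "B": (3.0, 0.5), "C": (1.0, 0.8)}, slots=2))
```

Here ad A wins the top slot despite bidding less than B, because its higher quality score gives it a better rank; and each winner's price depends on the ad below it rather than on its own bid, which rewards relevant, well-targeted advertisements.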
Thanks to its expertise in artificial intelligence (AI) and natural language processing, users can now input direct questions rather than just search terms, and receive increasingly intelligent answers. To support its core business, Google has developed many other free services such as Chrome, Android, and Gmail, with Google Accounts unifying each user’s activity. The data generated by each service is used to enhance all of them and to improve advertising targeting, while the services also direct users to each other. Google further exploits its data by buying display advertising inventory from third party sites, adding its own data on 9. Google did not invent this approach. Overture (originally GoTo), another start-up, had successfully launched a version of real-time bidding for keywords in 1998 (Battelle 2006, 125).
those sites’ visitors, and selling the integrated data—at a premium—to advertisers looking to reach those users. Through Google Cloud Platform (GCP), it also sells infrastructure capacity to other businesses. Google’s ability to create superior, free, widely accessible services creates a high barrier to market entry, as Microsoft and others have discovered. A rival has to run large initial losses and encourage users to switch to it despite its initial inferiority. Apple Maps is one recent attempt, only possible because Apple made it the default on iOS. Google’s video platform, YouTube, is a big business in its own right, with estimated annual revenue of $4bn. But it is still reckoned to be loss-making because of its high costs: uploading, indexing, and storing over 50 hours of new video every minute; supporting several billion video views each day; paying content partners; plus R&D, advertising sales, and so forth (Winkler 2015). YouTube is a long-term investment aimed at capturing viewing and revenue from both traditional broadcasters and online-only players such as Netflix. Meanwhile, it too generates valuable data. Since 2000, Google’s most important move has been the 2008 launch of Android, aimed at ensuring that neither iOS nor Windows Mobile became the dominant operating system in a world of billions of mobile devices. Google made Android open source and collaborated with technology and service companies to make it the main global standard, giving Google an even bigger lead in mobile search (a 95% share in May 2017) than in desktop search, where Microsoft (Bing), Baidu, and Yahoo each have shares of 5%–8%—still an order of magnitude less than Google’s 78% (Netmarketshare 2017). In 2015, Google reorganized as Alphabet, a holding company with the core business as its main subsidiary. Alphabet’s triple-class share structure enables Page, Brin, and Schmidt to take a long-term view, ignoring investor pressure for short-term returns. 
Other Alphabet subsidiaries include Waymo (self-driving cars), Nest (home automation), DeepMind (AI), Verily and Calico (life sciences), Sidewalk (urban infrastructure), and two VC funds. Alphabet aims to maximize synergies between these businesses. For instance, DeepMind provides cutting-edge machine-learning capabilities across the group, which are also made available to others through GCP and Google Assistant. Recently, Google’s core business has also sought to develop new revenue streams that reduce its dependence on search advertising, launching devices such as the Pixel smartphone and the voice-activated Google Home hub. Overall, Google remains unassailable in search and is making big bets in a wide range of other, mostly new, product markets.
Economy
Facebook
Facebook began in 2004 as Thefacebook.com, undergraduate Mark Zuckerberg’s online version of Harvard’s printed “facebook” of student mugshots. It drew on ideas from other early social networking sites such as Friendster and Myspace but, unlike them, accepted only people who registered in their own names and with a Harvard.edu email address. It was soon rolled out to other US colleges, funded through online advertising and investment by Zuckerberg’s friends and family. In July 2005, NewsCorp bought Myspace, the early market leader with 21 million users, for $580m. Arguably, Myspace was already vulnerable because of its cluttered interface and other weaknesses, but NewsCorp then failed to invest in it and overloaded it with advertising, allowing Facebook to overtake it in unique global visitors in April 2008 (Albanesius 2009). Facebook kept growing, while Myspace went into decline: NewsCorp sold it for an estimated $35m in 2011. Facebook has two key features as a social network. First, for someone to add a “friend,” both sides must agree. Second, its default assumption is that content posted by users is visible to all their “friends” unless one or both parties opts out. By creating engaging content at little cost to the company, users themselves generate the audience, which Facebook then monetizes by inserting targeted advertising among the posts. This model is highly scalable because variable costs are relatively low—mainly just more data centers and servers. Users’ interactions and other behavior on the platform also generate extensive data for service improvement and advertising targeting. Facebook’s success has created its own challenges, however. As users’ networks expand, content from their close current friends can be swamped by posts from “friends” who mean less to them, creating a need for algorithms to match users with the content most likely to engage them and with the most relevant advertisements.
Adding “friends” from different personal networks (such as school, work, and—notably—parents) can also lead to self-censorship, further reducing the service’s value to consumers. To manage this tension, Facebook now has ways for users to post to user-defined groups within their networks and is reducing its dependence on user-generated content (UGC) by increasing the flow of professionally generated content (PGC)—news articles, opinion pieces, videos. Facebook is an increasingly important channel for PGC, although many producers are in a tug-of-war with it: they want engagement on Facebook to lead users onto their sites; Facebook wants to keep them on Facebook.
Facebook’s pitch to advertisers is based on its huge reach and usage, highly targeted display advertising, and measurable short-term responses. By filling out “profiles” and following things they find interesting, users generate key targeting information. Facebook also increasingly enables social and psychological targeting: identifying which users are most central and influential within their social networks and when they are most likely to be receptive to specific advertising messages. However, both Facebook and Google have been criticized by advertisers for their unreliable, unaudited audience measures and other problems (Barwise 2017). In March 2016, 79% of online US adults were active Facebook users, well ahead of Instagram (32%), Pinterest (31%), LinkedIn (29%), and Twitter (24%) (Chaffey 2017). But Facebook’s market leadership is less secure than Google’s because, as already noted, users can be members of several social networks (“multihoming”) and many younger users prefer newer sites such as Snapchat. Other social media range from message platforms (e.g., Apple’s iMessage, Facebook Messenger, and WhatsApp, acquired by Facebook for $19bn in 2014), to specialist professional (LinkedIn, now owned by Microsoft) and short message networks (Twitter), to social photo- and video-sharing platforms such as Flickr, Instagram (also acquired by Facebook, in 2012, for $1bn), Pinterest, and Snapchat (which Facebook also reportedly tried to buy, but was turned down). These alternatives all threaten to draw valuable users away from Facebook by offering slightly different services. For instance, Snapchat is designed for more private, intimate, and fun interactions: the audience is hand-picked and the default is that messages auto-delete. Where Facebook is unable to buy out a promising rival, it usually tries to copy its features: recent examples are Instagram “Stories,” Facebook “Messenger Day,” and WhatsApp “Status,” all emulating Snapchat “Stories” with growing success.
Amazon
In 1994, Jeff Bezos quit his well-paid job as a 30-year-old high-flier at a Wall Street hedge fund to found Amazon. Bezos, who remains chairman, president, and CEO, chose the name Amazon because it sounded exotic and started with an A—an advantage if it appeared in an alphabetical list—but also because the Amazon is the world’s biggest river in terms of water flow and he wanted his business to be the world’s biggest online retailer, which, in revenue terms, it is.
His core strategy was—and is—to build a dominant market share and brand in the consumer markets most suited to e-commerce; squeeze suppliers’ prices; and reinvest the profits in price cuts, marketing, customer retention, transaction handling, and physical and digital distribution. In line with this, Amazon has consistently prioritized long-term growth over short-term profit: the prospectus for its 1997 IPO specifically said that it would “incur substantial losses for the foreseeable future” (Seglin 1997). Bezos started with books because they were a good fit with online retailing: a huge number of low-ticket, standardized, easy-to-distribute products with a preexisting inventory, enabling him to launch quickly and offer many more titles, and at much lower prices, than even the largest physical bookshop. Bookselling also generated data on affluent, educated shoppers (Packer 2014). Over time, more and more product categories have been added as Amazon has refined its seamless online shopping experience and increasingly efficient distribution system. Amazon’s customer loyalty scheme Prime, first launched in 2005 in the United States and currently reaching 64% of US households (Hyken 2017), is now central to its business model. For a fixed fee, currently $99/year or $10.99/month in the United States, it offers subscribers unlimited free one- or two-day delivery (depending on the area), Amazon Video, Prime Music, unlimited photo storage, and other services. Rapid delivery encourages users to switch purchases from other retailers. Both Prime and the digital devices it sells at or below cost (the Kindle, Kindle Fire, Fire TV, and Echo home assistant) are aimed at making Amazon consumers’ default e-commerce option. Amazon also advertises on TV, Google, and Facebook, and on many smaller websites through its affiliate link program.
It has also acquired consumer guide sites such as Goodreads and IMDb, in which it has embedded “buy from Amazon” links and from which it also collects user rating data. All this reinforces its core business model: relentless retail sales growth leading to increasing economies of scale in R&D, procurement, machine learning, marketing, and logistics. It then uses its superior capabilities not only to acquire more retail business but also to rent out infrastructure to other businesses: marketplace sellers pay to use Prime to deliver their goods, and businesses of all types buy cloud-based computing from AWS. Amazon Web Services is the most profitable part of the company: in the three months to March 31, 2017, it had an operating income of $890m, 24% of its $3.66bn revenue (Amazon 2017). Amazon Web Services sells both to Amazon itself (it grew out of a 2005 restructuring of the company’s backend technology) and, increasingly, to others, making it the leading
supplier in the fast-growing cloud services market, followed by Microsoft (Azure), Google, IBM, and Oracle (Columbus 2017). Amazon has substantial and still-growing market power as both a buyer and a seller. As the range of products it sells expands, users are now going straight to it to search for them, bypassing Google and enabling it to sell search advertising. Although the volume of searches is relatively small, they have the potential to generate disproportionate advertising revenue as they increasingly replace Google’s most valuable searches, where consumers are actively looking for products. Amazon has more first-party consumer purchase data than any rival, to improve targeting, and can link both search and display advertising (e.g., on Amazon Prime Video) to actual purchases. Although still a relatively small player in digital advertising, it may challenge Google and Facebook in the longer term (Hobbs 2017). Closely linked to Amazon’s strategy and business model is its ultracompetitive company culture. Bezos’s annual letter to shareholders always includes a copy of his first such letter in 1997, which famously said, “This is Day One for the internet.” The aim is to keep behaving as if every day were still Day One. Amazon’s distribution centers are nonunionized and increasingly automated, and it is testing drones and self-driving vans to reduce delivery costs. Accusations of exploitative labor management in its warehouses find their corollary in office staff also constantly monitored and required to work under unrelenting pressure. Those who survive this “purposeful Darwinism” receive few perks but benefit from a financial package heavily weighted toward stock options (Kantor and Streitfeld 2015). Amazon has also been accused of anticompetitive activities including price discrimination and delisting competitors’ products, such as Google Chromecast and Apple TV in 2015 and Google Home in 2016. 
Khan (2017, this volume) gives several examples of Amazon allegedly exploiting its market power in anticompetitive ways: predatory pricing of best-selling e-books, and using its buying power, Fulfillment-by-Amazon (FBA), and its extensive data to create an unfair advantage over retail competitors. Amazon’s dominance of consumer e-commerce outside China looks unstoppable. Its leadership in cloud-based computing, through AWS, seems almost as secure. As already noted, AWS’s inside view of its clients’ businesses gives it a strategic competitive advantage, especially in deciding which tech start-ups represent significant threats or investment opportunities. With the easiest product categories already covered, core revenue growth has slowed and the remaining categories are by definition harder, but Amazon is betting on game-changing innovations like drone delivery to reduce distribution barriers for these categories.
Amazon in 2017 announced a $13.7bn takeover bid for the upmarket US grocer Whole Foods. This was its largest ever acquisition. Analysts disagree about the strategy behind this move and its chances of success, but it clearly represents a move toward integrated “omnichannel” retailing combining on-and offline channels and covering even more product and serv ice categories including perishable groceries—an extremely challenging category. The shares of US store groups fell sharply on the announcement.
WILL THE MARKET END THE TECH GIANTS’ DIGITAL DOMINANCE?
In the first section of this chapter, we discussed a range of generic factors that make the tech giants’ markets winner-take-all:

• Economies of scale;
• Strong user brands and habitual usage;
• Attractiveness to talent (“employee brand equity”);
• Direct (within-market) network effects;
• Indirect (cross-market) network effects;
• Big data and machine learning;
• Switching costs and lock-in;
• Corporate strategies and cultures.
In the second section, we showed how each company has indeed come to dominate its market(s) in ways that reflect these winner-take-all factors. Evans and Schmalensee (2016b) partly dispute this view. They argue that “winner-takes-all thinking does not apply to the platform economy,” at least for Google and Facebook, on the grounds that—although they dominate consumer search and social networking, respectively—in the advertising market they have to compete with each other and with other media. We disagree. Google and Facebook do, of course, have to compete for advertising. But advertising media are not homogeneous: advertisers use different channels for different purposes. Google completely dominates search advertising and Facebook has a dominant, and still growing, share of online, especially mobile, display advertising. Because marketing budgets are finite, they do compete indirectly against each other and against other advertising media—and other ways of spending marketing money (promotions, loyalty schemes, etc.)—just as all consumer products and services indirectly compete for consumers’ expenditure. But advertisers have no credible substitutes of comparable scale and reach to Google in search and
Facebook in online display advertising. The fact that they continue to use them despite the numerous problems that have been highlighted (fraud, audience measurement, etc.) reflects this lack of choice. Leading marketing commentator Mark Ritson (2016) described the emergence of the “digital duopoly” as the single biggest UK marketing issue in 2016—adding that he expected it to become even worse in 2017. It is hard to see another company any time soon overtaking Google in search, Microsoft in PC software, or Amazon in e-commerce and cloud computing. Facebook’s lead in social networking looks almost as strong, despite the potential for users to “multihome” and its recent problems with audience measurement and so forth. This bullish view is reflected in these companies’ high Price/Earnings (P/E) ratios in Table 1.1, showing that the financial markets expect their earnings not only to withstand competitive pressures but also to continue growing faster than the market average for the foreseeable future. Some of this expected future growth presumably relates to the perceived long-term potential of their noncore activities, perhaps especially in the case of Alphabet, but it is hard to see how P/E ratios of 30-plus could be justified if their core businesses were seen as being under significant competitive threat.10 Apple’s lower P/E of 16 reflects its lower expected future growth rate as Samsung and other Android manufacturers gradually catch up with the quality and ease of use of its devices and ecosystem, boosted by the growing superiority of Google services such as Assistant, reflecting the high penetration of Android and Google’s lead in AI (Thompson 2017a). As Apple is increasingly forced to include Google’s services in its ecosystem, its price premium over Android devices—the big driver of its high margins—is likely to be eroded. 
Of course, whether—and if so, how soon—this happens will depend on Apple’s continuing ability to come up with new, better products, content, and services to reinforce its dominance of the market for premium-priced mobile devices. In the wider mass market for mobile devices, Android is already the global standard, accounting for 82% of new smartphones shipped in 4Q16, versus 18% for iOS (Vincent 2017). On the plus side, Apple has an outstanding track record in product quality, ease of use, design, and branding. As the number of different types of device continues to proliferate—PCs (where Apple’s share is growing); mobile, wearable, and smart home devices; virtual and augmented reality (VR/AR); automotive, and so forth—Apple may be able to keep exploiting its ability to integrate devices and services into a superior, seamless user experience at a premium price. In contrast, Google, Microsoft, and Amazon, like IBM before them, all fit the long-term pattern that dominant tech players are rarely displaced as market leaders in their core markets, because the winner-take-all dynamics are so powerful. Facebook’s position is almost as secure. Only Apple is in significant danger of seeing its margins squeezed by a gradual process of commoditization.

10. Amazon’s P/E of 195 also reflects its strategy of reinvesting most of its profit to achieve additional long-term growth. This leads to a double whammy: artificially low short-term profits and high long-term growth expectations.
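The point in note 10 about reinvestment and P/E ratios can be made concrete with a back-of-envelope calculation. The figures below are purely hypothetical round numbers, not Amazon’s (or any company’s) actual accounts:

```python
# Illustrative sketch only: hypothetical figures, not real company accounts.
# Shows how reinvesting most operating profit depresses reported earnings
# and mechanically inflates the P/E ratio at a given market capitalization.

def pe_ratio(market_cap, operating_profit, reinvestment_share):
    """P/E given the share of operating profit ploughed back into growth."""
    reported_earnings = operating_profit * (1 - reinvestment_share)
    return market_cap / reported_earnings

MARKET_CAP = 480e9       # hypothetical valuation, $
OPERATING_PROFIT = 16e9  # hypothetical underlying annual profit, $

# A conventional firm reporting all profit as earnings:
print(round(pe_ratio(MARKET_CAP, OPERATING_PROFIT, 0.0)))   # 30

# The same firm reinvesting 85% of profit, Amazon-style:
print(round(pe_ratio(MARKET_CAP, OPERATING_PROFIT, 0.85)))  # 200
```

Holding the market’s valuation fixed, reinvesting most of the underlying profit shrinks reported earnings and multiplies the P/E several-fold, which is why a very high P/E can signal growth expectations rather than overpricing.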
Competition beyond the Tech Giants’ Core Markets
For all five companies, the question remains whether, in line with the pattern discussed in the introduction, they will be eclipsed (as opposed to displaced) by a rival—either another large established player or a start-up—becoming the dominant provider of a new, important product or service that overshadows them. Microsoft has already been surpassed by Apple and Google in terms of profit and market capitalization (Table 1.1), and all five companies are acutely aware of the potential threats—and opportunities—presented by new product markets and technologies. Major product markets currently of interest—in addition to Amazon’s recent move to transform grocery retailing through its Whole Foods acquisition—are transport, home automation, entertainment, healthcare, business and professional processes, and a wide range of applications under the broad heading of the “Internet of Things” (IoT) that will generate even more data—and further increase society’s vulnerability to cyber attack. Key technologies include AI, voice and visual image recognition, VR/AR, cloud-based services, payment systems, and cyber security. All the tech giants are investing in several of these, both organically and through acquisition. Their access to vast amounts of user data makes them well placed to spot trends early, and their scale and profitability give them plenty of capacity to invest in and acquire new businesses and technologies. The only national market of comparable scale to the United States is China. Chinese retail e-commerce is booming, with an estimated value already more than double that in North America: $899bn versus $423bn in 2016 (eMarketer 2016).
Chinese tech companies operate under tight government controls and a constant threat of having their activities curbed, but benefit from protection from foreign competition and a somewhat cavalier view of privacy, data security, corporate governance, and intellectual property (not unlike the United States in the 19th century), although
intellectual property protection may improve as they build up their own patent portfolios and brands. China’s “big four” tech companies are Tencent (mobile messaging and other content and services), Alibaba (e-commerce, digital entertainment, and cloud), Baidu (search and AI), and Huawei (mobile devices). Reflecting broader differences in business culture, Chinese tech companies tend to be less focused than those based in the United States, but the two are starting to converge as the top US tech groups diversify beyond their core businesses (Waters 2017). We can expect to see more Chinese tech successes over the next 10 years, increasingly based on innovation as well as imitation and with growing international sales, in competition with the US players. However, their current activities are still largely focused on Greater China and there is no realistic prospect of their offering a serious challenge to the United States elsewhere in the next few years. If anyone does overtake one of these companies in the next few years, it is more likely also to be based in Silicon Valley or Seattle. In The Death of Distance (1997), The Economist’s Frances Cairncross predicted a sharp reduction in the economic importance of geography. This has not happened. In addition to the top five companies by market capitalization discussed here, three of the other nine tech firms in the global 100 most valuable public companies—Oracle, Intel, and Cisco—are also based in Silicon Valley.11 Beyond the United States, there are just four Asian companies and one European one on the list.12 So, including the top five, eight of the world’s top 14 public tech companies are based in or near Silicon Valley. No other country has more than one (although other Chinese tech giants will doubtless soon join the list). Silicon Valley is also the leading cluster for tech start-ups.
Of the top 50 global tech “unicorns”—companies founded after 2000 with a valuation over $1bn—at the time of writing, 21 are US-based. Sixteen of these are in Silicon Valley, including Uber, Airbnb, and Palantir (big data analytics), ranked 1, 4, and 5, respectively (CB Insights 2017). The other five are scattered around the United States: even America has only one Silicon Valley.13 In conclusion, with the partial exception of Apple, the tech giants seem unlikely to lose their dominance of their core market(s) any time soon, although they all, to varying degrees, face competitive threats at the margin. They are at greater risk of being overtaken by another company building a dominant share of a new, bigger, market. If and when that happens, the successful rival—either another tech giant or a start-up—is also likely to be based in Silicon Valley.

11. The only other US company on the list is New York–based IBM.
12. Tencent (China), Samsung (Korea), Taiwan Semiconductor, Broadcom (Singapore), and SAP (Germany).
13. For the various reasons for Silicon Valley’s dominance, see Hafner and Lyon (1998), Mazzucato (2015), Porter (1998), Bell (2005), Garmaise (2011), and Ben-Shahar (2016).
Do We Have a Problem?
How concerned should we be that market competition is unlikely to end Google, Microsoft, Facebook, and Amazon’s dominance of their core markets in the foreseeable future? That market dominance brings many benefits to consumers and other businesses. Current competition regulation is designed to prevent firms from using their market power to charge higher prices, or offer lower quality, than would prevail in a competitive market. It is unsuited to a platform context where, in Google’s case, consumers pay nothing and advertisers have a highly effective tool that did not exist 20 years ago and for which they pay a competitive, auction-based market price. Of course, incumbent industries disrupted by tech-based platforms (hotels by Airbnb, taxis by Uber, etc.) complain and highlight their real and imagined negative impacts. But much of this is just a normal part of disruptive innovation: the victims of creative destruction don’t like it. On this basis, there are good arguments for light-touch, perhaps technology-specific, regulation of platform businesses (Laffont and Tirole 2000) but not, in our view, for no regulation at all. Parker, Van Alstyne, and Choudary (2016, 239–53) list a wide range of reasons why we need “Regulation 2.0” for these markets: concerns about platform access, fair pricing, data privacy and security, national control of information assets, tax, labor regulation, and potential manipulation of consumers and markets. Similarly, Khan (2017, this volume) argues for more sophisticated regulation to address a range of anticompetitive behaviors. To this list we might add concerns about cyber security, digital advertising (fraud, mismeasurement, etc.), the impact of fake news, the decline in professional journalism, and the contribution of social media to political polarization (Barwise 2017).
Finally, recent research suggests that the inequality between firms in winner-take-all markets, including tech, is one of three big drivers of growing income inequality (the other two being outsourcing and IT/automation: Bloom 2017). The responses to date differ between Europe and the United States. European antitrust legislation focuses on ensuring fair competition (reflected in the Commission’s recent €2.4bn fine on Google for
“systematically” giving prominent placement in searches to its own shopping service and demoting rival services), whereas US legislation focuses more narrowly on whether market dominance leads to demonstrable consumer harm (Khan 2017, this volume; Thompson 2017b). Because the dominant tech platforms are all US-based, this is likely to be an area of growing transatlantic conflict.
REFERENCES

Albanesius, Chloe. 2009. “More Americans Go to Facebook Than MySpace.” PCMag.com, June 16.
Amazon. 2017. “Amazon.com Announces First Quarter Sales up 23% to $35.7 Billion.” April 27.
Auletta, Ken. 2009. Googled: The End of the World as We Know It. London: Virgin Books.
Barwise, Patrick. 2017. “Disrupting the Digital Giants: Advertisers and Traditional Media Push Back.” LSE Media Policy Project (blog). April 5.
Barwise, Patrick, and Seán Meehan. 2011. Beyond the Familiar: Long-Term Growth through Customer Focus and Innovation. San Francisco: Jossey-Bass.
Battelle, John. 2006. The Search: How Google and Its Rivals Rewrote the Rules and Transformed Our Culture. London: Nicholas Brealey.
Bell, Geoffrey G. 2005. “Clusters, Networks and Firm Innovativeness.” Strategic Management Journal 26, no. 3 (March): 287–95.
Ben-Shahar, Omri. 2016. “California Got It Right: Ban the Non-Compete Agreements.” www.forbes.com, October 27.
Bias, William. 2015. “This Is Why Microsoft Bought Skype.” 247wallst.com, June 3.
Bloom, Nicholas. 2017. “Corporations in the Age of Inequality.” Harvard Business Review, digital article, April 14. https://hbr.org/cover-story/2017/03/corporations-in-the-age-of-inequality.
Brand Finance. 2017. Brand Finance Global 500 2017. London: Brand Finance, 18–27.
Bright, Peter. 2015. “Bing Profitable, but Microsoft Revenue down 12 Percent as Shift to Cloud Continues.” Ars Technica, October 23.
Cairncross, Frances. 1997. The Death of Distance: How the Communications Revolution Will Change Our Lives and Our Work. Boston, MA: Harvard Business School Press.
CB Insights. 2017. “The Global Unicorn Club.” Accessed May 12, 2017. https://www.cbinsights.com/research-unicorn-companies.
Chaffey, Dave. 2017. “Global Social Media Research Summary 2017.” Smartinsights.com, April 27.
Chen, Hsinchun, Roger H. L. Chiang, and Veda C. Storey. 2012. “Business Intelligence and Analytics: From Big Data to Big Impact.” MIS Quarterly 36, no. 4 (December): 1165–88.
Coase, Ronald H. 1959. “The Federal Communications Commission.” Journal of Law and Economics 2 (October): 1–40.
Columbus, Louis. 2017. “Roundup of Cloud Computing Forecasts, 2017.” Forbes, April 29.
Dogs of the Dow. 2017. “Largest Companies by Market Cap Today.” Accessed June 28, 2017. http://dogsofthedow.com/largest-companies-by-market-cap.htm.
[ 46 ] Economy
Einav, Liran, and Jonathan Levin. 2014. “The Data Revolution and Economic Analysis.” Innovation Policy and the Economy 14, no. 1: 1–24.
eMarketer. 2016. “Worldwide Retail Ecommerce Sales Will Reach $1.915 Trillion This Year.” April 22.
Evans, David S., and Richard Schmalensee. 2016a. Matchmakers: The New Economics of Multisided Markets. Boston, MA: Harvard Business Review Press.
Evans, David S., and Richard Schmalensee. 2016b. “Why Winner-Take-All Thinking Doesn’t Apply to the Platform Economy.” Harvard Business Review, digital article, May 4. https://hbr.org/2016/05/why-winner-takes-all-thinking-doesnt-apply-to-silicon-valley.
Feijoo, Claudio, Jose-Luis Gomez-Barroso, and Shivom Aggarwal. 2016. “Economics of Big Data.” In Handbook on the Economics of the Internet, edited by Johannes M. Bauer and Michael Latzer, chapter 25. Cheltenham, UK: Edward Elgar. http://www.e-elgar.com/shop/handbook-on-the-economics-of-the-internet.
Garmaise, Mark J. 2011. “Ties That Truly Bind: Noncompetition Agreements, Executive Compensation, and Firm Investment.” Journal of Law, Economics and Organization 27, no. 2: 376–425.
George, Gerard, Martine R. Haas, and Alex Pentland. 2014. “Big Data and Management.” Academy of Management Journal 57, no. 2: 321–26.
Grove, Andy. 1998. Only the Paranoid Survive. London: Profile Books.
Hafner, Katie, and Matthew Lyon. 1998. Where Wizards Stay Up Late: The Origins of the Internet. New York: Simon and Schuster.
Hempel, Jessi. 2017. “Now We Know Why Microsoft Bought LinkedIn.” Wired.com, March 14. https://www.wired.com/2017/03/now-we-know-why-microsoft-bought-linkedin/.
Hobbs, Thomas. 2017. “Can Amazon Break the ‘Digital Duopoly’?” www.marketingweek.com, June.
Huang, Yufeng. 2016. “Learning by Doing and Consumer Switching Costs.” Rochester, NY: Simon Business School working paper.
Hyken, Shep. 2017. “Sixty-four Percent of US Households Have Amazon Prime.” www.forbes.com, June 17.
Iansiti, Marco, and Roy Levien. 2004. “Strategy as Ecology.” Harvard Business Review 82, no. 3 (March): 68–78.
Interbrand. 2016. Best Global Brands 2016 Rankings. London: Interbrand.
Isaacson, Walter. 2011. Steve Jobs. New York: Simon and Schuster.
Kantar Millward Brown. 2017. BrandZ Top 100 Most Valuable Global Brands 2017. London: WPP, 30–33.
Kantor, Jodi, and David Streitfeld. 2015. “Inside Amazon: Wrestling Big Ideas in a Bruising Workplace.” New York Times, August 15.
Katz, Michael L., and Carl Shapiro. 1985. “Network Externalities, Competition and Compatibility.” American Economic Review 75, no. 3 (June): 424–40.
Khan, Lina M. 2017. “Amazon’s Antitrust Paradox.” Yale Law Journal 126, no. 3 (January): 710–805.
Kirkpatrick, David. 2010. The Facebook Effect: The Real Inside Story of Mark Zuckerberg and the World’s Fastest-Growing Company. London: Virgin Books.
Klemperer, Paul. 1987. “Markets with Consumer Switching Costs.” Quarterly Journal of Economics 102, no. 2 (May): 375–94.
Laffont, Jean-Jacques, and Jean Tirole. 2000. Competition in Telecommunications. Cambridge, MA: MIT Press.
Lee, Robin S. 2013. “Vertical Integration and Exclusivity in Platform and Two-Sided Markets.” American Economic Review 103, no. 7: 2960–3000.
Lex. 2017. “Tech Entrepreneurs: Great Man Theory.” www.ft.com, June 15.
Mazzucato, Mariana. 2015. The Entrepreneurial State: Debunking Public vs. Private Sector Myths. Rev. ed. London: Anthem Press.
Nair, Harikesh, Pradeep Chintagunta, and Jean-Pierre Dubé. 2004. “Empirical Analysis of Indirect Network Effects in the Market for Personal Digital Assistants.” Quantitative Marketing and Economics 2, no. 1: 25–58.
Nelson, Philip. 1970. “Information and Consumer Behavior.” Journal of Political Economy 78, no. 2 (March–April): 311–29.
Netmarketshare. 2017. “Desktop and Mobile/Tablet Search Engine Market Share, May 2017.” Netmarketshare.com.
Netmarketshare. 2018. “Operating System Market Share.” www.netmarketshare.com/operating-system-market-share, January 29, 2018.
Packer, George. 2014. “Cheap Words.” New Yorker, February 17 and 24. https://www.newyorker.com/magazine/2014/02/17/cheap-words.
Parker, Geoffrey G., and Marshall W. Van Alstyne. 2005. “Two-Sided Network Effects: A Theory of Information Product Design.” Management Science 51, no. 10: 1494–504.
Parker, Geoffrey G., Marshall W. Van Alstyne, and Sangeet P. Choudary. 2016. Platform Revolution: How Networked Markets Are Transforming the Economy—And How to Make Them Work for You. New York: W. W. Norton.
Porter, Michael E. 1998. “Clusters and the New Economics of Competition.” Harvard Business Review 76, no. 6 (November–December): 77–90.
Ratchford, Brian T. 2001. “The Economics of Consumer Knowledge.” Journal of Consumer Research 27, no. 4: 397–411.
Reed, Brad. 2013. “Microsoft Has Lost $11 Billion Trying to Compete with Google.” BGR.com, July 9.
Reisinger, Don. 2013. “Microsoft Shares Surge on Ballmer Retirement News.” www.cnet.com, August 23.
Ritson, Mark. 2016. “The Marketing Stories That Mattered This Year.” Marketing Week, December 15.
Rochet, Jean-Charles, and Jean Tirole. 2003. “Platform Competition in Two-Sided Markets.” Journal of the European Economic Association 1, no. 4: 990–1029.
Rohlfs, Jeffrey. 1974. “A Theory of Interdependent Demand for a Communications Service.” Bell Journal of Economics and Management Science 5, no. 1 (Spring): 16–37.
Samuelson, Paul A. 1954. “The Pure Theory of Public Expenditure.” Review of Economics and Statistics 36, no. 4 (November): 387–89.
Seglin, Jeffrey L. 1997. “Hot Strategy: ‘Be Unprofitable for a Long Time.’ ” Inc.com, September 1.
Shapiro, Carl, and Hal R. Varian. 1999. Information Rules: A Strategic Guide to the Network Economy. Boston, MA: Harvard Business School Press.
Statcounter. 2017. “Desktop Operating System Market Share Worldwide, May 2017.” Accessed June 7, 2017.
Stone, Brad. 2013. The Everything Store: Jeff Bezos and the Age of Amazon. New York: Back Bay Books/Little, Brown.
Thompson, Ben. 2014. “Peak Google.” www.stratechery.com, October 22.
Thompson, Ben. 2017a. “Apple’s Strengths and Weaknesses.” www.stratechery.com, June 5.
[ 48 ] Economy
Thompson, Ben. 2017b. “Ends, Means, and Antitrust.” www.stratechery.com, June 28.
Tucker, Catherine. 2008. “Identifying Formal and Informal Influence in Technology Adoption with Network Externalities.” Management Science 54, no. 12: 2024–38.
Tun, Zaw Thiha. 2015. “How Skype Makes Money.” www.investopedia.com, July 9.
Vincent, James. 2017. “99.6 Percent of New Smartphones Run Android or iOS.” The Verge, February 16.
Wallace, James, and Jim Erickson. 1992. Hard Drive: Bill Gates and the Making of the Microsoft Empire. New York: Wiley.
Waters, Richard. 2016. “Microsoft Recruits Help in Strategy Shift.” www.ft.com, June 13.
Waters, Richard. 2017. “Chinese and US Tech Models Are Starting to Converge.” www.ft.com, June 29.
Waters, Richard, and Shannon Bond. 2017. “US Moves Step Closer to Overturning Broadband Privacy Regulations.” www.ft.com, March 29.
Winkler, Rolfe. 2015. “YouTube: 1 Billion Viewers, No Profit.” Wall Street Journal, February 25.
Yarow, Jay. 2013. “Microsoft Is Making an Astonishing $2 Billion per Year from Android Patent Royalties.” Business Insider, November 6.
The Evolution of Digital Dominance
CHAPTER 2
Platform Dominance
The Shortcomings of Antitrust Policy

DIANE COYLE
INTRODUCTION
“Platform” is the term increasingly used for hybrid entities that use digital technology as an interface between the users or consumers of a product or service and its suppliers. Platforms share some of the features of traditional businesses coordinating a supply chain; of intermediaries or wholesalers connecting smaller suppliers to markets; of networks connecting end-users to each other; and of exchanges or marketplaces where individual suppliers and buyers meet to trade. The innovative character of platforms in coordinating economic activities is reflected in the fact that different terms are used and different strands of economic research are involved, including the pioneering work of Jean Tirole and others on two-sided markets, an older literature on networks, and recent work on market design.1 However, economics has yet to deliver practical antitrust tools to competition regulators that would enable them to draw up theories of harm in platform markets of different kinds and to implement them empirically.

1. They are sometimes called multisided platforms, two-sided markets, peer-to-peer platforms, and networks, depending on the specific context. Key early research includes Jean-Charles Rochet and Jean Tirole (2003), Caillaud and Jullien (2003), Rochet and Tirole (2006), and Armstrong (2006). Surveys can be found in Rysman (2009), Evans (2011), and Parker, Van Alstyne, and Choudary (2016).

Given
the structure and business models of digital platforms, traditional tools of assessment such as market definition cannot be applied. Yet the literature has few general lessons about how antitrust authorities should analyze platform markets, especially those with one or a few dominant players. The future growth of platforms will benefit from predictable principles of competition assessment; these principles need to be rooted in a thorough welfare assessment. There have long been examples of economic institutions or organizations that could be characterized as platforms. A traditional bazaar is one instance, a known location for merchants and customers to meet and exchange. More recent examples include payment card networks enabling transactions between consumers and retailers, or operating systems coordinating the technical standards and terms of engagement for program developers and computer users. And some forms are very new, such as the “sharing economy” peer-to-peer platforms. It is surprisingly hard to pin down a definition of platforms, however, as they have characteristics of both firms and markets, involving both production and exchange; and they involve different kinds of coordinating mechanism: sometimes technical standards, sometimes exchange algorithms, sometimes social norms. In a sense, a platform is a business strategy as much as a kind of organization, and some firms operate both one- and two-sided lines of business (such as Amazon as a retailer and Amazon Marketplace). The typology shown in Table 2.1 is one attempt to categorize platforms, but others would certainly be plausible, and there are examples of platforms that would not fit comfortably into it (see Evans 2011; Gawer 2014).
Table 2.1. PLATFORM TYPES

        Production                        Intermediation                      Exchange
B to B  Internal platforms, Slack, AWS    Payment cards                       Financial exchanges
B to C  AWS, software OS,                 Ad-funded media, phone networks,    Ebay, Amazon Marketplace
        games consoles                    Zoopla, travel booking
P to P  Sharing economy work platforms    Social media, UberX                 Sharing platforms, e.g., UberPool,
        (Thumbtack, Taskrabbit)                                               Airbnb, home swaps; kidney exchanges

B to B: business-to-business (wholesale); B to C: business-to-consumer (retail); P to P: peer-to-peer.
Platforms are a new way of addressing the fundamental problem of economic organization: how to coordinate supply and demand (involving millions of individuals in the case of consumer markets) in the absence of complete information. Traditional markets coordinate using location, as in the old-fashioned marketplace, or time, as in financial market auctions. Platforms achieve improved coordination using technology. Participants do not need to be collocated, and while individual transactions happen very quickly, they do not all need to occur at the same time. The importance of information for the economy is well understood. In a classic 1945 article, Hayek made the point that the price system in a market economy is a decentralized mechanism for effective coordination when everyone has some unique information about their own preferences or costs (Hayek 1945). But many economic transactions take place within firms rather than in marketplaces, reflecting Coase’s insight that the transactions costs involved in a market exchange are sometimes higher due to asymmetries of information or an absence of clearly defined property rights. He pointed out that changes in information technology (the telephone) and in management techniques could change the optimal size and organization of a firm (Coase 1937). It is not surprising, then, that the steep decline in the cost of exchanging information would alter transactions costs and therefore the kinds of economic organization that exist. The cost of information and communication technologies has been falling extremely rapidly for some time now, but some more recent innovations have opened the way for the growth of platforms as a model. In particular, access to ubiquitous fast broadband via Wi-Fi or 3G/4G and the exponential growth of smartphone ownership mean that platforms connecting many individuals at any time are now viable. Technology is therefore an important element in the emergence of digital platforms.
Another set of innovations, from the discipline of “market design” (the strand of economics devising algorithms for matching heterogeneous demand and supply in a context of incomplete information, summarized in Roth 2015), has been an important enabler of certain kinds of platform. The computer science and economics approaches have converged in these examples, where the matching is highly coordinated. Finally, the context of economies with increasingly varied types of goods and services on offer, and increasing scope for customization, has increased the value of matching technologies.
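A canonical example of such a matching algorithm is Gale and Shapley’s deferred acceptance procedure, which is central to the market-design literature summarized by Roth (2015). The sketch below is a minimal illustration: the buyers, sellers, and preference lists are invented for the example.

```python
def deferred_acceptance(buyer_prefs, seller_prefs):
    """Gale-Shapley deferred acceptance: buyers propose to sellers in
    order of preference; each seller holds only the best offer received
    so far. The outcome is a stable matching: no buyer-seller pair
    would both prefer each other to their assigned partners."""
    # Precompute each seller's ranking of buyers (lower index = preferred).
    rank = {s: {b: i for i, b in enumerate(prefs)}
            for s, prefs in seller_prefs.items()}
    free = list(buyer_prefs)             # buyers not yet matched
    next_choice = {b: 0 for b in buyer_prefs}
    match = {}                           # seller -> buyer currently held
    while free:
        b = free.pop()
        s = buyer_prefs[b][next_choice[b]]   # b's next most-preferred seller
        next_choice[b] += 1
        held = match.get(s)
        if held is None:
            match[s] = b                 # seller was unmatched: accept
        elif rank[s][b] < rank[s][held]:
            match[s] = b                 # seller trades up
            free.append(held)            # displaced buyer proposes again
        else:
            free.append(b)               # rejected: b tries next seller
    return {buyer: seller for seller, buyer in match.items()}

buyer_prefs = {"b1": ["s1", "s2"], "b2": ["s1", "s2"]}
seller_prefs = {"s1": ["b2", "b1"], "s2": ["b1", "b2"]}
print(deferred_acceptance(buyer_prefs, seller_prefs))  # {'b1': 's2', 'b2': 's1'}
```

Both buyers prefer s1, but s1 prefers b2, so b1 is displaced and ends up with s2; no pair can profitably deviate from the result.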
THE BASIC ECONOMICS OF PLATFORMS
How do platforms create value, as compared with more traditional forms of business organization? Why do they seem able to disrupt so many conventional businesses? Why are nonmonetary platforms in the “sharing economy” now proliferating as well? Platforms capture additional value for their participants, value that was previously out of reach. Apart from the intrinsic value of communication, there can be better and faster matching of supply and demand and a more efficient allocation of resources. Platforms can help mitigate market failures—such as those arising with information goods (books, music, software) whose characteristics are unknown in advance—by building trust through their peer review mechanisms or payment mechanisms (Orman 2016). Indeed, in the case of such “experience” goods, platforms enable consumers not only to satisfy their preferences but also to discover new ones: think of the range of music discovery available through a platform like Spotify, with its discovery and recommendation algorithms, compared with the limited playlist of a traditional radio station. These matching and discovery benefits occur alongside the parallel revolution in the means of distribution thanks to the Internet. In addition, there is more intensive use of underused assets. This includes assets such as houses or cars in the case of sharing economy platforms, and assets such as network infrastructure or investments in the case of other types of platform. Capital productivity is potentially increased. In short, platforms make markets work better. For successful platforms, these efficiency gains are large. They potentially benefit all participants, those on both sides of the platform as well as the platform’s owners (although, as with any form of social organization, platforms may also have some negative characteristics).
Platforms enable interactions or exchanges whose value to each participant rises with the number of people taking part on the other side of the platform: these are indirect network effects. The platform benefits buyers by coordinating sellers, and sellers by coordinating buyers. Without the platform, transactions costs would make it impossible for the resulting exchanges to take place. An old economy example is advertising-funded television, which gathers the audience for advertisers and pools advertising revenues to create programs for audiences. An early vintage new economy example is Ebay, which makes it possible to sell or purchase niche products—such as the broken laser pointer of its founding story—because it assembles large enough numbers of buyers and sellers. And a new digital example is
Airbnb, which is bringing new supplies of short-term accommodation to market because it is a go-to site for travelers, and is a go-to site because it has a large number of properties listed. Some platforms (such as social media networks) also feature direct network effects. The indirect network effects make it vital to get the right balance between providers and consumers on the platform, which depends on pricing to both sides. In their classic early paper in this literature, Rochet and Tirole make this central to the definition of a two-sided market: “The price structure matters, and the platform must design it so as to bring both sides on board” (Rochet and Tirole 2006, 665). In general, participants on one side of the platform subsidize those on the other. The subsidy will go to the side whose demand is more sensitive to price, and the two prices will be related to each other depending on how much benefit each side gains from the presence of the other. This relationship between prices on the two (or more) sides of the market connected by platforms, and the cross-subsidy between them, greatly complicates competition assessments. Another influence on price structure is the strength of consumers’ desire for variety: the stronger it is, the more likely it is that the platform will charge the suppliers (although not inevitably, as suppliers sometimes themselves provide the variety). Hence video game platforms subsidize consumers who prefer to have lots of games and make most of their profit from developers, whereas operating system software makes most of its profit from consumers and subsidizes developers (Hagiu 2009). Another factor is the extent to which the prices are public or instead known only to the platform and each individual supplier. The information asymmetry could alter the platform’s pricing strategy if it needs to entice consumers onside by reassuring them they are not being exploited (Llanes and Ruiz-Aliseda 2015).
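The cross-subsidy logic can be made concrete with a toy numerical sketch. Nothing below comes from the chapter or from the Rochet and Tirole papers: the linear participation functions and every parameter value are invented for illustration. A grid search over the two prices shows the profit-maximizing platform charging the less price-sensitive supplier side while paying the price-sensitive consumer side to join (a negative price, i.e., a subsidy):

```python
def participation(p_c, p_s, iters=60):
    """Fixed point of two-sided participation given the two prices.
    Each side's membership falls with its own price and rises with the
    size of the other side (the indirect network effect)."""
    n_c = n_s = 0.0
    for _ in range(iters):
        n_c = max(0.0, 100 - 8.0 * p_c + 0.4 * n_s)  # consumers: very price-sensitive
        n_s = max(0.0, 50 - 1.0 * p_s + 0.5 * n_c)   # suppliers: less price-sensitive
    return n_c, n_s

# Grid search over price pairs, allowing negative consumer prices (subsidies).
best = (float("-inf"), 0.0, 0.0)
for p_c in [x / 2 for x in range(-40, 21)]:          # -20.0 .. 10.0
    for p_s in [x / 2 for x in range(0, 201)]:       #   0.0 .. 100.0
        n_c, n_s = participation(p_c, p_s)
        profit = p_c * n_c + p_s * n_s
        if profit > best[0]:
            best = (profit, p_c, p_s)

profit, p_c, p_s = best
print(f"profit-maximizing prices: consumers {p_c:+.1f}, suppliers {p_s:+.1f}")
# At the optimum p_c < 0 (consumers are paid to join) and p_s > 0:
# the subsidy goes to the side whose demand is more sensitive to price,
# exactly as described in the text.
```

The qualitative result, not the numbers, is the point: reversing the two own-price sensitivities reverses which side is subsidized.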
If the platform gets it wrong, and sets the price too high for consumers so there are few of them participating, the price suppliers are willing to pay to access the platform will be lower. Discouraging participation on one side by getting the price wrong can lead to a downward spiral in the number of transactions. Conversely, a positive feedback loop can lead to rapid growth in transaction volumes when a platform manages to attract consumers, which attracts more suppliers, which attracts more consumers, and so on. New platforms therefore need to reach a critical mass beyond which the positive feedback operates. Often, both consumers and suppliers will use several different platforms (called multihoming). Somebody wanting to book a holiday has every reason to look at many websites, while a property owner similarly will probably advertise on several platforms to reach a larger number of potential holiday-makers. However, many platforms (such as operating systems,
estate agencies) have single-homing on one side, creating a competitive bottleneck, and multi-homing on the other side. There are also some examples of platforms dominating their markets because of the scale of the indirect network effects. Social media, search, and operating systems are examples: the benefits to consumers of everybody using the same platform or standard are compelling. This too poses significant challenges for competition policy, explored later. In older networks, such as telephone networks, the coordination of the two sides of the market comes about partly because of a standard set externally, for example technical standards such as GSM for mobile telephony, or regulatory actions such as the allocation of spectrum bands or phone numbering. Telecoms companies still offer benefits from coordination, and still need to price outgoing and incoming calls appropriately to balance them, as well as set interchange fees and protocols between their individual networks. Newer platforms largely internalize the network externalities and determine how the benefits so captured are shared. The realization of the value from the externalities—the better matching of supply and demand, the reduced transactions costs on the platform—means there is scope for everybody to benefit. Now that technology and organizational design mean platforms can exist, it is no wonder that they are spreading quickly.
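The critical-mass dynamic described above can be illustrated with a small simulation (a sketch with invented parameters, not a model from the chapter): each side’s participation adjusts toward a level that rises, S-shaped, with the size of the other side, producing a tipping point that separates a downward spiral from self-sustaining growth.

```python
def step(n_c, n_s):
    """One period of adjustment: each side drifts halfway toward the
    participation level justified by the current size of the other side.
    The S-shaped response creates two stable outcomes: collapse and success."""
    attract = lambda n: n * n / (n * n + 0.45 ** 2)  # share joining, sides scaled to [0, 1]
    return (0.5 * n_c + 0.5 * attract(n_s),
            0.5 * n_s + 0.5 * attract(n_c))

def simulate(seed, periods=200):
    """Launch the platform with the same seeded share on both sides."""
    n_c = n_s = seed
    for _ in range(periods):
        n_c, n_s = step(n_c, n_s)
    return n_c

# With these invented parameters the unstable threshold sits near 0.28:
print(simulate(0.20))  # seeded below critical mass: spirals toward zero
print(simulate(0.40))  # seeded above critical mass: grows to a high equilibrium
```

The same feedback loop that dooms an under-seeded entrant is what lets an incumbent past the threshold grow rapidly, which is why reaching critical scale is the decisive hurdle for new platforms.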
PLATFORMS’ STRATEGIC CHOICES
To think about why a business will decide to operate as a platform and how it will devise its market strategies, the starting point is to compare the transactions costs a platform will face with those of a traditional firm in a conventional supply chain, contracting with suppliers and selling its product on to consumers. (There is a large economic literature on firms’ choice between vertical integration and contracts with external suppliers. See, e.g., Williamson 2005.) The advantage of a platform lies in its ability to reduce consumers’ or buyers’ search costs and to reduce the transaction costs shared between the various sides, helped by the direct contact between suppliers and customers (summarized by Hagiu 2007; Hagiu and Wright 2015). For example, Google started as a one-sided search business and has steadily added more “sides,” beginning with AdWords and AdSense. The business landscape is changing in many countries as traditional businesses try to capture the advantages of two-sided or multisided operation, and existing platforms increase the number of “sides” to increase the network effects they can capture and share with their users on all sides. This flux is often labeled “convergence.”
The structure of prices—to balance the two sides—is critical. As well as the choice of which side to subsidize, the platform needs to decide what mix of access or membership fees and usage or per-unit fees to charge. The determinants will include the variety of consumers’ tastes (which makes suppliers less good substitutes for each other); the extent of multihoming by consumers and by suppliers; and the risk of a holdup of suppliers by a platform (when for example they have to incur costs to get onto the platform) (Hagiu 2009). In reality, the choices are not always obvious, especially for new platforms, and there is much trial and error. For example, the use of mobile telephones spread more slowly initially in the United States than in Europe due to the choice of “receiving party pays” pricing in the United States rather than the “calling party pays” adopted in Europe. And while often consumers are subsidized, a few platforms are the opposite way round. The structure of incentives needed to establish a new platform might well be different from those required for an established platform; much of the literature explores already-established platforms. An important category of platforms offers a free online service to consumers and is funded by advertising. This includes many media platforms; subscription models are relatively rare. The peculiarity of these markets is that while advertisers want access to consumers, consumers typically do not want (much of) the advertising. In addition, the economic characteristics of information as a good are quite distinctive, in that it has strong public good characteristics and typically high fixed costs but low or zero marginal costs. There is a large literature on media, which is of course also a special case because of its cultural, civic, and political importance (for surveys see Gabszewicz, Resende, and Sonnac 2015; Anderson and Jullien 2016). It is not clear how the business landscape will evolve. 
Is it just a matter of time before platforms drive out incumbents with traditional business models, or before the incumbents switch organizational model? Might some platforms indeed evolve toward becoming traditional supply chain intermediaries, with users on each side not having direct contact with each other? Certainly, some incumbent industries fear the former will occur and are calling for regulatory protection. Alternatively, some are taking action by acquiring platforms. Recently, for example, Accor purchased the luxury accommodation platform One Fine Stay and Enterprise car rentals purchased City Car Club. As the spread of platforms is such a recent phenomenon, there is much still to understand about the evolution of the model. One question is how and when a platform reaches the tipping point for viability—and when, having passed this point, platforms will become dominant (Bresnahan, Orsini, and Yin 2015).
PLATFORM COMPETITION
In his 1888 novel Looking Backward, Edward Bellamy envisaged a world in the year 2000 run by one single organization, the large industrial trusts of his own day having merged and somehow eventually morphed into a single giant public trust. So the fear (or hope) of a dominant organization goes back a long way. The giant digital platforms—Google, Facebook, Amazon, Apple, seemingly being rapidly joined in their dominance by newcomers such as Uber and Airbnb—are the closest the Bellamy vision has come to being realized. They go far beyond any other commercial entities in the scale and dominance they have achieved. Even in smaller markets than search, social media, or operating systems, the tendency for a dominant platform to emerge is clear. For example, Airbnb’s growth suggests it will achieve the same feat in the market for short-stay accommodation. Not surprisingly, platforms pose some significant challenges for competition analysis and policy.
Barriers to Entry
Platform markets can be divided into those where multihoming (on at least one side) is common, such as travel or finance—although the existence of fixed costs and the indirect network effects mean the number of competitors is likely to stay small—and those where a single platform dominates. The more homogeneous consumer demands are, the stronger the indirect network effects, and the larger the fixed costs or economies of scale in supply, the fewer platforms will be viable. Potential for entry is a key consideration for competition policy. Although the indirect network effects make demand on each side of a platform more elastic, they also make entry by a new competitor harder. Platform start-ups need to sustain losses on entry as they grow both sides, so in assessing the competitive landscape the likely cost of successful entry and of reaching critical scale will be important.
Price Discrimination
Economists have expected platforms to practice price discrimination more than they appear to, in other words to use the information they hold about each individual buyer to set different prices and increase the platform’s profit at the expense of consumers (or suppliers). To date, this seems to have been a far bigger concern in theory than in practice. There is some evidence
of concern about price discrimination in the form of websites (such as $heriff) that collect price information so individuals can check the price they face against the price paid by others. Some instances of online price discrimination have been identified (Hannak et al. 2014; Mikians et al. 2012). Auction mechanisms in some specific instances, including Google’s advertising auctions, clearly achieve this on one side of the platform. But there is no real evidence of widespread price discrimination on an individual consumer basis, as opposed to the use of mechanisms (such as premium delivery charges) to identify groups of customers. This is a puzzle, especially as consumers are very used to price discrimination by airline and travel websites, even to the extent of often knowing the price will change if the site is checked twice from the same IP address. This might just reflect the ease of consumer switching online, or perhaps there are unexplored advantages of simple pricing rules. Alternatively, it may be that the data needed to test for price discrimination have not been available; platforms themselves control access to transacted prices and sales volumes.
Trust
One fundamental dimension of platform design is creating mechanisms that establish trust between buyers and sellers (Boudreau and Hagiu 2009). Without a repeated relationship, it is harder to create the trust that enables a transaction, and so platforms have a number of strategies for building trust. Ratings systems are one method that has been considered by a number of researchers—including the scope for gaming ratings, and the bias toward giving good ratings (surveyed in Dellarocas, Dini, and Spagnolo 2006). Other trust-building techniques include provision of payment and sometimes escrow mechanisms through the platform, or sanctions against “misbehaving” suppliers. Platforms also spend considerable sums on marketing; while this is important for building the number of users, it is also the case that just as in conventional businesses, establishing a brand reputation is important for creating trust. In practice, platforms implement a wide range of rules concerning access and participation, technical standards, contracts, and so on. These rules are intended to manage uncertainty and share risk; to overcome or mitigate information asymmetries; and to coordinate their “ecosystem” in a complex environment. Boudreau and Hagiu suggest that platforms in fact act as “regulators” or rule-making governance institutions in the context of many market failures and coordination problems—and that they indeed substitute for the need for government regulation: “MSPs are in a unique
position to be focal, private regulators by virtue of the one-to-many asymmetric relationship between them and the other players” (Boudreau and Hagiu 2009, 166). In effect, the platform aims to maximize the value generated by its entire ecosystem and so influence decisions taken off the platform as well. The variety of approaches taken by different platforms over time suggests there is much still to understand here about a range of choices, such as whether or not to be open or closed, which standards to adopt, how to write contracts to share risk and induce the revelation of private information, and how these levers interact with price setting. Their fundamental need to establish trust raises the question of whether platforms can in effect act as a kind of self-regulatory organization. The interests of the platform will often be at least partly aligned with those of its users, and with finding a reasonable balance between benefits for buyers and sellers, including when transfers are not possible. Sundararajan (2016, chapter 6) argues that much of the regulation of platform markets can indeed be left to platforms. Uber wants to ensure its drivers are safe and insured (in some countries, Uber drivers are indeed more likely than traditional taxis to be insured and are therefore seen as safer); Airbnb wants hosts not to lie about the quality of the accommodation and guests not to trash their rooms. The technology itself offers safety features such as GPS tracking, or the ability to record through photos. He suggests that self-policing is more effective in achieving the desired outcomes. For instance, although formal regulations about safety standards in hotels can seem on the face of it better at protecting consumers, they might in reality only be inspected once at the opening of a hotel; whereas Airbnb guests can give constant quality feedback through the rating system and the platform is strongly incentivized to ensure this is effective. 
(Similarly, TripAdvisor ratings may do a better job than formal regulation and inspection in monitoring hotel standards.) Formal regulation would then only be required to address other externalities, such as the increased noise and carelessness of many short-term visitors in a residential neighborhood. Even then, Sundararajan (2016) optimistically suggests, social norms will evolve in this new market that could make government intervention unnecessary. This is surely an open question, given instances of, for example, discriminatory behavior. Edelman, Luca, and Svirsky (2016) find experimental evidence of racial discrimination by suppliers to online platforms. If discrimination were to prove more systematically prevalent on platforms than in traditional businesses, governments might want to introduce measures to ensure compliance with equality laws.
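One standard way platforms damp the rating inflation and gaming noted in this section is to shrink each listing’s average toward a platform-wide prior (a “Bayesian average”). This is a common industry technique rather than one described in the chapter, and the prior mean and weight below are invented:

```python
def shrunk_rating(ratings, prior_mean=3.0, prior_weight=10):
    """Bayesian-average score: act as if every listing starts with
    `prior_weight` phantom ratings at the platform-wide mean, so a
    handful of (possibly gamed) five-star reviews cannot dominate
    the ranking until a listing has accumulated a real track record."""
    return (prior_weight * prior_mean + sum(ratings)) / (prior_weight + len(ratings))

new_listing = [5, 5, 5]                        # three perfect reviews, tiny sample
veteran = [5, 4, 5, 4, 5, 5, 4, 5, 5, 4] * 10  # 100 reviews averaging 4.6

print(round(shrunk_rating(new_listing), 2))  # 3.46: pulled hard toward the prior
print(round(shrunk_rating(veteran), 2))      # 4.45: close to its raw 4.6 average
```

The design choice is the prior weight: a higher weight makes the score harder to game with a burst of fake reviews, at the cost of slower recognition for genuinely good new entrants.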
Weak Incentives to Innovate
There is relatively little work looking at the incentives to innovate. Belleflamme and Toulemonde (2016) suggest that there are direct profit incentives (due to cost reduction) and indirect strategic incentives (due to competition) for platforms to innovate, and that these can work against each other. If a cost-reducing innovation will trigger an increase in competition on the side that is subsidized, there can be a negative incentive to innovate. Moreover, platforms will tend to concentrate their innovation on different sides to limit these cross-group competitive effects. Understanding incentives to innovate is an important issue in the context of competition policy (discussed later). Platforms recovering their costs from suppliers in order to keep consumers on board may make innovation among suppliers less feasible. Platforms that become dominant may have a reduced incentive to innovate themselves—although they also seem increasingly to be acquiring small innovators, for example in fields such as robotics or AI (Gawer and Cusumano 2014).
The Problem of Free
The question of incentives to innovate is particularly important in the context of the compelling consumer psychology of anything free. A number of psychological studies have demonstrated how irresistible consumers find a “free” good, even when they know logically that a price is somehow paid, although this may be changing as customers become more accustomed to the time or attention costs involved. Even without this research, the prevalence of “free” services online is testament to this being a compelling model.2

2. Evans (2009) presents a comprehensive overview of the evolution of the online advertising industry.

One question is exactly what value platforms gain from the sale of data or of advertising space, or in other words what price consumers are paying for “free” services and how it compares with what a subscription or usage fee model might cost them. Is the veil of the “free” service a means of redistributing surplus from either suppliers or consumers to the platform? Many consumers are unaware of the extent to which their personal data can be harvested and aggregated, or have some idea but do not care. Google collects almost all the log information, including details of search queries, telephony log information, IP addresses, hardware settings, and cookies planted by websites. While Google promises to protect and not to sell users’
privacy and information, it has made billions of dollars through targeted online advertising. No individual is aware of the economic value of his or her private information. One study found that only 15% of subjects were willing to pay half a penny to prevent data sharing with third parties (Preibusch 2013). In other words, most people significantly underestimate its value. The value of the aggregated information is in fact enormous. The power of big data goes well beyond the imagination of many people and opens up a new horizon for online marketing. Even the available privacy protections are incomplete, as Muth (2015) has documented. There are therefore public policy concerns about privacy and transparency. The standard terms and conditions many platforms require consumers to accept are long and confusing, far from the true meaning of transparency, and many people will not bother to read privacy policies at all. The value of personal data is such, though, that search engines and aggregators have strong incentives to shift users’ behavior by altering ranking algorithms. They also have a strong incentive, and the clear ability, to favor their own vertically integrated sites and to maximize advertising revenues. It might be possible to start to consider these issues by looking at the evidence on consumer disbenefits from being served adverts, as well as at the advertising revenues earned by platforms. Rhodes (2011) demonstrates the importance of location on the screen for advertisers; prominence is highly profitable. Increasingly, some consumers are purchasing ad blockers, which indicates the price those individuals put on avoiding advertising. Another avenue could be the proportion of mobile phone users’ data allowances absorbed by the download of increasingly bandwidth-hungry advertising. Google’s dominance in terms of advertising revenues is clear, although Facebook is growing rapidly in the mobile ads market.
The online advertising market has become extremely complex, with a proliferation of intermediaries and automated trading, raising concerns about automated clicks and the effectiveness of this form of advertising. In many ways it increasingly resembles the algorithmic, high-frequency trading markets in finance. It is clear that there is a welfare-destroying arms race between advertisers (via platforms) and consumers. After all, platforms providing "free" services such as search or social networking funded by advertising sales are not really matching demand and supply in the same way as platforms charging an explicit price to one side or the other, for consumers do not demand all, or even much, of the advertising. The more consumers are able to ignore or avoid certain kinds of adverts, the more sophisticated, and intrusive, the techniques become: more prominent on the screen, pop-ups over the desired webpage, videos rather than stills, pre-roll advertising that cannot be skipped, and so on. Two platforms, Google and Facebook, are the main beneficiaries of the arms race, as they are capturing a growing share of all advertising revenue as advertisers switch away from traditional media (IAB 2017). Yet advertisers are paying more for incremental sales, with decreasing returns, and pass their advertising costs on to consumers; consumers are paying more in indirect ways; but online and mobile advertising revenue is increasing rapidly. There is evidence of substantial fraud in the market (Kalkis Research 2016). There are some means for consumers to fight back, but they involve significant frictions and require sophistication on the part of users.3

Platform Dominance

There is a question as to whether the "free" model acts as an entry barrier too. Just as challenger banks find it almost impossible to enter markets where large incumbent banks minimize consumer switching by offering "free" current accounts, challenger platforms will find it hard to attract sufficient numbers on both sides of the platform to reach viable scale. Although the scale economies of the large incumbents would be daunting anyway, it is possible that entry would be easier in a paid-for world.

It is not obvious that the "free" business model is sustainable, though. One significant question concerns the failure to invest in upstream supply. The most important example concerns journalism and the content industries more generally. The platforms are not making any investment in the continuing provision of the content they provide "free," and the loss of traditional advertising revenues means, of course, that publishers are decreasingly able to generate content. How is the special civic role of the media to be safeguarded if, as one digital journalism expert puts it, "[t]he public sphere is now operated by a small number of private companies, based in Silicon Valley" (Bell 2014)? How can polities sustain investment in journalism and other forms of national or local cultural content?
Ownership of Information
Platforms will invest in information goods only if they can achieve a return, and yet all of the methods of doing so are problematic. One approach is to see the platform as a data factory, investing in the identification of characteristics of participants, data that can then be sold to advertisers or suppliers (Bergemann and Bonatti 2015). Although there seems to be little public concern about the loss of privacy involved, this may be changing as awareness increases, including awareness of the value of personal data.

3. See, for example, https://diasporafoundation.org/, https://www.eff.org/privacybadger, http://sheriff.dynu.com/views/home.
Another possibility is to claim intellectual property rights, legally enforced and technologically implemented. However, it is clear that the social norms concerning intellectual property are not settled. There is a vigorous literature debating this, not to mention evolving legal case law (for an overview, see Menell, Lemley, and Merges 2016). One interesting example is John Deere's claim to the US Copyright Office that farmers do not in fact own the tractors they buy, contrary to the existing social understanding of ownership. John Deere's aim was to prevent farmers from modifying the complex software and sensors now installed in tractor cabs, and so it claimed, with partial success, that farmers are merely leasing the company's intellectual property in the software (Wiens 2015). Other related policy questions include data governance and regulation; the protection of personal privacy; individuals' rights over their personal information; and public access to information aggregated by the platforms. Policymakers and regulators need better guidance on these issues. The EU has already intervened with the "right to be forgotten" and the cookie law, but as implemented these impose a burden on consumers, who have to request take-down in specific instances, or face an irritating extra click on every new webpage when failure to accept cookies makes almost all websites unusable. They also create a new entry barrier for other search engines. Intervention must also recognize that there is a trade-off, as better privacy protection will impose some costs, including consumers being served less relevant advertising (Goldfarb and Tucker 2011). The regulatory burden needs to fall on the platforms gaining the surplus in this market. The debate could extend to radical proposals such as a requirement to delete personal data after a certain length of time, or an individual right of access to one's own combined data.
In addition, the online platforms need to be required to provide data for official purposes, just as offline businesses are.
COMPETITION ANALYSIS OF PLATFORMS
The platform model poses significant challenges for the economic analysis of competition. There is no obvious relationship between price and marginal cost on either side of the platform; in a fully competitive market, one side will subsidize the other. This means the most obvious litmus test in competition analysis cannot be applied: the "SSNIP test" reasoning of looking at the effects of a "small but significant" price increase does not apply, and regulators cannot draw any conclusions from looking at the price on just one side. The need to keep both sides in the appropriate balance means that any platform that tried to raise prices on one side would risk losing people on the other side, and even entering a downward spiral. In general, positive feedback effects will make demand on all sides of the platform more elastic than might appear from a simple analysis. Another basic tool of competition policy, market definition, is also next to impossible to apply in the conventional way, again because of the feedback link between the two (or more) sides: it is impossible to consider, say, the search and advertising markets separately. One form competition between platforms takes is "envelopment": adding another group of customers on one side and using those revenues to reduce the price charged to the profit-generating side of another platform. Another tool platforms use is the bundling or tying of services in order to cross-subsidize between different groups of users when they are unable to set a negative price to subsidize one side directly. Here too the standard competition tools can lead to misleading conclusions (Amelio and Jullien 2012). Traditional attempts to define a market would often understate the competitive constraints (from another side) on a platform. On the other hand, dominant platforms can also pursue a strategy of envelopment to prevent entry. In any specific case it is always possible to look at the degree of substitutability between the products or services provided by a platform and alternatives, so at least on a case-by-case basis it should always be possible to assess the competitive landscape. The inapplicability of standard tools leaves a vacuum in competition analysis, still to be filled by economic analysis. There is a need for more dynamic analysis of the entry, growth, and failure of platforms; empirical work looking at the successes and failures; and analytical work. Many platforms are clearly still experimenting, so there is much to be learned from case studies and econometric work.
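The feedback logic behind the failure of one-sided SSNIP reasoning can be made concrete with a small numerical sketch. In the toy model below, consumer and advertiser participation each depend on the other side's size; raising the consumer-side price then shrinks advertiser demand even though the advertiser price is untouched. All parameter values are illustrative assumptions, not estimates from this chapter.

```python
# A stylized two-sided platform with cross-side network effects, illustrating
# why a one-sided SSNIP-style thought experiment is misleading.

def participation(p_c, p_a, a_c=100, b_c=40, g_c=0.3, a_a=50, b_a=10, g_a=0.4):
    """Fixed point of the linked demand system:
       consumers:   n_c = a_c - b_c * p_c + g_c * n_a
       advertisers: n_a = a_a - b_a * p_a + g_a * n_c
    solved by simple iteration (the feedback g_c * g_a < 1 is a contraction)."""
    n_c, n_a = 0.0, 0.0
    for _ in range(200):
        n_c = max(0.0, a_c - b_c * p_c + g_c * n_a)
        n_a = max(0.0, a_a - b_a * p_a + g_a * n_c)
    return n_c, n_a

# Free consumer side; advertisers pay a price of 3.0.
n_c0, n_a0 = participation(0.0, 3.0)
# Raise the consumer-side price "slightly" (the SSNIP thought experiment).
n_c1, n_a1 = participation(0.5, 3.0)

# Advertiser participation falls even though the advertiser price is unchanged:
# demand on each side is more elastic than a one-sided analysis would suggest.
print(round(n_a0, 1), round(n_a1, 1))  # 68.2 59.1
```

The point of the sketch is the cross-side spillover: the consumer-side price rise costs the platform advertisers too, which is exactly why looking at one side's price in isolation misstates the competitive constraints.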
As David Evans (2011, 102) notes, "Many firms in a two-sided market have to produce multiple products in order to sell any products at all." The product set will need to respond to the complex landscape of competing platforms. The digital platforms have two main kinds of argument in response to competition challenges. One is that their dominance is temporary: the markets are winner-takes-all by nature, but the identity of the winner can change. Examples given include Microsoft's Internet Explorer (halted in part by competition authority action) or its operating system (overtaken by technological innovation, as tablets and smartphones bypassed its near-monopoly). There is clearly a possibility of competition for the market, and there are examples of dominance being overturned even in social networking. A second argument is that there is dominance but that it increases consumer welfare through the capture of indirect network externalities, so that to deconstruct the market position would be to harm consumers. This is clearly true, but it is not possible to evaluate the argument in any specific context without a means of evaluating the size of those gains; the division of welfare between suppliers, consumers, and the platform; and the dynamic consequences. There is a great need for more empirical work on the size of the gains, although some research has taken creative approaches (Lendle et al. 2016; Cohen et al. 2016). There must be a question as to whether Google's dominance in search can in fact be overturned, although this might be a straightforward matter of its scale rather than the role of indirect network effects. Search is an activity in which there is little multihoming; consumers choose one search provider (Google's share of search is about 64% in the United States, and about 90% globally). For advertisers, on the other side, the cost of joining a platform consists of the fixed costs of set-up on the platform's software, the cost of running a keyword campaign, and the cost-per-click. As the fixed cost element is high, there is a strong incentive to choose the platform (Google) that gets the most search queries. Even though the size of the positive feedback between the two sides has probably declined as Google has grown larger, a challenger search platform would need to be of better quality and to reduce the fixed costs for advertisers by enough to compensate for the smaller number of consumers. This is daunting, although regulatory intervention could aim to decrease the cost of multihoming for advertisers. There is plenty of scope for Google to degrade the quality of search results to consumers (for example, by advertising its own products more prominently) before they will move away. The fact that Google's profit comes solely from one side also gives it the incentive to favor the advertisers, albeit at some potential cost in terms of consumer trust. Google will not guarantee the quality of the advertisements it shows.
And it does little to manage the quality and legitimacy of the companies advertising, despite occasional outcries (Cookson 2016). And, as past and current competition cases suggest, this means Google is potentially powerful in other online markets. A number of complaints have alleged abuse of dominance due to changes in rankings or location on the page. There is evidence of the importance of ranking in determining the number of clicks a website receives; what is more, ranking affects click-through rates through two channels: through access to users' attention, and through the halo effect of the search engine's reputation on the individual websites listed (Glick et al. 2014). Regulators have apparently drawn different conclusions. In recent decisions about online map services, for example, a UK court gave Google a clean bill of health for replacing (free) Streetmap with Google Maps in the display box (Ibanez Colomo 2016); while an earlier French decision, although recently overturned on appeal, found in favor of (paid-for) Bottin Cartographes in a similar case (Accardo 2016). Facebook is another titan that it might now be impossible to dislodge. Waller (2011) identifies two reasons. First, it is quite difficult to terminate an account: users have to confirm and reconfirm after a cooling-off period, and even after deactivation Facebook owns all the information and files uploaded. Second, Facebook does not allow other websites to acquire the information uploaded; reposting the information to another website is simply too cumbersome for ordinary users. Yoo (2012) is also concerned that the lack of data portability may effectively block potential entrants. Users of Facebook establish their own pages bit by bit, investing hundreds of hours in polishing their profiles, histories, photos, interests, and connections. Since this content is not transferable, switching to another network is very costly. The locked-in relationship may significantly contribute to the consolidation of Facebook's market power, and it is not clear that recent interventions such as France's Loi pour une République numérique4 will overcome this. Some economists argue that the dominant platforms provide each other with the most effective competition, as they are each other's main competitors in key areas of their activity. Nicolas Petit (2016) has labeled this "moligopoly." Economists have for now left competition authorities with more questions than answers. How big are the consumer gains from network effects? How should they be weighed against dynamic costs such as reduced future innovation? How important is the possibility of multihoming or switching? To what extent does consumer psychology need to be taken into account (see the section on "free" above)? Is entry feasible, or rather unfeasibly costly, especially when it comes to the handful of digital titans?
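The tendency of such markets to tip toward a single winner when network effects are strong, which bears directly on whether entry is feasible, can be illustrated with a toy simulation. Users repeatedly choose between two platforms based on each platform's current share plus an idiosyncratic taste shock; every parameter below is an illustrative assumption, not an estimate from the chapter.

```python
import random

# Toy tipping dynamics: each round, every user picks the platform whose
# network benefit (proportional to current share) plus a personal taste
# draw is higher. Strong network effects amplify small early leads.

def simulate(network_weight=2.0, n_users=1000, rounds=50, seed=1):
    random.seed(seed)
    shares = [0.5, 0.5]  # start with an evenly split market
    for _ in range(rounds):
        counts = [0, 0]
        for _ in range(n_users):
            taste = random.gauss(0, 1)  # relative preference for platform 0
            utility0 = network_weight * shares[0] + taste
            utility1 = network_weight * shares[1]
            counts[0 if utility0 > utility1 else 1] += 1
        shares = [c / n_users for c in counts]
    return shares

# With strong network effects the even split is unstable: random early
# fluctuations get amplified and the market tips toward one platform.
print(simulate())
# With no network effects (network_weight=0) shares stay near 50/50.
print(simulate(network_weight=0.0))
```

The design point is that instability, not initial advantage, drives the outcome: the same model with the network term switched off never tips, which is why the size of cross-side and same-side feedback is central to the entry questions listed above.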
Joshua Gans has argued that any disruptive entry will take the form of "supply side" disruption in the shape of a new technology, just as it took smartphone technology not to displace the Windows position in the OS market but to make it less relevant to the things consumers want to do (Gans 2016). As markets can "tip" to create a dominant player, as with Google in search, should regulators be considering some ex ante regulation, or alternatives such as regulation on open standards, to keep open a possibility of entry? Certainly, competition decisions would ideally be made more quickly. It is not the business of competition authorities to select between business models. But the more platforms of huge scale behave like markets or exchanges rather than like conventional businesses, the greater the public interest in ensuring they observe fair rules to preserve competition.

There is a deeper question, however. The competitive assessment and the welfare assessment diverge for two reasons. One is that platforms crystallize external benefits, only some of which they capture; additional consumer benefits therefore need to be considered alongside competition analysis. The other is that competing platforms choose their price structure to balance participation between the sides, with prices targeting the marginal supplier and consumer. The prices chosen are unlikely to be those that maximize social welfare, as the socially optimal prices would address the average rather than the margin. Do these wedges between market outcomes and social welfare mean competition authorities should shift their focus from a competitive assessment to a welfare assessment? Or are there just too many dangers in reverting to a policy framework that distinguishes between public interest and competitive outcomes? In short, economics has yet to deliver practical antitrust tools to competition regulators, to enable them to draw up theories of harm in platform markets of different kinds and implement them empirically. As one authoritative survey puts it: "The relevant theory, at least in its current stage of development, yields fewer clear predictions, and there is relatively little empirical work from which one can draw general lessons" (Evans and Schmalensee 2014). However, general lessons are exactly what is needed. The future growth of platforms will benefit from predictable principles of competition assessment, and those principles need to be rooted in a thorough welfare assessment.

4. Promulgated in September 2016, among other things establishing time limits for the retention of online data and protecting minors. See https://www.economie.gouv.fr/republique-numerique.

ACKNOWLEDGMENTS
The research for this paper was supported by the Jean-Jacques Laffont Chaire Numérique at the Toulouse School of Economics. My thanks to Timothy Yeung for his research assistance; to Adam Cellan-Jones, Jacques Crémer, Eunate Mayor, Paul Seabright, and Alex Teytelboom for their comments on earlier drafts; and to participants in the TSE Digital Forum in June 2016 and a Bruegel workshop in October 2016. Responsibility for omissions and errors is entirely mine.

REFERENCES

Accardo, Gabriele. 2016. "Paris Court of Appeal Overturns Google Abuse of Dominance Ruling." TTLF Newsletter on Transatlantic Antitrust and IPR Developments, January 11, 2016. https://ttlfnews.wordpress.com/2016/01/11/paris-court-of-appeal-overturns-google-abuse-of-dominance-ruling/.
Amelio, Andrea, and Bruno Jullien. 2012. "Tying and Freebies in Two-Sided Markets." International Journal of Industrial Organization 30, no. 5: 436–46.
Anderson, Simon, and Bruno Jullien. 2016. "The Advertising-Financed Business Model in Two-Sided Media Markets." In Handbook of Media Economics, vol. 1B, edited by Simon Anderson, Joel Waldfogel, and David Stromberg, 41–90. Amsterdam and Kidlington: North Holland.
Armstrong, Mark. 2006. "Competition in Two-Sided Markets." RAND Journal of Economics 37: 325–66.
Bell, Emily. 2014. "Silicon Valley and Journalism: Make Up or Break Up?" Reuters Memorial Lecture 2014, November 11, 2014. Oxford: Reuters Institute for the Study of Journalism. http://reutersinstitute.politics.ox.ac.uk/news/silicon-valley-and-journalism-make-or-break.
Belleflamme, Paul, and Eric Toulemonde. 2016. "Innovation Incentives for Competing Two-Sided Platforms." http://idei.fr/sites/default/files/IDEI/documents/conf/Internet_2016/Articles/belleflamme.pdf.
Bergemann, Dirk, and Alessandro Bonatti. 2015. "Selling Cookies." American Economic Journal: Microeconomics 7, no. 3: 259–94.
Boudreau, Kevin J., and Andrei Hagiu. 2009. "Platform Rules: Multisided Platforms as Regulators." In Platforms, Markets and Innovation, edited by Annabelle Gawer. Cheltenham, UK: Edward Elgar Publishing.
Bresnahan, Timothy, Joe Orsini, and Pai-Ling Yin. 2015. "Demand Heterogeneity, Inframarginal Multihoming, and Platform Market Stability: Mobile Apps." Stanford University Working Paper. http://idei.fr/sites/default/files/IDEI/documents/conf/Internet_2016/Articles/yin.pdf.
Caillaud, Bernard, and Bruno Jullien. 2003. "Chicken & Egg: Competition among Intermediation Service Providers." RAND Journal of Economics 34, no. 2: 309–28.
Coase, Ronald H. 1937. "The Nature of the Firm." Economica, n.s., 4, no. 16: 386–405.
Cohen, Peter, Robert Hahn, Jonathan Hall, Steven Levitt, and Robert Metcalfe. 2016. "Using Big Data to Estimate Consumer Surplus: The Case of Uber." National Bureau of Economic Research, Working Paper 22626.
Cookson, Robert. 2016. "Jihadi Website with Beheadings Profited from Google Ad Platform." Financial Times, May 17, 2016.
Dellarocas, Chrysanthos, Federico Dini, and Giancarlo Spagnolo. 2006. "Designing Reputation (Feedback) Mechanisms." In Handbook of Procurement, edited by Nicola Dimitri, Gustavo Piga, and Giancarlo Spagnolo. Cambridge: Cambridge University Press.
Edelman, Benjamin, Michael Luca, and Dan Svirsky. 2016. "Racial Discrimination in the Sharing Economy: Evidence from a Field Experiment." Harvard Business School Working Paper, January 2016. http://www.benedelman.org/publications/airbnb-guest-discrimination-2016-01-06.pdf.
Evans, David S. 2009. "The Online Advertising Industry: Economics, Evolution, and Privacy." Journal of Economic Perspectives 23, no. 3: 37–60.
Evans, David S. 2011. Platform Economics: Essays on Multi-Sided Markets. Boston: Competition Policy International.
Evans, David S., and Richard Schmalensee. 2014. "Antitrust Analysis of Multi-Sided Platform Businesses." In Oxford Handbook on International Antitrust Economics, edited by Roger Blair and Daniel Sokol. Oxford and New York: Oxford University Press.
Gabszewicz, Jean J., Joana Resende, and Nathalie Sonnac. 2015. "Media as Multi-Sided Platforms." In Handbook of Media Economics, edited by Robert G. Picard and Steve S. Wildman. Cheltenham: Edward Elgar.
Gans, Joshua. 2016. "What Would It Take to Disrupt Facebook?" Digitopoly (blog). http://www.digitopoly.org/2016/03/26/what-would-it-take-to-disrupt-facebook/.
Gawer, Annabelle. 2014. "Technological Platforms: Toward an Integrative Framework." Research Policy 43, no. 7: 1239–49.
Gawer, Annabelle, and M. A. Cusumano. 2014. "Industry Platforms and Ecosystem Innovation." Journal of Product Innovation Management 31: 417–33.
Glick, Mark, Greg Richards, Margarita Sapozhnikov, and Paul Seabright. 2014. "How Does Ranking Affect User Choice in Online Search?" Review of Industrial Organisation 45, no. 2: 99–119.
Goldfarb, Avi, and Catherine Tucker. 2011. "Privacy Regulation and Online Advertising." Management Science 57, no. 1: 57–71. http://dx.doi.org/10.2139/ssrn.1600259.
Hagiu, Andrei. 2007. "Merchant or Two-Sided Platform?" Review of Network Economics 6, no. 2: 115–33.
Hagiu, Andrei. 2009. "Two-Sided Platforms: Product Variety and Pricing Structures." Journal of Economics and Management Strategy 18: 1011–43.
Hagiu, Andrei, and Julian Wright. 2015. "Multi-Sided Platforms." International Journal of Industrial Organization 43 (November): 162–74.
Hannak, A., G. Soeller, D. Lazer, A. Mislove, and C. Wilson. 2014. "Measuring Price Discrimination and Steering on E-Commerce Web Sites." Proceedings of the 2014 Conference on Internet Measurement. ACM, 305–18.
Hayek, Friedrich A. 1945. "The Use of Knowledge in Society." American Economic Review 35, no. 4: 519–30.
IAB. 2017. "IAB Internet Advertising Revenue Report: 2016 Full Year Results." An Industry Survey Conducted by PwC and Sponsored by Interactive Advertising Bureau, April 2017. https://www.iab.com/insights/iab-internet-advertising-revenue-report-conducted-by-pricewaterhousecoopers-pwc-2/.
Ibanez Colomo, Pablo. 2016. "Streetmap v Google: Lessons for Pending Article 102 TFEU Cases (Including Google Itself)." Chillin'Competition, February 16, 2016. https://chillingcompetition.com/2016/02/17/streetmap-v-google-lessons-for-pending-article-102-tfeu-cases-including-google-itself/.
Johnson, J. P. 2014. "The Agency Model and MFN Clauses." Working paper, Cornell University.
Kalkis Research. 2016. A Real Life Example of Google's Implication in Ad Fraud and Traffic Laundering. Accessed May 19, 2016. https://kalkis-research.com/real-life-example-google-implication-ad-fraud-traffic-laundering.
Lendle, A., M. Olarreaga, S. Schropp, and P.-L. Vézina. 2016. "There Goes Gravity: eBay and the Death of Distance." Economic Journal 126: 406–41.
Llanes, Gastón, and Francisco Ruiz-Aliseda. 2015. "Private Contracts in Two-Sided Markets." NET Institute Working Paper no. 15–16, October 2015.
Menell, Peter S., Mark A. Lemley, and Robert P. Merges. 2016. "Intellectual Property in the New Technological Age." Stanford Public Law Working Paper 2780190. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2780190.
Mikians, J., L. Gyarmati, V. Erramilli, and N. Laoutaris. 2012. "Detecting Price and Search Discrimination on the Internet." Proceedings of the 11th ACM Workshop on Hot Topics in Networks. ACM: 79–84.
Muth, K. T. 2015. "Googlestroika: Five Years Later." North Carolina Journal of Law & Technology 16: 487–527.
Orman, Levent V. 2016. "Information Markets over Trust Networks." Cornell University Working Paper.
Parker, Geoffrey, Marshall van Alstyne, and Sangeet Paul Choudary. 2016. Platform Revolution: How Networked Markets Are Transforming the Economy—and How to Make Them Work for You. New York: Norton.
Petit, Nicolas. 2016. "Technology Giants, the Moligopoly Hypothesis and Holistic Competition: A Primer." http://bruegel.org/wp-content/uploads/2016/10/Tech-Giants-The-Moligopoly-Hypothezis-and-Holitic-Competition-A-Primer-PETIT-20-10-16-1-1.pdf.
Preibusch, S. 2013. "The Value of Privacy in Web Search." The Twelfth Workshop on the Economics of Information Security (WEIS).
Rhodes, Andrew. 2011. "Can Prominence Matter Even in an Almost Frictionless Market?" Economic Journal 121, no. 556: 297–308.
Rochet, Jean-Charles, and Jean Tirole. 2003. "Platform Competition in Two-Sided Markets." Journal of the European Economic Association 1, no. 4: 990–1029.
Rochet, Jean-Charles, and Jean Tirole. 2006. "Two-Sided Markets: A Progress Report." RAND Journal of Economics 37, no. 3: 645–67.
Roth, Alvin. 2015. Who Gets What And Why? London: William Collins.
Rysman, Marc. 2009. "The Economics of Two-Sided Markets." Journal of Economic Perspectives 23, no. 3: 125–43.
Sundararajan, Arun. 2016. The Sharing Economy. Cambridge: MIT Press.
Waller, S. W. 2011. "Antitrust and Social Networking." North Carolina Law Review 90: 1771.
Wiens, Kyle. 2015. "We Can't Let John Deere Destroy the Very Idea of Ownership." Wired, April 21, 2015. http://www.wired.com/2015/04/dmca-ownership-john-deere/.
Williamson, Oliver E. 2005. "The Economics of Governance." American Economic Review 95, no. 2: 1–18.
Yoo, C. S. 2012. "When Antitrust Met Facebook." George Mason Law Review 19: 1147.
CHAPTER 3
When Data Evolves into Market Power—Data Concentration and Data Abuse under Competition Law

INGE GRAEF
INTRODUCTION
When exploring the topic of digital dominance, a discussion of the role of data is inevitable. Data has developed into a key driver for digital business models and digital markets. As innovative products and services are increasingly being offered online, firms are better able than ever before to collect information about the profile, behavior, and interests of individuals. The knowledge that can be extracted from this data forms the basis for the competitiveness and growth of firms operating in digital markets. While the greater knowledge about the interests of users may lead to better quality of services and enable companies to cut costs, the increased collection and use of data can also result in negative welfare effects. In particular, having control over and being able to analyze large volumes of data may form a source of power for incumbents, potentially enabling them to exclude rivals from the market to the detriment of consumers. Such developments raise questions about the applicability of competition law to address possible adverse effects of data-related commercial behavior. However, as Competition Commissioner Vestager made clear, “If a company’s use of
data is so bad for competition that it outweighs the benefits, we may have to step in to restore a level playing field. But we shouldn't take action just because a company holds a lot of data. After all, data doesn't automatically equal power" (Vestager 2016a). It is this question that this chapter aims to answer: in which circumstances does data evolve into market power justifying competition intervention? The extent to which data contributes to the existence of market power for competition law purposes has so far mainly been considered in the context of merger review, and the issue is subject to fierce debate in scholarship. A number of authors have argued that there is no role for competition enforcement at all, because the nonexclusive nature of data and its wide availability make competition issues highly unlikely to occur in practice (see Sokol and Comerford 2017; Tucker and Wellford 2014). Others have claimed that the need for competition enforcement is urgent, and that competition law alone might not even suffice to address consumer harm; in their perspective, an intervention may need to go beyond the enforcement of the competition rules in order to be effective (see Prüfer and Schottmüller 2017; Stucke and Grunes 2016; Newman 2014; Argenton and Prüfer 2012). A third and more nuanced view is that the collection and use of data may lead to concerns that can trigger the application of competition law; the main task is then to identify the circumstances in which such competition issues occur and to examine how they can be addressed on the basis of existing tools (see Autorité de la concurrence and Bundeskartellamt 2016; UK Competition & Markets Authority 2015; Graef 2015). Nowadays, there seems to be consensus among competition enforcers that data may indeed give rise to market power and constitute a competition issue in certain circumstances.
In that sense, attention has shifted to the establishment of indicators for data-related competition concerns. With that shift, the analysis of the role of data in merger decisions, public statements, and reports of competition authorities has also become more sophisticated over the years. First, this chapter gives an overview of the evolution of thinking in merger decisions of the European Commission in the online environment. Second, in order to further the debate on the appropriate scope of competition intervention in data markets, a number of indicators are identified that may point to the existence of market power resulting from data. Third, the limitations that the abuse of dominance regime of competition law imposes on the collection and use of data by dominant players are discussed. Finally, attention is paid to other policies attempting to control data-related practices of digital players and to protect consumers beyond competition law and the existence of dominance.
FROM GOOGLE/DOUBLECLICK TO MICROSOFT/LINKEDIN AND BEYOND
Since 2007, data-related competition concerns have been considered in a number of merger investigations. Over time, the competition analysis has become more advanced in these decisions, ultimately culminating in a framework to assess horizontal issues regarding data concentration in Microsoft/LinkedIn in 2016. Because the data held by the merging parties is in most situations not traded to third parties but merely used as an input to provide services, a critical issue to be addressed is how the existing competition tools of market definition and market power can be applied to data.
Google/DoubleClick and Facebook/WhatsApp
The role of data in the emergence of market power started to gain attention from policymakers and academics in the competition law field after the announcement of Google's proposed acquisition of DoubleClick in 2007. One of the results of the acquisition was that Google would become able to combine its own dataset on search behavior of users with DoubleClick's dataset on web-browsing behavior of users. This combined information could potentially be used to better target advertisements to users. After its market investigation, the European Commission concluded that the combination of Google's and DoubleClick's data collections would not give the merged entity a competitive advantage that could not be matched by competitors, considering that such a combination of information was already available to a number of Google's competitors at the time of the proposed concentration. In particular, the Commission referred to Microsoft and Yahoo, which both ran search engines and offered ad serving at that time as well. Furthermore, the Commission argued that competitors could purchase data or targeting services from third parties including portals, other major web publishers, and Internet service providers (Google/DoubleClick 2008, par. 269–72 and 365). On this basis, the Commission established that the possible combination of data from Google and DoubleClick did not raise foreclosure issues that would result in a significant impediment to effective competition (Google/DoubleClick 2008, par. 366). The debate about the role of data in competition analysis has continued since the Google/DoubleClick merger and attracted renewed attention in the context of Facebook's acquisition of WhatsApp in 2014. The Commission approved the merger without obtaining any commitments from the parties. With regard to possible data-related competition concerns, the
Data Concentration and Data Abuse under Competition Law
Commission expressed the view that the acquisition of WhatsApp would not increase the amount of data potentially available to Facebook for advertising purposes because WhatsApp did not collect data valuable for advertising purposes at the time of the merger (Facebook/WhatsApp 2014, par. 166). The Commission still investigated possible theories of harm relating to data concentration to the extent that it was likely to strengthen Facebook's position in the market for online advertising. In this regard, the Commission argued that the merger would not raise competition concerns even if Facebook were to introduce targeted advertising on WhatsApp or to start collecting data from WhatsApp users with a view to improving the accuracy of the targeted ads served on Facebook's social networking platform.1 In the Commission's view, there would continue to be a sufficient number of alternative providers to Facebook for the supply of targeted advertising after the merger, and a large amount of Internet user data valuable for advertising purposes was not within Facebook's exclusive control. In particular, the Commission considered Google, Apple, Amazon, eBay, Microsoft, AOL, Yahoo!, Twitter, IAC, LinkedIn, Adobe, and Yelp as market participants that collect user data alongside Facebook (Facebook/WhatsApp 2014, par. 187–90). On that basis, the Commission concluded that the Facebook/WhatsApp merger did not give rise to serious doubts as to its compatibility with the internal market as regards the market for the provision of online advertising services.
Absence of Close Competition in the Markets for the End Products Concerned
The Commission’s analysis has been criticized in both cases. In particular, the unconditional clearance of Facebook/WhatsApp without the Commission starting an in-depth merger investigation2 came as a surprise,
considering the impact of the merger in terms of the 19 billion US dollar purchase price paid by Facebook and the over 1 billion affected users. Even though attention has been devoted to data-related competition concerns in the two merger decisions, the Commission seems to have been rather quick in concluding that the concentration of data resulting from the mergers would not have significant effects on the respective markets. In both Google/DoubleClick and Facebook/WhatsApp, the Commission did not engage in a detailed analysis of what type of information was necessary in order to provide a particular advertising service. As it referred to a range of market players who were alleged to have access to similar information to that held by Google and Facebook after the respective mergers, it seems that the Commission simply regarded datasets built by Internet players as substitutable in general, irrespective of the specific products or services offered by these companies. The reason for this lack of specificity in the analysis could lie in the fact that no market for data was defined in the two cases. In both merger decisions, the Commission came to the conclusion that, respectively, Google and DoubleClick as well as Facebook and WhatsApp could not be regarded as close competitors in the relevant markets for the services they offered. In Google/DoubleClick, the Commission's market investigation pointed out that Google and DoubleClick were not exerting a significant competitive constraint on each other's activities. As a result, the merger did not seem to significantly impede effective competition with regard to the elimination of actual competition. While it could not be excluded that DoubleClick would, absent the merger, have developed into an effective competitor of Google in the market for online ad intermediation, it was rather likely in the Commission's view that a sufficient number of other competitors would be left in the market. Sufficient competitive pressure would thus remain after the merger. On this basis, the Commission concluded that the elimination of DoubleClick as a potential competitor would not have an adverse impact on competition (Google/DoubleClick 2008, par. 192, 221, 278). In Facebook/WhatsApp, the Commission found that Facebook Messenger and WhatsApp were not close competitors in the relevant market for consumer communications services and that consumers would continue to have a wide choice of alternative communications apps after the transaction. With respect to social networking, the Commission did not take a final view on the existence and the still-evolving boundaries of a potential market for social networking services and concluded that, irrespective of the exact market borders, Facebook and WhatsApp were only distant competitors given the differences between the functionalities and focus of their services (Facebook/WhatsApp 2014, par. 101–7, 153–58, 172–79).

1. In May 2017, the Commission imposed a €110 million fine on Facebook for providing misleading information during the merger investigation. While Facebook had informed the Commission that it would be unable to establish reliable automated matching between Facebook users' accounts and WhatsApp users' accounts, WhatsApp announced updates to its terms of service in August 2016 including the possibility of linking WhatsApp users' phone numbers with Facebook users' identities. However, the fact that misleading information was given did not impact the 2014 authorization of the transaction, as the decision was based on a number of elements going beyond automated user matching and the Commission at the time carried out an "even if" assessment assuming user matching as a possibility (European Commission 2017b).

2. On the basis of Article 10(1) of the EU Merger Regulation, the Commission generally has a total of 25 working days to decide whether to grant approval (Phase I) or to start an in-depth investigation (Phase II). The Facebook/WhatsApp merger was unconditionally cleared in Phase I.
Market Definition Requires the Existence of Supply and Demand for Data
Despite the fact that the relevant market players in the two cases might not have been close competitors as far as their end products or services were concerned, it is likely that the Commission would have been able to identify current or foreseeable future competitive constraints relating to data as an input for Google's and Facebook's products and services. Regardless of whether such constraints would have called for additional remedies or even for a prohibition of the mergers, these issues would have been worth an assessment by the Commission. The problem in this regard is that market definition under strict competition law standards requires the existence of supply and demand of the products or services included in the relevant market (see European Commission 1997, par. 13–23). Since both Google and Facebook used the data "merely" as an input for their services and did not trade this asset to a third party at the time of the relevant transactions, no supply and demand for data could be identified. Strict competition law principles thus only allow for the definition and analysis of a market for data if information is actually traded. Examples would be the data-licensing activities of Twitter and the sale of collected personal information about consumers by data brokers to other businesses. This implies that the relevant market for digital services such as search engines, social networks, and e-commerce platforms cannot take data as an object as long as the providers of these services do not sell or trade data to third parties (see also Graef 2015, 490). In its Facebook/WhatsApp merger decision, the Commission even expressly stated that it had not investigated any possible market definition with respect to the provision of data or data analytics services, since neither of the parties involved was active in any such potential markets.
At the time of the merger, Facebook only used the information about its users for the provision of targeted advertising services and did not sell user data to third parties or offer any data analytics services. WhatsApp did not collect personal data that would be valuable for advertising purposes, and messages sent through WhatsApp by users were not stored on WhatsApp's servers but only on the users' mobile devices or in the cloud storage they had elected.
As a result, the Commission did not see a reason to consider the existence of a potential market for personal data (Facebook/WhatsApp 2014, par. 70–72). Indeed, when following strict competition law principles, market definition can only be based on the services that online platforms offer.
Analysis of a Potential Market for Data
At the same time, it is important to note that the fact that no market for the provision of data could be defined in Facebook/WhatsApp did not form a reason for the Commission to refrain from assessing potential data concentration issues (Facebook/WhatsApp 2014, par. 72). Nevertheless, by defining a potential market for data the Commission would have been able to undertake a more complete analysis of competition concerns relating to the combination of datasets or data concentration in the merger. This would be a welcome development in merger review, considering that acquisitions in the online environment seem to be increasingly motivated by the underlying dataset of the target undertaking (Grunes and Stucke 2015, 3). In a March 2016 speech, Competition Commissioner Vestager stated in this regard that what makes a company an attractive merger partner is not always turnover: "Sometimes, what matters are its assets. That could be a customer base or even a set of data." Even though the Competition Commissioner argues that "our test is nimble enough to be applied in a meaningful way to the 'new economy'" (Vestager 2016b), the limitations of a static approach to market definition in competition law seem to have prevented the Commission from conducting a more forward-looking assessment in its Google/DoubleClick and Facebook/WhatsApp merger decisions. For these reasons, a definition and analysis of an additional relevant market for data would have been appropriate even though, strictly speaking, there was no supply and demand for data. By defining a potential market for data in addition to the actual relevant markets for the services provided to users and advertisers, competition authorities and courts will be able to take a form of potential competition into consideration.
Indeed, digital businesses compete not only in the product markets for the specific services offered to users and advertisers but also in a broader market for data that can be deployed for improving the quality and relevance of these services.3 In that sense, data can also play a role in identifying possible
trends and developing new products for which consumer demand exists. As such, data may influence competition for future markets. By defining a relevant market for data, a more forward-looking stance toward market definition will be possible that goes beyond analyzing current usages of data in narrowly drawn relevant markets for products and services (see also Graef 2016, 109–11).

3. In a similar vein, Harbour and Koslov argue that a data market definition would better reflect reality, in which Internet companies often derive value from data far beyond the initial purposes for which the data has been collected in the first place (Harbour and Koslov 2010, 773).

Evolution in Approach in Microsoft/LinkedIn

Interestingly, the Commission took such a forward-looking approach in its Microsoft/LinkedIn decision. One of the competition concerns that the Commission investigated was the possible postmerger combination of the data in relation to online advertising. The relevant data held by each of the merging parties consisted, in the Commission's words, of "personal information, such as information about an individual's job, career history and professional connections, and/or her or his email or other contacts, search behaviour etc. about the users of their services" (Microsoft/LinkedIn 2016, par. 176). Even though the merger did not significantly affect competition in a market for search or nonsearch online advertising services due to the low combined market share after the merger (Microsoft/LinkedIn 2016, par. 167–73), the Commission nevertheless examined whether the combination of the two datasets previously held by the two independent firms would give rise to competition concerns. Assuming that such data combination is allowed under the applicable data protection legislation, the Commission distinguished two main ways in which a merger may raise horizontal issues as a result of the combination of data under the ownership of the merged entity.

First, the Commission acknowledged that the combination of two datasets as a result of a merger may "increase the merged entity's market power in a hypothetical market for the supply of this data or increase barriers to entry/expansion in the market for actual or potential competitors, which may need this data to operate on this market." Despite its earlier adherence to a competition analysis strictly limited to relevant markets for existing products and services in Google/DoubleClick and Facebook/WhatsApp, the Commission explicitly referred here to the possibility that the combination of data may require competitors to collect a larger dataset in order to compete effectively with the merged entity irrespective of how the data is used to develop a particular
product or service. Second, the Commission made clear that even if there is no intention or technical possibility to combine the two datasets, "it may be that pre-merger the two companies were competing with each other on the basis of the data they controlled and this competition would be eliminated by the merger" (Microsoft/LinkedIn 2016, par. 179). While the Commission did not explain the reasons behind this change in perspective,4 its more proactive stance is to be welcomed, as it will enable a more reliable analysis of data-related competition concerns in line with market realities. Despite this evolution in approach, the Commission nevertheless came to the same conclusion as in Google/DoubleClick and Facebook/WhatsApp, namely that the combination of data enabled by the acquisition of LinkedIn by Microsoft did not raise serious doubts as to its compatibility with the internal market in relation to online advertising. The Commission specified three grounds for this. First, Microsoft and LinkedIn did not make their data available to third parties for advertising purposes, with only very limited exceptions. Second, the combination of their respective datasets did not appear to raise the barriers to entry/expansion for other players in this space, as there would continue to be a large amount of Internet user data that was valuable for advertising purposes and that was not within Microsoft's exclusive control. Third, the merging parties were small market players and competed with each other only to a very limited extent in online advertising and its possible segments (Microsoft/LinkedIn 2016, par. 180). As such, the reasoning of the Commission still seems rather superficial, considering its general reference to the alleged wide availability of remaining data useful for advertising without entering into a more detailed analysis of the type of data that is necessary for this purpose.
4. In particular, the Commission did not refer to the underlying basis for its definition of a hypothetical market for data.

Availability of Data as an Input for Machine Learning in Microsoft/LinkedIn

However, with regard to the other two instances in the Microsoft/LinkedIn decision where a hypothetical market for data was investigated, the Commission undertook a deeper analysis of data-related competition concerns. In the context of customer relationship management (CRM) software solutions, the Commission investigated whether the merged entity would have the ability to foreclose competing providers by refusing
access to LinkedIn full data. The Commission concluded that this was not the case because such a refusal was unlikely to negatively affect the overall availability of data for machine learning (ML) in CRM software solutions (Microsoft/LinkedIn 2016, par. 253). First, LinkedIn did not appear to have a significant degree of market power in any potential relevant upstream market, which would in this case amount to a "hypothetical market or segment for the provision of data for the purposes of ML in CRM software solutions" (Microsoft/LinkedIn 2016, par. 254). As already made clear in relation to online advertising, LinkedIn did not license any data to any third party, including for ML purposes. The Commission also argued that the applicability of European data protection law would limit the merged entity's ability to undertake any treatment of LinkedIn full data. In this regard, the newly adopted General Data Protection Regulation, which will apply from May 25, 2018, would, in the Commission's view, impose further limitations on Microsoft, namely by strengthening existing rights and empowering individuals with more control over their personal data (Microsoft/LinkedIn 2016, par. 254–55). Second, the Commission considered that LinkedIn full data, or a subset thereof, could not be qualified as, and was not likely to become in the next two or three years (the usual time frame applied in merger review), an important input for the provision of ML in CRM software solutions. The market investigation indicated that all major CRM vendors had already started offering advanced functionalities to their customers based on ML, or planned to do so in the next two or three years, and none of these offerings had been developed using, or required for its use, access to LinkedIn full data. Even if LinkedIn full data, or a subset thereof, were to be used in the near future for ML in CRM software solutions, it would constitute only one of the many types of data needed for this purpose.
According to the Commission, the data collected by LinkedIn was one source of the third-party data that could be used for ML and could be relevant for certain use cases in certain industry sectors, but not for others. Finally, the Commission noted that there were many alternative possible sources of data that were already available for ML (Microsoft/LinkedIn 2016, par. 256–64). The Commission undertook a similar analysis with regard to the use of data for ML in productivity software solutions. In that context, the Commission argued that irrespective of whether Microsoft would have the incentive to foreclose competing providers of productivity software, it would not have the ability to foreclose as, in any event, by reducing access to LinkedIn full data, it was unlikely to negatively affect the overall availability of data for ML in productivity software solutions (Microsoft/
LinkedIn 2016, par. 373). First, LinkedIn did not appear to have a significant degree of market power in any potential relevant upstream market, which in this case would be a "hypothetical market or segment for provision of data for the purposes of ML in productivity software solutions." As the Commission made clear, LinkedIn did not license any data to any third party and was not planning to license its full data, or a subset thereof, to any third party, including for ML purposes (Microsoft/LinkedIn 2016, par. 374). Second, European data protection rules were again considered to limit Microsoft's ability to access LinkedIn full data (Microsoft/LinkedIn 2016, par. 375). Third, the Commission argued that LinkedIn full data, or a subset thereof, could not be qualified as, and was not likely to become in the next two or three years, an important input for the provision of ML in productivity software solutions. Since Microsoft's internal documents suggested that it had no plan to use LinkedIn full data, the Commission found that it might not have the incentive to use the entire dataset itself, considering that neither the relevant respondent raising the concern nor the market investigation shed further light on how LinkedIn data could become important in the future in relation to productivity software and ML functionalities (Microsoft/LinkedIn 2016, par. 376–77). Furthermore, the majority of respondents to the market investigation, including the main competitors in this space, expected the effects of the transaction on their company as well as on the market for the provision of productivity software to be neutral (Microsoft/LinkedIn 2016, par. 378). For these reasons, the Commission concluded that the merged entity would likely not have the ability to foreclose competing productivity software providers by not providing access to LinkedIn full data (Microsoft/LinkedIn 2016, par. 379).
As any potential restriction of access to LinkedIn full data was unlikely to lead to consumer harm and because LinkedIn was only one of the data sources available on the market to provide customers with useful insights, the transaction was considered not to raise serious doubts as to its compatibility with the internal market in relation to productivity software solutions (Microsoft/LinkedIn 2016, par. 380–81).
Endorsement of the New Approach in Verizon/Yahoo
Such a detailed, prospective analysis of the extent to which a combination of previously independent datasets may give rise to competition concerns is to be welcomed and will make merger review in digital
markets more reliable. It is instructive to note in this regard that the two ways distinguished in Microsoft/LinkedIn as to how a combination of data may raise horizontal issues in a merger have been referred to by the Commission as the relevant legal framework in its most recent Verizon/Yahoo merger decision. Subject to the extent to which the combination of datasets is allowed under the applicable data protection legislation, the Commission again noted in Verizon/Yahoo that competition concerns may arise when (1) the combination of two datasets post-merger increases the merged entity's market power in a hypothetical market for the supply of this data or increases barriers to entry/expansion or (2) even if there is no intention or technical possibility to combine the two datasets, the two companies were competing with each other pre-merger on the basis of the data they controlled (Verizon/Yahoo 2016, par. 81–83). As such, it seems that the Commission has now adopted these two concerns as the relevant framework to assess data concentration concerns under merger review.5 When applying this framework to the specific circumstances of the Verizon/Yahoo merger, the Commission came to the conclusion that the possible postmerger combination of the data held by the two parties did not raise serious doubts as to its compatibility with the internal market in relation to online advertising. First, the Commission noted that any such data combination could only be implemented by the merged entity to the extent it is allowed by applicable data protection rules. Second, the combination of datasets was not considered to raise the barriers to entry/expansion for other players in this space, as there would continue to be a large amount of Internet user data that was valuable for advertising purposes and that was not within the merging parties' exclusive control.
Third, the Commission argued that the two parties had to be regarded as small market players in online advertising. Fourth, the vast majority of respondents to the market investigation had indicated that the data collected by Yahoo and Verizon could not be characterized as unique. One customer had even noted that it expected the merged entity to be able to improve its ability to capture and utilize data to target online advertising, which would in turn improve its competitiveness against existing stronger competitors (Verizon/Yahoo 2016, par. 89–93) (Table 3.1).
5. This is confirmed by Commission officials (see Ocello and Sjödin 2017, 1–3).
Table 3.1. OVERVIEW OF MERGER DECISIONS RAISING DATA-RELATED COMPETITION CONCERNS IN RELATION TO ONLINE ADVERTISING AND DATA ANALYTICS

Google/DoubleClick (M.4731), decision of March 11, 2008. Sector: online advertising. Data-related findings: the combination of datasets did not raise foreclosure issues that would result in a significant impediment to effective competition.

Microsoft/Yahoo (M.5727), decision of February 18, 2010. Sector: Internet search and advertising. Data-related findings: the merger was expected to increase competition in Internet search and search advertising by allowing Microsoft to become a stronger competitor to Google due to higher scale.

Telefónica UK/Vodafone UK/Everything Everywhere/JV (M.6314), decision of September 4, 2012. Sector: data analytics (among others). Data-related findings: as regards the combination of consumer data, the creation of the joint venture was not likely to significantly impede effective competition, since there would be various other players having access to a comparable set of data and offering competing data analytics services.

Publicis/Omnicom (M.7023), decision of January 9, 2014. Data-related findings: with respect to the question of whether "big data" would become in the near future a key factor in helping advertisers to better target their offers to online customers, a sufficient number of alternative providers of big data analytics were argued to remain post-merger, as a result of which no serious doubts arose in relation to big data.

Facebook/WhatsApp (M.7217), decision of October 3, 2014. Sector: consumer communications, social networking, and online advertising. Data-related findings: even if the merged entity were to start collecting and using data from WhatsApp users to improve targeted advertising on Facebook's social network, the merger would not raise competition concerns because there would continue to be a large amount of Internet user data that was valuable for advertising purposes and that was not within Facebook's exclusive control.

Microsoft/LinkedIn (M.8124), decision of December 6, 2016. Sector: PC operating systems, productivity software, customer relationship management software solutions, professional social networks, online communications, online advertising (among others). Data-related findings: the combination of data did not give rise to competition concerns as regards online advertising because: first, the parties did not license their data to third parties pre-merger; second, the combination of data did not appear to raise the barriers to entry/expansion, as there would continue to be a large amount of Internet user data that was valuable for advertising purposes and that was not within Microsoft's exclusive control; third, the parties were small market players and competed with each other only to a very limited extent in online advertising and its possible segments.

Verizon/Yahoo (M.8180), decision of December 21, 2016. Sector: general search, online advertising, data analytics, consumer communications (among others). Data-related findings: the combination of data would not raise competition concerns because the relevant datasets could not be characterized as unique.

For a more detailed discussion of these cases, see Graef (2015, 495–501).
Appraisal
Even though the assessment of data-related competition concerns under merger review6 has now evolved in such a way as to allow for an analysis of a hypothetical market for data, an accurate analysis still requires that one be able to identify the circumstances in which data amounts to a form of market power that would significantly impede effective competition in the internal market. In this regard, the phrase the Commission constantly uses in merger decisions in the online environment, that "a large amount of internet user data valuable for advertising purposes is not within the merging parties' exclusive control," remains problematic. This is because the broad statement is not further substantiated with an analysis of the extent to which data from Internet players in general can be regarded as substitutable for the combined dataset of the merged entity. For that reason, there is a need for competition authorities to identify what elements precisely could point to market power stemming from data. In order to adequately recognize data-related competition concerns in a particular case, competition authorities need to know what to look for.

6. For a more detailed discussion of these cases, see Graef (2015, 495–501).
INDICATORS FOR MARKET POWER STEMMING FROM DATA
In recent years, a number of national competition authorities have released reports examining the role of data in competition law and policy (Autoritat Catalana de Competència 2016; Autorité de la concurrence and Bundeskartellamt 2016; UK Competition & Markets Authority 2015). These reports, as well as statements from officials of competition authorities, shed light on the question of when data is considered to evolve into market power.
Characterizing Data
In a June 2015 report, "The Commercial Use of Consumer Data," the United Kingdom Competition & Markets Authority (UK CMA) outlined a number of specific characteristics of data that have to be taken into account when answering this question. A first characteristic is that data constitutes a nonrivalrous good, which means that the same piece of data may be used by more than one person at the same time. The fact that one entity has collected particular information does not preclude others from gathering identical data (UK CMA 2015, par. 3.6 under a). For example, consumers provide their contact information to numerous entities. Nevertheless, one should keep in mind that access to data can be restricted, either by imposing technical or contractual restrictions or by relying on intellectual property law to claim exclusivity of a particular dataset.7 Furthermore, the value of data often does not lie in the collected information itself but instead depends on the knowledge that can be extracted from it.

7. As argued by Graef (2015, 480–82), entities may in particular rely on copyright, sui generis database protection, and trade secret law in order to keep their datasets away from competitors.

A second characteristic of data as distinguished by the UK CMA is the cost structure of the collection, storage, processing, and analysis of data,
which typically involves relatively high fixed costs and low or negligible marginal costs. As a result, incumbents will likely have cost advantages over smaller, new entrants in collecting and processing more and different types of data. Such economies of scale and scope relating to, respectively, the volume and variety of the available data (Stucke and Grunes 2016, 170–99) may act as barriers to entry. As explained by the UK CMA, this will particularly be the case where the economies of scale and scope are significant and where data is a key input into the products and services being developed (UK CMA 2015, par. 3.6 under b). A third characteristic of data outlined by the UK CMA is the diversity in value in the types of data collected and used. While some data, such as the name and date of birth of a person, has lasting value and only has to be collected once, other types of data are more transient in value and are relevant for a shorter period of time. Reference can be made here to the interests and preferences of individuals, which may change over time. The extent to which data holds its value over time is a relevant factor for considering whether competition concerns may arise (UK CMA 2015, par. 3.6 under c). In this regard, UK CMA Chief Executive Alex Chisholm stated in a June 2015 speech: "Some data loses value over time, so it is hard to see how persistent, unmatchable competitive advantage could be maintained. However some data has persistent value—for example in relation to customer transaction history on auction sites—and it is easier to see how the control of this data could become a barrier to entry" (Chisholm 2015). Similarly, Competition Commissioner Vestager argued in this regard: "It might not be easy to build a strong market position using data that quickly goes out of date. So we need to look at the type of data, to see if it stays valuable" (Vestager 2016a).
Availability of Data
The Autorité de la concurrence and the Bundeskartellamt argued in their May 2016 joint report, “Competition Law and Data,” that two factors are of particular relevance when considering whether data can contribute to market power: the scarcity or ease of replicability of data, and whether the scale and scope of data collection matters to competitive performance (Autorité de la concurrence and Bundeskartellamt 2016, 35–52). While the second factor corresponds to one of the characteristics of data already identified by the UK CMA, the first factor involves an additional assessment of the availability of data. Competition Commissioner Vestager hinted at the relevance of this element as well when she argued: “We also need to ask why competitors couldn’t get hold of equally good information. What’s to stop
[ 86 ] Economy
them from collecting the same data from their customers, or buying it from a data analytics company?” (Vestager 2016a). In this context, a distinction can be made between actual and potential availability of substitutable data. To assess the availability of actual substitutes, one has to examine whether the necessary data can be sourced from third parties on the market, for instance a data broker. With regard to the availability of potential substitutes, the question to be addressed is whether it would be viable for a new entrant or potential competitor to collect and replicate the relevant data itself so as to develop a new dataset with a scope comparable to that of the incumbent (see also Graef 2015, 503–4).8
Role of Data and Scale in Machine Learning
Apart from all of these considerations, it is important to identify exactly what role data plays in the process of a firm gaining a dominant position. Aspects other than access to a large and varied dataset need to be taken into account as well when examining how dominance can be established in a specific market. As Director-General for Competition Laitenberger made clear, relevant questions that need to be asked in this regard include: “How much of the attractiveness of a product depends on data? How easily does data translate into product improvements?” (Laitenberger 2017, 8). In addition to a representative dataset, good engineering resources and well-functioning technology are required to successfully operate a digital service. The interaction of data with the underlying algorithm is also relevant in this regard. In order to improve the quality of a service, the algorithm needs to be constantly updated and fed with new information about the changing preferences of users. Scale plays an important role in this process of machine learning. In the context of the Microsoft/Yahoo merger decision, Microsoft argued that with larger scale a search engine can run tests on how to improve the algorithm and that it is possible to experiment more and faster as traffic volume increases, because experimental traffic will take up a smaller proportion of overall traffic (Microsoft/Yahoo 2010, par. 162 and 223). Incumbents that have a large and stable base of returning users will have an advantage over newcomers because they are able to adapt more quickly to new developments. Even if potential competitors are able to purchase the necessary data from data brokers or other market players, providers with an established user base are likely to be in a better position to update their databases and therefore may have a competitive advantage over smaller players and new entrants, which will be slower in adapting to the changing needs of users. However, these issues need to be tested empirically before one is able to make any strong statements.
8. For an analysis of the actual substitutability of different types of data, see Graef (2015, 495–501).
Two Main Indicators
On the basis of the assessments conducted by competition authorities up to this point, it is submitted here that, in addition to the question of whether data is a significant element of a product’s success, two issues generally seem to be regarded as adequate indicators of the extent to which data contributes to market power: (1) the value of the data at stake in terms of the strength of the economies of scale, the strength of the economies of scope, and the extent to which the particular data is transient in nature and thus depreciates over time; and (2) the availability of alternative data, either by obtaining it from third parties on the market or by collecting the necessary data directly from users and thereby replicating the relevant dataset (Bourreau, De Streel, and Graef 2017, 7–8).9 The relevance of all these different elements also implies that the extent to which data evolves into market power has to be assessed on a case-by-case basis. It is hard to make any generalizations because the existence of a competitive advantage or entry barrier relating to data heavily depends on the particular circumstances of the case. While data may be an important input of production that cannot be easily duplicated in some scenarios, the analysis may be different in other situations where data cannot be made exclusive and alternative datasets are readily available on the market. Nevertheless, with the knowledge gained over the years, competition authorities are at least able to ask the right questions when confronted with a competition case that raises data-related concerns.
ABUSE OF DOMINANCE
As discussed, a combination of datasets or data concentration can be remedied in the merger context if this leads to a significant impediment to effective competition. As made clear in the EU Merger Regulation, the main factor to be considered is whether a merger results in the creation or strengthening of a dominant position (Article 2(2) and (3) of the EU Merger Regulation). A potential remedy to address the possible negative effects of data concentration may be to demand that the merging parties duplicate the relevant data to enable competitors to develop competing or complementary services on this basis (Schepp and Wambach 2016, 123). Precedent for such a remedy can be found in the context of the acquisition of Reuters by Thomson in 2008, where the Commission approved the merger on the condition that the merging parties would divest copies of their databases containing financial information (Thomson/Reuters 2008). An alternative remedy, already put forward by former Federal Trade Commissioner Pamela Jones Harbour in her dissenting statement in Google/DoubleClick, concerns the establishment of a firewall between the datasets of the merging parties for some period of time to prevent any anticompetitive effects (Harbour 2007, footnote 23 on page 9). Outside the merger context, limitations can only be imposed on the basis of competition law if such a dominant position stemming from data is actually abused in violation of Article 102 of the Treaty on the Functioning of the European Union (TFEU). Generally, a distinction is made between exploitative abuse, which directly harms consumers, and exclusionary abuse, which involves behavior that indirectly causes consumer harm through its impact on competitors of the dominant firm. When applying these two types of abuse to the collection and use of data by digital businesses, a number of possible data abuses can be distinguished.
9. See also the statements made by the European Commission during an OECD Hearing on Big Data in November 2016 (OECD Competition Committee 2017): “But before any such interventions, it is crucial to identify whether data is a key element for product success, whether data is replicable or available from other sources, and how quickly data becomes outdated.”
Exploitative Abuse
As regards direct exploitation of consumers, the role of data as a counter-performance to be provided in exchange for being able to use digital services forms the main basis for establishing abuse. In practice, consumers are typically confronted with take-it-or-leave-it offers and have no real choice but to accept that providers will extract certain amounts and types of personal data. The collection of personal data thus operates as an indispensable currency used to compensate providers for the delivery of their services to consumers. Since personal data replaces price as a type of currency in the digital environment, exploitative abuse may relate to the excessive collection of information about consumers instead of the monetary price charged for a product or service. The question then arises as to what amount of data is to be considered excessive. This will be hard to assess
in an objective and general manner, as the willingness to reveal certain information in return for being provided with a particular service will differ among consumers. A proposed approach to assess whether a certain form of data collection is excessive under Article 102 TFEU involves the use of data protection principles as a benchmark against which the existence of abusive behavior can be tested (Costa-Cabral 2017, 33–37). Indeed, the then-Competition Commissioner Almunia in 2012 already referred to the possibility that “[a] single dominant company could of course think to infringe privacy laws to gain an advantage over its competitors” (Almunia 2012b). In a similar vein, the European Data Protection Supervisor Buttarelli stated in a 2015 speech that “[w]e should be prepared for potential abuse of dominance cases which also may involve a breach of data protection rules” (Buttarelli 2015, 3). By collecting personal data beyond what is allowed under data protection law, a company can get more insight into the preferences of individuals and use this additional information to further improve its services. It is instructive to note in this regard that the Bundeskartellamt announced the opening of proceedings against Facebook in March 2016 on suspicion of having abused its possible dominant position in the market for social networks. In particular, the Bundeskartellamt suspects that Facebook’s terms of service are in violation of data protection law and thereby also represent an abusive imposition of unfair conditions on users. If a connection can be identified between the alleged data protection infringement and Facebook’s possible dominance, the use of unlawful terms and conditions by Facebook could, in the view of the Bundeskartellamt, also be regarded as an abuse of dominance under competition law (Bundeskartellamt 2016). 
The Bundeskartellamt thus indeed appears to rely on data protection law as a benchmark for assessing whether certain exploitative behavior of a dominant firm should be considered anticompetitive under Article 102 TFEU. Depending on the final outcome of the case, the Bundeskartellamt may set a precedent for using competition enforcement to prevent exploitation of consumers by dominant firms through the collection and use of personal data beyond what is allowed under data protection rules.
Exclusionary Abuse
In terms of exclusionary abuse relating to data, possible anticompetitive practices include exclusivity contracts, cross-usage of datasets, and refusals to give competitors access to data. Exclusivity contracts can be used by a dominant firm to limit the possibilities for competitors to gather data (Grunes
and Stucke 2015, 3). Reference can be made here to one of the concerns that the European Commission expressed in the Google case. The agreements concluded by Google with third-party websites on which it delivers search advertisements were alleged to result in de facto exclusivity, requiring websites to obtain all or most of their search advertisements from Google and shutting out competing providers of search advertising intermediation services (Almunia 2012a). Cross-usage of datasets involves the situation whereby a dominant player uses data gathered in one market to enter, and leverage its strong position in, another, related market (Prüfer and Schottmüller 2017). National competition authorities in France and Belgium have already dealt with this issue. In September 2014, the Autorité de la concurrence adopted an interim decision in which it found GDF Suez capable of taking advantage of its dominant position in the market for natural gas by using customer files it had inherited from its former monopoly status to launch offers at market prices outside the scope of its public service obligation (Autorité de la concurrence 2014). Similarly, the Belgian competition authority imposed a fine on the National Lottery in September 2015 for having abused its legal monopoly in the Belgian market for public lotteries by using the contact details of individuals gathered in that context to promote the launch of a product when entering the competitive market for sports betting (Belgian Competition Authority 2015). Interestingly, the reproducibility of the respective databases for competitors was also assessed in these cases. According to both competition authorities, the main question to be answered in this regard is whether, considering the nature and size of the dataset, reproduction is possible under reasonable financial conditions and within a reasonable period of time (see also Graef 2016, 271–73).
As such, these issues could also shed light on the extent to which data may evolve into market power. In addition, a firm could abuse its dominant position by refusing to give competitors access to data. There is a string of case law at the EU level dealing with the conditions under which the legality of such refusals to deal has to be assessed.10 Previous cases relate to physical infrastructures such as tunnels and ports, but also to intangible, intellectual property-protected assets. In the literature, the requirements that need to be met for establishing competition law liability for refusals to deal have been referred to as the “essential facilities doctrine.” This doctrine forms an important and at the same time controversial part of EU competition law, as its application interferes with the generally recognized principles of freedom to contract and freedom to dispose of one’s property. As soon as a refusal to deal qualifies as abuse of dominance, the dominant undertaking is forced to enter into a business relationship with the access seeker and has to share access to its assets. As a result, the EU courts have consistently held that refusals to deal are only to be considered abusive under Article 102 TFEU in “exceptional circumstances.” In fact, application of the essential facilities doctrine carries the highest burden of proof under competition law. The following requirements need to be fulfilled for a refusal to deal to constitute abuse of dominance: (1) access to the requested input is indispensable; (2) the refusal excludes all effective competition on the downstream market; and (3) there is no objective justification for the refusal. For intellectual property-protected assets, the additional new product requirement applies, which entails that the refusal to deal needs to prevent the access seeker from introducing a new product on the market for which potential consumer demand exists. As data has several features that set it apart from other assets, future refusals to give access to data may require a different analysis under the essential facilities doctrine (see also Graef 2016, 249–80).
10. Relevant cases at the EU level include: Judgment in Radio Telefis Eireann and Independent Television Publications Ltd v. Commission of the European Communities (Magill), Joined Cases C-241/91 and C-242/91, ECLI:EU:C:1995:98; Judgment in Oscar Bronner GmbH & Co. KG v. Mediaprint Zeitungs, C-7/97, ECLI:EU:C:1998:569; Judgment in IMS Health GmbH & Co. OHG v. NDC Health GmbH & Co. KG, C-418/01, ECLI:EU:C:2004:257; Judgment in Microsoft, T-167/08, ECLI:EU:T:2012:323.
Policy Initiatives Beyond Competition Law and Dominance
Although competition enforcement is a strong mechanism to address problems relating to digital dominance, it has to be kept in mind that the objective of competition law is to address anticompetitive practices on a case-by-case basis. Potentially abusive commercial patterns in the market are instead best resolved by introducing regulation.11 In this context, it is instructive to note that the Commission, in its January 2017 Communication on “Building a European Data Economy,” explored the development of a possible EU framework for data access in the context of nonpersonal data. As access to large and diverse datasets is regarded as a key driver of the emergence of a European data economy, the Commission aims to encourage the exchange and (re)use of data. Several possible measures are currently being examined, including the establishment of a mandatory data sharing regime that would oblige market players in general to share data, thus going beyond the case-by-case approach of the abuse of dominance regime of competition law (European Commission 2017a, 8–10). Another policy initiative worth mentioning is the fact-finding exercise that the Commission is conducting into business-to-business (B2B) practices in the online platform environment. According to the Commission Communication “Online Platforms and the Digital Single Market,” one of the concerns raised by stakeholders in this context involved platforms refusing market access or unilaterally modifying the conditions for market access, including access to essential business data. Against this background, the Commission aims to consider whether action is needed to address the fairness of B2B relations beyond the application of competition law (European Commission 2016, 11–13). An avenue explored in this context is the introduction of legislation governing the fairness of B2B relationships at the EU level. European Union unfair trading law, including the Unfair Contract Terms Directive and the Unfair Commercial Practices Directive, is only applicable to business-to-consumer (B2C) situations. In the context of the drive toward a European data economy, an additional requirement of fairness in B2B relationships may be welcomed as a way to address situations where the unequal bargaining power between incumbents and new entrants results in unbalanced contractual clauses regarding access to data. Finally, data protection law cannot be ignored, at least when personal data12 is being collected, used, or processed. As the Facebook investigation by the Bundeskartellamt shows, competition and data protection law interact and may strengthen each other’s effectiveness when applied in a coherent way.
Nevertheless, it should be kept in mind that, unlike competition law, data protection is not concerned with scale as such, in the sense that a breach of data protection rules can be equally damaging to the interests of individual data subjects irrespective of the market position of the firm and the size of the dataset or the processing activities.13 As the causes of concern in the two fields are different, there is a need to apply competition and data protection law in parallel. In other words, to adequately protect the interests of individuals in digital markets, competition enforcement needs to be accompanied by strong enforcement of the data protection rules as well.
11. See also Almunia (2012b): “When unfair or manipulative commercial practices become pervasive in a market to the detriment of consumers and users the matter is best resolved with regulation.”
12. Article 4(1) of the General Data Protection Regulation defines “personal data” as “any information relating to an identified or identifiable natural person” and specifies that “an identifiable person is one who can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person.”
13. However, the General Data Protection Regulation does take into account the level of risk of a certain form of processing, in such a way that more detailed obligations will apply to controllers where the risk of processing is higher.
CONCLUSION
One can conclude from the preceding analysis that competition law is in principle adequately equipped to tackle competition concerns resulting from the collection, use, and processing of data by digital players. Competition authorities have invested considerable effort in identifying what role data plays in competitive processes in digital markets. In particular, the fact that the Commission has started to define and analyze potential markets for data under merger review is a welcome development. While there is not yet a consensus or uniform answer as to how best to tackle potentially problematic commercial practices in data markets, a number of indicators of market power resulting from data can be identified on the basis of the competition cases investigated and reports published so far: whether data is a significant element of a product’s success; the value of the data at stake in the sense of the strength of the economies of scale, the economies of scope, and the transient nature of data; and the availability of data in terms of the ability to obtain substitutable data from third parties on the market or to collect the necessary data directly from users and thereby replicate the relevant dataset. In practice, however, it is not easy to assess to what extent these aspects are present in a particular market. Whether regulation should be introduced beyond competition law to encourage the exchange and use of data is essentially a policy issue that requires a trade-off between different interests. While mandatory sharing of data may have beneficial effects on competition in the short term, its long-term impact should also be considered to prevent disproportionate negative effects from occurring due to a decrease in incentives to invest in new, innovative services. What is clear is that competition policy on its own cannot solve all issues surrounding the collection and use of data in digital markets.
While competition law is often referred to as the most appropriate regime to solve these issues because of its strong enforcement mechanism, its limitations should also be taken into account. Competition policy takes an economic, effects-based approach and mainly focuses on addressing considerations of economic efficiency, while other public interests are also relevant to many issues arising in the online environment. As such, there is a need for parallel application and enforcement of other regimes such as data protection as well as consumer protection law. Continued analysis, monitoring, and dialogue between different regulators and policymakers are thus of the essence to adequately protect consumers against potential harm emerging in digital markets.
REFERENCES
Almunia, Joaquín. 2012a. “Statement of Commissioner Almunia on the Google Antitrust Investigation.” Press Room Brussels, May 21, 2012.
Almunia, Joaquín. 2012b. “Competition and Personal Data Protection.” Speech, Privacy Platform Event: Competition and Privacy in Markets of Data. Brussels, November 26, 2012.
Argenton, Cédric, and Jens Prüfer. 2012. “Search Engine Competition with Network Externalities.” Journal of Competition Law and Economics 8, no. 1: 73–105.
Autoritat Catalana de Competència. 2016. “The Data-Driven Economy. Challenges for Competition.” November 2016. http://acco.gencat.cat/web/.content/80_acco/documents/arxius/actuacions/Eco-Dades-i-Competencia-ACCO-angles.pdf.
Autorité de la concurrence. 2014. “Décision n° 14-MC-02 du 9 septembre 2014 relative à une demande de mesures conservatoires présentée par la société Direct Energie dans les secteurs du gaz et de l’électricité.” http://www.autoritedelaconcurrence.fr/pdf/avis/14mc02.pdf.
Autorité de la concurrence, and Bundeskartellamt. 2016. “Competition Law and Data.” May 10, 2016. http://www.autoritedelaconcurrence.fr/doc/reportcompetitionlawanddatafinal.pdf.
Belgian Competition Authority. 2015. “Beslissing n° BMA-2015-P/K-27-AUD van 22 september 2015, Zaken nr. MEDE-P/K-13/0012 en CONC-P/K-13/0013.” Stanleybet Belgium NV/Stanley International Betting Ltd en Sagevas S.A./World Football Association S.P.R.L./Samenwerkende Nevenmaatschappij Belgische PMU S.C.R.L. t. Nationale Loterij NV. http://www.mededinging.be/nl/beslissingen/15-pk-27-nationale-loterij.
Bourreau, Marc, Alexandre De Streel, and Inge Graef. 2017. “Big Data and Competition Policy: Market Power, Personalised Pricing and Advertising.” CERRE Project Report, February 16, 2017. http://cerre.eu/publications/big-data-and-competition-policy.
Bundeskartellamt. 2016. “Bundeskartellamt Initiates Proceeding against Facebook on Suspicion of Having Abused Its Market Power by Infringing Data Protection Rules.” Press release. March 2, 2016. http://www.bundeskartellamt.de/SharedDocs/Meldung/EN/Pressemitteilungen/2016/02_03_2016_Facebook.html?nn=3591568.
Buttarelli, Giovanni. 2015. “Keynote Speech at Joint ERA-EDPS Seminar.” Workshop Competition Rebooted: Enforcement and Personal Data in Digital Markets. Brussels, September 24, 2015.
Chisholm, Alex. 2015. “Data and Trust in Digital Markets: What Are the Concerns for Competition and for Consumers?” Speech, UEA Centre for Competition Policy Annual Conference in Norwich, June 19, 2015. https://www.gov.uk/government/speeches/alex-chisholm-speaks-about-data-and-trust-in-digital-markets.
Costa-Cabral, Francisco, and Orla Lynskey. 2017. “Family Ties: The Intersection between Data Protection and Competition in EU Law.” Common Market Law Review 54, no. 1: 11–50.
European Commission. 1997. “Commission Notice on the Definition of Relevant Market for the Purposes of Community Competition Law [1997] OJ C 372/5.”
European Commission. 2016. “Communication from the Commission to the European Parliament, the Council, the Economic and Social Committee and the Committee of Regions, Online Platforms and the Digital Single Market.
Opportunities and Challenges for Europe.” May 25, 2016, COM(2016) 288 final.
European Commission. 2017a. “Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions, Building a European Data Economy.” January 10, 2017, COM(2017) 9 final.
European Commission. 2017b. “Mergers: Commission Fines Facebook €110 Million for Providing Misleading Information about WhatsApp Takeover.” Press release. May 18, 2017. http://europa.eu/rapid/press-release_IP-17-1369_en.htm.
Graef, Inge. 2015. “Market Definition and Market Power in Data: The Case of Online Platforms.” World Competition 38, no. 4: 473–505.
Graef, Inge. 2016. EU Competition Law, Data Protection and Online Platforms: Data as Essential Facility. Alphen aan den Rijn: Kluwer Law International.
Grunes, Allen P., and Maurice E. Stucke. 2015. “No Mistake about It: The Important Role of Antitrust in the Era of Big Data.” The Antitrust Source 14, no. 4: 1–14.
Harbour, Pamela Jones. 2007. “Dissenting Statement in Google/DoubleClick.” FTC File No. 071-0170, December 20, 2007.
Harbour, Pamela Jones, and Tara Isa Koslov. 2010. “Section 2 in a Web 2.0 World: An Expanded Vision of Relevant Product Markets.” Antitrust Law Journal 76, no. 3: 769–97.
Laitenberger, Johannes. 2017. “Competition at the Digital Frontier.” Speech, Consumer and Competition Day Malta, April 24, 2017. http://ec.europa.eu/competition/speeches/text/sp2017_06_en.pdf.
Newman, Nathan. 2014. “Search, Antitrust, and the Economics of the Control of User Data.” Yale Journal on Regulation 31, no. 2: 401–54.
Ocello, Eleonora, and Cristina Sjödin. 2017. “Microsoft/LinkedIn: Big Data and Conglomerate Effects in Tech Markets.” Competition Merger Brief 1/2017: 1–6.
OECD Competition Committee. 2017. “Summary of Discussion of the Hearing on Big Data (29–30 November 2016).” DAF/COMP/M(2016)2/ANN2. March 22, 2017. http://www.oecd.org/officialdocuments/publicdisplaydocumentpdf/?cote=DAF/COMP/M(2016)2/ANN2&docLanguage=En.
Prüfer, Jens, and Christoph Schottmüller. 2017. “Competing with Big Data.” Tilburg Law School Research Paper No. 06/2017, TILEC Discussion Paper No. 2017-006, CentER Discussion Paper 2017-007, February 16, 2017. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2918726.
Schepp, Nils-Peter, and Achim Wambach. 2016. “On Big Data and Its Relevance for Market Power Assessment.” Journal of European Competition Law & Practice 7, no. 2: 120–24.
Sokol, D. Daniel, and Roisin E. Comerford. 2017. “Does Antitrust Have a Role to Play in Regulating Big Data?” In Cambridge Handbook of Antitrust, Intellectual Property, and High Technology, edited by Roger D. Blair and D. Daniel Sokol, 293–316. Cambridge: Cambridge University Press.
Stucke, Maurice E., and Allen P. Grunes. 2016. Big Data and Competition Policy. Oxford and New York: Oxford University Press.
Tucker, Darren S., and Hill B. Wellford. 2014. “Big Mistakes Regarding Big Data.” The Antitrust Source 14, no. 2: 1–12.
UK Competition & Markets Authority (UK CMA). 2015. “The Commercial Use of Consumer Data. Report on the CMA’s Call for Information.” June 2015.
https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/435817/The_commercial_use_of_consumer_data.pdf.
Vestager, Margrethe. 2016a. “Competition in a Big Data World.” Speech, DLD Munich, January 17, 2016.
Vestager, Margrethe. 2016b. “Refining the EU Merger Control System.” Speech, Studienvereinigung Kartellrecht Brussels, March 10, 2016.
European Commission Decisions
Facebook/WhatsApp. 2014. Case No COMP/M.7217, October 3, 2014.
Google/DoubleClick. 2008. Case No COMP/M.4731, March 11, 2008.
Microsoft/LinkedIn. 2016. Case M.8124, December 6, 2016.
Microsoft/Yahoo. 2010. Case No COMP/M.5727, February 18, 2010.
Publicis/Omnicom. 2014. Case No COMP/M.7023, January 9, 2014.
Telefónica UK/Vodafone UK/Everything Everywhere/JV. 2012. Case No COMP/M.6314, September 4, 2012.
Thomson/Reuters. 2008. Case No COMP/M.4726, February 19, 2008.
Verizon/Yahoo. 2016. Case M.8180, December 21, 2016.
CHAPTER 4
Amazon—An Infrastructure Service and Its Challenge to Current Antitrust Law
LINA M. KHAN
INTRODUCTION
In Amazon’s early years, a running joke among Wall Street analysts was that CEO Jeff Bezos was building a house of cards. Entering its sixth year in 2000, the company had yet to crack a profit and was racking up millions of dollars in losses, each quarter’s larger than the last. Nevertheless, a segment of shareholders believed that by dumping money into advertising and steep discounts, Amazon was making a sound investment that would yield returns once e-commerce took off. Each quarter the company would report losses, and its stock price would rise. One news site captured the split sentiment by asking, “Amazon: Ponzi Scheme or Wal-Mart of the Web?” (Slate 2000). Sixteen years on, nobody seriously doubts that Amazon is the titan of 21st-century commerce. In 2015, it earned $107 billion in revenue (Enright 2016), and, as of 2013, it sold more than its next 12 online competitors combined (Banjo and Ziobro 2013). By some estimates, Amazon now captures 46% of online shopping, with its share growing faster than the sector as a whole (LaVecchia and Mitchell 2016). In addition to being a retailer, it is a marketing platform, a delivery and logistics network, a payment service, a credit lender, an auction house, a major book publisher, a producer of television and films, a fashion designer, a hardware manufacturer, and a leading provider of cloud server space and computing power. Although Amazon has clocked staggering growth—reporting double-digit increases in net sales yearly—it reports meager profits, choosing to invest aggressively instead. The company posted consistent losses for the first seven years it was in business, with debts of $2 billion (CNN Money 2002). While it exits the red more regularly now,1 negative returns are still common. Despite the company’s history of thin returns, investors have zealously backed it: Amazon’s shares trade at over 900 times diluted earnings, making it the most expensive stock in the Standard & Poor’s 500 (Krantz 2015). As two reporters marveled, “The company barely ekes out a profit, spends a fortune on expansion and free shipping and is famously opaque about its business operations. Yet investors . . . pour into the stock” (Clark and Young 2013). Another commented that Amazon is in “a class of its own when it comes to valuation” (Krantz 2015). Reporters and financial analysts continue to speculate about when and how Amazon’s deep investments and steep losses will pay off (see, e.g., Manjoo 2015). Customers, meanwhile, universally seem to love the company. Close to half of all online buyers go directly to Amazon first to search for products (Moore 2015), and in 2016, the Reputation Institute named the firm the “most reputable company in America” for the third year running (Strauss 2016; see also Hoffmann 2014). In recent years, journalists have exposed the aggressive business tactics Amazon employs. For instance, Amazon named one campaign “The Gazelle Project,” a strategy whereby Amazon would approach small publishers “the way a cheetah would a sickly gazelle” (Streitfeld 2013). This, as well as other reporting (Soper 2015; Wingfield and Somaiya 2015), drew widespread attention (Streitfeld 2013), perhaps because it offered a glimpse at the potential social costs of Amazon’s dominance.
Lina M. Khan. “Amazon’s Antitrust Paradox.” Yale Law Journal 126, no. 3 (January 2017): 710–805. This article was adapted from the original with permission from the Yale Law Journal.
The firm’s highly public dispute with Hachette in 2014—in which Amazon delisted the publisher’s books from its website during business negotiations—similarly generated extensive press scrutiny and dialogue (see Krugman 2014). More generally, there is growing public awareness that Amazon has established itself as an essential part of the Internet economy (see Manjoo 2016), and a gnawing sense that its dominance—its sheer scale and breadth—may pose hazards. But when pressed on why, critics often fumble to explain how a company that has so clearly delivered enormous benefits to consumers—not to mention revolutionized e-commerce in general—could, at the end of the day, threaten our
1. Partly due to the success of Amazon Web Services, Amazon has recently begun reporting consistent profits. See Wingfield (2016).
Amazon
[ 99 ]
markets. Trying to make sense of the contradiction, one journalist noted that the critics’ argument seems to be that “even though Amazon’s activities tend to reduce book prices, which is considered good for consumers, they ultimately hurt consumers” (Vara 2015). In some ways, the story of Amazon’s sustained and growing dominance is also the story of changes in our antitrust laws. Due to the Chicago School’s influence on legal thinking and practice in the 1970s and 1980s, antitrust law now assesses competition largely with an eye to the short-term interests of consumers, not producers or the health of the market as a whole; antitrust doctrine views low consumer prices, alone, as evidence of sound competition. By this measure, Amazon has excelled; it has evaded government scrutiny in part through fervently devoting its business strategy and rhetoric to reducing prices for consumers. Amazon’s closest encounter with antitrust authorities was when the Justice Department sued other companies for teaming up against Amazon.2 It is as if Bezos charted the company’s growth by first drawing a map of antitrust laws, and then devising routes to smoothly bypass them. With its missionary zeal for consumers, Amazon has marched toward monopoly by singing the tune of contemporary antitrust. This chapter maps out facets of Amazon’s power. In particular, it traces the sources of Amazon’s growth and analyzes the potential effects of its dominance. Doing so enables us to make sense of the company’s business strategy and illuminates anticompetitive aspects of its structure and conduct. This analysis reveals that the current framework in antitrust—specifically, its equating competition with “consumer welfare,” typically measured through short-term effects on price and output—fails to capture the architecture of market power in the 21st-century marketplace.
In other words, the potential harms to competition posed by Amazon’s dominance are not cognizable if we assess competition primarily through price and output. Focusing on these metrics instead blinds us to the potential hazards. The chapter argues that gauging real competition in the 21st-century marketplace—especially in the case of online platforms—requires analyzing the underlying structure and dynamics of markets. Rather than pegging competition to a narrow set of outcomes, this approach would examine the competitive process itself. Animating this framework is the idea that a company’s power and the potential anticompetitive nature of that power
2. See United States v. Apple Inc., 952 F. Supp. 2d 638, 650 (S.D.N.Y. 2013).
[ 100 ] Economy
cannot be fully understood without looking to the structure of a business and the structural role it plays in markets. Applying this idea involves, for example, assessing whether a company’s structure creates certain anticompetitive conflicts of interest; whether it can cross-leverage market advantages across distinct lines of business; and whether the structure of the market incentivizes and permits predatory conduct.
THE CHICAGO SCHOOL REVOLUTION
One of the most significant changes in antitrust law and interpretation over the last century has been the move away from economic structuralism. Broadly, economic structuralism rests on the idea that concentrated market structures promote anticompetitive forms of conduct (see, e.g., Bain 1950; 1968; Turner and Kaysen 1959). This market structure–based understanding of competition was a foundation of antitrust thought and policy through the 1960s. Subscribing to this view, courts blocked mergers that they determined would lead to anticompetitive market structures. In some instances, this meant halting horizontal deals—mergers combining two direct competitors operating in the same market or product line—that would have handed the new entity a large share of the market. In others, it involved rejecting vertical mergers—deals joining companies that operated in different tiers of the same supply or production chain—that would “foreclose competition.” Centrally, this approach involved policing not just for size but also for conflicts of interest—like whether allowing a dominant shoe manufacturer to extend into shoe retailing would create an incentive for the manufacturer to disadvantage or discriminate against competing retailers. The Chicago School approach to antitrust, which gained mainstream prominence and credibility in the 1970s and 1980s, rejected this structuralist view. In the words of Richard Posner, the essence of the Chicago School position is that “the proper lens for viewing antitrust problems is price theory” (Posner 1979, 932). Foundational to this view is a faith in the efficiency of markets, propelled by profit-maximizing actors. The Chicago School approach bases its vision of industrial organization on a simple theoretical premise: “[R]ational economic actors working within the confines of the market seek to maximize profits by combining inputs in the most efficient manner. 
A failure to act in this fashion will be punished by the competitive forces of the market.” Practically, the shift from structuralism to price theory had two major ramifications for antitrust analysis. First, it led to a significant narrowing of
the concept of entry barriers. According to the Chicago School, advantages that incumbents enjoy from economies of scale, capital requirements, and product differentiation do not constitute entry barriers, as these factors are considered to reflect no more than the “objective technical demands of production and distribution” (Eisner 1991, 105). The second consequence of the shift away from structuralism was that consumer prices became the dominant metric for assessing competition. In his highly influential work, The Antitrust Paradox, Robert Bork asserted that the sole normative objective of antitrust should be to maximize consumer welfare, best pursued through promoting economic efficiency (Bork 1978; see also Crane 2014). In 1979, in Reiter v. Sonotone Corp., the Supreme Court followed Bork’s work and declared that “Congress designed the Sherman Act as a ‘consumer welfare prescription’ ”—a statement that is widely viewed as erroneous (see Orbach 2013, 2152). Still, this philosophy wound its way into policy and doctrine (DOJ 1982). Today, showing antitrust injury requires showing harm to consumer welfare, generally in the form of price increases and output restrictions.3 Two areas of enforcement that this reorientation has affected dramatically are predatory pricing and vertical integration. The Chicago School claims, “predatory pricing, vertical integration, and tying arrangements never or almost never reduce consumer welfare” (Crane 2014). Both predatory pricing and vertical integration are highly relevant to analyzing Amazon’s path to dominance and the source of its power. Through the mid-20th century, Congress repeatedly enacted legislation targeting predatory pricing. Congress, as well as state legislatures, viewed predatory pricing as a tactic used by highly capitalized firms to bankrupt rivals and destroy competition—in other words, as a tool to concentrate control. 
Laws prohibiting predatory pricing were part of a larger arrangement of pricing laws that sought to distribute power and opportunity. Starting in the 1960s, Chicago School scholars criticized predatory pricing law and enforcement as misguided. They argued that predatory pricing rarely occurred in practice and that, by targeting conduct that resulted in lower prices, government was undermining competition. As the influence of these scholars grew, their thinking shaped both government enforcement and Supreme Court doctrine. In a series of cases in the 1980s and 1990s, the Supreme Court declared, “predatory pricing schemes are rarely tried, and even more rarely successful.”4 Furthermore, the Court introduced
3. See, e.g., Ginzburg v. Mem’l Healthcare Sys., Inc., 993 F. Supp. 998, 1015 (S.D. Tex. 1997); Angelico v. Lehigh Valley Hosp., Inc., 984 F. Supp. 308, 312 (E.D. Pa. 1997).
4. Matsushita Electric Indus. Co. v. Zenith Radio Corp., 475 U.S. 574, 589 (1986).
a legal test requiring that plaintiffs bringing these claims demonstrate that the alleged predatory conduct would lead to higher prices, sufficient to compensate predators for the amounts expended on the predation—a requirement now known as the “recoupment test.” In placing recoupment at the center of predatory pricing analysis, the Court presumed that direct profit maximization is the singular goal of predatory pricing.5 Furthermore, by establishing that harm occurs only when predatory pricing results in higher prices, the Court collapsed the rich set of concerns that had animated earlier critics of predation, including an aversion to large firms that exploit their size and a desire to preserve local control. Instead, the Court adopted the Chicago School’s narrow conception of what constitutes this harm (higher prices) and how this harm comes about—namely, through the alleged predator raising prices on the previously discounted good. Today, succeeding on a predatory pricing claim requires a plaintiff to meet the recoupment test by showing that the defendant would be able to recoup its losses through sustaining supracompetitive prices. Since the Court introduced this recoupment requirement, the number of cases brought and won by plaintiffs has dropped dramatically (Sokol 2014, 1013). Analysis of vertical integration has similarly moved away from structural concerns (Cole 1952). For most of the last century, enforcers reviewed vertical integration under the same standards as horizontal mergers. Critics of vertical integration primarily focused on two theories of potential harm: leverage and foreclosure. Leverage reflects the idea that a firm can use its dominance in one line of business to establish dominance in another. Foreclosure, meanwhile, occurs when a firm uses one line of business to disadvantage rivals in another line. Chicago School theory holds that concerns about both leverage and foreclosure are misguided. 
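The mechanics of the recoupment test described above can be reduced to simple arithmetic. The sketch below uses invented figures (none come from the Amazon record or any case) solely to illustrate what the test asks: whether post-predation margins, sustained at supracompetitive prices, would cover the losses incurred during predation.

```python
# Illustrative recoupment arithmetic in the style of modern predatory
# pricing doctrine. All figures are hypothetical, chosen only to show
# the structure of the test, not to describe any real firm.

predation_loss = 100_000_000   # total losses from below-cost pricing ($)
marginal_cost = 10.0           # per-unit cost after rivals exit ($)
supra_price = 12.0             # supracompetitive price later charged ($)
annual_units = 5_000_000       # units sold per year at the higher price

# A plaintiff must show the predator can earn back its losses through
# sustained supracompetitive margins in the same market.
annual_recoupment = (supra_price - marginal_cost) * annual_units
years_to_recoup = predation_loss / annual_recoupment

print(f"annual recoupment: ${annual_recoupment:,.0f}")
print(f"years to recoup:   {years_to_recoup:.1f}")
```

Under these invented numbers, recoupment takes a decade of uncontested pricing; courts treating such a horizon as implausible is one reason plaintiffs rarely prevail.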
Antitrust law’s aversion to vertical arrangements was, Bork argued, irrational: “The law against vertical mergers is merely a law against the creation of efficiency” (Bork 1978, 234). With the election of President Reagan, this view of vertical integration became national policy. In 1982 and 1984, the Department of Justice (DOJ) and the FTC issued new merger guidelines outlining the framework that officials would use when reviewing deals. The new guidelines narrowed the circumstances in which the agencies would challenge vertical mergers (DOJ 1982; 1984). Although subsequent administrations
5. See ibid., 224.
have continued reviewing vertical mergers, the Chicago School’s view that these deals generally do not pose threats to competition has remained dominant.
AMAZON’S BUSINESS STRATEGY
Amazon has established dominance as an online platform thanks to two elements of its business strategy: a willingness to sustain losses and invest aggressively at the expense of profits, and integration across multiple business lines.6 These facets of its strategy are independently significant and closely interlinked—indeed, one way it has been able to expand into so many areas is through foregoing returns. This strategy—pursuing market share at the expense of short-term returns—defies the Chicago School’s assumption of rational, profit-seeking market actors. More significantly, Amazon’s choice to pursue heavy losses while also integrating across sectors suggests that in order to fully understand the company and the structural power it is amassing, we must view it as an integrated entity. Seeking to gauge the firm’s market role by isolating a particular line of business and assessing prices in that segment fails to capture both (1) the true shape of the company’s dominance and (2) the ways in which it is able to leverage advantages gained in one sector to boost its business in another.
Foregoing Short-Term Returns for Long-Term Dominance
Recently, Amazon has started reporting consistent profits, largely due to the success of Amazon Web Services, its cloud computing business (see Bensinger 2016). Its North America retail business runs on much thinner margins, and its international retail business still runs at a loss (Bensinger 2016). But for the vast majority of its 20 years in business, losses—not profits—were the norm. Through 2013, Amazon had generated a positive net income in just over half of its financial reporting quarters. Even in quarters in which it did enter the black, its margins were razor-thin, despite astounding growth. The graph below captures the general trend (Figure 4.1).
6. I am using “dominance” to connote that the company controls a significant share of market activity in a sector. I do not mean to attach the legal significance that sometimes attends “dominance.”
[Figure 4.1: Amazon’s Profits. Chart of Amazon’s annual revenue (millions of $) and profit (millions of $), 1995–2015. Source: Based on Evans (2013) and Market Watch (date).]
Just as striking as Amazon’s lack of interest in generating profit has been investors’ willingness to back the company (see Streitfeld 2013). With the exception of a few quarters in 2014, Amazon’s shareholders have poured money in despite the company’s penchant for losses. On a regular basis, Amazon would report losses, and its share price would soar (see, e.g., Dini 2000; Quick Pen 2015). As one analyst told the New York Times, “Amazon’s stock price doesn’t seem to be correlated to its actual experience in any way” (Streitfeld 2015; see also Elmer-DeWitt 2015). Analysts and reporters have spilled substantial ink seeking to understand the phenomenon. As one commentator joked in a widely circulated post, “Amazon, as best I can tell, is a charitable organization being run by elements of the investment community for the benefit of consumers” (Yglesias 2013). In some ways, the puzzlement is for naught: Amazon’s trajectory reflects the business philosophy that Bezos outlined from the start. In his first letter to shareholders, Bezos wrote: We believe that a fundamental measure of our success will be the shareholder value we create over the long term. This value will be a direct result of our ability to extend and solidify our current market leadership position. . . . We first measure ourselves in terms of the metrics most indicative of our market leadership: customer and revenue growth, the degree to which our customers continue
to purchase from us on a repeat basis, and the strength of our brand. We have invested and will continue to invest aggressively to expand and leverage our customer base, brand, and infrastructure as we move to establish an enduring franchise. (Bezos 1998)
In other words, the premise of Amazon’s business model was to establish scale. To achieve scale, the company prioritized growth. Under this approach, aggressive investing would be key, even if that involved slashing prices or spending billions on expanding capacity, in order to become consumers’ one-stop-shop. This approach meant that Amazon “may make decisions and weigh trade-offs differently than some companies,” Bezos warned (Bezos 1998). “At this stage, we choose to prioritize growth because we believe that scale is central to achieving the potential of our business model” (Bezos 1998). The insistent emphasis on “market leadership” (Bezos relies on the term six times in the short letter) signaled that Amazon intended to dominate. And, by many measures, Amazon has succeeded. Its year-on-year revenue growth far outpaces that of other online retailers (Garcia 2016; see also BI Intelligence 2016). Despite efforts by big-box competitors like Walmart, Sears, and Macy’s to boost their online operations, no rival has succeeded in winning back market share (see Wahba 2015). One of the primary ways Amazon has built a huge edge is through Amazon Prime, the company’s loyalty program, in which Amazon has invested aggressively. Initiated in 2005, Amazon Prime began by offering US consumers unlimited two-day shipping for $79 (Kawamoto 2005). In the years since, Amazon has bundled in other deals and perks, like renting e-books and streaming music and video, as well as one-hour or same-day delivery. The program has arguably been the retailer’s single biggest driver of growth (DiChristopher 2015).7 Amazon does not disclose the exact number of Prime subscribers in the United States, but analysts believe the number of users has surpassed 63 million—19 million more than in 2015 (Leonard 2016). Membership doubled between 2011 and 2013. By 2020, it is estimated that half of US households may be enrolled (Frommer 2015). 
As with its other ventures, Amazon lost money on Prime to gain buy-in. In 2011 it was estimated that each Prime subscriber cost Amazon at least $90 a year—$55 in shipping, $35 in digital video—and that the company therefore took an $11 loss annually for each customer (Woo 2011). One Amazon expert tallies that Amazon has been losing $1 billion to $2 billion
7. It has also been a key force driving up Amazon’s stock price (see Stone 2010).
a year on Prime memberships (Seetharaman and Layne 2015). The full cost of Amazon Prime is steeper yet, given that the company has been investing heavily in warehouses, delivery facilities, and trucks, as part of its plan to speed up delivery for Prime customers—expenditures that regularly push it into the red (see Weise 2015). Despite these losses—or perhaps because of them—Prime is considered crucial to Amazon’s growth as an online retailer. According to analysts, customers increase their purchases from Amazon by about 150% after they become Prime members (Stone 2010). Prime members constitute 47% of Amazon’s US shoppers (Tuttle 2016). Amazon Prime members also spend more on the company’s website—an average of $1,500 annually, compared to $625 spent annually by non-Prime members (Rubin 2016). Business experts note that by making shipping free, Prime “successfully strips out paying for . . . the leading consumer burden of online shopping” (Fox Rubin 2015). Moreover, the annual fee drives customers to increase their Amazon purchases in order to maximize the return on their investment (see Tuttle 2010). As a result, Amazon Prime users are both more likely to buy on its platform and less likely to shop elsewhere. “[Sixty-three percent] of Amazon Prime members carry out a paid transaction on the site in the same visit,” compared to 13% of non-Prime members (O’Connor 2015). For Walmart and Target, those figures are 5% and 2%, respectively (O’Connor 2015). One study found that less than 1% of Amazon Prime members are likely to consider competitor retail sites in the same shopping session. Non-Prime members, meanwhile, are eight times more likely than Prime members to shop between both Amazon and Target in the same session (O’Connor 2015). In the words of one former Amazon employee who worked on the Prime team, “It was never about the $79. It was really about changing people’s mentality so they wouldn’t shop anywhere else” (Stone 2010).
In that regard, Amazon Prime seems to have proven successful (Tuttle 2010). In 2014, Amazon hiked its Prime membership fee to $99 (Bensinger 2014). The move prompted some consumer ire, but 95% of Prime members surveyed said they would either definitely or probably renew their membership regardless (Whitney 2014), suggesting that Amazon has created significant buy-in and that no competitor is currently offering a comparably valuable service at a lower price. It may, however, also reveal the general stickiness of online shopping patterns. Although competition for online services may seem to be “just one click away,” research drawing on behavioral tendencies shows that the “switching cost” of changing web services can, in fact, be quite high (see Candeub 2014).
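The per-member Prime arithmetic cited above can be laid out explicitly. The figures come from the analysts' estimates quoted in the text (Woo 2011; Rubin 2016), not from Amazon disclosures, and the calculation is only a back-of-the-envelope sketch:

```python
# Back-of-the-envelope Prime economics, using the analysts' estimates
# cited in the text (Woo 2011; Rubin 2016). These are third-party
# estimates, not figures disclosed by Amazon.

prime_fee = 79.0          # original annual membership fee ($)
shipping_cost = 55.0      # estimated per-member shipping cost ($/yr)
video_cost = 35.0         # estimated per-member digital-video cost ($/yr)

# Per Woo (2011): roughly $90 in costs against a $79 fee.
per_member_loss = (shipping_cost + video_cost) - prime_fee

prime_spend = 1_500.0     # average annual spend, Prime member ($)
non_prime_spend = 625.0   # average annual spend, non-member ($)
spend_uplift = prime_spend / non_prime_spend - 1   # relative increase

print(f"loss per member: ${per_member_loss:.0f}/yr")
print(f"spend uplift:    {spend_uplift:.0%}")
```

The $1,500 versus $625 comparison implies members spend about 140% more per year, broadly consistent with the roughly 150% post-enrollment increase analysts report (Stone 2010); against an $11 annual loss per member, the logic of treating Prime as an investment in lock-in rather than a product line is easy to see.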
No doubt, Amazon’s dominance stems in part from its first-mover advantage as a pioneer of large-scale online commerce. But in several key ways, Amazon has achieved its position through deeply cutting prices and investing heavily in growing its operations—both at the expense of profits. The fact that Amazon has been willing to forego profits for growth undercuts a central premise of contemporary predatory pricing doctrine, which assumes that predation is irrational precisely because firms prioritize profits over growth. In this way, Amazon’s strategy has enabled it to use predatory pricing tactics without triggering the scrutiny of predatory pricing laws. Another key element of Amazon’s strategy—and one partly enabled by its capacity to thrive despite posting losses—has been to expand aggressively into multiple business lines. In addition to being a retailer, Amazon is a marketing platform, a delivery and logistics network, a payment service, a credit lender, an auction house, a major book publisher, a producer of television and films, a fashion designer, a hardware manufacturer, and a leading provider of cloud server space and computing power (Amazon.com, Inc. 2016). For the most part, Amazon has expanded into these areas by acquiring existing firms. Involvement in multiple, related business lines means that, in many instances, Amazon’s rivals are also its customers. The retailers that compete with it to sell goods may also use its delivery services, for example, and the media companies that compete with it to produce or market content may also use its platform or cloud infrastructure. At a basic level, this arrangement creates conflicts of interest, given that Amazon is positioned to favor its own products over those of its competitors. Critically, not only has Amazon integrated across select lines of business, but it has also emerged as central infrastructure for the Internet economy. Reports suggest this was part of Bezos’s vision from the start.
According to early Amazon employees, when the CEO founded the business, “his underlying goals were not to build an online bookstore or an online retailer, but rather a ‘utility’ that would become essential to commerce” (Mulpuru and Walker 2012). In other words, Bezos’s target customer was not only end-consumers but also other businesses. Amazon controls key critical infrastructure for the Internet economy—in ways that are difficult for new entrants to replicate or compete against. This gives the company a key advantage over its rivals: Amazon’s competitors have come to depend on it. Like its willingness to sustain losses, this feature of Amazon’s power largely confounds contemporary antitrust analysis, which assumes that rational firms seek to drive their rivals out of business. Amazon’s game is more sophisticated. By making itself indispensable
to e-commerce, Amazon enjoys receiving business from its rivals, even as it competes with them. Moreover, Amazon gleans information from these competitors as a service provider that it may use to gain a further advantage over them as rivals—enabling it to further entrench its dominant position.
ESTABLISHING STRUCTURAL DOMINANCE
Amazon now controls 46% of all e-commerce in the United States (see LaVecchia and Mitchell 2016). Not only is it the fastest-growing major retailer, but it is also growing faster than e-commerce as a whole (Ray 2016; see generally Leonard 2016). In 2010, it employed 33,700 workers; by June 2016, it had 268,900 (Leonard 2016). It is enjoying rapid success even in sectors that it only recently entered. For example, the company “is expected to triple its share of the U.S. apparel market over the next five years” (Banjo 2016). Its clothing sales recently rose by $1.1 billion—even as online sales at the six largest US department stores fell by over $500 million.8 These figures alone are daunting, but they do not capture the full extent of Amazon’s role and power. Amazon’s willingness to sustain losses and invest aggressively at the expense of profits, coupled with its integration across sectors, has enabled it to establish a dominant structural role in the market. In what follows, several examples of Amazon’s conduct illustrate how the firm has established structural dominance. The first example focuses on predatory pricing. The other examples, Fulfillment-by-Amazon and Amazon Marketplace, demonstrate how Amazon has become an infrastructure company, both for physical delivery and e-commerce, and how this vertical integration implicates market competition. These cases highlight how Amazon can use its role as an infrastructure provider to benefit its other lines of business. These examples also demonstrate how high barriers to entry may make it difficult for potential competitors to enter these spheres, locking in Amazon’s dominance for the foreseeable future. All of these accounts raise concerns about contemporary antitrust’s ability to register and address the anticompetitive threat posed by Amazon and other dominant online platforms.
8. Its clothing sales are greater than the combined online sales of its five largest online apparel competitors: Macy’s, Nordstrom, Kohl’s, Gap, and Victoria’s Secret’s parent (Banjo 2016).
Discriminatory Pricing and Fees
Under current doctrine, whether below-cost pricing is predatory or not turns on whether a firm recoups its losses. Accordingly, the following paragraphs examine how Amazon could use its dominance to recoup its losses in ways that are more sophisticated than what current doctrine recognizes. Most obviously, Amazon could earn back the losses it generated on bestseller e-books by raising prices of either particular lines of e-books or e-books as a whole. However, conducting recoupment analysis with Amazon is particularly challenging because it may not be apparent when and by how much Amazon raises prices. Online commerce enables Amazon to obscure price hikes in at least two ways: rapid, constant price fluctuations and personalized pricing.9 Constant price fluctuations diminish our ability to discern pricing trends. By one account, Amazon changes prices more than 2.5 million times each day (Ferdman 2013). Amazon is also able to tailor prices to individual consumers, a practice known as first-degree price discrimination. There is no public evidence that Amazon is currently engaging in personalized pricing,10 but online retailers generally are devoting significant resources to analyzing how to implement it (Khan 2014). A major topic of discussion at the 2014 National Retail Federation annual convention, for example, was how to introduce discriminatory pricing without triggering consumer backlash (Khan 2014). One mechanism discussed was highly personalized coupons sent at the point of sale, which would avoid the need to show consumers different prices but would still achieve discriminatory pricing (Khan 2014). If retailers—including Amazon—implement discriminatory pricing on a wide scale, each individual would be subject to his or her own personal price trajectory, eliminating the notion of a single pricing trend. It is not clear how we would measure price hikes for the purpose of recoupment analysis in that scenario.
There would be no obvious conclusions if some consumers faced higher prices while others enjoyed lower ones. But given the magnitude and accuracy of data that Amazon has collected on millions of users, tailored pricing is not simply a hypothetical power. It is true that brick-and-mortar stores also collect data on customer purchasing habits and send personalized coupons. But the types of consumer behavior that Internet firms can access—how long you hover your mouse
9. Several journalists have tracked instances of price discrimination in e-commerce. See, e.g., Angwin and Larson (2015); Valentino-Devries, Singer-Vine, and Soltani (2012). 10. But recent reporting does suggest that Amazon manipulates how it presents pricing in order to favor its own products. See Angwin and Mattu (2016).
on a particular item, how many days an item sits in your shopping basket before you purchase it, or the fashion blogs you visit before looking for those same items through a search engine—is uncharted ground. The degree to which a firm can tailor and personalize an online shopping experience is different in kind from the methods available to a brick-and-mortar store—precisely because the type of behavior that online firms can track is far more detailed and nuanced. And unlike brick-and-mortar stores—where everyone at least sees a common price (even if they go on to receive discounts)—Internet retail enables firms to entirely personalize consumer experiences, which eliminates any collective baseline from which to gauge price increases or decreases. In which product market would Amazon choose to raise prices? This is also an open question—and one that current predatory pricing doctrine ignores. Courts generally assume that a firm will recoup by increasing prices on the same goods on which it previously lost money. But recoupment across markets is also available as a strategy, especially for firms as diversified across products and services as Amazon. Reporting suggests the company did just this in 2013, by hiking prices on scholarly and small-press books and creating the risk of a “two-tier system where some books are priced beyond an audience’s reach” (Streitfeld 2013). Although Amazon may be recouping its initial losses in e-books through markups on physical books, this cross-market recoupment is not a scenario that enforcers or judges generally consider (see Areeda and Hovenkamp 2010; Leslie 2013, 1720; Trujillo 1994). One possible reason for this neglect is that Chicago School scholarship, which assumes recoupment in single-product markets is unlikely, also holds recoupment in multiproduct scenarios to be implausible (Leslie 2013, 1720–21).
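A stylized sketch of the cross-market recoupment pattern described above, with invented figures (none drawn from the Amazon record): losses sustained on e-books are earned back through markups in an adjacent market, print books, where single-market recoupment analysis never looks.

```python
# Hypothetical cross-market recoupment. A single-market recoupment test
# would look for higher e-book prices and find none; here the losses are
# recovered through markups on print books instead. All figures invented.

ebook_loss_per_unit = 4.0        # below-cost discount per e-book ($)
ebooks_sold = 10_000_000         # e-books sold during the predation period

print_markup = 2.0               # added margin per print title ($)
print_sold_per_year = 25_000_000 # print books sold per year

total_predation_loss = ebook_loss_per_unit * ebooks_sold
annual_print_recoupment = print_markup * print_sold_per_year

years = total_predation_loss / annual_print_recoupment
print(f"e-book losses recouped in {years:.1f} years via print markups")
```

Under these invented numbers, the losses are recovered in under a year, yet a court applying the standard test to the e-book market alone would find no price increase and hence no recoupment.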
Although current predatory pricing doctrine focuses only on recoupment through raising prices for consumers, Amazon could also recoup its losses by imposing higher fees on publishers. Large book retailer chains like Barnes & Noble have long used their market dominance to charge publishers for favorable product placement, such as displays in a storefront window or on a prominent table (see Kennedy 2005). Amazon’s dominance in the e-book market has enabled it to demand similar fees for even the most basic of services. For example, when renewing its contract with Hachette in 2014, Amazon demanded payments for services including the preorder button, personalized recommendations, and an Amazon employee assigned to the publisher (Stewart 2014). In the words of one person close to the negotiations, Amazon “is very inventive about what we’d call standard service. . . . They’re teasing out all these layers and saying, ‘If you want that service, you’ll have to pay for it’ ” (Stewart 2014). By introducing fees on services
that it previously offered for free, Amazon has created another source of revenue. Amazon’s power to demand these fees—and recoup some of the losses it sustained in below-cost pricing—stems from dominance partly built through that same below-cost pricing. The fact that Amazon has itself vertically integrated into book publishing—and hence can promote its own content—may give it additional leverage to hike fees. Any publisher that refuses could see Amazon favor its own books over the publisher’s. It is not uncommon for half of the titles on Amazon’s Kindle bestseller list to be its own (see LaVecchia and Mitchell 2016, 2).

While not captured by current antitrust doctrine, the pressure Amazon puts on publishers merits concern. For one, consolidation among book sellers—partly spurred by Amazon’s pricing tactics and demands for better terms from publishers—has also spurred consolidation among publishers. Consolidation among publishers last reached its heyday in the 1990s—as publishing houses sought to bulk up in response to the growing clout of Borders and Barnes & Noble—and by the early 2000s, the industry had settled into the “Big Six” in the United States (Kachka 2013). This trend has cost authors and readers alike, leaving writers with fewer paths to market and readers with a less diverse marketplace. Since Amazon’s rise, the major publishers have merged further—thinning down to five, with rumors of more consolidation to come (Kachka 2013).

Second, the increasing cost of doing business with Amazon is upending the publishers’ business model in ways that further risk sapping diversity. Traditionally, publishing houses used a cross-subsidization model whereby they would use their bestsellers to subsidize weightier and riskier books requiring greater upfront investment. In the face of higher fees imposed by Amazon, publishers say they are less able to invest in a range of books.
In a recent letter to the Department of Justice, a group of authors wrote that Amazon’s actions have “extract[ed] vital resources from the [book] industry in ways that lessen the diversity and quality of books” (Authors United 2015). The authors noted that publishers have responded to Amazon’s fees by both publishing fewer titles and focusing largely on books by celebrities and bestselling authors (Authors United 2015). The authors also noted, “Readers are presented with fewer books that espouse unusual, quirky, offbeat, or politically risky ideas, as well as books from new and unproven authors. This impoverishes America’s marketplace of ideas” (Authors United 2015).

Amazon’s conduct would be readily cognizable as a threat under the pre–Chicago School view that predatory pricing laws specifically and antitrust generally promoted a broad set of values. Under the predatory pricing jurisprudence of the early and mid-20th century, harm to the diversity
and vibrancy of ideas in the book market may have been a primary basis for government intervention. The political risks associated with Amazon’s market dominance also implicate some of the major concerns that animate antitrust laws. For instance, the risk that Amazon may retaliate against books that it disfavors—either to impose greater pressure on publishers or for other political reasons—raises concerns about media freedom. Given that antitrust authorities previously considered diversity of speech and ideas a factor in their analysis, Amazon’s degree of control, too, should warrant concern.

Even within the narrower “consumer welfare” framework, Amazon’s attempts to recoup losses through fees on publishers should be understood as harmful. A market with less choice and diversity for readers amounts to a form of consumer injury. That the DOJ ignored this concern in its suit against Apple and the publishers suggests that its conception of predatory pricing overlooks the full suite of harms that Amazon’s actions may cause.

Amazon’s below-cost pricing in the e-book market—which enabled it to capture 65% of that market,11 a sizable share by any measure—strains predatory pricing doctrine in several ways. First, Amazon is positioned to recoup its losses by raising prices on less popular or obscure e-books, or by raising prices on print books. In either case, Amazon would be recouping outside the original market where it sustained losses (bestseller e-books), so courts are unlikely to look for or consider these scenarios. Additionally, constant fluctuations in prices and the ability to price discriminate enable Amazon to raise prices with little chance of detection. Lastly, Amazon could recoup its losses by extracting more from publishers, who are dependent on its platform to market both e-books and print books.
This may diminish the quality and breadth of the works that are published, but since this is most directly a supplier-side rather than buyer-side harm, it is less likely that a modern court would consider it closely. The current predatory pricing framework fails to capture the harm posed to the book market by Amazon’s tactics.
Amazon Delivery and Leveraging Dominance across Sectors
Amazon’s willingness to sustain losses has allowed it to engage in below-cost pricing in order to establish dominance as an online retailer.

11. At the height of its market share, this figure was closer to 90%. After Apple entered the market, Amazon’s share fell slightly and then stabilized around 65% (Packer 2014).

Amazon
has translated its dominance as an online retailer into significant bargaining power in the delivery sector, using it to secure favorable conditions from third-party delivery companies. This in turn has enabled Amazon to extend its dominance over other retailers by creating the Fulfillment-by-Amazon service and establishing its own physical delivery capacity. This illustrates how a company can leverage its dominant platform to successfully integrate into other sectors, creating anticompetitive dynamics. Retail competitors are left with two undesirable choices: either try to compete with Amazon at a disadvantage or become reliant on a competitor to handle delivery and logistics.

As Amazon expanded its share of e-commerce—and enlarged the e-commerce sector as a whole—it started constituting a greater share of delivery companies’ business. For example, in 2015, UPS derived $1 billion worth of business from Amazon alone (Stevens and Bensinger 2015). The fact that it accounted for a growing share of these firms’ businesses gave Amazon bargaining power to negotiate for lower rates. By some estimates, Amazon enjoyed a 70% discount over regular delivery prices (Clifford and Cain Miller 2010). Delivery companies sought to make up for the discounts they gave to Amazon by raising the prices they charged to independent sellers (see Stevens 2016), a phenomenon recently termed the “waterbed effect” (see Dobson and Inderst 2008, 336–37; Kirkwood 2012, 1544). As scholars have described:

[T]he presence of a waterbed effect can further distort competition by giving a powerful buyer now a two-fold advantage, namely, through more advantageous terms for itself and through higher purchasing costs for its rivals. What then becomes a virtuous circle for the strong buyer ends up as a vicious circle for its weaker competitors. (Dobson and Inderst 2008, 337)
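The waterbed mechanism lends itself to a simple numerical sketch. The only figure below taken from the chapter's sources is the 70% discount reported by Clifford and Cain Miller (2010); the carrier's revenue requirement and the volume split are hypothetical.

```python
# A minimal sketch of the "waterbed effect" described by Dobson and Inderst:
# when a carrier grants its dominant customer a deep discount but must still
# cover its costs, the shortfall is pushed onto smaller shippers as higher
# rates. All numbers except the 70% discount are hypothetical.

base_rate = 10.00          # carrier's ordinary per-package rate
required_revenue = 10_000  # revenue the carrier needs from 1,000 packages

dominant_volume = 700      # packages shipped by the dominant buyer
fringe_volume = 300        # packages shipped by independent sellers

# The dominant buyer negotiates a 70% discount off the ordinary rate.
dominant_rate = base_rate * (1 - 0.70)

# The carrier recovers the remainder from the fringe sellers.
fringe_rate = (required_revenue - dominant_rate * dominant_volume) / fringe_volume

# The fringe now pays well above the ordinary rate while the dominant buyer
# pays well below it: the two-fold advantage the quotation describes.
assert dominant_rate < base_rate < fringe_rate
print(f"dominant buyer pays {dominant_rate:.2f}, fringe sellers pay {fringe_rate:.2f}")
```

On these assumptions the fringe rate rises to roughly $26 per package, which is precisely the cost disadvantage that later makes Fulfillment-by-Amazon attractive to independent sellers.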
To this twofold advantage Amazon added a third perk: harnessing the weakness of its rivals into a business opportunity. In 2006, Amazon introduced Fulfillment-by-Amazon (FBA), a logistics and delivery service for independent sellers (Phx.corporate-ir.net 2006). Merchants who sign up for FBA store their products in Amazon’s warehouses, and Amazon packs, ships, and provides customer service on any orders. Products sold through FBA are eligible for service through Amazon Prime—namely, free two-day shipping and/or free regular shipping, depending on the order (Phx.corporate-ir.net 2006). Since many merchants selling on Amazon are competing with Amazon’s own retail operation and its Amazon Prime service, using FBA offers sellers the opportunity to compete at less of a disadvantage.
Notably, it is partly because independent sellers faced higher rates from UPS and FedEx—a result of Amazon’s dominance—that Amazon succeeded in directing sellers to its new business venture (see Cole 2012). In many instances, orders routed through FBA were still being shipped and delivered by UPS and FedEx, since Amazon relied on these firms (Wohlsen 2014). But because Amazon had secured discounts unavailable to other sellers, it was cheaper for those sellers to go through Amazon than to use UPS and FedEx directly. Amazon had used its dominance in the retail sector to create and boost a new venture in the delivery sector, inserting itself into the business of its competitors. Amazon has followed up on this initial foray into fulfillment services by creating a logistics empire. Building out physical capacity lets Amazon further reduce its delivery times, raising the bar for entry yet higher. Moreover, it is the firm’s capacity for aggressive investing that has enabled it to rapidly establish an extensive network of physical infrastructure. Since 2010, Amazon has spent $13.9 billion building warehouses (Kucera 2013), and it spent $11.5 billion on shipping in 2015 alone (Leonard 2016). Amazon has opened more than 180 warehouses (Bensinger and Stevens 2016), 28 sorting centers, 59 delivery stations that feed packages to local couriers, and more than 65 Prime Now hubs (Bensinger and Stevens 2016). Analysts estimate that the locations of Amazon’s fulfillment centers bring it within 20 miles of 31% of the US population and within 20 miles of 60% of its core same-day base (D’Onfro 2015). This sprawling network of fulfillment centers—each placed in or near a major metropolitan area—equips Amazon to offer one-hour delivery in some locations and same-day in others (a service it offers free to members of Amazon Prime) (Bensinger and Stevens 2016). While several rivals initially entered the delivery market to compete with Prime shipping, some are now retreating (Soper 2015). 
As one analyst noted, “Prime has proven exceedingly difficult for rivals to copy” (Stone 2010; see also Mangalindan 2012). Most recently, Amazon has also expanded into trucking. In December 2015, it announced it plans to roll out thousands of branded semi-trucks, a move that will give it yet more control over delivery, as it seeks to speed up how quickly it can transport goods to customers (Del Ray 2015; Leonard 2016). Amazon now owns four thousand truck trailers and has also signed contracts for container ships, planes (Lewis 2016), and drones (Manjoo 2016). As of October 2016, Amazon had leased at least 40 jets (Manjoo 2016). Former employees say Amazon’s long-term goal is to circumvent UPS and FedEx altogether, though the company itself has said it is looking only to supplement its reliance on these firms, not supplant them (see Del Ray 2015; also Leonard 2016).
The way that Amazon has leveraged its dominance as an online retailer to vertically integrate into delivery is instructive on several fronts. First, it is a textbook example of how a company can use its dominance in one sphere to advantage a separate line of business. To be sure, this dynamic is not intrinsically anticompetitive. What should prompt concern in Amazon’s case, however, is that Amazon achieved these cross-sector advantages in part due to its bargaining power. Because Amazon was able to demand heavy discounts from FedEx and UPS, other sellers faced price hikes from these companies—which positioned Amazon to capture them as clients for its new business. By overlooking structural factors like bargaining power, modern antitrust doctrine fails to address this type of threat to competitive markets.

Second, Amazon is positioned to use its dominance across online retail and delivery in ways that involve tying, are exclusionary, and create entry barriers (Elhauge 2009). That is, Amazon’s distortion of the delivery sector in turn creates anticompetitive challenges in the retail sector. For example, sellers who use FBA have a better chance of being listed higher in Amazon search results than those who do not, which means Amazon is tying the outcomes it generates for sellers using its retail platform to whether they also use its delivery business (Mitchell n.d.). Amazon is also positioned to use its logistics infrastructure to deliver its own retail goods faster than those of independent sellers that use its platform and fulfillment service—a form of discrimination that exemplifies traditional concerns about vertical integration. And Amazon’s capacity for losses and expansive logistics capacities mean that it could privilege its own goods while still offering independent sellers the ability to ship goods more cheaply and quickly than they could by using UPS and FedEx directly.
Relatedly, Amazon’s expansion into the delivery sector also raises questions about the Chicago School’s limited conception of entry barriers. The company’s capacity for losses—the permission it has won from investors to show negative profits—has been key in enabling Amazon to achieve outsized growth in delivery and logistics. Matching Amazon’s network would require a rival to invest heavily and—in order to viably compete—offer free or otherwise below-cost shipping. In interviews with reporters, venture capitalists say there is no appetite to fund firms looking to compete with Amazon on physical delivery. In this way, Amazon’s ability to sustain losses creates an entry barrier for any firm that does not enjoy the same privilege.

Third, Amazon’s use of Prime and FBA exemplifies how the company has structurally placed itself at the center of e-commerce. Already 55% of American online shoppers begin their online shopping on Amazon’s platform (Del Ray 2016). Given the traffic, it is becoming increasingly clear that
in order to succeed in e-commerce, an independent merchant will need to use Amazon’s infrastructure. The fact that Amazon competes with many of the businesses that are coming to depend on it creates a host of conflicts of interest that the company can exploit to privilege its own products. The framework in antitrust today fails to recognize the risk that Amazon’s dominance poses to open and competitive markets. In part, this is because—as with the framework’s view of predatory pricing—the primary harm that registers within the “consumer welfare” frame is higher consumer prices. On the Chicago School’s account, Amazon’s vertical integration would only be harmful if and when it chooses to use its dominance in delivery and retail to hike fees to consumers. Amazon has already raised Prime prices (Bensinger 2014). But antitrust enforcers should be equally concerned about the fact that Amazon increasingly controls the infrastructure of online commerce—and the ways in which it is harnessing this dominance to expand and advantage its new business ventures. The conflicts of interest that arise from Amazon both competing with merchants and delivering their wares pose a hazard to competition, particularly in light of Amazon’s entrenched position as an online platform. Amazon’s conflicts of interest tarnish the neutrality of the competitive process. The thousands of retailers and independent businesses that must ride Amazon’s rails to reach market are increasingly dependent on their biggest competitor.
Amazon Marketplace and Exploiting Data
As described above, vertical integration in retail and physical delivery may enable Amazon to leverage cross-sector advantages in ways that are potentially anticompetitive but not understood as such under current antitrust doctrine. Analogous dynamics are at play with Amazon’s dominance in the provision of online infrastructure, in particular its Marketplace for third-party sellers. Because information about Amazon’s practices in this area is limited, this section is necessarily brief. But to capture fully the anticompetitive features of Amazon’s business strategy, it is vital to analyze how vertical integration across Internet businesses introduces more sophisticated—and potentially more troubling—opportunities to abuse cross-market advantages and foreclose rivals. The clearest example of how the company leverages its power across online businesses is Amazon Marketplace, where third-party retailers sell their wares. Since Amazon commands a large share of e-commerce traffic, many smaller merchants find it necessary to use its site to draw buyers (Loten and Janofsky 2015). These sellers list their goods on Amazon’s
platform and the company collects fees ranging from 6% to 50% of their sales from them (Loten and Janofsky 2015). More than 2 million third-party sellers used Amazon’s platform as of 2015, an increase from the roughly 1 million that used the platform in 2006 (Loten and Janofsky 2015). The revenue that Amazon generates through Marketplace has been a major source of its growth: third-party sellers’ share of total items sold on Amazon rose from 36% in 2011 (Bensinger 2012) to over 50% in 2015 (Halpin 2015).

Third-party sellers using Marketplace recognize that using the platform puts them in a bind. As one merchant observed, “You can’t really be a high-volume seller online without being on Amazon, but sellers are very aware of the fact that Amazon is also their primary competitor” (Loten and Janofsky 2015). Evidence suggests that their unease is well founded. Amazon seems to use its Marketplace “as a vast laboratory to spot new products to sell, test sales of potential new goods, and exert more control over pricing” (Bensinger 2012). Specifically, reporting suggests that “Amazon uses sales data from outside merchants to make purchasing decisions in order to undercut them on price” and give its own items “featured placement under a given search” (Bensinger 2012).

Take the example of Pillow Pets, “stuffed-animal pillows modelled after NFL mascots” that a third-party merchant sold through Amazon’s site (Bensinger 2012). For several months, the merchant sold up to 100 pillows per day (Bensinger 2012). According to one account, “just ahead of the holiday season, [the merchant] noticed Amazon had itself beg[u]n offering the same Pillow Pets for the same price while giving [its own] products featured placement on the site” (Bensinger 2012). The merchant’s own sales dropped to 20 per day (Bensinger 2012). Amazon has gone head-to-head with independent merchants on price, vigorously matching and even undercutting them on products that they had originally introduced.
By going directly to the manufacturer, Amazon seeks to cut out the independent sellers. In other instances, Amazon has responded to popular third-party products by producing them itself. Last year, a manufacturer that had been selling an aluminum laptop stand on Marketplace for more than a decade saw a similar stand appear at half the price. The manufacturer learned that the brand was AmazonBasics, the private line that Amazon has been developing since 2009 (Soper 2016). As one news site describes it, initially, AmazonBasics focused on generic goods like batteries and blank DVDs. “Then, for several years, the house brand ‘slept quietly as it retained data about other sellers’ successes’ ” (Soper 2016). As it now rolls out more AmazonBasics products, it is clear that the company has used “insights gleaned from its vast Web store to build a private-label juggernaut that now
includes more than 3,000 products” (Soper 2016). One study found that in the case of women’s clothing, Amazon “began selling 25% of the top items first sold through marketplace vendors” (Anderson 2014).

In using its Marketplace this way, Amazon increases sales while shedding risk. It is third-party sellers who bear the initial costs and uncertainties when introducing new products; by merely spotting them, Amazon gets to sell products only once their success has been tested. The anticompetitive implications here seem clear: Amazon is exploiting the fact that some of its customers are also its rivals. The source of this power is: (1) its dominance as a platform, which effectively necessitates that independent merchants use its site; (2) its vertical integration—namely, the fact that it both sells goods as a retailer and hosts sales by others as a marketplace; and (3) its ability to amass swaths of data, by virtue of being an Internet company. Notably, it is this last factor—its control over data—that heightens the anticompetitive potential of the first two.

Evidence suggests that Amazon is keenly aware of and interested in exploiting these opportunities. For example, the company has reportedly used insights gleaned from its cloud computing service to inform its investment decisions (see Barr 2011). By observing which start-ups are expanding their usage of Amazon Web Services, Amazon can make early assessments of the potential success of upcoming firms. Amazon has used this “unique window into the technology startup world” to invest in several start-ups that were also customers of its cloud business (Barr 2011).

How Amazon has cross-leveraged its advantages across distinct lines of business suggests that the law fails to appreciate when vertical integration may prove anticompetitive.
This shortcoming is underscored with online platforms, which both serve as infrastructure for other companies and collect swaths of data that they can then use to build up other lines of business. In this way, the current antitrust regime has yet to reckon with the fact that firms with concentrated control over data can systematically tilt a market in their favor, dramatically reshaping the sector.
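The "vast laboratory" dynamic reported by Bensinger (2012) can be reduced to a very simple decision rule: a platform that observes every listing's sales can filter for third-party products whose demand has already been proven, then enter those lines itself. The sketch below is purely illustrative; the product names, threshold, and selection criterion are hypothetical, not Amazon's actual practice.

```python
# Hypothetical sketch of data-driven first-party entry: third-party sellers
# bear the cost of discovering demand, and the platform harvests the results.
# All names and thresholds are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Listing:
    product: str
    seller: str
    daily_sales: int

def products_to_copy(listings, threshold=50):
    """Return third-party products whose proven demand exceeds a threshold,
    i.e. candidates for a first-party or private-label version."""
    return [l.product for l in listings
            if l.seller != "platform" and l.daily_sales >= threshold]

marketplace = [
    Listing("pillow pet", "merchant_a", 100),
    Listing("laptop stand", "merchant_b", 60),
    Listing("novelty mug", "merchant_c", 5),
]

# Only the successes that other sellers paid to discover are selected.
print(products_to_copy(marketplace))  # → ['pillow pet', 'laptop stand']
```

The asymmetry the chapter describes sits entirely in the data access: independent merchants see only their own sales, while the platform sees every row of `marketplace`.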
TWO MODELS FOR ADDRESSING PLATFORM POWER
If it is true that the economics of platform markets may encourage anticompetitive market structures, there are at least two approaches we can take. Key is deciding whether we want to govern online platform markets through competition, or want to accept that they are inherently monopolistic or oligopolistic and regulate them instead. If we take the former approach, we should reform antitrust law to prevent this dominance from
emerging or to limit its scope. If we take the latter approach, we should adopt regulations to take advantage of these economies of scale while neutering the firm’s ability to exploit its dominance.
Governing Online Platform Markets through Competition
Reforming antitrust to address the anticompetitive nature of platform markets could involve making the law against predatory pricing more robust and strictly policing forms of vertical integration that firms can use for anticompetitive ends. Importantly, each of these doctrinal areas should be reformulated so that it is sensitive to preserving the competitive process and limiting conflicts of interest that may incentivize anticompetitive conduct. Revising predatory pricing doctrine to reflect the economics of platform markets, where firms can sink money for years given unlimited investor backing, would require abandoning the recoupment requirement in cases of below-cost pricing by dominant platforms. And given that platforms are uniquely positioned to fund predation, a competition-based approach might also consider introducing a presumption of predation for dominant platforms found to be pricing products below cost. Similarly, antitrust law should be reformed to address how vertical integration may give rise to anticompetitive conflicts of interest and the fact that a dominant firm may use its dominance in one sector to advance another line of business. One way to address the concern about a firm’s capacity to cross-leverage data is to expressly include it into merger review. It could make sense for the agencies to automatically review any deal that involves exchange of certain forms (or a certain quantity) of data. A stricter approach to vertical integration would place prophylactic limits on vertical mergers by platforms that have reached a certain level of dominance. Adopting this prophylactic approach would mean banning a dominant firm from entering any market that it already serves as a platform—in other words, from competing directly with the businesses that depend on it.
Governing Dominant Platforms as Monopolies through Regulation
As described above, one option is to govern dominant platforms through promoting competition, thereby limiting the power that any one actor accrues. The other is to accept dominant online platforms as natural monopolies
or oligopolies, seeking to regulate their power instead. Traditionally the United States has regulated natural monopolies through public utility regulations and common carrier duties. Industries that historically have been regulated as utilities include commodities (water, electric power, gas), transportation (railroads, ferries), and communications (telegraphy, telephones) (Wu 2010, 1616). Critically, a public utility regime aims at eliminating competition: it accepts the benefits of monopoly and chooses to instead limit how a monopoly may use its power (Wu 2010, 1643). Given that Amazon increasingly serves as essential infrastructure across the Internet economy, applying elements of public utility regulations to its business is worth considering (see Rahman 2015; 2016). The most common public utility policies are (1) requiring nondiscrimination in price and service, (2) setting limits on rate-setting, and (3) imposing capitalization and investment requirements. Of these three traditional policies, nondiscrimination would make the most sense, while rate-setting and investment requirements would be trickier to implement and, perhaps, would less obviously address an outstanding deficiency. A nondiscrimination policy that prohibited Amazon from privileging its own goods and from discriminating among producers and consumers would be significant. This approach would permit the company to maintain its involvement across multiple lines of business and permit it to enjoy the benefits of scale while mitigating the concern that Amazon could unfairly advantage its own business or unfairly discriminate among platform users to gain leverage or market power.12 Coupling nondiscrimination with common carrier obligations—requiring platforms to ensure open and fair access to other businesses—would further limit Amazon’s power to use its dominance in anticompetitive ways.
CONCLUSION
Internet platforms mediate a large and growing share of our commerce and communications. Yet evidence shows that competition in platform markets is flagging, with sectors coalescing around one or two giants (FCC 2017). The titan in e-commerce is Amazon—a company that has built its dominance through aggressively pursuing growth at the expense of profits and that has integrated across many related lines of business.

12. Net neutrality is a form of common carrier regime. For an exposition of why net neutrality and search neutrality should apply to major platforms, see Pasquale (2008, 263).

As a result,
the company has positioned itself at the center of Internet commerce and serves as essential infrastructure for a host of other businesses that now depend on it.

This chapter argues that Amazon’s business strategies and current market dominance pose anticompetitive concerns that the consumer welfare framework in antitrust fails to recognize. In particular, current law underappreciates the risk of predatory pricing and how integration across distinct business lines may prove anticompetitive. These concerns are heightened in the context of online platforms for two reasons. First, the economics of platform markets incentivize the pursuit of growth over profits, a strategy that investors have rewarded. Under these conditions predatory pricing becomes highly rational—even as existing doctrine treats it as irrational. Second, because online platforms serve as critical intermediaries, integrating across business lines positions these platforms to control the essential infrastructure on which their rivals depend. This dual role also enables a platform to exploit information collected on companies using its services to undermine them as competitors.

In order to capture these anticompetitive concerns, we should replace the consumer welfare framework with an approach oriented around preserving a competitive process and market structure. Applying this idea involves, for example, assessing whether a company’s structure creates anticompetitive conflicts of interest; whether it can cross-leverage market advantages across distinct lines of business; and whether the economics of online platform markets incentivizes predatory conduct and capital markets permit it. More specifically, restoring traditional antitrust principles to create a presumption of predation and to ban vertical integration by dominant platforms could help maintain competition in these markets.
If, instead, we accept dominant online platforms as natural monopolies or oligopolies, then applying elements of a public utility regime or essential facilities obligations would maintain the benefits of scale while limiting the ability of dominant platforms to abuse the power that comes with it. As Amazon continues both to deepen its existing control over key infrastructure and to reach into new lines of business, its dominance demands scrutiny. To revise antitrust law and competition policy for platform markets, we should be guided by two questions. First, does the legal framework capture the realities of how dominant firms acquire and exercise power in the Internet economy? And second, what forms and degrees of power should the law identify as a threat to competition? Without considering these questions, we risk permitting the growth of powers that we oppose but fail to recognize.
REFERENCES

Amazon.com, Inc. 2010. “Annual Report (Form 10-K) 27.” January 29, 2010. http://www.sec.gov/Archives/edgar/data/1018724/000119312510016098/d10k.htm; http://perma.cc/L27R-CHUY.

Amazon.com, Inc. 2016. “Annual Report (Form 10-K) 4.” April 6, 2016. http://phx.corporate-ir.net/External.File?item=UGFyZW50SUQ9NjI4NTg0fENoaWxkSUQ9MzI5NTMwfFR5cGU9MQ==&t=1.

Anderson, George. 2014. “Is Amazon Undercutting Third-Party Sellers Using Their Own Data?” Forbes, October 30, 2014. http://www.forbes.com/sites/retailwire/2014/10/30/is-amazon-undercutting-third-party-sellers-using-their-own-data.

Angwin, Julia, and Jeff Larson. 2015. “The Tiger Mom Tax: Asians Are Nearly Twice as Likely to Get a Higher Price from Princeton Review.” ProPublica, September 1, 2015. http://www.propublica.org/article/asians-nearly-twice-as-likely-to-get-higher-price-from-princeton-review.

Angwin, Julia, and Surya Mattu. 2016. “Amazon Says It Puts Customers First. But Its Pricing Algorithm Doesn’t.” ProPublica, September 20, 2016. http://www.propublica.org/article/amazon-says-it-puts-customers-first-but-its-pricing-algorithm-doesn't.

Areeda, Phillip, and Herbert Hovenkamp. 2010. Fundamentals of Antitrust Law. New York: Aspen Law and Business, 7–72.

Authors United. 2015. “Letter to William J. Baer, Assistant Att’y Gen., Antitrust Div., Dept. of Justice.” July 14, 2015. http://www.authorsunited.net/july/longdocument.html.

Bain, Joe S. 1950. “Workable Competition in Oligopoly: Theoretical Considerations and Some Empirical Evidence.” American Economic Review 40: 35, 36–38.

Bain, Joe S. (1959) 1968. Industrial Organization. 2d ed. New Jersey: John Wiley & Sons.

Banjo, Shelly. 2016. “Amazon Eats the Department Store.” Bloomberg: Gadfly, September 20, 2016. http://www.bloomberg.com/gadfly/articles/2016-09-20/amazon-clothing-sales-could-soon-top-macy-s.

Banjo, Shelly, and Paul Ziobro. 2013. “After Decades of Toil, Web Services Remain Small for Many Retailers.” Wall Street Journal, August 27, 2013. http://www.wsj.com/articles/SB10001424127887324906304579039101568397122.

Barr, Alistair. 2011. “Amazon Finds Startup Investments in the ‘Cloud.’ ” Reuters, November 9, 2011. http://www.reuters.com/article/amazon-cloud-idUSN1E7A727Q20111109.

Bensinger, Greg. 2012. “Competing with Amazon on Amazon.” Wall Street Journal, June 27, 2012. http://www.wsj.com/articles/SB10001424052702304441404577482902055882264.

Bensinger, Greg. 2014. “Amazon Raises Prime Subscription Price to $99 a Year.” Wall Street Journal, March 13, 2014. http://www.wsj.com/articles/SB10001424052702303546204579436903309411092.

Bensinger, Greg. 2016. “Cloud Unit Pushes Amazon to Record Profit.” Wall Street Journal, April 28, 2016. http://www.wsj.com/articles/amazon-reports-surge-in-profit-1461874333.

Bensinger, Greg, and Laura Stevens. 2016. “Amazon’s Newest Ambition: Competing Directly with UPS and FedEx.” Wall Street Journal, September 27, 2016. http://www.wsj.com/articles/amazons-newest-ambition-competing-directly-with-ups-and-fedex-1474994758.
Amazon
[ 123 ]
2 4 1
Bezos, Jeffrey P. 1998. “Letter to Shareholders.” Amazon.com, Inc., March 30, 1998. http:// media.corporate-ir.net/media_files/irol/97/97664/reports/ Shareholderletter97.pdf. BI Intelligence. 2016. “The Everything Shipper: Amazon and the New Age of Delivery.” June 5, 2016. http://www.businessinsider.com/the-everything- shipper-amazon-and-the-new -age-of-delivery-2016-6. Bork, Robert H. 1978. The Antitrust Paradox: A Policy at War with Itself. New York: Basic Books. Candeub, Adam. 2014. Behavioral Economics, Internet Search, and Antitrust. Michigan: Michigan State University College of Law. Clark, Meagan, and Angelo Young. 2013. “Amazon: Nearly 20 Years in Business and It Still Doesn’t Make Money, but Investors Don’t Seem to Care.” International Business Times, December 18, 2013. http://www.ibtimes.com/ amazon-nearly-20-years-business-it-still-doesnt-make-money-investors -dont-seem-care-1513368. Clifford, Stephanie, and Claire Cain Miller. 2010. “Wal-Mart Says ‘Try This On’: Free Shipping.” New York Times, November 11, 2010. http://www.nytimes.com/ 2010/11/11/business/11shipping.html CNN Money. 2002. “7 Amazon Posts a Profit.” January 22, 2002. http://money.cnn. com/2002 /01/22/technology/amazon. Cole, Paul. 2012. “Should You Use Amazon Discounted UPS Shipping?” Seller Engine. http://sellerengine.com/should-you-use-amazon-discounted-ups-shipping. Cole, Robert H. 1952. “General Discussion of Vertical Integration.” In Vertical Integration in Marketing 9: 9 (Bureau of Econ. & Bus. Research, Univ. of Ill., Bulletin No. 74, Nugent Wedding ed.). Crane, Daniel A. 2014. “The Tempting of Antitrust: Robert Bork and the Goals of Antitrust Policy.” Antitrust Law Journal 79: 835–47. Del Ray, Jason. 2015. “Amazon Buys Thousands of Its Own Truck Trailers as Its Transportation Ambitions Grow.” Recode, December 4, 2015. http://recode.net/2015/12/04/amazon -buys-thousands-of-its-own-trucks-as-its-transportation-ambitions-grow. Del Ray, Jason. 2016. 
“55 Percent of Online Shoppers Start Their Product Searches on Amazon.” ReCode, September 27, 2016. https://www.recode.net/2016/9/27/ 13078526/amazon-online-shopping-product-search-engine. DiChristopher, Tom. 2015. “Prime Will Grow Amazon Revenue Longer Than You Think: Analyst.” CNBC, September 11, 2015. http://www.cnbc.com/2015/09/ 11/prime-will-grow -amazon-revenue-longer-than-you-think-analyst.html. Dini, Justin. 2000. “Amazon Losses Widen but Shares Rise After-Hours.” The Street, February 2, 2000. http://www.thestreet.com/story/875924/1/amazon-losses- widen-but -shares-rise-after-hours.html. Dobson, Paul W., and Roman Inderst. 2008. “The Waterbed Effect: Where Buying and Selling Power Come Together.” Wisconsin Law Review 2008: 331, 336–37. DOJ. 1982. “Merger Guidelines.” http://www.justice.gov/sites/default/files /atr/ legacy/2007/07/11/11248.pdf. DOJ. 1984. “Merger Guidelines.” http://www.justice.gov/sites/default/files/atr/ legacy/2007/07/11/11249.pdf. D’Onfro, Jillian. 2015. “Here Are All of Amazon’s Warehouses in the US.” Business Insider, March 24, 2015. http://www.businessinsider.com/how-many- fulfillment-centers-does-amazon -have-in-the-us-2015-3#ixzz3f3AX8zda.
[ 124 ] Economy
Eisner, Marc Allen. 1991. Antitrust and the Triumph of Economics: Institutions, Expertise, and Policy Change. Chapel Hill: University of North Carolina Press, 107. Elhauge, Einer. 2009. “Tying, Bundled Discounts, and the Death of the Single Monopoly Profit Theorem.” Harvard Law Review 123: 397, 466–67. Elmer-DeWitt, Philip. 2015. “This Is What Drives Apple Investors Nuts about Amazon.” Fortune, July 24, 2015. http://fortune.com/2015/07/24 / apple-amazon-profits. Enright, Allison. 2016. “Amazon Sales Climb 22% in Q4 and 20% in 2015.” Internet Retailer, January 28, 2016. http://www.internetretailer.com/2016/01/28/ amazon-sales-climb-22 -q4-and-20-2015. FCC. 2017. “Open Internet.” Federal Communications Commission. Accessed September 18, 2017. http://www.fcc.gov/general/open-internet. Ferdman, Roberto A. 2013. “Amazon Changes Its Prices More than 2.5 Million Times a Day.” Quartz, December 14, 2013. http://qz.com/157828/amazon-changes- its-prices-more-than-2-5-million -times-a-day. Frommer, Dan. 2015. “Half of US Households Could Have Amazon Prime by 2020.” Quartz, February 26, 2015. http://qz.com/351726/half-of-us-households- could-have-amazon-prime-by-2020; http://perma.cc /ZW4Z-47UY. Garcia, Tonya. 2016. “Amazon Accounted for 60% of U.S. Online Sales Growth in 2015.” Marketwatch, May 3, 2016. http://www.marketwatch.com/story/ama zon-accounted-for-60-of-online-sales-growth-in-2015-2016-05-03. Halpin, Nancee. 2015. “Third-Party Merchants Account for More Than Three-Quarters of Items Sold on Amazon.” Business Insider, October 16, 2015. http://www. businessinsider.com/third -party-merchants-drive-amazon-grow-2015-10. Hoffmann, Melissa. 2014. “Amazon Has the Best Consumer Perception of Any Brand.” Adweek, July 16, 2014. http://www.adweek.com/news/advertising-branding/ amazon -has-best-consumer-perception-any-brand-158945. Kachka, Boris. 2013. “Book Publishing’s Big Gamble.” New York Times, July 9, 2013. 
http://www.nytimes.com/2013/07/10/opinion/book-publishings-big-gamble. html. Kawamoto, Dawn. 2005. “Amazon Unveils Flat-Fee Shipping.” CNET, February 2, 2005. http://www.cnet .com/news/amazon-unveils-flat-fee-shipping. Kennedy, Randy. 2005. “Cash Up Front.” New York Times, June 5, 2005. http://www. nytimes.com /2005/06/05/books/review/cash-up-front.html. Khan, Lina. 2014. “Why You Might Pay More than Your Neighbor for the Same Bottle of Salad Dressing.” Quartz, January 19, 2014. http://qz.com/168314/why-you- might-pay-more-than-your -neighbor-for-the-same-bottle-of-salad-dressing. Kirkwood, John. 2012. “Powerful Buyers and Merger Enforcement.” Boston University Law Review 92: 1485–544. Krantz, Matt. 2015. “Amazon Breaks Barrier: Now Most Costly Stock.” USA Today, November 11, 2015. http://www.usatoday.com/story/money/markets/2015/ 11/11/amazon-pe-ratio -valuation-price/75519460. Krugman, Paul. 2014. “Amazon’s Monopsony Is Not O.K.” New York Times, October 19, 2014. http://www.nytimes.com/2014/10/20/opinion/ paul-krugman-amazons-monopsony-is-not-ok.html. Kucera, Daniella. 2013. “Why Amazon Is on a Building Spree.” Bloomberg, August 29, 2013. http://www.bloomberg.com/bw/articles/2013-08-29/why-amazon-is -on-a-warehouse-building-spree.
Amazon
[ 125 ]
261
LaVecchia, Olivia, and Stacy Mitchell. 2016. “Amazon’s Stranglehold: How the Company’s Tightening Grip Is Stifling Competition, Eroding Jobs, and Threatening Communities.” Institute for Local Self-Reliance, November, 10, 2016. http://ilsr.org/wp-content/uploads/2016/11/ILSR_Amazon Report_ final.pdf. Leonard, Devin. 2016. “Will Amazon Kill FedEx?” Bloomberg, August 31, 2016. http:// www.bloomberg.com/features/2016-amazon-delivery. Leslie, Christopher R. 2013. “Predatory Pricing and Recoupment.” Columbia Law Review 113: 1695–702. Lewis, Robin. 2016. “Amazon’s Shipping Ambitions Are Larger than It’s Letting On.” Forbes, April 1, 2016. http://www.forbes.com/sites/robinlewis/2016/04/01/ planes-trains-trucks -and-ships/#260c3aa1408c. Loten, Angus, and Adam Janofsky. 2015. “Sellers Need Amazon, but at What Cost?” Wall Street Journal, January 14, 2015. http://www.wsj.com/articles/ sellers-need-amazon-but-at-what-cost-1421278220. Mangalindan, J. P. 2012. “Amazon’s Prime and Punishment.” Fortune, February 21, 2012. http://fortune.com/2012/02/21/amazons-prime-and-punishment. Manjoo, Farhad. 2015. “How Amazon’s Long Game Yielded a Retail Juggernaut.” New York Times, November 18, 2015. http://www.nytimes.com/2015/11/19/ technology/how-amazons -long-game-yielded-a-retail-juggernaut.html. Manjoo, Farhad. 2016. “Tech’s ‘Frightful 5’ Will Dominate Digital Life for Foreseeable Future.” New York Times, January 20, 2016. http://www.nytimes.com/2016/ 01/21/technology/techs-frightful -5-will-dominate-digital-life-for-foreseeable- future.html. Manjoo, Farhad. 2016. “Think Amazon’s Drone Idea Is a Gimmick? Think Again.” New York Times, August 10, 2016. http://www.nytimes.com/2016/08/11/technology/think-amazons- drone-delivery -idea-is-a-gimmick-think-again.html. Matsushita Elec. Indus. Co. v. Zenith Radio Corp., 475 U.S. 574, 604 (1986) (White, J., dissenting). Mitchell, Will. N.d. “How to Rank Your Products on Amazon—The Ultimate Guide.” StartUpBros. http://startupbros.com/rank-amazon. 
Moore, Sam. 2015. “Amazon Commands Nearly Half of Consumers’ First Product Search.” BloomReach, October 6, 2015. http://bloomreach.com/2015/10/ amazon-commands-nearly-half-of -consumers-first-product-search. Mulpuru, Sucharita, and Brian K. Walker. 2012. “Why Amazon Matters Now More Than Ever.” Forrester, July 26, 2012. O’Connor, Clare. 2015. “Walmart and Target Being Crowded Out Online by Amazon Prime.” Forbes, April 6, 2015. http://www.forbes.com/sites/clareoconnor/2015/ 04/06 /walmart-and-target-being-crowded-out-online-by-amazon-prime. Orbach, Barak. 2013. “Foreword: Antitrust’s Pursuit of Purpose.” Fordham Law Review 81: 2151–52. Packer, George. 2014. “Cheap Words.” New Yorker, February 17, 2014. http://www. newyorker.com/magazine/2014/02/17/cheap-words Pasquale, Frank. 2008. “Internet Nondiscrimination Principles: Commercial Ethics for Carriers and Search Engines.” University of Chicago Legal Forum 2008: 263. Posner, Richard A. 1979. “The Chicago School of Antitrust Analysis.” University of Pennsylvania Law Review 127: 925–32. Quick Pen. 2015. “What’s Driving the Amazon Stock Up Despite 188% Full Year Income Drop?” Guru Focus, February 8, 2015. http://www.
[ 126 ] Economy
gurufocus.com/news/315124/whats-driving-the-amazon-stock-up -despite-188-full-year-income-drop. Rahman, K. Sabeel. 2015. “From Railroad to Uber: Curbing the New Corporate Power.” Boston Review, May 4, 2015. http://bostonreview.net/forum/k-sabeel- rahman-curbing-new-corporate -power. Rahman, K. Sabeel. 2016. “Private Power and Public Purpose: The Public Utility Concept and the Future of Corporate Law in the New Gilded Age, Address at the Association of American Law Schools’ 110th Annual Meeting.” January 8, 2016. Transcript on file with author. Rajan, Raghuram, and Luigi Zingales. 2003. Saving Capitalism from the Capitalists. Ray, Tiernan. 2016. “Amazon: All Retail’s SKUs Are Belong to Them, Goldman Tells CNBC,” Barrons: Tech Trader Daily, June 16, 2016. http://blogs.barrons.com/techtrader daily/2016/06/16/ amazon-all-retails-skus-are-belong-to-them-goldman-tells-cnbc Reiter v. Sonotone Corp., 442 U.S. 330, 343 (1979) US Supreme Court. Rubin, Ben Fox. 2015. “As Amazon Marks 20 Years, Prime Grows to 44 Million Members in US.” CNET, July 15, 2015. http://www.cnet.com/news/amazon- prime-grows-to-estimated -44-million-members-in-us. Rubin, Chad. 2016. “The Evolution of Amazon Prime and Their Followed Success.” Skubana, March 31, 2016. http://www.skubana.com/e-commerce-trends/ evolution-of-amazon-prime. Scherer, F. M. 2007. “Conservative Economics and Antitrust: A Variety of Influences.” In How the Chicago School Overshot the Mark, edited by Robert Pitofsky, 30–33. Oxford: Oxford University Press Seetharaman, Deepa, and Nathan Layne. 2015. “Free Delivery Creates Holiday Boon for U.S. Consumers at High Cost.” Reuters, January 2, 2015. http://www.reuters.com/article/us-retail -shipping-holidays-analysis-idUSKBN0KB0P720150102. Slate: Moneybox. 2000. “Amazon: Ponzi Scheme or Wal-Mart of the Web?” February 8, 2000. http://www.slate.com/articles/business/moneybox/2000/02/amazon_ ponzi_scheme _or_walmart_of_the_web.html. Sokol, D. Daniel. 2014. 
“The Transformation of Vertical Restraints: Per Se Illegality, the Rule of Reason, and Per Se Legality.” Antitrust Law Journal 79: 1003, 1014–15. Soper, Spencer. 2015. “EBay Ends Same-Day Delivery in U.S. in Face of Amazon Effort.” Bloomberg, July 27, 2015. http://www.bloomberg.com/news/articles/ 2015-07-27/ebay -ends-same-day-delivery-in-u-s-in-face-of-amazon-effort. Soper, Spencer. 2015. “Inside Amazon’s Warehouse.” Morning Call, August 17, 2015. http://www.mcall.com/news /local/amazon/mc-allentown-amazon- complaints-20110917-story.html. Soper, Spencer. 2016. “Got a Hot Seller on Amazon? Prepare for E-Tailer to Make One Too.” Bloomberg, April 20, 2016. http://www.bloomberg.com/news/articles/ 2016 -04-20/got-a-hot-seller-on-amazon-prepare-for-e-tailer-to-make-one-too. Stevens, Laura. 2016. “ ‘Free’ Shipping Crowds Out Small Retailers.” Wall Street Journal, April 27, 2016. http://www.wsj.com/articles/for-online-shoppers-free- shipping-reigns -supreme-1461789381. Stevens, Laura, and Greg Bensinger. 2015. “Amazon Seeks to Ease Ties with UPS.” Wall Street Journal, December 22, 2015. http://www.wsj.com/articles/amazon- seeks-to-ease-ties-with-ups -1450835575. Stewart, James B. 2014. “Booksellers Score Some Points in Amazon’s Spat with Hachette.” New York Times, June 20, 2014. http://www.nytimes.com/2014/
Amazon
[ 127 ]
2 8 1
06/21/business/booksellers-score-some -points-in-amazons-standoff-with- hachette.html. Stone, Brad. 2010. “What’s in Amazon’s Box? Instant Gratification.” Bloomberg Businessweek, November 24, 2010. http://www.bloomberg.com/news/articles/ 2010-11 -24/whats-in-amazons-box-instant-gratification; Strauss, Karsten. 2016. “America’s Most Reputable Companies, 2016: Amazon Tops the List.” Forbes, March 29, 2016. http://www.forbes.com/sites/karstenstrauss/2016/03/29 / americas-most-reputable-companies-2016-amazon-tops-the-list. Streitfeld, David. 2013. “As Competition Wanes, Amazon Cuts Back Discounts.” New York Times, July 4, 2013. http://www.nytimes.com/2013/07/05/business/ as-competition-wanes-amazon -cuts-back-its-discounts.html. Streitfeld, David. 2013. “A New Book Portrays Amazon as Bully.” New York Times, Bits Blog, October, 22, 2013. http://bits.blogs.nytimes.com/2013/10/22/a-new- book-portrays-amazon -as-bully. Streitfeld, David. 2015. “Accusing Amazon of Antitrust Violations, Authors and Booksellers Demand Inquiry.” New York Times, July 13, 2015. http://www. nytimes.com/2015/07/14/technology/accusing-amazon-of-antitrust- violations-authors-and -booksellers-demand-us-inquiry.html. Streitfeld, David. 2015. “Amazon Reports Unexpected Profit, and Stock Soars.” New York Times, July 23, 2015. http://www.nytimes.com/2015/07/24/ technology/amazon-earnings-q2.html Trujillo, Timothy J. 1994. “Note, Predatory Pricing Standards Under Recent Supreme Court Decisions and Their Failure to Recognize Strategic Behavior as a Barrier to Entry.” Journal of Corporation Law 19: 809–13, 825. Turner, Donald F., and Carl Kaysen. 1959. Antitrust Policy: An Economic and Legal Analysis. Cambridge: Harvard University Press. Tuttle, Brad. 2010. “How Amazon Gets You To Stop Shopping Anywhere Else.” Time, December 1, 2010. http://business.time.com/2010/12/01/how-amazon-gets- you-to-stop-shopping-any where-else. Tuttle, Brad. 2016. 
“How Amazon Prime Is Crushing the Competition.” Time, January 25, 2016. http://time.com/money/4192528/ amazon-prime-subscribers-spending. Valentino-Devries, Jennifer, Jeremy Singer-Vine, and Ashkan Soltani. 2012. “Websites Vary Prices, Deals Based on User Information.” Wall Street Journal, December 24, 2012. http://www.wsj.com/articles / SB10001424127887323777204578189391813881534. Vara, Vauhini. 2015. “Is Amazon Creating a Cultural Monopoly?” New Yorker, August 23, 2015. http://www.newyorker.com/business/currency/ is-amazon-creating-a-cultural-mono poly. Wahba, Phil. 2015. “This Chart Shows Just How Dominant Amazon Is.” Fortune, November 6, 2015. http://fortune.com/2015/11/06/ amazon-retailers-ecommerce. Weise, Elizabeth. 2015. “Amazon Prime Is Big, but How Big?” USA Today, February 3, 2015. http://www.usatoday.com/story/tech/2015/02/03/amazon-prime-10- years-old -anniversary/22755509. Whitney, Lance. 2014. “Amazon Prime Members Will Renew Despite Price Hike, Survey Finds.” CNET, July 23, 2014. http://www.cnet.com/news/amazon- prime-members-will-almost-all -renew-despite-price-increase.
[ 128 ] Economy
Wingfield, Nick. 2016. “Amazon’s Cloud Business Lifts Its Profit to a Record.” New York Times, April 28, 2016. http://www.nytimes.com/2016/04/29/ technology/amazon -q1-earnings.html. Wingfield, Nick, and Ravi Somaiya. 2015. “Amazon Spars with the Times over Investigative Article.” New York Times, October 19, 2015. http://www.nytimes. com/2015/10 /20/business/amazon-spars-with-the-times-over-investigative- article.html. Wohlsen, Marcus. 2014. “Amazon Takes a Big Step towards Finally Making Its Own Deliveries.” Wired, September 25, 2014.http://www.wired.com/2014/09/ amazon -takes-big-step-toward-competing-directly-ups. Woo, Stu. 2011. “Amazon ‘Primes’ Pump for Loyalty.” Wall Street Journal, November 14, 2011. http://www.wsj.com/articles/SB10001424052970203503204577036 102353359784. Wu, Tim. 2010. The Master Switch: The Rise and Fall of Information Empires. London: Atlantic Books Yglesias, Matthew. 2013. “Amazon Profits Fall 45 Percent, Still the Most Amazing Company in the World.” Slate: Moneybox, January 29, 2013. http://www.slate. com/blogs/moneybox /2013/01/29/amazon_q4_profits_fall_45 _percent.html.
Amazon
[ 129 ]
301
SECTION 2
Society
CHAPTER 5
Platform Reliance, Information Intermediaries, and News Diversity A Look at the Evidence NIC NEWMAN AND RICHARD FLETCHER
In this chapter we draw on research from our surveys of news consumption (Digital News Report 2012–17), as well as qualitative research and passive tracking, to understand how much people use digital intermediaries, why they use them, and the extent to which this is exposing them to more or less varied news.1 The perceived influence of platforms, in particular Facebook and Google, has never been greater, as people worry about misinformation, polarization, filter bubbles, echo chambers, and the erosion of the shared news agenda, to name just a few concerns (see, e.g., Pariser 2011; Sunstein 2007; Berry and Sobieraj 2014). We show how many of these platforms have become more important for news discovery and consumption, but our findings do not support the perception that, by giving us more of what we want, intermediaries are leading to a narrower news diet. Rather, we find that for most people both search and social media tend to increase the number and variety of sources accessed when compared with those who access websites directly. One important caveat is that the news brand is often not remembered by users when content is accessed from social media or search. Instead it is the platform that takes most of the credit. This may have implications for news brands' ability both to make money and to build deep and lasting relationships with consumers.

1. See www.digitalnewsreport.org for full details of the Digital News Report project, including a description of the methodology, additional data and findings from the 2017 report (Newman et al. 2017), and electronic copies of reports from previous years.
THE GROWTH OF PLATFORMS
Since 2012 we have been tracking the growth of platforms (social media, search, and aggregators) for news across multiple countries as part of a wider online news survey conducted by YouGov. We started in 2012 with just 5 countries, but by 2017 this had grown to 36 markets, with representative samples of around 2,000 respondents in almost all of them. Within these wider categories we have also tracked the usage of specific brands such as Facebook, Google News, and Apple News. First we look at the consumption of each of these categories; later we explore the wider implications.
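To give a sense of what country samples of around 2,000 respondents can resolve, here is a back-of-envelope sampling-error calculation. This is our illustration, not part of the survey's published methodology, and it assumes simple random sampling, which online panel samples only approximate; it nonetheless shows why single-country percentages in this chapter should be read as accurate to roughly two percentage points:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error, in percentage points, for a sample
    proportion p from a simple random sample of size n."""
    return 100 * z * math.sqrt(p * (1 - p) / n)

# Worst case (p = 0.5) for a typical country sample of ~2,000 respondents.
print(round(margin_of_error(0.5, 2000), 1))  # → 2.2
```

In other words, a reported difference of a point or two between countries, or between years, is within sampling noise, while the larger shifts discussed below (for example, a doubling of social media news use) are not.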
SOCIAL MEDIA
The use of social media for news has doubled in most countries that we have been tracking since 2013. In the United States, for example, it has grown from 27% of the online population to 51% in five years, and in the UK from 20% to 41%. There are wide variations in the importance of social media for news, from Chile (76%) to Germany and Japan (29%) (Figure 5.1).

Figure 5.1: Social media as a source of news in selected countries (2013–2017). Note: Q3. Which, if any, of the following have you used in the last week as a source of news? Please select all that apply. Base: Total sample 2013–2017 in each country.
[ 134 ] Society
Social media use for news peaked in some countries in 2016 and has started to fall. This is partly because Facebook, the main provider, changed its algorithms to favor friends-and-family content (Isaac and Ember 2016), but also because of the rise of messaging applications, which are widely used in Southern Europe, Asia, and Latin America for discovering and sharing news. Within social media and messaging, Facebook holds a dominant position, with around three-quarters (70%) using it for any purpose and around half for news (47%) (Figure 5.2). Corporately it owns Instagram and WhatsApp, as well as operating Facebook and Facebook Messenger. This means that, overall, 8 in 10 (80%) touch a Facebook product at least once a week (54% for news). Japan is the only country in our survey where Facebook is not the leading social network for news; there it is beaten into fourth place by YouTube, Line, and Twitter. It is worth noting that Google-owned YouTube is popular in general, but only around a third use it for news each week.

Figure 5.2: Top social networks and messaging applications across all markets. Note: Q12a/b. Which, if any, of the following have you used for any purpose/for news in the last week? Please select all that apply. Base: Total sample: All markets = 71,805.

Facebook's dominance is even more pronounced among younger age groups, as is their use of social media in general as a source of news. A third of 18–24s say social media is their main source of news, more than TV news and newspapers put together, and for most people that means Facebook. Main source, however, does not mean only source of news. When it comes to dominance, we should remember that most people, young and old, combine a range of different sources and platforms for news. Looking at 2017 data, we can see that two-thirds of social media news users in the United States also watch television news (67%) and two-thirds also visit mainstream websites or apps (66%), a little more than the general population. Just 2% use only social media for news in an average week.
SEARCH AND AGGREGATORS
Unlike social media, search usage for news has not increased significantly in recent years. Across the 10 countries we have been tracking since 2014,2 weekly usage has actually declined from 44% to 38%. We can suppose this is partly due to the rise of social media in the same period, but also to the growth of other ways of getting to content. The ways people find news stories have become more diverse, with the growth of mobile aggregators, the rebirth of e-mail, and the rise of mobile alerts. In order to understand the relative importance of these different intermediaries, we can look at the preferred ways of getting to content across our full sample of around 70,000 respondents (Figure 5.3). Here we ask respondents to state their main way of coming across content, and overall we find that destination websites and apps (direct access, 32%) remain slightly ahead of search (25%) and social media (23%), with e-mail (6%), mobile alerts (5%), and aggregators (5%) further behind. But if we add together preferences for all other routes, around two-thirds of web users (65%) now prefer to use a side door of some kind, rising to around three-quarters (73%) for under-35s.

Figure 5.3: Preferred gateways to news content in all markets. Note: Q10a. Which of these was the main way in which you came across news in the last week? Base: All who used a news gateway in the last week: All markets = 66,230.
2. United States, UK, Germany, France, Spain, Italy, Japan, Denmark, Finland, Brazil.
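The side-door share quoted above is simply the sum of the non-direct gateway preferences. A minimal sketch, using the rounded percentages read off Figure 5.3 (rounding is why the total lands just under the 65% quoted in the text):

```python
# Main gateway preferences across all markets, read off Figure 5.3 (rounded %).
gateways = {
    "direct": 32,
    "search": 25,
    "social media": 23,
    "e-mail": 6,
    "mobile alerts": 5,
    "aggregators": 5,
}

# A "side door" is any route other than going directly to a news website or app.
side_door = sum(pct for route, pct in gateways.items() if route != "direct")
print(f"side-door share: {side_door}%")  # → side-door share: 64%
```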
Table 5.1. SELECTED TOP MARKETS FOR EACH GATEWAY

Direct                Social                Search
Finland: 67%          Chile: 64%            Poland: 62%
Norway: 67%           Argentina: 58%        Turkey: 62%
Sweden: 59%           Hungary: 58%          Korea: 60%
UK: 54%               Romania: 55%          Czech Republic: 59%

Aggregators           E-mail                Mobile Alerts
Japan: 40%            Belgium: 34%          Taiwan: 32%
Korea: 37%            Portugal: 30%         Hong Kong: 27%
Taiwan: 31%           US: 23%               Turkey: 24%
Hong Kong: 25%        France: 22%           Sweden: 22%

Note: Q10. Thinking about how you got news online (via computer, mobile, or any device) in the last week, which were the ways in which you came across news stories? Please select all that apply. Base: Total sample in each market. Showing selected markets with high percentage weekly usage of each pathway to news.
It should be noted that search traffic is a mixture of keyword queries for news stories and navigational shortcuts to a website (searches for the name of a branded news site). With around half of all search queries falling into this second category, the preference for brands is probably a little higher than Figure 5.3 suggests. At the same time, we also find that behind these averages there are very different country-based preferences for how people discover and access news (Table 5.1). Audiences in Scandinavia and the UK are more likely to go directly to a website or app. Here, strong commercial and public service brands have built and marketed powerful news destinations (Cornia, Sehl, and Nielsen 2016; Sehl, Cornia, and Nielsen 2016). In some cases up to 80% of traffic still flows directly to brands in these markets, and the role of intermediaries is much less important.3 By contrast, in southern and central Europe and in Latin America, social media and search are a much bigger factor in news discovery and consumption. Many Asian markets have a distinctive model in which individual media brands are often subsumed within giant portals. Yahoo plays this role in Japan, as do Naver and Daum in South Korea, where aggregators pay content providers an undisclosed amount (believed to be around $30m a year) for content published through their platforms. Elsewhere, e-mail has made a comeback in many markets on the back of more personalized technology and mobile delivery, while mobile alerts and notifications are the fastest-growing route to content. Here the gateways are almost exclusively controlled by Apple (iOS) and Google (Android). Notifications are most popular in Taiwan (32%) and Hong Kong (27%), as well as Sweden (22%). They grew strongly between 2016 and 2017 in the United States (20%), where publishers have been investing heavily in driving more content to the lockscreen.

In terms of search specifically, our surveys do not ask how much of this search traffic is attributable to Google, but our passive tracking data in the UK (collected by YouGov Pulse) show that 93% of search referrals to news sites came from a Google domain, with 3% from Microsoft's Bing and 4% from other or unknown sources. One significant development in recent years has been the introduction in 2016 of Google-hosted news stories within the main mobile search results. The resulting Accelerated Mobile Pages (AMP) are hosted by Google partly to ensure fast downloads, but, as with Apple News, the branding is managed by the publishers themselves. Some participating news brands report a significant proportion of mobile traffic now coming from AMP pages. In addition, Google also operates a news aggregator website (Google News), which again is much more popular in some countries than in others. Weekly reach ranges widely from 30% in Mexico and 28% in Greece to just 5% in the UK, 4% in Finland, and 3% in Denmark. Once again, we can see that there is a small subset of countries that is much less affected than others by digital intermediaries.

3. Personal conversations with news executives in these countries. Our passive tracking in the UK also shows the BBC with over 80% direct traffic.
THE IMPACT OF MOBILE AND THE RISE OF MOBILE AGGREGATION
Our research shows that the move to mobile goes hand in hand with the growth of social media and other forms of aggregation. In many countries mobile news consumption has overtaken that from computers (desktop and laptops), while tablets are broadly stable or falling. Over half of our global sample (56%) accesses news on a smartphone weekly. On a mobile phone in particular, where it can be difficult to move quickly between multiple apps and websites, the convenience of a one-stop-shop can be compelling. Facebook and Twitter play this role for some but other mobile-specific aggregators are becoming more relevant. These include
[ 138 ] Society
Personalised alerts US: 19% of iPhone users - 11% in 2016 AUS: 18% of iPhone users - 8% in 2016 UK: 15% of iPhone users - 8% in 2016
Spotlight
Apple news story
Figure 5.4: How Apple is growing users to its iPhone news app in the United States, Australia, and the UK. Note: Q10c 2016. When using the Internet for news, have you used any of the following sites or mobile apps that aggregate different news links in the last week? Please select all that apply. Base: Apple iPhone users: US = 748, Australia = 708, UK = 660.
standalone products (e.g., Flipboard and SmartNews), as well as news aggregators that are designed to be part of a wider service (e.g., Apple News and Snapchat Discover). This second group—which are both destinations in their own right and allow content to hook into established ecosystems— have shown strong growth between 2015 and 2017. Another story of growth comes from Snapchat’s Discover portal, which offers publishers like Le Monde, CNN, and the Wall Street Journal the opportunity to reach a younger audience. Snapchat Discover has been available in the United States, UK, and Australia, with the first non-English speaking versions rolling out in France in September 2016, and Norway and Germany in early 2017. Our data show increased traffic between 2016 and 2017 among the much-prized 18–24s target audience. This has been driven by more prominent placement in the app and allowing users to subscribe directly to Discover content from individual publishers. Apple News has been one of the biggest gainers in recent years following the release of the Spotlight news feed and the ability to subscribe to rich- media mobile alerts for favorite publishers. These two features together seem to have supercharged usage, with a number of publishers reporting in 2017 that up to a third of their mobile traffic comes from the app or the related Spotlight news widget. The Apple News app is only available in the United States, UK, and Australia, where our survey data suggest it is used by around a quarter of iPhone users,4 but the Spotlight feature is available in many more countries (Figure 5.4). As the smartphone becomes more central to consumption, the lockscreen and notification centers built into operating systems are becoming a more important way of surfacing news stories. Both Apple (iOS) and Google (Android) have been developing alerts and other functionality to
4. This equates to between 8% and 12% of smartphone users in those countries.
Platforms, and News Diversity
[ 139 ]
Table 5.2. GOOGLE AND APPLE OPERATING SYSTEMS DOMINATE SMARTPHONE USAGE ACROSS SELECTED REGIONS

                   North America   European Union   Asia   Latin America   All countries
Apple (iOS)             32%             21%          30%        23%             25%
Google (Android)        46%             58%          57%        66%             57%
Other                    5%              9%           9%         8%              9%

Note: Q8a. Which, if any, of the following devices do you ever use (for any purpose)? Base: Total sample in each region.
surface and recommend content to users. Mobile news notifications have been growing strongly in many countries over the last few years, and it is entirely possible that in a few years these platforms will become every bit as important as social media and search for news discovery and consumption (Table 5.2).

One further important development is the emergence of new platforms for news consumption, such as voice-controlled digital assistants (the Amazon Echo and Google Home) and wearables (the Apple Watch and Galaxy Gear). It is still early days for all of these gateways, but they carry the potential for further disruption. Although by 2017 voice-activated speakers were only available in a small number of countries, our data showed they were already making an impact. In the United States 4% of our sample use a voice-activated speaker—half of them for news (Table 5.3). As these devices become more widely available, they could disrupt both the smartphone and the radio itself. They also help establish Amazon’s role as the fourth major platform player in the news market.
Table 5.3. EMERGING DEVICES FOR NEWS IN THE UNITED STATES, THE UK, AND GERMANY

              Voice-activated speaker           Smartwatch
              Any purpose    For news     Any purpose    For news
US                4%            2%            3%            1%
UK                2%            1%            2%           < 1%
Germany           1%          < 1%            3%            1%

Note: Q8a/b. Which, if any, of the following devices do you ever use (for any purpose)/for news in the last week? Please select all that apply. Base: Total sample: US = 2,269, UK = 2,112, Germany = 2,062.
[ 140 ] Society
Overall, this consumption data shows a complex picture in which it is hard to get a sense of dominance by any one company. If you put together all the devices, operating systems, and products, then Google and Facebook each reach at least 80% of the market in most countries. On the other hand, as we have shown, most people use multiple routes to content, while a significant minority go directly to websites and apps as a matter of preference. We also see that, beyond Google and Facebook, competition is heating up for the attention of both consumers and publishers, with Snapchat, Apple, Samsung, and even Amazon becoming significant players in the news landscape.
REASONS FOR USING INTERMEDIARIES
We have both survey and focus group evidence about why many people are turning to social networks, search, and news aggregators for online news. From our survey data, the key reasons given relate to speed of update and convenience: social networks like Facebook, for example, are considered to do a good job of alerting people to news stories that they might otherwise miss (60%), of bringing multiple sources together in one place (50%), and of enabling people to discuss and debate the news with friends. Some focus group participants felt that social media brought a wider range of perspectives than was possible through the mainstream media, for example in carrying first-person accounts from refugee camps.

“In the refugee crisis I got a lot of my news through Facebook, blogs, videos from the camps.” 18–34, UK5
For those using news aggregators, focus group participants talked more about the choice and variety of news sources that were available. Many talked about the options to personalize news via aggregator apps: “I usually go through Apple News. It gets a variety of things like I’m interested in certain topics that I probably wouldn’t find or I’d have to search for it myself so it’s like a one stop shop of things that interest me.” 18–34, US
5. Focus groups were conducted in February 2016.
Looking at a range of focus group participants across countries, it is clear that consumers broadly welcome the convenience brought by digital intermediaries like Facebook and Google. For some, however, the new perspectives and greater choice needed to be balanced against the potential negative impact of greater personalization and algorithmic selection. Over half felt that key information (57%) or challenging viewpoints (55%) might be lost in an algorithmically driven filter bubble.

“Algorithms obviously are going to pick things that I wanna see, and that’s great, but at the same time it can get boring if I am only seeing things that I already know I like.” 18–34, US

“Is it a little bit claustrophobic in there? It’s just like you’re getting what you want. Maybe it’s nice to get things that you wouldn’t necessarily ask for?” 35–54, UK

On algorithms: “Whoever pays them the most to pick their story and put it forward is who’s going to get it. That’s my theory. . . . People got to make a living.” 35–54, US
IMPLICATIONS OF DIGITAL DOMINANCE FOR DIVERSITY
As we mentioned in our introduction, many commentators worry that platform dominance might have a negative effect on people’s news diets. But, at least when it comes to news use, in most cases this does not reflect how people report their actual experience. This is a complicated area for research, so in this section we go into a little more depth about how we arrived at our conclusions.

In 2017, we asked users of social media and news aggregators whether they agree (on a five-point scale ranging from “strongly disagree” to “strongly agree”) that they often see news from brands they would not normally use. Across all 36 markets, more people agree (social media: 36%; aggregators: 35%) with this statement than disagree (both: 27%), though many are clearly undecided (Figure 5.5). Similarly, more people agree (social media: 40%; aggregators: 35%) that they often see news that does not interest them than disagree (both: 27%) (Figure 5.6). This suggests that people are not only being served news that closely matches their preferences.6

We can also use the survey data to measure the number of news brands people use online, simply by asking them to specify which they have used

6. We also asked users of search engines whether they agree that their search results often contain news from outlets they would not normally use. Again, we see a similar pattern, with 42% agreeing, 20% disagreeing, and 37% neither agreeing nor disagreeing.
[Figure 5.5 chart data: Social media: Agree 36%, Neither 37%, Disagree 27%. Aggregators: Agree 35%, Neither 38%, Disagree 27%.]
Figure 5.5: Proportion that agree they often see news from brands they would not normally use across all markets. Note: Q12. Thinking about when you have used *social media for news*/*news aggregators* . . . Please indicate your level of agreement with the following statements.—I often see news from outlets that I would not normally use. Base: All who used social media or news aggregators for news in the last week: All markets = 48,551/28,441.
[Figure 5.6 chart data: Social media: Agree 40%, Neither 33%, Disagree 27%. Aggregators: Agree 35%, Neither 37%, Disagree 27%.]
Figure 5.6: Proportion that agree they often see news stories they are not interested in across all markets. Note: Q12. Thinking about when you have used *social media for news*/*news aggregators* . . . Please indicate your level of agreement with the following statements.—I often see news stories that I am not interested in. Base: All who used social media or news aggregators for news in the last week: All markets = 48,551/28,441.
in the last week from a list of around 30 of the most popular in each country, and counting the total. This cannot tell us about the consumption of news brands in the “long tail,” but can point to differences in the consumption of prominent outlets. If we compare the number of news brands used by those who use social media, search engines, and aggregators, and those who do not, we see that users of these platforms say they use more (Figure 5.7). For example, people who don’t use social media for news use 3.10 news brands on average per week, but the figure for social media news users is 4.34. This pattern is repeated in all 36 markets we covered in 2017, as well as for search engines and news aggregators. Furthermore, the link between the use of platforms and number of news sources used remains significant even after we use a series of Poisson regression models to control for other factors known to influence news consumption, such as age, gender, income, education, and interest in news. There are important limitations associated with asking people to recall their news consumption (see e.g., Prior 2009). However, the results of studies based on tracking data tell a similar story. Based on web browser data, Athey, Mobius, and Pal (2017) have convincingly shown that the closure of Google News in Spain in 2014 resulted in a 20% reduction in news consumption among its users.
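The count models described above can be sketched roughly as follows. This is an illustrative reconstruction only, not the authors' actual analysis code: the variable names (brands_used, uses_social, and the controls) and the simulated survey data are assumptions made for the sketch.

```python
# Illustrative sketch of a Poisson regression of the kind described in the
# text: predicting the number of online news brands used from platform use,
# while controlling for demographics and interest in news. The data are
# simulated; variable names are assumptions, not the survey's actual fields.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 2000
df = pd.DataFrame({
    "uses_social": rng.integers(0, 2, n),    # 1 = used social media for news
    "age": rng.integers(18, 80, n),
    "female": rng.integers(0, 2, n),
    "news_interest": rng.integers(1, 6, n),  # 1 = low interest, 5 = high
})

# Simulate a count outcome in which social media use raises the expected
# number of brands used (true log rate ratio = 0.3)
lam = np.exp(0.8 + 0.3 * df["uses_social"] + 0.1 * df["news_interest"]
             - 0.002 * df["age"])
df["brands_used"] = rng.poisson(lam)

# Poisson count model with controls, analogous to the chapter's approach
model = smf.poisson(
    "brands_used ~ uses_social + age + female + news_interest",
    data=df,
).fit(disp=0)

print(model.params["uses_social"])           # estimated log rate ratio
print(np.exp(model.params["uses_social"]))   # rate ratio; > 1 means platform
                                             # users report more news brands
```

A rate ratio above 1 for the platform-use term, after conditioning on age, gender, income proxies, and news interest, is the pattern the chapter reports: platform users name more online news brands than nonusers.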
[Figure 5.7 chart data: used social media for news: No 3.10, Yes 4.34; used search engines for news: No 3.39, Yes 5.27; used news aggregators for news: No 3.21, Yes 4.85.]
Figure 5.7: Average number of online news brands used in the last week across all markets. Note: Q5B. Which of the following brands have you used to access news *online* in the last week? Please select all that apply. Q10. Thinking about how you got news online (via computer, mobile, or any device) in the last week, which were the ways in which you came across news stories? Please select all that apply. Base: Used/did not use social media for news/search engines for news/news aggregators in the last week: All markets = 28,557/43,238 16,893/54,902 7799/63,997.
What might explain these differences? For those highly interested in the news, it is easy to see how social media, search engines, and aggregators can help them to consume more news from a wider array of brands. But at the same time, this also implies that the increased choice offered by the web means that those less interested in the news always have the option of consuming something else, allowing them to largely opt out of news coverage, and potentially increasing knowledge gaps between different groups (Prior 2005).

One possible explanation is that platforms are reintroducing incidental exposure to news. Incidental exposure simply refers to exposure to news content while intending to do something else. In 20th-century mass media environments, incidental exposure to news was common. Even as far back as the 1940s, media scholars knew that people read newspapers for reasons that had little to do with a desire to consume news (Berelson 1949). Later, television would provide an even more powerful means of incidental exposure to news (Neuman, Just, and Crigler 1992), particularly as broadcasters in some countries deliberately scheduled news at peak viewing times in order to increase exposure (Esser et al. 2012). In contrast to going directly to a website or app, platforms have the potential to reintroduce incidental exposure because they rely on algorithms to make automated display decisions, taking away some of the control from the user.

To test the effect of incidental exposure on social media, we asked users of Facebook, YouTube, and Twitter whether they (1) think of it as a useful source of news, (2) see news when they are using it for other reasons, or (3) do not use it for news (Figure 5.8). Those in the first category are intentional news users, whereas those in the other two have the potential to be incidentally exposed.
The rest are nonusers who cannot experience incidental exposure.7 At a glance, we can see that 19% across all 36 markets intentionally use Facebook for news, compared to 10% for YouTube, and just 6% for Twitter. Indeed, almost as many people use Facebook for news as use Twitter for any purpose (20%). As before, we can compare the average number of news sources used by the incidentally exposed and by nonusers to see whether using social media for other purposes nonetheless affects news consumption. When

7. An alternative approach would be to categorize as incidentally exposed only those who said that they see news when using the network for other reasons. However, we chose the more cautious approach of also including those who said they do not use it for news at all, in order to increase our confidence in any remaining differences between nonusers and the incidentally exposed, given that including them would likely lower the average number of online news brands used by the incidentally exposed.
[Figure 5.8 chart data: proportion regarding each network as "a useful source of news": Facebook 19%, YouTube 10%, Twitter 6%. The remaining bars show, for each network, the proportions who see news when using it for other reasons and who do not use it for news.]
Figure 5.8: How people use different social networks to get the news across all markets. Note: Q12. You say you use Facebook/YouTube/Twitter for news. Which of the following statements applies best to you? Base: Total sample: All markets = 71,805. We did not ask about use of Twitter in South Korea.
we do this, we can see that on average those incidentally exposed on all three networks use more sources of news than nonusers. Again, when we build Poisson regression models for each market, we see that incidental exposure on YouTube and Twitter leads to significant increases in the number of online news sources used in every market, and in half of all markets on Facebook (again, while also controlling for age, gender, income, education, and interest in news).8 This is broadly in line with previous studies that have shown that incidental exposure to political content on social media increases the likelihood of political participation (Valeriani and Vaccari 2016). We can also see that those who intentionally use each network for news use more brands than the incidentally exposed, and that those who intentionally use Twitter and YouTube for news use more news sources than those who use Facebook (Figure 5.9). This makes sense in light of previous research showing that those motivated by information seeking are more likely to prefer asymmetrical networks like Twitter and YouTube, while those motivated by relationship formation are more likely to prefer symmetrical networks like Facebook (Kim and Lee 2016).9
8. The countries where the relationship was significant were the UK, United States, Germany, Portugal, Finland, Belgium, Netherlands, Austria, Czech Republic, Poland, Turkey, Japan, Korea, Taiwan, Hong Kong, Malaysia, Singapore, and Argentina.

9. Facebook is classed as a symmetrical network because relationships between users require mutual consent. Twitter and YouTube are asymmetrical networks because in most cases relationships between users can be one-way.
[Figure 5.9 chart data: for each network, intentional news users use the most online news brands, followed by the incidentally exposed, then nonusers; the averages range from 2.62 to 6.27 brands per week.]
Figure 5.9: Average number of online news sources used in the last week across all countries. Note: Q12. You say you use Facebook/YouTube/Twitter for news. Which of the following statements applies best to you? Q5B. Which of the following brands have you used to access news **online** in the last week? Please select all that apply. Base: Facebook/YouTube/Twitter Nonusers/Incidentally exposed/News users: All markets = 21,571/36,594/13,640 27,878/36,448/7479 57,651/9797/4357. We did not ask about use of Twitter in South Korea.
ATTRIBUTION IN DISTRIBUTED ENVIRONMENTS
A further implication of the shift to distributed consumption through search, social, and aggregators relates to the level of attribution and recognition for news brands. In our 2016 survey, respondents suggested that it is often the platform rather than the publisher that gets the credit for the news, while focus group respondents talked about how Twitter and Facebook had broken some of the most important stories of the year—even though they create no content. Our data showed quite wide differences in terms of attribution, with relatively strong brand recognition on social media in countries like Finland (60%) and Germany (55%), while only around a third said they noticed the brand in social media (most or all of the time) in highly competitive English-speaking markets such as the UK, Australia, and Ireland. When using an aggregator, consumers in some countries said they were even less likely to notice a specific brand. In Korea and Japan, where aggregated news sites are the norm, only around a quarter say they always or mostly notice the brand (Figure 5.10). This data suggests that, if the rest of the world becomes more like Japan and Korea, with dominant intermediaries playing a bigger role, news brands may increasingly struggle to gain recognition, and much of the credit for the content will accrue to the platform.
[Figure 5.10 chart data: brand recognition via social media is highest in Finland (60%) and Germany (55%), and around a third in Australia (36%) and Ireland (33%); recognition via aggregators falls to roughly a quarter in Japan and Korea.]
Figure 5.10: Proportion who notice news brands via social media and aggregators in selected countries. Note: Q10. Thinking about when you have used social media/aggregators for news, typically how often do you notice the news brand that has supplied the content? Base: All who used social media/aggregators as a source of news in the last week: Germany = 616/292, Denmark = 1126/150, USA = 1018/478, Canada = 947/741, Japan = 552/936, UK = 693/241, Korea = 700/828.
To understand this further, we conducted a more detailed attribution study over a period of a month (April 2017) with UK desktop users. Using a YouGov panel, we passively tracked usage by a representative sample of UK users, and then served them surveys (3,000 in total) to see what respondents could remember about where they found a story (the path) and the source of the content (the news brand itself). This study added further detail to the overall picture. It confirmed our survey evidence that consumption is split between direct traffic, search, and social. However, it also showed that some brands were much less dependent on search and social than others. The BBC, for example, received around 80% of the traffic to its stories directly, with 20% from search and social. By contrast, the new political start-up the Canary received most of its traffic via social media.

In terms of attribution, we found that roughly two-thirds remembered the path through which they found the news story (Facebook, Google, etc.), but fewer than half could recall the name of the news brand itself when coming from search (37%) or social (47%). Breaking this down by network, only 44% of Facebook users could remember the news brand they had clicked on, compared with 55% of Twitter users (Kalogeropoulos and Newman 2017). Respondents were more likely to remember the brand if the story was about local news or political news, but less likely to remember celebrity or
entertainment stories. They were also more likely to remember the brand if they had a previous connection with it or used it as a main source. Once again, this shows that strong news brands are less affected, and less dependent, than newer or weaker brands that have not built up direct loyalties.
CONCLUSIONS
Intermediaries undoubtedly play a substantial and important role in the news today, but the question of dominance is harder to unravel due to the fast-moving, multilayered, and multiplatform nature of modern news consumption. There is no question that Google dominates organic search and the search path in general (90% or more in some countries), while Facebook dominates the social and messaging space, with 80% accessing the company’s services at least once a week. That level of usage clearly gives those companies a powerful role in news discovery. Conversations with media companies, combined with our own passive tracking study, confirm that today Google and Facebook are still by far the largest sources of referral traffic for UK publishers.

On the other hand, our data also shows that some companies are far more reliant on this referral traffic than others. Strong brands like the BBC (in the UK) can still count on around 80% of users using the direct path at least once a week, while other brands are almost entirely dependent on Facebook and Google.

We also note that young people are much more likely than older groups to use social media in particular to access news. For around a third it is their main source of news. But even here, most 18–24s use multiple social networks and messaging apps and multiple sources of news, including TV, radio, and online websites. Less than 5% rely only on social media for news.

We find differences between countries. Averaging traffic across all brands, we find that the direct path is used more in Nordic countries, with aggregators, search, and social networks relatively less important. By contrast, it is much more common in France or Italy to start news journeys with Google, and in Brazil or Greece to start journeys with Facebook or WhatsApp. In parts of Asia we see a different pattern again, with locally owned and managed aggregators playing a bigger role in news than either Facebook or Google.
Furthermore there is evidence that in terms of discovery, both Google and Facebook may have reached some kind of peak. Our surveys show that search has become less important over time as social media, mobile
aggregators, e-mail, and notifications have grown in importance. Our 2017 data also shows a slight decline in social media/Facebook’s role as a source of news in over half our countries—after years of continuous growth. This may be because new mobile-focused platforms such as Snapchat and Apple News are emerging along with messaging apps, as smartphones become the main access point for digital news. In the future, voice platforms like Amazon’s Alexa look set to pose a further challenge to the so-called Facebook/Google duopoly. Even if there is dominance today, there is no certainty that this will still be the case in a few years’ time.

A second important consideration is the extent to which any of these platforms, but particularly Facebook and Google, are affecting the diversity and quality of the news sources consumed. As a consequence of the extensive use of these platforms for news discovery, our research highlights the increasing role of algorithms in selecting news sources and news stories on behalf of users. While these algorithms often give users more of what they already like, we find that at the same time they expose users to a wider range of news sources than those relying on traditional routes to news encounter. We have less evidence about the type and range of content consumed through these platforms, though a number of studies suggest that social media in particular tends to reward shorter, more visual, and more emotive content (e.g., Kalogeropoulos, Cherubini, and Newman 2016), while search-engine optimization often leads to near-identical popular stories being produced by multiple providers (Cagé, Hervé, and Viaud 2017).

The rise of platforms has had significant impacts for both consumers and publishers, but the implications of dominance are very different for each. From a consumer perspective, platforms like Google and Facebook have made it significantly easier to discover a range of content on subjects of personal or professional interest.
It can be argued that the shared (and relatively simple) user experience provided by these dominant platforms has helped unlock the benefits of connected communities more quickly than would otherwise have been the case. This has delivered significant public value in terms of the speed and range of access to news, with incidental exposure also extending these benefits to those less interested in current events. In many ways it is the size and scope of platforms like Facebook and Google that enables them to alert consumers so effectively to any story, no matter how small, and to bring news together in one convenient place from multiple sources across the globe. Our research shows that these attributes are highly valued by consumers, even if they often complain about the unreliability of some of the content that can be found there.
The impact of platform dominance on publishers, however, is more complex. On the one hand, the size and influence of these platforms has allowed publishers to reach new (often global) audiences, but also those who would be unlikely to go directly to a news website or download a news app. It is hard to see, for example, brands like the Huffington Post, Buzzfeed, or the Mail Online building global news businesses without the scale and free distribution provided by Facebook and Google. On the other hand, even these publishers have recently found their business models undermined by platforms that have far greater scale and targeting ability in an advertising market increasingly driven by programmatic technology.

Perhaps even more worrying in the long term, we show that as people consume more content through platforms, the relationship between the reader and the creator of content is being weakened. Our research shows that it is often the platform rather than the publisher that gets most of the credit from consumers. Correct attribution of content within Google and Facebook is under 50%, which remains a significant point of contention for publishers looking to build deep and lasting relationships with their users. Ultimately these points are linked. If the economic rewards increasingly go to platforms rather than to the creators of content, then the range and diversity of content may also dry up—undermining the benefits to consumers of greater choice. The balance of these elements varies across countries and age groups, but remains critical to understand.

REFERENCES

Athey, Susan, Markus Mobius, and Jeno Pal. 2017. “The Impact of Aggregators on Internet News Consumption: The Case of Localization.” Stanford Graduate School of Business Working Paper No. 3353. January 11.

Berelson, Bernard. 1949. “What ‘Missing the Newspaper’ Means.” In Communication Research 1948–1949, edited by Paul F. Lazarsfeld and Frank N. Stanton, 111–29. New York, NY: Harper.
Berry, Jeffrey M., and Sarah Sobieraj. 2014. The Outrage Industry: Political Opinion Media and the New Incivility. Oxford: Oxford University Press.

Cagé, Julia, Nicolas Hervé, and Maria-Luce Viaud. 2017. “The Production of Information in an Online World: Is Copy Right?” NET Institute Working Paper. May 23. https://ssrn.com/abstract=2672050.

Cornia, Alessio, Annika Sehl, and Rasmus Kleis Nielsen. 2016. Private Sector Media and Digital News. Oxford: Reuters Institute for the Study of Journalism, University of Oxford.

Esser, Frank, Claes H. de Vreese, Jesper Strömbäck, Peter van Aelst, Toril Aalberg, James Stanyer, Günther Lengauer, et al. 2012. “Political Information Opportunities in Europe: A Longitudinal and Comparative Study of Thirteen Television Systems.” International Journal of Press/Politics 17: 247–74.
Isaac, Mike, and Sydney Ember. 2016. “Facebook to Change News Feed to Focus on Friends and Family.” New York Times, June 29, 2016. Accessed July 5, 2017. https://www.nytimes.com/2016/06/30/technology/facebook-to-change-news-feed-to-focus-on-friends-and-family.html.

Kalogeropoulos, Antonis, Federica Cherubini, and Nic Newman. 2016. The Future of Online News Video. Oxford: Reuters Institute for the Study of Journalism, University of Oxford.

Kalogeropoulos, Antonis, and Nic Newman. 2017. News Attribution in Distributed Environments. Oxford: Reuters Institute for the Study of Journalism, University of Oxford.

Kim, Cheonsoo, and Jae Kook Lee. 2016. “Social Media Type Matters: Investigating the Relationship between Motivation and Online Social Network Heterogeneity.” Journal of Broadcasting & Electronic Media 60: 676–93.

Neuman, W. Russell, Marion R. Just, and Ann N. Crigler. 1992. Common Knowledge: News and the Construction of Political Meaning. Chicago: University of Chicago Press.

Newman, Nic, Richard Fletcher, Antonis Kalogeropoulos, David A. L. Levy, and Rasmus Kleis Nielsen. 2017. Reuters Institute Digital News Report 2017. Oxford: Reuters Institute for the Study of Journalism, University of Oxford.

Pariser, Eli. 2011. The Filter Bubble: What the Internet Is Hiding from You. London: Penguin.

Prior, Markus. 2005. “News vs. Entertainment: How Increasing Media Choice Widens Gaps in Political Knowledge and Turnout.” American Journal of Political Science 49: 577–92.

Prior, Markus. 2009. “The Immensely Inflated News Audience: Assessing Bias in Self-Reported News Exposure.” Public Opinion Quarterly 73: 130–43.

Sehl, Annika, Alessio Cornia, and Rasmus Kleis Nielsen. 2016. Public Service News and Digital Media. Oxford: Reuters Institute for the Study of Journalism, University of Oxford.

Sunstein, Cass R. 2007. Republic.com 2.0. Princeton, NJ: Princeton University Press.

Valeriani, Augusto, and Cristian Vaccari. 2016. “Accidental Exposure to Politics on Social Media as Online Participation Equalizer in Germany, Italy, and the United Kingdom.” New Media and Society 18: 1857–74.
CHAPTER 6
Challenging Diversity—Social Media Platforms and a New Conception of Media Diversity

NATALI HELBERGER
INTRODUCTION
It is 2040.1 The morning alarm goes off, and wakes me with a careful selection of MindBook headlines. Not too negative, since the radio app knows that I am not exactly a morning person, and that bad news in the morning will negatively affect my socializability and productivity. In the bathroom, my smart mirror treats me to some compliments, and a well-balanced mix of news about health and lifestyle products, and recent tech-developments to slowly prepare me for another day at the faculty. After the first cup of coffee, MindBook considers the time ripe to present me with the more serious kinds of headlines—a new oil conflict in Antarctica, the election campaign in the United States is again in full swing, Turkey is in negotiations with Russia over Cyprus. I smile: my extra minutes on MindBook last night were well invested. Having spent half an hour clicking very purposefully on all the news about external relations, politics, and oil prices seemed to have helped to get me out of this news-about-climate-crisis-and-smart-cities loop I was stuck in for the better half of last week. Admittedly, it did help

1. The author would like to thank the editors for their thoughtful feedback and a stimulating discussion.
me a lot to prepare my presentation at the Ministry for Education, Culture and Science yesterday. And yet, sometimes I wish that getting the news was a little less . . . efficient. Since the decline of the general news media 30 years ago, getting the bigger picture has become more difficult.

Futuristic? A bit, but not excessively so. The way we find and receive news content is changing, not rapidly but steadily. One key trend is that people increasingly access news and media content not only via traditional media but also via new information intermediaries, such as social media platforms, apps, and search engines (Reuters Institute for the Study of Journalism 2016, 2017; Pew Research Center 2016). These information intermediaries have stepped in to fill a critical gap in the news delivery chain: channeling attention and helping users to make a selection of the news that they find relevant. Information intermediaries often do not produce news themselves, nor do they see themselves as editors or as having the mission of providing citizens with the diverse set of information that we need in order to make informed choices. Rather, their business model is geared toward distributing news, connecting single articles with audiences, and realizing the advertising potential of different kinds of media content and target groups. And with the advances of data analytics and the increasing stock of data and intelligence about user preferences and interests, news has turned into a customizable product that can be carefully targeted and adjusted to individual recipients and the demands of advertisers.

The presence of such data-driven, heavily targeted information intermediaries does not necessarily need to be a challenge to a diverse information environment, as long as there are alternative sources of information.
But what to make of a situation in which only one or a few dominant sources of information remain (as with the fictional MindBook in the introductory scenario above)? And, in light of such a dominant player and a heavily targeted news environment, what are the prospects of still encountering diverse media content? The focus of my chapter is on one of the central public policy objectives in media policy: media diversity.2 I do not discuss other, equally important
2. Note that there is still considerable conceptual disagreement about the concrete meaning of the notions of “media pluralism” and “media diversity.” Often, both notions are used interchangeably (see McGonagle [2011], speaking of “conceptual messiness”). McGonagle suggests a pragmatic approach in which pluralism refers to issues of media ownership and the public’s choice between different providers of services, whereas diversity refers to the range of programs and services available (2011). Along these lines, this chapter predominantly uses the notion of “diversity,” and uses “pluralism” only where it is necessary to explicate the difference between issues of media ownership and the choice between different programs and services.
[ 154 ] Society
issues of platform dominance, such as the role of platforms in politics, their economic impact, and so forth, confident that many of those issues are covered by other chapters. Media diversity as a concept is deeply ingrained in our thinking about the role and contribution of the media in a democratic society, and in the idea that no one entity should be able to control (or dominate) the public debate. Instead, the media should reflect the interests and needs of a heterogeneous society, in which all voices have, at least in principle, the opportunity to make themselves heard. There is broad agreement that, as the Council of Europe has put it, “media pluralism and diversity of media content are essential for the functioning of a democratic society” and that “the demands . . . from Article 10 of the Convention for the Protection of Human Rights and Fundamental Freedoms [right to freedom of expression] will be fully satisfied only if each person is given the possibility to form his or her own opinion from diverse sources of information” (Council of Europe 2007). The close link between media diversity and democratic participation may also explain the vigor with which the rise of platforms and their growing influence in and on the media landscape is being met. The impact of their personalized recommendations and algorithmic filtering on users’ information diets has prompted much concern and dystopian visions, not only of filter bubbles and information bias but also of targeted exclusion from news access. Depending on their personal profiles, users get to see some kinds of information more, and others less or not at all. With a few large information intermediaries growing in importance, sometimes as the main source of information (Reuters 2017), the need to grasp the dynamics of these more centralized, data-driven (instead of editorially driven) news distribution models is ever more urgent.
There is a general feeling of unease about the growing power and impact of platforms on users’ media diets, and yet, as Martin Moore aptly observed, it is not “[u]ntil we better understand and communicate the dilemmas they raise” that we will be able to find effective policy responses (Moore 2016). Regulators and policymakers across Europe are grappling with the question of what exactly the nature of these dilemmas is. Or, in the words of the British regulator Ofcom: “More fundamentally, the precise nature of future plurality concerns in the online news market are difficult to forecast.”3 Common to the discussions in countries such as the UK, but also Germany, France, and the Netherlands, is the difficulty of adequately conceptualizing and monitoring the impact of information intermediaries on
3. Ofcom, 2012, 27, para. 5.54.
Challenging Diversity
the information landscape, or of understanding where the true risks to media diversity lie.4 The opacity of many of those platforms and the secrecy that surrounds their algorithms and ordering mechanisms add to this difficulty (Pasquale 2015) and require entirely new methods of monitoring (Balazs et al. 2017). Understanding the nature of diversity concerns and the potential sources of platform dominance is critical, however, to identifying adequate policy responses. This chapter aims to bring more conceptual clarity by developing a better understanding of platform power, how it can impact media diversity, and what the implications are for media diversity policies. In so doing, it concentrates on social media platforms, both because of the particular role that these platforms play in news consumption and because at least some of them have advanced into the business of distributing and aggregating news and media content. The main argument this chapter makes is that with the arrival of information intermediaries, and social media platforms in particular, digital dominance can no longer be understood as dominant control over content rights, outlets, or distribution channels, as was the case with the traditional media. The true source of digital dominance is the ability to control the way people encounter and engage with information, and the ability to steer their choices through sheer knowledge of their interests and biases. More than ever, media diversity has become the result of social dynamics, dynamics that are carefully orchestrated by one or a few platforms. The chapter explains what implications this finding has for the way we measure and assess potential risks for media diversity on and from social platforms.
MEDIA DIVERSITY—WHAT IT IS AND WHY IT MATTERS, ALSO ONLINE
Does media diversity still matter? One could argue that in the digital information environment, with its abundance of information, media diversity has turned into a rather meaningless concept. Never before has it been possible to receive more information, not only from the national media but from myriad media companies, old and digital-native alike, around the globe. This section will argue that yes, media diversity still matters, but that changing media consumption habits and the arrival of social media platforms require us to further develop our conception of media diversity.
4. See, e.g., Ofcom, 2012, 25; in Germany: Die Landesmedienanstalten—ALM (2016), Digitalisierungsbericht. Kreative Zerstörung oder digitale Balance: Medienplattformen zwischen Wettbewerb und Kooperation, Vistas: Berlin, 2016, in particular 14 and 74; in the Netherlands: Commissariaat voor de Media, Discussion Paper No. 877 09/2016.
Media Diversity—Why It Matters, and How
Diversity policies are anchored in our ideas about functioning deliberation in a democratic society, and as such potentially serve a whole battery of goals and values, from inclusiveness, tolerance, open-mindedness, well-informed citizens, and public deliberation to a healthy, competitive media landscape and industry. Diversity in the media can create opportunities for users to encounter different opinions and beliefs and to reflect on their own viewpoints (Kwon, Moon, and Stefanone 2015), enhance social and cultural inclusion (Huckfeldt, Johnson, and Sprague 2002), and stimulate political participation (Mutz 2006). At the core of all the different values and objectives that diversity and diversity policies serve is dominance, or rather the prevention of dominance: a situation in which one opinion, one ideology, one group, or one economic power dominates all others (Craufurd-Smith and Tambini 2012; Karppinen 2013; Valcke 2004). Whether one turns to the marketplace-of-ideas rationale or to more deliberative or even radical conceptions of diversity, common to all of diversity’s many conceptualizations (Karppinen 2013) is the ability of all voices to participate and seek an audience. The prevention of dominance as a core objective of diversity policies is also clearly reflected in the different regulatory options that have been deployed to protect and promote media diversity: existing regulations are concerned either with avoiding and mitigating dominance or with imposing constraints on quasi-dominant parties so that they cannot abuse their economic and opinion power to the disadvantage of democratic discourse and functioning media markets (Valcke 2004). An example of the latter are the provisions that seek to promote internal diversity of supply, imposing more or less specific diversity requirements on a single outlet.
Typically that would be public service broadcasting, which, particularly in the earlier days of broadcasting, dominated the scene and was in many European countries the gateway to audiovisual information. Accordingly, public service broadcasters (and to a lesser extent other media services) were obliged to “enable different groups and interests in society—including linguistic, social, economic, cultural or political minorities—to express themselves” (Council of Europe 1999). Regulatory obligations to promote internal diversity, the diversity of a particular media outlet or platform, include measures that guarantee a diverse composition of the programs of the public service broadcaster, provisions aimed at protecting editorial independence, specific pluralism safeguards such as program windows, frequency-sharing arrangements, provisions about the diversity of staff and program councils, lists of important events, and quota rules.
Then there are measures directed at protecting and promoting what is often referred to as structural or external diversity, most prominently the media-ownership rules. Ownership rules have traditionally formed the core of regulators’ response to the trend toward commercialization and liberalization of the media (Karppinen 2013), with the goal of “prevent[ing] or counteract[ing] concentrations that might endanger media pluralism at the national, regional or local levels” (Council of Europe 1999, appendix, para. I). Then there are licensing requirements, obligations regarding media transparency (see, extensively, Council of Europe 1994), and must-carry, due-prominence, and access obligations (Helberger, Kleinen-von Königslöw, and Van Der Noll 2014; Council of Europe 2007). Next to the diversity and pluralism of supply, there is also diversity of exposure to consider, that is, the question of how diverse the selection of content and speakers is that users are ultimately exposed to and consume. As the Council of Europe acknowledged, “pluralism is about diversity in the media that is made available to the public, which does not always coincide with what is actually consumed” (Council of Europe 1999). This observation is confirmed by research finding that an increase in the diversity of content can, under certain circumstances, actually lead to a decrease in the diversity of the content consumed (Napoli 1999; Ferguson and Perse 1993; Cooper and Tang 2009; Wojcieszak and Rojas 2011). This is because people have only so much time and attention to spend on consuming media content: the greater the diversity of content, the greater the need to filter and select. Filtering and selecting media content is an important function of information intermediaries, such as search engines and social media platforms. Their main function is to channel audience attention, affecting both access to content and the choices people make.
As such, they affect not so much the diversity of supply (social media platforms do not produce content) but rather the diversity of the media content that individual members of the audience are eventually exposed to (exposure diversity). The key question, then, is: are platforms an opportunity or a threat to media diversity (and exposure diversity in particular)?
Are Social Media Platforms an Opportunity or Threat to Diversity?
The question of the extent to which social media platforms have added new opportunities or challenges for diversity is not easily answered. There is a growing body of research that finds evidence of a positive contribution of social media platforms to media diversity, and to diversity of exposure
in particular. In its 2017 News Report, the Reuters Institute found that users of social media were significantly more likely than nonusers to see sources they would not otherwise use. This finding echoes earlier research showing that the use of social media platforms can result in exposure to more diverse news (Bakshy, Messing, and Adamic 2015; on incidental exposure to news: Lee, Lindsey, and Kim 2017, stressing the importance of heterogeneity of networks; in a similar direction Messing and Westwood 2014; on exposure to dissenting opinions: Diehl, Weeks, and Gil de Zúñiga 2016). Others find evidence to the contrary, for example a lesser likelihood of exposure to cross-ideological content (Himelboim, McCreery, and Smith 2013) and the existence of echo chambers due to confirmation bias (Quattrociocchi, Scala, and Sunstein 2016). Yet others produce mixed evidence (Flaxman, Goel, and Rao 2016; Lee et al. 2014, finding that while social media platforms can increase exposure to diverse news, people who are more active in political discussions on SNSs are more likely to be polarized; Stroud 2008, on the role of exposure to particular kinds of content; Lee, Lindsey, and Kim 2017, on information overload as a moderating factor; or Anspach 2017, on the importance of settings and the role that shares, likes, and comments can play in engagement). What this research shows is that on social media, different factors than in the traditional media determine the level of diversity users are exposed to. Such factors include the settings of the filtering and recommendation algorithms and the kinds of content the algorithm decides to prioritize or suppress. In this respect, the MindBook example, and the potential of its recommender to narrow down the information diet to a selection of topics that the algorithm considers relevant, is far from futuristic.
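The feedback loop these studies point to, engagement data feeding back into what gets recommended next, can be illustrated with a deliberately simplified sketch. Everything in it is hypothetical: the topic labels, the linear reinforcement rule, and the MindBook-style recommender are a toy model of the mechanism, not a description of any actual platform’s algorithm.

```python
import random

def recommend(topic_weights, k=5):
    """Pick k items, each drawn from a topic in proportion to the
    user's current engagement weights (a toy engagement-based ranker)."""
    topics = list(topic_weights)
    weights = [topic_weights[t] for t in topics]
    return random.choices(topics, weights=weights, k=k)

def simulate(days=50, seed=1):
    """Simulate a user who clicks whatever is shown: every exposure
    reinforces that topic's weight, so the feed can narrow over time."""
    random.seed(seed)
    topic_weights = {"politics": 1.0, "climate": 1.0, "sports": 1.0, "tech": 1.0}
    for _ in range(days):
        for topic in recommend(topic_weights):
            topic_weights[topic] += 1.0  # engagement feeds back into the ranking
    return topic_weights

weights = simulate()
total = sum(weights.values())
shares = {t: w / total for t, w in weights.items()}
```

Run with different seeds, the final topic shares drift away from the uniform starting point and typically end up skewed toward whichever topics happened to attract early clicks, which is the narrowing dynamic the chapter’s opening scenario dramatizes (and that purposeful clicking can deliberately steer).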
If there is one aspect that the debates about Facebook’s News Feed have made clear, it is the impact of the recommendation mechanism on the selection of content in users’ newsfeeds, and the fact that the criteria determining that selection differ strongly from the editorial criteria that matter in the traditional media (DeVito 2017; Bucher 2012). This raises the more normative question of to what extent diversity and pluralism should still matter in the context of social media platforms. Social media platforms are not media in the traditional sense, nor is their main purpose to inform, let alone to do so in a way that reflects the diverse topics and voices that constitute our democratic societies (DeVito 2017). Still, the Council of Europe highlights the importance of diversity in the context of the criteria according to which search results are selected, ranked, or removed (Council of Europe 2012). The importance of diversity as a regulatory goal has also been highlighted by the UK regulator Ofcom in its review of its diversity and plurality measurements (Ofcom 2012). In this
context, Ofcom referred explicitly to the opportunities but also the challenges that come from digital platforms. According to Ofcom, “[t]here is a risk that (social media) recommendations are used in a manner that narrows citizens’ exposure to different points of view, by reinforcing their past habits or those of their friends” (Ofcom 2012, 25). And further: “If however they were to start exercising a greater degree of editorial control in the future, then this could raise significant plurality concerns” (Ofcom 2012, 26). In a similar vein, in the Netherlands, the Dutch regulatory authority for the media sector observed that the true risk online is not so much that the overall offer will be less diverse, but rather that the offer accessible to (individual) users may be less diverse as a result of algorithmic filtering, such that users are not even aware of the size and diversity of the overall offer.5 Given the growing importance of social media platforms for the way users encounter and engage with information (Reuters 2017), and the impact that at least the larger platforms exercise on the overall structure of news markets and information flows (Moore 2016; Kleis Nielsen and Ganter 2017), there are strong reasons to argue that diversity should matter, in one way or another, also in the context of social media platforms. And if one follows Karppinen in ultimately conceptualizing diversity as a matter of distributing communication power (Karppinen 2013, 114), it becomes clear that leading social media sites cannot be left outside diversity considerations. The question, then, is not so much whether media diversity still matters in a platform context. It does. The question is rather how and in which form.
One of the reasons why policymakers find it so difficult to understand and handle the issue of diversity on social media platforms is that exposure diversity as a normative goal is still little understood and only beginning to trigger a much-needed discussion (Craufurd-Smith and Tambini 2012; Helberger 2012; Valcke 2011). The other reason is that in order to understand the potential risks and opportunities of social media platforms for diversity and pluralism (as normative goals), it is necessary to understand how exactly platform power affects the realization of media diversity and pluralism. The following sections therefore develop a conceptual framework to better understand the risks to, and opportunities from, social media for diversity and pluralism.
5. Commissariaat voor de Media, 15 Jaar Mediamonitor. Van Mediaconcentratie naar Mediagebruik, Commissariaat voor de Media: Amersfoort, 2017, 46 (in Dutch). http://www.mediamonitor.nl/wp-content/uploads/15-Jaar-MM-pdf-LR.pdf.
UNDERSTANDING PLATFORM POWER AND DIVERSITY
In order to understand the true impact of platforms on media diversity and pluralism, it is important to view platforms in terms of their business models (and economic incentives), the means they use to distribute content, and their role in the wider information ecology. From the point of view of diversity policies, it is also important to understand how platforms differ from the more traditional media, such as broadcasting and newspapers. Existing policies have been written with the more traditional media in mind, and the differences between information intermediaries and traditional media may explain why the traditional instruments are only partly suitable to address new challenges to media diversity and pluralism. Having said that, it is also important to realize that platforms are undergoing highly dynamic transitions. Facebook is a good example. Having set out as essentially a “tech” company, Facebook long treated news and media content as peripheral to its core business (Van Dijck, Poell, and De Waal 2016). The core business of social media platforms was providing social media services,6 and connecting people, content producers, and advertisers. In this respect, social media platforms are not like traditional media and have different business incentives. Media content serves not so much to inform people or to fulfill an editorial mission, but rather to fuel social interactions, form the backdrop for advertising campaigns, and keep people on the site longer. As with many user-created-content sites, however, the realization soon dawned that cat videos and vacation pictures can go only so far in arresting the attention of users, a realization that led to an increasing interest in professional content on many of these sites. Examples are Facebook’s Instant Articles and Trending Topics, Twitter’s Moments, YouTube’s commissioning of professional media content, and Google’s News Initiative.
Common to all these initiatives is the wish to integrate professional media content into the platform, without actually producing it. As a result, the
6. Twitter: “Connect with your friends—and other fascinating people. Get in-the-moment updates on the things that interest you. And watch events unfold, in real time, from every angle.” Facebook: “Connect with friends and the world around you on Facebook.” Instagram: “Instagram is a fun and quirky way to share your life with friends through a series of pictures. Snap a photo with your mobile phone, then choose a filter to transform the image into a memory to keep around forever. We’re building Instagram to allow you to experience moments in your friends’ lives through pictures as they happen. We imagine a world more connected through photos.” YouTube: “Enjoy your favorite videos and music, upload original content, and share it all with friends, family, and others on YouTube.” WeChat: “Connecting 800 million people with chats, calls and more.”
relationship to professional media producers, and the impact on the overall media landscape, became increasingly complicated. It is symptomatic how Mark Zuckerberg, CEO of Facebook, moved within a relatively short time from claiming, “We’re a technology company. We’re not a media company,”7 to the observation: “Facebook is a new kind of platform. It’s not a traditional technology company. It’s not a traditional media company. You know, we build technology and we feel responsible for how it’s used. . . . We don’t write the news that people read on the platform. But at the same time we also know that we do a lot more than just distribute news, and we’re an important part of the public discourse.”8 So what exactly constitutes the communicative power of social media platforms such as Facebook, and what do we need to be aware of when debating dominance and the potential implications that these new players have for media pluralism and diversity? In this context, it is useful to return to the distinction between structural and internal diversity from the previous section.
Social Networks and Structural Diversity
Social media platforms do not so much affect the diversity of supply of different voices from different sources; these voices are still free to exist outside the structure of the social media platforms. Perhaps the platforms’ greatest structural impact on diversity lies in the way they affect diversity of exposure and media consumption and control users’ attention. In this way they can affect not only the diversity of content and plurality of sources that users encounter within the social media platform but also the vitality and diversity of the overall media landscape (since the media rely for their economic survival on access to users and users’ attention). Social media platforms stage encounters with media content, affect the “findability” of content, order and prioritize existing content, manage and direct user attention as a scarce resource, and influence the choices users make. This happens not only through offering basic search functionality but also through algorithmic or collaborative filtering and the display of personalized search results and recommendations (Schulz, Dreyer, and Hagemeier 2011; European Commission 2013; Council of Europe 2012; Van Hoboken
7. http://money.cnn.com/video/technology/2016/08/29/facebook-ceo-were-a-technology-company-were-not-a-media-company-.cnnmoney/.
8. https://www.theguardian.com/technology/2016/dec/22/mark-zuckerberg-appears-to-finally-admit-facebook-is-a-media-company.
2012). In other words, the sources of communicative power or even dominance are not, as in the traditional media, the resources to produce content, IP rights, and expertise. Instead, the source of communicative power of social platforms is control over powerful sorting algorithms and data: data about their users, about the way users engage with content, and about the best, most effective way of bringing content to users’ attention. Thereby, social media platforms are instrumental in a more conceptual shift from mass to personalized modes of distributing media content. This is a shift in which it is not so much ownership and control over content that matters, but knowing the users, and establishing the knowledge, relationships, and technical infrastructure to trigger users’ engagement with particular types of content. It is a shift from a situation in which the news media function as our main sources of information to one in which a “MindBook” sorts our information exposure according to its own logics and users’ preferences (DeVito 2017). From this it follows that the real problem with structural diversity is not so much ownership of a particular resource. The true challenge from platforms for structural diversity lies in the relationship between those making media content and those “owning users,” their data, and the tools and technologies to distribute media content and arrest (or even monopolize) users’ attention. This also means that concerns about structural diversity are no longer easily resolved by counting the number of sources and the diversity of content in media markets, nor will the traditional measures to protect and promote structural diversity be particularly useful in a platform context. Instead, the key to dealing with platform power and structural diversity is to balance negotiation power, protect media independence, and ensure a fair, level playing field.
Balancing Negotiation Power
So far, the relationship between the old media and the new intermediary platforms has taken the form of bilateral negotiations between traditional media outlets and the intermediaries. As Kleis Nielsen and Ganter (2017) find, these relationships can be both symbiotic and asymmetrical: “Digital intermediaries may need news in a broad sense, or at least benefit from it. But it is not at all clear that they need any one individual news media organisation, even large ones” (Kleis Nielsen and Ganter 2017). And while private ordering and the way platforms manage their relationships with users have received growing attention from the perspective of contract law
and the fairness of terms of use (Loos and Luzak 2016; Wauters, Lievens, and Valcke 2014), a parallel discussion of the fairness of the terms in the agreements between platforms and media companies, publishers, and broadcasters is still largely missing. Arguably, future media diversity policies need to add to their toolbox means to assess the fairness of deals in such asymmetrical relationships, as well as ways of improving the negotiation power between publishers and information intermediaries. This can include initiatives to promote the transparency of such deals and to encourage the media to join forces, but also ways to stimulate the openness of collaborations with third parties (similar to the way in which, e.g., telecom operators have a negotiation duty) and scrutiny of the fairness of the conditions under which media content is presented and distributed via platforms (e.g., brand visibility, client management, and data and revenue sharing). In this respect, the tools developed in telecommunications law (and under the European Access Directive in particular)9 might provide an instructive model to learn from, as an area in which the regulator has developed a system for assessing the fairness and openness of B2B negotiations, particularly from the perspective of their impact on the openness of, competitiveness of, and choice within communications markets.
The Importance of Media Independence
One structural problem, or danger, in any asymmetrical relationship is dependence. The aspect of dependency has also been identified by Nielsen and Ganter in their study, who point in this context to “a tension between (1) short-term, operational, and often editorially led consideration and (2) more long-term strategic considerations focused on whether the organisation will become too dependent on these intermediaries for reaching audiences, and in the process lose control over its editorial identity, access to user data, and central parts of its revenue model” (Kleis Nielsen and Ganter 2017). The problem of dependencies deserves to be taken seriously, particularly from the perspective of the role that the media play in the realization of the fundamental right to freedom of expression as public watchdog and fourth estate, a role that they can play only if they remain independent from states as well as from commercial power. Dommering warns that the traditional media are at risk of losing more and more of their identity
9. Directive 2002/19/EC of the European Parliament and of the Council of March 7, 2002, on access to, and interconnection of, electronic communications networks and associated facilities (Access Directive), OJ L 108, 24.4.2002, 7–20.
in their attempt to assimilate and create a functional symbiosis between themselves and intermediaries (Dommering 2016). And Van Dijck and Poell point to the risk of new dependencies as the result of a shift in the news process from “an editorial logic to an algorithmic logic,” a shift whose main drivers are platforms (Van Dijck, Poell, and De Waal 2016). Media law and policy in Europe have a long tradition of dealing with the independence of the media, be it the constitutional safeguards of Art. 10 ECHR against state censorship, or the extensive rules on advertising, sponsoring, and the separation of editorial and commercial content in relation to commercial players. It is high time to revisit these rules in light of the intrinsic relationship between the media and information intermediaries.
Fair, Level Playing Field
Finally, the point about the fair, level playing field relates to the question of whether it is still justified to treat offline and online media differently, imposing far stricter rules and diversity expectations/requirements on the former while maintaining a light-touch approach for the latter. For a long time, the key argument for justifying stricter regulation of the broadcasting media vis-à-vis the online media has been the alleged persuasiveness of video (Barendt 1993). One may wonder to what extent broadcasting is still more persuasive than the communication of media content via, for example, social media platforms. Arguably, social media platforms can have an equal if not more persuasive impact, particularly if they use the deep insights they have about users to refine their targeting into persuasion strategies. What is more, these platforms have the tools and power to get users to act on information, and to influence civic behavior (Moore 2016, 54). The difficulty here is understanding the true nature of editorial control/responsibility and diversity on social media platforms. To draw a preliminary conclusion: when assessing platform power (or even dominance) over a media sector, new benchmarks need to be developed that include the amount of consumer data, the characteristics of the recommendation algorithm, the number and activity of users, the balance in the contractual conditions between platforms and media companies, the level of independence of the media from platforms, and the existence of a level playing field. Doing so may also require new forms of monitoring and measuring diversity, for example, in order to ascertain the level of diversity that different categories of users on different platforms are eventually exposed to.
Internal Diversity
Internal diversity considerations figure very prominently in the ongoing public policy debate about the impact and responsibility of information intermediaries. Prominent among these are fears about filter bubbles and echo chambers (see Pariser 2011; Sunstein 2000; High Level Expert Group on Media Freedom and Pluralism 2013). But these fears must be seen in context: as long as people have the opportunity to receive information from different sources (multisourcing), the fact that they receive a less diverse information diet on one platform can be counterbalanced by access to more diverse information elsewhere, for example, in the public service media or the traditional press (see Zuiderveen Borgesius et al. 2016; Nguyen et al. 2014; Schmidt et al. 2017). By contrast, in a situation in which one particular platform has become the dominant source of information (as in the MindBook example in the introduction), the internal diversity of that platform does matter. In the MindBook example, alternative sources have been crowded out of the market, and with them the opportunity for users to learn what information they might be missing in their MindBook-only diet. Seen from this perspective, it also becomes evident why platform dominance is, or should be, of such concern for media policymakers, and why it is important to protect and promote structural diversity. In addition, with the growing relevance of (some) platforms as an important and maybe even exclusive gateway to information (Reuters Institute for the Study of Journalism 2017), questions of internal diversity within the platforms come to the fore. In the public policy debates so far there has been no shortage of suggestions on how to hold platforms more accountable for the diversity within their platforms and to impose internal diversity safeguards (Paal 2012; Foster 2012; Neuberger and Lobgis 2010; Schulz, Dreyer, and Hagemeier 2011).
The problem with most of these suggestions, and the real challenge for future media law and policy here, is understanding how diversity works on social media platforms and what the actual contribution of platforms is to internal diversity. Taking into account the growing number of users for whom social media platforms are the main gateway to accessing and experiencing media content, the issue of internal diversity becomes more and more pressing, and also infinitely more complex. This is because diversity on social media platforms is no longer a matter of an editor who determines what a (sufficiently) diverse mix of contents is. Diversity is increasingly also a matter of how users engage with that content, share, prioritize, like, or dislike it, and the extent to which the architecture and design of a social media platform
enables and steers such engagement. In other words, to truly understand the impact on, and power of platforms over, the internal diversity within the platform, it is important to understand the influence that platforms have not only on the selection of content itself but also on the conditions under which users encounter and can engage with content. This is essentially a user-driven perspective on diversity that corresponds to the social character of platforms. To give but two examples: filtering, search, and self-selected personalization are activities by which users themselves actively influence the diversity of contents they wish to be exposed to (Van Hoboken 2012). And through activities such as liking, flagging, rating, and sharing, users can actively influence which contents others are exposed to (Gerlitz and Helmond 2013; Crawford and Gillespie 2016). Engaging with and using (diverse) content is critical to deliberating, considering different perspectives, and forming an opinion. On social platforms, users can engage with diverse content by actively contributing to the deliberation (through blogs, posts, comments, etc.). They can also engage in more symbolic ways, for example, through liking, voting, rating, and so forth. Platforms create the organizational framework and opportunities for exposure to and engagement with diverse content. As such, social media platforms not only distribute media content but also create their very own versions of "privately controlled public spheres," in which users not only encounter diverse content but also engage and deliberate, share and contest. This is where their true contribution to and power over diversity within the platform lies. And this is also where their social responsibility lies.
Platforms’ influence on news distribution and exposure, and ultimately on diversity, can include measures and design decisions at the level of content (e.g., providing opportunities for UGC and for user-led editing), engagement (possibilities to comment, post, or express consent or dissent), and network (the ability to create groups, invite friends, etc.). Three examples may demonstrate my point in more detail, and also show how the “diversity-by-architecture” perspective may open new and interesting avenues for diversity policies and research:
Diversity- versus Popularity-Based Recommender Design
The first and probably most obvious example is the settings of the recommender algorithms. Search, personalization, and recommendation play a rather pivotal role for both exposure to information and diverse exposure (Van Hoboken 2012). How important that role can be has been proven
once again by the fierce controversy around Facebook’s Trending Topics algorithm and claims of bias. A closer look at the editorial guidelines and instructions for the human editors of Trending Topics revealed that considerations of media diversity were more or less absent (Facebook has meanwhile changed its algorithm, and probably also the editorial guidelines, in response to the Trending Topics criticism). Trending Topics editors were, for example, asked to form an overview of what was trending by drawing on the Facebook Trending algorithm (which registers whether topics are mentioned disproportionately often), on engagement (likes, comments, and shares), and on the headlines of a selection of top news sites that was strongly US/UK-centered.10 Arguably, the Trending Topics algorithm thereby completely failed to reflect the diversity of the media scene in Europe, local content, and so forth. More generally, many recommender systems display a certain bias toward popular recommendations or recommendations that reflect individual interests and personal relevance (DeVito 2017), as in the MindBook example. Conversely, it is at least technically possible to program recommendation algorithms in a way that promotes more diverse exposure to contents (Adomavicius and Kwon 2011; Munson and Resnick 2010; Helberger, Karppinen, and D’Acunto 2018). More sophisticated recommendation algorithms that also take into account medium-term objectives such as diversity, or that at least give users a choice between different recommendation logics, may have a positive effect on the diversity of content users are exposed to.
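The point that diverse exposure is a programmable design objective can be made concrete with a toy example. The greedy re-ranking below is loosely inspired by "maximal marginal relevance"-style diversification; the item names, scores, and penalty rule are entirely hypothetical, and this is not any platform's actual algorithm:

```python
def rerank_for_diversity(candidates, k=3, diversity_weight=0.5):
    """Greedily pick k items, discounting an item's popularity score
    once its topic is already represented in the selection."""
    selected, covered = [], set()
    pool = list(candidates)
    while pool and len(selected) < k:
        best = max(
            pool,
            key=lambda it: it[2] * (1.0 - (diversity_weight if it[1] in covered else 0.0)),
        )
        pool.remove(best)
        selected.append(best[0])
        covered.add(best[1])
    return selected

# Hypothetical candidate items: (item_id, topic, popularity_score).
items = [
    ("a1", "sports", 0.9), ("a2", "sports", 0.8), ("a3", "sports", 0.7),
    ("b1", "politics", 0.6), ("c1", "culture", 0.5),
]

popular_first = rerank_for_diversity(items, diversity_weight=0.0)  # ["a1", "a2", "a3"]
diverse_first = rerank_for_diversity(items, diversity_weight=0.5)  # ["a1", "b1", "c1"]
```

Setting `diversity_weight` to zero reproduces a purely popularity-based ranking; raising it trades popularity for topical spread, which is exactly the design choice at issue here.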
Also, more and more third-party tools and applications are available whose objective is to make people aware of their filter bubbles, to encourage them to diversify their media diet, and to stimulate their curiosity.11 Stimulating initiatives like these, and giving prominence to such tools, could be a new and potentially far more effective approach to fostering diversity on platforms than traditional policy responses, such as the must-carry or due-prominence rules suggested earlier (Foster 2012; Danckert and Mayer 2010; European Commission 2013). Arguably, dominance thereby also becomes a question of how open platforms are to alternative recommendation settings and technologies that help users to critically question the recommendations of one party (e.g., the platform), and to discover alternative recommendations by others.
10. BBC News, CNN, Fox News, The Guardian, NBC News, New York Times, USA Today, the Wall Street Journal, Washington Post, BuzzFeed News; see https://fbnewsroomus.files.wordpress.com/2016/05/full-trending-review-guidelines.pdf.
11. E.g., Huffington Post’s “Flipside”; BuzzFeed’s “Outside Your Bubble”; “Read Across the Aisle”; “Blue Feed, Red Feed”; “Escape Your Bubbles”; “Filterbubblan.se”; “AllSides.”
Diversity of the Personal Social Media Platform
A growing body of research shows that the diversity and heterogeneity of one’s personal social media platform is an important factor in the quality of the deliberative process and in openness toward other ideas and viewpoints (Jun 2012; Huckfeldt, Johnson, and Sprague 2002; Bakshy, Messing, and Adamic 2015; Messing and Westwood 2014). And while it is true that it is primarily users who decide who will be part of their personal social media platform, social media do exercise some influence here as well (Diehl, Weeks, and Gil de Zúñiga 2016). Facebook, for example, suggests not only certain groups and friends but also whom to (not) follow, by making recommendations for pages similar to the ones one already follows. Note that the only option offered so far is “pages similar to,” not “pages other than” or “pages likely to provide a contrasting viewpoint.” Here, social platforms could learn from research showing that the presence of dissenting minority views in a group can promote openness toward, and consideration of, alternatives at the group level and enhance problem-solving (Nemeth 1986). More generally, the extent to which users encounter cross-cutting content also depends on who their friends are (Bakshy, Messing, and Adamic 2015). Accordingly, stimulating the deliberate inclusion of such minority or contrasting actors could be a way to improve the quality and diversity of engagement on social media platforms. Understanding better the dynamics of diverse engagement on social media platforms, and how personal, social, and contextual characteristics contribute to diversity of exposure, may be another way of stimulating diversity online (compare, e.g., Bramoullé and Rogers 2009; Swapneel et al. 2011). Furthermore, such understanding can inform architectural design choices that stimulate engagement with a (heterogeneous) group of friends (Anspach 2017).
Privacy and Diversity
The final example to be discussed here concerns the privacy settings offered by social media. At first sight, privacy and media diversity may not appear to have much in common, but they do. Kwon, Moon, and Stefanone show, for example, that the privacy affordances provided by a social medium can affect the way users post and engage with content, including less popular, counterattitudinal content and content reflecting minority opinions (Kwon, Moon, and Stefanone 2015). On a more fundamental level, media diversity, as a constituting factor of freedom of expression and of the role of the media in a democratic society, can
only function if users enjoy a certain autonomy, that is, independence from government or commercial forces, in making their decisions and weighing the arguments. Privacy rights, for example, can provide the necessary democratic breathing space for individuals to form their distinct and diverse identities and ideas (Richards 2008; Cohen 1996). Put differently, protecting the privacy of users, in their relationship to the media and also to advertisers and other third parties that seek to influence the way users choose and reflect on media content, is a way of protecting the very values that we hope to promote with media diversity: critical and diverse thinking. Dawes speaks in this context of a dimension of “political privacy.” He argues, “[v]iewed from a civic republican perspective, therefore, the political legitimacy of the state is guaranteed by the public sphere, which in turn is dependent upon privacy” (Dawes 2014). None of the aspects mentioned here—diversity of the recommender system, diversity of the personal social media platform, and the level of respect for users’ privacy and autonomy—are among the traditional benchmarks for assessing media diversity or dominance. And yet, as this analysis has demonstrated, these are factors that matter in the dynamic and user-driven construction of diversity online. One very concrete conclusion is that the assessment of diversity, and of the ability of particular parties to dominate the media landscape online, must not only follow established criteria (such as the number of sources available, the diversity of categories of content presented, etc.) but also be able to incorporate new criteria, including the extent to which users are (truly) free to choose between different sources and contents and enjoy both the options and the autonomy to do so.
CONCLUSIONS
This chapter has sought to sketch the contours of a new conception of media diversity, one that is able to take into account the new, deeply social dynamics of platforms in online media markets. It has argued that in order to truly understand the platforms’ potential impact on media diversity and media pluralism, it is critical to look at platforms not in isolation but in their relationship to (1) the other media outlets whose content they distribute and (2) users. The true impact of information intermediaries such as social media platforms on media diversity lies not so much in whether they are willing and able to present users with diverse packages of information in the way traditional media editors do. The contribution of social platforms runs
much deeper: they create the organizational and architectural framework and the opportunities for exposure to and engagement with diverse content. This also means that diversity as a value, or even as a public policy objective, does not have the same meaning on social media platforms as in the traditional media context. Media diversity on social media platforms must be understood as a cooperative effort of the social media platform, media organizations, and users. The way users search for content, engage with and like it, shape their networks, and so forth, has important implications for the diversity of contents, ideas, and encounters that they are exposed to. Similarly, the way the traditional media collaborate with information intermediaries to distribute content and reach viewers impacts structural diversity. When seen from this perspective, it becomes clear that the impact of information intermediaries on media diversity is not easily understood (or monitored) with existing mainstream conceptions and measures of diversity (such as the diversity of opinions and ideas from different sources). For the same reasons, existing diversity safeguards are of only limited use in protecting and promoting diversity within, and in the presence of, powerful social media platforms. Instead, future diversity policies need to turn their attention to (1) the relationship between traditional media and information intermediaries, with the goal of establishing a more level playing field and structural diversity; and (2) the relationship between platforms and users, with the goal of promoting the architectural and organizational measures that enable users to encounter and engage with diverse content. This also means that when assessing the impact of platforms on the diversity of media markets, we need to add methods and factors to the media regulators’ toolbox that may go beyond the traditional framework for assessing dominance.
Such factors can include the balance between platforms and media companies in contractual conditions, in control over data, and in access to sophisticated recommendation algorithms; the level of independence of the media from platforms; and the existence of an equal, level playing field. They can also include factors such as openness toward alternative recommendation metrics and the extent to which users are truly free to choose among different voices and opinions online.
REFERENCES
Adomavicius, G., and Y. Kwon. 2011. “Maximizing Aggregate Recommendation Diversity: A Graph-Theoretic Approach.” Proceedings of the 1st International Workshop on Novelty and Diversity in Recommender Systems (DiveRS 2011), Chicago.
Anspach, N. M. 2017. “The New Personal Influence: How Our Facebook Friends Influence the News We Read.” Political Communication, online first, May 23, 2017. http://www.tandfonline.com/doi/abs/10.1080/10584609.2017.1316329
Bakshy, E., S. Messing, and L. A. Adamic. 2015. “Exposure to Ideologically Diverse News and Opinion on Facebook.” Science 348, no. 6239: 1130.
Balazs, B., N. Helberger, K. Irion, F. Zuiderveen-Borgesius, J. Möller, B. Van Es, B. Van der Velde, and C. De Vreese. 2017. “Tackling the Algorithmic Control Crisis: The Technical, Legal, and Ethical Challenges of Research into Algorithmic Agents.” Yale Journal of Law and Technology 19: 133–80.
Barendt, E. M. 1993. Broadcasting Law: A Comparative Study. Oxford: Clarendon Press.
Bramoullé, Y., and B. Rogers. 2009. “Diversity and Popularity in Social Networks.” Working Paper 09-03, Centre Interuniversitaire sur le Risque, les Politiques Économiques et l’Emploi.
Bucher, T. 2012. “Want to Be on the Top? Algorithmic Power and the Threat of Invisibility on Facebook.” New Media and Society 14, no. 7: 1164–80.
Cohen, J. 1996. “A Right to Read Anonymously: A Closer Look at ‘Copyright Management’ in Cyberspace.” Connecticut Law Review 28: 981.
Cooper, R., and T. Tang. 2009. “Predicting Audience Exposure to Television in Today’s Media Environment: An Empirical Integration of Active-Audience and Structural Theories.” Journal of Broadcasting and Electronic Media: 400–418.
Council of Europe. 1994. “Recommendation No. R(94)13 of the Committee of Ministers to Member States on Measures to Promote Media Transparency.” Strasbourg: Council of Europe.
Council of Europe. 1999. “Recommendation No. R(99)1 of the Committee of Ministers to Member States on Measures to Promote Media Pluralism.” Strasbourg: Council of Europe.
Council of Europe. 2007. “Recommendation CM/Rec(2007)3 of the Committee of Ministers to Member States on the Remit of Public Service Media in the Information Society.” Strasbourg: Council of Europe.
Council of Europe. 2012. “Recommendation CM/Rec(2012)3 of the Committee of Ministers to Member States on the Protection of Human Rights with Regard to Search Engines.” Strasbourg: Council of Europe.
Craufurd-Smith, R., and D. Tambini. 2012. “Measuring Media Plurality in the United Kingdom: Policy Choices and Regulatory Challenges.” Journal of Media Law 4, no. 1: 35–63.
Crawford, K., and T. Gillespie. 2016. “What Is a Flag For? Social Media Reporting Tools and the Vocabulary of Complaint.” New Media and Society 18, no. 3: 410–28.
Danckert, B., and F. J. Mayer. 2010. “Die vorherrschende Meinungsmacht von Google.” MultiMedia und Recht 4: 219–22.
Dawes, S. 2014. “Press Freedom, Privacy and the Public Sphere.” Journalism Studies 15, no. 1: 17–32.
DeVito, M. 2017. “From Editors to Algorithms: A Value-Based Approach to Understanding Story Selection in the Facebook News Feed.” Digital Journalism 5, no. 6: 753–73.
Diehl, T., B. E. Weeks, and H. Gil de Zúñiga. 2016. “Political Persuasion on Social Media: Tracing Direct and Indirect Effects of News Use and Social Interaction.” New Media and Society 18, no. 5: 1875–95.
Dommering, E. J. 2016. Het Verschil van Mening: Geschiedenis van een Verkeerd Begrepen Idee. Amsterdam: Uitgeverij Bert Bakker.
European Commission. 2013. Preparing for a Fully Converged Audiovisual World: Growth, Creation and Value. Brussels: European Commission.
Ferguson, D., and E. Perse. 1993. “Media and Audience Influences on Channel Repertoire.” Journal of Broadcasting and Electronic Media 37, no. 3: 31–47.
Flaxman, S., S. Goel, and J. Rao. 2016. “Filter Bubbles, Echo Chambers, and Online News Consumption.” Public Opinion Quarterly 80: 298–320.
Foster, R. 2012. News Plurality in a Digital World. London: Reuters Institute for the Study of Journalism.
Gerlitz, C., and A. Helmond. 2013. “The Like Economy: Social Buttons and the Data-Intensive Web.” New Media and Society 15, no. 8: 1348–65.
Helberger, N. 2012. “Exposure Diversity as a Policy Goal.” Journal of Media Law 4, no. 1: 65–92.
Helberger, N., K. Karppinen, and L. D’Acunto. 2018. “Exposure Diversity as a Design Principle for Recommender Systems.” Information, Communication and Society 21, no. 2: 191–207.
Helberger, N., K. Kleinen-von Königslöw, and R. Van der Noll. 2014. Convergence, Information Intermediaries and Media Pluralism: Mapping the Legal, Social and Economic Issues at Hand. Amsterdam: Institute for Information Law.
High Level Expert Group on Media Freedom and Pluralism. 2013. A Free and Pluralistic Media to Sustain European Democracy. Brussels: High Level Expert Group on Media Freedom and Pluralism.
Himelboim, I., S. McCreery, and M. Smith. 2013. “Birds of a Feather Tweet Together: Integrating Network and Content Analyses to Examine Cross-Ideology Exposure on Twitter.” Journal of Computer-Mediated Communication 18 (2013): 154–74.
Hoboken, J. V. 2012. Search Engine Freedom: On the Implications of the Right to Freedom of Expression for the Legal Governance of Web Search Engines. Alphen aan den Rijn: Kluwer Law International; Frederick, MD: Aspen Publishers; Biggleswade, Bedfordshire, UK: Turpin Distribution Services.
Huckfeldt, R., P. E. Johnson, and J. Sprague. 2002. “Political Environments, Political Dynamics, and the Survival of Disagreement.” Journal of Politics 64, no. 1: 1–21.
Jun, N. 2012. “Contribution of Internet News Use to Reducing the Influence of Selective Online Exposure on Political Diversity.” Computers in Human Behavior 28, no. 4: 1450–57.
Karppinen, K. 2013. Rethinking Media Pluralism. 1st ed. New York: Fordham University Press.
Kleis Nielsen, R., and S. A. Ganter. 2017. “Dealing with Digital Intermediaries: A Case Study of the Relations between Publishers and Platforms.” New Media and Society, online first, April 17, 2017: 1–18.
Kwon, K., S. Moon, and M. Stefanone. 2015. “Unspeaking on Facebook? Testing Network Effects on Self-Censorship of Political Expressions in Social Network Sites.” Quality and Quantity 49, no. 4: 1417–35.
Lee, J. K., J. Choi, C. Kim, and Y. Kim. 2014. “Social Media, Network Heterogeneity, and Opinion Polarization.” Journal of Communication 64 (2014): 702–22.
Lee, S. K., N. J. Lindsey, and K. S. Kim. 2017. “The Effects of News Consumption via Social Media and News Information Overload on Perceptions of Journalistic Norms and Practices.” Computers in Human Behavior, online first, May 5, 2017.
Loos, M., and J. Luzak. 2016. “Wanted: A Bigger Stick. On Unfair Terms in Consumer Contracts with Online Service Providers.” Journal of Consumer Policy 39, no. 1: 63–90.
McGonagle, T. 2011. Minority Rights, Freedom of Expression and the Media: Dynamics and Dilemmas. Cambridge: Intersentia.
Messing, S., and S. Westwood. 2014. “Selective Exposure in the Age of Social Media: Endorsements Trump Partisan Source Affiliation When Selecting News Online.” Communication Research 41 (2014): 1042–63.
Moore, M. 2016. Tech Giants and Civic Power. London: Centre for the Study of Media, Communication & Power, King’s College London.
Munson, S., and P. Resnick. 2010. Proceedings of the 28th International Conference on Human Factors in Computing Systems (CHI ’10). New York: ACM, 1457–66.
Mutz, D. C. 2006. Hearing the Other Side: Deliberative versus Participatory Democracy. Cambridge: Cambridge University Press.
Napoli, P. 1999. “Deconstructing the Diversity Principle.” Journal of Communication 49, no. 4: 7–34.
Nemeth, C. J. 1986. “Differential Contributions of Majority and Minority Influence.” Psychological Review 93, no. 1: 23–32.
Neuberger, C., and F. Lobgis. 2010. Die Bedeutung des Internets im Rahmen der Vielfaltssicherung. Berlin: Vistas.
Nguyen, T., P. Hui, F. M. Harper, L. Terveen, and J. A. Konstan. 2014. “Exploring the Filter Bubble: The Effect of Using Recommender Systems on Content Diversity.” Proceedings of the 23rd International Conference on World Wide Web (WWW ’14), April 7–11, 2014, Seoul, Korea. New York: ACM.
Ofcom. 2012. Measuring Media Plurality: Ofcom’s Advice to the Secretary of State for Culture, Olympics, Media and Sport. London: Ofcom.
Paal, B. 2012. Suchmaschinen, Marktmacht und Meinungsbildung. Commissioned by the Initiative for a Competitive Online Marketplace. Oxford: Initiative for a Competitive Online Marketplace.
Pariser, E. 2011. The Filter Bubble: What the Internet Is Hiding from You. New York: Penguin.
Pasquale, F. 2015. The Black Box Society: The Secret Algorithms That Control Money and Information. Cambridge, MA: Harvard University Press.
Pew Research Center. 2016. News Use across Social Media Platforms. Washington, DC: Pew Research Center.
Quattrociocchi, W., A. Scala, and C. R. Sunstein. 2016. “Echo Chambers on Facebook.” Discussion Paper No. 877, Harvard Law School, September 2016.
Reuters Institute for the Study of Journalism. 2016. Reuters Institute Digital News Report 2016. Oxford: Reuters Institute for the Study of Journalism.
Reuters Institute for the Study of Journalism. 2017. Reuters Institute Digital News Report 2017. London: Reuters Institute.
Richards, N. 2008. “Intellectual Privacy.” Texas Law Review 87, no. 2: 387–445.
Schmidt, A. L., F. Zollo, M. Del Vicario, A. Bessi, A. Scala, G. Caldarelli, H. E. Stanley, and W. Quattrociocchi. 2017. “Anatomy of News Consumption on Facebook.” Proceedings of the National Academy of Sciences 114, no. 12: 3035–39.
Schulz, W., S. Dreyer, and S. Hagemeier. 2011. Machtverschiebung in der öffentlichen Kommunikation. Bonn: Friedrich-Ebert-Stiftung.
Stroud, N. 2008. “Media Use and Political Predispositions: Revisiting the Concept of Selective Exposure.” Political Behavior 30 (2008): 341–66.
Sunstein, C. R. 2000. “Television and the Public Interest.” California Law Review 88, no. 2: 499–564.
Swapneel, K., J. Bell, N. Arora, and G. Kaiser. 2011. Towards Diversity in Recommendations Using Social Networks. Columbia University Computer Science Technical Report CUCS-019-11. New York: Columbia University.
Valcke, P. 2004. Pluralism in a Digital Interactive Media Environment: Analysis of Sector Specific Regulation and Competition Law. Brussels: Larcier.
Valcke, P. 2011. “Looking for the User in Media Pluralism: Unraveling the Traditional Diversity Chain and Recent Trends of User Empowerment in European Media Regulation.” Journal of Information Policy 11, no. 1: 287–320.
Van Dijk, J., T. Poell, and M. De Waal. 2016. De Platform Samenleving: Strijd om publieke Waarden in een online Wereld. Amsterdam: Amsterdam University Press.
Wauters, E., E. Lievens, and P. Valcke. 2014. “Towards a Better Protection of Social Media Users: A Legal Perspective on the Terms of Use of Social Networking Sites.” International Journal of Law and Information Technology 22, no. 3: 254–94.
Wojcieszak, M., and H. Rojas. 2011. “Hostile Public Effect: Communication Diversity and the Projection of Personal Opinions onto Others.” Journal of Broadcasting and Electronic Media 55, no. 4: 543–62.
Zuiderveen Borgesius, F. J., D. Trilling, J. Möller, B. Bodó, C. H. De Vreese, and N. Helberger. 2016. “Should We Worry about Filter Bubbles?” Internet Policy Review 5, no. 1: 1–16.
CHAPTER 7
The Power of Providence
The Role of Platforms in Leveraging the Legibility of Users to Accentuate Inequality
Orla Lynskey
INTRODUCTION
Factors such as the size, business model, and connection capacity of certain platforms, or digital intermediaries, mean they play a pivotal role in the digital ecosystem. Some platforms can be vital to the functioning of other platforms if they have assets—such as an operating system or user base— that are required by other entities to compete. Such control over access to end-users and their data may constitute a barrier to entry to certain markets or strengthen the market power of a company, leading to dominance. If a dominant platform takes advantage of this power to exploit consumers, for instance by offering unfair terms and conditions, as the German Competition Authority alleges Facebook did (Bundeskartellamt 2016), this may constitute an abuse of dominance sanctioned by competition law. However, as a result of their pivotal position in the digital ecosystem, powerful platforms also exercise a form of power that can be distinguished from this market power, the effects of which are not captured by competition law. Attempts to define and conceptualize this power are underway. Indeed, Cohen noted in 2016 that the successful state regulation of the information economy will, among other things, require an analytically
sound conception of such platform power (Cohen 2016, 374). This chapter contributes to this endeavor by identifying some of the characteristics of this power, which shall be labeled “the power of providence.” It is important to recall that “providence” can be understood as “the foreknowing and protective care of God (or nature etc.); divine direction, control, or guidance” (OED) or “an influence that is not human in origin and is thought to control people’s lives” (Cambridge Dictionary). Certain platforms could be said to exercise a “power of providence” for several reasons. First, these platforms have the ability to identify users and link diverse datasets, giving them powers akin to the “all-seeing” “eye of providence.” The “eye of providence” is a symbol depicting an eye, often surrounded by rays of light and usually enclosed by a triangle. It is said to represent the all-seeing eye of God watching over humanity (Wikipedia 2017). Furthermore, the triangle surrounding the eye of providence serves as a reminder of the multisided vantage point of platforms, which can see—and control interactions between—the users, advertisers, and goods and services providers dependent on it. Second, platforms can “control people’s lives” insofar as the ability to collect and aggregate a large volume of data from a wide variety of data sources allows platforms to influence individuals in ways that have hitherto been classified by some as purely dystopian, for instance through microtargeting for political purposes.
Finally, the architecture of the digital ecosystem and the terminology used to describe its processes (for instance, “machine learning”) may give the impression to individuals that its influence “is not human in origin.” While this is incorrect, and human input is indispensable to the functioning of digital platforms, even knowledgeable users may treat algorithmic decision-making (such as Internet search results) as “neutral” and be deterred from challenging the actions and processes of platforms as a result of the opacity and complexity of their operations. The “power of providence” exercised by platforms raises two problems not captured by competition law. The first is that data-driven profiling can have an exacerbated impact, particularly on protected groups, when conducted by platforms with such power. This, in turn, can accentuate existing inequalities between groups in society. While profiling is now a commonplace practice, it is suggested that profiling by powerful platforms has the potential to be particularly problematic as a result of their ability to leverage their strategic position to enhance the legibility of their users. Legibility refers to attempts (previously by states but by private platforms in this context) to “arrange the population in ways that simplified the classic state functions of taxation, conscription and prevention of rebellion” through methods such as mapping or modeling that provided a “synoptic”
perspective of the population (Scott 1998, 2). Beyond this direct impact on inequality, the second concern is that this power of providence can be used to accentuate inequality indirectly. The influence of “App Stores” over the levels of privacy and data protection offered by the applications they host shall be used as an example to illustrate this point. Data protection law is often assumed to offer a remedy to these concerns. However, as this chapter suggests, while the introduction of the EU’s General Data Protection Regulation (GDPR) offers some hope in this regard, data protection law is not a panacea and some additional regulatory measures may be required in order to tackle the more systemic concerns identified in this chapter.
DIGITAL DOMINANCE AND PROFILING PRACTICES
By collecting and sorting data, dominant platforms can profile users of their platforms, and in this way individuals become visible or legible to these platforms (Taylor 2016). The techniques used to profile or categorize individuals have been clearly outlined by the UK’s Competition and Markets Authority (CMA) in its report on uses of consumer data (CMA 2015), and by the US Federal Trade Commission (FTC) in its report on data brokers (FTC 2014). Data mining, and profiling in particular, can impact on the individual in two distinct ways: first, it may lead to discrimination against individuals and groups on the basis of “protected grounds,” and, second, it may lead to differentiation among nonprotected groups in a way that disproportionately affects communities with certain attributes (such as lower socioeconomic status).
Data Mining and Discrimination
Discrimination occurs when an individual or group is treated in an unfavorable or prejudicial manner on the basis of a “protected characteristic.” In the UK, age, disability, gender reassignment, marriage and civil partnership, pregnancy and maternity, race, religion and belief, sex, and sexual orientation are all protected characteristics (Equality Act 2010). Direct discrimination occurs when an individual or entity discriminates against an individual or group by treating them in a comparably less favorable way because of a protected characteristic (Equality Act 2010, s.13). Indirect discrimination occurs when a practice, policy, or rule applies to all individuals equally yet it has a disparate impact on those with protected characteristics by placing them at a disadvantage (Equality Act 2010, s.19).
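Indirect discrimination of this kind can also be tested for quantitatively. One common diagnostic, borrowed from US employment-selection practice rather than from the Equality Act, is the "four-fifths rule": a facially neutral practice is flagged when a protected group's selection rate falls below 80 percent of the reference group's. The sketch below is illustrative only; the function names and the decision data are hypothetical:

```python
def selection_rates(decisions):
    """decisions: list of (group, approved) pairs -> approval rate per group."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def adverse_impact_ratio(decisions, protected, reference):
    """Ratio of the protected group's approval rate to the reference
    group's; values below 0.8 are a conventional red flag."""
    rates = selection_rates(decisions)
    return rates[protected] / rates[reference]

# Hypothetical lending decisions encoding past disadvantage: the same
# facially neutral rule has approved group B far less often than group A.
history = ([("A", True)] * 80 + [("A", False)] * 20
           + [("B", True)] * 40 + [("B", False)] * 60)

ratio = adverse_impact_ratio(history, protected="B", reference="A")  # 0.5
```

In this hypothetical history, group B's approval rate is half that of group A, well below the conventional 0.8 threshold, even though the rule applied to both groups is formally identical.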
[ 178 ] Society
The ways in which data mining can give rise to discriminatory treatment are often simplified for descriptive purposes. Thus, for instance, Gellert et al. note, “either the difference is made between those who are included and those who are excluded (inter) [from the database], or the differentiation is made within the database (intra)” (Gellert et al. 2012, 15). Taylor similarly affirms that discrimination can operate along these lines when she notes that factors such as gender, ethnicity, and place of origin help “to determine which databases we are part of, how those systems use our data, and the kinds of influence they can have over us” (Taylor 2017, 4). Yet a deeper understanding of how decisions regarding inclusion and exclusion are made, and differentiation within databases occurs, will be helpful in framing our subsequent discussions. Indeed, the data-mining process creates multiple opportunities for discriminatory outcomes. In order to illustrate this, Barocas and Selbst subdivide the data-mining process into component steps and identify ways in which these steps may facilitate discrimination (Barocas and Selbst 2016, 677–93). It is worth summarizing some of their findings. The first step they suggest is to identify the “target variable”—“what data miners are looking for”—and “class labels”—the values or attributes associated with the target variable. The major concern here from a discrimination perspective is that the identification and definition of the “target variable” (e.g., “clean lifestyle”) and the “labels” used to inform that (e.g., regular working patterns; low exposure to pollutants; healthy eater, etc.) might have a greater or lesser impact on groups with protected characteristics than others. 
Similarly, Barocas and Selbst highlight that the data that train data-mining models (training data) may lead to discrimination if the training data itself reflects prejudice (and the model thus learns from this prejudicial example) or if the training data is based on a biased sample of the population (and thus protected groups are under- or overrepresented). For example, certain groups, which have been judged uncreditworthy in the past, and therefore offered disadvantageous terms of credit that they have struggled to meet, will have their difficulties charted and held against them in data-driven lending cycles. As Pasquale notes, “late payments will be more likely, and then will be fed into present credit scoring models as neutral, objective, non-racial indicia of reliability and creditworthiness” (Pasquale 2015, 41). In this way, the profiling actually increases the likelihood of a consumer defaulting. The third way in which Barocas and Selbst suggest discrimination can creep into the system is during “feature selection”—the stage at which choices are made about what attributes to consider in the analysis (for
instance, in determining whether someone is a “healthy eater,” one could focus on overall calorie intake, or whether they eat fresh foods). They suggest that the “reductive representations” of data may “fail to capture enough detail to allow for the discovery of crucial points of interest” (Barocas and Selbst 2016, 688). Thus, as they note, while inferences might be statistically sound, they will nevertheless be inaccurate if they are based on insufficiently granular data. For example, if participation in certain sports is used as a factor to indicate accomplishment in university admissions, it may negatively impact certain ethnic groups who are less likely to have had the opportunity to play these sports, whereas more granular data might reveal that these candidates acquired similar skills in different ways. Indeed, statistical accuracy is not considered sufficient to justify discriminatory treatment of those in protected groups. In Test-Achats, the European Court of Justice (ECJ) was asked to consider the legality of provisions of the EU “Gender Goods and Services Directive.” Article 5(1) of that Directive prohibited the use of sex as a factor in the calculation of an individual’s premiums and benefits while Article 5(2) allowed for a derogation from this prohibition when risk assessment is “based on relevant and accurate statistical data” and sex is a “determining factor.” The Court held that this derogation was incompatible with the prohibition of discrimination on the grounds of sex. Advocate General Sharpston, an advisory member of the Court, had made a similar point in Lindorfer when she stated: it might be helpful to imagine a situation in which (as is perfectly plausible) statistics might show that a member of one ethnic group lived on average longer than another.
To take those differences into account when determining the correlation between contributions and entitlements under the Community pension scheme would be wholly unacceptable, and I cannot see that the use of the criterion of sex rather than ethnic origin can be more acceptable.
This risk of discrimination has been recognized by policymakers. For example, former FTC Commissioner Ramirez highlighted that algorithmic data profiling can “accidentally classify people based on categories that society has decided—by law or ethics—not to use, such as race, ethnic background, gender or sexual orientation” (Ramirez 2013). Moreover, in its report on data brokers, the FTC notes that individuals are divided into segments such as “Urban Scramble” and “Mobile Mixers,” which focus on minority communities and those with lower incomes, both of which incorporate a high concentration of Latino and African American consumers (FTC 2014). The FTC notes that these segments may be “more sensitive” as
a result of their reliance on characteristics such as ethnicity, income level, and education. They could therefore be labeled “redundant encodings,” cases in which membership in a protected class is encoded in other data (Barocas and Selbst 2016, 691; see Figure 7.1). Indeed, data profiling can be used to conceal discriminatory treatment (Barocas and Selbst 2016, 692). Yet such attempts would constitute indirect discrimination and would therefore also be captured by the law (Gellert et al. 2012, 19).
Identify ‘target’ and ‘class labels’
• Target: the outcome data miners are looking for (e.g., good students for university admission).
• Class variables: the values associated with the target (e.g., good academic grades; extracurricular activities).
• Risks: the choice of target or class variables may systematically disadvantage certain groups (e.g., family history of university attendance).
Select features and attributes relevant to class variables
• Feature selection requires decision-making about what attributes to include in the analysis.
• Risks: data are reductive and may not take account of factors that explain statistical variation (e.g., reliance on data regarding success in competitive sport as a measure of extracurricular activities will underrepresent students who have not had this opportunity for institutional (public vs. private school) or financial reasons).
Choose ‘training data’
• Data mining seeks to identify statistical relationships in a dataset and to aggregate findings to create models. Models depend on training data: discriminatory data leads to discriminatory models.
• Risks: models may reproduce existing decision-making prejudice (e.g., when categorizing data for class-label purposes, does a student who excels in sciences but has poor literacy grades have ‘good grades’?) or draw inferences from a biased sample of the population (what if data about certain categories of individuals, e.g., recent immigrants from certain countries, is not yet included in the dataset?).
Figure 7.1: The data-mining process and the risk of discrimination. Source: Based on the categorization identified by Barocas and Selbst (2016).
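The mechanics of a “redundant encoding” can be illustrated with a deliberately simplified sketch. The data, postcodes, and approval threshold below are invented for illustration only: a toy lending rule trained solely on postcode-level repayment rates, which never sees group membership, nevertheless produces divergent approval rates across groups because postcode correlates with group.

```python
# Toy illustration (hypothetical data) of a "redundant encoding":
# a rule that never sees the protected attribute can still produce
# disparate outcomes when another feature (here, postcode) correlates
# with protected-group membership.
from collections import defaultdict

# Each record: (postcode, protected_group, repaid_past_loan)
applicants = [
    ("A1", "group_x", True),  ("A1", "group_x", True),
    ("A1", "group_x", False), ("A1", "group_y", True),
    ("B2", "group_y", False), ("B2", "group_y", False),
    ("B2", "group_y", True),  ("B2", "group_x", False),
]

# "Training": an approval rule learned purely from postcode-level repayment rates.
repaid, total = defaultdict(int), defaultdict(int)
for postcode, _, ok in applicants:
    total[postcode] += 1
    repaid[postcode] += ok

approve = {pc: repaid[pc] / total[pc] >= 0.5 for pc in total}

# Outcome by protected group, even though the rule ignored it entirely.
rates = defaultdict(list)
for postcode, group, _ in applicants:
    rates[group].append(approve[postcode])

for group, decisions in sorted(rates.items()):
    print(group, sum(decisions) / len(decisions))  # group_x 0.75, group_y 0.25
```

Because postcode stands in for group membership, removing the protected attribute from the dataset does nothing to remove the disparity, which is precisely why such encodings can conceal, rather than prevent, discriminatory treatment.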
Data Mining and Differentiation
In addition to discriminating against individuals with protected characteristics, data mining and profiling processes can also differentiate among individuals and groups on the basis of classifications that are not protected. In the digital context, data gathering, via tools such as first- and third-party cookies, and the subsequent mining of that data, can be used to categorize individuals in groupings according to, for instance, their perceived interests or characteristics. Tailored advertisements, goods, and services are then offered to individuals on the basis of this categorization (Article 29 Data Protection Working Party 2010, 5). Indeed, proponents of behavioral advertising regularly assert that advertisers have little interest in the actual identity of an individual: rather, they are interested in categorizing individuals into groupings (e.g., those more likely than others to be interested in folk music) (Lenard and Rubin 2010). Therefore, if the fact that an individual attends marriage counseling is a “feature” that is selected as relevant in assessing creditworthiness as a target variable, a credit card company does not need to be able to identify the individual in order to offer him more expensive credit. Thus, without necessarily identifying the individual he can be “singled out” and categorized in a way that differentiates him from others. Like discrimination, such differentiation can exacerbate existing inequalities. An obvious example of this is differentiation on the basis of socioeconomic status, or proxies for this status. Taylor highlights that the greatest burden of dataveillance has always been borne by the poor (Taylor 2017, 2). For instance, it has been documented that data-driven law enforcement strategies have led to the overpolicing of poorer neighborhoods (see, for instance, Lum and Isaac 2016). Moreover, in addition to documenting this greater and higher-stakes surveillance, Madden et al.
demonstrate that poor Americans are more vulnerable to data-mining processes as a result of the devices that they use to access the Internet and the pattern of their “privacy-relevant” behavior (Madden et al. 2017, 4). For instance, they highlight that 63% of those living in households earning less than $20,000 per year mostly use their mobile phones to go online compared to just 21% of those in households earning $100,000 or more per year. This is relevant as mobile phones are less secure than other devices, such as laptops or desktops (Madden et al. 2017). The practical impact of such differentiation on the poor is recognized by the FTC when it states that big data mining “can injure the economic stability and civil rights of the poor, such as when they are targeted for predatory financial products, charged more for goods and services online, or profiled in ways that limit their employment and educational opportunities” (FTC 2016, 9–11). The
UK’s CMA also recognizes that data mining can be used to differentiate among consumers based on the quality or the price of goods and services offered to them, stating that the collection of consumer data may enable firms to make judgments about the lowest level of quality needed by consumers/groups of similar consumers. This may enable a firm to engage in quality discrimination where quality differences are not reflected in the price of goods or services. (CMA 2015, 93)
Platforms could facilitate such practices by restricting the products that are displayed to consumers or changing the order in which they are listed to display poorer quality products first in some circumstances (Acquisti 2010, 19). According to Borgesius and Poort, despite several high-profile incidents of personalized pricing, such pricing practices seem to be used relatively rarely (Borgesius and Poort 2017, 3). The precise welfare effects of such practices are ambiguous and need to be assessed on a case-by-case basis. However, it is possible that an individual will pay more than required for goods or services, allowing the company concerned to extract more profit from their offerings and thus entailing a “transfer of wealth from the pockets of consumers to the pockets of operators” (House of Lords 2016, 75). Borgesius and Poort highlight the factors explaining this reluctance to use personalized pricing practices, most evidently that consumers perceive such practices to be unfair and thus companies fear consumer backlash if found to be differentiating in this way. Turow’s work provides vivid examples of this unfairness. For example, he explains how companies can sort individuals into categories such as “targets” or “waste” using data-mining techniques and offer discounts to them on this basis. Contrary to distributive justice intuitions, those with perceived higher spending capacity and reserve prices for products (“targets”) are offered more significant discounts than price-sensitive consumers or those with lower spending capacity (“waste”) in order to entice targets to become regular consumers (Turow 2011, 108–10). As Borgesius and Poort suggest, and as discussed below, it is the surreptitious nature of such practices, when compared to signposted discounts for particular groups such as the elderly or students, that contributes to the public discomfort with these practices (Borgesius and Poort 2017, 6).
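The “targets”/“waste” logic Turow describes can be made concrete with a simplified sketch. The segment rule, reserve-price estimates, and discount level below are hypothetical assumptions for illustration, not any firm’s actual practice; the point is only that the profiled consumer with the higher estimated spending capacity receives the larger discount.

```python
# Hypothetical sketch of segment-based pricing: profiled consumers with
# higher estimated reserve prices ("targets") receive an enticement
# discount, while price-sensitive "waste" pays list price. All names
# and numbers are invented for illustration.

BASE_PRICE = 100.0

profiles = {
    "user_a": {"estimated_reserve": 140.0},  # high spending capacity -> "target"
    "user_b": {"estimated_reserve": 80.0},   # price-sensitive -> "waste"
}

def segment(profile):
    """Classify a profile relative to the list price."""
    return "target" if profile["estimated_reserve"] >= BASE_PRICE else "waste"

def offered_price(profile):
    """Targets get a 20% enticement discount; 'waste' pays list price."""
    return BASE_PRICE * (0.8 if segment(profile) == "target" else 1.0)

for user, profile in sorted(profiles.items()):
    print(user, segment(profile), offered_price(profile))
```

Run on these invented profiles, the wealthier “target” is offered 80.0 while the price-sensitive consumer pays 100.0, which is the inversion of distributive-justice intuitions the text describes.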
The Ability to Create Perceptions
The discriminating and differentiating impact of data mining not only compounds and exacerbates existing inequalities but also has the potential to
create further inequality by distorting perceptions. Indeed, as Helberger et al. argue, much of the concern regarding the influence of platforms, or “gatekeepers,” lies in their control over access to individuals and the way in which the relationship between gatekeepers and users is shaped, rather than their control over access to information as such (Helberger, Kleinen-von Königslöw, and van der Noll 2015, 51). This echoes Zuboff’s claim that if “power was once identified with the ownership of the means of production, it is now identified with the ownership of the means of behavioural modification” (Zuboff 2015, 82). A good example of this is search-ranking mechanisms based on data mining. Sweeney’s research indicates that a Google search for Caucasian names presents more neutral results than a search for typically African American names (Sweeney 2013).
THE “POWER OF PROVIDENCE”: THE AGGRAVATING EFFECT OF DIGITAL DOMINANCE
Data-driven discrimination and differentiation are not confined to situations of digital dominance. However, this chapter suggests that the effects of such discrimination and differentiation may be exacerbated in the presence of dominance as a result of the privileged position of dominant companies in the digital ecosystem—their “power of providence.” In particular, this privileged position results in superior data-mining capacity and greater information and power asymmetries between individuals and dominant digital platforms.
The “Power of Providence” of the Digitally Dominant
Specific attention ought to be paid to digital platforms that operate as “market makers—or orchestrators—in the digital ecology value chain” (Mansell 2015, 25). The size, business model, and connection capacity of these market makers mean that they play a vital role in the digital ecosystem. The EU Commission has suggested that in the near future “only a very limited part of the economy will not depend on [online platforms]” (Ansip 2015). Platforms enjoy this pivotal position when the functioning of other platforms or services is dependent on them, such as when they have assets that other entities need in order to compete (for instance, an operating system or a user base) (TNO Report 2015, 14). Indeed, in its 2017 interim report on the implementation of its Digital Single Market strategy, the European Commission notes that there is widespread concern among
businesses that some platforms may engage in practices such as favoring their own products or services over those of other businesses and restricting access to, and use of, data directly generated by a business’s activities on a platform (EU Commission Communication 2017). Such an allegation of discrimination—the systematic favoring of its own service over that of other businesses—lies at the heart of the European Commission’s antitrust investigation against Google, which culminated in a €2.42bn fine for the company (EU Commission—Antitrust 2017). The Commission claims that Google has systematically given prominent placement to its own comparison shopping service in its Google search engine results and demoted rival comparison shopping services results in its generic search-engine ranking. Critics have challenged the decision on the basis that it enters the unprecedented territory of establishing a principle that a company may not favor its own products over those of competitors (Lamadrid 2017). Leaving the merits of the case aside, it does illustrate the ability—and willingness—of the EU Commission to sanction the practices of powerful digital companies. Such intervention is however ordinarily only justified when the company concerned enjoys a position of dominance, assessed in accordance with the Commission’s guidance on market definition (EU Commission 2007), and when the practice concerned would lead to a decrease in “consumer welfare” (European Commission 2009). Moreover, what consumers may define as a market with a dominant player—for instance, Facebook in the market for a social networking site—often does not reflect how the market is defined for competition law purposes. Findings of market power focus solely on the economic power of the company, which may be distinct from its power over data flows or its power to influence opinions. 
For example, instinctively many consumers would assume that Google is dominant in the EU market for organic search engine services. However, it could be argued that no such market exists for several reasons: for instance, that organic search results compete with paid search results, or that integrated search tools in a social networking service compete with Google’s search engine. If accepted, the market would be broader than a market for “organic search results” and could envisage companies such as Facebook as competitors of Google. This in turn would make a finding of market power on Google’s part less likely. Equally, it could be argued that even if a market for organic search results exists, and Google has a market share in excess of 90% of this market in Europe, it is not in a position of market power due to the low barriers to entry in the market and the fact that “competition is just a click away,” a mantra in the technology sector. These empirical assessments are, of course, vigorously contested. However, they illustrate the point that although competition
law is the only legal instrument available to harness the excesses of private power and dominance, given its definition of dominance it may be of limited utility in tackling the inequalities identified earlier. This claim is further supported by the fact that the harms competition law seeks to address are primarily economic harms, and there has been significant resistance to expanding the consumer welfare paradigm to incorporate noneconomic harms (Easterbrook 1984; Odudu 2010). Therefore, while there is a lively debate about whether practices such as “price discrimination” on the basis of personalization are captured by competition law (Townley, Morrison, and Yeung 2017), many of the inequalities generated by differentiation, discrimination, and power over opinion formation are not captured by competition law as a result of these constraints (the need for “dominance” and detriment to “consumer welfare”). Yet, as discussed below, these inequalities are exacerbated by the pivotal position—the power of providence—of digital platforms that control access to infrastructure and users.
The Impact of the “Power of Providence”
The problems that data mining entails are not exclusive to the online environment. On the contrary, current business practices indicate that the line between “offline” and “online” practices is difficult to draw when it comes to the creation of profiles. Practices such as “onboarding,” whereby a data broker adds offline data into a cookie to enable advertisers to use this offline activity information about consumers to determine what online advertisements to deliver to them, indicate that there may be little value in taking a distinct approach to the regulation of digital gatekeepers (CMA 2015). Indeed, in the United States, a digital rights advocacy group—the Electronic Privacy Information Center (EPIC)—alleges that Google is using credit card data to track whether the online advertisements it is offering lead to (offline) in-store purchases and that users have not been provided with adequate information about how this practice operates and how to opt out of the practice (EPIC 2017). Given these blurred boundaries, regulators must consider whether data mining by dominant digital firms merits particular attention and, if so, why. For instance, is offering an individual a beauty product at a certain time in the day based on data mining techniques different from the practice of selling chocolate bars and snacks at supermarket checkouts? Both, it could be argued, are psychological ploys to encourage sales. It is suggested here that dominant firms in the digital ecosystem warrant special attention because of the number of individuals with whom they have
direct contact and the extent of the data processed regarding these individuals. Therefore, while all entities—irrespective of their reach, or the extent of their data processing—might potentially have a negative impact on individuals, the actions of larger entities—with wider reach and greater data-processing capacity—have the potential to have a more significant impact on societal interests and individual rights. Indeed, the ECJ recognizes this implicitly in its Google Spain judgment: it highlights that both the ubiquity of Google’s search engine and the quantity of personal data processed are relevant to the extent of the interference with the individual’s rights when their personal data is made available through Google’s search results. In other words, the broader the reach of a service and the more personal data processed, the greater the interference with individual rights. This, in turn, justifies paying particular attention to the actions of digitally dominant firms. Indeed, an analogy could be drawn here with the competition law provisions discussed earlier. Under EU competition law, dominant companies are said to have a “special responsibility” such that practices, like the imposition of exclusive dealing obligations on consumers, that would be lawful for a nondominant company would be unlawful if pursued by a dominant company. This “special responsibility” is justifiable on the grounds that the actions of a dominant firm have a greater impact on competition than those of nondominant firms. Similarly, it is argued here that the actions of dominant digital firms can also have a greater impact on the rights and interests of individuals than those of nondominant firms and that this may justify the imposition of specific duties on them that may not be appropriate for nondominant firms. For instance, Facebook has over 2 billion users as of mid-June 2017, approximately 1.2 billion of whom use their Facebook account on a daily basis (Constine 2017).
Facebook’s data processing potential is further enhanced as a result of its partnerships with a variety of data brokers, including some of the world’s largest. For instance, Facebook partners with Acxiom, which claims to hold data on 700 million people, and Datalogix, which holds data on $2 trillion worth of offline purchases. Facebook’s extensive direct access to users as well as its data-processing capability makes it a desirable trading partner for these data brokers and gives it a superior ability to profile individuals based on what Pasquale labels a “self-reinforcing data advantage” (Pasquale 2013, 1015). While many large platforms claim that the quantity of data they process is not decisive to their success, and that it is rather their use of this data that is significant, this does raise the question of why such data-sharing partnerships are necessary and whether they are compliant with the principle of “data minimization” enshrined in many data protection laws globally.
The concern here is therefore not simply the “digital”; it is the combination of the digital with power. As suggested previously, power in this context may overlap with the concept of “market power” used in competition law and economic regulation; however, it is not synonymous with market power in terms of how it is defined or measured. Indeed, one of the great challenges in this regard is that we lack even the language to describe this private power and, as a result, we fall back on the language and concepts of economic regulation and competition law. This point has not gone entirely unnoticed and, in part, explains the ongoing debates over whether competition law needs to be redesigned to remain fit for purpose in a digital era, and in particular whether a new concept of “market power” is needed (Autorité de la Concurrence and Bundeskartellamt 2016). This chapter has labeled such power the “power of providence” as part of the movement to decouple power from the limitations of antitrust “market definition” and to advocate for a reconceptualization of power in response to the quasi-regulatory role played by private platforms in society. Yet, as demonstrated in what follows, however one labels this power, its impact and effects are already tangible. Platform power both directly and indirectly exacerbates existing inequalities. It does so directly by aggravating the asymmetries between those who process personal data and those who are rendered transparent by this process to the detriment of the latter. It does so indirectly as powerful platforms, in practice, determine the terms and conditions offered by dependent service providers (such as applications in an “App store”) to their users.
Exacerbating Asymmetries of Information and Power
The differential pricing practices referred to earlier are one example of the asymmetry of power and information between individuals and platforms. Information asymmetries between the individual and the dominant platform enable the platform, for instance, to attempt to influence the political opinions of the individual or to engage in differential pricing practices based on an estimation of an individual’s reserve price for a product or service. Individuals will perceive such practices as unfair, and they may be exploitative—for instance, in the pricing context, by extracting higher rents from individuals when they are desperate or vulnerable (e.g., payday loans with excessive interest rates, or the more banal hike in taxi fares when a phone battery is dying). One of the reasons why such practices are unfair is that their operation remains opaque while the individual is simultaneously
rendered transparent. This is highlighted by Helberger et al., who—writing in the media plurality context—consider it problematic that users have no knowledge of the selection criteria on which processes of implicit personalization are based and that they are not provided with any tools to change them or “turn them off.” They are therefore unable to assess or ascertain for themselves how limited their news selection is (Helberger, Kleinen-von Königslöw, and van der Noll 2015, 34; see also Helberger, this volume). Pasquale also highlights this opacity, stating that there may be “scarlet letters emblazoned on our digital dossiers” that we may not even know about (Pasquale 2015, 56). However, this lack of knowledge is not the sole problem. Power asymmetries persist even when individuals are, for instance, given more information or the ability to view and amend the parameters that are used to generate their profiles. When individuals are co-opted into the process in this way, it does not follow that they will be able to challenge the factors influencing a particular profile (for instance, the choice of training data or “feature selection,” to use the terminology above). If an individual is categorized in a manner that he or she disagrees with, for instance, “diabetic lifestyle” or “leans left” (FTC 2014, 21), a profiler may be able to argue that the inference is simply a matter of opinion rather than fact (Pasquale 2015, 32). Moreover, even if an individual knows that certain characteristics are valued, or punished, more than others when determining the terms and conditions on which goods and services are offered, this may not help them decipher how to act. For instance, according to the CMA, some grocery retailers that offer motor insurance use purchasing data from loyalty schemes to “draw inferences about household characteristics—for instance, to offer discounts to households that appeared from their shopping habits to be relatively low risk” (CMA 2015, 45).
However, as the endless debates over whether butter is better for you than margarine illustrate, even if an individual were to try to conform to a profiler’s “ideal,” doing so may not be possible.
De Facto Influence over the Data-Processing Practices of Service Providers
A final reason that dominant digital platforms merit particular attention is that, given the dependence of other content providers on their platform, in practice they can exercise a decisive influence over the levels of fundamental rights, such as data protection and privacy, enjoyed by individuals. For instance, the CMA acknowledges that operating systems (such as Google’s Android, or the Apple OS) are responsible for the “Application Programming Interfaces (APIs) which dictate how the software and
hardware interact—including what information the app can access.” These APIs control the release of information according to the privacy controls in place at the operating system level (CMA 2015, 42). In other words, it is the operating system that has the final say on the minimum level of data-processing standards offered by the applications it hosts. This means that operating systems could, in theory, exclude applications with substandard data use policies from their platforms. However, it would seem that platforms are doing very little to promote key data protection principles, such as data minimization (Article 6(1)(c) Directive 95/46 EC), among application providers. For example, a 2014 survey conducted by the Global Privacy Enforcement Network (GPEN) discovered that one-third of all applications requested an excessive number of permissions to access additional personal information (CMA 2015). Moreover, the US Federal Trade Commission (FTC) has taken action against applications such as Brightest Flashlight and Snapchat in recent years for misrepresenting how the personal data they gather is used (CMA 2015, 123–24). This is not to say that platforms are entirely inactive when it comes to promoting privacy and data protection. For instance, recent reports suggest that Google Play—the App store for Android users—culled applications from its platform on the basis of privacy and data protection concerns (Abent 2017). However, their ostensible “lowest common denominator” approach to these rights influences the extent to which these rights can be enjoyed by their users in practice. Indeed, Google Play’s cull appeared only to remove egregious violators of rights from the App store, for example applications requesting sensitive permissions—such as unnecessary data from cameras or microphones—and that did not comply with the basic principles set out in the Play Store privacy policy.
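How an operating system could, in principle, enforce data minimization at the gate can be sketched as a simple permission audit. The category baselines, permission names, and app names below are entirely hypothetical (no real store exposes this exact policy); the sketch only shows the kind of check that flags the “excessive permissions” the GPEN survey found.

```python
# Hypothetical sketch of a platform-side data-minimization screen:
# flag hosted apps whose requested permissions exceed a baseline of
# what their category plausibly needs. Baselines and app names are
# invented for illustration.
NEEDED = {
    "flashlight": {"camera_flash"},
    "messaging": {"contacts", "microphone", "network"},
}

def excessive_permissions(category, requested):
    """Return the permissions requested beyond the category baseline."""
    return set(requested) - NEEDED.get(category, set())

apps = [
    ("BrightTorch", "flashlight", {"camera_flash", "location", "contacts"}),
    ("ChatLite", "messaging", {"contacts", "network"}),
]

# Only apps with at least one excessive permission are flagged for review.
flagged = {name: excessive_permissions(cat, perms)
           for name, cat, perms in apps
           if excessive_permissions(cat, perms)}
print(flagged)  # the flashlight app is flagged for location and contacts
```

On these invented inputs, only the flashlight-style app is flagged, mirroring the chapter’s point that a platform could screen out requests for data a given class of application has no evident need for.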
Dominant platforms can, furthermore, make it difficult for individuals to take steps to defend their own rights, for instance by preventing users from using ad-blockers or by excluding privacy-enhancing technologies (PETs) from their platforms. Indeed, the provider of the PET Disconnect alleges that it has been unjustly excluded from Android's Google Play application store. In its defense, Google has stated (informally) that it applies its policies consistently to all applications and that it has "long prohibited apps that interfere with other apps—such as altering their functionality, or removing their way of making money." It also notes that many other PETs that comply with its policies remain available in the Google Play Store. While the impact on rights might be minimal given the availability of competing PETs, the lack of transparency regarding the Google Play Store's exclusion policy is striking. This also provides a vivid reminder of
the commercial imperatives driving Google's Play Store operations: the PET is not viewed as an application that helps ensure the effectiveness of individual rights by allowing individuals to control how their personal data are processed. Rather, it is viewed as a threat to Google's bottom line: by facilitating the exercise of these rights, it threatens the revenue streams of other apps dependent on data processing for their commercial viability. In light of these enhanced concerns caused by platform power—the direct impact on asymmetries of power, and the indirect shaping of the de facto rights protection enjoyed by individuals—it is appropriate to ask whether a "special responsibility" should be placed on powerful platforms, akin to that placed on dominant companies by competition law. At present, given the limited role played by competition law in mitigating these potential harms, the primary body of rules governing data-driven practices and their consequences, such as differentiation, is data protection law. However, as shall now be discussed, while data protection law offers some mechanisms to mitigate the negative impact of data-mining practices, it also has its limits.
THE ROLE AND LIMITS OF DATA PROTECTION LAW
Data protection legislation applies when "personal data" are "processed" (Articles 2(a) and 2(b), Directive 95/46 EC). "Personal data" and "processing" are broadly defined and, as a result, many of the potential harms of data processing by gatekeepers may be captured by data protection legislation. For instance, the concentration of data in the hands of a powerful platform may, like any other concentration of data, entail a heightened risk of data security breach (Cormack 2016, 17). Provisions in data protection legislation (theoretically) mitigate such risks by requiring those responsible for data processing to respect certain safeguards and to ensure that data-processing systems are structurally robust. Yet, to date, data protection legislation has proven to be of limited utility in regulating and curbing the excesses of data mining. Indeed, phenomena such as "Big Data" processing have emerged despite the ostensible tension between their operational principles and the foundational principles of data protection law. While the introduction of a new legislative framework—the General Data Protection Regulation (GDPR)—will undoubtedly improve the effectiveness of this legal regime, as discussed in what follows, it would be erroneous to place too much hope in its provisions.
The Scope of Data Protection Rules
In the EU, a complex regulatory regime sets out a framework of checks and balances with which data-processing operations must comply to be lawful. Like its predecessor, the 1995 Data Protection Directive, the EU's GDPR applies to the automated (or systematic) processing of personal data. Once within the scope of this legal framework, the entities that determine how and why personal data are processed—the brains behind the personal data-processing operations—become "data controllers." As such, data controllers must ensure that the data-processing operation has a relevant legal basis; that it complies with the data-processing principles and safeguards; and that the rights of individuals—"data subjects"—are respected. Given the prevalence of commercial data mining, and the dearth of jurisprudence relating to this practice, a preliminary query is whether the data-mining practices of powerful platforms fall within the scope of the data protection rules at all.

Personal data is defined as any information that relates to someone who is identified or identifiable on the basis of that data (Article 4(1) GDPR). There are therefore three constituent elements of "personal data": it is (1) any information that (2) relates to (3) an identified or identifiable person. The Article 29 Working Party, an advisory body on data protection matters comprising representatives of national data protection agencies, has suggested that information "relates to" an individual when, among other things, the purpose of the data is to "evaluate, treat in a certain way or influence the status or behaviour of an individual" or the data is "likely to have an impact on a certain person's rights and interests" (Article 29 Working Party 2007, 9–12). The Article 29 Working Party thus eschews narrower understandings of the words "relate to" that would require the focus of the information to be on the data subject, or the data to have a clear link to the private life of the individual.
It follows from the Working Party's definition that the classification or categorization of individuals through data-mining practices, for instance according to their perceived spending capacity or future interests, "relates to" those individuals, as this categorization determines how they will be treated (for example, which advertisements they will be shown, or which music suggestions they will be offered). However, the ECJ has cast doubt on such a broad interpretation of information "relating to" an individual. In YS the Court was asked to consider whether the data provided by an applicant for a residence permit, and the legal analysis of the applicant's status in relation to that residence permit contained in a "minute" drawn up by the competent immigration officer, constitute personal data. It was not contested before the Court that the data contained in the minute about the applicant (such as name, date of birth, gender, language,
etc.) constituted personal data, and the Court confirmed this finding (para 38). The Court held, however, that the legal analysis in the minute—which examined the applicant's data in light of the relevant legal provisions—did not constitute personal data (para 39). It reasoned that such legal analysis is "not information relating to the applicant for a residence permit, but at most, in so far as it is not limited to a purely abstract interpretation of the law, is information about the assessment and application by the competent authority of that law to the applicant's situation" (para 40; emphasis added). It justified this finding on the basis that it was borne out by the objective and general scheme of the Data Protection Directive (para 41). The Court's reasoning provides food for thought when transposed to the operations of dominant digital firms. It seems to suggest that the data "provided to" these firms, such as the data on an individual's browsing activities, would, like the data provided by the applicant for the residence permit, constitute personal data. However, the application of the company's algorithm to that data through data-mining practices—the equivalent of the application of legal provisions to that data through legal analysis—would not constitute personal data. Indeed, Korff cautions that companies could use such reasoning in order to remove profiling from the scope of application of the data protection rules (Korff 2014). He notes that:

After all, a profile, by definition, is also based on an abstract analysis of facts and assumptions not specifically related to the data subject—although both are of course used in relation to the data subject, and determine the way he or she is treated.
Indeed, the applicants, several Member State governments and the European Commission in YS had argued that the legal analysis should constitute personal data as it refers to a specific natural person and is based on that person’s situation and individual characteristics (para 35). The recent Opinion of Advocate General Kokott, an advisor to the Court, in the Nowak case appears to offer a counterbalance to the Court’s findings in YS. In Nowak the Court is asked to consider whether an examination script constitutes personal data. The advocate general clearly opines that it does, reasoning that an examination script links the solutions it contains with the individual candidate who produces the script. As such, the “script is a documentary record that that individual has taken part in a given examination and how he performed” (para 21). In particular, she highlights that a script is intended to assess the “strictly personal and individual performance” of a candidate (para 24). She adds to this by suggesting that the comments of an examiner on a script are also personal data (para 63),
noting that "the purpose of comments is the evaluation of the examination performance and thus they relate indirectly to the examination candidate" (para 61). Despite the clear analogy between the examination corrections in Nowak and the legal analysis in YS, the advocate general does not attempt to reconcile the two. It is therefore suggested that whether a profile itself "relates to" an individual remains an open question, even following the entry into force of the GDPR. A further bone of contention in the context of profiling is whether the personal data relate to someone who is identifiable. When determining whether someone is identifiable, recital 26 GDPR specifies that "account should be taken of all the means reasonably likely to be used, such as singling out, either by the controller or by another person" to identify the data subject directly or indirectly. Objective factors, such as cost, the time required, and the available technology, should be taken into consideration when making this assessment. The most ardent proponents of online behavioral advertising have argued that advertisers are not interested in a user's actual identity; rather, they simply wish to sort users into groups of those who are more likely than average to have certain interests or capabilities (Lenard and Rubin 2010). This point has been acknowledged even by those seeking more effective regulatory responses to profiling (Barocas and Nissenbaum 2014). Indeed, Taylor highlights that data injustice increasingly tends to occur at the collective level. She notes:

New technologies tend to sort, profile and inform action based on group rather than individual characteristics and behaviour, so that in order to operationalize any concept of data justice, it is inevitably going to be necessary to look beyond the individual level. (Taylor 2017, 14)
An example may serve to illustrate this point. Facebook categorizes its users on the basis of their user profiles and activity and then offers to connect advertisers to users with profiles that match their needs. For instance, Facebook might estimate that a user is in her mid-30s, is based in London, is interested in cycling, and works as a professional. It might therefore show this individual advertising for spinning studios in central London on its platform. Facebook does not provide the user's details to the advertiser. The advertiser may therefore argue that, even if the user clicks through to its spinning advertisement, it would not be able to identify the user on the basis of the user's IP address alone. Moreover, given the broad parameters of the profile, it might argue that even if it did have the profile and an IP address it would not be able to identify an individual on that basis.
Facebook, however, would likely be processing personal data when connecting the advertisement to a user because, even if it categorized individuals in broad terms, it has the technical capacity to link this broad profile back to an individual. In Breyer the ECJ adopted a wide interpretation of identifiability. It found that a dynamic IP address could constitute personal data if the provider of a publicly available website or online media service has the legal means available to it to link that dynamic IP address with additional data to identify the individual. What is noteworthy about Breyer is the Court's assessment of what means are "likely reasonably to be used" by a data controller to identify a data subject. In that instance, in order to identify the individual behind a dynamic IP address the website operator would need to contact a competent authority (in that case, a cybercrime authority), which would in turn need to contact an Internet service provider in order to obtain the additional identifying information (para 47). The mere prospect of connecting data with identifying data—even if the process for doing so is laborious—ostensibly renders that data "identifiable." This broad precedent, when coupled with the Opinion of the Advocate General in Nowak, therefore opens up the possibility that a profile—the application of a data-mining formula to particular personal data—constitutes personal data.
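The asymmetry in the spinning-studio example—the advertiser sees only broad categories and aggregate counts, while the platform retains the capacity to link a profile back to a named account—can be sketched as a toy model. This is not Facebook's actual system; all field names and the matching logic are hypothetical, chosen only to make the identifiability argument concrete.

```python
# Toy model of profile-based ad matching. The platform keeps the identity
# link internally; the advertiser receives only aggregate results.

from dataclasses import dataclass, field

@dataclass
class User:
    user_id: str                           # internal identifier, never shared
    profile: dict = field(default_factory=dict)

class Platform:
    def __init__(self):
        self._users = {}                   # the platform retains the identity link

    def add_user(self, user: User):
        self._users[user.user_id] = user

    def serve_ad(self, targeting: dict, ad: str) -> dict:
        """Show `ad` to every user whose broad profile matches `targeting`;
        report only an impression count back to the advertiser."""
        matches = [u for u in self._users.values()
                   if all(u.profile.get(k) == v for k, v in targeting.items())]
        return {"ad": ad, "impressions": len(matches)}

platform = Platform()
platform.add_user(User("u1", {"age_band": "30-39", "city": "London",
                              "interest": "cycling"}))
platform.add_user(User("u2", {"age_band": "50-59", "city": "Leeds",
                              "interest": "golf"}))

report = platform.serve_ad({"city": "London", "interest": "cycling"},
                           "Spinning studio, central London")
print(report["impressions"])   # 1
```

Even in this minimal sketch the broad targeting categories never leave the platform detached from identity: delivering the impression requires the platform to resolve the match down to a specific internal account, which is precisely the linking capacity that Breyer suggests makes the data "identifiable."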
The Substantive Rights Provided by Data Protection Law
Once within the scope of the data protection regime (because personal data are being processed), the utility of the rights available under that framework to combat profiling remains hotly debated. According to the GDPR, data subjects have a right to receive specified information regarding the processing of their personal data. Among other things, the individual should be informed of the existence of automated decision-making, including profiling, and provided with "meaningful information about the logic of this automated decision-making as well as its significance and envisaged consequences for the data subject" (Articles 13(2)(f) and 14(2)(g) GDPR). Such information is also available to the data subject when exercising his or her right of access pursuant to Article 15(1)(h) GDPR. Article 22(1) GDPR also provides that the individual shall have the "right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her." There are, however, a number of broad exceptions to this right. It does not apply if the automated decision-making is necessary to enter into or perform a contract between the individual and
the controller, if the automated decision-making is authorized by law, or if it is based on the explicit consent of the data subject. However, where the right does not apply because the automated decision-making is based on consent or is necessary to enter into or perform a contract, the data controller must "implement suitable measures to safeguard the data subject's rights and freedoms and legitimate interests, at least the right to obtain human intervention on the part of the controller, to express his or her point of view and to contest the decision" (Article 22(3) GDPR). Whether these provisions amount to a "right to explanation" of profiling for the data subject has been the subject of vigorous doctrinal debate. On the one hand, Wachter et al. have asserted that pursuant to these GDPR provisions the individual has, at best, a right to be informed ex ante about the general functionality of an automated decision-making system, as opposed to a right to receive an ex post explanation of specific automated decisions (Wachter, Mittelstadt, and Floridi 2017). Powles and Selbst, among others, roundly contest these findings. They contend that the distinctions drawn between system functionality and specific decisions, and between ex ante and ex post explanations, fail to withstand scrutiny. They suggest that it is "hard to imagine a useful or meaningful explanation of system functionality that does not allow a data subject to map inputs to outputs, and to figure out what will happen in her case" (Powles and Selbst 2017). They therefore conclude that if you can explain system functionality, you can usually explain specific decisions. While this is true, it could be argued that system functionality is comparable to the relevant legal provisions applied in YS, while a specific decision is comparable to the legal analysis, which applies a formula to particular facts.
It may be in the data subject's interest to determine how a formula (whether a law or an automated decision-making algorithm) applies to her data, and not all individuals will have the skills required to work backward from the automated decision to the original personal data processed in order to decipher how a particular formula has been applied to their situation. The distinction suggested by Wachter et al. is thus not entirely irrelevant. Nevertheless, the broader point raised by Powles and Selbst remains critical: we have yet to establish what exactly "meaningful information about the logic" of automated decision-making means, and it is necessary to consider this in the wider context of the changes introduced by the GDPR. The narrow, formalistic reading of critical individual rights proposed by Wachter et al. also seems to militate against the direction of travel of EU data protection law. One could point to many factors indicating that a more generous interpretation of these GDPR rights might be preferred: for instance, one of the principal aims of data protection reform
was to enhance the effectiveness of the rights of data subjects, while comparisons with the predecessor Directive remain of limited value given that the EU Charter of Fundamental Rights was not in force for much of the Directive's existence. The precise contours of these rights will therefore likely only be clarified once the new rules have been interpreted by the new EU data protection body—the European Data Protection Board—or by the ECJ. Rather than requiring the initiative of an individual data subject, the "right not to be subject to a decision based solely on automated data processing" ostensibly prohibits such automated decision-making in some circumstances. If so, this would differentiate this right from the other data protection rights granted to the individual and potentially render it more effective in practice. As Blume has noted, other rights, such as the right of access and the right to delete, presuppose "the initiative of the data subject, which in practice will not often be taken" (Blume 2014, 270). Nevertheless, even the most favorable interpretation of this right for a data subject is just one small piece in the jigsaw puzzle of counterbalancing digital dominance. These provisions need to be overseen, and such oversight is an onerous burden to place on individuals, particularly given that an individual-centric system of oversight may exacerbate existing inequalities. Put bluntly, those who are poor in skills, time, or other resources may not have the same capacity to exercise their rights as others who are, for instance, more technologically literate or better informed. It is for this reason that the GDPR's increased focus on "back-end" enforcement is to be welcomed. The GDPR seeks to enhance the effectiveness of the rights of individuals but, in so doing, it avoids the pitfalls of the past and distributes the onus for rights protection more evenly across stakeholders.
The introduction of a general principle of accountability in the GDPR, pursuant to which the data controller must be able to demonstrate compliance with key data protection safeguards such as fair and lawful data processing, is a critical step in this regard. This principle of accountability will encourage data controllers to give adequate consideration to their data protection compliance mechanisms while the GDPR’s increased emphasis on effective enforcement, through mechanisms such as enhanced sanctions, provides an added incentive to take these obligations seriously.
CONCLUSIONS
This chapter has suggested that commonplace data-mining techniques are not neutral in their application and can exacerbate existing societal
inequalities, along legally protected grounds (such as race, gender, and sexuality) as well as other grounds such as socioeconomic status. The techniques in question are not used solely by powerful platforms. However, it has been suggested that the "power of providence" of digital platforms—stemming from the volume and variety of data they process, as well as their reach—means that the effects of these data-mining techniques may be particularly pernicious in this context. This is for two reasons. First, such practices can widen existing power and information asymmetries between the individual and the data controller, widening the gulf between the disadvantaged individual and the powerful platform even further. Second, powerful platforms have the capacity to influence the data-mining practices of providers in the broader Internet ecosystem: this capacity can be used to push for higher standards of rights protection or, conversely, as is presently the case, to condone or even facilitate unfair data-mining techniques. It has therefore been suggested that, just as it is appropriate to impose a "special responsibility" on firms with significant market power through competition law rules, it may be appropriate to impose additional legal responsibilities on firms with the "power of providence" as a result of the volume and variety of the personal data they process and the extent of their reach. Data protection law is often touted as the answer to these profiling practices. However, the enforcement of this body of law has, to date, been underwhelming, in part because of the onus it places on individuals to assert rights on their own behalf. The entry into force of the EU's GDPR offers some hope in this regard but should not be viewed as a silver bullet. Given the architecture of the digital system, it is suggested that more imaginative and holistic solutions will be required.
Taylor, for instance, highlights that "markets are an important factor in establishing and amplifying power asymmetries to do with digital data, and that addressing the value chains involved in large-scale data gathering and surveillance may be a functional shortcut to controlling misuse" (Taylor 2017, 5). Gaining a firmer grip on, and understanding of, how personal data are processed by powerful platforms and networks of data brokers would be an excellent first step toward taming this power of providence.
REFERENCES

Abent, Eric. 2017. "Google Play Prepares to Remove Apps over Missing Privacy Policies." Slashgear.com, February 9, 2017. https://www.slashgear.com/google-play-prepares-to-remove-apps-over-missing-privacy-policies-09474464/.
Acquisti, Alessandro. 2010. "The Economics of Personal Data and the Economics of Privacy." Joint WPISP-WPIE Roundtable OECD, December 1, 2010.
Ansip, Andrus. 2015. "Productivity, Innovation and Digitalisation—Which Global Policy Challenges?" Speech at Bruegel annual meeting, September 7, 2015. https://ec.europa.eu/commission/commissioners/2014-2019/ansip/announcements/speech-vice-president-ansip-bruegel-annual-meeting-productivity-innovation-and-digitalisation-which_en.
Article 29 Working Party. 2010. "Opinion 2/2010 on Online Behavioural Advertising." Adopted on June 22, 2010 (WP 171).
Article 29 Working Party. 2007. "Opinion 4/2007 on the Concept of Personal Data." Adopted on June 20, 2007 (WP 136).
Autorité de la Concurrence and Bundeskartellamt. 2016. "Competition Law and Data." May 10, 2016. http://www.autoritedelaconcurrence.fr/doc/reportcompetitionlawanddatafinal.pdf.
Barocas, Solon, and Helen Nissenbaum. 2014. "Big Data's End Run around Procedural Privacy Protections." Communications of the ACM 57, no. 11: 31–33.
Barocas, Solon, and Andrew Selbst. 2016. "Big Data's Disparate Impact." California Law Review 104: 671–732.
Blume, Peter. 2014. "The Myths Pertaining to the Proposed General Data Protection Regulation." International Data Privacy Law 4, no. 4: 269–73.
Borgesius, Frederik Zuiderveen, and Joost Poort. 2017. "Online Price Discrimination and EU Data Privacy Law." Journal of Consumer Policy 40: 347–66.
Bundeskartellamt. 2016. "Bundeskartellamt Initiates Proceeding against Facebook on Suspicion of Having Abused Its Market Power by Infringing Data Protection Rules." Accessed August 28, 2017. https://www.bundeskartellamt.de/SharedDocs/Meldung/EN/Pressemitteilungen/2016/02_03_2016_Facebook.html.
Cohen, Julie. 2016. "The Regulatory State in the Information Age." Theoretical Inquiries in Law 17, no. 2: 369–414.
Competition and Markets Authority. 2015. "The Commercial Use of Consumer Data: Report on the CMA's Call for Information." CMA38, June 2015.
Constine, Josh. 2017. "Facebook Now Has 2 Billion Monthly Users . . . and Responsibility." TechCrunch.com, June 27, 2017. Accessed August 27, 2017. https://techcrunch.com/2017/06/27/facebook-2-billion-users/.
Cormack, Andrew. 2016. "Is the Subject Access Request Right Now Too Great a Threat to Privacy?" European Data Protection Law Review 2, no. 1: 15–27.
Easterbrook, Frank. 1984. "Limits of Antitrust." Texas Law Review 63: 1–40.
EPIC. 2017. "EPIC Files FTC Complaint to Stop Google from Tracking In-Store Purchases." July 31, 2017. https://epic.org/2017/07/epic-files-ftc-complaint-to-st.html.
Equality Act. 2010. https://www.legislation.gov.uk/ukpga/2010/15/contents.
EU Commission. 2009. "Guidance on the Commission's Enforcement Priorities in Applying Article 82 of the EC Treaty to Abusive Exclusionary Conduct by Dominant Undertakings." OJ [2009] C 45/7.
EU Commission—Antitrust. 2017. "Commission Fines Google €2.42 Billion for Abusing Dominance as Search Engine by Giving Illegal Advantage to Own Comparison Shopping Service." Press release, June 27, 2017.
EU Commission. 2017. "Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of Regions on the Mid-Term Review on the Implementation of the Digital Single Market Strategy: A Connected Digital Single Market for All." COM (2017) 228 final.
"European Parliament and Council Directive 95/46/EC of 24 October 1995 on the Protection of Individuals with Regard to the Processing of Personal Data and on the Free Movement of Such Data [1995]." (Directive 95/46) OJ L281/23.
Federal Trade Commission. 2016. "Big Data: A Tool for Inclusion or Exclusion? Understanding the Issues." June 2016. https://www.ftc.gov/system/files/documents/reports/big-data-tool-inclusion-or-exclusion-understanding-issues/160106big-data-rpt.pdf.
Federal Trade Commission. 2014. "Data Brokers: A Call for Transparency and Accountability." May 2014. https://www.ftc.gov/system/files/documents/reports/data-brokers-call-transparency-accountability-report-federal-trade-commission-may-2014/140527databrokerreport.pdf.
Gellert, Raphael, Katja de Vries, Paul de Hert, and Serge Gutwirth. 2012. "A Comparative Analysis of Anti-Discrimination and Data Protection Legislations." In Discrimination and Privacy in the Information Society: Data Mining and Profiling in Large Databases, edited by Bart Custers et al., 61–89. Berlin-Heidelberg: Springer.
Helberger, Natali, Katharina Kleinen-von Königslöw, and Rob van der Noll. 2015. "Regulating the New Information Intermediaries as Gatekeepers of Information Diversity." Info 17, no. 6: 50–71.
House of Lords—Select Committee on the EU. 2016. "Online Platforms and the Digital Single Market." HL Paper 129. April 20, 2016.
Korff, Douwe. 2014. "The Proposed General Data Protection Regulation: Suggested Amendments to the Definition of Personal Data." EU Law Analysis. http://eulawanalysis.blogspot.co.uk/2014/10/the-proposed-general-data-protection.html.
Lamadrid, Alfonso. 2017. "Google Shopping Decision—First Urgent Comments." Chilling Competition, June 27, 2017. https://chillingcompetition.com/2017/06/27/google-shopping-decision-first-urgent-comments/.
Lenard, Thomas M., and Paul H. Rubin. 2010. "In Defense of Data: Information and the Costs of Privacy." Policy and Internet 2, no. 1: 149–83.
Lum, Kristian, and William Isaac. 2016. "To Predict and Serve?" Significance 13: 14–19.
Madden, Mary, Michele E. Gilman, Karen E. C. Levy, and Alice E. Marwick. 2017. "Privacy, Poverty and Big Data: A Matrix of Vulnerabilities for Poor Americans." Washington University Law Review 95: 53–125.
Mansell, Robin. 2015. "Platforms of Power." Intermedia 43, no. 1: 20–24.
Odudu, Okeoghene. 2010. "The Wider Concerns of Competition Law." Oxford Journal of Legal Studies 30, no. 3: 599–613.
Pasquale, Frank. 2013. "Privacy, Antitrust and Power." George Mason Law Review 20, no. 4: 1009–24.
Pasquale, Frank. 2015. The Black Box Society: The Secret Algorithms That Control Money and Information. Cambridge, MA: Harvard University Press.
Powles, Julia, and Andrew Selbst. 2017. "Meaningful Information and the Right to Explanation." International Data Privacy Law 4, no. 1: 233–42.
Ramirez, Edith. 2013. "The Privacy Challenges of Big Data: A View from the Lifeguard's Chair." Keynote address—Technology Policy Institute Aspen Forum, August 2013.
Scott, James C. 1998. Seeing Like a State. New Haven: Yale University Press.
Sweeney, Latanya. 2013. "Discrimination in Online Ad Delivery: Google Ads, Black Names and White Names, Racial Discrimination, and Click Advertising." Acmqueue 11, no. 3: 1–19.
Taylor, Linnet. 2016. "Data Subjects or Data Citizens? Addressing the Global Regulatory Challenge of Big Data." In Information, Freedom and Property: The Philosophy of Law Meets the Philosophy of Technology, edited by Mireille Hildebrandt and Bibi van den Berg, 81–106. Oxford: Routledge.
Taylor, Linnet. 2017. "What Is Data Justice? The Case for Connecting Digital Rights and Freedoms Globally." June 26, 2017. Available at SSRN: https://ssrn.com/abstract=2918779.
TNO Report. 2015. "Digital Platforms: An Analytical Framework for Identifying and Evaluating Policy Options." TNO (2015) R11271.
Townley, Christopher, Eric Morrison, and Karen Yeung. 2017. "Big Data and Personalised Price Discrimination in EU Competition Law." King's College London Law School Research Paper No. 2017-38. Available at SSRN: https://ssrn.com/abstract=3048688 or http://dx.doi.org/10.2139/ssrn.3048688.
Turow, Joseph. 2011. The Daily You: How the New Advertising Industry Is Defining Your Identity and Your Worth. New Haven: Yale University Press.
Wachter, Sandra, Brent Mittelstadt, and Luciano Floridi. 2017. "Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation." International Data Privacy Law 7, no. 2: 76–99.
Wikipedia. 2017. "Eye of Providence." Accessed September 13, 2017. https://en.wikipedia.org/wiki/Eye_of_Providence.
Zuboff, Shoshana. 2015. "Big Other: Surveillance Capitalism and the Prospects of an Information Civilization." Journal of Information Technology 30: 75–89.

Case Law

Case C-434/16 Peter Nowak v Data Protection Commissioner, Opinion of Advocate General Kokott delivered on July 20, 2017, EU:C:2017:582.
Case C-213/15 European Commission v Patrick Breyer EU:C:2017:563.
Joined Cases C-141/12 and C-372/12 YS v Minister voor Immigratie, Integratie en Asiel and Minister voor Immigratie, Integratie en Asiel v M and S EU:C:2014:2081.
Case C-131/12 Google Spain SL and Google Inc. v Agencia Española de Protección de Datos (AEPD) and Mario Costeja González EU:C:2014:317.
Case C-236/09 Association Belge des Consommateurs Test-Achats ASBL and Others v Conseil des ministres EU:C:2011:100.
T h e P o w e r of P r o v i de n c e
[ 201 ]
0 2
CHAPTER 8
Digital Agenda Setting
Reexamining the Role of Platform Monopolies

JUSTIN SCHLOSBERG
INTRODUCTION
This chapter discusses the news-gatekeeping and agenda-setting role played by platform monopolies.1 It interrogates and challenges oft-held assumptions that the bewildering market power wielded by the likes of Google and Facebook has come at the expense of the mainstream press and broadcasters, whose traditional dominance over public conversation is said to have waned in line with declining circulations and ever greater competition for eyeballs and advertisers. Critical accounts have tended to emphasize the obscure and invisible ways in which platform monopolies engage in news editorial practices (e.g., Herrera 2014; Pariser 2011; Napoli 2014; Moore 2016). This notion of invisibility resonates strongly with the radical view of power articulated by Steven Lukes (1974). Through his concern for the relative prominence or absence of particular issues in public debate, Lukes argued passionately for a critical examination of how agendas come to be shaped. This conceptual focus was seen as central to any understanding of dominance in late capitalist societies. If, as Ben Bagdikian suggested, the crux of media power consists in "carefully avoiding some subjects and enthusiastically pursuing others" (2000, 16), then the question of who pulls agenda strings (as well as to what degree and in what ways) is one that requires constant reexamination in the face of rapidly evolving communication technologies.

The following discussion proceeds in three stages. First, it traces the roots of the digital agenda-setting debate, beginning with the gatekeeping paradigm in media studies. In contrast to Lukes, neither the gatekeeping nor agenda-setting research traditions have tended to fixate on the role of concentrated elite power in setting the limits of public debate, and recent literature has tended to paint a picture of dispersed or "network" gatekeeping and a withering aggregate or mainstream agenda. Attention is then turned to the role of platform monopolies in personalizing agendas and, by extension, usurping agenda power from both traditional news providers and the networked audience. But this is called into question when we examine empirical and anecdotal evidence to the contrary: that platform monopolies are, if anything, helping to resurrect a mainstream agenda consensus dominated by a small number of established conventional news brands.

1. Throughout this chapter, I use the term "platform monopolies" to refer to those online intermediaries that serve primarily as news gateways, that is, not themselves endpoints of news consumption but services that direct traffic and connect users to content via recommendation algorithms. They are distinct from news providers that either produce original news content themselves or provide a channel and brand for content produced by a wholesale provider. They are also distinct from aggregators that package and host news content in full. In reality, a number of brands are engaged in several of these practices, but it is nevertheless clear that the major search and social media monopolies—Google, Facebook, Twitter—serve predominantly as news gateways, just as the majority of major conventional news brands predominantly produce and publish their own news content. This terminology and categorization is drawn largely from Ofcom's plurality measurement framework (Ofcom 2015).
The chapter concludes by arguing that alongside personalizing news agendas, platform monopolies are also helping to recreate an aggregate agenda that cuts across fragmented and polarized audiences, and that the prevailing evidence points toward increasing sensitivity and deference to the agendas promoted by conventional major news brands. Their agenda-setting role in this respect is best understood as one of amplification. We are witnessing not the demise of concentrated news "voice," but its reconstitution within a more integrated, complex, and less noticeable power structure.
THE ORIGINS OF CONTROL
In media studies, the issue of power behind the agenda finds its roots in gatekeeping theory. Since its earliest formulation, this theory has always been preoccupied with the factors that determine which stories, issues, or
frames are included in the news agenda at any given time. Although attention was initially focused on the subjective personality of the editor as the main driver of news selection (White 1950), research has since developed and expanded to reflect a much wider range of influences and account for the nuances and complexities of editorial decision-making (e.g., Galtung and Ruge 1965; Manning 2001; Gans 1979). This has also led to the emergence of different levels of analysis, examining gatekeeping as a function of individual agency, communication routines, organizational behavior, social institutions, and the social system as a whole (Shoemaker and Reese 1996). While the gatekeeping tradition has focused on the drivers of editorial decision-making, agenda-setting research has probed the effects of news selection, in turn, on the broader media, public, and policy agenda. Ever since McCombs and Shaw's seminal study at Chapel Hill (McCombs and Shaw 1972), agenda-setting research has tended to confirm what Bernard Cohen had first hypothesized a decade or so earlier: that the press "may not be successful much of the time in telling people what to think, but is stunningly successful in telling its readers what to think about" (Cohen 1963, 13). Later studies went further and found that the media were capable of determining the salience of not only particular topics or issues but also perspectives or "attributes" within a given story that shaped how people think about issues (e.g., Kim, Scheufele, and Shanahan 2002). But even with expanding and differentiated levels of analysis, the gatekeeping and agenda-setting paradigms in media research have not tended to produce critical responses to the question of agenda power along the lines that Lukes put forward.
Starting from the premise that news filtering is a necessary and desirable feature of professional journalism, concerns about the potential for elite power to set the limits of public debate via this process have tended to be resolved with reference to, among other things, shared news values (Gans 1979; McCombs 2014), community ties (White 1950), and watchdog journalism as a "public alarm system" (Zaller 2003). In the digital context, the very location of gatekeeping and agenda-setting power in the hands of professional journalists and editors has been called into question. For some in the gatekeeping tradition, this is a cause for celebration, with news- and information-filtering power becoming increasingly dispersed among an ever more enabled and emboldened user-audience (Bruns 2011; Anderson 2011). The phenomenon of "networked gatekeeping" (Meraz and Papacharissi 2013) captures this power shift as a relegation of the professional gatekeeping function in favor of the crowd. For some, it is reduced to editorial signposting, marking a switch "from the watchdog to the 'guidedog' " (Bardoel and Deuze 2001, 94). Others suggest,
conversely, that the role of professional journalists is now limited to that of newsgathering, while audiences have assumed responsibility for determining which stories achieve salience and prominence (Dylko et al. 2012). Either way, the part played by professional media in agenda creation is now a supporting rather than a lead role; a point made emphatically by Henry Jenkins when he argued:

The power of participation comes not from destroying commercial culture but from writing over it, modding it, amending it, expanding it, adding greater diversity of perspective, and then recirculating it, feeding it back into the mainstream media. (2006, 257)
Coupled with channel proliferation, changing patterns of consumption, and fragmenting audiences, the new participatory news culture is said to have dissolved anything approaching a singular and unitary agenda (Bennett and Iyengar 2008; McNair 2006). According to this perspective, news providers are becoming increasingly politically partisan in an effort to attract and sustain niche audiences, and amid "the continued detachment of individuals from the group-based society" (Bennett and Iyengar 2008, 208). The commercial success of Fox News has been held as testament to a new reality in which "partisan selective exposure" becomes the key driver of news consumption (van Aelst and Walgrave 2011). Agenda skeptics like Bennett and Iyengar thus paint a much more sobering picture compared to the celebrants of dispersed gatekeeping power. But they also share common ground in one important sense: both conceive of audiences as no longer passive recipients of news but active agents of their own agenda, self-selecting their media diet according to their personal and political identities and affiliations. Above all, they share a belief in the erosion of the mass media paradigm according to which major news brands—for better or for worse—play a pivotal role in fostering public debate and defining its boundaries.
NEWS “FOR YOU”
In recent years, there has been a groundswell of scholarly attention to the particular dynamics of algorithms in the polarization (and even atomization) of news audiences as agendas become increasingly personalized (e.g., Moore 2016; Napoli 2014; Pariser 2011). And it is here that the gatekeeping and agenda-setting role of platform monopolies—through their design and control of algorithmic news filters—takes center stage.
Cass Sunstein (2001) was one of the first to capture this problem with reference to emergent "echo chambers" and the "Daily Me" characteristics of digital news consumption. The newly discovered powers of news selection in the hands of users were not, according to Sunstein, reinvigorating the public sphere but fracturing and diminishing it. Indeed, Sunstein's account evolved into a story of ever greater individualistic retreat from the commons. Left to their own devices in news selection, individuals were becoming progressively "cocooned" in private information realms (2009), inevitably driven to choose the familiar over the surprising, the reaffirming over the challenging. This was said to be creating a profound deficit in exposure to the diverse and contesting views central to both the pluralist democratic promise of a "marketplace of ideas" and the deliberative democratic vision of an inclusive public sphere (Karppinen 2013). But more recently, the problem has been articulated not so much as one of self-woven cocoons, but rather as the active shaping of choices and preferences by dominant algorithms. In a disturbing account of the rapidly developing "personalized Internet," Eli Pariser (2011) argued that not only adverts but virtually everything that we encounter online will soon be microtargeted in a way that does not just predict choices but also produces powerful "closing off" effects:

The basic code at the heart of the new Internet is pretty simple. The new generation of Internet filters looks at the things you seem to like—the actual things you've done, or the things people like you like—and tries to extrapolate. They are prediction engines, constantly creating and refining a theory of who you are and what you'll do and want next. Together, these engines create a unique universe of information for each of us [ . . . ] which fundamentally alters the way we encounter ideas and information. (Pariser 2011, 9)
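The filtering logic Pariser describes ("the things people like you like") is, at its simplest, user-based collaborative filtering. The following is a toy sketch only; the usernames and the shared-item similarity measure are invented for illustration, and production recommenders use far richer signals than overlap counts:

```python
def recommend(user, history, k=3):
    """Rank items the user has not seen by how often they were consumed
    by other users with overlapping histories ("people like you like")."""
    seen = history[user]
    scores = {}
    for other, items in history.items():
        if other == user or not (seen & items):
            continue  # skip the user themselves and users with no overlap
        overlap = len(seen & items)  # crude similarity: number of shared items
        for item in items - seen:
            scores[item] = scores.get(item, 0) + overlap
    return sorted(scores, key=scores.get, reverse=True)[:k]

# Hypothetical reading histories: sets of story identifiers per user.
history = {
    "alice": {"story_a", "story_b"},
    "bob": {"story_a", "story_c"},
    "carol": {"story_b", "story_c", "story_d"},
}
print(recommend("alice", history))
```

Here bob and carol each share one story with alice, so the stories they read that alice has not yet seen are surfaced first, ranked by how much their histories overlap with hers. The "prediction engine" effect Pariser warns about follows directly: the ranking is built entirely from past behavior.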
In response to their critics, platform monopolies tend to invoke user sovereignty as the basis on which their algorithms are designed. The underlying logic seems obvious enough: it is ultimately in the interests of these companies—as well as their advertisers and partners—to maximize user engagement with their services. And that can only be achieved by developing algorithms exclusively with the end user's interests in mind. This notion of putting the user first was vividly captured by the testimony of Eric Schmidt, Google's chairman, to a US Senate committee in 2011:

At Google, we've always focused on putting consumers—our users—first. For example, we aim to provide relevant answers as quickly as possible, and our
product innovation and engineering talent deliver results that we believe users like, in a world where competition is only a click away. (Schmidt 2011)
But there are a number of conceptual problems with this argument, not least that it presupposes users are in the driving seat of search queries, and that Google's algorithm is designed merely to seek and respond as opposed to guide, tailor, or even censor. Above all, the rhetoric of "user first" obscures the potentially subtle ways in which the interests of the user are both interpreted and shaped in line with commercial exigencies. Prior to Schmidt's Senate testimony in 2011, Google's search algorithm had already started on a process of adaptation that was making it increasingly interventionist in guiding users toward particular types of content. Schmidt himself told the Wall Street Journal in 2010, "I actually think most people don't want Google to answer their questions. They want Google to tell them what they should be doing next" (Jenkins 2010). This is a very different sentiment to that conveyed in Schmidt's Senate testimony a year later. Although a more interventionist search function is not necessarily in conflict with the principle of putting users first, the shift from "answering questions" to "telling people what to do next" clearly signals a profound change in the order of power assumed over the flow of news and information. Like Google's search function, Facebook's news feed algorithm filters and orders stories and informational content in line with the social and demographic profiles of its users. And as with Google, this filtering process is barely noticeable from the vantage point of the user. When I land on the home page of the New York Times, it is pretty clear to me that the ordering and wording of its headline stories are the products of editorial decision-making. But when I log onto Facebook, the stream of posts and shares I am presented with looks more like the chronological order of choices made by all of my friends and pages I have "liked," rather than the selective priorities of intricate personalization metrics.
According to one recent study, 62% of Facebook users were unaware that their news feed was "curated" by Facebook in this way (Eslami et al. 2015), while another suggested that up to 72% of posts by friends and subscribed pages are routinely hidden from a user's news feed (Herrera 2014).
THE RETURN OF THE AGGREGATE
Paradoxically, there are also key features of major news algorithms that may be helping to resurrect an aggregate as opposed to personalized agenda.
For a start, platform monopolies are increasingly sensitive and deferential to the editorial priorities set by mainstream news brands. This became vividly apparent in May 2016 when five whistle-blowers revealed the existence of a specialist "curating" team within Facebook, housed within the basement of its New York offices and responsible for manually editing its trending topics—the aggregated list of the most popular news stories that feature alongside the user's personalized feed (Nunez 2016). In news terms, Facebook's trending topics—just like those of Twitter—guarantee a substantial boost of through-traffic to any of its featured stories and thus have the potential to profoundly challenge or reinforce a mainstream agenda consensus. The revelation stirred controversy over an apparent anticonservative bias in the curating process, although this appeared to be a reflection of the political sensibilities of most of the curators rather than a top-down driven policy. But what did apparently feed down from management were explicit instructions to take extra caution with news stories involving Facebook itself and to ensure that stories attracting substantial coverage in mainstream media and on Twitter were given a boost if they were not trending on Facebook "organically." Indeed, the team was allegedly given explicit instructions to check trending topics against the agendas of a list of "preferred" traditional news sources.

As for Google News, volume and scale have long been the key drivers of "optimization." One look at a recent patent filing for its news algorithm reveals just how much size matters in this sense: the size of the audience, the size of the newsroom, and the volume of output.2 Perhaps the most contentious metric is one that purports to measure what Google calls "importance" by comparing the volume of a site's output on any given topic to the total output on that topic across the web.
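Reduced to its core, the "importance" signal described here is a share-of-coverage ratio. A minimal sketch of that ratio follows; the site names and article counts are hypothetical, and the patented signal combines many more inputs than this single measure:

```python
from collections import Counter

def importance_scores(site_topic_counts, total_topic_counts):
    """Score each site on each topic as its share of all coverage of that topic.

    site_topic_counts: {site: Counter({topic: n_articles by that site})}
    total_topic_counts: Counter({topic: n_articles across the whole web})
    """
    scores = {}
    for site, topics in site_topic_counts.items():
        scores[site] = {
            topic: n / total_topic_counts[topic]
            for topic, n in topics.items()
            if total_topic_counts[topic]  # avoid division by zero
        }
    return scores

# Hypothetical numbers: a high-volume outlet versus a small local site.
sites = {
    "bignews.example": Counter({"election": 400}),
    "localnews.example": Counter({"election": 5}),
}
totals = Counter({"election": 1000})
print(importance_scores(sites, totals))
```

On these toy numbers the high-volume outlet scores 0.4 on the topic against 0.005 for the local site, illustrating how such a measure structurally favors organizations with scale.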
In a single measure, this promotes concentration of the agenda both at the level of source (by favoring organizations with volume and scale) and at the level of output (by favoring organizations that produce more on topics that are widely covered elsewhere). Clearly Google believes that "real news" providers are those that can produce significant amounts of original, breaking, and general news on a wide range of topics and on a consistent basis. At face value, that does not sound like such a bad thing. In a world saturated with hype, rumor, and fake news, it is not surprising that most people are attracted to news brands that signal a degree of professionalism.

2. See United States Patent Application Publication No. US 2014/0188859 A1. July 3, 2014. Accessed March 28, 2016. https://docs.google.com/viewer?url=patentimages.storage.googleapis.com/pdfs/US20140188859.pdf.

Despite rock-bottom levels of public trust in journalism as a public-serving profession (Ipsos MORI 2015), it has become clear that major news brands still enjoy a great deal of agenda authority that cuts above the digital information noise. In a survey of online news consumers across nine countries in 2013, the vast majority of respondents in all countries said that they tended to access news from websites that they know and trust (Levy and Newman 2013). And this translates into a surprisingly small number of news sources. In their repeat study the following year, the same researchers noted, "[a]lthough the number of sources available is almost infinite, the average user tends to access fewer than three sources each week, with the British and Japanese using just over two" (Levy and Newman 2014, 42). Scholars have termed this phenomenon "gatekeeping trust," defined as trust that "the news media selects stories based on judgments of the relative importance of social problems" (Pingree et al. 2013, 351). In other words, it is a form of trust intimately connected to both news selection and agenda-setting power. Journalists may not be trusted to tell a story accurately or fairly, but gatekeeping trust signals a prior recognition that the stories they do tell are at least worthy of telling. In a fascinating experimental study conducted in 2014, a sample of students were unwittingly exposed to a set of news issues artificially presented either as stories carried by mainstream news brands, personal blog entries, reports by recognized nongovernmental organizations (NGOs), or crosswords with an associated news or gaming site brand (Carpentier 2014). What the researchers found was that the association of a major news brand (New York Times or Washington Post) with any of the issues was likely to cue relative judgments of issue importance.
So, for instance, participants exposed to a particular story with a recognized newspaper brand were more likely to consider the issue important compared to those exposed to an exact copy of the same story presented as a personal blog entry or NGO report. Clearly, platform monopolies have become increasingly sensitized to this gatekeeping trust, as well as to critiques that their news "quality" metrics have done nothing to hamper the proliferation of hate speech and fakery, typified by the rise of far-right news brands like Breitbart. Google made as much clear when it stated in its patent filing that "CNN and BBC are widely regarded as high quality sources of accuracy of reporting, professionalism in writing, etc., while local news sources, such as hometown news sources, may be of lower quality." Of course, by favoring certain types of news organization (i.e., those with scale and established brand presence), Google is to some extent
endorsing and reinforcing a mainstream agenda consensus (and potentially one with a Western bias given its stated preference for sources like the BBC and CNN). But there is something else at play here that may be behind the enduring gatekeeping trust in mainstream news brands exhibited by both consumers and platform monopolies. To get to the heart of the matter, we need to consider the fundamental characteristic of consumer cognition that underpins both optimistic accounts that major algorithms are helping to deliver unprecedented news diversity, and pessimistic accounts suggesting that they are driving the progressive disintegration of the public sphere.
COLLECTIVE VERSUS INDIVIDUAL CONSUMPTION
What both these schools of thought rely on is an assumption that people fundamentally prefer uniquely tailored and niche content over that which is aimed at a mass audience. This preference for personalized content, along with the unique capacity of algorithms to fulfill it, lies at the heart of Chris Anderson's "long tail" thesis (2006), according to which the digital economy will, by its nature, foster an explosion of niche culture at the expense of the mainstream. But the assumption that users prefer "tailored" content overlooks the significance of collective sensibilities when it comes to the consumption of news, entertainment, and culture. Part of what makes news, entertainment, and other symbolic goods meaningful is that they form part of a shared culture; that there is something transcendent in the collective imaginary of cultural consumption that is not quite realized when content is uniquely selected, either by individual users themselves, or by algorithms on behalf of individual users. This collective imaginary is not necessarily in conflict with the personal and subjective dimension but, on the contrary, may serve to support and reinforce it. In a thought-provoking essay on the relationship between music and self-identity, David Hesmondhalgh argued that the individual emotions we may experience from listening to music are often "intensified" by the sense that they are shared, or potentially shared, by others:

This [sense] can be especially strong at a live performance, but it is just as possible when experiencing music individually, when we might, however semi-consciously and fleetingly, imagine others—a particular person, or untold thousands—being able to share that response. (2008, 2)
It seems reasonable to assume that the more atomized our pattern of consumption, the more distant we become from that collective imaginary, and this may go some way to explaining the substantial empirical evidence casting doubt over the long tail (e.g., Elberse and Oberholzer-Gee 2006; Duch-Brown and Martens 2014; Fleder and Hosanagar 2009), as well as the resilience of gatekeeping trust in major news brands. Another peculiar feature of the cultural economy is that the value of products is contingent on a degree of novelty or newness (Garnham 2000) even if it is bound up with the familiar. This is not the same for coffee. If I have a favorite brand and type of coffee, chances are I would actively seek to consume it again and again, perhaps from the same outlet, in the same size, with the same amount of milk or sugar, and so forth. But I would be unlikely to purchase my favorite book more than once. I might not even read the one I purchased more than once, though I may well seek out other titles by the same or similar authors. In other words, the enjoyment or utility that consumers derive from symbolic goods does not necessarily translate into recurring demand (for the same product). Of course, personalizing algorithms are wise to this reality, as well as to the “pull” of the collective imaginary in cultural consumption. As Nikki Usher (2015) points out, the best algorithms do more than just cater to prior tastes and interests but seek to broaden or extend them: “Gradually, a good algorithm not only gives you what you expect, but also helps you discover what you didn’t know you wanted to see.” This is why social networking turned out to be such a valuable tool for targeted advertising—because it does not just target us in isolation but continually alerts us to the content choices of our friends, as well as our friends’ friends, and so on. 
In theory at least, this endemic feature of algorithms can push users outside of their cocoons, echo chambers, or filter bubbles, in direct conflict with personalizing effects. It is entirely plausible then, that major news algorithms may be having both a personalizing and an aggregating impact on news agendas, and this contradiction seems to be borne out empirically. While some studies do indeed demonstrate polarizing and atomizing effects of algorithms (e.g., Schmidt et al. 2017), others suggest that users do not remain rigidly within interest, ideological, or partisan communities (Kelly, Fisher, Smith 2006; Gentzkow and Shapiro 2010; Webster and Ksiazek 2012). A study by Pew Research (2013) also found that Facebook users have a high tendency to read news from sources that do not share their point of view. And if they do not, there is evidence to suggest that user choice plays a greater role in limiting exposure to ideologically diverse content compared to algorithms (Bakshy, Messing, Adamic 2015; Quattrociocchi, Scala, and Sunstein 2016).
Indeed, the notion that users are any more partisan in their news consumption choices compared to the analog era finds little support in the empirical literature. One recent study in particular (Flaxman, Goel, and Rao 2016) suggests that the net effect of personalizing and aggregating forces may yet prove to be negligible. The researchers examined the web-browsing histories of 50,000 US online news consumers and found that while platform monopolies were associated with polarizing effects to some extent, they were also associated with an increase in exposure to ideologically opposing content. Perhaps more importantly, they found that the vast majority of online news consumption bypassed platform monopolies altogether.
AGENDA-SETTING EFFECTS REVISITED
Of course, the existence of algorithm metrics that promote an aggregate agenda and mainstream news sources does not mean that platform monopolies are compliant reproducers of a mainstream media agenda. There are clearly real opportunities for agenda resistance on social media platforms and key points of dissonance. This was perhaps no more vividly demonstrated than during recent elections across Europe and North America, widely seen as crystallizing moments in the supposed fracturing of a liberal consensus politics. Once again, however, recent empirical evidence paints a mixed picture. So, for instance, while Twitter has been found to play a key role in challenging the mainstream media election agenda at the framing or "attribute" level, this is perhaps not so much the case at the prior "issue" level (Ceron 2014; Moore and Ramsay 2015) or in terms of elite source dominance (Harder, Sevenans, and Van Aelst 2017; Skogerbø et al. 2016). In 2017, the "snap" election called in the UK by Theresa May's incumbent Conservative government was widely considered to have resulted in an unprecedented defeat for Britain's "billionaire press" (Monbiot 2017). Experts and analysts joined in the clamor, proclaiming it was "The Sun wot lost it" (Temple 2017), while the Conservative campaign's outdated logic of media management—heavily focused on appealing to the mainstream press—has been cited as the chief cause of the party's disastrous poll results (Franks 2017). The Sun in particular has enjoyed a long-standing reputation for swaying elections, predominantly in favor of the Conservatives. But its relentless and intensifying attacks on Jeremy Corbyn, the leader of the Labour Party, coincided with a dramatic polling shift in his favor.
In contrast, the same pundits tended to credit social media platforms as the agenda game changers, especially the emergent leftist "fifth estate" typified by news sites like the Canary or Evolve Politics. With the help of Facebook and Twitter, these sites were seen as playing a key role in mobilizing and energizing a resurgent youth vote and in challenging the agenda-setting influence of the mainstream press in general. Evidence collected during the campaign appeared to lend weight to this view, with Jeremy Corbyn outperforming incumbent Prime Minister Theresa May on key social media metrics (Littunen 2017). But there remains a huge gulf between the sources that dominate news traffic in terms of page views, and new entrants that have proved particularly effective at so-called clickbait journalism. In the UK, at the time of writing, 8 out of the top 10 news websites based on page views are owned by traditional newspaper groups (Littunen 2017), the majority of which are Conservative-leaning. The reality is that the reach of Conservative news sources like the Sun and Mail is still several hundred times that of leftist new entrants like the Canary, and there is little prospect of that gap being even marginally narrowed any time soon. Celebrants of declining press power also gloss over the fact that much of the online activism and social media support base for Corbyn's Labour was alive and kicking throughout the preelection period, when the party's polling was consistently at its lowest ebb in recent memory. So what changed? For one thing, the only two traditional Labour-supporting national newspapers—the Guardian and the Mirror—promptly dropped their strident editorial opposition to Jeremy Corbyn and rallied behind the Labour leadership shortly after the election was called.
As for broadcasters, the unveiling of election manifestos combined with special election rules on impartiality meant that for the first time since Corbyn assumed the Labour leadership, reporting on the party was (for a moment at least) focused on policies rather than personalities.
CONCLUSION
All this suggests that the agenda power of traditional media has been no more neutralized by social media than state secrecy and surveillance have been neutralized by hacktivism. Platform monopolies have clearly played some part in creating (new) spaces for agenda resistance, but the notion that they have displaced the gatekeeping and agenda power of conventional media overlooks the complex ways in which agenda power is consolidating in the new news environment. Indeed, when it comes to the prior
issue agenda and the dominance of elite sources, the prevailing evidence highlights the consonance between the agenda filters applied by traditional media and platform monopolies alike. To return to Lukes, the issue agenda is the core terrain on which power in late capitalist societies is mobilized, and the means by which a hegemonic consensus is "produced." This chapter began by suggesting that, within this context, amplification is now a key currency of communicative power. To the extent that they account for the most significant proportion of "referral" traffic to news websites around the world, platform monopolies and their intricate algorithms undoubtedly play a profound role in determining the relative power of "voice" on news agendas both at the individual and aggregate level. But in this new reality, it makes little sense to consider digital dominance as the preserve of Silicon Valley alone, just as it makes little sense to consider the "digital" in any sort of binary opposition to the "traditional." For all the promise of participatory platforms of communication (Bruns 2011), of cultural chaos (McNair 2006) and convergence (Jenkins 2006), and networked forms of resistance (Benkler 2011), empirical evidence overwhelmingly paints a picture of a shared dominance of digital agendas by a relatively small number of institutional megaphones, be they platform monopolies, aggregators, or major conventional news organizations. Among the latter, the last two decades have certainly seen the emergence of significant new entrants from BuzzFeed to the Huffington Post, a new leftist online press, and a virtually infinite horizon of sources within the "long tail" of digital news. But this only obscures the enduring dominance of those few organizations that can be said to be active in agenda creation.
That is, those who command a reach that transcends fragmented audiences and are able to produce content on a scale and at a speed that generates salience both within and beyond the digital.

REFERENCES

Anderson, Chris. 2006. The Long Tail: Why the Future of Business Is Selling Less of More. London: Random House.
Anderson, Chris W. 2011. “Deliberative, Agonistic, and Algorithmic Audiences: Journalism’s Vision of Its Public in an Age of Audience Transparency.” International Journal of Communication 5: 529–47.
Bagdikian, Ben. 2000 [1983]. The Media Monopoly. Boston: Beacon Press.
Bakshy, Eytan, Solomon Messing, and Lada A. Adamic. 2015. “Exposure to Ideologically Diverse News and Opinion on Facebook.” Science 348, no. 6239: 1130–32.
[ 214 ] Society
Bardoel, J., and M. Deuze. 2001. “‘Network Journalism’: Converging Competencies of Old and New Media Professionals.” Australian Journalism Review 23, no. 3: 91–103.
Benkler, Yochai. 2011. “A Free Irresponsible Press: WikiLeaks and the Battle over the Soul of the Networked Fourth Estate.” Harvard Civil Rights-Civil Liberties Law Review 46: 311–95.
Bennett, W. Lance, and Shanto Iyengar. 2008. “A New Era of Minimal Effects? The Changing Foundations of Political Communication.” Journal of Communication 58, no. 4: 707–31.
Bruns, Axel. 2011. “Gatekeeping, Gatewatching, Real-Time Feedback: New Challenges for Journalism.” Brazilian Journalism Research 7, no. 2: 117–36.
Carpentier, F. R. D. 2014. “Agenda Setting and Priming Effects Based on Information Presentation: Revisiting Accessibility as a Mechanism Explaining Agenda Setting and Priming.” Mass Communication and Society 17, no. 4: 531–52.
Ceron, Andrea. 2014. “Twitter and the Traditional Media: Who Is the Real Agenda Setter?” APSA 2014 Annual Meeting. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2454310.
Cohen, Bernard Cecil. 1963. The Press and Foreign Policy. Princeton: Princeton University Press.
Duch-Brown, Nestor, and Bertin Martens. 2014. “Search Costs, Information Exchange and Sales Concentration in the Digital Music Industry.” Institute of Prospective Technological Studies, Joint Research Centre.
Dylko, Ivan B., Michael A. Beam, Kristen D. Landreville, and Nicholas Geidner. 2012. “Filtering 2008 US Presidential Election News on YouTube by Elites and Nonelites: An Examination of the Democratizing Potential of the Internet.” New Media and Society 14, no. 5: 832–49.
Elberse, Anita, and Felix Oberholzer-Gee. 2006. “Superstars and Underdogs: An Examination of the Long Tail Phenomenon in Video Sales.” Division of Research, Harvard Business School.
Eslami, Motahhare, Aimee Rickman, Kristen Vaccaro, Amirhossein Aleyasen, Andy Vuong, Karrie Karahalios, Kevin Hamilton, and Christian Sandvig. 2015. “‘I Always Assumed That I Wasn’t Really That Close to [Her]’: Reasoning about Invisible Algorithms in News Feeds.” In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, 153–62. New York: ACM.
Flaxman, S., S. Goel, and J. M. Rao. 2016. “Filter Bubbles, Echo Chambers, and Online News Consumption.” Public Opinion Quarterly 80, no. S1: 298–320.
Fleder, Daniel, and Kartik Hosanagar. 2009. “Blockbuster Culture’s Next Rise or Fall: The Impact of Recommender Systems on Sales Diversity.” Management Science 55, no. 5: 697–712.
Franks, Suzanne. 2017. “Over-Managing the Media: How It All Went Wrong.” Electionanalysis.co.uk. http://www.electionanalysis.uk/uk-election-analysis-2017/section-4-parties-and-the-campaign/over-managing-the-media-how-it-all-went-wrong/.
Galtung, Johan, and Mari Holmboe Ruge. 1965. “The Structure of Foreign News: The Presentation of the Congo, Cuba and Cyprus Crises in Four Norwegian Newspapers.” Journal of Peace Research 2, no. 1: 64–90.
Gans, Herbert J. 1979. Deciding What’s News: A Study of CBS Evening News, NBC Nightly News, Newsweek, and Time. Evanston, IL: Northwestern University Press.
Garnham, Nicholas. 2000. Emancipation, the Media and Modernity: Arguments about the Media and Social Theory. Oxford: Oxford University Press.
Gentzkow, Matthew, and Jesse M. Shapiro. 2010. “Ideological Segregation Online and Offline.” Quarterly Journal of Economics 126, no. 4: 1799–839.
Harder, Raymond A., Julie Sevenans, and Peter Van Aelst. 2017. “Intermedia Agenda Setting in the Social Media Age: How Traditional Players Dominate the News Agenda in Election Times.” International Journal of Press/Politics 22, no. 3: 275–93.
Herrera, Tim. 2014. “What Facebook Doesn’t Show You.” Washington Post, August 18, 2014. https://www.washingtonpost.com/news/the-intersect/wp/2014/08/18/what-facebook-doesnt-show-you/?utm_term=.69366857cea0.
Hesmondhalgh, David. 2008. “Towards a Critical Understanding of Music, Emotion and Self‐Identity.” Consumption, Markets and Culture 11, no. 4: 329–43.
Ipsos MORI. 2015. “. . . would you generally trust them to tell the truth, or not?” Ipsos-mori.com. https://www.ipsos.com/sites/default/files/migrations/en-uk/files/Assets/Docs/Polls/Veracity%202014%20trend.pdf.
Jenkins, Henry. 2006. Convergence Culture: Where Old and New Media Collide. New York: New York University Press.
Jenkins, Holman W. 2010. “Google and the Search for the Future.” Wall Street Journal, August 14, 2010. http://www.wsj.com/articles/SB10001424052748704901104575423294099527212.
Karppinen, Kari. 2013. Rethinking Media Pluralism. New York: Fordham University Press.
Kelly, John W., Danyel Fisher, and Marc Smith. 2006. “Friends, Foes, and Fringe: Norms and Structure in Political Discussion Networks.” In Proceedings of the 2006 International Conference on Digital Government Research, 412–17. San Diego: Digital Government Society of North America.
Kim, Sei-Hill, Dietram A. Scheufele, and James Shanahan. 2002. “Think about It This Way: Attribute Agenda-Setting Function of the Press and the Public’s Evaluation of a Local Issue.” Journalism and Mass Communication Quarterly 79, no. 1: 7–25.
Levy, D., and N. Newman. 2013. “Digital News Report 2013.” Reuters Institute for the Study of Journalism. http://reutersinstitute.politics.ox.ac.uk.
Levy, D., and N. Newman. 2014. “Digital News Report 2014.” Reuters Institute for the Study of Journalism. http://reutersinstitute.politics.ox.ac.uk.
Littunen, Matti. 2017. “An Analysis of News and Advertising in the UK General Election.” Open Democracy. https://www.opendemocracy.net/uk/analysis-of-news-and-advertising-in-uk-general-election.
Lukes, Steven. 1974. Power: A Radical View. London and New York: Macmillan.
Manning, Paul. 2001. News and News Sources: A Critical Introduction. London: Sage.
McCombs, Maxwell E. 2014. Setting the Agenda: The Mass Media and Public Opinion. Oxford: Polity.
McCombs, Maxwell E., and Donald L. Shaw. 1972. “The Agenda-Setting Function of Mass Media.” Public Opinion Quarterly 36, no. 2: 176–87.
McNair, Brian. 2006. Cultural Chaos: Journalism, News and Power in a Globalised World. London: Routledge.
Meraz, Sharon, and Zizi Papacharissi. 2013. “Networked Gatekeeping and Networked Framing on #egypt.” International Journal of Press/Politics 18, no. 2: 138–66.
Monbiot, George. 2017. “Press Gang.” Monbiot.com, June 16, 2017. http://www.monbiot.com/2017/06/16/press-gang/.
Moore, Martin. 2016. “Tech Giants and Civic Power.” Centre for the Study of Media, Communication and Power, King’s College London. https://www.kcl.ac.uk/sspp/policy-institute/CMCP/Tech-Giants-and-Civic-Power.pdf.
Moore, Martin, and Gordon Ramsay. 2015. “UK Election 2015: Setting the Agenda.” Centre for the Study of Media, Communication and Power, King’s College London. https://www.kcl.ac.uk/sspp/policy-institute/publications/MST-Election-2015-FINAL.
Napoli, Philip. 2014. “Digital Intermediaries and the Public Interest Standard in Algorithm Governance.” LSE Media Policy Project. http://blogs.lse.ac.uk/mediapolicyproject/2014/11/07/digital-intermediaries-and-the-public-interest-standard-in-algorithm-governance/.
Nunez, Michael. 2016. “Want to Know What Facebook Really Thinks of Journalists? Here’s What Happened When It Hired Some.” Gizmodo.com. http://gizmodo.com/want-to-know-what-facebook-really-thinks-of-journalists-1773916117.
Ofcom. 2015. “Measurement Framework for Media Plurality.” Ofcom.org.uk. http://stakeholders.ofcom.org.uk/binaries/consultations/media-plurality-framework/summary/Media_plurality_measurement_framework.pdf.
Pariser, Eli. 2011. The Filter Bubble: What the Internet Is Hiding from You. London: Penguin UK.
Pew Research. 2013. “The Role of News on Facebook: Common Yet Incidental.” Pewresearch.org. http://www.journalism.org/2013/10/24/the-role-of-news-on-facebook/.
Pingree, R. J., A. M. Quenette, J. M. Tchernev, and T. Dickinson. 2013. “Effects of Media Criticism on Gatekeeping Trust and Implications for Agenda Setting.” Journal of Communication 63, no. 2: 351–72.
Quattrociocchi, Walter, Antonio Scala, and Cass R. Sunstein. 2016. “Echo Chambers on Facebook.” Harvard.edu. http://www.law.harvard.edu/programs/olin_center/papers/pdf/Sunstein_877.pdf.
Schmidt, Eric. 2011. Testimony of Eric Schmidt, Executive Chairman, Google Inc., before the Senate Committee on the Judiciary, Subcommittee on Antitrust, Competition Policy, and Consumer Rights, September 21, 2011. http://searchengineland.com/figz/wp-content/seloads/2011/09/Eric-Schmidt-Testimony.pdf.
Schmidt, Ana Lucía, Fabiana Zollo, Michela Del Vicario, Alessandro Bessi, Antonio Scala, Guido Caldarelli, H. Eugene Stanley, and Walter Quattrociocchi. 2017. “Anatomy of News Consumption on Facebook.” Proceedings of the National Academy of Sciences 114, no. 12: 3035–39.
Shoemaker, Pamela J., and Stephen D. Reese. 1996. Mediating the Message: Theories of Influences on Mass Media Content. White Plains, NY: Longman.
Skogerbø, Eli, Axel Bruns, Andrew Quodling, and Thomas Ingebretsen. 2016. “Agenda-Setting Revisited: Social Media and Sourcing in Mainstream Journalism.” In The Routledge Companion to Social Media and Politics, edited by Axel Bruns, Gunn Enli, Eli Skogerbø, Anders Olof Larsson, and Christian Christensen, 104–20. New York: Routledge.
Sunstein, Cass R. 2001. Republic.com. Princeton: Princeton University Press.
Sunstein, Cass R. 2009. Republic.com 2.0. Princeton: Princeton University Press.
Temple, Mick. 2017. “It’s the Sun Wot Lost It.” Electionanalysis.co.uk. http://www.electionanalysis.uk/uk-election-analysis-2017/section-3-news-and-journalism/its-the-sun-wot-lost-it/.
Usher, Nikki. 2015. “Who’s Afraid of a Big Bad Algorithm?” Columbia Journalism Review. http://www.cjr.org/analysis/whos_afraid_of_a_big_bad_algorithm.php.
Van Aelst, P., and S. Walgrave. 2011. “Minimal or Massive? The Political Agenda-Setting Power of the Mass Media According to Different Methods.” International Journal of Press/Politics, 1940161211406727.
Webster, James G., and Thomas B. Ksiazek. 2012. “The Dynamics of Audience Fragmentation: Public Attention in an Age of Digital Media.” Journal of Communication 62, no. 1: 39–56.
White, David M. 1950. “The ‘Gate Keeper’: A Case Study in the Selection of News.” Journalism Quarterly 27: 383–90.
Zaller, John. 2003. “A New Standard of News Quality: Burglar Alarms for the Monitorial Citizen.” Political Communication 20, no. 2: 109–30.
CHAPTER 9
Free Expression? Dominant Information Intermediaries as Arbiters of Internet Speech
BEN WAGNER
INTRODUCTION
The public sphere and free expression have always been deeply intertwined. The first Habermasian imaginaries of what a public sphere might look like stem from a period in which crown copyright was being loosened, leading to a new flourishing of debate around key political issues (Calhoun 1992, 14). To write about free expression is thus never just about censorship and the limitation of speech, but also about the societies enabled through free and open discourse, for which the public sphere is just one of many metaphors. The creation of speech spaces—wide or narrow—in which debate can flourish and narratives develop is at the core of the idea of freedom of expression (Wagner 2016b). Within the academic debate on free expression, far too much emphasis is placed on the edges of such speech spaces—that is, on what the boundaries of free expression are—and not enough on the core, the speech that free expression enables. This chapter attempts to develop a narrative focused more on enabling free debates and expansive speech spaces than on the exact limitations and the cases of content that are limited. It is argued in the following that a considerable number of the challenges related to
freedom of expression on the Internet only exist because of the existence of large, quasi-monopolistic Internet intermediaries. If such dominant intermediaries did not exist, many of these problems would not exist either. Challenges to digital free expression are thus directly related to the dominance of a select few actors such as Google or Facebook. It is because of the dominance of these tech giants that the terms of service they apply bear no direct correlation to any specific legal system. This is not inevitable: these corporations could choose a legal system on which to base their decisions, but they choose instead to create their own rules. Owing to their dominance, these rules become de facto law in many parts of the world where the companies have a substantial market share. The core challenge in relation to these tech giants is not convincing them to apply regulatory frameworks or legal provisions, but rather the fact that they have created their own regulatory and contractual frameworks. Because of the size, scale, and power of such organizations, they are currently able to resist regulatory encroachment, thereby effectively insulating themselves from outside influence on their own rules and norms in relation to content. This is not to say that this was always the case: while the exemption of intermediaries from liability for Internet content contributed to their dominance in the past, it does not explain their current dominance in defining speech norms, that is, what human beings can say on the Internet. The effect of such mechanisms is to effectively globalize a specific set of corporate speech norms that were developed at a relatively early stage of the inception of these companies and have not changed significantly since (Wagner 2016b).
The remainder of this chapter attempts to sketch out some of the existing public policy challenges in relation to free expression online and then to discuss to what extent these challenges are linked to dominant online intermediaries. It argues that a considerable number of the challenges around free expression online are not actually related to the existence of online intermediaries per se, but rather to the existence of dominant intermediaries who are able to take advantage of the system of intermediary liability. By generalizing the challenges related to intermediaries (Stalla-Bourdillon 2016) or media outlets (Helberger and Trilling 2016), there is a considerable danger that regulatory burdens will be placed on a multitude of actors for whom many of the regulatory concerns do not apply. As is frequently the case in this context, regulators threaten the regulation of Google, but because all intermediaries are deemed to be equal, the regulatory burden falls on much smaller actors such as Delfi (Benedek and Kettemann 2014). As this approach is clearly insufficient, the conclusions suggest a differentiation between dominant and nondominant actors.
DIGITAL DOMINANCE AND FREEDOM OF EXPRESSION: WHERE DO WE STAND?
How certain actors came to be the dominant Internet platforms is a question beyond the scope of this chapter. The effect of these shifts has been to make a considerable proportion of Internet speech dependent on a few large Internet intermediaries, most notably Facebook and Google. These two platforms have an inordinate level of control over the online advertising market, together controlling 65% of online advertising revenues in 2015 (Ingram 2017). The same can be said of online access to media content and search results, with large parts of the time spent on these platforms dominated by Facebook and Google respectively. According to Facebook, it had “1.28 billion daily active users on average for March 2017,” while “85.8% of our daily active users are outside the US and Canada” (Facebook Newsroom 2017). According to the Reuters Digital News Report 2017, for 64% of 18- to 24-year-olds, 58% of 25- to 34-year-olds, and 49% of 35- to 44-year-olds, “online” is the primary source of news (Newman et al. 2017; Newman and Fletcher, this volume), a proportion that is only likely to grow in future given the age groups involved. Moreover, it should be considered that the Internet is a key source for print and television reporting as well, suggesting that the influence of online media is far greater than the proportion of media content consumed on online platforms alone. For example, during the Arab uprisings in Tunisia, it was not online content per se that influenced the media sphere but rather the interrelationship between online content, satellite television, and mass public demonstrations that shaped the media environment (Wagner 2011). The effect of this dominance on freedom of expression has been quite pronounced. To give the example of just one type of content takedown, Facebook states that “on average, [they] deleted around 66,000 posts reported as hate speech per week—that’s around 288,000 posts a month globally” (Allan 2017).
Historically, as both Facebook and Google grew very quickly, due in no small part to intermediary liability exemptions, they were essentially able to set their own norms in the area of content regulation. Because both companies developed these norms in the United States, the norms they use to govern free expression are in some ways based on US understandings of freedom of expression. However, as both platforms wanted to target not just adults but also teenagers, the focus of their speech norms has been set by US online advertising regulation from the 1990s (COPPA), which stipulates that it is illegal to target online advertising at children younger than 13 years of age. As a result, platforms like Google and
Facebook have adapted their speech norms to cater to the sensitivities of young teenagers and their parents (Wagner 2016b). There have been considerable international debates about whether such norms are indeed appropriate for countries outside the United States, with frequent suggestions to decentralize decision-making capacity around speech issues away from online intermediaries (Wu 2012). However, more recent struggles between the Norwegian prime minister and Facebook about what constitutes appropriate online content suggest that the question of appropriate regulation of speech on Facebook’s platform is anything but settled (Ross and Wong 2016). Similar things can be said of Google’s position in the search market, where the speech norms used are primarily defined by adherence to US laws within a certain age group. While there are certainly notable exceptions, like the implementation of the so-called Right to Be Forgotten decision of the ECJ, by and large these norms have not changed significantly and still serve as the basis for Google’s platform speech norms to this day (Wagner 2016b). While credible threats of regulation aimed at limiting access to child sexual abuse material or content that incites violence or promotes violent terrorist activity have been effective in shifting Google’s or Facebook’s internal speech norms in certain limited areas (Wagner 2013), these threats are limited and should not be overstated. Due to the economic size and global scope of both platforms, these tech giants are essentially able to set their own speech norms. The significance of this development cannot be overstated, as it means that the boundaries of dominant speech norms on the Internet are set by private rather than public actors who dominate the public sphere.
Thus in June 2017 the military government of Thailand was able to convince YouTube to stop showing a clip from the Charlie Chaplin film The Great Dictator after a civil society group called on Thai citizens to watch the clip, in which “Chaplin exhorts the people to take back power from the dictator” (Palatino 2017). While this specific case might seem slightly absurd, it illustrates nicely the effects that policy pressure exerts on dominant social media platforms: short-term, specific limitations of specific content. Even the European Commission, with its considerable regulatory clout, has had no success in convincing Facebook to change its internal regulatory procedures around freedom of expression. This is particularly problematic because the governance mechanisms around these norms, such as transparency, accountability, or redress mechanisms, are highly limited. There is no credible, publicly available information on how the norms used by both companies are applied, how many people in total are responsible for applying them, or whether the decision-making process is conducted by automated or human systems.
While dominant platforms are happy to provide general statements about how many (new) staff they employ in content moderation, it is almost impossible to know which decisions are made by human beings and which by automated systems. Even their transparency reports do not provide this kind of information for the entire platform, but simply provide an overview of some of the requests received to take down content. This essentially makes the decision-making systems around content takedown at dominant intermediaries a black box. Academic researchers, policymakers, and the general public can only guess at what these private companies might be doing and have no control over the outcome of the decisions made by platforms. These governance deficits go beyond content takedown decisions made as a result of the 2014 Right to Be Forgotten ruling of the Court of Justice of the European Union (Brock 2016); rather, they refer to any form of content takedown via the flagging mechanisms on the Facebook, Google, or Twitter platforms. To make matters worse, the typically self-regulatory model present in this area means that these private intermediaries are seen as “organizationally responsible” for problems that appear on their platforms. As a result, considerable political and social pressure is exerted on these platforms to resolve the problems “themselves.” Thus the 2017 regulatory measures by the German government to improve the enforcement of the law in social networks, mainly to reduce the prevalence of hate speech (known formally as the Netzwerkdurchsetzungsgesetz1 or NetzDG), as well as the French, British, and EU initiatives (Aswad 2016; Bamat 2015; Rifkind 2014) to reduce hate speech on social media platforms in 2014, 2015, and 2016, all ended up pushing for greater self-regulation by the social media platforms themselves.
While this mechanism makes sense in a fragmented market with numerous actors, where market players compete both on price and on the quality of their governance arrangements, in markets dominated by one player there is no competition around governance arrangements. Thus pressure by political actors leads to a “spiral of privatised regulation”: more and more public issues appear on the platform, more and more pressure is exerted on Facebook and Google, and more and more self-regulatory solutions are found to these problems, solutions which, crucially, are also implemented by Facebook and Google. Examples of this include
1. See http://dip21.bundestag.de/dip21/btd/18/130/1813013.pdf.
both the German NetzDG and the EU code of conduct on illegal online hate speech (European Commission 2016). Admittedly, the “spiral of privatised regulation” could also be described as a shift toward self-regulated platforms. However, simply describing it as a “shift” fails to acknowledge the important and quite fundamental societal changes taking place as ever more types of human expression slowly move onto online platforms. This cyclical and systematic process goes far beyond news media regulation or telecommunications regulation; rather, it reorients the governance of societal information systems. For example, 30 years ago it was relatively difficult to publish information widely in Germany without listing within the publication an individual responsible under German press law, marked as “Verantwortlich im Sinne des Presserechts” or V.i.S.d.P. While there have always been illicit publications that avoided including this information, digital communications have made the broad distribution of information without such attribution under German press law increasingly simple. Moreover, legal protections such as intermediary liability law shield Internet intermediaries from liability to a far greater extent than comparable press law shields publishers. While the authors of neither Section 230 of the Communications Decency Act 1996 nor the EU e-commerce directive are likely to have expected this at the time, their liability exemptions had a de facto deregulatory effect on societal communications. Building on the assumption that Internet regulatory norms will continue to encompass ever-greater amounts of societal expression, the spiral of privatized regulation describes a specific political process that has developed in the last decade, in which government policymakers:

1. Become concerned about a specific type of Internet content (child sexual abuse material, hate speech, terrorist propaganda, etc.);
2. Believe that the only way to meaningfully address this problem is to change the way dominant online platforms moderate Internet content;
3. In order to respond to this challenge, threaten draconian legislative measures, which they do not actually wish to implement as these would be financially and politically costly;
4. Instead use these threats as leverage to convince large dominant online platforms to change their content moderation policies both in scope (more types of content) and process (faster takedown);
5. Find that the dominant online platforms agree to do so in return for being able to act “in house” and to implement the regulations themselves.
This process repeats itself in every new area where online content is considered to be a problem, as a result of which the norms by which large dominant online platforms regulate Internet content have become a strange mixture of the platforms’ existing norms and piecemeal responses to the threats of large powerful actors. Precisely because threats of legislation are not the same as actual law and are never tested in court, the scope of potential threats is essentially endless. This regulatory bargain ensures that government regulators push for self-regulatory measures that would not be legal if they were enshrined in law (Mueller 2010, 213), while simultaneously ensuring that all power and responsibility around implementation lies with the dominant private sector actors. Combined with the size and political power of these two dominant actors, the spiral of privatized regulation has considerable effects on the ability of governments to ensure effective governance of speech online. This has nothing to do with the actual boundaries of speech, whether these are more limiting or more liberal. Rather, it ensures that the decisions made are subject to no meaningful form of accountability, transparency, redress, or due process, as both the process and the final decision are exclusively controlled by Facebook and Google. Thus even a Facebook post by a head of state is not considered off-limits, with Facebook deleting a historical picture of a nude child posted by the Norwegian prime minister (Ross and Wong 2016). While Facebook has since adapted its policy on political speech to exempt figures like Donald Trump, even when his statements on Facebook “violated the company’s written policies against ‘calls for exclusion’ of a protected group” (Angwin and Grassegger 2017), this does not resolve the challenge.
Reportedly, the decision to exempt Trump from the rules normally applied on Facebook was made at the direct “order of Mark Zuckerberg,” raising the question of what legitimacy the CEO of a private company has in determining legitimate political speech. Were the procedures, processes, and norms of Facebook sufficiently robust, there would be no need for them to be overridden by the CEO. While this is a problem in and of itself, this kind of speech governance has specific consequences due to the dominant position of these intermediaries. These challenges are now discussed, particularly in regard to this question: Are the specific challenges discussed here related to the dominant position of these intermediaries, or could they also exist without it?
CHALLENGES OF DOMINANCE TO FREEDOM OF EXPRESSION
The size and scale of these intermediaries mean that they are able to do things that other intermediaries cannot, such as influencing the outcome of elections, exerting a chilling effect as a consequence of their dominance, and defining the scope of what is considered “the Internet.”
The Ability to Change the Outcome of (“Swing”) Elections
One of the most obvious and important suggestions about digital dominance is that companies like Google or Facebook could conceivably be able to swing elections. This example is included here because of the considerable importance elections have both for enabling speech and for deciding which speech has impact. The electoral process is a motor for freedom of expression but can also fatally impede it, depending on the nature of the electoral process. There are some documented examples of this conducted by Facebook, which suggest it is able to increase voter turnout by a statistically significant amount (Bond et al. 2012). Aside from the challenges raised by the lack of user consent within the experiment or the lack of transparency in selling advertising to political parties, the broader challenge is that any one actor has the ability to shift voter turnout, which can have considerable effects on election results (Osborn, McClurg, and Knoll 2010). The research of Epstein and Robertson (2015) also suggests that the manipulation of Google’s search results could have considerable effects on elections; however, Zweig (2017) has argued that while the effect they describe clearly exists, its magnitude is overstated. This is clearly a problem that exists only by virtue of digital dominance. If true competition existed in the search or social networking markets, it would be highly unlikely that this level of election-influencing power could exist. Thus, the power to influence digital media environments at a scale necessary to “swing” an electoral outcome is closely linked to the existence of a dominant actor in a specific country.
Thus Google’s more than 80% share of the search market in the United States, the UK, France, Canada, Germany, Australia, Italy, Spain, India, and Brazil (Statista 2017) means that the search results in any of these countries can be shifted, influenced, or manipulated as suggested by Epstein and Robertson above (see also Epstein, this volume) to achieve specific political outcomes. Similar effects could not be achieved without dominance and would require considerable coordination among a diverse set of
private companies, making influencing an election more difficult by several orders of magnitude. Moreover, achieving the same effects would require coordinated action across organizations, something that is far less likely given the considerable trust issues involved. It seems safe to say that the potential effects on elections would not exist without such powerful dominant actors. Importantly, it is often forgotten that one key aspect of free and fair elections is the configuration of the media environment in the months before the election. Whether an election’s outcome is being manipulated is not related to the stuffing of ballot boxes alone. Rather, election observers, who are responsible for ensuring free and fair elections, equally look at the media climate, the balance in media presence of different candidates, and their ability to get their message out (ODIHR 2007). Here the control over information environments by intermediaries like Google or Facebook is particularly problematic, as they evidently have the ability to strongly influence information environments in numerous countries around the world. For example, there have been numerous suggestions that the most recent elections in the UK and the United States were heavily influenced by Facebook and Google. While it is impossible to know how great the influence of dominant intermediaries was, previous research suggests that their influence could have a significant, measurable effect on the electorate (Bond et al. 2012). In particular, the role of companies like Cambridge Analytica has been heavily debated in the media, with a strong suggestion by numerous commentators that their actions were illegitimate and that, as a result, “democracy was hijacked” (Cadwalladr 2017), in what some have described as a new form of “data-led politics” (Beer 2017).
As noted by David Beer, this trend is less about the specific efficacy of the technologies themselves and more about the expectations that surround them. Beer’s argument that data-driven politics are engaged in an “attempt to project an all-knowing and all-seeing power onto those who use them” (Beer 2017) is compelling in this context, as the suggestion of effective, actionable knowledge itself creates a new kind of targeted politics. Because this power is unchecked, it is an open question how election authorities or observers could or should ensure that free and fair elections can indeed take place (Tambini 2016). The most obvious avenue in this regard is to provide greater regulatory scrutiny of the funds that are spent on political advertising on dominant platforms. Currently there is no regulatory scrutiny of these forms of advertising, to the point that not even their content is publicly available. This phenomenon—known colloquially as “dark posts”—has become common in political advertising in recent years and allows different targeted adverts to be shown to each individual voter without the possibility of any outside scrutiny. This level of targeting makes it extremely difficult to provide effective oversight of these kinds of campaigning mechanisms and is likely to require additional regulatory scrutiny. At a minimum, election authorities should have access to all campaign materials on dominant platforms. Another area where additional regulatory intervention could be helpful concerns the personal data on individuals stored by political parties and campaigns. Currently, beyond general data protection regulations, there is no specific oversight of this data, which could be regarded as being as valuable as the funds that campaigns have access to. As a result, disclosure requirements similar to those that apply to political donations and funding should also apply to data. As will be discussed in greater detail in the next section, dominant intermediaries have access to vast amounts of personal data. As a result, any interactions between these intermediaries and political parties and campaigns are in need of special regulatory scrutiny and oversight.
Vast Access to Personal Data by Dominant Actors Chilling Speech?
Another key challenge of digital dominance in this regard is the potential “chilling effect” of individual organizations having vast amounts of personal data at their disposal. As the UN Special Rapporteur on Freedom of Expression, Frank La Rue, argued compellingly, there is a close relationship between privacy and freedom of expression, as a result of which harms to privacy can directly and negatively affect freedom of expression. For example, the chilling effect created by NSA surveillance—when it was made public—was considerable and is well documented (Stoycheff 2016). To provide just one example, individuals became less likely to search for information on sensitive personal questions such as sexually transmitted diseases or health risks (Marthews and Tucker 2017). However, it is unclear whether digital dominance contributes to this problem. Due to the numerous actors engaged in trading data (Christl and Spiekermann 2016) and, more broadly, the business model of surveillance capitalism (Zuboff 2015), markets already ensure that numerous actors have access to this data. There is no strong indication that these markets would radically change were digital dominance not to exist. One relevant shift in these markets could come to pass in Europe, where the General Data Protection Regulation (GDPR) is to be implemented from 2018 onward. Through a stronger implementation of purpose limitation—through which companies must clearly define what they want to use the personal data they collect for—it is possible that digital dominance would have a greater effect, as such purpose limitations would likely limit the extent of data trading possible between private companies (Herrmann et al. 2016). While greater privacy protections could limit chilling effects on freedom of expression, they would also reinforce the effects of digital dominance. Should data markets indeed dry up, the main “chill” that speech could receive would come through the data housed at Google or Facebook. While the overall chilling effect of a lack of privacy would be reduced through greater privacy protections, the capacity of dominant actors to chill speech would be increased considerably. Here the specific concentration of access to vast, encompassing datasets is important. It is often argued that powerful actors such as states govern through the shadow of hierarchy (Héritier and Eckert 2008). Part of the problem with dominant actors such as Google or Facebook is that their very existence produces a similar “corporate shadow of hierarchy,” in which they are able to exert influence solely by virtue of being very large and having access to vast amounts of personal data. This makes individuals potentially vulnerable to these organizations, even if they have never actually interacted with them. As their data management practices intentionally obscure what personal data they actually hold, it becomes very difficult for individuals to know for certain what data these intermediaries hold, beyond the reasonable basic assumption that it is likely a very large amount.
Freedom of Thought and Limiting the Right to Seek Information
A further challenge, related to the chilling effects of limited privacy on freedom of expression, concerns freedom of thought itself. The idea of freedom of thought (sometimes also termed “intellectual privacy”) is closely related to the right to seek and receive information, a core component of the right to freedom of expression, which is typically considered to include seeking, receiving, and imparting information (La Rue 2011). While online intermediaries doubtless provide access to extraordinary amounts of information that would be far more difficult to access without them, the existence of dominant intermediaries leads to a poverty of information discovery mechanisms. In these terms, to exist on the Internet is to be in the top 10 of Google search results or to be quoted in a frequently cited social media post (Lovink 2015). The mechanics of popularity for such “digital existence” are focused on specific areas of human interaction, which—because they are defined by only a few companies—are necessarily limited. Thus digital dominance limits information discovery mechanisms, which in turn influences how human beings can see and imagine the world. This would be neither necessary nor even reasonable to imagine without digital dominance; rather, it is a direct consequence of the dominance of a select few actors. This is particularly evident in the case of “Free Basics” (or Facebook Zero, as it was previously called), a basic service provided in countries where many individuals have difficulty affording Internet access. Importantly, this service was provided “zero-rated,” that is, without up-front monetary cost. The service has been heavily criticized from numerous sides as an attempt to replace the Internet with a limited walled garden to which only Facebook has access. Some governments saw this service as such a threat that they took measures to prevent it, with the Indian government going as far as banning the service (Broeders and Taylor 2017; Tripuraneni 2016; Yim, Gomez, and Carter 2016). Such bans do not change the fact that for many individuals around the world, Facebook and Google constitute the whole Internet. They do not browse websites or enter URLs, and they see the portals these intermediaries provide as giving access to the totality of the Internet’s information. There is, moreover, a strong suggestion that dominant actors are stifling innovation in their respective fields. If other forms of search, social networking, or, more broadly, information discovery and networking tools could exist but do not because of the dominant status of a few actors, this inevitably limits the forms of information discovery available to users on the Internet. To a certain degree it can be argued that this is not the fault of dominant intermediaries but simply a function of market conditions.
At the same time, insofar as these actors abuse their dominant market position—as has frequently been alleged by smaller competitors such as Foundem and numerous others (Crane 2011; Edelman 2015)—dominance doubtless contributes to limiting competition and innovation. Limiting the right to seek information by obstructing the numerous ways in which it can be exercised has a direct impact on freedom of expression.
Setting Norms for Speech and Society
Finally, one specific set of challenges related to dominance is the norm-setting power of online intermediaries. Due to the sheer size and scale of companies like Google and Facebook, they are able to set social and speech norms through their very operation. Because they are so dominant in their respective markets, they are able to set standard forms of interaction that are then generalized across whole societies. This norm-setting power does not even concern specific political topics so much as how the structures around societal debates are framed. For example, the issue is not that Facebook or Google have themselves taken certain positions on whether images of breastfeeding mothers are an appropriate form of content on their platforms, but rather that Facebook itself has created the norms by which speech is judged on a wider scale. These companies have thus become sites of contestation regarding issues as diverse as appropriate territorial boundaries, adherence to local norms, or even access to the truth itself. This is particularly evident in Google’s search boxes, which attempt to provide authoritative, “true” answers to questions that users ask without their having to resort to search results. Thus, a user might ask for details about a flight, a city, or a currency conversion, but also far more controversial political questions. While much of the content in search boxes is drawn from Wikipedia, the fact that they attempt to provide authoritative answers not just to easily resolvable questions of fact but also to many highly political questions should give pause for thought about the normative power Google’s search boxes have developed. Many of the same challenges apply to the liking process around forms of content, on which personalized content is based. The key issue is not the (frequently disputed) filter bubble itself, but rather the massive institutionalization of “like” buttons right across the Internet. As the usage of these buttons has direct and quite substantive effects on the content that is “liked” by users, the process of liking content has become institutionalized as a means of gauging popularity built on systems of self-affirmation (Toma and Hancock 2013). As a result, the process of attributing quantifiable popularity has been institutionalized as a common social process for all forms of content. The self-confirming spiral of personalization that this form of interaction perpetuates is itself just a symptom of the infrastructure that creates the potential for personalization and a thirst for affirmative likes (Lovink 2015). In short, Facebook has created norms around how we judge Internet speech and processes that define how we evaluate it. In essence, societies have been coached and educated by Facebook in how to speak. Whether this is a good thing or a bad thing is beside the point. It clearly has considerable effects on human interaction online and, it can be argued, far outside the online sphere as well. This would not be possible were actors like Google or Facebook not dominant; rather, their dominance is a constitutive part of this norm-setting power.
POTENTIAL REGULATORY RESPONSES TO DIGITAL DOMINANCE
Following the preceding broad overview of the challenges related to dominance, this section looks in greater detail at which regulatory measures might respond effectively to them. As the regulatory challenges related to freedom of expression are distinct from those in other areas of digital dominance, it is important to discuss them separately in the context of this chapter. The following constitutes a broad overview of measures that could provide effective responses to the challenges listed above.
Transparency Requirements for Dominant Platforms
At present, neither regulators nor citizens have access to sufficient information about the operation of dominant platforms. Thus, neither election authorities nor election monitors have information that allows them to “assess where voters get their news” (ODIHR 2007, 16) or whether digital “public-service messages have a useful impact in voter education and in getting out the vote” (ODIHR 2007, 16). Media regulators face similar challenges beyond the election period, as the existence of large dominant intermediaries impairs their ability to accurately understand, and thereby effectively regulate, media markets. Finally, independent academic research on key elements of the power of large dominant intermediaries is essentially impossible, as—beyond infrequent leaks to the press (Gillespie 2012; Hopkins 2017)—there is no publicly available information on key elements of their operations. Given the public scope and scale of dominant intermediaries, a greater level of transparency about key elements of their operations is essential. Basic information about what content is removed from their sites, by whom it is removed, and how content is curated is not available to the public or even to the relevant regulators. While there could be a risk that such transparency could lead to a “gaming” of platform management systems (Diakopoulos and Koliska 2017, 13), such risks are widely overstated. There is an overriding public interest in greater transparency of large private intermediaries, and the information submitted need not be such that it enables manipulation of the system. As existing corporate transparency initiatives have thus far proved insufficient to provide such information, it is likely that government regulation will be required to ensure greater transparency. The most effective manner of doing so would likely combine transparency reports with regular independent audits to ensure their accuracy. Such transparency reports and audits could conceivably be developed within a framework similar to media self-regulation, in which only a regulatory framework of requirements is set, while the precise functioning of the system and the individuals involved are left to the private companies themselves. Notably, organizations like the Global Network Initiative (GNI) and the Telecommunications Industry Dialogue have both made attempts to move in this direction, while stopping considerably short of actually producing reliable governance in this area. The German NetzDG of 2017 has also attempted to provide greater transparency in this area by instituting twice-yearly reporting requirements for content takedowns in Germany by social media providers. It remains to be seen whether this requirement will be effective, as the reporting requirements under §2 NetzDG merely require a German-language report about how the platform deals with complaints about illegal content.
Considerably Higher Minimum Standards for Content Self-Regulation
Another area where regulatory responses would be helpful relates to the quality of content self-regulation by dominant Internet intermediaries. While the quality of content self-regulation varies widely across the Internet, this variation does not pose the same significant regulatory problem in the absence of dominance. At present, there are no significant procedural legal standards for content regulation applied within large online intermediaries like Google or Facebook. Thus, while they need to ensure that illegal content is taken down once they are made aware of it, the law does not require certain minimum standards in doing so, and even less so in regard to their own community guidelines. Here shifts are clearly required to ensure that large online intermediaries provide users whose content is being taken down with basic procedural mechanisms and safeguards similar to what already exists in law. Users need to:
• be aware of why their content is being taken down;
• be aware of the legal or extra-legal basis for the decision to take down their content;
• have a clear understanding of the process that led to the decision;
• know which individuals and technical systems were involved in making the decision;
• have access to an impartial appeals mechanism if they disagree with the decision;
• have access to redress mechanisms if mistakes are made.
These mechanisms—which are not dissimilar from existing judicial mechanisms—are insufficiently implemented by large Internet intermediaries at present. It is almost impossible for users whose content was removed to know why decisions were made or what they can do about them. The same applies to users who “flag” content they deem inappropriate, who are also unable to know with any degree of certainty whether a computer or a human being will respond to their request. It can also be suggested that, from a certain point onward, the escalation mechanisms around content removal should no longer be conducted by the private companies themselves but by an independent third-party authority. As Martin Moore and Gordon Ramsay have suggested, these kinds of mechanisms are not entirely dissimilar to the low-cost arbitration mechanisms proposed by the Leveson inquiry, which could be “as suited to the digital environment as the print environment” (2016, 3). The purpose of such arbitration mechanisms is not actively to harm business models but rather to ensure that decisions of high public relevance cannot be made within private corporations alone. However, because having such decisions made by government executives is equally problematic from the perspective of freedom of expression and governmental overreach, low-cost external arbitration via a self-regulatory mechanism seems like a potentially effective alternative for ensuring that an independent third party is involved.
This is not for a second to suggest that such decisions might not also be made within national judiciaries, but rather to argue that providing low-cost and independent arbitration mechanisms could usefully complement legal decision-making in situations in which—for reasons of cost or lack of judicial independence—recourse to legal channels is not an option. Such situations include any of the many countries without a functioning independent legal system, or countries like the UK, where the costs of engaging in litigation, particularly in areas such as libel law, are so high that it has been argued there is a “chilling effect of English libel law” (Hurley 2009). Providing a low-cost arbitration alternative in a manner that shields intermediaries from liability could also provide an incentive for intermediaries to participate in such a system.
Humanizing Content Regulation Online
At present a significant proportion of content regulation online takes place through automated systems. These are involved not only in supporting takedown decisions but also in making decisions about content takedowns themselves in an automated or quasi-automated manner (Wagner 2016a). This raises considerable challenges for freedom of expression, as many representatives of these private corporations themselves acknowledge. Consistent pressure by governments to ensure ever higher levels of automation (Rifkind 2014) and ever greater speeds (Wimmers and Heymann 2017) of content regulation by private companies therefore poses considerable challenges for freedom of expression. For example, in 2014, the British Intelligence and Security Committee of Parliament (ISC) called on Facebook to provide a list of all automated systems that could indicate the identity of terrorists. Similarly, the German NetzDG requires obviously illegal content to be removed within 24 hours of a complaint being made. This is problematic because the consistent push toward speed and automation limits the ability of individual human beings to make well-founded decisions about content on Internet intermediaries. Given their size and scale, dominant Internet intermediaries are particularly prone to using increasing amounts of technology to lower the cost of content regulation. However, it can be argued that human decision-making is at the core of the human rights regime, which cannot meaningfully be supported by delegating such decisions to automated systems (Wagner 2016a). Thus, dominant intermediaries need to provide an overview of which decisions are currently made in a fully or partially automated way, in a manner that is transparent both to the users involved in these decision-making procedures and to the general public. Wherever possible, automated decision-making should then be avoided and human decision-making preferred.
Detechnifying these key aspects of online content regulation is an important step toward ensuring freedom of expression in dominant Internet intermediaries.
CONCLUSION
The role of a few dominant Internet intermediaries is fundamentally changing the Internet’s ability to enable and promote freedom of expression. Their considerable concentration of market power limits many of the normal mechanisms that would allow such challenges to be resolved by the market. As a result, specific regulatory interventions are likely to be necessary to safeguard freedom of expression, while of course ensuring that such interventions do not limit freedom of expression even further. With appropriate regulatory design this is perfectly possible, and it is far too simplistic to suggest that additional government intervention in this area will automatically lead to censorship or a restriction of freedom of expression. In the absence of such regulatory innovation, the current spiral of privatized regulation will continue to ensure that private companies come under ever greater pressure from regulatory authorities to limit freedom of expression in their stead, without any of the normal safeguards or procedural and structural accountability mechanisms that would typically be in place in the public sector. By not taking responsibility for the effects of their pressure on dominant Internet intermediaries, states are actively involved in creating content regulatory frameworks that limit freedom of expression. This pattern can be seen in the German NetzDG, the EU Internet Forum, and the EU Code of Conduct. The problem is compounded by an unwillingness to acknowledge the differences between dominant intermediaries and all other actors within the system of governance. There is an evident need for a new regulatory category that considers dominant intermediaries in their own specific regulatory context. Simply put, Google and small news sites such as Delfi should not be regulated in the same manner. Finally, the current cycle of perpetual media outrage in the policy debate around dominant intermediaries is not conducive to finding appropriate regulatory mechanisms to respond to the challenge. Rather than demonizing or evangelizing the role that Google and Facebook play in the existing Internet ecosystem, it is sufficient to note that their existence in their current form poses considerable challenges to freedom of expression. These challenges mainly concern procedural governance rather than specific forms of content per se. The argument is not that these companies should change the norms by which they regulate content on their platforms, but rather that the governance process through which dominant intermediaries regulate Internet content is wholly insufficient given the significant public interest in ensuring freedom of expression on their platforms. Their dominant role means that they are no longer just private companies: their platforms have grown to provide important public goods, and the regulatory framework around them needs to be developed further to reflect this.
REFERENCES
Allan, Richard. 2017. “Hard Questions: Hate Speech.” Facebook Newsroom. Accessed July 3, 2017. https://newsroom.fb.com/news/2017/06/hard-questions-hate-speech/.
Angwin, Julia, and Hannes Grassegger. 2017. “Facebook’s Secret Censorship Rules Protect White Men from Hate Speech but Not Black Children.” ProPublica. Accessed July 4, 2017. https://www.propublica.org/article/facebook-hate-speech-censorship-internal-documents-algorithms.
Aswad, Evelyn. 2016. “The Role of U.S. Technology Companies as Enforcers of Europe’s New Internet Hate Speech Ban.” Columbia Human Rights Law Review. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2829175.
Bamat, Joseph. 2015. “France Prepares for War against Online Hate Speech.” France 24. Accessed August 4, 2017. http://www.france24.com/en/20150224-france-online-hate-speech-internet-anti-semitic-racism-legal-reforms-taubira.
Beer, David. 2017. “Data-Led Politics: Do Analytics Have the Power That We Are Led to Believe?” British Politics and Policy at LSE. http://eprints.lse.ac.uk/70078/.
Benedek, Wolfgang, and Matthias C. Kettemann. 2014. Freedom of Expression and the Internet. Strasbourg: Council of Europe.
Bond, Robert M., Christopher J. Fariss, Jason J. Jones, Adam D. I. Kramer, Cameron Marlow, Jaime E. Settle, and James H. Fowler. 2012. “A 61-Million-Person Experiment in Social Influence and Political Mobilization.” Nature 489, no. 7415: 295–98.
Brock, George. 2016. The Right to Be Forgotten: Privacy and the Media in the Digital Age. London and New York: I.B. Tauris.
Broeders, Dennis, and Linnet Taylor. 2017. “Does Great Power Come with Great Responsibility? The Need to Talk about Corporate Political Responsibility.” In The Responsibilities of Online Service Providers, edited by Mariarosaria Taddeo and Luciano Floridi, 315–23. Switzerland: Springer. http://link.springer.com/chapter/10.1007/978-3-319-47852-4_17.
Cadwalladr, Carole. 2017. “The Great British Brexit Robbery: How Our Democracy Was Hijacked.” The Guardian, May 7, 2017. https://www.theguardian.com/technology/2017/may/07/the-great-british-brexit-robbery-hijacked-democracy.
Calhoun, Craig J. 1992. Habermas and the Public Sphere. Cambridge: MIT Press.
Christl, Wolfie, and Sarah Spiekermann. 2016. Networks of Control: A Report on Corporate Surveillance, Digital Tracking, Big Data & Privacy. Wien: Facultas.
Crane, Daniel A. 2011. “Search Neutrality as an Antitrust Principle.” George Mason Law Review 19: 1199.
Diakopoulos, Nicholas, and Michael Koliska. 2017. “Algorithmic Transparency in the News Media.” Digital Journalism 5, no. 7: 809–28.
Edelman, Benjamin. 2015. “Does Google Leverage Market Power through Tying and Bundling?” Journal of Competition Law and Economics 11, no. 2: 365–400.
Epstein, Robert, and Ronald E. Robertson. 2015. “The Search Engine Manipulation Effect (SEME) and Its Possible Impact on the Outcomes of Elections.” Proceedings of the National Academy of Sciences 112, no. 33: E4512–21.
European Commission. 2016. “European Commission and IT Companies Announce Code of Conduct on Illegal Online Hate Speech.” European Commission press release, May 31, 2016. http://europa.eu/rapid/press-release_IP-16-1937_en.htm.
Facebook Newsroom. 2017. “Company Info.” Accessed July 2, 2017. https://newsroom.fb.com/company-info/.
Gillespie, Tarleton. 2012. “The Dirty Job of Keeping Facebook Clean.” Social Media Collective. Accessed May 26, 2017. https://socialmediacollective.org/2012/02/22/the-dirty-job-of-keeping-facebook-clean/.
Helberger, Natali, and Damian Trilling. 2016. “Facebook Is a News Editor: The Real Issues to Be Concerned About.” LSE Media Policy Project. Accessed September 9, 2016. http://blogs.lse.ac.uk/mediapolicyproject/2016/05/26/facebook-is-a-news-editor-the-real-issues-to-be-concerned-about/.
Héritier, Adrienne, and Sandra Eckert. 2008. “New Modes of Governance in the Shadow of Hierarchy: Self-Regulation by Industry in Europe.” Journal of Public Policy 28, no. 1: 113–38.
Herrmann, Michael, Mireille Hildebrandt, Laura Tielemans, and Claudia Diaz. 2016. “Privacy in Location-Based Services: An Interdisciplinary Approach.” A Journal of Law, Technology and Society 13, no. 2: 144–70.
Hopkins, Nick. 2017. “Revealed: Facebook’s Internal Rulebook on Sex, Terrorism and Violence.” The Guardian, May 21, 2017. https://www.theguardian.com/news/2017/may/21/revealed-facebook-internal-rulebook-sex-terrorism-violence.
Hurley, Richard. 2009. “The Chilling Effect of English Libel Law.” BMJ: British Medical Journal (Online) 339. http://search.proquest.com/openview/5e61f3dae2c4b9fc6ab5d4e55383107b/1?pq-origsite=gscholar&cbl=2043523.
Ingram, Mathew. 2017. “Here’s How Google and Facebook Have Taken Over the Digital Ad Industry.” Fortune, January 4, 2017. http://fortune.com/2017/01/04/google-facebook-ad-industry/.
La Rue, Frank. 2011. “Report of the Special Rapporteur on the Promotion and Protection of the Right to Freedom of Opinion and Expression.” Accessed May 26, 2017. http://ictlogy.net/bibliography/reports/projects.php?idp=2187&lang=ca.
Lovink, Geert. 2015. Zero Comments. Place of publication not identified: Transcript-Verlag. Accessed May 26, 2017. http://public.eblib.com/choice/publicfullrecord.aspx?p=4348314.
Marthews, Alex, and Catherine E. Tucker. 2017. Government Surveillance and Internet Search Behavior. Rochester, NY: Social Science Research Network. https://papers.ssrn.com/abstract=2412564.
Moore, Martin, and Gordon Ramsay. 2016. “Submission to Consultation on the Leveson Inquiry and Its Implementation, Department for Culture, Media and Sport and the Home Office.” Centre for the Study of Media, Communication and Power, King’s College London. https://www.kcl.ac.uk/sspp/policy-institute/publications/CMCP-Consultation-Submission-for-DCMS-100117-Final.pdf.
Mueller, Milton. 2010. Networks and States: The Global Politics of Internet Governance. Cambridge: MIT Press.
Newman, Nic, Richard Fletcher, Antonis Kalogeropoulos, David A. L. Levy, and Rasmus Kleis Nielsen. 2017. Reuters Institute Digital News Report 2017. Oxford: Reuters Institute for the Study of Journalism.
ODIHR. 2007. Handbook for Long-Term Election Observers: Beyond Election Day Observation. Poland: ODIHR.
Osborn, Tracy, Scott D. McClurg, and Benjamin Knoll. 2010. “Voter Mobilization and the Obama Victory.” American Politics Research 38, no. 2: 211–32.
Palatino, Mong. 2017. “At Thailand’s Request, YouTube Blocks Video Clip of Charlie Chaplin’s ‘The Great Dictator’.” Global Voices, June 27, 2017. https://globalvoices.org/2017/06/27/at-thailands-request-youtube-blocks-video-clip-of-charlie-chaplins-the-great-dictator/.
Rifkind, Malcolm. 2014. “Report on the Intelligence Relating to the Murder of Fusilier Lee Rigby.” Intelligence and Security Committee of Parliament, November 25, 2014. http://isc.independent.gov.uk/files/20141125_ISC_Woolwich_Report(website).pdf.
Ross, Alice, and Julia Carrie Wong. 2016. “Facebook Deletes Norwegian PM’s Post as ‘Napalm Girl’ Row Escalates.” The Guardian, September 9, 2016. https://www.theguardian.com/technology/2016/sep/09/facebook-deletes-norway-pms-post-napalm-girl-post-row.
Stalla-Bourdillon, Sophie. 2016. “Internet Intermediaries as Responsible Actors? Why It Is Time to Rethink the E-Commerce Directive as Well.” Rochester, NY: Social Science Research Network. http://papers.ssrn.com/abstract=2808031.
Statista. 2017. “Share of Desktop Search Traffic Originating from Google in Selected Countries as of April 2017.” Accessed September 13, 2017. https://www.statista.com/statistics/220534/googles-share-of-search-market-in-selected-countries/.
Stoycheff, Elizabeth. 2016. “Under Surveillance: Examining Facebook’s Spiral of Silence Effects in the Wake of NSA Internet Monitoring.” Journalism and Mass Communication Quarterly 93, no. 2: 296–311.
Tambini, Damian. 2016. Draft Feasibility Study on the Use of Internet in Elections. Strasbourg, France: Council of Europe. http://rm.coe.int/CoERMPublicCommonSearchServices/DisplayDCTMContent?documentId=09000016806be076.
Toma, Catalina L., and Jeffrey T. Hancock. 2013. “Self-Affirmation Underlies Facebook Use.” Personality and Social Psychology Bulletin 39, no. 3: 321–31.
Tripuraneni, Hanuman Chowdary. 2016. “The Free Basics (of Facebook) Debate in India.” Info 18, no. 3: 1–3.
Wagner, Ben. 2011. “‘I Have Understood You’: The Co-Evolution of Expression and Control on the Internet, Television and Mobile Phones during the Jasmine Revolution in Tunisia.” International Journal of Communication 5: 1295–302.
Wagner, Ben. 2013. “Governing Internet Expression: How Public and Private Regulation Shape Expression Governance.” Journal of Information Technology and Politics 10, no. 3: 389–403.
Wagner, Ben. 2016a. Draft Report on the Human Rights Dimensions of Algorithms. Strasbourg, France: Council of Europe.
Wagner, Ben. 2016b. Global Free Expression: Governing the Boundaries of Internet Content. Cham, Switzerland: Springer International Publishing.
Wimmers, Jörg, and Britta Heymann. 2017. “Zum Referentenentwurf eines Netzwerkdurchsetzungsgesetzes (NetzDG)—eine kritische Stellungnahme.” AfP-Zeitschrift für Medien- und Kommunikationsrecht 48, no. 2: 93–102.
Wu, Tim. 2012. “When Censorship Makes Sense: How YouTube Should Police Hate Speech.” New Republic, September 18, 2012. Accessed October 27, 2012. http://www.tnr.com/blog/plank/107404/when-censorship-makes-sense-how-youtube-should-police-hate-speech.
Free Expression?
[ 239 ]
0 42
Yim, Moonjung, Ricardo Gomez, and Michelle S. Carter. 2016. “Facebook’s “Free Basics”: For or against Community Development?” Journal of Community Informatics 12, no. 2: 217–25. Zuboff, Shoshana. 2015. “Big Other: Surveillance Capitalism and the Prospects of an Information Civilization.” Journal of Information Technology 30, no. 1: 75–89. Zweig, Katharina. 2017. “Watching the Watchers: Epstein and Robertson’s ‘Search Engine Manipulation Effect’.” AlgorithmWatch. Accessed May 26, 2017. https:// algorithmwatch.org/en/watching-the-watchers-epstein-and-robertsons- search-engine-manipulation-effect/.
[ 240 ] Society
CHAPTER 10
The Dependent Press
How Silicon Valley Threatens Independent Journalism
EMILY BELL
“A critical, independent and investigative press is the lifeblood of any democracy. The press must be free from state interference. It must have the economic strength to stand up to the blandishments of government officials. It must have sufficient independence from vested interests to be bold and inquiring without fear or favour. It must enjoy the protection of the constitution, so that it can protect our rights as citizens.” —Nelson Mandela
When violence erupted at a white supremacist rally in Charlottesville in August 2017, the technology platforms used by far-right racist groups to organize were suddenly in a very bright and unwelcome spotlight. One particular service, Cloudflare, which provides protection from hacking and "denial of service" attacks, pulled its service from a site called the Daily Stormer, which was most closely identified with the racist movement. Cloudflare chief executive Matthew Prince issued a memo to his staff explaining that he had pulled the plug on the site's contract because he was "in a bad mood," but he also acknowledged that it was essentially wrong for him to have this much power over how expression moved around the Internet. In a blog post, Prince noted (2017):

In a not-so-distant future, if we're not there already, it may be that if you're going to put content on the Internet you'll need to use a company with a giant network like Cloudflare, Google, Microsoft, Facebook, Amazon, or Alibaba.
For context, Cloudflare currently handles around 10% of Internet requests. Without a clear framework as a guide for content regulation, a small number of companies will largely determine what can and cannot be online.1
The dependency of what we think of as a "free press" on nonjournalistic platforms is not a threat for the future—it is already here. A "free press" is most widely understood to mean the ability of publishers to produce news without interference or censorship from government. In the United States this right is enshrined in the Constitution: under the First Amendment, Congress shall pass no law abridging the freedom of speech or of the press. But now we have mechanisms of speech regulation that sit entirely outside the auspices of government and legal due process, and that can, with the flick of a switch, decide that certain organizations or individuals will cease to exist online.

The first two decades of the 21st century are seeing a dramatic reshaping of the commercial news environment, one that goes far beyond any previous disruption—an upheaval so great it brings into question whether an independent and pluralistic free press can survive outside the embrace of vast social networks and search engines. These titans of technological power serve as the engines of a vibrant information economy, and in many ways their capabilities ought to make for a vastly improved environment for journalism. However, their incentives are linked not to a better-informed society but to increased consumption by users, increased opportunities for advertisers, and larger revenues for shareholders. At no time in history have news audiences had so much choice in the variety of the news that reaches them, while simultaneously having so little insight into how that material reaches them. News organizations and journalists similarly attract far greater audiences than were ever possible in the days of the printed paper, sometimes hundreds of millions of views a day, but are no longer in control of how their own stories are circulated and read.
A further dichotomy in the relationship between the free press and Internet-powered technologies is that the possibilities for publication are limitless, but the sustainability of institutions focused on journalism has become commercially weakened, in some cases fatally so. The functions of independent news publishers have been systematically taken over by a very small handful of opaque commercial technology companies that now play a pivotal role in hosting, monetizing, and distributing journalism, but that do not directly invest in reporters or journalists themselves. In the case of the Daily Stormer, no right-thinking individual would be sorry that its messages are not reaching a wider audience, but the precedent it sets points to a much more troubling future and confirms that censorship and control now sit with companies that do not have the preservation and support of a free press as their primary objective.

In 2010, after its sensational leak of classified cables from US diplomats, Wikileaks was temporarily denied the use of payment services by PayPal, Visa, and Mastercard, with all three companies citing the illegality of the leak as justification for blocking the organization's access (Poulsen 2010).2 Between the Wikileaks case and the Daily Stormer in 2017 there has been a drip feed of cases that have put technology companies in the position of "editor-in-chief" over controversial content. Now it is often social platforms that break news of terrorist events, and in some cases they actually livestream violent or terrorist acts (Beckett 2016a).3 From significant government-endorsed actions, such as Project Jigsaw launched by Google, to individuals banned from Twitter, there have been multiple examples of how technology platforms can affect speech and expression. Project Jigsaw, for example, aims to improve discourse and safety online. One of its experiments, "The Redirect Method," identified potential recruits to ISIS by examining search terms and sent them search results that pressed the case for opposing ISIS ("Redirect Method" n.d.).4 In this specific case people might feel reassured that a large technology company is helping governments target terrorist recruits. But in the abstract, the tools used here—racial profiling, the substitution of search results, the covert circulation of government propaganda—are more worrying.

1. https://blog.cloudflare.com/why-we-terminated-daily-stormer/.
The threat to the free press from platform technologies comes not just from their ability to censor without due process or transparency but also from the development of a publishing model that robs independent journalistic organizations of an important part of their revenue.
2. https://www.wired.com/2010/12/paypal-wikileaks/
3. https://www.cjr.org/special_report/in_the_age_of_livestreamed_terror_platforms_and_publishers_must_rethink_their_roles.php
4. https://redirectmethod.org/

THE RISE OF PLATFORMS AS PUBLISHERS

The rapid rise of the consumer Internet in the 1990s had intrigued established news organizations, offering newspapers a potential way to compete
with television companies and 24-hour media that had been eroding daily newspaper sales since the early 1980s. Publishers envisaged that the World Wide Web would allow far greater numbers of readers than ever before to consume their material across a 24-hour cycle, and that online advertising would add a healthy stream of income on top of their newspaper revenues. And for some time this utopian vision of the future for news publishers held true. Revenues online and in print continued to grow until 2006, when, according to the Pew Research Center, US newspaper advertising revenues peaked at $49.2 billion (State of the News Media 2017);5 by 2016 this figure had fallen off a precipitous cliff, dropping to a mere $18 billion. The healthy advertising boom of the early 2000s masked a much more fundamental shift taking place in technology and user behavior that would, by the end of the decade, pose an existential threat to even the largest and most established news organizations. That lost revenue has largely gone to Facebook and Google, reports the IAB, which between them now share the vast majority of the growing digital ad market.

The principle of "Moore's Law" (an observation by Gordon Moore, the co-founder of Intel, that the number of transistors on a chip, and with it processing power, doubles roughly every two years) meant that web publishing rapidly developed from a one-way pipe for newspapers to publish more of the same—images and text—into a two-way network, where anyone with an Internet connection could upload words, pictures, sound, and video, sharing it with their friends or the world. The advances that heralded what became known as "Web 2.0" encouraged a rash of new companies that put social publishing at their heart. Wikipedia appeared as early as 2001, then MySpace was founded in 2003, Facebook in 2004, YouTube in 2005, and Twitter in 2007.
5. Pew Center, "State of the News Media, 2017."
6. See Tim Wu's book, The Attention Merchants.

Soon every start-up seeking venture capital funding was based on the "social web." The launch of the iPhone in 2007 dragged the experience of consuming and publishing material even further from the format confines of the traditional press. The social web, along with the powerful search capabilities of Google, not only replaced the laborious publishing systems of the industrial press with scaled technologies that anyone could use but also created a whole new attention economy beyond the bundled monopolies of print and broadcast media (Wu 2016).6 The adoption of social media has been as rapid as the decline in newspaper revenues: in 2008 Facebook had 100 million users globally, and in the first quarter of 2017 it had 1.93 billion. Although these metrics are themselves
slippery (an "active user" on Facebook can be someone who has an account but rarely uses it, merely linking other accounts to it), the scale of the shift in the axis of power for communications companies is undeniable.

News publishers had built businesses not simply on their monopolies of production and distribution but also on processes that edited and shaped the presentation of news. The "free press" was often edited in accordance with a particular political viewpoint. Reporting was meant to be carefully scrutinized before it was allowed to appear under the brand or masthead of an official publication. "Production" was as much of the cost of news as reporting. Editors of pages, sections, or publications often held the most prestigious jobs in news. Social media and technology companies started from a fundamentally different premise from that of traditional publishers: that everyone should have an equal voice, everyone could be a publisher, and information and social connections should circulate as freely as possible. Their role was not to decide who or what was published via their technologies, but to gather as many users as possible and encourage them to create social interactions.

Although the social web was gathering pace from 2001 onward, traditional media remained conflicted about their relationship with the new platforms, regarding them as "frenemies": tools that might help the process of journalism while at the same time threatening its revenue streams and its authority. News organizations were generally slower than other web publishers to adopt blogging, commenting, and social media practices. Even when Twitter appeared in 2007, a tool that immediately became central to how journalists reported, tracked stories, and found sources, few organizations had staff focused on social media use. Social media teams barely existed before 2007, and those that did often sat apart from the main newsroom where the "real journalism" took place.
NEWS PUBLISHERS WAKE UP TO THE NEW REALITY
It took a revolution, quite literally, for social media to be taken seriously by news organizations in the West. The protests and uprisings that swept the Middle East in 2010 and 2011, collectively known as the Arab Spring, saw young online activist groups use online tools, most notably Facebook and Twitter, to communicate their message and organize protests. In Tunisia and Egypt, Facebook and Twitter became what a young Tunisian activist described as the "GPS for the revolution" (Pollock 2011),7 and the firehose of stories for the mainstream media too. In the region, the Al Jazeera television channel used videos uploaded to YouTube and Facebook by protestors and revolutionaries to amplify stories. In the Western media, journalists like Andy Carvin at the American public radio station NPR found that reporting live across social channels enabled them to tell breaking news stories in real time from thousands of miles away. Carvin famously tweeted 1,200 times in the space of 48 hours during the Libyan uprising of 2011, breaking Twitter's normal cap on allowed usage.

The Arab Spring highlighted how, in areas where there was no free press, social media platforms quickly become the default mechanism for publishing and communication. Studies after the uprisings suggested that, where access was limited, social media played less of an organizing role and more of a "megaphone" role to the rest of the world. A Pew Center research paper on the Arab Spring and social media concluded (Brown, Guskin, and Mitchell 2012):

Twitter, Facebook and other new media offer ways for the Arab-American news media to reach audiences, but also pose a threat to smaller outlets. In addition to keeping up with the online presence of larger news organizations, Arab-American media are forced to compete with user-generated content that is rapidly available to audiences. The utility of social media in accessing information became clear during the Arab uprisings and events such as Egypt's parliamentary and presidential elections.8

7. https://www.technologyreview.com/s/425137/streetbook/.
It is also fair to say that until the Arab Spring there was no widespread acknowledgment in Western democracies that the new social platforms could reliably replace the core functions of news publishers and support the free press of the 21st century. The willingness of the platforms to keep their services open to insurgents, protestors, and revolutionaries altered the mainstream media’s too frequent perception of social media as a frivolous distraction. It also made the platforms themselves the centers for breaking news and newsgathering, putting them in the hazardous editorial position of amplifying terrorist messages—a position journalism has been contending with for decades, but never at this scale (Beckett 2016b).9 The Arab Spring was a milestone in the civic possibilities of newsgathering and distribution via the social web. However, the commercial robustness of social platforms had not yet been tested.
8. http://www.journalism.org/2012/11/28/role-social-media-arab-uprisings/. 9. https://www.cjr.org/tow_center_reports/coverage_terrorism_social_media.php
This changed with the initial public offering of shares in Facebook in 2012. Beset by skepticism and criticism that the price was pitched far too high for an unproven advertising model, an offering that valued Facebook at $100 billion was, many thought, doomed to failure. However, founder Mark Zuckerberg's ability to turn the company into the key advertising platform for mobile over the next two years confounded the critics and ultimately transformed Facebook from a rackety start-up into one of the world's largest media and technology companies. By September 2014 Facebook was worth $200 billion, and by July 2017 it was worth $465 billion. Facebook joined Google, Amazon, and Apple as one of what New York University business professor Scott Galloway coined "the four horsemen of the apocalypse" (2017): companies heading for trillion-dollar valuations that have between them reshaped the media landscape as well as a host of other sectors, from retail to transport.

As the use of Twitter as a conduit for news and user-generated content was taking hold, Jonah Peretti, one of the founders of the early digitally native news site the Huffington Post, announced that his viral media company BuzzFeed was hiring the widely admired political blogger Ben Smith from Politico as its editor-in-chief. Peretti, a graduate of the MIT Media Lab, where his thesis focused on how to create online courses for students, had built his career on the study of viral patterns of content across the social networks. BuzzFeed had begun as a side project at the Huffington Post, but its model of measuring which list or cat gif was spreading fastest across the web was at the heart of what would become a new advertising business.
Hiring Smith in 2011 was a direct result of how Peretti saw the social web working for news and information as well as for memes like "17 Horses That Look Like Miley Cyrus." In an interview with Fast Company, Peretti said, "We realized that a bigger shift is happening, and that there's a big opening to be the kind of site that is social from the ground up . . . (a site that) is focused on making content that people think is worth sharing, and a big piece of that is original reporting" (Zax 2011). BuzzFeed was the first company to announce itself as a "social news company," one that did not care most about its homepage or even its website, but rather lived within the social feeds of users on Twitter, Facebook, Instagram, Snapchat, or whatever other social conduits emerged. The idea that news organizations needed their own autonomous presence, merely linked to social sites, was becoming out of date. As well as building its own technology platform, BuzzFeed hired editors and "curators" to make versions of its content that were native to the social platforms. These new platforms and modes of expression were proliferating quickly, with
WhatsApp and Instagram (both of which were later acquired by Facebook) launching in 2009 and 2010 respectively, and Snapchat launching in 2011. New, rapidly growing services like BuzzFeed cashed in on the idea that people increasingly preferred to create and share content with each other in social feeds, a trend accelerated by the smartphone and the use of apps rather than websites. BuzzFeed was essentially a new type of advertising agency, one that made viral campaigns for advertisers and created distribution paths for them by testing and refining many thousands of bits of social content: memes, gifs, and listicles. The vast social footprint of a social news organization such as BuzzFeed was more responsive and appealing to advertisers than traditional large-scale campaigns that took months to prepare and resulted in monolithic, high-production-value TV spots. The approach championed by BuzzFeed started from the idea that social feeds were becoming more powerful than monolithic news or entertainment providers; technology platforms would therefore be where people increasingly spent their time and their money, and the best way of succeeding in this world was to use data analytics and editorial creativity to create and measure virality across the dozens of social publishing platforms.

The rise of new "digitally native" publishers like Politico, Fusion, Vox.com, Mic.com, NowThisNews, and dozens of other venture-capital-funded start-ups put far more pressure on legacy news organizations. The most significant moment for many publishers and industry analysts came in 2013, when the iconic newspaper The Washington Post was sold for $250 million by the Graham family, its longtime owners, to Amazon founder Jeff Bezos. In a letter to staff published in The Washington Post announcing the sale, Don Graham wrote:
We have loved the paper, what it stood for, and those who produced it. But the point of our ownership has always been that it was supposed to be good for The Post. As the newspaper business continued to bring up questions to which we have no answers, Katharine and I began to ask ourselves if our small public company was still the best home for the newspaper. Our revenues had declined seven years in a row. We had innovated, and to my critical eye our innovations had been quite successful in audience and in quality, but they hadn’t made up for the revenue decline. Our answer had to be cost cuts, and we knew there was a limit to that. We were certain the paper would survive under our ownership, but we wanted it to do more than that. We wanted it to succeed. (2013)
For many, the sale of the Post was emblematic of the fact that even the most dedicated legacy media owners could no longer support the costs of digitization against the background of falling revenues. The fact that Bezos, the world's richest man, had made his fortune through Amazon further compounded the sense that the power and wealth once vested in publishers were transferring to a new set of gatekeepers. In the five years between 2007 and 2012 the rise of the social mobile web created the conditions for a seismic change in the media landscape. Although journalism and news were only a fraction of the content this shift affected, they represented not only the most important part of that content but also the most vulnerable.

The dominance of Google in search advertising and Facebook in display advertising was rapidly consolidated as consumer attention and advertising dollars moved to mobile. Google's own research showed that even if the average smartphone user downloaded 30 to 40 apps, the number they used on a daily basis was no more than five or six, and 40% of that use was through social media apps. In 2016 Internet advertising overtook television as the largest advertising medium, and within the online advertising market Google and Facebook were the two dominant forces. In 2016 Google's holding company, Alphabet, reported $79 billion in advertising revenues, while Facebook made $27 billion from advertising. One pound or dollar in every five spent on advertising anywhere goes to either Google or Facebook. To put these numbers in perspective, Comcast, the largest traditional media owner in the United States, made $13 billion from advertising in 2016. But Comcast is a cable, television, and entertainment company. Advertising revenues for news operations declined even more starkly: the legacy news business the New York Times had seen its overall advertising revenue drop from over $2 billion a year in 2006 to just over $500 million in 2016.
The loss of advertising revenue was perhaps inevitable, given that Google, Facebook, and other social platforms hold far richer data about Internet users than legacy publishers do. However, news organizations were simultaneously becoming highly dependent on both social media and search for attracting traffic to their digital content, and had hoped that the large audiences available online would eventually allow them to make enough money to support their journalism. In 2015 the measurement and analytics company Parse.ly reported that of the 400 online publishers it monitored, the highest proportion of referred traffic (in other words, visitors who arrived at pages without going to websites directly) was coming not from search engines like Google but from social media platforms, and in particular from Facebook. Of all the referred traffic coming to news publishers, 45% was social traffic and 32% was search
traffic; Facebook and Google, respectively, dominated each category. In the same year the Pew Center reported that 41% of US adults were getting news through their Facebook feed. News organizations felt at the mercy of the Facebook News Feed algorithm, the powerful piece of mathematics that decided which pieces of content to show each user every time they logged onto the site or opened the app. The dilemma for commercial news producers became whether to withdraw from social network distribution, and suffer disappearing from public view and consciousness in an attempt to gain more control over their journalism and their revenues, or, like BuzzFeed, to integrate as closely as possible with the social platforms, hoping they might in the process be able to negotiate higher revenues from the platform companies.

The common complaint from publishers, stretching as far back as the launch of Google News in 2002, was that aggregators, search companies, and now social media companies were profiting from their reporting without paying for it. Some of these arguments were valid; others were less persuasive. Many news organizations already aggregated the work of others, often without attribution or linking to the original reporting. News organizations had also benefited from "fair use," a legal doctrine under which using only a portion of a copyrighted work to illustrate a matter of public interest is exempted from copyright infringement. The scale and power of news organizations such as News Corporation in the UK, coupled with scandals such as phone hacking, left them short of public sympathy. Annual measures of trust in media organizations have shown declines in public confidence almost every year since the mid-1970s, so the idea that media power was being challenged caused limited public consternation.
However, the size and growing influence of the technology companies that undermined and replaced the models of the independent press were coming under scrutiny too. Although no moves had been made to examine the antitrust aspects of Google's and Facebook's operations in the United States, in Europe in particular the competition authorities and the European Parliament were unsettled by the commercial and cultural reach of US-owned and -controlled companies. In January 2015, at the annual gathering of the world's business elite in the Swiss mountain resort of Davos, Google gathered 26 publishers in a conference room for an off-the-record discussion about what the technology company might do to materially help news publishers. The mood of the meeting was cordial but inflected with mutual wariness. What could the search engine giant offer publishers that would "help support journalism"? The answer was couched in
many different ways but was essentially the same from everyone sitting in the awkward circle of chairs: "Money." This was the inaugural meeting of a group that formed the core of the Google Digital News Initiative. The meeting was aimed only at European publishers and paid for out of the company's marketing budget. In short, it was an effort to bring publishers onside within the European jurisdiction, where Google faced mounting hostility from regulators, politicians, and the news media over its market power. In the previous 18 months it had faced opposition on three fronts. First, news publishers had become increasingly vocal and active in trying to block Google's access to their sites, claiming that Google made money from unfairly accessing and aggregating their journalism. Second, the revelations about the role of technology companies in surveillance, exposed by former NSA contractor Edward Snowden, had exacerbated the already tense relationship between US technology companies and European authorities. And third, there was the judicial shock delivered by the surprise 2014 ruling on the "right to be forgotten," in which the European Court of Justice had upheld a ruling in Spain that plaintiffs could request that Google remove links from search results that were "inadequate, irrelevant or no longer relevant, or excessive in relation to the purposes for which they were processed and in the light of the time that has elapsed." It was against this background of agitation to limit Google's power that the company felt it needed to build bridges with publishers, to help it avoid further savage regulation from Europe.

The "Digital News Initiative" that started in Europe was the first official acknowledgment by a technology company of its active role in shaping the news environment.
The funding of an initiative to pay news organizations and entrepreneurs grants to experiment with news models was the first direct transfer of wealth from technology companies into journalism, albeit at a microscopic level: at the close of the third round of funding, Google had given £19 million to 107 different projects across Europe. Google's move into supplying direct aid for the news business happened at the beginning of a year in which other platform and technology companies also began more direct overtures to journalists and publishers. Snapchat, a teen messaging service that was barely on the radar of most news organizations but was proving wildly popular with an under-25 demographic, released a feature called "Discover" on January 27, 2015. It was, on the face of it, a mundane development: a screen with 13 slots for publishing "channels" occupied by mainstream media brands such as CNN, Vice, Cosmopolitan, and ESPN. The idea was that each "channel" would post stories especially adapted for the users of Snapchat, with animated graphics
and "swiping" motions to change screens rather than clicking. But what Discover represented for technology companies—that they would now host not just social media but also professional media wholly within their own apps—was a paradigm shift. The offering seemed to electrify competition, and suddenly every platform company was producing new features or products directly designed to attract publishers to design and publish their journalism on another platform. Facebook, Apple, Twitter, and Google all piled in with new publishing offerings, while other social sites like LinkedIn had recently opened publishing platforms and were redesigning feeds so that users saw more news posts. The revolutionary idea that BuzzFeed had pioneered in 2010, of being the first social news company, was no longer novel.

Facebook launched Instant Articles, an application that allowed publisher pages to render instantly, with the same design and presentation as on the publisher's own site, but without the user ever leaving Facebook to click through to the publisher. Apple created Apple News, a newsstand-type product that would curate a large number of news stories according to user preference. Twitter Moments packaged the best tweets and links of the day into a page that looked and felt like an edited news page. Google launched Accelerated Mobile Pages, which, like Facebook Instant Articles, allowed publisher pages to load far faster.

The choice publishers were being asked to make amid this cornucopia of new technical applications was essentially one between using their resources to build and design their own websites and mobile apps, keeping control of the reader and user pathways to stories and monetizing their work independently, or integrating more closely with companies that were vastly larger and wealthier but that did not have news publishing as a core set of competencies.
In research conducted at the Tow Center for Digital Journalism at Columbia Journalism School, we spent a year from March 2016 to March 2017 monitoring the output of a group of news organizations of all types and how they used social platforms. We found that there was an increasing reliance on social distribution to find audiences. The new standard is to be present on multiple platforms at all times, and to post tailored, native content on those platforms. We found that almost as many articles are published by news organizations directly to technology platform applications such as Instant Articles as link back to publisher sites. Even legacy media brands such as CNN have digital teams devoted to distributing on mobile, social platforms, video platforms, and emerging platforms such as augmented reality. Even when publishers said that their strategy was to
treat third-party platforms with caution, the evidence suggested a rapid and perhaps irreversible dependence was building. Social platforms take advantage of this dependency, using new tools and services to entrench publishers further. For the technology companies, the development of a closer relationship with news publishing was an even bigger cultural shift. Companies in Silicon Valley are staffed by young people (Hardy 2013),10 many with engineering backgrounds. A frequent criticism of technology companies has been that the philosophies of software design and engineering—to make things work for users as efficiently as possible—ignore other cultural values and potential consequences. The incentive structures for engineers are completely different from those for journalists. News organizations traditionally apply many filters to a piece of information before it is published; engineers release code, assess how it is used, refine it, and update it. In 2014 Facebook thought it would surprise its users by automatically publishing an animated timeline of their previous year's "highlights." Soon Facebook users were seeing pictures of bereavements and of bad news they had suffered, and in one case an unfortunate user was presented with a picture of his house burning down. At the time a Facebook employee told this author that the timeline error stemmed from a gap between what product developers thought people used the platform for and what they actually used it for: "Most of the people working on that product were men in their twenties who are working for Facebook—their lives are pretty awesome, so they thought why wouldn't you want to see a replay of that awesome year?"
THE TIPPING POINT: STEALTH PUBLISHING AND THE 2016 PRESIDENTIAL ELECTION
On the evening of July 6, 2016, in a suburb of St. Paul, Minnesota, two police officers in a squad car pulled over a vehicle. The driver was Philando Castile; he was accompanied by his fiancée, Diamond Reynolds, and her four-year-old daughter, who was strapped into the back of the car. Castile was a popular 32-year-old school meals supervisor who lived and worked in the area. During the traffic stop Castile politely informed officer Jeronimo Yanez that he had a weapon in the car; we know this from dash cam footage
10. https://bits.blogs.nytimes.com/2013/07/05/technology-workers-are-young-really-young/
recorded by the police. Yanez appeared to panic and fired seven shots into Castile, hitting him five times, twice fatally in the heart. From inside the car Diamond Reynolds used her mobile phone to stream the immediate aftermath of the shooting onto Facebook Live, a video streaming service that the social platform had released only a few months earlier. The stream of Castile bleeding to death while his girlfriend tried to talk calmly to the police started to go viral on social media, with the link being posted across Twitter. After the seven-minute live video ended, Facebook took the video down for about an hour, in what policy executives at the platform described as "a glitch," before restoring it. The next day Mark Zuckerberg posted about the incident on his own Facebook account:

My heart goes out to the Castile family and all the other families who have experienced this kind of tragedy. My thoughts are also with all members of the Facebook community who are deeply troubled by these events. The images we've seen this week are graphic and heartbreaking, and they shine a light on the fear that millions of members of our community live with every day. While I hope we never have to see another video like Diamond's, it reminds us why coming together to build a more open and connected world is so important—and how far we still have to go. (2016)
Internally at the company the launch of live streaming had caused much soul-searching. In the first few months, the enormous viral success of a woman called Candace Payne trying on a Star Wars "Chewbacca" mask and being rendered helpless with laughter had produced exactly the kind of "user generated content" Facebook had hoped its innovation would create. "Chewbacca Mom" received 141 million views on Facebook Live, and Zuckerberg invited Payne to the Facebook campus, where she was pictured enjoying meeting Chewbacca himself and collecting Facebook-branded gifts. The two most high-profile "hits" on Facebook Live that summer demonstrated how Facebook was entering territory it had feared to enter: arbitrating between the various use cases of its platform. Suddenly Facebook was unable to point to external news organizations as the amplifiers and publishers of work that the platform merely carried. By enabling the world to use live video, Facebook had unleashed a myriad of powerful possibilities that its 20-strong live video policy team was having to grapple with. By reposting the Castile video, Facebook had taken an affirmative editorial decision about a news event. It had to weigh the case for handing the footage to the police against reposting potentially vital evidence. Zuckerberg then posted his
own views on the incident, offering condolences to the victim's family and promoting the importance of the act of witnessing, very much as an editor might do. These actions alone go far beyond the role of "just a technology company"; they also cross a line that other types of carriage technology never had to approach in the past. The chief executive of Verizon or Comcast does not feel obliged to comment on what was on the evening news. The separation of editorial function from distribution is more clearly drawn in cable, in newspapers, in any form where the package is finalized and sent through pipes or out on trucks. Content published on Facebook is a living, changing entity, as with all writeable web applications. Where there is no institutional editor and no third-party journalist involved, as in the case of Philando Castile or even Chewbacca Mom, the publishing responsibility falls, as it should, on the platform company. Platforms had neither anticipated making editorial decisions nor wanted to make them, yet the act of designing tools for publishing is not a neutral one. In August 2014, when images of the ISIS execution of American journalist Jim Foley began circulating on Twitter, chief executive Dick Costolo announced that the platform would freeze any account found sharing them. While it seems obvious that any commercial platform would want to remove imagery that was distressing to audiences and therefore toxic to advertisers, up until that point Twitter had removed or blocked very little material from its platform, at least partly because it was founded on the ideals of free expression. The Silicon Valley interpretation of free expression sometimes erred toward the one-dimensional: good speech would conquer bad, all opposing views, however unpleasant, should be entertained, and hate speech was not to be censured unless it directly threatened or incited violence.
This American standard of free expression was, however, far more complex to interpret both in a global context and within the increasingly violent and polarized language of the Internet. Trolls, and even bots that mimicked humans, were spreading lies or aggression for their own ends in unchecked numbers, chilling expression on the open web and adding to an overall feeling that the new public sphere created by social platforms was not safe for vulnerable people. Between the 2012 presidential election in the United States and the 2016 election, media use and consumption habits changed. In 2013 Facebook announced a change to its News Feed algorithm, suggesting that "high quality articles" would become more prominent in the feed and frivolous memes less so. When the platform changed its algorithm, some publishers, such as Upworthy, a site that used the viral power of the social web to spread positive and socially progressive stories, saw a tremendous fall in traffic
while others noticed a big spike in Facebook traffic. This powerful effect of the Facebook algorithm on billions of pieces of content a day was a warning sign that the company could in effect turn the traffic tap to news sites on and off. Desperate to appear neutral, however, Facebook claimed that any changes to traffic were an algorithmic outcome partly influenced by user behavior. In May 2016 the technology site Gizmodo published a bombshell story from a former news "curator" at Facebook who had been responsible for editing "trending topics." The story claimed that curators regularly suppressed right-wing stories from trending, and that other stories were boosted to override the algorithm. Zuckerberg's response was to hold an unprecedented meeting with right-wing commentators to hear their concerns about the platform (Stelter 2016).11 The outcome of the revelations and the meeting was that Zuckerberg ended the use of human curators, firing all those on contracts, and said that in future algorithms would make the decisions about how stories surfaced and circulated on the platform. Throughout the summer of 2016, Zuckerberg insisted in interviews and at town hall meetings that Facebook is "a technology company. We are not a media company. . . . we don't produce any content" ("Facebook CEO" 2016).12 From an external perspective, and particularly from the news business point of view, the repeated insistence that Facebook was "not a media company" was frustratingly inaccurate. When Facebook made $25 million available to publishers to produce live video, it was a direct intervention in the market, signaling what type of content the platform wished to see produced for advertisers (Wagner 2017).13 When in the summer of 2016 Facebook announced that the News Feed would feature fewer stories from publishers and more from "family and friends," it immediately blew a hole in a number of publishers' traffic numbers.
Facebook's continual denial of its role as a media company was easy to understand from the point of view of managing publishing risk. However, it also prevented a long-overdue conversation about who was really responsible for the kind of news environment we experience. In business terms there was little progress on the question of who should pay for reporting and journalism when business models are being wrecked by advertising migration.
11. http://money.cnn.com/2016/05/18/media/facebook-conservative-leaders-meeting/index.html
12. http://money.cnn.com/video/technology/2016/08/29/facebook-ceo-were-a-technology-company-were-not-a-media-company-.cnnmoney/index.html
13. https://www.recode.net/2017/4/21/15387554/facebook-video-deal-publishers-live-ads
FAKE NEWS
The fragile plausible deniability of Facebook was blown away by the 2016 US presidential election. The shock victory of Donald Trump threw up many questions for traditional news media: How had the polls got it so wrong? Why had national news media not seen this coming? Had the loss of local reporting caused a disconnect with deprived parts of America? In August 2016 the New York Times writer John Herrman published a piece titled "Inside Facebook's (Totally Insane, Unintentionally Gigantic, Hyperpartisan) Political Media Machine," which examined the hyperpartisan news factories that were generating massive support for Trump. The piece brilliantly captured the scale and reach of all manner of sites, pages, and producers of right-wing memes, stories, and opinion that were pushing the Trump message out of sight of mainstream media. Herrman writes:

Individually, these pages have meaningful audiences, but cumulatively, their audience is gigantic: tens of millions of people. On Facebook, they rival the reach of their better-funded counterparts in the political media, whether corporate giants like CNN or The New York Times, or openly ideological web operations like Breitbart or Mic. And unlike traditional media organizations, which have spent years trying to figure out how to lure readers out of the Facebook ecosystem and onto their sites, these new publishers are happy to live inside the world that Facebook has created.14
Although it made some impact when it was published, it was only after the election that Herrman's piece took on ominous significance for politicians and media alike. It was swiftly followed by a series of startling pieces from BuzzFeed media editor Craig Silverman, who used his own methodology to examine the types of stories that had been driving this attention to the Trump campaign. Many, he discovered, were pushing entirely untrue stories, such as the claim that the pope had endorsed Donald Trump. Silverman's revelations that fake news stories had outperformed stories from reputable news outlets on Facebook (2016)15 were further proof, if any were needed, that Facebook had designed a platform that was not only
14. https://www.nytimes.com/2016/08/28/magazine/inside-facebooks-totally-insane-unintentionally-gigantic-hyperpartisan-political-media-machine.html
15. https://www.buzzfeed.com/craigsilverman/viral-fake-election-news-outperformed-real-news-on-facebook?utm_term=.imvzKkqrM#.nsBAoVDNy
raking in billions in advertising revenues but doing so by allowing the circulation of shoddy, untrue, and deliberately misleading messages. Facebook's insistence that it was a "neutral" platform dedicated to promoting discourse from all sides suddenly looked like willful inaction that promoted antidemocratic practices. Those who monitored the Brexit campaign in the UK in June 2016 had noted similar trends of very high engagement with the most extreme content on Facebook pages and on YouTube. In truth the 2016 US presidential election was a prism through which to view the changes that had been happening to the news environment over the previous five or more years. Social platforms were taking over from mainstream publishers in hosting, monetizing, and distributing news. The systems for incentivizing news producers and the pathways to the audience were both obscured by the black-box algorithms operating inside social media companies. The lack of mechanisms of constraint beyond commercial terms of use, and the design of platforms optimized for "shares" and "likes," meant that a system of news creation and distribution had grown up free from checks on fairness, accuracy, or accountability.
THE FUTURE OF THE INDEPENDENT PRESS
Following the 2016 election, Facebook set up a series of initiatives called the "Facebook Journalism Project," appointed former broadcaster Campbell Brown as its head of "news partnerships," and spent money on initiatives to promote the teaching of media literacy. Like Google's Digital News Initiative, the moves were confirmation that the company had acknowledged its role as an active ingredient in the provision and dissemination of news. For news organizations, the dilemma of whether to become more dependent on these new systems of power remained. An unusual consequence of Donald Trump's victory was a sudden upswing in support for traditional news organizations. The New York Times, the Washington Post, and the Guardian all saw increases in subscriptions or membership as a result of a "Trump bump." Not-for-profit news organizations such as NPR and ProPublica saw similar rises in donations, as did support organizations such as the Committee to Protect Journalists. The idea that independent journalism was under threat from both politicians and commercial pressures had taken hold in at least some parts of the public consciousness. The ideal of an independent press is that there is a "fourth estate" or accountability function of reporting that sits separately from commercial
and governmental interests. In many parts of the world, including Britain and the rest of Europe, the news media have been heavily regulated in order to engineer plurality, balance, and fairness into markets, with mixed results. In the United States, which is culturally inclined toward the free market, the momentum has been with deregulation for the past 30 years. From 1949, US broadcasters were required by the Federal Communications Commission to present matters of national importance from a variety of perspectives. This "fairness doctrine," as it was known, was abandoned by the FCC in 1987, during Ronald Reagan's presidency. In 1996 the Telecommunications Act allowed cross-media ownership for the first time. Revenues and power in media were therefore becoming both more polarized and more centralized long before Facebook, Google, and Twitter arrived on the scene. Antitrust laws were relied on to regulate markets for competition, but the speed and fluidity of technology and platform company models meant that regulators had tremendous difficulty even defining markets, let alone regulating them. The encroachment of tech companies into the sphere of media and news publishing has effectively meant that media regulation in the United States is in free fall. Other parts of the world, notably Europe, are setting regulatory standards for technology companies in terms of their role in the cultural life of citizens, but there is no suggestion that this is likely to happen in the United States. A close relationship between platform companies and journalism is inevitable. However, Google and Facebook are more than just publishers. They are vast data companies, investing in many subsidiary activities such as fiber networks, transport (Google's self-driving cars), new platforms like virtual reality, and new technologies like artificial intelligence.
They also have a close and opaque relationship with government agencies, helping in antiterror initiatives and other activities associated with the surveillance state. If journalism's dependence on these platform technologies increases, as seems inevitable, it raises the question of whether it is healthy to have the press co-opted by systems of knowledge, power, and wealth that rival those of federal and national governments. How does a press that is beholden to such companies report on, and differentiate itself from, such systems of power? Only part of the answer can rest with journalism and news organizations themselves. Part has to come from the founders of technology companies, who have the power to make changes to their corporate practices that are supportive of independent journalism. Being transparent about advertising and tracking practices, opening up public datasets that are in the interest of an open democracy, and instituting better corporate
governance and accountability practices are all aspects of reform that are urgently needed. There also has to be a collective will to return wealth to the business of independent reporting, particularly in areas where the scale of social platforms and their cheap, effective advertising have made journalism unsustainable. Local journalism has suffered both from the financial structures of commercial news companies, which squeeze assets in order to maintain profitability and share price, and from the intervention of Internet companies, whose superior products and advertising models benefit from large scale. The submersion of a free press within the fabric of the political and commercial surveillance economy is a reality for which we are unprepared. The mechanics of the commercial social web need to be fundamentally redesigned if we are to improve the news environment from its very imperfect past. Silicon Valley claims to enjoy solving hard problems; fixing the problem of how we deliver reliable independent news to the world is now its most urgent task of all.
REFERENCES

Beckett, Charlie. 2016a. "Fanning the Flames: Reporting on Terror in a Networked World." Columbia Journalism Review, September 22, 2016. https://www.cjr.org/tow_center_reports/coverage_terrorism_social_media.php
Beckett, Charlie. 2016b. "Livestreaming Terror: The Nebulous New Role of Platforms." Columbia Journalism Review, September 22, 2016. https://www.cjr.org/special_report/in_the_age_of_livestreamed_terror_platforms_and_publishers_must_rethink_their_roles.php
Brown, Heather, Emily Guskin, and Amy Mitchell. 2012. "The Role of Social Media in the Arab Uprisings." Journalism & Media, November 28, 2012. Pew Research Center. http://www.journalism.org/2012/11/28/role-social-media-arab-uprisings/
Facebook CEO. 2016. "We're a Technology Company. We're Not a Media Company." CNN, August 29, 2016. http://money.cnn.com/video/technology/2016/08/29/facebook-ceo-were-a-technology-company-were-not-a-media-company-.cnnmoney/index.html
Galloway, Scott. 2017. The Four: The Hidden DNA of Amazon, Apple, Facebook, and Google. New York, NY: Penguin Random House.
Graham, Donald. 2013. "Letter from Donald Graham on Sale of The Post." The Washington Post, August 5, 2013. https://www.washingtonpost.com/national/letter-from-donald-graham-on-sale-of-the-post/2013/08/05/3e6642e0-fe0f-11e2-9711-3708310f6f4d_story.html?utm_term=.8d869fac6541
Hardy, Quentin. 2013. "Technology Workers Are Young (Really Young)." The New York Times, July 5, 2013. https://bits.blogs.nytimes.com/2013/07/05/technology-workers-are-young-really-young/
Herrman, John. 2016. "Inside Facebook's (Totally Insane, Unintentionally Gigantic, Hyperpartisan) Political Media Machine." The New York Times Magazine, August 24, 2016. https://www.nytimes.com/2016/08/28/magazine/inside-facebooks-totally-insane-unintentionally-gigantic-hyperpartisan-political-media-machine.html
Pollock, John. 2011. "Streetbook: How Egyptian and Tunisian Youth Hacked the Arab Spring." MIT Technology Review, August 23, 2011. https://www.technologyreview.com/s/425137/streetbook/
Poulsen, Kevin. 2010. "PayPal Freezes WikiLeaks Account." Wired, December 4, 2010. https://www.wired.com/2010/12/paypal-wikileaks/
Prince, Matthew. 2017. "Why We Terminated Daily Stormer." Cloudflare, August 16, 2017. https://blog.cloudflare.com/why-we-terminated-daily-stormer/
The Redirect Method. (n.d.). https://redirectmethod.org/
Silverman, Craig. 2016. "This Analysis Shows How Viral Fake Election News Stories Outperformed Real News on Facebook." BuzzFeed, November 16, 2016. https://www.buzzfeed.com/craigsilverman/viral-fake-election-news-outperformed-real-news-on-facebook?utm_term=.suNoZRb9B#.li6XoZAN7
State of the News Media. 2017. Pew Research Center. http://www.pewresearch.org/topics/state-of-the-news-media/
Stelter, Brian. 2016. "Facebook's Mark Zuckerberg Meets with Conservative Leaders." CNN, May 18, 2016. http://money.cnn.com/2016/05/18/media/facebook-conservative-leaders-meeting/index.html
Wagner, Kurt. 2017. "Facebook Is Offering Publishers Money to Create Produced Video." Recode, April 21, 2017. https://www.recode.net/2017/4/21/15387554/facebook-video-deal-publishers-live-ads
Wu, Tim. 2016. The Attention Merchants: The Epic Scramble to Get Inside Our Heads. New York, NY: Penguin Random House.
Zax, David. 2011. "WTF, Indeed: Politico's Ben Smith Joins BuzzFeed to Build a 'Social News Organization.'" Fast Company, December 13, 2011. https://www.fastcompany.com/1800780/wtf-indeed-politicos-ben-smith-joins-buzzfeed-build-social-news-organization
Zuckerberg, Mark. 2016. "Facebook Post." Facebook, July 7, 2016. https://www.facebook.com/zuck/posts/10102948714100101
SECTION 3
Politics
CHAPTER 11
Social Media Power and Election Legitimacy
DAMIAN TAMBINI
INTRODUCTION: SOCIAL MEDIA, PLATFORM DOMINANCE, AND ELECTORAL LEGITIMACY
Debate about the Internet and democracy has evolved from starry-eyed hope (Rheingold 1995; Tambini 1998), through critical realism (Zittrain 2008; Howard 2006; Sunstein 2001), to despair (Barocas 2012; Morozov 2011; Kreiss 2012). Recent elections have called into question the promise of the Internet to provide expanding resources for information and deliberation (Tambini 2000). Growing numbers of commentators argue that the Internet agora has been displaced by the monopolized Internet of "surveillance capitalism," in which a small number of immensely powerful platform companies (Zuboff 2015) provide integrated services of targeted propaganda and misinformation, undermining campaign fairness by rewarding richer campaigns and those increasingly able to bypass existing regulatory frameworks. In recent elections, data-driven campaigns, supported by surveillance technologies that game privacy protection to profile voters and target their weaknesses, have been widely criticized (Barocas 2012; Kreiss 2012, 2016; Howard and Kreiss 2009; Tambini et al. 2017). Some, including Epstein (this volume), go so far as to claim that powerful intermediaries such as Google and Facebook can and do influence the outcome of elections.
At the same time, the shock results of the 2016 votes in the UK referendum and US elections led to widespread questioning of the role of social media, which was seen as responsible for distributing fake news (Allcott and Gentzkow 2017; Tambini 2017), using manipulative psychometric profiling (Cadwalladr 2017), and undermining authoritative journalism (Bell, this volume; Allcott and Gentzkow 2017, 211), and ultimately the fairness and transparency of elections. This chapter examines the charge against social media in recent elections, with a focus on the question of dominance: whether the powerful position of a few platforms in political campaigning, and particularly Facebook, is undermining electoral legitimacy. The focus will be on the UK, which has particularly high levels of online and Facebook use, and on the referendum in 2016 and the general election in 2017, which offer useful contrasting examples of recent campaigns. This chapter draws on interviews conducted with campaigners on the state of the art in targeted campaigning during the 2016 referendum, and a study of online ads used in the 2017 election conducted in collaboration with the grassroots group Who Targets Me.
MEDIA AND ELECTORAL LEGITIMACY: THE FRAMEWORK
A number of national and international rules exist to prevent media and communications from undermining the legitimacy and integrity of elections and referenda (Council of Europe 2003; Parliamentary Assembly of the Council of Europe 2001). At the international level, intergovernmental organizations such as the Organisation for Security and Co-operation in Europe (OSCE), the Council of Europe, the European Union, and the UN operate election-monitoring projects to ensure free and fair elections. The issues of media influence on elections and government capture of media have become increasingly important for these monitoring missions, but international organizations have done little to deal with the social media challenge. The OSCE member states must commit to secure free and fair elections, and in particular: "[e]nsure that political campaigning can be conducted in an open and fair atmosphere without administrative action, violence, intimidation or fear of retribution against candidates, parties or voters; (and) [e]nsure unimpeded media access on a non-discriminatory basis" (OSCE 2010, 18). These and the other commitments contained in the OSCE election guidelines and similar documents, such as the Venice Commission (2010) guidelines, have led to the development of sophisticated tools for monitoring mass media during elections. According to the OSCE website,
Election observation missions examine the coverage given to candidates in both state and privately owned media. Beyond parties and candidates themselves, the media are the most important source of election-related information for the public. Their ability to function freely and independently is essential to a democratic election. [ . . . ] An observation mission also assesses media laws, the performance of regulatory bodies, and whether media-related complaints are handled fairly and efficiently.
According to Rasto Kuzel, OSCE election media analyst, "media-monitoring projects can provide the general public with benchmarks to judge the fairness of the entire election process. This function is vital even in those countries that have a long-term tradition of freedom of speech and freedom of the media" (cited in OSCE 2017a). There have been instances in the past where elections have been scathingly criticized because of the media environment. The OSCE report on the 2015 Tajikistan elections, for example, was critical of a lack of coverage of opposition parties in both state and private media (OSCE 2015, 18). In 2017, the OSCE conducted a monitoring mission to cover the UK elections, as it had done in 2015. But for the first time it added a specific media component to observe the role of key media companies in the election (OSCE 2017b).1 A full OSCE election-monitoring mission, according to the guidelines, now includes monitoring of national media to examine evidence of systematic bias or exclusion. A key component of this is ensuring that the media are free and there is proper protection for freedom of expression, but the guidance is clear that liberty is not enough: it is also necessary to ensure that media are not captured by special interests, or systematically biased against groups or interests, and that international standards such as those of the UN, the Office for Democratic Institutions and Human Rights (ODIHR), and the Council of Europe are respected. Domestically, national election laws, media regulation, and campaign finance rules have been adapted to protect elections from the potential threat that mass media propaganda may pose, and in particular to ensure that elections are fair, clean, and transparent. Election laws establish limits to spending and/or donations to election campaigns, which are defined as printing, distribution, and production of campaign messages, largely through the media.
The UK, for example, meets its international obligations to hold free and fair elections by implementing the Representation of the People Act 1983.
1. A list of election-monitoring organizations can be found on the website of the Ace Project, a UN-endorsed monitoring organization (Ace Project 2017).
In addition, media regulation provides for regulation of impartiality and balance in broadcasting, and of competition and pluralism in media systems as a whole. So, for example, in addition to UK broadcasters' general requirement to broadcast news that is impartial "in matters of political and industrial controversy," they have specific duties during election periods: "Due weight must be given to the coverage of major parties during the election period. Broadcasters must also consider giving appropriate coverage to other parties and independent candidates with significant views and perspectives."2 The UK media regulator Ofcom bases its assessment of what counts as a major party on previous electoral performance, but is likely in future to delegate some of these decisions to broadcasters, who will remain bound by their general duties of impartiality. While the overall objectives of election law and monitoring are similar in mature democracies (to make sure elections are free, fair, and transparent), the means vary. Most countries control spending or donations, provide free but rationed political advertising on TV, and operate strict transparency and disclosure rules for parties and campaigns. And during the past 50 years, in which broadcasting, most recently TV, has been the dominant medium, broadcasters have been subject to strict obligations to ensure that their potential to influence an election is controlled. Not only do most, at least in Europe, have balance and impartiality obligations; their role in political advertising is also regulated. For example, many democracies, including the UK, France, Spain, Denmark, and Ireland, operate complete bans on political advertising on TV (see Tambini et al. 2017; Holz-Bacha and Kaid 2006; Falguera, Jones, and Ohman 2014; see also Piccio 2016), and others implement partial bans. Italy, for example, permits it only on local TV. No such rules exist for social media.
THE SOCIAL CONSTRUCTION OF ELECTION LEGITIMACY
Despite national and international standards, "electoral legitimacy" is not a legal concept. International organizations do not inspect elections to make sure they conform to the rules and blacklist those that do not. Rather, it is a social construct (Suchman 1995). Election monitors generally write descriptive reports on elections rather than unequivocal endorsements or condemnations. The absence of legitimacy is generally signaled not only by statements of international organizations and monitors but also by low turnout, protest, violence, system crisis, and the withdrawal of consent (see also MacKinnon 2012, 12). However, nondemocratic systems and authoritarian pseudodemocracies can also be highly legitimate in the eyes of their populations, in part because of the lack of an independent media. In systems of "competitive authoritarianism," open elections may be held, but a lack of real media independence undermines the process of open deliberation (Way and Levitsky 2002, 57–58). Therefore, the concept of legitimacy proposed for this chapter is as follows: for an election or referendum to be legitimate, results must be accepted both by international standards bodies and by the overwhelming majority of citizens. By contrast, where many or most citizens, and/or the majority of standards bodies and election monitors, say legitimacy is lacking, we can say an election is illegitimate. Fundamentally, election legitimacy is about perceived fairness. Increasingly, governance of mass media and also of social media is required to guarantee such fairness. With the rising importance of media in elections, and what some would even term the "mediatization of politics" (Garland, Couldry, and Tambini 2017; Esser 2013; Kunelius and Reunanen 2016; Hepp 2013), monitors are increasingly taking notice of media system requirements in their assessments. International standards bodies have outlined standards for the media. The obvious next question is whether those standards need to be updated for a period in which social media are increasingly displacing print and broadcasting.

2. The UK communications regulator Ofcom operates a specific code that broadcast licensees must adhere to during election periods. See Ofcom (2017b).
CAMPAIGNS MOVE ONLINE
A growing number of researchers and commentators are concerned about data-driven political campaigning and message targeting on social media. The concerns include privacy (Howard 2006; Kreiss and Howard 2010; Cohen 2012; Barocas 2012); transparency (Kreiss and Howard 2010); campaign finance (Butrymowicz 2009); and the (in)ability of existing electoral laws to maintain a level playing field and thus election legitimacy (Pack 2015; Barocas 2012; Ewing and Rowbottom 2011; Tambini 2017). Researchers have raised longer-term concerns about the undermining of the quality of deliberation; since 2016 the concern has been with the proliferation of messages that were either inconsistent with, or contradictory to, other communications from a campaign, or with third-party messages deliberately designed to mislead or provoke. There is also a longer-term worry about "political redlining," that is, the ability to target messaging at a narrow segment of the electorate (Barocas 2012) and exclude others because they are less likely to vote or do not belong to key swing demographics; and a worry about the overall transparency of political deliberation (Ewing and Rowbottom 2011). One area of concern that links these various claims is the notion that effective targeting may undermine voter autonomy: voters for whom social media is the dominant source of news and information could be inundated with a constant stream of skewed, politically interested messaging that would drown out opposing views; a new form of targeted propaganda. Following the shock results of the 2016 Brexit referendum and the US election, a wide range of concerns about social media campaigning was raised by a wider circle of public commentators. The influence of deliberately targeted "fake news" messages, and the potential for foreign intervention in domestic campaigns, including spooky "psychometric profiling," have been raised by journalists such as Carole Cadwalladr of the Observer newspaper (Cadwalladr 2017). At the time of writing, several investigations into the use of targeting were ongoing: in addition to the US Special Prosecutor's investigation of Russian involvement in the 2016 elections, the Information Commissioner's Office (the UK regulator for freedom of information and data protection) was examining the use of data for campaign purposes (Denham 2017), and an investigation by the UK electoral supervisor, the Electoral Commission, examined potential breaches of campaign funding reporting obligations relating to provision of database and targeting services by Leave.EU (Electoral Commission 2017).
While international agencies such as the OSCE that are responsible for electoral supervision and monitoring have been relatively slow to respond to the challenge of social media, the Council of Europe has carried out a feasibility study for a new recommendation on how democracies might regulate the new practices (Council of Europe 2017). Despite this gathering storm of debate, there has been a lack of robust and disinterested information on how the campaigns actually work. Research into data-driven campaigning has tended to rely on interviews (Moore 2016; Anstead 2017), ethnography (Nielsen 2012), or legal analysis (Butrymowicz 2009). There is surprisingly little analysis of the messages themselves, or of the validity of some of the more worrying claims about new forms of propaganda. A partial exception is Allcott and Gentzkow (2017). The key proposition of the theoretical literature, namely that the legitimacy of elections and referenda is undermined by these new campaigning tools, has not been tested, and there remains a rather large gap between hype (generally of the dystopian variety) and understanding of how targeted campaigning on social media has in fact been deployed.
THE BREXIT REFERENDUM 2016 AND GENERAL ELECTION 2017
The UK referendum of 2016, like the US election of the same year, produced a shock outcome.3 The discussion following the referendum predictably focused on why there was such a contrast with previous votes, and there was a tendency to "blame" unwelcome political changes on the Internet. In particular, concerns were expressed about misinformation and "fake news" being distributed online without the skeptical filter of journalism, and about targeted messaging online (Allcott and Gentzkow 2017). Commentators, who had themselves been sidelined by new opinion leaders online, looked for someone to blame, and Facebook was convenient. In May 2017, after a series of shorter stories, Carole Cadwalladr published a detailed "exposé" relating to opaque links, data sharing, and cross-funding between the UK referendum and the US Trump campaign. Cadwalladr closed the article arguing that "Britain in 2017 . . . increasingly looks like a 'managed' democracy. Paid for by a US billionaire. Using military-style technology. Delivered by Facebook. . . . the first step into a brave, new, increasingly undemocratic world" (Cadwalladr 2017). In the article she alleged not only that both campaigns used sophisticated data-driven social media targeting but also that there was a degree of cross-funding (through provision of benefits in kind such as data services), coordination of campaign data, and learning between the two campaigns. For the politically displaced, the story was attractive, as it offered support to the claim that the result was illegitimate. In comparison with other advanced democracies, the UK has a very active online population, and users are particularly engaged on social media. More than 82% of British adults used the Internet daily or almost daily in 2016, according to the Office for National Statistics (ONS 2016), and 27% of online adults reported using Facebook on a daily basis.
The Internet was, according to Ofcom, the only news platform with a growing number of users since 2013: 48% of UK adults say they use the Internet to get their news (Ofcom 2017a). According to the same report, 27% of UK adults say they get news from Facebook. Social media, according to data from a 2017 report, are the fastest-growing news source sector: "overall, 47% of those who use social media for news say they mostly get news stories through social media posts, compared to 30% in 2015." This survey evidence is self-reported, and different surveys vary to an extent. According to the Reuters Institute Digital News Report 2017 (reported in this volume), 41% in the UK say they use social media for "news." This shift online, and to social media, is reflected also in advertising spending, though estimates of spend vary. The "digital" (online) share of US political ad spend rose from 0% in 2008 to an estimated 10% in 2016 (E-Marketer 2016). Given evidence from interviews with campaign leaders (Tambini 2016), and spending returns to the UK Electoral Commission,4 much more than 10% of UK election marketing budgets is now spent on digital. In 2015, the first year in which digital spending was reported separately by the Electoral Commission, around 23% of the total spend was digital, with the majority of this being spent on Facebook (Electoral Commission 2016). In the United States, which remains dominated by TV spend, almost a billion dollars, or 10% of spend on political ads, was forecast to be spent on online advertising in the 2016 election (E-Marketer 2016). The reason for this rapid shift of campaign activity online is simple: social media advertising appears to be more cost-effective than other, less "smart" forms of advertising. Of particular interest to political strategists and campaigners is the fact that data-driven campaigns offer superior targeting and audience-segmentation capabilities. Campaigns can deliver the messages they think will be most persuasive to people who are undecided but likely to vote, in the constituencies that might swing the election, or to key voters in a referendum. What is attractive to advertisers is that they can target those key strategic voters with the messages most likely to swing them, on the basis of demographic, political, and even potentially psychometric profiling. According to campaign leaders, strategists are following audiences online and developing more sophisticated approaches to online advertising.

3. The author acknowledges the excellent research assistance of Sharif Labo for this section.
This is generally combined with an attempt to develop shocking and resonant "shareable" messages to harness the organic sharing of propaganda online. According to Andy Wigmore, the campaign director of Leave.EU:

It didn't matter what was said in the press. The more critical they were of us when we published these articles to our social media, the more numbers we got. So it occurred to us that actually Trump was onto something, because the more outrageous he was the more air time he got, and the more air time he got the more outrageous he was. . . . The more outrageous we were the more air time we got in the normal media, and the more airtime—which was always critical—the more support we got. . . . The more outrageous we were, we knew that the press were going to attack us, which is what they did. We are now anti-establishment full throttle. The more outrageous we were the more attention we got. The more attention we got, the bigger the numbers. (Andy Wigmore, interview, September 2016)

4. Researchers examined spending returns as they were published by the Electoral Commission and categorized the payees according to their basic function, in order to identify social media and other forms of spend.
How a Data-Driven Social Media Campaign Works
In order to gain a rich understanding of data-driven campaigning on social media, we interviewed referendum campaign leaders.5 This builds on the work of Anstead (2017) and others. Seven semistructured interviews were conducted with a common template of questions designed to enable the campaigners to outline their approaches to data-driven campaigning, voter profiling, and social media messaging. The interviews were conducted in London between August and November 2016, following the referendum to exit the EU. Three were conducted by phone/Skype, and the others in person. In practice, it is impossible to separate the mass media campaign from the social media campaign, and it is impossible to separate the "organic" social media campaign, driven by "voluntary" sharing and liking, from campaigns' use of the commercial advertising services offered by social media. Effective campaigns use these three elements together. But in what follows the focus is on the paid element, which has particular implications for election legitimacy, and which often fuels and primes the organic social media campaign, which in turn feeds mass media with stories. On the basis of the literature review and expert interviews carried out following the 2015 general election and the 2016 Brexit referendum, it is possible to outline the following generic stages in building a social media campaign (Figures 11.1 and 11.2):

1. Building the audience. Using a wide range of database-building techniques, campaigners build databases of potential supporters and link these to various forms of purchased and freely available data, such as the electoral register, existing party membership and canvassing lists, cold-calling records, and "opt-in" data-harvesting techniques such as surveys, competitions, and petitions, which are increasingly carried out online.

2. Audience segmentation. There are various approaches to audience segmentation, which combine the following types of criteria: (1) marginality: Is the voter situated in a constituency that is possible to win, that is, a target constituency? Is the voter undecided?; (2) the basic demographic information attached to this voter (e.g., gender, age, income, education); (3) previous voting record (including likelihood of actually voting); (4) evidence of current opinions and "hot-button" issues; and (5) social media activity and degree of its influence. The different campaigns in 2016 each had a slightly different approach to profiling, but each attached a score and a profile to each potential voter using data from the electoral roll. In elections, parties are able to learn between elections, but in referenda regulation requires them to "start from scratch" (Matthew Elliott, interview, September 2016) and destroy data on completion.

4. Message creation and testing. The process of finding messages that are effective and resonate with potential voters has in recent years involved extensive "focus group" testing, and repetition of a narrow range of messages that have been vetted and signed off by senior politicians. The social media campaign, by contrast, tends to be more dynamic, with messages devised and tested online throughout each day of the campaign using "A/B" testing, whereby messages are selected on the basis of their resonance rather than on ideological or political grounds.

5. Message targeting and delivery. Many campaigners report that they are focusing more of their advertising spend on digital, because they have a clear sense that social media platforms in particular are much more cost-effective than, for example, press, display, or direct mail marketing techniques. The question of whether specific messages are targeted on the basis of the segmentation and profiling techniques described at (2) is the black box of research on social media and campaign targeting. Campaigners frequently claim that they are able to target messages on an individual basis, and to serve individually targeted messages designed to appeal to particular demographic, educational, psychological, or geographic groups.

5. To gain an insight into the message-targeting and communications strategy of a modern political campaign we interviewed the key participants from the two officially designated sides: Stronger In and Vote Leave. We were interested in speaking with people who had close operational knowledge of the campaign strategy: how the key messages were decided on, message sign-off, and audience segmentation. We anticipated this would require authorization from senior figures in the campaigns and so chose to approach these senior figures first, asking them to suggest people to speak with throughout the campaign organization. We e-mailed interview requests to the heads, deputy heads, and campaign managers. We secured interviews with Will Straw and Lucy Thomas, the director and deputy director of Stronger In, and Matthew Elliott, the CEO of Vote Leave. These interviews provided the names of other individuals, consultancies, and agencies involved in the campaigns that we subsequently approached, as well as useful operational detail of the campaigns, especially on the Stronger In side. We also interviewed Andy Wigmore of Leave.EU. All interviews were transcribed and analyzed according to a meaning-condensation process with a focus on ascertaining expert views on processes of segmentation and profiling. Respondents were asked to go on the record and did so. The following section is based on a thematic analysis of their responses.
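The "A/B" testing described in the message creation stage can be sketched in a few lines: serve competing message variants to random audience slices, then keep whichever earns the higher engagement rate. This is a minimal illustration, not the campaigns' own software; the variant names, trial counts, and response rates are all invented for the example.

```python
import random

def ab_test(variants, serve, n_trials=1000):
    """Serve each message variant to a random audience slice and
    return the one with the highest observed engagement rate."""
    stats = {v: {"impressions": 0, "clicks": 0} for v in variants}
    for _ in range(n_trials):
        v = random.choice(variants)        # naive even split across variants
        stats[v]["impressions"] += 1
        stats[v]["clicks"] += serve(v)     # 1 if the user engaged, else 0

    def rate(v):
        s = stats[v]
        return s["clicks"] / s["impressions"] if s["impressions"] else 0.0

    return max(variants, key=rate)

# Hypothetical example: variant B resonates twice as often as variant A.
random.seed(1)
winner = ab_test(
    ["msg_A", "msg_B"],
    serve=lambda v: int(random.random() < (0.10 if v == "msg_A" else 0.20)),
)
print(winner)
```

Note that selection here is purely empirical, which is the point the interviewees make: the surviving message is the one that resonates, regardless of its ideological content.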
Figure 11.1: Data-driven profiling in online political advertising: Interview findings on the Vote Leave Campaign.
Figure 11.2: Data-driven profiling in online political advertising: Interview findings on the Stronger In Campaign.
THE REFERENDUM: VOTER PROFILING AND SEGMENTATION IS GETTING SMARTER
All the campaigns used a wide combination of techniques to build, profile, and segment the audience. This involved complex modeling of relationships among demographic characteristics, previous expressions of opinion, and stated voting intentions. Such profiling can involve hundreds of data points from dozens of sources. As Will Straw, CEO of Stronger In, explained:

These were opinion groups with demographic characteristics. So for the segmentation— . . . they identified common traits based on how people answer specific questions. Such demographic characteristics as well, but mainly based on their answers to questions that have been asked. What that threw up was some really interesting characteristics of these different groups. So you could say that the average person in this segment would be better or worse off than average, would be overall younger than average, would get their media from the BBC versus newspapers versus online. Would have these attitudes to the EU. These other issues would be of interest to them. Whether they are members of particular groups and so on. So some quite good general information. Then throughout the campaign we used that sub-segmentation to drive our focus group work. So when we had focus groups, I think we had close to thirty focus groups over the course of the campaign, we would get—You might have four to eight different tables at the focus group depending on the size, but it would be a male heads versus hearts and a female strong sceptic group [ . . . ] Then we would have monthly depth polls which went back through the segmentation and we could see how the segments were shifting, both in their total numbers but also in their views of the Referendum. Then we would underneath that be able to track how people responded to different questions, certainly the immigration question or the economy. How were we best able to get our messages across to those different groups.
Given that this process of segmentation and profiling is subsequently used to determine to whom messages are addressed, and which messages those voters receive, the cumulative effect of this data-driven profiling is of interest: it is likely, for example, that the procedure may inadvertently result in different messages being targeted on the basis of protected characteristics, such as ethnic or religious grouping. Profiling and segmentation have always taken place, to an extent, on a geographical basis; these new techniques merely offer a much cheaper and more effective way of doing so, and thus may raise new concerns (see Lynskey, this volume). Rapid innovation is making individual-level targeting much more efficient and sophisticated.
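The scoring-and-bucketing logic that the interviews describe (and that Figures 11.1 and 11.2 depict as 0–100 propensity scales feeding a Leavers / in-play / Remain split) can be illustrated schematically. The field names, thresholds, and voters below are all hypothetical; the campaigns' actual models combined far more data points.

```python
def segment(voter, swing_low=35, swing_high=65):
    """Bucket a voter by a 0-100 'remain likelihood' score, gated on
    turnout likelihood, mirroring the split sketched in Figure 11.2.
    Thresholds and field names are illustrative, not the campaigns' own."""
    if voter["turnout_likelihood"] < 30:
        return "low-priority"            # unlikely to vote at all
    score = voter["remain_likelihood"]
    if score < swing_low:
        return "leavers"
    if score > swing_high:
        return "remain"
    return "in-play"                     # persuadable: the main ad target

# Invented voter records for illustration:
voters = [
    {"id": 1, "turnout_likelihood": 80, "remain_likelihood": 50},
    {"id": 2, "turnout_likelihood": 90, "remain_likelihood": 90},
    {"id": 3, "turnout_likelihood": 10, "remain_likelihood": 20},
]
buckets = {v["id"]: segment(v) for v in voters}
print(buckets)
```

The point of the sketch is that once every voter on the roll carries such scores, ad budgets can be concentrated almost entirely on the "in-play" bucket, which is precisely the efficiency gain, and the autonomy concern, discussed above.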
MESSAGE TARGETING AND DELIVERY
One of the striking things about the major campaigns to leave the EU is that both took the strategic decision to focus the majority of their resources and energy on Facebook. There was strong agreement that it was simply the most effective form of political advertising. All the leaders said that Facebook was crucial, particularly those of the two Leave campaigns. Andy Wigmore claimed that his team made a strategic decision early in the campaign to put the entire ad budget into Facebook. And this was true also of his counterparts in the other (official) Leave campaign, such as Matthew Elliott:

Elliott: . . . almost nothing went in traditional advertising. Maybe one or two things which were more aimed at the press and getting coverage, but almost nothing went on traditional advertising.
DT: A lot on social media and—
Elliott: A load on social media, a lot of it geared towards the end of the campaign.
DT: So increasingly that social media spend is Facebook?
Elliott: Facebook, yes.
EU REFERENDUM CAMPAIGN FOCUS: EXPENSES FOR INDIVIDUALS WITH LESS THAN £250K SPEND
In order to further understand how the campaigns were approaching social media, and to test some of the claims made by our interviewees, we also examined the Electoral Commission spending returns for the referendum. Taking one illustrative example, the returns released in early 2017 show that social media now attract the largest share of the major campaigners' advertising spend. While the overall sums are relatively small, due to the Electoral Commission spending caps, social media have become the largest recipient of advertising spending, with most of this going to Facebook (Figure 11.3, Table 11.1). The data cover those campaigners that reported spend of between £10,000 and £250,000 at the EU Referendum. Any individual or organization that intended to spend more than £10,000 was required to register as a
Figure 11.3: Campaign ad spend: breakdown. Source: Analysis of Electoral Commission spending returns.
Table 11.1. Marketing, media, and market research spending totals

Category                                     Spend           Percentage
TOTAL                                        £3,172,565.83
Social Media Advertising/Data Analytics      £775,315.18     24%
Advertising Agency                           £715,059.35     23%
Social Media Advertising                     £368,085.52     12%
Newspapers Advertising                       £210,169.50     7%
Media Production Agency                      £203,565.10     6%
Printing                                     £125,554.95     4%
Economics Consultancy                        £109,594.80     3%
PR Agency                                    £90,006.22      3%
Merchandise Branding                         £78,805.80      2%
Digital Agency                               £62,371.99      2%
Creative Ad Agency                           £57,792.58      2%
Communications Consultancy                   £54,000.00      2%
Other                                        £53,318.45      2%
Political Consultancies                      £41,730.00      1%
Out of Home/Outdoor Printing                 £38,723.16      1%
Public Affairs Consultancy                   £33,382.80      1%
Media Buying Agency                          £28,583.80      1%
Polling/Market Research                      £25,489.60      1%
Professional Services Consultancy            £24,000.00      1%
Search Advertising                           £21,400.00      1%
Public Policy Research                       £16,034.10      1%
Mailer Delivery                              £13,034         0%
Image Licensing                              £10,133.00      0%
Music                                        £9,000.00       0%
Recruitment Agency                           £5,016.00       0%
SMS Marketing                                £2,400.00       0%

Source: Analysis of Electoral Commission spending returns.
“permitted participant” and submit expenses to the Electoral Commission earlier than groups spending more than £250,000. A few campaigners who spent in excess of £250,000 submitted their expenses earlier. The expenses analyzed fall into the categories of marketing, media, and market research; they make up 66% of the total expenses of £4.8 million reported. Expenses outside the campaign period are not included. One difficulty we encountered in analyzing these data is that a great deal of the advertising spend is channeled through intermediaries such as advertising agencies, which tend to be active across different media. That said, the highest spend was on social media, both through agencies and directly. By examining the spending returns we found that most social media spend went to Facebook. An important implication is that social media spend, which is growing into a disproportionately large share of the pie, is hardly broken down at all. It therefore becomes an obscure black box in regulated campaigns.
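The share calculation behind Table 11.1 is a straightforward group-and-divide over categorized payees. A minimal sketch of that computation follows; the payment records are invented, not the real Electoral Commission returns, and the category labels are simply reused from the table for illustration.

```python
from collections import defaultdict

def category_shares(payments):
    """Sum spend per payee category and express each category as a
    rounded percentage of the grand total, as in Table 11.1."""
    totals = defaultdict(float)
    for category, amount in payments:
        totals[category] += amount
    grand_total = sum(totals.values())
    return {c: round(100 * amt / grand_total) for c, amt in totals.items()}

# Invented returns for illustration (category, amount in pounds):
payments = [
    ("Social Media Advertising", 470.0),
    ("Advertising Agency", 330.0),
    ("Newspapers Advertising", 200.0),
]
shares = category_shares(payments)
print(shares)
```

The hard part of the real analysis was not this arithmetic but the categorization step itself: deciding, payee by payee, whether an "advertising agency" payment ultimately bought social media inventory, which is exactly the black-box problem noted above.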
IS FACEBOOK BECOMING A ONE-STOP SHOP FOR ELECTION CAMPAIGNING? SOME FINDINGS FROM THE LSE/WHO TARGETS ME PROJECT
During the 2017 UK General Election the social enterprise Who Targets Me persuaded approximately 11,000 volunteers to download a browser plugin. The plugin scraped political advertising from their Facebook feeds and created a large database containing almost 4.5 million records of exposure to Facebook ads (Figures 11.4–11.7). Voters continued to volunteer during the election campaign, and this, together with obvious self-selection biases, means that the data is not a representative record of all ad exposures; but it is a valuable record of a large sample of advertisements that can provide some general indications of the activities of party political advertisers and of Facebook users.6 These initial results from the LSE/Who Targets Me research collaboration offer significant evidence that Facebook is not only an important part of the message delivery machinery for targeted advertising but is also emerging as a one-stop shop for fundraising, recruitment, profiling, segmentation, message targeting, and delivery. This vertical integration of campaign services, and its operation by a company that in most of the world is a foreign company, will have serious implications for future election legitimacy if it continues unchecked.
SOME IMPLICATIONS OF SOCIAL MEDIA CAMPAIGNING
The shift to social media therefore poses some serious potential concerns for election legitimacy but, partly because of the lack of transparency of the platform, and of the process of campaigning, claims are difficult to assess through research. This fuels the conspiracy theories. In addition to what seems to be a process of consolidation and vertical integration of campaign activity in one platform, namely Facebook, allegations have been made of various forms of foreign involvement, biases in the distribution of key messages, bias against small parties, bias against new entrants, bias against parties with socially diverse supporters, bias against certain campaign messages/issues, and bias against certain groups of voters—so-called redlining (Kreiss 2012). Such biases may be unintentional or deliberate. As a hypothetical example, if a party or campaign emerged that was standing on a platform of breaking up social media companies, there would be a strong incentive for social media companies to undermine the visibility of that party. This example may or may not be far-fetched, but parties already exist that propose radical, sometimes statist solutions that would be hostile to the economic model of the platform companies. Electoral supervision exists

6. The dataset is a collection of 1,341,004 impressions of 162,064 unique Facebook advertisements. The data was gathered between May 27, 2017, and June 18, 2017, via a Chrome plugin installed by volunteers taking part in the Who Targets Me project (https://whotargets.me/). The project is intended to capture and save the content of political Facebook ads served to participating volunteers; more information on the plugin and the team that developed it can be found at https://whotargets.me/about. Volunteers agreed that data could be scraped from their Internet browser when they viewed Facebook. This enabled researchers to monitor the different types of messages that were viewed. Graphs presented here outline the basic content of messages during GE2017. Future research will analyze targeting strategies, content, and profiling.

Figure 11.4: Total impressions of political ads per day. Total number of ads served to our sample on Facebook during the election campaign. Note that the sample grew during the campaign, so this should not be read as an indication of the number of ads viewed per user. Note: This is a count of the total number of ads served (unique ad impressions) per day to users with the Who Targets Me plugin installed. Data from May 30 has been removed from this graph, due to an error in the plugin on that day which caused an unknown number of duplicate ad impressions to be recorded. The data on which this graph is based is a database of 1,341,004 adverts captured by the Who Targets Me project, of which 20,958 were judged to be political in nature on the basis of a filter applied to the names of the advertisers responsible for the ads. The filter sought to detect the main political parties by searching for text matches to *labour*, *conservative*, *liberal democrat*, *ukip*/*independence party*, and *momentum* (where * is a wildcard and the search was case-insensitive) in the names of the advertisers.
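The keyword filter described in the figure notes (case-insensitive wildcard matches such as *labour* or *momentum* against advertiser names) amounts to a simple substring or regex test. A sketch under those stated rules follows; the advertiser names in the example are invented.

```python
import re

# Patterns mirroring the wildcards in the figure notes: * means "anything",
# and matching is case-insensitive against the advertiser's name.
PARTY_PATTERNS = [r"labour", r"conservative", r"liberal democrat",
                  r"ukip", r"independence party", r"momentum"]
POLITICAL_RE = re.compile("|".join(PARTY_PATTERNS), re.IGNORECASE)

def is_political(advertiser_name):
    """True if the advertiser name contains any party keyword."""
    return bool(POLITICAL_RE.search(advertiser_name))

# Invented advertiser names for illustration:
ads = ["Scottish Labour", "Acme Shoes Ltd", "Momentum Sheffield", "UKIP Wales"]
political = [a for a in ads if is_political(a)]
print(political)
```

A filter this crude trades recall for precision in odd ways: it would flag a commercial advertiser whose name happened to contain "labour" (e.g., a labour-hire firm) while missing political ads run under names with no party keyword, which is one reason the figure notes describe the party attribution as conservative.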
Figure 11.5: Political posts containing the word "join." Posts containing the word "join" or "joining" were spread more evenly throughout the campaign. The high volumes indicate that parties actively used Facebook as a recruitment channel—to build their databases. Note: This is a count of the total number of ads served (unique ad impressions) per day to users with the Who Targets Me plugin installed, filtered to include only ads (conservatively) run by political parties or allies (Labour, Momentum, Liberal Democrat, Conservative, UKIP) and containing particular key phrases. Data from May 30 has been removed from this graph, due to an error in the plugin on that day which caused an unknown number of duplicate ad impressions to be recorded. The data on which this graph is based is a database of 1,341,004 adverts captured by the Who Targets Me project, of which 20,958 were judged to be political in nature on the basis of a filter applied to the names of the advertisers responsible for the ads.
Figure 11.6: Ads containing the words "donate" or "donation." The relatively high volume of such ads confirms that Facebook was a significant fundraising platform for parties throughout the campaign and even after it. Note: This is a count of the total number of ads served (unique ad impressions) per day to users with the Who Targets Me plugin installed, filtered to include only ads (conservatively) run by political parties or allies (Labour, Momentum, Liberal Democrat, Conservative, UKIP) and containing particular keyphrases. Data from May 30 has been removed from this graph due to a plugin error that caused an unknown number of duplicate ad impressions to be recorded that day. The data on which this graph is based is a database of 1,341,004 adverts captured by the Who Targets Me project, of which 20,958 were judged to be political in nature on the basis of a filter applied to the names of the advertisers named as responsible for the ads.
Figure 11.7: Ads containing the word "vote" or "voting." Adverts from all parties containing these words—instructional posts—cluster at the end of the campaign period. Note: This is a count of the total number of ads served (unique ad impressions) per day to users with the Who Targets Me plugin installed, filtered to include only ads (conservatively) run by political parties or allies (Labour, Momentum, Liberal Democrat, Conservative, UKIP) and containing particular keyphrases. Data from May 30 has been removed from this graph due to a plugin error that caused an unknown number of duplicate ad impressions to be recorded that day. The data on which this graph is based is a database of 1,341,004 adverts captured by the Who Targets Me project, of which 20,958 were judged to be political in nature on the basis of a filter applied to the names of the advertisers named as responsible for the ads.
to ensure that elections—and the deliberative processes that surround them—are seen to be fair. They are increasingly powerless to do so in the face of opaque platforms. In order for elections to be legitimate, voter choices should be demonstrably free and not constrained by propaganda or subject to any form of control or deceit. This is another reason why targeting has been an issue, and why "filter bubble" concerns (Sunstein 2001; Pariser 2011) have arisen. While the "jury is out" on the extent to which intermediaries narrow or broaden access to sources of information (see Newman and Fletcher this volume; Ofcom 2017a; Helberger this volume), the danger is that social media targeting offers new opportunities in election campaigns for those wishing to shift opinion and votes with scant regard for the truth. There have thus been important concerns about voter autonomy and new forms of manipulation and propaganda. According to the UK election lawyer Gavin Millar, Section 115 of the 1983 Act creates an offence of "undue influence." Amongst other things this [ . . . ] prohibits impeding or preventing the free exercise of the franchise by duress or any fraudulent device or contrivance. In its long history it has been used against priests and imams preaching politics to the faithful, as well as those who circulated a bogus election leaflet pretending to be from another party [ . . . ] To me the most concerning is the impact of the targeted
messaging on the mind of the individual voter. A “persuadable” voter is one thing. A vulnerable or deceived voter is quite another. (Millar 2017)
Foreign intervention has been a feature of much of the public debate, particularly links between the Trump campaign and Russia, between the Brexit campaign and the United States, and the involvement of Russia in various elections in France and Germany. In the UK this has led to the Electoral Commission enquiring about the funding of the various leave campaigns, for example. It will be pointed out that allegations about social media bias and control are speculation. But speculation and conspiracy theory are what undermine trust in democracy. A basic premise of free and fair elections is that the contest is free and fair, and perceived as such. This is why simplicity and transparency are so important. While media system capture and bias are inevitable in a mass media system, whether that system is dominated by private media, public media, or some variant (Hallin and Mancini 2004), those biases are by their nature transparent and obvious for everyone to see.
WAS IT FACEBOOK WOT WON IT?
If an election is swung by a private company it is more likely to lose legitimacy in the eyes of citizens and the international community.7 (7. The title of this section is a reference to an infamous front-page headline in the British tabloid the Sun, which gleefully claimed, the day after the 1997 election victory of Tony Blair, that "it was the Sun Wot Won it.") The evidence from the UK is mixed: on the one hand, the mere fact that there has been a loud debate on these issues since the 2016 referendum suggests that data-driven campaigning has had a negative impact on election legitimacy. But others claim that this is simply sour grapes—losers questioning the process. There is something in both arguments, and they are not mutually exclusive. Empirical data on the role of Facebook in the overall information ecology is ambivalent, in part because Facebook data is difficult to access. Facebook is market-dominant as a social media company (particularly if we include Instagram and WhatsApp) but not as a media company. In terms of time spent, and survey reports on where people get their news, it is certainly not dominant. But in terms of deliberation and information gathering related to elections, it is becoming the crucial platform in some countries, which is reflected in the shift of UK political advertising onto
Social Media Power and Election Legitimacy [ 285 ]
the platform over the past five years. Facebook, in particular, is emerging as a vertically integrated one-stop-shop for fundraising, recruitment, database building, segmentation, targeting, and message delivery. As a result, there is a paradox: the complex process of deliberation and debate during an election cycle, the flow of ideas, memes, reversals of public opinion, and fluctuations in the fortunes of individual politicians, is now more knowable than ever before. The problem, for most democracies, is that it is knowable by a company based in California that has no intention of sharing that knowledge with anyone apart from those able to pay for it, and without asking too many questions about what they will do with this data or where they are based. This is not Facebook's fault, but it is a fact, and in the history of elections it is a novel one. There are multiple sensitivities about foreign involvement in media systems. Most countries have maintained rules preventing foreign ownership of media companies even under pressure from trade liberalization (this, after all, is why Rupert Murdoch had to take US citizenship), and the United States, the UK, and most other mature democracies have specific laws that prohibit foreign involvement in campaign funding. So the mere fact of a private, foreign company holding this position cuts across the spirit of these existing laws.
WHY DOMINANCE MATTERS
Until now, this chapter has focused on the implications for democratic legitimacy of data-driven, social media–based election campaigns. The question that follows is to what extent this is a problem of dominance—or, conversely, whether increasing choice, plurality, and switching between social media platforms could mitigate any of these concerns. The short answer is that dominance matters. A good deal of the concerns we have discussed would be allayed, to an extent, by more competition and pluralism in social media platforms.
Censorship Effects
If a nondominant platform takes down a post, that could be described as editorial discretion. If a dominant platform takes down or blocks a post, a person or a topic, that is censorship. It is of little import whether the material is taken down by a human due to a rule violation, or by an algorithm for reasons that are not understood. Dominant platforms censor.
Prominence Effects
Platforms can also use their dominant position to promote messages. This has been most evident when Google and others took positions in relation to intellectual property and net neutrality discussions in the US Congress; platforms have also lobbied on gay rights issues. This is of course what newspapers traditionally do, which is why they are subject to sector-specific merger and competition rules that limit market concentration.

Propaganda Bubbles
If one company holds data on you, and a single profile is sold on to advertisers and fed into the relevance algorithm that determines what you are exposed to, there is a danger that this one profile will shape your entire "filter bubble" (Pariser 2011). These are complex processes, as yet little understood (Helberger this volume; Newman and Fletcher this volume). In the context of elections, the "propaganda bubble" effect could undermine legitimacy if there is a genuine lack of exposure pluralism (Helberger 2012), such that individual autonomy, free will, and deliberation are undermined. In other words, each citizen might be better served by living within multiple "filter bubbles" operated with separate data ecologies.
Lack of Competitive Discipline
Where there are high switching costs and consumer lock-in (Barwise and Watkins this volume), users may be less able to exert "democratic discipline" on platforms—for example, by demanding greater control over personal data, more transparency about relevance and prominence, and due process and "put back" rights in relation to takedowns and blocking. There is increasing evidence that Facebook is becoming a "one-stop-shop" for political campaigns that need to gather, profile, segment, and target, and that consumer lock-in due to a lack of data portability compounds the effects of this.

Separation of Powers and Checks and Balances
Like branches of government, social media companies should be balanced by countervailing power, which can be provided by other social media companies.
A dominant company like Facebook, particularly one offering a vertically integrated "one-stop-shop" for election services, is in a historically unique position, and as a foreign company it occupies a position that, if left unchecked, will be corrosive of trust and democratic legitimacy. Some of this is speculation. Some of this, we will be told by Facebook and others, could be wrong. But that is, at least in part, the point: because of a lack of transparency, speculation is necessary. Because of opacity and speculation, electoral legitimacy, and democratic legitimacy more widely, suffers. Plurality of platforms would provide an important safeguard for democratic legitimacy. Social media are not transparent, and the shift of campaigns online undermines the principle of transparency. To a certain extent this directly undermines existing regulation. The Political Parties, Elections and Referendums Act in the UK places a number of requirements on parties to be open about the funding and governance of campaigns. These exist so that citizens can be clear about who is behind any party or campaign: campaigns are obliged, for example, to label their leaflets and other materials. In 2016, the Electoral Commission admitted that these transparency requirements could not be enforced effectively online (Electoral Commission 2016). In a world of leaflets, campaigns could simply provide "imprint" information in small print on each leaflet specifying which campaign was behind it, and voters (and journalists and other campaigns) could find detailed information about the funding of that campaign on the Electoral Commission website. Social media advertising, where ad messages take a simpler format and do not include imprint information, undermines that key tenet of transparency.
UNDUE INFLUENCE: THE CRISIS OF ELECTORAL LEGITIMACY
An election in the UK shares many of the features of a village fête. People gather in their local village hall or primary school and are met by volunteers puffed up with civic pride. Votes, like raffle tickets, are carried in battered steel boxes to bigger local secondary schools and counted by more local volunteers. The politicians wear retro rosettes, and tears are shed in the great climax of civic participation, when the teller, often in ceremonial garb, announces the count. Part of the reason for the fusty process and archaic technology, in the era of big data and instant AI-driven feedback, is ritual, and part of it is about trust. The two go together, and they are both important factors in the social construction of legitimacy.
But the crucial factor in the legitimacy of elections is fairness. Profound political change and party realignment always involve contestation of legitimacy, and the events of 2016 and 2017 have been no exception. Both losers and winners have raised concerns about recent elections and referenda, but there have been some themes that link them and that also concern social media: foreign interference, message targeting, and database-driven campaigning that subverts existing election supervision law. While election designs can be complex, the principle and process of counting Xs on papers could not be more intuitive and widely understood. Transparency has extended to the provision of information and to the campaigns themselves. While it is clearly the case that in free media systems private media exercise significant influence on outcomes, the bias and selectivity of those media are there for everyone to see, and newspapers in particular have been freely selected by readers, in part for the biases they represent, in competitive markets regulated for competition, media plurality, and diversity. According to the tests set out earlier in this chapter, electoral legitimacy in the UK is still intact: international organizations and British subjects still view electoral processes as legitimate. But, particularly with regard to the UK referendum, cracks are beginning to show. This chapter has examined how data-driven campaigning—and Facebook dominance—can undermine legitimacy. The wider issue here may be that while social media still in theory offer new opportunities for democracy, increasingly commercial and increasingly smart, data-driven social media may in the long term be on a collision course with the open, voluntary, equal public deliberation required by democracy. Some of the corrosive effects of social media can be mitigated if citizens are provided with the appropriate information and the tools needed to switch platforms and exert some competitive pressure.
Continuing dominance and monopoly positions, particularly by opaque foreign companies, are likely to be particularly corrosive of trust, fairness, and legitimacy. Many of the issues raised in this chapter are features of social media per se, not of any one platform or of the fact of dominance. But, and here is the central point, dominance exacerbates these problems. Put another way, an increased plurality of social platforms would go a long way toward addressing many of them.
ACKNOWLEDGMENTS
The author is grateful to Sharif Labo, Richard Stupart, Emma Goodman, Joao Vieira-Malgaes, and Nikola Belakova for research assistance on this chapter; to Paolo Mancini and Martin Moore for providing comments; and
to Louis Knight-Webb and the Who Targets Me volunteers for providing access to their Facebook data.
REFERENCES

Ace Project. 2017. "Election Observation Portal." Accessed September 7, 2017. http://aceproject.org/electoral-advice/dop.
Allcott, Hunt, and Matthew Gentzkow. 2017. "Social Media and Fake News in the 2016 Election." Journal of Economic Perspectives 31, no. 2: 211–36.
Anstead, Nick. 2017. "Data-Driven Campaigning in the 2015 UK General Election." The International Journal of Press/Politics 22, no. 3: 294–313.
Barocas, Solon. 2012. "The Price of Precision: Voter Microtargeting and Its Potential Harms to the Democratic Process." In Proceedings of the First Edition Workshop on Politics, Elections and Data, 31–36. PLEAD '12. New York: ACM. doi:10.1145/2389661.2389671.
Butrymowicz, Daniel W. 2009. "Loophole.com: How the FEC's Failure to Fully Regulate the Internet Undermines Campaign Finance Law." Columbia Law Review 109: 1708–51.
Cadwalladr, Carole. 2017. "The Great British Brexit Robbery: How Our Democracy Was Hijacked." Observer, May 7, 2017. https://www.theguardian.com/technology/2017/may/07/the-great-british-brexit-robbery-hijacked-democracy.
Cohen, Julie E. 2012. "What Privacy Is For." Harvard Law Review 126, no. 7: 1904–33.
Council of Europe. 2003. "Recommendation Rec(2003)4 of the Committee of Ministers to Member States on Common Rules against Corruption in the Funding of Political Parties and Electoral Campaigns." Adopted by the Committee of Ministers on April 8, 2003, at the 835th meeting of the Ministers' Deputies. https://search.coe.int/cm/Pages/result_details.aspx?ObjectID=09000016805e02b1.
Council of Europe. 2017. "Feasibility Study on the Use of Internet in Elections, MSI-MED (2016)10rev." Strasbourg: Council of Europe. https://rm.coe.int/16806fd666.
Denham, Elizabeth. 2017. "The Information Commissioner Opens a Formal Investigation into the Use of Data Analytics for Political Purposes." Information Commissioner's Office blog, May 17, 2017. https://iconewsblog.org.uk/2017/05/17/information-commissioner-elizabeth-denham-opens-a-formal-investigation-into-the-use-of-data-analytics-for-political-purposes/.
E-Marketer. 2016. "US Political Ad Spending, by Format, 2008–2016." Last modified April 21, 2016. http://www.emarketer.com/Chart/US-Political-Ad-Spending-by-Format-2008-2016-billions-of-total/189566.
Electoral Commission. 2016. "UK Parliamentary General Election 2015: Campaign Spending Report." February 2016. http://www.electoralcommission.org.uk/__data/assets/pdf_file/0006/197907/UKPGE-Spending-Report-2015.pdf.
Electoral Commission. 2017. "Electoral Commission Statement on Investigation into Leave.EU." Electoralcommission.org, April 21, 2017. https://www.electoralcommission.org.uk/i-am-a/journalist/electoral-commission-media-centre/news-releases-Referendums/electoral-commission-statement-on-investigation-into-leave.eu.
Esser, Frank. 2013. "Mediatization as a Challenge: Media Logic versus Political Logic." In Democracy in the Age of Globalization and Mediatization, edited by Hanspeter Kriesi et al., 155–76. Basingstoke: Palgrave Macmillan.
Ewing, Keith D., and Jacob Rowbottom. 2011. "The Role of Spending Controls: New Electoral Actors and New Campaign Techniques." In The Funding of Political Parties: Where Now?, edited by Keith D. Ewing, Jacob Rowbottom, and Joo-Cheong Tham, 77–91. Abingdon: Routledge.
Falguera, Elin, Samuel Jones, and Magnus Ohman, eds. 2014. Funding of Political Parties and Election Campaigns: A Handbook on Political Finance. Stockholm: International Institute for Democracy and Electoral Assistance. http://www.idea.int/sites/default/files/publications/funding-of-political-parties-and-election-campaigns.pdf.
Garland, Ruth, Damian Tambini, and Nick Couldry. 2017. "Has Government Been Mediatized? A UK Perspective." Media, Culture and Society. In press. Available at http://eprints.lse.ac.uk/70662/.
Hallin, Daniel C., and Paolo Mancini. 2004. Comparing Media Systems: Three Models of Media and Politics. Cambridge: Cambridge University Press.
Helberger, Natali. 2012. "Exposure Plurality as a Policy Goal." Journal of Media Law 4, no. 1: 65–92.
Hepp, Andreas. 2013. Cultures of Mediatization. Cambridge: Polity.
Holz-Bacha, Christina, and Lynda Lee Kaid. 2006. "Political Advertising in International Comparison." In The Sage Handbook of Political Advertising, edited by Lynda Lee Kaid and Christina Holz-Bacha, 3–13. Thousand Oaks: Sage Publications.
Howard, Philip N., and Daniel Kreiss. 2009. "Political Parties & Voter Privacy: Australia, Canada, the United Kingdom, and United States in Comparative Perspective." World Information Access Project Working Paper #2009.1. Seattle: University of Washington. https://ssrn.com/abstract=2595120.
Howard, Philip. 2006. New Media Campaigns and the Managed Citizen. Cambridge: Cambridge University Press.
Kreiss, Daniel. 2012. "Yes We Can (Profile You): A Brief Primer on Campaigns and Political Data." Stanford Law Review Online 64, no. 70 (February): 70–74.
Kreiss, Daniel, and Philip N. Howard. 2010. "New Challenges to Political Privacy: Lessons from the First US Presidential Race in the Web 2.0 Era." International Journal of Communication 4: 1032–50.
Kunelius, Risto, and Esa Reunanen. 2016. "Changing Power of Journalism: The Two Phases of Mediatization." Communication Theory 26, no. 4: 369–88. doi:10.1111/comt.12098.
Levitsky, Steven, and Lucan A. Way. 2002. "The Rise of Competitive Authoritarianism." Journal of Democracy 13, no. 2: 51–65.
MacKinnon, Rebecca. 2012. Consent of the Networked: The Worldwide Struggle for Internet Freedom. New York: Basic Books.
Millar, Gavin. 2017. "Undue Influence." The New European, May 12, 2017.
Moore, Martin. 2016. "Facebook, the Conservatives and the Risk to Fair and Open Elections in the UK." Political Quarterly 87, no. 3: 424–30.
Morozov, Evgeny. 2011. The Net Delusion: The Dark Side of Internet Freedom. New York: Public Affairs.
Nielsen, Rasmus Kleis. 2012. Ground Wars: Personalized Communication in Political Campaigns. Princeton: Princeton University Press.
Ofcom. 2017a. News Consumption in the UK: 2016. London: Ofcom. https://www.ofcom.org.uk/__data/assets/pdf_file/0017/103625/news-consumption-uk2016.pdf.
Ofcom. 2017b. "Elections and Referendums." In The Ofcom Broadcasting Code (with the Cross-promotion Code and the On Demand Programme Service Rules). April 3, 2017. London: Ofcom. https://www.ofcom.org.uk/__data/assets/pdf_file/0009/100116/broadcast-code-april-2017-section-6.pdf.
ONS. 2016. Internet Access—Households and Individuals: 2016. What the Internet Is Used for and Types of Purchases Made, by Adults (Aged 16 or Over). London: Office for National Statistics. https://www.ons.gov.uk/peoplepopulationandcommunity/householdcharacteristics/homeinternetandsocialmediausage/bulletins/internetaccesshouseholdsandindividuals/2016.
OSCE. 2010. Election Observation Handbook. 6th ed. Warsaw: OSCE Office for Democratic Institutions and Human Rights.
OSCE. 2015. "Republic of Tajikistan Parliamentary Elections 1 March 2015: OSCE/ODIHR Election Observation Mission Final Report from May 15, 2015." Warsaw: OSCE. http://www.osce.org/odihr/elections/tajikistan/158081?download=true.
OSCE. 2017a. "Media Analyst." Osce.org. Accessed September 7, 2017. http://www.osce.org/odihr/elections/44233.
OSCE. 2017b. "United Kingdom Early General Election 8 June 2017. Final Report." Accessed September 14, 2017. http://www.osce.org/odihr/elections/uk/336906?download=true.
Pack, Mark. 2015. "Constituency Expense Limits Are Dying off in the UK, but Neither Politicians nor the Regulator Will Act." Markpack.org (blog), March 20, 2015. http://www.markpack.org.uk/130283/internet-speeds-up-the-killing-off-of-expense-controls-in-marginal-seats/.
Pariser, Eli. 2011. The Filter Bubble: What the Internet Is Hiding from You. London: Viking.
Parliamentary Assembly of the Council of Europe. 2001. "Recommendation 1516 (2001), Financing of Political Parties." Adopted by the Standing Committee, acting on behalf of the Assembly, on 22 May 2001. http://www.assembly.coe.int/nw/xml/XRef/Xref-XML2HTML-en.asp?fileid=16907&lang=en.
Piccio, Daniela R. 2016. "The State of Political Finance Regulations in Western Europe." International IDEA Discussion Paper 13/2016. Stockholm: International Institute for Democracy and Electoral Assistance. http://www.idea.int/sites/default/files/publications/the-state-of-political-finance-regulations-in-western-europe.pdf.
Rheingold, Howard. 1995. The Virtual Community. London: Minerva.
Suchman, Mark C. 1995. "Managing Legitimacy: Strategic and Institutional Approaches." Academy of Management Review 20, no. 3: 571–610.
Sunstein, Cass R. 2001. Republic.com. Princeton: Princeton University Press.
Tambini, Damian. 1998. "Civic Networking and Universal Rights to Connectivity: Bologna." In Cyberdemocracy, edited by Roza Tsagarousianou, Damian Tambini, and Cathy Bryan. London: Routledge.
Tambini, Damian. 2000. "The Civic Networking Movement: The Internet as a New Democratic Public Sphere?" In Citizenship, Markets and the State, edited by Colin Crouch, Damian Tambini, and Klaus Eder, 238–60. New York: Oxford University Press. Available online at http://eprints.lse.ac.uk/2895/.
Tambini, Damian. 2016. "In the New Robopolitics, Social Media Has Left Newspapers for Dead." Guardian, November 18, 2016. https://www.theguardian.com/commentisfree/2016/nov/18/robopolitics-social-media-traditional-media-dead-brexit-trump.
Tambini, Damian. 2017. "Fake News: Public Policy Responses." Media Policy Brief 20. London: Media Policy Project, London School of Economics. http://eprints.lse.ac.uk/73015/1/LSE%20MPP%20Policy%20Brief%2020%20-%20Fake%20news_final.pdf.
Tambini, Damian, Sharif Labo, Emma Goodman, and Martin Moore. 2017. "The New Political Campaigning." Media Policy Brief 19. London: Media Policy Project, London School of Economics. http://eprints.lse.ac.uk/71945/7/LSE%20MPP%20Policy%20Brief%2019%20-%20The%20new%20political%20campaigning_final.pdf.
Venice Commission. 2010. "Guidelines on Political Party Regulation." Warsaw: OSCE Office for Democratic Institutions and Human Rights. https://www.osce.org/odihr/77812?download=true.
Zittrain, Jonathan L. 2008. The Future of the Internet and How to Stop It. New Haven: Yale University Press.
Zuboff, Shoshana. 2015. "Big Other: Surveillance Capitalism and the Prospects of an Information Civilization." Journal of Information Technology 30, no. 1: 75–89.
CHAPTER 12
Manipulating Minds
The Power of Search Engines to Influence Votes and Opinions

ROBERT EPSTEIN
“A world of unseen dictatorship is conceivable, still using the forms of democratic government.” —Kenneth E. Boulding
On January 17, 1961, three days before John F. Kennedy became president of the United States, outgoing president Dwight D. Eisenhower gave a remarkably surprising and prescient farewell speech. Eisenhower spoke boldly about what he saw as "the potential for the disastrous rise of misplaced power" through an emerging alliance between the various branches of the US military and the vast new industries of war that World War II had brought into being (Eisenhower 1961). Eisenhower dubbed this alliance "the military-industrial complex," and he saw it as a serious threat to security and liberty in the years to come, preventable only by the efforts of "an alert and knowledgeable citizenry." These weren't the ramblings of some left-wing outsider. Eisenhower was a highly decorated Army general who had led Allied forces to victory over Nazi Germany in World War II. He was the ultimate insider, baring his soul about what he perceived to be serious dangers facing his country and the world. One of his warnings is especially pertinent to the world
we face today: that our increasing dependence on technology could lead to the emergence of a “technological elite” so powerful it could come to control public policy without people’s awareness of the role it was playing in their lives. Research directed by the author since 2013 suggests that such an elite now exists: that a small group of corporate executives now has the power to shift opinions, purchases, and even the outcomes of elections on a massive scale around the world without anyone being the wiser. That this power exists is now beyond question; perhaps more troubling is the growing number of indications such power is actually being wielded. This chapter reviews some of this research and also proposes how society can, following Eisenhower’s entreaties, do a better job in the years to come of becoming an alert and knowledgeable citizenry that protects itself from potentially dangerous sources of manipulation made possible by emerging technologies.
OLD AND NEW SOURCES OF MANIPULATION AND CONTROL
As the behavioral psychologist B. F. Skinner (1971) noted in his best-selling book Beyond Freedom and Dignity, in some sense all human behavior is controlled—always has been, always will be. Control per se is not bad. How could society exist without socialization practices or without police to maintain order? Imagine the chaos if we removed the lines that demark the lanes on multiple-lane highways. Control itself is not bad; it is certain kinds of control to which people object—especially aversive forms—that is, the kinds that make us feel bad: whips, chains, paddles, penalties, threats, punishments, and so on. We are not nearly as distressed about the positive means that governments, corporations, and the people around us use to control us: advertisements, salaries, rewards, bonuses, and praise, for example. In open societies like the United States and the UK—societies with a free press and relatively benign governments—we are also subjected to diverse and competing forms of control: a wide variety of leaders, newspapers, websites, vendors, and pundits that are pulling or pushing us in different directions. This makes us feel like we ourselves are actually in control—like we are making up our own minds. Company A says, “Buy our widget,” and Company B says “No, buy our widget,” and then we decide. Billboard A says, “Vote for Mary,” and billboard B says, “No, vote for James,” and, again, we decide. Even in open societies, however, things are not always so simple. Sometimes we feel like we are making choices when in fact we are not. In the late 1800s, for example, a single corporation—Western Union—controlled
Manipulating Minds [ 295 ]
virtually all long-distance communications in the United States through its nationwide system of telegraph poles, wires, and telegraph operators. In the presidential election of 1876, Western Union not only chose the Republican candidate—a relatively unknown politician from Ohio named Rutherford B. Hayes—but also used its communication monopoly to shift votes. Among other things, it made sure that news stories that traveled over its wires favored Hayes, and it also shared the messages sent and received by the campaign staff of Hayes's opponent with Hayes's own staff. Even with all the underhanded corporate help, Hayes did not win easily, but he did win, and the electorate was unaware of Western Union's meddling (Blondheim 1994). In this case, the key to controlling votes was to control the information to which people had access—information contained in correspondence and news stories. George Orwell reminded us in 1984 that if you can control the information people have, you can control how they think. Also fictional: in the 1992 movie about computer hacking, Sneakers, with Ben Kingsley, Robert Redford, and Dan Aykroyd, Kingsley's character, defending the art of hacking, says at the end of the film: The world isn't run by weapons anymore, or energy or money. It's run by ones and zeroes, little bits of data. It's all just electrons. . . . There's a war out there, old friend, a world war. And it's not about who's got the most bullets. It's about who controls the information: . . . what we see and hear, how we work, what we think. It's all about the information. (Sneakers Script 2017)
Fast-forward to a real study published in 2012 in which the Facebook company demonstrated the enormous power it has to control votes in an election. In the report, Facebook revealed that it had sent “go out and vote!” reminders to 60 million of its members on Election Day in the United States in 2010, causing an additional 340,000 people to vote that day who otherwise would have stayed home (Bond et al. 2012). Fast-forward again to Donald Trump’s surprising victory in the US presidential election of November 2016. What if Facebook had chosen that day to send “go out and vote!” reminders only to supporters of Trump’s opponent, Hillary Clinton? Extrapolating from the 2012 study, that might have caused 450,000 more people to vote for Mrs. Clinton (given that 200 million people were registered to vote, that 100 million might have favored Clinton, and that Facebook’s reminders might have reached 80% of those people)—more than enough, most likely, to give her the win, with no one but a handful of people at the company knowing about Facebook’s interference. This is not just fantasy on my part; at times, Facebook has advertised the power it has
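The extrapolation can be reproduced with simple arithmetic; the per-reminder effect is taken from the Bond et al. (2012) figures quoted above, and the 2016 inputs are the hypothetical assumptions stated in the text:

```python
# Back-of-the-envelope check of the 2016 extrapolation in the text.
# Inputs from Bond et al. (2012) as quoted above: 60 million reminders
# produced roughly 340,000 additional votes.
reminders_2010 = 60_000_000
extra_votes_2010 = 340_000
votes_per_reminder = extra_votes_2010 / reminders_2010  # ~0.57% per recipient

# Hypothetical 2016 inputs assumed in the text.
registered_voters = 200_000_000
clinton_leaning = registered_voters // 2       # ~100 million
reach = 0.80                                   # 80% of them see the reminder

targeted = clinton_leaning * reach             # 80 million recipients
extra_clinton_votes = targeted * votes_per_reminder
print(round(extra_clinton_votes))              # close to the text's ~450,000
```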
[ 296 ] Politics
to flip elections through targeted messaging (Pasick 2017), and a New York Times investigation concluded that Facebook boosted pro-Clinton voter registration through targeted messaging in the months prior to the election (Chokshi 2016). As far as the author can tell, Facebook did not interfere on election day itself, perhaps because executives there were being cautious or were overly confident that Mrs. Clinton would triumph without the company's help. But imagine having that much power—the power to flip a close national election invisibly with a few keystrokes that determine what kind of messages hundreds of millions of people will see on their computers and mobile devices. Facebook probably held back on sending out targeted reminders on Mrs. Clinton's behalf on election day, and the company might even have unwittingly sent hundreds of thousands of votes to Mr. Trump in the final days before the election by drastically boosting the exposure of dozens of fake news stories that were damaging to Mrs. Clinton, rapidly spreading the contents of these bizarre stories to tens of millions of its members (Silverman 2016). Facebook's CEO, Mark Zuckerberg, at first denied this had occurred (Isaac 2016) but ultimately relented, announcing that Facebook would soon launch new algorithms to protect users from fake news (Morris 2016). Fast-forward again to November 10, 2016—two days after the presidential election—when Eric Schmidt, executive chairman of Alphabet, the holding company that owns Google, Inc., gave a speech at a meeting in New York organized by the New York Times. Said Schmidt: "How people get their information, what they believe, what they don't, is, I think, the project for the next decade" (Scola 2016; italics added). Think about how new and bizarre these events are. The leaders of two massive tech companies are not just talking about what their businesses were originally created to do; they are reaching far beyond.
On its surface, Facebook is a social networking site that allows us to keep in touch with friends and family members. On its surface, the Google search engine is a benign and simple tool that helps us find information on the Internet. Both Zuckerberg and Schmidt know full well, however, that these platforms have morphed over the years into very different kinds of tools. Among other things, they have become supremely intrusive tracking devices (Moore 2016; Taplin 2017a), and they have also become tools for manipulating the opinions, beliefs, purchases, and voting preferences of billions of people, often without their knowledge (Epstein 2016a). Day by day, Schmidt, Zuckerberg, and their fellow executives are making decisions about how to use the new powers they have, with the public and authorities completely
in the dark about the full range of techniques that can be deployed, the discussions that are taking place, and the decisions that are being made. How some of the Big Tech companies have rapidly gone from being helpful gadgeteers to what some might view as Machiavellian monsters is a big topic—too big to handle in this brief chapter. For present purposes, suffice it to say that both experts and authorities are gradually waking up to the magnitude of the problem. The EU currently has three antitrust actions underway against Google, and similar actions in Russia and India have already resulted in fines against the company. Google has already faced fines as large as $500 million in the United States for its online shenanigans, and under the Trump administration, new investigations of the company by the US Department of Justice are likely to be launched soon. This chapter does not discuss the legal and regulatory issues surrounding the growth of Big Tech—issues that are beyond the expertise of the author. Instead, it focuses on two lines of scientific research conducted in recent years that demonstrate the extraordinary power Google has to manipulate people without their knowledge. Two methods have already been mentioned that Facebook can use to shift opinions—by determining which news items to feature in its newsfeeds and by messaging targeted demographic groups (for a look at five such ways, see Epstein 2016c)—but Facebook's power to manipulate is trivial compared with Google's. How can a simple search engine shift people's views?
WHAT A SEARCH ENGINE DOES—AND FOR WHOM
To understand how a search engine can shift opinions, you first have to know how it works. Before you ever use a search engine, a company like Google is constantly combing the Internet using programs called “crawlers,” which look, among other things, for new web pages, changes in existing web pages, and links among web pages. Crawlers build an index of the information they find—just like the index you find in the back of a book—so that eventually people like you and me will be able to find that information quickly. Because Google runs at least twice as many computer servers—probably more than two million at the moment—as its closest competitor (Microsoft), it also crawls more of the Internet than anyone else. By 2015, Google was likely maintaining an index of 45 billion web pages, easily more than three times as many as Microsoft (van den Bosch, Bogers, and de Kunder 2016). Note that most search engines these days do not run crawlers at all; doing so is just too expensive. Yahoo, a search engine that predated Google, stopped crawling the Internet long ago; when
its search engine gives you search results, it is taking them from other companies, including Google (Sullivan 2014). Even Bing, Microsoft’s search engine, might now be cutting costs by drawing search results from Google (Epstein 2016b). When you type a search term into the query box on Google.com (more about search terms later), Google’s software mainly does four things: it parses, then selects, then orders, then displays. Parsing means that it analyzes what you typed, breaking down your words into terms it can use for search purposes. So if you type “best dog food,” the software looks for terms or phrases (such as “dog food”) it has in its index and then looks for modifiers (like “best”) it can use to narrow down the search. If you are like the vast majority of people in the world who allow Google to track everything they do, Google then adds what it knows about you to the parsed search term: where you live, what kind of dog you have, how much you spend on dog food, what websites and news sources you trust, and so on. Next, the software uses this information to select a relevant group of web pages from its index. That gray message you get—say, “About 38,300,000 results”—shows you how many relevant pages it found. Next—and this step is especially critical for manipulation purposes—it orders those results from best to worst using criteria that Google keeps secret. Finally, it displays those results in numbered groups, 10 results per page, with the top 10 on the first page you see, the next 10 on the second page, and so on. The selecting and ordering can be done in an infinite number of different ways; how does Google choose to do it? If we are searching for a simple fact (“What is the capital of Nigeria?”), we don’t care much about how Google proceeds as long as we end up with the correct answer. But what happens when there is no correct answer? What happens when we search for “best dog food”? We want the very best food we can get for our dog, do we not? 
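The four steps can be sketched with a toy in-memory index; the page data, scores, and modifier list below are all invented for illustration (a real engine's ranking criteria are, as noted, secret):

```python
# Toy sketch of the four-step query pipeline described above:
# parse -> select -> order -> display. All pages and scores are invented.
TOY_INDEX = {
    "dog food": [
        {"url": "pets.example/acme", "score": 0.91},
        {"url": "pets.example/best-foods", "score": 0.87},
        {"url": "pets.example/forum", "score": 0.42},
    ],
}
MODIFIERS = {"best", "cheap", "top"}

def parse(query):
    """Split the query into an index phrase and known modifiers."""
    words = query.lower().split()
    modifiers = {w for w in words if w in MODIFIERS}
    phrase = " ".join(w for w in words if w not in MODIFIERS)
    return phrase, modifiers

def search(query, per_page=10):
    phrase, _modifiers = parse(query)
    hits = TOY_INDEX.get(phrase, [])                  # select relevant pages
    ranked = sorted(hits, key=lambda h: -h["score"])  # order (secret criteria in real life)
    # display: group the ordered results into pages of 10
    return [ranked[i:i + per_page] for i in range(0, len(ranked), per_page)]

pages = search("best dog food")
print([h["url"] for h in pages[0]])
```

The key point of the sketch is the third step: whatever function stands in for `score` decides everything the user sees first.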
How, exactly, is Google doing the selecting and ordering? Is Google interpreting the word “best” the same way we are? Are the results they give us somehow “unbiased” and “objective,” or do the results somehow favor Google, Inc.? Does Google make more money when we click on certain search results? If Google has a business relationship with the Purina pet food company, will Purina results turn up higher than other results? For that matter, what if we are searching for more sensitive information? What if we type “Is Hillary Clinton a liar?” or “Are Jews evil?” or “Should the UK remain in the EU?” How does Google do the selecting and ordering in such cases, and whose interests are served by what they show us? Although Google has done a superlative job of convincing us that it is nothing more than a cool, benign source of endless free services that exist entirely for our benefit, that is far from the truth. Google is one of the most
profitable corporations in the world, currently bringing in more than $100 billion a year in revenues, most of which comes from targeted advertising (Alphabet 2017). The basic business model is highly deceptive: Google provides free services—the search engine, Gmail, YouTube, the Android operating system, the Chrome browser, Google Maps, and a hundred other services—which it uses to collect information about us, and then it leverages that information to help vendors reach us with advertisements showing us products and services we want (Epstein 2013a, 2013b). In other words, even though we feel like the search engine and Gmail are products we are somehow getting free of charge, to Google, we are the product. As Google Android head Andy Rubin put it, "We don't monetize the things we create. We monetize users" (Gruber 2013; italics added). To put this another way, Google, Inc., thrives as a business by selling us to vendors. The search engine and Gmail are not products at all; they are actually just ingenious surveillance tools (Epstein 2013a, 2013b). Over time, Google has been able to build an elaborate and ever-expanding profile for each of us that not only identifies all of our preferences and inclinations, no matter how base or carnal, but that allows the company to predict what we want and need. "Maybe," said Larry Page, cofounder of Google, in an interview in 2014, "you don't want to ask a question. Maybe you want to just have it answered for you before you ask it. That would be better" (Khosla 2014). Over time, the value of the trickle of free information the company gives us every day is greatly outweighed by the value of the vast amount of information it has collected about us (Epstein 2016e)—information that not only affects what we purchase but that can also be used to determine how we vote (Epstein 2015; Epstein and Edelman 2016; Epstein and Robertson 2015) and even how we think (Epstein 2016a).
SEME: THE SEARCH ENGINE MANIPULATION EFFECT
Through the end of 2011, the author viewed Google the way most people still do: as a super cool corporate anomaly that miraculously did amazing things for us every day completely free of charge. By that time, however, a number of people had already expressed grave concerns about the company. In 2007, the legal scholars Oren Bracha and Frank Pasquale called for the regulation of Google in an essay in the Cornell Law Review (Bracha and Pasquale 2008). In 2011, Scott Cleland, a US State Department official under President George H. W. Bush, published a scathing book called Search & Destroy: Why You Can’t Trust Google Inc. (cf. Auletta 2009; Taplin 2017a;
Vaidhyanathan 2011). Also in 2011, Google executive James Whitaker left the company, later noting: The Google I was passionate about was a technology company that empowered its employees to innovate. The Google I left was an advertising company with a single corporate-mandated focus. (Whitaker 2012; italics added)
The author began to turn a critical eye toward Google in January 2012, after receiving multiple notices from the company saying his website had been hacked (Epstein 2012). He wondered why he was not being notified by some government agency or nonprofit organization. When had Google become the Internet’s sheriff, prowling cyberspace for shady properties? Having been a programmer most of his life, he also wondered how Google was now blocking access to his website not only through its search engine but also, somehow, through both Safari (a browser owned by Apple) and Firefox (a browser owned by Mozilla, a nonprofit organization). He eventually explained how and why Google blocks access to millions of websites in an investigative article he wrote for U.S. News & World Report called “The New Censorship” (Epstein 2016d). Late in 2012, he was looking at a growing scientific literature that examined how search results impacted consumer behavior. Apparently, people trusted search rankings so much that 50% of all clicks went to the top two results, with more than 90% of clicks going to that precious first page of 10 results, and eye-tracking and other studies suggested that people focused on high-ranked results even when lower-ranked results were superior (Agichtein et al. 2006; Chitika 2013; Granka, Joachims, and Gay 2004; Guan and Cutrell 2007; Jansen, Spink, and Saracevic 2000; Joachims et al. 2007; Lorigo et al. 2008; Optify 2011; Pan et al. 2007; Purcell, Brenner, and Rainie 2012; Silverstein et al. 1999; Spink et al. 2001). Such findings led him to ask: Do people trust high-ranking results so much that results that are biased toward one particular perspective could shift the opinions or beliefs of people who were undecided on an issue? Early in 2013, he and Ronald E. Robertson, currently a doctoral candidate in network science at Northeastern University in Boston, put this idea to a test. 
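The aggregate click pattern reported in that literature can be illustrated with invented per-rank click-through rates, chosen solely to match the figures cited above (roughly half of all clicks on the top two results, more than 90% on the first page of 10):

```python
# Illustrative click-through rates by rank position. These per-rank
# figures are invented; only the aggregate pattern (top two ~50%,
# first page >90%) reflects the literature cited in the text.
ctr_by_rank = [0.32, 0.18, 0.11, 0.08, 0.06, 0.05, 0.04, 0.03, 0.025, 0.02]

top_two = sum(ctr_by_rank[:2])     # share of clicks on results 1-2
first_page = sum(ctr_by_rank)      # share of clicks on the first page of 10
print(top_two, round(first_page, 3))
```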
Using real search results and real web pages from the 2010 election for the prime minister of Australia, they randomly assigned a diverse group of 102 eligible US voters to one of three groups: (1) people who were exposed to search results that favored Candidate A (Julia Gillard)—that is, whose high-ranking search results linked to web pages that made Gillard look better than her opponent, (2) people who were exposed to search results that favored Candidate B (Tony Abbott), or
(3) people who were exposed to search results that favored neither candidate (the control group). They used the Australia election to make sure their American participants would all be "undecided." Before letting them conduct their online search, the researchers gave them basic information about each candidate and then asked them in five different ways which candidate they preferred—how much they trusted each candidate, how much they liked each candidate, and so on. Prior to search, the three groups did not differ significantly in their preferences on any of the five measures. Then participants were let loose on a custom search engine—"Kadoodle"—where they could research the candidates for up to 15 minutes using 30 search results, organized in five pages of six results each. Participants could freely shift from page to page and click on any of the results to look at full web pages, just as people do on Google and other search engines. After their search, the researchers again asked them for their preferences using those same five measures. The author had speculated that the two bias groups would shift their preferences by 2 or 3 percentage points after their searches, with people seeing pro-Gillard search results shifting a bit in her direction and people seeing pro-Abbott results shifting a bit in his. But that is not what happened. Instead, dramatic shifts occurred in all five preference measures, with the proportion of people favoring one candidate or the other shifting by 48.4%, and this was after just one search. (Note how the math works here: If the preference is initially 50/50, and you can get 48% of the people in one group—in other words, 24 people out of 50—to shift toward the candidate you are supporting, you now have the ability to create a win margin of 48% for that candidate—in other words, to get 74% of undecided voters to vote for the favored candidate with only 26% voting for his or her opponent.
So the shift mentioned above—what the researchers call their “VMP” or vote manipulation power—can be considered an estimate of the win margin one might be able to create among undecided voters in a tight race. Needless to say, a margin of 48.4% is gigantic, especially if many voters are undecided.) The effect the researchers found was so large that they were skeptical about it. Also disturbing was the fact that only 25% of the participants showed any awareness that they were seeing biased search rankings, even though the rankings were, from the perspective of the researchers, blatantly biased. In two subsequent experiments, they intermingled the search results supporting the two candidates slightly to mask the bias. They still got dramatic shifts in voting preferences—63.3% and 36.7%, respectively,
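The win-margin arithmetic in the parenthetical note can be written out explicitly, using the numbers reported above:

```python
# Win-margin arithmetic from the text: start at 50/50 among undecided
# voters, then shift 48% of one group toward the favored candidate.
favored, opponent = 50, 50   # percentage points, initially even

shift = 0.48 * opponent      # 48% of the 50 who initially lean the other way
favored += shift             # 50 + 24 = 74
opponent -= shift            # 50 - 24 = 26

win_margin = favored - opponent
print(favored, opponent, win_margin)  # 74.0 26.0 48.0
```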
in the two experiments—while reducing the number of people who spotted the bias in the search results to zero, thus showing that biased search rankings can shift voting preferences invisibly—that is, with no awareness at all that people are being manipulated. The initial experiments were small and performed in a laboratory environment in San Diego, California, but the fourth experiment was conducted online with 2,100 eligible voters from all 50 US states. This experiment produced a 33.5% shift overall, and there were now enough people so that demographic effects could be examined. The researchers found that different demographic groups varied substantially in their susceptibility to this kind of manipulation, with one demographic group— moderate Republicans—shifting by an astonishing 80%. The researchers were also able to look separately at the small group of people (a total of 120 of the people in the two bias groups) who noticed that the search results were biased, and here another disturbing discovery was made: The people who noticed the bias shifted even farther in the direction of the bias—by 45%. In other words, simply being aware of a bias in search rankings does not protect people from being affected by that bias—quite the contrary, in fact. The final experiment in this initial series took the researchers to India for the 2014 Lok Sabha national election there—the largest democratic election in the world. They recruited 2,150 undecided, eligible voters throughout India who had not yet voted and randomly assigned them to one of three groups: Each participant had access to search results favoring one of the three major candidates running for prime minister. The previous experiments had all employed American participants who viewed materials from that 2010 Australian election, but now current search results and web pages were being used with real voters, right in the middle of an intense election campaign. 
The author's thinking here was that biased search results could still shift voting preferences but by only 1% or 2%. Again, his prediction proved wrong. At first, an overall shift of 10.6% was found, but when procedures were optimized based on what the researchers were learning about Indian culture, the shift increased to 24.5%—over 65% in two of the demographic groups they looked at—with 99.5% of the participants showing no awareness that they were seeing biased search results. Again, this shift in voting preferences occurred after only a single search; presumably, if search results were biased in favor of one candidate over a period of months before an election, people would, over time, conduct multiple searches that might impact their voting preferences. In other words, it is possible that the large numbers the researchers were getting were actually on the low side.
The results of these initial experiments were published in the Proceedings of the National Academy of Sciences USA in 2015 (Epstein and Robertson 2015), and the power of search rankings to shift votes and opinions was dubbed the search engine manipulation effect (SEME). The report included mathematics that would allow one to predict which elections could be flipped using biased search rankings, given information about Internet penetration and other factors in a given population. Because the win margins in many elections are small (Ms. Gillard prevailed over Mr. Abbott by only 0.24% in that election in Australia), and because normal search activity often boosts one candidate over another in search results, the report estimated that SEME was currently determining the outcomes of upward of 25% of the world's national elections. It included a mathematical model showing that even very small biases in search results could have a dramatic impact on elections because of a possible synergy between two phenomena: high search rankings increase interest in a candidate (SEME), and strong interest in a candidate boosts search results related to that candidate. Subsequent research the researchers have conducted on SEME has increased their understanding of it substantially. Among the major findings:
• SEME is powerful because of operant conditioning. The simple factual searches we conduct day in and day out ("What is the capital of Nigeria?") invariably show us the correct answer in the top search position, teaching us, over and over again, that what is higher in the list is better and truer. People also mistakenly believe that computer algorithms are inherently more objective than people are, even though algorithms are written by people and virtually no one knows how computer algorithms actually work (Gerhart 2004).
• SEME can dramatically shift the opinions of people who are undecided about almost anything at all—global warming, homosexuality, fracking—not just voting preferences.
• When multiple searches on the same topic lead repeatedly to similarly biased search results, additional searches do indeed increase the impact of SEME.
• SEME can be suppressed to some extent with warnings that alert people to the bias they are seeing in search results. (Note that when people see a warning, that is very different from when they notice on their own that search results are biased. Warnings suppress SEME, whereas noticing bias increases SEME's impact.) Unfortunately, the only way the researchers have found to suppress SEME completely is with a kind of equal-time rule: that is, by alternating biased search results—first toward one
candidate, then toward the other, then toward the first again, and so on (Epstein, Robertson, Lazer, and Wilson 2017).
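The synergy posited in the 2015 report (high rankings increase interest in a candidate, and strong interest boosts that candidate's rankings) can be sketched as a simple feedback loop; the gain parameters below are invented solely to illustrate how a small initial bias can compound:

```python
# Toy feedback loop: a small initial ranking bias compounds because
# rankings raise interest and interest feeds back into rankings.
# The 0.1 gain parameters are invented for illustration only.
def compound_bias(initial_bias, rank_to_interest=0.1,
                  interest_to_rank=0.1, rounds=10):
    rank_advantage = initial_bias
    interest = 0.0
    for _ in range(rounds):
        interest += rank_to_interest * rank_advantage  # SEME: rank -> interest
        rank_advantage += interest_to_rank * interest  # interest -> rank
    return rank_advantage

final = compound_bias(0.01)
print(f"{final:.4f}")  # larger than the 0.01 bias we started with
```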
SSE: THE SEARCH SUGGESTION EFFECT
In June 2016, a media company called SourceFed released a 7-minute video on YouTube which claimed that Google, Inc., was suppressing negative search suggestions for Hillary Clinton. In other words, when you started to type a search term such as "Hillary's he," whereas Bing and Yahoo would show you suggestions such as "Hillary's health" or "Hillary's health problems," Google would show you only positive suggestions, such as "Hillary's health plan." This was true, SourceFed said, even though Google's own Trends¹ data revealed that far more people were searching for "Hillary's health problems" than for "Hillary's health plan." The SourceFed video also showed that Google regularly showed negative suggestions for other people, such as Donald Trump; it just would not show negatives for Mrs. Clinton. The video soon attracted more than a million views, and a 3-minute version posted on Facebook soon had more than 25 million views.² During the summer of 2016, the author and eight members of his staff investigated SourceFed's claims systematically, and they found that those claims were generally valid (Epstein 2016f). Note, for example, the dramatic differences in what Google suggested when people typed the word "crooked" on August 8, 2016, versus what Bing and Yahoo showed for the same search term (Figure 12.1). Donald Trump's insulting moniker for Mrs. Clinton—"crooked Hillary"—was conspicuously absent from Google's suggestions, even though it was a frequently used search term. Or consider the differences that turned up when typing "Hillary Clinton is" on August 3, 2016 (Figure 12.2). There was little question that Google was suppressing negative terms for Mrs. Clinton, but what was the point? Why suppress negative suggestions for one candidate? As the author has reported at recent scientific conferences, as of this writing he has completed a series of four experiments that shed light on

1. You can access Google Trends at https://trends.google.com/trends/.
2. The 7-minute version of the video was posted by SourceFed on June 9, 2016, at http://youtube.com/watch?v=PFxFRqNmXKg. Unfortunately, not long after it had been viewed more than a million times, the video on YouTube (which is owned by Google) was made "private," and it appears to be inaccessible by any means at this writing. The 3-minute version is still accessible at http://facebook.com/SourceFedNews/videos/vb.322741577776002/1199514293432055/?type=2&theater.
Figure 12.1: The search term “crooked” produced dramatically different results on Google than it did on Bing and Yahoo on August 8, 2016. Bing and Yahoo showed related search phrases that were popular at that time, including “crooked Hillary,” the unflattering nickname Donald Trump gave Mrs. Clinton during the 2016 US presidential campaign. Google showed four innocuous items; “crooked Hillary” was not among them.
what Google was doing—specifically, on the differential suppression of negative search suggestions (Epstein 2017a). Each experiment was conducted online with a diverse group of 300 participants from multiple US states, and each had the same general format. People were shown examples of search terms typed into Google's search bar, and search suggestions were also shown for each example. In some examples, the suggestions included a negative term (that is, a "low-valence" term). Two of the experiments controlled for both the word frequency and the arousal levels of the search suggestions, so that only negativity was varied. Participants were asked to pick the search suggestion they would click if they had conducted this search; if they preferred, they could ignore the
Figure 12.2: The search term “Hillary Clinton is” produced dramatically different results on Google than it did on Bing and Yahoo on August 3, 2016. Bing and Yahoo each showed a number of highly negative search phrases, all of which, according to Google Trends, were popular at the time. Remarkably, Google showed only “Hillary Clinton is winning” and “Hillary Clinton is awesome,” even though neither phrase showed any degree of popularity on Google Trends.
search suggestions and type their own search term. The experiments were designed to shed light on several mysterious aspects of Google's search suggestions, among them:
• Why does Google systematically suppress negative search suggestions for some people and some topics—including for the company itself? Google, Bing, and Yahoo all show negative search suggestions for Bing and Yahoo, but only Bing and Yahoo show negative search suggestions for Google.
• Why do Google's search suggestions not correspond to the frequency with which search terms are used in the general population, as indicated by Google's own data in Google Trends?
• Why does Google generally show people only four suggestions? When the company first introduced autocomplete in 2004, it showed 10 suggestions, and these seemed to be indicative of how frequently these search terms were being used in the population at large. Bing and Yahoo still do this, although Bing generally shows eight suggestions rather than 10. What is so special about the number four?
These are issues the author and his associates are still studying, but so far what they have learned is both clear and disturbing. The bottom line is that although Google introduced autocomplete as a way of making people's searches faster and more efficient—at least that is what the company said—over time, the purpose of Google's autocomplete system has changed. Its main purpose now appears to be to manipulate people's searches—that is,
to nudge searches one way or another so that people will see search results and web pages the company wants them to see. Differentially suppressing negative suggestions for a preferred candidate (or, for that matter, a company or a political position) turns out to be an incredibly powerful way of manipulating search because of a phenomenon called “negativity bias”: the tendency for negative stimuli to draw far more attention than neutral or positive stimuli do (Baumeister et al. 2001; Estes and Adelman 2008; Kuperman et al. 2014; Nasrallah, Carmel, and Lavie 2009; Rozin and Royzman 2001). It is the old cockroach-in-the-salad effect: A single, small cockroach in a salad draws an inordinate amount of attention, ruining the entire salad. There is no corresponding phenomenon for positive stimuli; adding an attractive piece of chocolate to the center of a plate of sewage does not make the sewage more appetizing. Negative stimuli, however, are incredibly powerful. Controlling for the arousal levels and word frequencies of our search suggestions, the new experiments show that a single negative item in a list of search suggestions generally draws far more clicks than neutral or positive suggestions do—10 to 15 times as many clicks under some circumstances. So, over time, differentially suppressing negatives for one candidate—the one the search company favors—has the potential to drive millions of people to view positive information about that candidate while also driving millions of people to view negative information about the opposing candidate. This brings us back to SEME, of course. Biased search results have a dramatic impact on the opinions and votes of undecided people; rigging search suggestions to nudge people toward positive or negative web pages is a simple yet powerful way to shift opinions without anyone being the wiser. This new form of manipulation is called the search suggestion effect (SSE). 
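The pull of a single negative item among a list of suggestions can be illustrated with a simple weighted-choice model; the 12x weight given to the negative suggestion is an assumption drawn from the "10 to 15 times" range reported in the text:

```python
# Share of clicks captured by one negative suggestion among n suggestions,
# using a simple weighted-choice model. The 12x negative weight is an
# assumption drawn from the 10-to-15x range reported in the text.
NEGATIVE_WEIGHT = 12.0
NEUTRAL_WEIGHT = 1.0

def negative_click_share(n_suggestions):
    """Fraction of suggestion clicks that go to the single negative item."""
    total = NEGATIVE_WEIGHT + (n_suggestions - 1) * NEUTRAL_WEIGHT
    return NEGATIVE_WEIGHT / total

for n in (4, 6, 8, 10):
    print(n, round(negative_click_share(n), 2))
```

Under this model the negative item's share of clicks falls as the list grows, which is one half of the trade-off that, according to the research described below, makes four suggestions the sweet spot.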
Research that the author is currently conducting is showing how SSE and SEME can work synergistically, as well as how to quantify SSE's potential impact on an election. Regarding the number of search suggestions Google shows us, the new SSE research suggests that four is the magical value that allows one to maximize the impact of a negative search suggestion (the more search suggestions you show, the lower the impact of the negative suggestion) while also minimizing the likelihood that people will ignore the search suggestions and type their own search term (Figure 12.3). Having people type their own term is the last thing Google wants; the company maximizes control over people's searches by making sure they click on one of the suggestions the company provides. More and more, these suggestions have nothing to do with the popularity of search terms and much more to do with the algorithm's
[ 308 ] Politics
[Figure 12.3 appears here: a line graph titled "Maximizing Control Over Search," plotting click probability (percentage of maximum) against the number of search suggestions offered (2 to 8), with one line for all suggestions and one for negative suggestions.]
Figure 12.3: This graph shows partial results from one of the author’s recent experiments on the search suggestion effect (SSE). The positively sloped line shows that the probability that the user will click on an offered search suggestion (rather than completing his or her own search term) increases as more search suggestions are offered. The negatively sloped line shows that the probability that the user will click on a negative search term (that is, one with a low “valence”) decreases as more search terms are offered. Offering four search suggestions (upper corner of the outlined parallelogram) maximizes the probability that the user will click on one of the offered search suggestions and that he or she will click on a negative search suggestion if one is available. In other words, offering four search suggestions maximizes control over a user’s search.
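The tradeoff depicted in Figure 12.3 can be expressed as a toy model. The curves below are illustrative assumptions chosen by this editor to mirror the figure's shape (a saturating click-through curve and an attention-weight model of negativity bias); they are not the author's data, and the specific parameters (the `1.5` and the weight `w = 12.0`, loosely echoing the 10-to-15-times figure in the text) are invented. Under these assumptions, the joint probability happens to peak at four suggestions, as the figure indicates.

```python
def p_any(n: int) -> float:
    """Probability a user clicks *some* offered suggestion (assumed
    saturating curve: rises toward 1 as more suggestions are shown)."""
    return n / (n + 1.5)

def p_neg(n: int) -> float:
    """Probability the clicked suggestion is the single negative one,
    assuming negativity bias gives it a fixed attention weight w against
    (n - 1) neutral suggestions of weight 1 each."""
    w = 12.0  # assumed attention weight for the negative item
    return w / (w + (n - 1))

def control(n: int) -> float:
    """Joint probability: user clicks a suggestion AND it is the negative one."""
    return p_any(n) * p_neg(n)

# The number of suggestions that maximizes the joint probability:
best = max(range(1, 11), key=control)
```

Showing more suggestions raises `p_any` but dilutes `p_neg`; the product is maximized at an intermediate value, which is the shape of the argument in the chapter.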
attempt to anticipate an individual’s responses based on the vast amount of information the company has collected about him or her. It is not yet clear what the combined effects of SEME and SSE are, but their effects are not, in any case, the whole story. The author and his associates have recently begun a series of “Answer Bot” experiments, which are looking at yet another subtle aspect of how Google is systematically manipulating “what [people] believe, what they don’t,” as Eric Schmidt put it (Scola 2016). Although it may not be obvious to people, Google has rapidly been moving away from the search engine as a tool for answering queries and toward an Answer Bot model: that is, simply giving people the answer to their question, just as Captain Kirk’s computer always did in the old Star Trek shows and movies. People do not really want to be given a list of 38 million web pages; they just want the answer. Google is increasingly providing just that with tools like their “featured snippets”—those boxes that are appearing with increasing frequency at the top of the first page of search results—along with the Siri-like Google
Manipulating Minds
Assistant that is now embedded in many new Android devices, and, more recently, the Google Home device that the company is urging people to install in every room. With Home or Assistant, you simply ask your question (“What is the best dog food?”), and Google gives you the answer. This gives the company a high degree of control over what people purchase and how they think, and, as a bonus, it gives the company the ability to monitor, record, and analyze much of what people say 24 hours a day (Edwards 2017; Moynihan 2016). If this sounds shocking to you, you have not been paying attention. Google has been monitoring, analyzing, and storing all of people’s Gmails since 2007—even the drafts people decided to delete after realizing they were too outrageous to send (Epstein 2013b, 2014)—and many Android phones have been able to see and hear people since perhaps 2008. As far as anyone knows, Google stores all of this information permanently. As one top Google executive was quoted as saying in an article in the New York Times, “Never delete anything, always use data—it’s what Google does” (Hardy 2015).
A METHOD FOR MONITORING SEARCH ENGINES
It is one thing to demonstrate in controlled experiments that a company like Google has the power to shift opinions and votes by showing people biased search rankings—quite another to show that Google is actually showing people biased rankings. Early in 2016, Epstein and Robertson, working in secret with two teams of programmers, devised a system to monitor the bias in Google's search rankings (Epstein and Robertson 2017). Specifically, they recruited a Nielsen-type network of confidential field agents scattered throughout the United States, and they developed browser add-ons for both the Chrome and Firefox browsers that allowed them to track election-related searches conducted by the field agents for nearly six months before election day on November 8, 2016. Overall, they were able to preserve 13,207 election-related searches (in other words, 132,070 search results), along with the 98,044 web pages to which the search results linked. After the election, they used crowdsourcing techniques to determine whether the search rankings people saw were biased toward Hillary Clinton or Donald Trump. The researchers are still analyzing this wealth of data, but, overall, they found a clear and consistent bias in Google's search rankings for Hillary Clinton in all 10 search positions on the first page of search results over
most of this 6-month period—enough, perhaps, to have shifted more than two million votes toward Mrs. Clinton. Was this bias deliberately created by Google executives, or was it algorithmically driven by everyday “organic” search processes? In the opinion of this author, it doesn’t matter. Bias in search results shifts opinions and votes dramatically without people’s knowledge. If Google executives are deliberately altering parameters to favor one candidate, that practice should be made illegal. If Google executives are simply standing aside and allowing their algorithms to show people biased search rankings, that practice too should be stopped. One can easily adjust an algorithm so that it suppresses all bias; the author’s data show unequivocally that Google has such power (Epstein et al. 2017). Researchers have lately been developing ways of teasing apart sources of bias in online search (e.g., see Kulshrestha et al. 2017). Although such efforts are laudable, this author believes that any and all bias that turns up in important online source material such as search results—material people believe to be inherently unbiased and objective—needs to be strictly monitored and regulated. No matter what the source of the bias, it has too much of an impact on people’s opinions and behavior—almost always without any awareness on their part that they are being influenced—to be ignored.3
3. The monitoring system that Epstein and Robertson deployed in 2016 can be considered a successful proof of concept. It demonstrated that ephemeral events on the Internet—events that have never been tracked and that disappear in an instant—can be systematically monitored on a large scale. The system they developed could be used, in theory, to preserve any sort of ephemeral events on the Internet—search suggestions, search results, news feeds, advertisements—even events that have not been invented yet. To put this another way, the system they developed can be expanded into a worldwide ecosystem of passive monitoring software. Monitoring software can be installed on the computers of a large number of Nielsen-type confidants with known demographic characteristics, and this network can be scaled up as needed. Recruiting such confidants, developing and updating the necessary software, maintaining the security of such a system, analyzing the wealth of data that will be collected—all of this will be difficult and expensive, but it can be done. The author is now working with colleagues from Stanford University, Princeton University, MIT, the University of Maryland, the University of Virginia, King’s College London, and elsewhere to create a new organization—The Sunlight Society (http://TheSunlightSociety.org)—that will create such a system and coordinate similar systems, and, in so doing, protect the public from the machinations and manipulations of the Big Tech companies. The Society is dedicated to “detecting, studying and exposing new technologies that pose a threat to democracy and human freedom,” following the famous dictum of Justice Louis Brandeis that “sunlight is said to be the best of disinfectants” (Brandeis 1913). As needed and on an ongoing basis, Sunlight will share its findings with the general public, the media, regulators, antitrust investigators, legislators, and law enforcement agencies.
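A rank-weighted scoring scheme of the general kind described above can be sketched as follows. This is an illustrative construction of this editor's devising, not Epstein and Robertson's actual metric: each result on a page gets a crowdsourced rating in [-1, +1], and ratings are discounted by rank to reflect the steep drop-off in attention by search position.

```python
def bias_score(ratings):
    """Rank-weighted mean of per-result bias ratings for one results page.

    `ratings` holds crowdsourced ratings in rank order, each in [-1, +1]
    (e.g., -1 = strongly favors one candidate, +1 = the other). Harmonic
    weights (1, 1/2, 1/3, ...) stand in for the decline in clicks by rank.
    """
    weights = [1.0 / (rank + 1) for rank in range(len(ratings))]
    return sum(w * r for w, r in zip(weights, ratings)) / sum(weights)

# Bias concentrated at the top of the page counts for more than the
# same bias buried at the bottom:
top_loaded = bias_score([1.0] * 3 + [0.0] * 7)
bottom_loaded = bias_score([0.0] * 7 + [1.0] * 3)
```

Averaging such per-page scores over thousands of logged searches, by day and by demographic group, is then a straightforward aggregation.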
FINAL THOUGHTS
Since Google, Inc., was incorporated on September 7, 1998, it has become the gateway to virtually all knowledge for most people on earth outside China and Russia (the company’s operations have so far been constrained by the governments of those countries). Throughout the European Union and in most other countries around the world, more than 90% of search is conducted on Google’s search engine—“trillions” of searches per year, says the company (Sullivan 2016), with that number increasing rapidly as Internet penetration continues to increase. In major English dictionaries, to “google” now means to conduct an online search, and this verb is creeping into non-English dictionaries too (Greenfield 2012). Google dominates the Internet not only in online search but also in mobile device software (Android), browsers (Chrome), language translation (Translate), e-mail (Gmail), online videos (YouTube), physical tracking (Maps), DNS routing, online storage, and dozens of other important domains, which means, among other things, that Google controls five out of the world’s six billion-user online platforms: browsers, video, mobile, search, and maps (Cleland 2015). It has twice tried to dethrone Facebook from its dominance in the sixth billion-user platform—social media—but has so far failed to do so. In the critical world of online analytics, Google is unmatched: about 98% of the top 15 million websites in the world use Google Analytics to track the traffic to those sites, which means Google is also tracking that traffic. Meanwhile, social media platforms—Facebook in particular—are rapidly becoming the main sources through which people get their news (Gottfried and Shearer 2016). Sometimes, at least momentarily, we get the impression that the online world is filled with many closely competing corporate giants, but this is largely an illusion. 
The popular social media companies Instagram and WhatsApp are both owned by Facebook, which has acquired more than 65 companies since it was founded in 2004 (Toth 2016). Since 2010, Google has been acquiring an average of a company a week (CB Insights 2017)—most recently, a gaggle of companies developing artificial intelligence systems; it owns YouTube, as noted earlier, and it recently purchased Waze, the ubiquitous GPS navigation app. Twitter is still independent, but it might soon end up in either Facebook's or Google's hands (Bilton 2016). This is not what the Internet's creators had in mind. Sir Tim Berners-Lee, who invented the World Wide Web in 1989, is one of several prominent people who have recently expressed concern about what has become of the Internet. It was conceived of as the great leveler—a playing field that would give equal voice to every individual, organization, and small business.
Instead, as Berners-Lee lamented at a 2016 conference focused on reinventing the Internet, we have "the dominance of one search engine, one big social network, one Twitter for microblogging" (Hardy 2016). In a recent book and an essay in the New York Times, Robert Reich (2015a, 2015b), a professor of public policy at the University of California, Berkeley, and secretary of labor under President Bill Clinton, expressed his own concerns about the rising online monopolies, and so have Jonathan Taplin (2017a, 2017b) of the University of Southern California, Steven Strauss (2017) of Princeton University, Thomas Edsall (2017) of Columbia University, Nathaniel Persily (2017) of Stanford University, consumer advocate Ralph Nader (Ballasy 2017), and others (see Moore 2016; Epstein 2017b). The author's concern, driven by years of controlled scientific studies, along with the election-related search data he and his associates preserved in 2016 (Epstein and Robertson 2017), is not only that our online environment is dominated by a very small number of players but also that these players have at their disposal new means of manipulation and control that are unprecedented in human history. It is reasonable to assume that other such means exist that we have not yet discovered and that advances in technology will make possible other methods for controlling thinking and behavior that we cannot now envision. The failure of legislators and regulators to tackle such issues suggests that technology will remain well ahead of the legal and regulatory systems for the foreseeable future—perhaps indefinitely. The creation of monitoring systems of the sort described in this essay might prove to be critical in future years for protecting humanity from high-tech hijacking.
AUTHOR’S NOTE
The author is grateful to Roger L. Mohr, Jr. and Patrick D. Pham for help in the preparation of this manuscript. He can be contacted at [email protected].

REFERENCES

Agichtein, Eugene, Eric Brill, Susan Dumais, and Robert Ragno. 2006. "Learning User Interaction Models for Predicting Web Search Result Preferences." Proceedings of the 29th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, Seattle, WA, August 6–11, 2006. http://susandumais.com/sigir2006-fp338-preferences-agichtein.pdf.
Alphabet. 2017. "Press Release: Alphabet Announces Fourth Quarter and Fiscal Year 2016 Results." Alphabet. https://abc.xyz/investor/news/earnings/2016/Q4_alphabet_earnings/.
Auletta, Ken. 2009. Googled: The End of the World as We Know It. New York: Penguin.
Ballasy, Nicholas. 2017. "Nader: Congress Must Deal with Google, Microsoft, Apple, Facebook 'Monopoly'." PJ Media, March 16, 2017. https://pjmedia.com/news-and-politics/2017/03/16/nader-congress-must-deal-with-google-microsoft-apple-facebook-monopoly/.
Baumeister, Roy F., Ellen Bratslavsky, Catrin Finkenauer, and Kathleen D. Vohs. 2001. "Bad Is Stronger Than Good." Review of General Psychology 5, no. 4: 323–70. https://doi.org/10.1037//1089-2680.5.4.323.
Bilton, Nick. 2016. "Will Google or Facebook Buy Twitter?" Vanity Fair, June 16, 2016. http://www.vanityfair.com/news/2016/06/nick-bilton-on-the-future-of-twitter.
Blondheim, Menahem. 1994. News over the Wires: The Telegraph and the Flow of Public Information in America, 1844–1897. Cambridge, MA: Harvard University Press.
Bond, Robert M., Christopher J. Fariss, Jason J. Jones, Adam D. I. Kramer, Cameron Marlow, Jaime E. Settle, and James H. Fowler. 2012. "A 61-Million-Person Experiment in Social Influence and Political Mobilization." Nature 489: 295–98. https://doi.org/10.1038/nature11421.
Bracha, Oren, and Frank Pasquale. 2008. "Federal Search Commission—Access, Fairness, and Accountability in the Law of Search." Cornell Law Review 93, no. 6: 1149–209. http://scholarship.law.cornell.edu/cgi/viewcontent.cgi?article=3107&context=clr.
Brandeis, Louis D. 1913. "What Publicity Can Do." Harper's Weekly, December 20, 1913. http://3197d6d14b5f19f2f440-5e13d29c4c016cf96cbbfd197c579b45.r81.cf1.rackcdn.com/collection/papers/1910/1913_12_20_What_Publicity_Ca.pdf.
CB Insights. 2017. "The Google Acquisition Tracker." Accessed June 11, 2017. https://www.cbinsights.com/research-google-acquisitions.
Chitika. 2013. "The Value of Google Result Positioning." Chitika, Inc., June 7, 2013. https://chitika.com/google-positioning-value.
Chokshi, Niraj. 2016. "Facebook Helped Drive a Voter Registration Surge, Election Officials Say." New York Times, October 12, 2016. https://www.nytimes.com/2016/10/13/us/politics/facebook-helped-drive-a-voter-registration-surge-election-officials-say.html.
Cleland, Scott. 2011. Search & Destroy: Why You Can't Trust Google Inc. St. Louis: Telescope Books.
Cleland, Scott. 2015. "Google Consolidating Its Dominance at Unprecedented Rates—2012–2015 Chart." Precursor, September 2, 2015. http://precursorblog.com/?q=content/google-consolidating-its-dominance-unprecedented-rates-2012-2015-chart.
Edsall, Thomas B. 2017. "Democracy, Disrupted." New York Times, March 2, 2017. https://www.nytimes.com/2017/03/02/opinion/how-the-internet-threatens-democracy.html.
Edwards, Haley Sweetland. 2017. "Alexa Takes the Stand: Listening Devices Raise Privacy Issues." TIME, May 3, 2017. http://time.com/4766611/alexa-takes-the-stand-listening-devices-raise-privacy-issues/.
Eisenhower, Dwight D. 1961. "Military-Industrial Complex Speech." Michigan State University, January 17, 1961. http://coursesa.matrix.msu.edu/~hst306/documents/indust.html.
Epstein, Robert. 2012. "Why Google Should Be Regulated (Part 1)." Huffington Post, December 23, 2012. http://www.huffingtonpost.com/dr-robert-epstein/google-privacy_b_1962827.html. (An edited version first appeared in The Kernel [UK] on September 5, 2012, entitled, "Google: The Case for Hawkish Regulation.")
Epstein, Robert. 2013a. "Google's Dance." TIME, March 27, 2013. http://techland.time.com/2013/03/27/googles-dance/.
Epstein, Robert. 2013b. "Google's Gotcha." U.S. News & World Report, May 10, 2013. http://www.usnews.com/opinion/articles/2013/05/10/15-ways-google-monitors-you.
Epstein, Robert. 2014. "Google's Snoops: Mining Our Private Data for Profit and Pleasure." Dissent, May 9, 2014. http://www.dissentmagazine.org/online_articles/googles-snoops-mining-our-data-for-profit-and-pleasure.
Epstein, Robert. 2015. "How Google Could Rig the 2016 Election." Politico, August 19, 2015. http://politico.com/magazine/story/2015/08/how-google-could-rig-the-2016-election-121548.html.
Epstein, Robert. 2016a. "The New Mind Control." Aeon, February 18, 2016. https://aeon.co/essays/how-the-internet-flips-elections-and-alters-our-thoughts.
Epstein, Robert. 2016b. "Bigger Brother: Microsoft and Google's New Pact Could Signal the Beginning of the End for Personal Privacy." Quartz, May 4, 2016. http://qz.com/676184/microsoft-and-googles-pact-is-the-end-of-personal-privacy/.
Epstein, Robert. 2016c. "Five Subtle Ways Facebook Could Influence the US Presidential Election This Fall." Quartz, June 12, 2016. http://qz.com/703680/five-subtle-ways-facebook-could-influence-the-us-presidential-election-this-fall/.
Epstein, Robert. 2016d. "The New Censorship." U.S. News & World Report, June 22, 2016. http://www.usnews.com/opinion/articles/2016-06-22/google-is-the-worlds-biggest-censor-and-its-power-must-be-regulated.
Epstein, Robert. 2016e. "Free Isn't Freedom: How Silicon Valley Tricks Us." Motherboard, September 6, 2016. http://motherboard.vice.com/read/free-isnt-freedom-epstein-essay.
Epstein, Robert. 2016f. "Research Proves Google Manipulates Millions to Favor Clinton." Sputnik International. Last modified September 14, 2016. https://sputniknews.com/us/20160912/1045214398/google-clinton-manipulation-election.html.
Epstein, Robert. 2017a. "The Search Suggestion Effect (SSE): How Autocomplete Can Be Used to Impact Votes and Opinions." Paper presented at the 2nd biennial meeting of the International Convention of Psychological Science, Vienna, Austria, March 24, 2017. http://aibrt.org/downloads/EPSTEIN_2017-The_Search_Suggestion_Effect-SSE-ICPS_Vienna-March_2017.pdf.
Epstein, Robert. 2017b. "Is It Still Possible to Stop 'Big Tech' from Killing Democracy?" The Hill, May 28, 2017. http://thehill.com/blogs/pundits-blog/technology/335507-is-it-still-possible-to-stop-big-tech-from-killing-democracy.
Epstein, Robert, and Benjamin Edelman. 2016. "The Other Elephant in the Voting Booth: Big Tech Could Rig the Election." The Daily Caller, November 4, 2016. http://dailycaller.com/2016/11/04/the-other-elephant-in-the-voting-booth-big-tech-could-rig-the-election/.
Epstein, Robert, and Ronald E. Robertson. 2015. "The Search Engine Manipulation Effect (SEME) and Its Possible Impact on the Outcomes of Elections." Proceedings of the National Academy of Sciences USA 112, no. 33: E4512–21. http://www.pnas.org/content/112/33/E4512.full.pdf?with-ds=yes.
Epstein, Robert, and Ronald E. Robertson. 2017. "A Method for Detecting Bias in Search Rankings, with Evidence of Systematic Bias Related to the 2016 Presidential Election." Vista, CA: American Institute for Behavioral Research
and Technology, White Paper no. WP-17-02. http://aibrt.org/downloads/EPSTEIN_&_ROBERTSON_2017-A_Method_for_Detecting_Bias_in_Search_Rankings-AIBRT_WP-17-02_6-1-17.pdf.
Epstein, Robert, Ronald E. Robertson, David Lazer, and Christo Wilson. 2017. "Suppressing the Search Engine Manipulation Effect (SEME)." Proceedings of the ACM: Human-Computer Interaction 1, no. 2: article 42. http://aibrt.org/downloads/EPSTEIN_et_al-2017-Suppressing_the_Search_Engine_Manipulation_Effect_(SEME).pdf.
Estes, Zachary, and James S. Adelman. 2008. "Automatic Vigilance for Negative Words in Lexical Decision and Naming: Comment on Larsen, Mercer, and Balota (2006)." Emotion 8, no. 4: 441–44. http://dx.doi.org/10.1037/1528-3542.8.4.441.
Gerhart, Susan L. 2004. "Do Web Search Engines Suppress Controversy?" First Monday 9, no. 1. http://journals.uic.edu/ojs/index.php/fm/article/view/1111/1031.
Gottfried, Jeffrey, and Elisa Shearer. 2016. "News Use across Social Media Platforms 2016." Pew Research Center, May 26, 2016. http://www.journalism.org/2016/05/26/news-use-across-social-media-platforms-2016/.
Granka, Laura A., Thorsten Joachims, and Geri Gay. 2004. "Eye-Tracking Analysis of User Behavior in WWW Search." Proceedings of the 27th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, Sheffield, United Kingdom, July 25–29, 2004. https://www.cs.cornell.edu/people/tj/publications/granka_etal_04a.pdf.
Greenfield, Rebecca. 2012. "How to Say 'Google' in Every Language (Almost)." The Atlantic, December 6, 2012. https://www.theatlantic.com/technology/archive/2012/12/how-do-you-saw-google-other-languages/320649/.
Gruber, John. 2013. "The Inside Story of the Moto X." Daring Fireball, August 1, 2013. http://daringfireball.net/linked/2013/08/01/moto-x.
Guan, Zhiwei, and Edward Cutrell. 2007. "An Eye Tracking Study of the Effect of Target Rank on Web Search." Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, San Jose, California, April 28–May 3, 2007. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.110.1496&rep=rep1&type=pdf.
Hardy, Quentin. 2015. "Google Offers Cheap Storage for Certain Kinds of Data." New York Times, March 11, 2015. https://bits.blogs.nytimes.com/2015/03/11/google-offers-cheap-storage-for-certain-kinds-of-data.
Hardy, Quentin. 2016. "The Web's Creator Looks to Reinvent It." New York Times, June 7, 2016. https://www.nytimes.com/2016/06/08/technology/the-webs-creator-looks-to-reinvent-it.html.
Isaac, Mike. 2016. "Facebook, in Cross Hairs after Election, Is Said to Question Its Influence." New York Times, November 12, 2016. https://www.nytimes.com/2016/11/14/technology/facebook-is-said-to-question-its-influence-in-election.html.
Jansen, Bernard J., Amanda Spink, and Tefko Saracevic. 2000. "Real Life, Real Users, and Real Needs: A Study and Analysis of User Queries on the Web." Information Processing and Management 36, no. 2: 207–27. http://doi.org/10.1016/S0306-4573(99)00056-4.
Joachims, Thorsten, Laura Granka, Bing Pan, Helene Hembrooke, Filip Radlinski, and Geri Gay. 2007. "Evaluating the Accuracy of Implicit Feedback from Clicks and Query Reformulations in Web Search." ACM Transactions on Information Systems (TOIS) 25, no. 2: 2–26. https://www.cs.cornell.edu/people/tj/publications/joachims_etal_07a.pdf.
Khosla, Vinod. 2014. "Fireside Chat with Google Co-Founders, Larry Page and Sergey Brin." Khosla Ventures, July 3, 2014. http://www.khoslaventures.com/fireside-chat-with-google-co-founders-larry-page-and-sergey-brin.
Kulshrestha, Juhi, Motahhare Eslami, Johnnatan Messias, Muhammad Bilal Zafar, Saptarshi Ghosh, Krishna P. Gummadi, and Karrie Karahalios. 2017. "Quantifying Search Bias: Investigating Sources of Bias for Political Searches in Social Media." Proceedings of ACM Conference on Computer Supported Cooperative Work & Social Computing (CSCW), Portland, Oregon, February 25–March 1, 2017. http://dx.doi.org/10.1145/2998181.2998321.
Kuperman, Victor, Zachary Estes, Marc Brysbaert, and Amy Beth Warriner. 2014. "Emotion and Language: Valence and Arousal Affect Word Recognition." Journal of Experimental Psychology: General 143, no. 3: 1065–81. http://dx.doi.org/10.1037/a0035669.
Lorigo, Lori, Maya Haridasan, Hrönn Brynjarsdóttir, Ling Xia, Thorsten Joachims, Geri Gay, Laura Granka, Fabio Pellacini, and Bing Pan. 2008. "Eye Tracking and Online Search: Lessons Learned and Challenges Ahead." Journal of the Association for Information Science and Technology 59, no. 7: 1041–52. http://dx.doi.org/10.1002/asi.20794.
Moore, Martin. 2016. Tech Giants and Civic Power. King's College London: Centre for the Study of Media, Communication & Power. http://www.kcl.ac.uk/sspp/policy-institute/CMCP/Tech-Giants-and-Civic-Power.pdf.
Morris, David Z. 2016. "Zuckerberg Details Further Fake News Controls for Facebook." Fortune, November 19, 2016. http://fortune.com/2016/11/19/zuckerberg-fake-news-facebook/.
Moynihan, Tim. 2016. "Alexa and Google Home Record What You Say. But What Happens to That Data?" Wired, December 5, 2016. https://www.wired.com/2016/12/alexa-and-google-record-your-voice/.
Nasrallah, Maha, David Carmel, and Nilli Lavie. 2009. "Murder, She Wrote: Enhanced Sensitivity to Negative Word Valence." Emotion 9, no. 5: 609–18. http://dx.doi.org/10.1037/a0016305.
Optify. 2011. The Changing Face of SERPs: Organic Click Through Rate. Optify, Inc. http://www.my.epokhe.com/wp-content/uploads/2011/05/Changing-Face-of-SERPS-Organic-CTR.pdf.
Pan, Bing, Helene Hembrooke, Thorsten Joachims, Lori Lorigo, Geri Gay, and Laura Granka. 2007. "In Google We Trust: Users' Decisions on Rank, Position, and Relevance." Journal of Computer-Mediated Communication 12, no. 3: 801–23. http://dx.doi.org/10.1111/j.1083-6101.2007.00351.x.
Pasick, Adam. 2017. "Facebook Says It Can Sway Elections After All—for a Price." Quartz, March 1, 2017. https://qz.com/922436/facebook-says-it-can-sway-elections-after-all-for-a-price/.
Persily, Nathaniel. 2017. "The 2016 U.S. Election: Can Democracy Survive the Internet?" Journal of Democracy 28, no. 2: 63–76. http://www.journalofdemocracy.org/article/can-democracy-survive-the-internet.
Purcell, Kristen, Joanna Brenner, and Lee Rainie. 2012. "Search Engine Use 2012." Pew Research Center's Internet & American Life Project, March 9, 2012. http://pewinternet.org/~/media//Files/Reports/2012/PIP_Search_Engine_Use_2012.pdf.
Reich, Robert B. 2015a. "Big Tech Has Become Way Too Powerful." New York Times, September 18, 2015. http://www.nytimes.com/2015/09/20/opinion/is-big-tech-too-powerful-ask-google.html.
Reich, Robert B. 2015b. Saving Capitalism: For the Many, Not the Few. New York: Knopf.
Rozin, Paul, and Edward B. Royzman. 2001. "Negativity Bias, Negativity Dominance, and Contagion." Personality and Social Psychology Review 5, no. 4: 296–320. https://sites.sas.upenn.edu/rozin/files/negbias198pspr2001pap.pdf.
Scola, Nancy. 2016. "Eric Schmidt Warns about Rise of Misinformation in 2016 Campaign." Politico Pro, November 11, 2016. https://www.politicopro.com/technology/story/2016/11/eric-schmidt-warns-about-rise-of-misinformation-in-2016-campaign-137454. (Politico also reported the same quote here: http://www.politico.com/tipsheets/morning-tech/2016/11/election-postmortem-in-the-valley-217364. The actual statement can be viewed about six minutes into the video posted here: https://www.youtube.com/watch?v=TjnFOhwDAYM.)
Silverman, Craig. 2016. "This Analysis Shows How Viral Fake Election News Stories Outperformed Real News on Facebook." BuzzFeed, November 16, 2016. https://www.buzzfeed.com/craigsilverman/viral-fake-election-news-outperformed-real-news-on-facebook.
Silverstein, Craig, Hannes Marais, Monika Henzinger, and Michael Moricz. 1999. "Analysis of a Very Large Web Search Engine Query Log." Association for Computing Machinery Special Interest Group on Information Retrieval Forum 33, no. 1: 6–12. https://pdfs.semanticscholar.org/da9c/892c7655ba5ae556458e7833c3ee138ab669.pdf.
Skinner, B. F. 1971. Beyond Freedom and Dignity. New York: Alfred A. Knopf.
"Sneakers Script." 2017. Drew's Script-O-Rama. Accessed April 16, 2017. http://www.script-o-rama.com/movie_scripts/s/sneakers-script-transcript-robert-redford.html.
Spink, Amanda, Dietmar Wolfram, Major B. J. Jansen, and Tefko Saracevic. 2001. "Searching the Web: The Public and Their Queries." Journal of the Association for Information Science and Technology 52, no. 3: 226–34. http://dx.doi.org/10.1002/1097-4571(2000)9999:99993.0.CO;2-R.
Strauss, Steven. 2017. "Is It Time to Break Up the Big Tech Companies?" Los Angeles Times, June 30, 2017. http://www.latimes.com/opinion/op-ed/la-oe-strauss-digital-robber-barons-break-up-monopolies-20160630-snap-story.html.
Sullivan, Danny. 2014. "The Yahoo Directory—Once the Internet's Most Important Search Engine—Is to Close." SearchEngineLand.com, September 26, 2014. http://searchengineland.com/yahoo-directory-close-204370.
Sullivan, Danny. 2016. "Google Now Handles at Least 2 Trillion Searches Per Year." SearchEngineLand.com, May 24, 2016. http://searchengineland.com/google-now-handles-2-999-trillion-searches-per-year-250247.
Taplin, Jonathan. 2017a. Move Fast and Break Things: How Facebook, Google, and Amazon Cornered Culture and Undermined Democracy. New York: Little, Brown and Company.
Taplin, Jonathan. 2017b. "Is It Time to Break Up Google?" New York Times, April 22, 2017. https://www.nytimes.com/2017/04/22/opinion/sunday/is-it-time-to-break-up-google.html.
Toth, Steve. 2016. "Companies Acquired by Facebook." TechWyse, October 26, 2016. https://www.techwyse.com/blog/infographics/65-facebook-acquisitions-the-complete-list-infographic/.
Whittaker, James. 2012. "Why I Left Google." Microsoft Developer, March 13, 2012. https://blogs.msdn.microsoft.com/jw_on_tech/2012/03/13/why-i-left-google/.
Vaidhyanathan, Siva. 2011. The Googlization of Everything (And Why We Should Worry). Berkeley, CA: University of California Press.
van den Bosch, Antal, Toine Bogers, and Maurice de Kunder. 2016. "Estimating Search Engine Index Size Variability: A 9-Year Longitudinal Study." Scientometrics 107, no. 2: 839–56. http://dx.doi.org/10.1007/s11192-016-1863-z.
CHAPTER 13
I Vote For—How Search Informs Our Choice of Candidate NICHOL A S DIAKOPOULOS, DANIEL TRIELLI, JENNIFER STARK , AND SE AN MUSSENDEN
Search engines such as Google mediate a substantial amount of human attention, acting as algorithmic gatekeepers and curators as people seek and navigate news and information online. A 2014 survey found that 51% of American respondents got news in the last week via a search engine (Media Insight Project 2014), most likely with either Google or Bing, which together represent a practical duopoly, dominating about 86% of the US market for desktop search ("Rankings—comScore, Inc" 2017). Yet, because these are proprietary systems, little is known about search engine information biases and how they may be shaping the salience and quality of users' information exposure. This is of particular importance in considering how voters become informed during elections. The primary focus of this chapter is to begin to illuminate the multifaceted roles that Google's search technologies have played in algorithmic information curation in the 2016 US elections. The concern over the power of search engines in mediating access to information is not a new one. As early as 2000, Introna and Nissenbaum expounded on issues of accountability and transparency (Introna and Nissenbaum 2000), and these concerns have been echoed in more recent work (Laidlaw 2009; Granka 2010). Research on the potential for search engines to impact people has enumerated a number of possible repercussions, including the potential to affect attitudes (Knobloch-Westerwick, Johnson, and Westerwick 2015), to alter perceptions based on image
presentations (Kay, Matuszek, and Munson 2015), and to lead to anticompetitive tendencies by privileging preferred products or pages (Hazan 2013; Edelman 2010; Edelman and Lockwood 2011; Alam and Downey 2014). The fundamental reason why search rankings are so powerful is that the order of information has a substantial impact on human attention and reliance: not only do people click on top results more than lower results (Agichtein et al. 2006), but they also believe something is more relevant if it is in a higher position (Pan et al. 2007). In the media environment, search engines have the power to “grant visibility and certify meanings,” as Gillespie (2017) has written.

Because search engines act as gatekeepers to information, the way they are designed and wielded can yield subtle or overt power. For instance, the prevailing design value baked into (and optimized for in) Google search is that of “relevance,” but this may come at the cost of other possible design values such as fairness or information diversity (Van Couvering 2007). In China the state exercises its will to censor and block politically inexpedient information via the predominating nationalized search engine, Baidu (Jiang 2013, 2014).

In politics the most startling evidence for the power of search engines to impact election processes comes from Epstein and Robertson (2015). In their laboratory study, they showed that by manipulating the ordering of supporting information for candidates in a mock search engine, they could shift the voting preferences of undecided voters. Other work has explored differences between search engines in terms of the types of sources surfaced for election-related queries. Muddiman (2013) found that Google ranked campaign-controlled and advocacy pages higher than Yahoo, for instance. Research has shown that users search for candidate information during key campaign events such as debates and gaffes (Trevisan et al.
2016); however, the query strategies used to seek information about candidates remain under-researched.

Legal scholars have explored the complexities of applying regulation to search engines (Grimmelmann 2014; Bracha and Pasquale 2008) in order to quell their potential for abuse. One of the key barriers to regulating search engines in the US jurisdiction is that their output is currently interpreted by courts as free speech (Laidlaw 2009). While there may be other regulatory options for ensuring search engine accountability, such as setting up nonpublic limited-access courts (Bracha and Pasquale 2008), these only partially address the accountability of such systems due to their nonpublic nature. Given the challenges to the regulation of search engines, an alternative route to achieving algorithmic accountability (Diakopoulos 2015), and the approach we have taken here, is to audit their results by gathering data around specific candidate-related queries.
Yet there are a number of confounds to studying search engine results using an auditing approach. Factors such as the query terms used, language setting, geolocation, search history, and logged-in status, not to mention any randomization or A/B testing that the search engine may be undertaking, make it difficult to see the outputs of the system as stable (Ørmen 2015; Xing et al. 2014; Kliman-Silver et al. 2015). Moreover, the unavoidable constitution of the search engine as a sociotechnical system (Gillespie 2017) that embeds feedback loops with user inputs makes it difficult to disambiguate the role of the algorithm from the role of the human user. This is further compounded by the knowledge that different demographics and users may exhibit different types of information-seeking behavior (Weber and Jaimes 2011). The choice of query terms can itself lead to politically biased results, as can the “universe,” or input bias, of the pages indexed by the search engine. Teasing apart differences in results arising from query term selection, input bias, and actual algorithmic ranking bias is a methodologically challenging proposition (Magno et al. 2016; Kulshrestha et al. 2017). Taking these methodological challenges into account to the extent possible, this chapter details four distinct case studies/audits.
Each case illustrates how Google mediates candidate information in a different way: (1) Search Results: the set of sites ranked on the first page of search results, including their support for or opposition to the candidate; (2) Issue Guide: the presentation of an issue guide integrated into search results on candidates, constituted from algorithmically curated quotations from each candidate gleaned from news coverage; (3) In the News: the presentation of news information about each candidate as framed in the “In the News” section, which occupies privileged screen real estate at the top of search results; and (4) Visual Framing: the visual framing of candidates in search results as a consequence of the image selections that Google makes. The following subsections detail each of these case studies, examining why and how each facet of the search engine is important to consider with respect to how voters gather information about candidates. We then conclude the chapter by comparing and contrasting the cases, including their methodological challenges, and by elaborating on where additional work is needed to better understand how search engines inform voters.
CASE 1: SEARCH RESULTS
Previous research has indicated that “biased search rankings can shift the voting preferences of undecided voters by 20% or more” (Epstein and Robertson 2015). The impact of that bias may be felt more if users assume
that the results are neutral and, therefore, trustworthy. To begin to articulate such biases in a real search engine, a crowdsourced analysis was used to explore differences in how primary candidates in the 2016 US presidential election were presented by Google.
Methods
Search result links were collected on December 1, 2015, from nonpersonalized Google searches for each candidate’s complete name (i.e., “first-name last-name”). The top 10 ranked results for each of the 16 candidates (13 Republicans and 3 Democrats) were collected. The focus on the first page was grounded in knowledge of search users’ behavior: users end up clicking on the first 10 results 70% of the time (Loiz 2014). The determination of whether the linked webpages returned by the search were positive or negative for each candidate was crowdsourced using the Amazon Mechanical Turk (AMT) microtask market. Each link was presented to three separate AMT workers, who judged whether the linked webpage was favorable or oppositional toward the candidate. Several iterations of the instructions for workers were tested with pilot data. As a basis for that process, an initial sample of the websites was rated by the researchers and measured against the ratings of AMT respondents. Language and instructions for the task were adjusted until there was a good level of agreement between the researcher ratings and the respondents’ ratings; the intraclass correlation coefficient between the AMT ratings and the researchers’ baseline rating was 0.912. The final version of the AMT task included simple instructions on how to read the websites. The respondents were instructed to evaluate only the linked webpage, to consider only its primary content, and to decide whether the page favored or opposed a particular candidate. Workers were directed not to consider source reputation, but instead to focus on the content of the specific article.
We incorporated strategies into our task design to help ensure quality responses from crowdsource workers (Kittur, Chi, and Suh 2008) including asking respondents to answer a series of four questions: (1) “Please summarize the content of the website in one sentence”; (2) “To what extent does the linked webpage favor (candidate name)?”; (3) “To what extent does the linked webpage oppose (candidate name)?”; and (4) “Please explain the reasoning behind your ratings in a few sentences. Be sure to include relevant excerpts from the page that support your ratings.”
Questions 1 and 4, which were answered via free text, were used to control the quality of the information. Question 1 was designed so that respondents had to explicitly demonstrate that they had, in fact, read and understood the page. Question 4 was designed to ensure that respondents had fully considered the page; it also helped explain the respondent’s reasoning. Any response that did not include answers to Questions 1 and 4 was rejected, and another worker was asked instead. In Questions 2 and 3, each respondent rated the website on separate scales of “favorability” and “opposition” to a particular candidate, from 1 (“not at all”) to 5 (“a lot”). A single scale was not used because some websites were more complex than merely favoring or opposing; many included different angles and opinions on the same candidate. This was the case, for instance, for news articles that quoted both Democrats and Republicans about the same candidate, or that recounted both controversies and accomplishments by the same person. At the end of the crowdsourcing process, each page had three scores for favorability and three for opposition, which were averaged. The average opposition score was subtracted from the average favorability score to calculate an average net positivity score for each page. Those scores were then averaged by candidate and party.
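The scoring arithmetic described above can be sketched in a few lines of Python (a minimal illustration of the described calculation; the function and variable names are ours, not the study’s):

```python
def net_positivity(favor, oppose):
    """Average the three 1-5 worker ratings on each scale and subtract,
    yielding a score on a -4 (very opposed) to 4 (very favorable) range."""
    return sum(favor) / len(favor) - sum(oppose) / len(oppose)


def label(score, threshold=0.5):
    """Classify a page's average net positivity: scores within +/-0.5
    of zero are treated as neutral, as in the analysis."""
    if score > threshold:
        return "positive"
    if score < -threshold:
        return "negative"
    return "neutral"
```

For a page rated [5, 4, 5] on favorability and [1, 1, 2] on opposition, the net positivity is 10/3, or roughly 3.3, which falls in the positive band.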
Analyzing Net Positivity
The average net positivity score for all websites returned in the top 10 search results was 1.2 on a scale of −4 (very opposed) to 4 (very favorable). The Democrats had higher average scores (1.9) than the Republicans (1.0). The candidate with the highest average was Democrat Bernie Sanders, with 3.2; the lowest was Republican George Pataki, with 0.3. The analysis included not only the average net positivity score but also the total number of positive and negative webpages in each candidate’s results. Any result with an average net positivity score between −0.5 and 0.5 was considered neutral; below and above those thresholds, results were considered negative and positive, respectively. Democrats had, on average, 7 positive search results in the top 10, whereas GOP candidates had an average of 5.9. Sanders was also the only candidate with no negative pages surfaced in his first 10 web results; the average candidate had 2.1. Sanders had 9 positive websites compared to the average of 6.1, and one neutral, whereas the average number of neutral results per candidate was 1.8. Republican Ted Cruz had the most negative results, with five, compared with five positive results.
The results for candidates tended to decline in positivity moving down the rankings on the first page. On average, the net positivity score of the first result was 2.2; by the 10th-ranked result, the average score dropped to 0.9 (Figure 13.1). Looking more closely at the individual results, the negative websites tended to appear at the bottom of the first page of Google results. Campaign websites, which included candidate webpages and social media profiles, were clustered at the top. The negative websites tended to be news articles that were mostly critical of the candidates. Google presented a higher proportion of negative news articles about Republican candidates than about their Democratic counterparts. Democrats tended to have more official sites, social platforms, and, to a lesser degree, more positive coverage on the first page. Six of the top 10 results for Bernie Sanders were run by the campaign: a Facebook profile, a Twitter profile, a YouTube account, and three campaign website pages. The other links were to a Wikipedia page and three news stories, all favorable to Sanders. Ted Cruz, on the other hand, had only two websites and a Facebook profile; the other seven links were news articles, and for Cruz, those were mostly negative. The way Google ranks and selects websites, therefore, opens the way for digitally savvy campaigns to take hold of the top 10 results. Official sources of information appear to be privileged and come to dominate the most coveted and attention-getting positions on the results page.
Figure 13.1: The decay in the average positivity score on the first page of web results for presidential candidates in the US presidential election of 2016. Source: Original chart produced by Daniel Trielli.
CASE 2: ISSUE GUIDE
On February 1, 2016, Google deployed a new feature for users searching for US political information regarding the presidential election candidates (Schonberg 2016). Whenever a user searched the name of a candidate in the presidential primaries taking place at the time, the results page displayed a list of 16 political issues (later expanded to 17), each containing several position statements by the candidate. These statements were sourced from news articles that were linked alongside the quotes; according to our contact with Google public relations, the guide would “only show issue topics from candidates when they have submitted a statement or if there is a sufficient amount of news articles that include relevant quotes from them that our algorithm has found” (see Figure 13.2). Google’s ability to edit together and inject information that it deems relevant to candidates is another instance of how its dominance may affect the public’s ability to get fair and balanced information. This case study was motivated by a need to understand how the design of the new feature might bias how people perceive the candidates. The analysis presented relies on data collected by observing the information box constituting the issue guide. The data collected includes the quotes by the candidate, the name of the candidate, the broad issue or topic the quote refers to, the rank position of the quote within its topic, the link to the article from which the quote was extracted for the infobox, the website that is the source for that article, and the date the article was published. Data was collected three times, at three-week intervals: on April 1, April 22, and May 13, 2016. In the first two collections, the tool presented quotes from five candidates: Democrats Hillary Clinton and Bernie Sanders, and Republicans Ted Cruz, John Kasich, and Donald Trump.
By the time the third collection was performed, both Kasich and Cruz had been removed from the list, since they had withdrawn from the campaign.
Quantifying Statements
The initial analysis involved counting the number of quotes by each candidate presented in the information box and then looking at the change across the data collections. The first discovery was that the number of quotes increased over time. No quotes were removed between measurements. New quotes were added as the candidates made new statements
Figure 13.2: The Google campaign issue guide showing positions for Donald Trump, collected June 2, 2016. Source: Screenshot of webpage taken and cropped by authors.
that were covered by new articles. This increase over time happened for all candidates. However, over time a disparity between the candidates’ totals was detected. On average, candidates had 191 statements listed in the tool on April 1. But after separating by party, the Democratic candidates had an average of 258, while the Republicans had an average of 147. This continued through the second collection (average of 287 statements for Democrats versus 166 for Republicans) and the third, after two Republicans dropped out (300 and 160). In this final collection, Hillary Clinton had more statements than Donald Trump in 13 of the 17 topics; Donald Trump had more statements in three topics; and there was an equal number of statements in one topic.
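The tallying above amounts to grouped counting; a minimal sketch, assuming each extracted quote is recorded with (hypothetical) “candidate” and “party” fields:

```python
from collections import Counter, defaultdict


def statement_totals(quotes):
    """Count issue-guide statements per candidate and average them per party.
    `quotes` is a list of dicts with 'candidate' and 'party' keys."""
    per_candidate = Counter(q["candidate"] for q in quotes)
    members = defaultdict(set)   # party -> set of candidates
    party_counts = Counter()     # party -> total statements
    for q in quotes:
        members[q["party"]].add(q["candidate"])
        party_counts[q["party"]] += 1
    per_party_avg = {p: party_counts[p] / len(members[p]) for p in party_counts}
    return per_candidate, per_party_avg
```

Running this over each of the three collections would reproduce the per-candidate and per-party averages reported above.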
The challenge, then, was to try to determine whether that disparity could be the result of an imbalanced input of news articles that preceded the aggregation by the information box. If the volume of coverage of the candidates was divergent, there would be fewer articles about some candidates, which would lead to fewer statements extracted and presented. Fewer news articles about Donald Trump would, therefore, indicate bias in the coverage indexed, not necessarily the Google information box curation algorithm. However, searches for each of the three remaining candidates conducted on Google News showed the opposite. In May of 2016, Google News had 941 indexed articles about Trump, 710 about Clinton, and 630 about Sanders. Data from LexisNexis corroborated that Trump was the focus of a larger number of news stories during the election (“U.S. Presidential Campaign Tracker” 2016).
News Sources
We also considered the distribution of sources for the articles from which statements were extracted, in order to determine whether a source or group of sources was dominant in the tool. In the May 13 collection, 326 different news organizations served as the sources for the statements of the candidates, and some sources had more articles represented than others. Out of a total of 735 statements from the three remaining candidates, 63% came from sources that, individually, had a small participation (1 to 5 articles listed in the information box). Another 24% came from sources in the middle tier (6 to 20 statements). And 14% of the statements came from just four news sources, that is, 1.2% of the 326 sources: The Hill, Politico, The Washington Post, and The Huffington Post. Additionally, Google itself was the source for 17 statements each by Bernie Sanders and Hillary Clinton. Unlike Donald Trump, Sanders and Clinton decided to use a specific Google-based solicitation for self-presentation on issues, and those statements were also included in the information box. This difference in how the candidates provided (or did not provide) information accounts for only about 12% of the disparity in totals described in the previous section. The next step was to determine whether there was an ideological imbalance in the sources from which the statements came. A database of ideological leanings of websites from prior research (Bakshy, Messing, and Adamic 2015) was used to determine whether there was a political bias in the news sources used in the information box. The database scores websites from −1 to 1, the negative values meaning they are preferred (by Facebook shares
and likes) by users who are left-wing and positive values meaning they are preferred by users who are right-wing. Applying the bias scores from the study to the 83 overlapping sources used by the Google issues guide yielded an average bias score of −0.16, meaning that, overall, there is a slight liberal bias in the news sources. However, those 83 sources cover only 50% of all the articles in the information box. When considering only the top 10 sources for the Issues tool, all of which were covered by the political bias database, the average bias score was more liberal, at −0.31.
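The matching-and-averaging step can be sketched as follows (a minimal illustration with hypothetical data structures; scores follow the Bakshy, Messing, and Adamic convention of −1 for left-leaning to 1 for right-leaning):

```python
def average_bias(article_sources, bias_scores):
    """Average the ideological-alignment score over articles whose source
    appears in the bias database, and report the fraction of articles covered.
    `article_sources` lists one source name per article;
    `bias_scores` maps source name -> score in [-1, 1]."""
    scored = [bias_scores[s] for s in article_sources if s in bias_scores]
    avg = sum(scored) / len(scored)
    coverage = len(scored) / len(article_sources)
    return avg, coverage
```

The coverage fraction matters because, as noted above, the database covered only about half of the articles, so the average describes only the matched subset.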
Editorial Choices
Editorial choices for the Google information box were also analyzed. The first element in that inquiry was the order in which the topics were listed. Previous research has indicated that items listed higher on the page will get more attention from searchers (Agichtein et al. 2006). In the first data collection, the order of issues was: Immigration, Abortion, Guns, Foreign Policy, Taxes, Health Care, Economy and Jobs, Civil Liberties, Crime and Safety, Environment, Education, Budget and Spending, National Security, Medicare and Social Security, Veterans, and Energy. In the following data collections, it was mostly unchanged, except for the addition of Gay Marriage, in the sixth position between Taxes and Health Care. It is also noteworthy that when the results page was initially presented to the user, the list was trimmed and expandable; only the top four topics were displayed, lending even more significance to the prioritization of the first topics.

The order of topics is not alphabetical, indicating some other prioritization of the issues. The ordering was compared with polls conducted by Gallup (Newport 2016) and the Pew Research Center (Doherty, Kiley, and Jameson 2015), and with Google Trends, to see whether it was determined by public interest or search trends, but no correlation was found. Additionally, the addition of Gay Marriage indicates a human editing process, for two reasons: first, the statements shown in this category were previously listed under “Civil Liberties”; and second, Google Trends showed no spike in searches related to same-sex marriage that could explain a surge in the electorate’s interest.

In conclusion, a variety of strategies used in this inquiry surfaced the presence of biases in the Google information box with respect to the representation of candidates in their proportion of statements, the dominance of sources, the political bias of sources, and editorial choices.
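Comparing the topic ordering against poll rankings, as described above, is a rank-correlation check; a minimal Spearman implementation is sketched below (a generic illustration assuming tie-free rankings, not the authors’ actual test):

```python
def spearman(xs, ys):
    """Spearman rank correlation for tie-free sequences:
    rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1))."""
    def ranks(values):
        order = sorted(range(len(values)), key=lambda i: values[i])
        r = [0] * len(values)
        for rank, idx in enumerate(order, start=1):
            r[idx] = rank
        return r

    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))
```

A coefficient near 1 would have indicated that the issue order tracked poll-measured public interest; a value near 0 is consistent with the “no correlation” finding.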
Still, the absence of algorithmic transparency (Diakopoulos and Koliska 2016) regarding how
the tool was designed and how it curated information limited our ability to establish why we observed these patterns.
CASE 3: IN THE NEWS
At the top of the result page for searches about newsworthy topics during the 2016 US elections, Google featured a box with related news articles labeled “In the News” (relabeled “Top Stories” in November 2016, after the election). Because of its prominence, the Google “In the News” box had the potential to lead users to particular news sources, and to direct information seekers to particular narratives and frames around candidates, thus shading their impressions. With so much potential for guiding attention, questions about the ideological diversity and news value of the articles “In the News” become critical and motivate the current case study (Figure 13.3). For this analysis, article links were collected from Google using a nonpersonalized browser every hour between May 31 and July 8, 2016, resulting in a total of 5,604 links. Along with the URL for each article, metadata was collected, including the name of the source for the article (usually a news organization), the text of the link (usually the title of the article), the string of text that shows how long ago the story was posted or updated (“X hours ago”), and the exact time the link was scraped. The 5,604 links were nonunique, meaning that multiple instances could point to the same article. In total, 972 unique articles were collected.
Figure 13.3: Google “In the news” box for Donald Trump, collected October 17, 2016. Source: Screenshot of webpage taken and cropped by authors.
Privileged Sources
To determine whether particular sources dominated the Google “In the News” box, the total number of links collected (including repetition) from each website (i.e., top-level domain) was measured against the total number of links displayed. Out of the 113 news sources, 60 had articles linked nine or fewer times during the time frame of collection. Another 35 sources had between 10 and 50 links displayed in the Google “In the News” box. Sixteen sources had between 50 and 500 links showcased. The remaining two sources, CNN and the New York Times, combined account for 2,476 (44.2%) of the links listed: of the 5,604 links in the Google “In the News” box, 1,276 came from CNN and 1,200 originated from the New York Times domain. For example, between June 2 and June 6, an article titled “Hillary Clinton’s Evisceration of Donald Trump” appeared in 90 different hourly measurements, indicating strong staying power and emphasis in Google’s results. However, the sources that are most frequent in the Google “In the News” box do not rely only on single articles repeated multiple times. In fact, when only unique links are considered, CNN and the New York Times still have high prevalence in the ranking. Out of the 972 unique articles displayed, 152 belonged to the New York Times and 142 to CNN, meaning that, combined, they amounted to 30% of the unique articles. In third place, NBCNews.com had 69 articles, or 7% of the total unique links. The Google “In the News” box also prioritizes the three links that it presents. Three links are always displayed in the tool, but the first in the list has more prominence due to the additional information displayed: all of the first-listed links include a short summary, or snippet, of the article, and 99.4% of them also include an image. The distribution of sources in those first-listed links was also measured, to investigate whether some sources were dominant in the premium spot as well.
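Concentration measurements of this kind reduce to counting links per domain; a minimal sketch:

```python
from collections import Counter
from urllib.parse import urlparse


def domain_shares(links):
    """Count collected links per domain and return each domain's
    share of the total, most frequent first."""
    counts = Counter(urlparse(u).netloc for u in links)
    total = sum(counts.values())
    return [(dom, n, n / total) for dom, n in counts.most_common()]
```

The same routine works for the full set of repeated links, the deduplicated set, or only the first-listed links, which is how the three concentration figures above differ.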
Again, CNN and the New York Times dominate the first-listed link, with 1,211 out of 1,868, or 64.8%, coming from these two sources. This is a startling concentration of attention oriented toward only two publishers of information. When evaluated on a weekly basis, some variation in the ranking of the sources was detected. The clearest example is the Washington Post, which entered the top five sources of links for the week of June 12 to June 18, when it rose to third place. One hypothesis is that the Washington Post had a temporarily higher profile than usual, since it became the focus of a controversy concerning the presidential election coverage. The newspaper’s rise in the Google “In the News” box rankings coincided with the then-candidate
Donald Trump criticizing the Post on Twitter and revoking its credentials for campaign events. However, more research is necessary to conclude that becoming the focus of news coverage could itself propel a news organization to higher prominence in the box.
Freshness
Using the metadata relating to time, we found that the “freshness” of articles also appears to be a relevant factor for prevalence in the “In the News” box. When all articles are considered, 30.5% of links are marked as being less than three hours old. A smaller proportion, 4.8%, of the links listed were more than one day old, and these occurred mostly on weekends. Additional research is needed to determine the specifics of how Google’s algorithms consider article timestamps and update times. For example, the article “Hillary Clinton’s Evisceration of Donald Trump” may have had staying power because it was continually updated and thus considered “new” by Google on the basis of its refreshed timestamp.
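Before aggregating, the relative-time strings can be normalized to ages in hours; a sketch (the label format is an assumption based on the “X hours ago” strings described above):

```python
import re


def age_in_hours(label):
    """Parse a relative-time label such as '2 hours ago' into hours;
    returns None if the label does not match the expected pattern."""
    m = re.match(r"(\d+)\s+(minute|hour|day)s?\s+ago", label)
    if not m:
        return None
    n, unit = int(m.group(1)), m.group(2)
    return {"minute": n / 60, "hour": float(n), "day": n * 24.0}[unit]
```

With ages in hand, the share of links under three hours old (or over one day old) is a simple filter-and-divide over the collected metadata.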
Non-News Organizations
Not all linked articles came from news organizations. Of the 5,604 total links, 214 came from Twitter (the official accounts of Hillary Clinton, Donald Trump, and Chelsea Clinton were the originators of those links), 58 from YouTube, and 24 from the Federal Bureau of Investigation. Also included in the list of sources are websites that are controversial sources of news, such as Breitbart News, Infowars, and Russia Today. It is important to note that the Google “In the News” box shifted from a strict news organization background to a more open aggregator of news-related links in 2014. Until that year, the box was called “news for . . .”; then it changed to “in the news.” That was the year in which users started noticing non-news links appearing in the box, such as press releases from companies and content from Reddit and YouTube. The results presented earlier can lead to different threads of inquiry. The analysis sheds light on the disparity in attention that some sources of news achieve via the box, but it remains unclear why such disparities came about. There is a clear predilection toward fresher and newer content, privileging the daily news cycle, or at least the appearance of it via updated publication dates. The inclusion of non-news links is a complicating factor, raising questions about Google’s very definition of a news source.
CASE 4: VISUAL FRAMING
The first three case studies focus on textual results from search engines, such as links to articles and websites. However, Google also presents images on the main results page. Google’s image box is located in the top right section of the main results page and typically contains five to seven images. The box draws attention to specific images of the candidates, and to the articles or information linked via these images. In this way, Google may contribute to shaping users’ attitudes and perceptions by visually framing candidates in particular ways. Previous work suggests that the visual portrayal of candidates in election campaigns may have an impact on electoral outcomes. For instance, the competence of a candidate is inferred from attractiveness (Todorov et al. 2005), and beauty itself is positively associated with votes (Berggren, Jordahl, and Poutvaara 2010). Furthermore, people are unlikely to change their candidate pick even after they are provided additional information regarding candidates’ competencies (Todorov et al. 2005). In this case we examine how Hillary Clinton and Donald Trump were framed in the images surfaced on search results pages by Google, including analyses of emotional and gestural content, the sources of images and their political leanings, and image rank positions. Images, their sources, and their ranking positions (i.e., first, second, etc.) were collected for the queries “Hillary Clinton” and “Donald Trump” once per day from September 3 until October 28, 2016. As a baseline, we also collected images from Google Image Search for each candidate over the same time period, resulting in 353 images of Clinton and 298 of Trump after removing images that did not contain the candidate or that contained multiple faces. In our sampling period there were nine unique images from the image box for Clinton, and 11 for Trump.
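Counting unique images requires deduplicating near-identical pictures. As a generic illustration of the difference-hash technique used for this (not the exact library implementation), a pure-Python sketch over an already reduced grayscale grid, assuming the 72 pixels are laid out as 8 rows of 9 columns:

```python
def dhash(pixels):
    """Compute a 64-bit difference hash from an 8x9 grayscale grid:
    each bit records whether a pixel is darker than its right neighbor."""
    bits = []
    for row in pixels:                         # 8 rows
        for left, right in zip(row, row[1:]):  # 8 comparisons per row
            bits.append("1" if left < right else "0")
    return "%016x" % int("".join(bits), 2)     # 64 bits -> 16 hex chars
```

Two downloads of the same image then yield identical hashes (or hashes within a small Hamming distance, if slightly recompressed), so distinct hashes identify unique images.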
Unique images were identified using a difference hash algorithm (Buchner 2017), which converts the images to grayscale, reduces them to 72 pixels, and then computes the difference between neighboring pixels, producing an alphanumeric hash unique to each image. To analyze the emotional content of the images, we used an API (Microsoft Azure 2017) that provided a confidence score for each of eight emotions: anger, contempt, disgust, fear, happiness, neutral, sadness, and surprise. Clinton was deemed happy in seven images, neutral in one, and surprised in another, whereas Trump was happy in three images, neutral in seven, and angry in one. This distribution of emotions was somewhat different from that observed in the Google Image search baseline images, where Clinton exhibited a neutral expression in the majority of
images. In terms of gestural content, Clinton was gesturing in only one of nine image box images (11%) versus 40% of baseline images, while Trump was gesturing in four of 11 image box images (36%) versus 57% of baseline images (Figure 13.4). Altogether, these images frame the candidates in a manner typical of representation of women and men in news media (Rodgers, Kenix, and Thorson 2007; Kwak and An 2016), with Clinton portrayed as happy and more passive (fewer gestures), and Trump depicted as serious, and more energetic (more gestures). Energetic gesturing is associated with dominance and power (Burgoon and Dunbar 2006), and effective leadership is typically associated with masculine traits including dominance (Koenig et al. 2011). Therefore, these images may reinforce the gender stereotype that men are strong leaders, while women, who display characteristics incongruent with typical leadership traits, are ineffective. Because images are hyperlinked, Google’s choice of images also serves to direct attention to specific sources of those images. Research shows that people show more interest in unconventional news photos when presented outside the context of the article (Mendelson 2001). Such atypical images in Google’s image box would be particularly powerful in directing attention to the related articles as they are also presented outside the context of the source article. This could have an impact on voters, as outlets may portray candidates differently, and may impart partisan bias within the images that are published. Sources for Clinton’s baseline images were mostly left-leaning, but left-leaning sources of image box images were even
Figure 13.4: Percentage of Clinton and Trump images from the image box or from Google Images (Baseline) that show listed emotions. Source: Original chart produced by Jennifer Stark.
more highly represented (Stark and Diakopoulos 2017). While sources of Trump’s baseline and image box images were also mostly left-leaning, centrist sources were reduced and right-leaning sources were augmented in the image box compared with the baseline (Figure 13.5). In terms of preferential position of the images within the image box, Google+ was always in the first ranking position, which affords the largest image, meaning that while Google privileges its own social media platform, the campaigns themselves, knowing this, can control the visual in the most dominant position. Clinton’s Wikipedia image was in second position, suggesting it is second in importance. Trump’s Wikipedia page was second also, but after several edits to the Wikipedia page’s main picture, this image was dropped altogether. The remaining rank positions of images for both candidates were less stable. Notably for Clinton, many images changed rank between the 3rd and 24th of September. During this time Clinton was reported as suffering from pneumonia, apologized for calling Trump supporters “deplorables,” and suggested Russia was intervening in the election process (Figure 13.6). The news cycle does appear to impart some variability in the visual portrayal of the candidates after the top image or two, but does not seem to be the only driver, given that the primary rank positions were independent of the news. Altogether, Google’s images are selected from sources across the ideological spectrum, and prioritize images from its own social media site Google+. There are several limitations with this case study. Our baseline consists of images collected from Google Images search results. Although we
Figure 13.5: Percentage of sources for Clinton and Trump images from the image box or Google Images baseline identified from each listed ideology. Source: Original chart produced by Jennifer Stark.
Figure 13.6: Image box rank positions of Clinton across time. Source: Original illustration produced by Jennifer Stark.
expected this baseline to represent a universe of Google images from which Google selects images for the image box, not all images found in the image box were also in the baseline. Alternative baselines may be more representative of all images that exist for each candidate, for example, using a longer search time period, or images collected from additional search engines like Bing and Yahoo! Moreover, the bias of photographers and editors in selecting images that feed indices like Google Images is not addressed. We also relied on computer vision algorithms for emotional content information, which in turn are trained on data with its own unspecified sampling bias. Open questions include how the images are selected for the image box, and the role that algorithms or human editors may play in those selections as well as whether source bias would be the same for a different time period, or for politicians in different countries. Our results suggest that Google’s image results reflect visual gender differences with respect to leadership roles present in society: that women are happy and passive and men are serious and active. An open question is whether such gendered visual frames, from search engines in particular, can impact perception of presidential competency.
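The de-duplication step used in this case study, difference hashing, is simple enough to sketch directly. Below is a minimal pure-Python version in the spirit of the ImageHash library cited above (Buchner 2017); it operates on a 2D grayscale array rather than an image file, and the function and parameter names are ours, not the library’s:

```python
def dhash(pixels, hash_size=8):
    """Difference hash: downscale a grayscale image to a (hash_size+1) x hash_size
    grid (9 x 8 = 72 pixels by default), compare each pixel with its right-hand
    neighbor, and pack the resulting bits into a hex string."""
    h, w = len(pixels), len(pixels[0])
    tw, th = hash_size + 1, hash_size
    # nearest-neighbor downscale to the small grid
    small = [[pixels[r * h // th][c * w // tw] for c in range(tw)] for r in range(th)]
    bits = 0
    for row in small:
        for left, right in zip(row, row[1:]):
            bits = (bits << 1) | (1 if left > right else 0)
    # 64 bits -> 16 hex characters for hash_size=8
    return format(bits, '0{}x'.format(hash_size * hash_size // 4))
```

Because only the relative brightness of neighboring pixels matters, identical or near-identical images map to the same hash regardless of uniform brightness changes, which is what makes this useful for identifying unique images.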
DISCUSSION
Through a series of case studies this chapter has begun to characterize the range of influences that Google’s search technologies were having on information curation during the 2016 US elections. Results clearly show the myriad editorial choices that Google’s algorithms make in shaping the information environment, including a focus on official sources in the main rankings, ordering in the issue guide that may have unduly privileged
certain issues, the dominance of a small set of news sources highlighted in “In the News,” and differences in the visual framing of a female versus a male candidate. These results constitute a set of observations that raise important questions for future work, and suggest political conversations that need to be undertaken about the editorial role of search engines in political life: What should be the responsibilities of Google as an important intermediary of information for voters? Studying search engine results is a methodologically challenging undertaking. Though we were able to observe several instances of how results may shift attention toward candidates, we lacked any form of scientific control or the ability to run experiments, and found it difficult to explain many of the result patterns that we saw. Still, we believe that it is a valuable public interest endeavor to report on the observed patterns of information that dominant entities like Google mediate, given that they can have a substantial effect on exposure to political information and bias. For the Issue Guide case we found it valuable to consider the dynamics of the results from Google, and we believe this should be an important aspect that informs the methodologies of algorithm audits of search engines in future work. While taking individual snapshots of data allows for concrete analyses, the dynamics of how the results change over time (and how the search giant may be responding to public pressure) are important to track for accountability purposes. Writing “stories” about results may be augmented by building dashboards to track and reflect search results over time. In two of the cases, for the Issue Guide and for Visual Framing, the concept of defining baselines emerged. Defining an appropriate baseline dictates what the expectation for an algorithm should be, thus informing what is perceived as interesting and newsworthy for an audit. But what is the “right” baseline?
In our work we made logical arguments about the expected input data sets that Google’s algorithms would use to curate quotes or images; however, other reasonable baselines could be considered. Some may consider results less compelling if the baseline does not come from a sample independent of the platform under study. We believe that the public impact of these types of audits hinges on making a strong and well-reasoned claim for the expected information that a search engine should provide, and then showing whether the search engine meets or violates that expectation. Additional algorithmic transparency information could inform the baseline definition process, and more generally may be considered by regulators exploring how targeted transparency policies could balance the dominance of information intermediaries. Several of the case studies presented focused on characterizing the sources that Google surfaces in results. We think diversity is an important
frame for considering the information that voters have about candidates. For instance, although Google predominantly operates on a logic of relevance, the idea of information diversity could help to mitigate issues of political polarization that challenge society. In these cases we have considered the ideological diversity present in sources; however, other definitions of diversity are also possible, such as ownership diversity (i.e., who owns a particular source) and even topical diversity (i.e., the topical frames that are apparent in information surfaced by search engines). Future work might be usefully oriented toward building community resources that can reliably tag sources according to these dimensions of diversity so that audits can easily incorporate various diversity measures.
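Once sources are tagged along such a dimension, a diversity score for a result set follows mechanically. As a toy illustration (our own sketch, not a measure used in this chapter), ideological diversity could be quantified as the normalized Shannon entropy of the source labels:

```python
from collections import Counter
from math import log2

def ideological_diversity(labels):
    """Normalized Shannon entropy over source ideology labels:
    0.0 = every source carries the same label,
    1.0 = labels are spread evenly across the observed categories."""
    counts = Counter(labels)
    n = len(labels)
    if len(counts) < 2:
        return 0.0  # one label (or no sources): no diversity
    entropy = -sum((c / n) * log2(c / n) for c in counts.values())
    return entropy / log2(len(counts))
```

The same function applies unchanged to ownership or topical labels, which is the appeal of building shared tagging resources: the measurement layer stays trivial while the labeling layer does the real work.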
ACKNOWLEDGMENTS
This work has been supported by grants from the Tow Center for Digital Journalism and from the VisMedia project at the University of Bergen. We have also benefited from discussions with Professor Susan Moeller at the University of Maryland, and from the feedback we received from Torie Bosch, who edited the first three case studies when they appeared as articles published in Slate.
REFERENCES
Agichtein, Eugene, Eric Brill, Susan Dumais, and Robert Ragno. 2006. “Learning User Interaction Models for Predicting Web Search Result Preferences.” In Proceedings of the 29th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, 3. doi:10.1145/1148170.1148175.
Alam, Mohammed A., and Doug Downey. 2014. “Analyzing the Content Emphasis of Web Search Engines.” In Proceedings of the 37th International ACM SIGIR Conference on Research and Development in Information Retrieval, 1083–86. doi:10.1145/2600428.2609515.
Bakshy, Eytan, Solomon Messing, and Lada A. Adamic. 2015. “Exposure to Ideologically Diverse News and Opinion on Facebook.” Science 348, no. 6239: 1130–32. doi:10.1126/science.aaa1160.
Berggren, Niclas, Henrik Jordahl, and Panu Poutvaara. 2010. “The Looks of a Winner: Beauty and Electoral Success.” Journal of Public Economics 94, no. 1–2: 8–15. doi:10.1016/j.jpubeco.2009.11.002.
Bracha, Oren, and Frank Pasquale. 2008. “Federal Search Commission? Access, Fairness, and Accountability in the Law of Search.” Cornell Law Review 93, no. 1149: 1149–210. doi:10.2139/ssrn.1002453.
Buchner, Johannes. 2017. “ImageHash 3.4.” Python Software Foundation. https://pypi.python.org/pypi/ImageHash.
Burgoon, Judee K., and Norah E. Dunbar. 2006. “Nonverbal Expressions of Dominance and Power in Human Relationships.” In The SAGE Handbook of Nonverbal Communication, edited by Valerie Manusov and Miles L. Patterson, 279–78. Thousand Oaks, CA: SAGE. doi:10.4135/9781412976152.n15.
Diakopoulos, Nicholas. 2015. “Algorithmic Accountability: Journalistic Investigation of Computational Power Structures.” Digital Journalism 3, no. 3: 398–415. doi:10.1080/21670811.2014.976411.
Diakopoulos, Nicholas, and Michael Koliska. 2016. “Algorithmic Transparency in the News Media.” Digital Journalism, forthcoming. doi:10.1080/21670811.2016.1208053.
Doherty, Carroll, Jocelyn Kiley, and Bridget Jameson. 2015. “Contrasting Partisan Perspectives on Campaign 2016.” http://www.people-press.org/2015/10/02/contrasting-partisan-perspectives-on-campaign-2016/#.
Edelman, Benjamin. 2010. “Hard-Coding Bias in Google ‘Algorithmic’ Search Results.” http://www.benedelman.org/hardcoding/.
Edelman, Benjamin, and Benjamin Lockwood. 2011. “Measuring Bias in ‘Organic’ Web Search.” http://www.benedelman.org/searchbias/.
Epstein, Robert, and Ronald E. Robertson. 2015. “The Search Engine Manipulation Effect (SEME) and Its Possible Impact on the Outcomes of Elections.” Proceedings of the National Academy of Sciences of the United States of America 112, no. 33: E4512–E4521. doi:10.1073/pnas.1419828112.
Gillespie, Tarleton. 2017. “Algorithmically Recognizable: Santorum’s Google Problem, and Google’s Santorum Problem.” Information, Communication and Society 20, no. 1: 63–80. doi:10.1080/1369118X.2016.1199721.
Granka, Laura A. 2010. “The Politics of Search: A Decade Retrospective.” Information Society 26, no. 5: 364–74. doi:10.1080/01972243.2010.511560.
Grimmelmann, James. 2014. “Speech Engines.” Minnesota Law Review 98: 868–952.
Hazan, Joshua G. 2013. “Stop Being Evil: A Proposal for Unbiased Google Search.” Michigan Law Review 111, no. 5: 789–820.
Introna, Lucas D., and Helen Nissenbaum. 2000. “Shaping the Web: Why the Politics of Search Engines Matters.” Information Society 16, no. 3: 169–85. doi:10.1080/01972240050133634.
Jiang, Min. 2013. “The Business and Politics of Search Engines: A Comparative Study of Baidu and Google’s Search Results of Internet Events in China.” New Media and Society 16, no. 2: 212–33. doi:10.1177/1461444813481196.
Jiang, Min. 2014. “Search Concentration, Bias, and Parochialism: A Comparative Study of Google, Baidu, and Jike’s Search Results From China.” Journal of Communication 64, no. 6: 1088–110. doi:10.1111/jcom.12126.
Kay, Matthew, Cynthia Matuszek, and Sean A. Munson. 2015. “Unequal Representation and Gender Stereotypes in Image Search Results for Occupations.” In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems—CHI ’15, 3819–28. doi:10.1145/2702123.2702520.
Kittur, A., E. H. Chi, and B. Suh. 2008. “Crowdsourcing User Studies with Mechanical Turk.” In Proceeding of the Twenty-Sixth Annual SIGCHI Conference on Human Factors in Computing Systems, 453–56. http://dl.acm.org/citation.cfm?id=1357127.
Kliman-Silver, C., A. Hannak, D. Lazer, C. Wilson, and A. Mislove. 2015. “Location, Location, Location: The Impact of Geolocation on Web Search Personalization.” In Proceedings of the 2015 Internet Measurement Conference, 121–27. doi:10.1145/2815675.2815714.
Knobloch-Westerwick, Silvia, Benjamin K. Johnson, and Axel Westerwick. 2015. “Confirmation Bias in Online Searches: Impacts of Selective Exposure before an Election on Political Attitude Strength and Shifts.” Journal of Computer-Mediated Communication 20, no. 2: 171–87. doi:10.1111/jcc4.12105.
Koenig, Anne M., Alice H. Eagly, Abigail A. Mitchell, and Tiina Ristikari. 2011. “Are Leader Stereotypes Masculine? A Meta-Analysis of Three Research Paradigms.” Psychological Bulletin 137, no. 4: 616–42. doi:10.1037/a0023557.
Kulshrestha, Juhi, Motahhare Eslami, Johnnatan Messias, Muhammad Bilal Zafar, Saptarshi Ghosh, Krishna P. Gummadi, and Karrie Karahalios. 2017. “Quantifying Search Bias: Investigating Sources of Bias for Political Searches in Social Media.” In Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing (CSCW ’17), 417–32. doi:10.1145/2998181.2998321.
Kwak, Haewoon, and Jisun An. 2016. “Revealing the Hidden Patterns of News Photos: Analysis of Millions of News Photos Using GDELT and Deep Learning-Based Vision APIs.” The Workshops of the Tenth International AAAI Conference on Web and Social Media, News and Public Opinion: Technical Report WS-16-18.
Laidlaw, Emily B. 2009. “Private Power, Public Interest: An Examination of Search Engine Accountability.” International Journal of Law and Information Technology 17, no. 1: 113–45. doi:10.1093/ijlit/ean018.
Loiz, Dana. 2014. “How Many Clicks Does Each SERP Get?—Google Organic Click-Through Rate Study 2014.” Advanced Web Ranking Blog. https://www.advancedwebranking.com/blog/google-organic-click-through-rates-2014/.
Magno, Gabriel, Camila Souza Araújo, and Virgilio Almeida. 2016. “Stereotypes in Search Engine Results: Understanding the Role of Local and Global Factors.” Workshop on Data and Algorithmic Transparency (DAT’16). http://datworkshop.org/papers/dat16-final35.pdf.
Media Insight Project. 2014. “The Personal News Cycle: How Americans Choose to Get Their News.” https://www.americanpressinstitute.org/publications/reports/survey-research/how-americans-get-news/.
Mendelson, Andrew. 2001. “Effects of Novelty in News Photographs on Attention and Memory.” Media Psychology 3, no. 2: 119–57. doi:10.1207/S1532785XMEP0302.
Microsoft Azure. 2017. “Emotion API.” https://azure.microsoft.com/en-gb/services/cognitive-services/emotion/.
Muddiman, Ashley. 2013. “Searching for the Next U.S. President: Differences in Search Engine Results for the 2008 U.S. Presidential Candidates.” Journal of Information Technology and Politics 10, no. 2: 138–57. doi:10.1080/19331681.2012.707440.
Newport, Frank. 2016. “Democrats, Republicans Agree on Four Top Issues for Campaign.” http://www.gallup.com/poll/188918/democrats-republicans-agree-four-top-issues-campaign.aspx.
Ørmen, Jacob. 2015. “Googling the News: Opportunities and Challenges in Studying News Events through Google Search.” Digital Journalism 4, no. 1: 1–18. doi:10.1080/21670811.2015.1093272.
Pan, Bing, Helene Hembrooke, Thorsten Joachims, Lori Lorigo, Geri Gay, and Laura Granka. 2007. “In Google We Trust: Users’ Decisions on Rank, Position, and Relevance.” Journal of Computer-Mediated Communication 12, no. 3: 801–23. doi:10.1111/j.1083-6101.2007.00351.x.
“Rankings—comScore, Inc.” 2017. Accessed June 27. http://www.comscore.com/Insights/Rankings.
Rodgers, Shelly, Linda Jean Kenix, and Esther Thorson. 2007. “Stereotypical Portrayals of Emotionality in News Photos.” Mass Communication and Society 10, no. 1: 119–38. doi:10.1080/15205430701229824.
Schonberg, Jacob. 2016. “On the Road to the 2016 Elections with Google Search.” Google Blog. https://www.blog.google/products/search/on-road-to-2016-elections-with-google/.
Stark, Jennifer A., and Nicholas Diakopoulos. 2017. “Using Baselines for Algorithm Audits.” In Proceedings of the 2017 European Data and Computational Journalism Conference, 3. http://datajconf.com/#schedule.
Todorov, Alexander, Anesu N. Mandisodza, Amir Goren, and Crystal C. Hall. 2005. “Inferences of Competence from Faces Predict Election Outcomes.” Science 308, no. 5728: 1623–26. doi:10.1126/science.1110589.
Trevisan, Filippo, Andrew Hoskins, Sarah Oates, and Dounia Mahlouly. 2016. “The Google Voter: Search Engines and Elections in the New Media Ecology.” Information, Communication and Society, 1–18. doi:10.1080/1369118X.2016.1261171.
“U.S. Presidential Campaign Tracker.” 2016. LexisNexis. https://www.lexisnexis.com/en-us/products/newsdesk/presidential-tracker.page.
Van Couvering, Elizabeth. 2007. “Is Relevance Relevant? Market, Science, and War: Discourses of Search Engine Quality.” Journal of Computer-Mediated Communication 12, no. 3: 866–87. doi:10.1111/j.1083-6101.2007.00354.x.
Weber, Ingmar, and Alejandro Jaimes. 2011. “Who Uses Web Search for What: And How.” In Proceedings of the Fourth ACM International Conference on Web Search and Data Mining, 15–24. doi:10.1145/1935826.1935839.
Xing, Xinyu, Wei Meng, Dan Doozan, Nick Feamster, Wenke Lee, and Alex C. Snoeren. 2014. “Exposing Inconsistent Web Search Results with Bobble.” In Passive and Active Measurement. PAM 2014. Lecture Notes in Computer Science, edited by M. Faloutsos and A. Kuzmanovic, 8362: 131–40. Switzerland: Springer. doi:10.1007/978-3-319-04918-2_13.
CHAPTER 14
Social Dynamics in the Age of Credulity
The Misinformation Risk and Its Fallout
FABIANA ZOLLO AND WALTER QUATTROCIOCCHI
INTRODUCTION
The advent of the Internet and web technologies has radically changed the paradigm of news consumption, leading to the formation of a new scenario in which people actively participate not only in the diffusion of content, but also in its production. We switched from a model where information was supplied by a few official news sources and mediated by experts and/or journalists, to the current, disintermediated environment made up of a heterogeneous mass of news sources, often alternative to the mainstream flow. In such a context, social media play a crucial role and are becoming central not only to our social lives, but also to the political and civic world. Indeed, the so-called tech giants (Facebook, Google, Twitter, Amazon, Apple, and others) are growing in importance to the extent that their digital dominance raises critical questions about the power these organizations are acquiring. People are relying more and more on digital media for their work and social lives, as well as for news. Worldwide there were over 1.94 billion monthly active Facebook users as of March 2017 (Facebook 2017). Clearly, by now the platform is too big to ignore. Indeed, social media have rapidly become the main information source for many of their users (Newman, Levy, and Nielsen 2016). Moreover, for engaged and active users, there is a huge amount of information competing for their
attention. Every 60 seconds on Facebook, 3.3 million posts are created, 510,000 comments are posted, and 293,000 statuses are updated. In other words, users deal with a continuous, uninterrupted flow of information, where content quality may be poor (Allen 2017; The Social Skinny 2017). Indeed, despite the enthusiastic rhetoric about the emergence of a collective intelligence (Lévy 1997), social media are riddled with unsubstantiated and often untruthful rumors, which can influence public opinion negatively. It is no accident that, since 2013, the World Economic Forum (WEF) has placed the global danger of massive digital misinformation among the core technological and geopolitical risks, alongside terrorism, cyberattacks, and the failure of global governance (Howell 2013). The phenomenon is alarming. When people are misinformed, they hold beliefs that neglect factual evidence. Moreover, it has been shown that, in general, people tend to resist facts (Kuklinski et al. 2000), and that corrections frequently fail to reduce misperceptions, instead producing a backfire effect (Nyhan and Reifler 2010). Taking a step back, we thus wonder how the dominance of a few social networks (and Facebook, in particular) can lead to the formation of echo chambers and exacerbate the spreading of misinformation online. To address the problem properly, we have to account for the sociocognitive factors underlying the phenomenon. Indeed, understanding the main determinants behind content consumption and the emergence of narratives on online social media is crucial. To this end, we need to go beyond the pure, descriptive statistics of big data and address the challenge by implementing a cross-methodological, interdisciplinary approach that takes advantage of both the question-framing capabilities of the social sciences and the experimental and quantitative tools of the hard sciences. The chapter is structured as follows.
In the second section, we analyze a collection of works about the spreading of misinformation on online social media, focusing on the role played by confirmation bias and its effect on users’ response to troll content and dissenting information; in the third section, we study how information produced by official news outlets is consumed by 376 million Facebook users; in the fourth section, we present two case studies, both related to politics: Brexit and the Italian constitutional referendum; finally, in the fifth section, we draw some conclusions.
THE SPREADING OF MISINFORMATION ONLINE
In this section we focus on a collection of works (Bessi et al. 2015a; Bessi et al. 2015b, 2016a; Del Vicario et al. 2016; Zollo et al. 2015; Zollo et al.
2017) aiming at characterizing the role of confirmation bias in the spreading of misinformation on online social media. We want to investigate the cognitive determinants behind misinformation and rumor spreading by accounting for users’ behavior on different and specific narratives. In particular, we define the domain of our analysis by identifying two distinct narratives: (1) conspiracy and (2) scientific information sources. Notice that we do not focus on the quality or the truth value of information, but rather on its verifiability. Indeed, while the producers of scientific information, as well as their data, methods, and outcomes, are readily identifiable and available, the origins of conspiracy theories are often unknown and their content is strongly disengaged from mainstream society, sharply diverging from recommended practices. Thus, we first analyze users’ interaction with Facebook pages belonging to these distinct narratives over a time span of five years (2010–2014), in both the Italian and the US context. Then, we perform a comparative study on how content (videos) from both categories is consumed on two different social media platforms, namely Facebook and YouTube. Finally, we measure users’ response to (1) information consistent with one’s narrative, (2) troll content, and (3) dissenting information, i.e., debunking attempts.
Datasets
We identify two main categories of pages: Conspiracy news and Science news. The first category includes all pages diffusing conspiracy information (i.e., pages that disseminate controversial information, most often lacking supporting evidence and sometimes contradicting the official news). Pages like I don’t trust the government, Awakening America, or Awakened Citizen promote heterogeneous content ranging from aliens, chem-trails, and geocentrism to the causal relation between vaccinations and homosexuality. The second category is that of scientific dissemination and includes institutions, organizations, and scientific press with the main mission of diffusing scientific knowledge. For example, pages like Science, Science Daily, and Nature are active in diffusing posts about the most recent scientific advances. Finally, we identify two additional categories of pages:

1. Troll: sarcastic, paradoxical messages mocking conspiracy thinking (for the Italian dataset);
2. Debunking: information aiming at correcting false conspiracy theories and untruthful rumors circulating online (for the US dataset).
Table 14.1. Breakdown of the Italian Facebook dataset

             Science     Conspiracy   Troll
Pages        34          39           2
Posts        62,705      208,591      4,709
Likes        2,505,399   6,659,382    40,341
Comments     180,918     836,591      58,686
Likers       332,357     864,047      15,209
Commenters   53,438      226,534      43,102
Table 14.2. Breakdown of the US Facebook dataset

             Science       Conspiracy    Debunking
Pages        83            330           66
Posts        262,815       369,420       47,780
Likes        453,966,494   145,388,117   3,986,922
Comments     22,093,692    8,304,644     429,204
Likers       39,854,663    19,386,131    702,122
Commenters   7,223,473     3,166,726     118,996
To produce our datasets, we categorized conspiracy and scientific news sources available on Facebook with the help of several experts active in debunking fake news and conspiracy theories.1 To validate the list, all pages were then manually checked by looking at their self-description and the type of promoted content. The entire data-collection process was performed exclusively by means of the Facebook Graph API (Facebook 2017), which is publicly available and can be used through one’s personal Facebook user account. We used only publicly available data (users with privacy restrictions are not included in our dataset). Data was downloaded from Facebook pages that are public entities. Users’ content contributing to such entities is also public unless users’ privacy settings specify otherwise, and in that case it is not available to us. When allowed by users’ privacy specifications, we accessed public personal information. However, in our study we used fully anonymized and aggregated data. We abided by the terms, conditions, and privacy policies of Facebook.

1. We are grateful to Skepti Forum, Skeptical spectacles, Butac, and Protesi di Complotto.
Table 14.3. Breakdown of the Facebook dataset

           Science   Conspiracy
Posts      4,388     16,689
Likes      925K      1M
Comments   86K       127K
Shares     312K      493K

Table 14.4. Breakdown of the YouTube dataset

           Science   Conspiracy
Videos     3,803     13,649
Likes      13.5M     31M
Comments   5.6M      11.2M
Views      2.1M      6.33M
Starting from the US dataset, we picked all posts on Facebook linking to a video on YouTube and then downloaded the metadata of those videos. To build the YouTube dataset, we downloaded likes, comments, and descriptions of each video cited/shared in Facebook posts by means of the YouTube Data API (YouTube 2016; Bessi et al. 2016b). Each video link in Facebook contains a unique ID, which identifies the resource uniquely on both Facebook and YouTube. The comments thread in YouTube, with its time sequence, is the equivalent of the feed timeline on a Facebook page. The same techniques used to analyze Facebook data can then be used on YouTube data with minimal modifications. The YouTube dataset consists of about 17 thousand videos linked by Facebook posts supporting Science or Conspiracy news. Videos linked by posts in Science pages are considered videos disseminating scientific knowledge, whereas videos linked by posts in Conspiracy pages are considered videos diffusing controversial information and supporting myth and conspiracy-like theories. This categorization has been validated by both the authors and Facebook groups very active in monitoring conspiracy narratives. The exact breakdown of the data is shown in Tables 14.3 and 14.4.

Echo Chambers
To investigate the existence of echo chambers on social media, in this section we use our massive dataset to analyze users’ interaction with contrasting
narratives and measure the impact of confirmation bias in users’ response to distinct kinds of information.
Attention Patterns
We start our analysis by focusing on how information is consumed by users on both the Italian (Bessi et al. 2015a; Bessi et al. 2015b, 2016a) and the US Facebook (Zollo et al. 2017), and on YouTube (Bessi et al. 2016b). Facebook’s paradigm allows users to interact with content by means of likes, comments, and shares. Each action has a particular meaning (Ellison, Steinfield, and Lampe 2007): while a like represents positive feedback to the post, a share expresses the desire to increase the visibility of a given piece of information; finally, a comment is the way in which the debate takes form around the topic of the post. Similarly, on YouTube a like stands for positive feedback to the video, while a comment is the way in which users debate the topic promoted by the video. Our results show that distinct kinds of information (Science, Conspiracy) are consumed in a comparable way, on both the Italian and the US Facebook. Also, we find strong correlations in how users like, comment, and share videos on Facebook and YouTube (Bessi et al. 2016b). Indeed, although the algorithms for content promotion are different, information reverberates in a similar way. However, when considering the correlation between pairs of actions, we find that users of Conspiracy pages are more prone to both share and like a post, denoting a higher level of commitment (Bessi et al. 2015a). Conspiracy users are more willing to contribute to a wide diffusion of their topics of interest, in line with their belief that such information is intentionally neglected by mainstream media.
Polarization
We now want to understand if users’ engagement with a specific kind of content can become a good proxy to detect groups of users sharing the same system of beliefs, that is, echo chambers. Assume that a user u has performed x and y likes (or comments) on scientific and conspiracy posts, respectively, and let ρ(u) = (x − y) / (x + y). Thus, we say a user u is polarized toward Science if ρ(u) ≥ 0.95, that is, she left more than 95% of her likes (or comments) on scientific posts; analogously, we say a user u is polarized toward
Conspiracy if ρ(u) ≤ −0.95, that is, she left more than 95% of her likes (or comments) on Conspiracy posts. In Figure 14.1 we show the probability density function (PDF) of users’ polarization on the Italian Facebook (Bessi et al. 2016b). We observe a sharply bimodal distribution with two main peaks at the values −1 and 1. Indeed, the majority of users are polarized toward either Science or Conspiracy, eliciting the formation of two well-segregated groups of users. Moreover, we also find that the more a user is active on her narrative, the more she is surrounded by friends sharing the same attitude. Hence, social interactions of Facebook users are driven by homophily: users not only tend to be very polarized, but also tend to be linked to users with similar preferences. Indeed, it may be observed that for polarized users the fraction of friends with the same polarization is very high (>75%) and grows with the engagement in their own community. Similar patterns can be observed on the US Facebook (Zollo et al. 2017). In Figure 14.2 we show that the PDF of users’ polarization is sharply bimodal here as well, with two principal peaks at the values −1 and 1, corresponding to Conspiracy and Science, respectively. The same pattern holds if we look at polarization based on comments rather than on likes. Analogously, Figure 14.3 shows that on both Facebook and YouTube (Bessi et al. 2016b) the vast majority of users are polarized toward one of the two conflicting narratives, that is, Science and Conspiracy. Thus, our results confirm the existence of echo chambers on the Italian and the US Facebook, and on YouTube. Indeed, content related to distinct narratives aggregates users into distinct, polarized communities, where users interact with like-minded people sharing their own system of beliefs.
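Concretely, the polarization measure and the 0.95 cutoff can be sketched in a few lines. This is our own minimal illustration with ρ(u) oriented so that +1 means fully Science and −1 fully Conspiracy, as in the figures; function names are illustrative, not from the cited studies:

```python
def polarization(science_likes, conspiracy_likes):
    """rho(u) = (x - y) / (x + y), where x is the user's likes on Science
    posts and y the likes on Conspiracy posts; ranges from -1 to +1."""
    x, y = science_likes, conspiracy_likes
    if x + y == 0:
        return None  # user engaged with neither narrative
    return (x - y) / (x + y)

def classify(rho, threshold=0.95):
    """Label a user as polarized only beyond the +/-0.95 cutoff."""
    if rho is None or abs(rho) < threshold:
        return "not polarized"
    return "science" if rho > 0 else "conspiracy"
```

Applying `polarization` to every user and plotting a histogram of the resulting values is exactly what the PDFs in Figures 14.1–14.3 show; the bimodal peaks at ±1 are users whose likes fall almost entirely on one side.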
Figure 14.1: Italian Facebook. Users’ polarization. Source: Bessi, Petroni, et al. (2016); Bessi, Petroni, et al. (2015).
[ 348 ] Politics
Figure 14.2: US Facebook. Users’ polarization considering likes (left) and comments (right). Source: Zollo et al. (2017).
Figure 14.3: US Facebook and YouTube. Users’ polarization. Source: Bessi, Zollo, et al. (2016).
Cascades
In this section, we show how confirmation bias dominates viral processes of information diffusion and that the size of the (mis)information cascades may be approximated by the size of the echo chamber (Del Vicario et al. 2016). Our aim is to characterize the statistical signature of cascades according to the narrative, that is, Science or Conspiracy. We first consider cascades’ lifetime, that is, the time (in hours) elapsed between the first and the last share of the post. Our findings show that, for both categories, there is a first peak at approximately 1–2 hours and a second one at approximately 20 hours, denoting that the temporal sharing patterns are similar independently of the narrative. However, if we look at the lifetime as a
function of the cascade size (i.e., the number of users sharing the post), we find that news assimilation differs between the categories. For scientific news, the variability of the lifetime grows with the cascade size: larger cascades correspond to a higher lifetime variability. For conspiracy-related contents, instead, the lifetime of a post grows monotonically with the cascade size. Such results suggest that Science information is usually assimilated quickly (i.e., it reaches a high level of diffusion rapidly); a longer lifetime does not necessarily correspond to a higher level of interest, but possibly to a prolonged discussion within a specialized group of experts. Conversely, conspiracy rumors are assimilated more slowly and show a positive correlation between lifetime and size; thus, long-lived posts tend to be discussed by larger communities.

Finally, we observe that the majority of links between consecutively sharing users are homogeneous, that is, both users share the same polarization and, hence, belong to the same echo chamber. In particular, our results suggest that information spreading occurs mainly inside homogeneous clusters in which all users share the same polarization. Hence, contents tend to circulate and remain confined inside the echo chambers.

Summarizing, we found that, although consumption patterns on Science and Conspiracy pages are similar, cascade dynamics differ. Indeed, selective exposure is the primary driver of contents' diffusion and generates the formation of echo chambers, each with its own cascade dynamics.
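The two cascade statistics used above, size and lifetime, can be computed directly from the share events of a post. A minimal sketch, where the data layout is our own assumption:

```python
# Hedged sketch: a cascade is represented as a list of (user_id, timestamp)
# share events, with timestamps in hours. Size is the number of distinct
# sharers; lifetime is the time elapsed between the first and last share.

def cascade_stats(shares):
    times = [t for _, t in shares]
    users = {u for u, _ in shares}
    return len(users), max(times) - min(times)

shares = [("a", 0.0), ("b", 1.5), ("c", 2.0), ("a", 20.0)]
size, lifetime = cascade_stats(shares)  # size = 3, lifetime = 20.0 hours
```

Scatter-plotting lifetime against size over many cascades, separately for Science and Conspiracy posts, reproduces the comparison described in the text.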
Emotional Dynamics
Now we want to analyze the emotional dynamics inside and across echo chambers. In particular, we apply sentiment analysis techniques to the comments of our Italian Facebook dataset and study the aggregated sentiment with respect to scientific and conspiracy-like information (Zollo et al. 2015). The sentiment analysis is based on a supervised machine learning approach: we first annotate a substantial sample of comments, and then build a classification model, which is applied to associate each comment with one sentiment value: negative, neutral, or positive. The sentiment is intended to express the emotional attitude of Facebook users when posting comments.

To further investigate the dynamics behind users' polarization, we first study how the sentiment changes with regard to a user's engagement in her own echo chamber. The first interesting result is that the sentiment of polarized users tends to be more negative than that of general users (Figure 14.4). Indeed, we may notice that the percentage of users whose emotional
Figure 14.4: Italian Facebook. Proportions of negative, neutral, and positive comments made by all users (left) and polarized users (right) on both Science and Conspiracy posts. Source: Zollo et al. (2015).
Figure 14.5: Italian Facebook. Users’ sentiment as a function of their number of comments. Negative (respectively, neutral, positive) sentiment is denoted by red (respectively, yellow, blue) color. Source: Zollo et al. (2015).
attitude on Facebook is negative differs by 11% for Conspiracy and 8% for Science. Moreover, when looking at the sentiment as a function of the number of comments of the user, we find that the more active a polarized user is, the more she tends toward negative values, both on Science and Conspiracy posts. Such results are shown in Figure 14.5, where the sentiment has been regressed against the logarithm of the number of comments. Interestingly, the sentiment of Science users decreases faster than that of Conspiracy users. Thus, we have observed that polarized users tend to be negatively involved in the social network, and that such negativity increases along with their activity (in terms of comments). Therefore, we wonder what
Figure 14.6: Italian Facebook. Aggregated sentiment of posts as a function of their number of comments. Negative (respectively, neutral, positive) sentiment is denoted by red (respectively, yellow, blue) color. Source: Zollo et al. (2015).
happens when polarized, negative-minded users meet their opponents. To this aim, we pick all the posts representing the arena where the debate between Science and Conspiracy users takes place. In particular, we select all the posts commented on at least once by both a user polarized toward Science and a user polarized toward Conspiracy. We find 7,751 such posts (out of 315,567), reinforcing the fact that the two communities are strictly separated and do not often interact with one another. Then, we analyze how the sentiment changes as the number of comments on a post increases, that is, as the discussion becomes longer. Figure 14.6 shows the aggregated sentiment of such posts as a function of their number of comments. Clearly, as the number of comments increases (that is, as the discussion becomes longer), the sentiment grows increasingly negative. Our findings show that the length of the discussion does affect the negativity of the sentiment of the users involved in the debate. In this respect, it is worth highlighting that longer discussions online are often related to flaming, a hostile and offensive interaction between (or among) users, which usually goes beyond the original topic of discussion (Coe, Kenski, and Rains 2014).
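The supervised pipeline described at the start of this section (annotate a sample of comments, train a classifier, label every comment as negative, neutral, or positive) can be illustrated with a toy bag-of-words Naive Bayes classifier. The real study used a far larger annotated sample and a more sophisticated model, so this is only a sketch:

```python
from collections import Counter, defaultdict
import math

# Toy supervised sentiment classifier in the spirit of the approach above:
# multinomial Naive Bayes over whitespace-tokenized comments, with Laplace
# smoothing. Training data and class names are illustrative.

class NaiveBayesSentiment:
    def __init__(self):
        self.word_counts = defaultdict(Counter)  # label -> word frequencies
        self.label_counts = Counter()            # label -> number of samples
        self.vocab = set()

    def fit(self, samples):
        for text, label in samples:
            self.label_counts[label] += 1
            for w in text.lower().split():
                self.word_counts[label][w] += 1
                self.vocab.add(w)

    def predict(self, text):
        words = text.lower().split()
        total = sum(self.label_counts.values())
        best, best_lp = None, float("-inf")
        for label in self.label_counts:
            lp = math.log(self.label_counts[label] / total)  # log prior
            denom = sum(self.word_counts[label].values()) + len(self.vocab)
            for w in words:
                lp += math.log((self.word_counts[label][w] + 1) / denom)
            if lp > best_lp:
                best, best_lp = label, lp
        return best

train = [("i love this great news", "positive"),
         ("this is a lie and a hoax", "negative"),
         ("the article reports the study", "neutral")]
clf = NaiveBayesSentiment()
clf.fit(train)
clf.predict("great news")  # classified as "positive" on this toy data
```

Aggregating such per-comment labels by user, or by post, yields the quantities plotted in Figures 14.4 to 14.6.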
Response to Paradoxical Information
We have shown that users tend to aggregate around preferred contents, shaping well-separated and polarized communities. Our hypothesis is that users' exposure to unsubstantiated claims may affect their criteria for
contents' selection and increase their propensity to interact with false information. Thus, in this section our aim is to test how polarized users from both categories, Science and Conspiracy, interact with troll information, which is deliberately false and consists of paradoxical imitations of Conspiracy contents (Bessi et al. 2015a). Such posts diffuse clearly dubious claims, such as the undisclosed news that infinite energy has finally been discovered, or that a new lamp made of actinides (e.g., plutonium and uranium) will finally solve the lack of energy with less impact on the environment, or that chemical analysis reveals that chem-trails contain sildenafil citratum, sold under the brand name Viagra.

Figure 14.7 shows the percentage of polarized users from both categories who interact with troll posts in terms of comments and likes on the Italian Facebook. Our findings show that users usually exposed to Conspiracy claims are more likely to jump the credulity barrier: indeed, Conspiracy users represent the majority in both liking (81%) and commenting (78%) on troll posts. Therefore, even when information is deliberately false and framed with a satirical purpose, its conformity with the Conspiracy narrative transforms it into credible content for members of the Conspiracy echo chamber. Thus, confirmation bias plays a crucial role in content selection. Moreover, such results are coherent with the literature indicating the existence of a relationship between beliefs in conspiracy theories and the need for cognitive closure, that is, the desire to get rid of ambiguity and arrive at definite (sometimes irrational) conclusions (Leman and Cinnirella 2013; Epstein et al. 1996; Webster and Kruglanski 1994). Indeed, people who prefer a heuristic approach to evaluate evidence and form their opinions are more likely to end up with an explanation consistent with their preexisting system of beliefs.
Figure 14.7: Italian Facebook. Percentage of likes and comments on troll posts made by users polarized toward Science (light blue) and Conspiracy (orange). Source: Bessi, Coletto et al. (2015).
Response to Dissenting Information
Debunking pages on Facebook strive to counter the spread of misinformation by providing fact-checked information on specific topics. However, if confirmation bias plays a pivotal role in selection criteria, then debunking is likely to sound to Conspiracy users like information dissenting from their preferred narrative. In this section, our aim is to study and analyze users' behavior with regard to debunking attempts on the US Facebook (Zollo et al. 2017).

As a first step, we show how debunking posts get liked and commented on according to users' polarization. Figure 14.8 shows how users' activity is distributed on debunking posts: the left (respectively, right) panel shows the proportions of likes (respectively, comments) left by users polarized toward Science, users polarized toward Conspiracy, and nonpolarized users. We notice that the majority of both likes and comments are left by users polarized toward Science (66.95% and 52.12%, respectively), while only a small minority are made by users polarized toward Conspiracy (6.54% and 3.88%, respectively). Indeed, the first interesting result is that the biggest consumer of debunking information is the scientific echo chamber. Out of 9,790,906 polarized Conspiracy users, just 117,736 interacted with debunking posts, that is, commented on a debunking post at least once. Thus, debunking posts remain confined within the scientific echo chamber, and only a few users usually exposed to unsubstantiated claims actively interact with the corrections. Dissenting information is mainly ignored.

However, in our scenario a few users belonging to the Conspiracy echo chamber do interact with debunking information. We now wonder about the effect of such an interaction. Therefore, we perform a comparative analysis between Conspiracy users' behavior before and after they first comment
Figure 14.8: US Facebook. Proportions of likes (left) and comments (right) left by users polarized toward Science, users polarized toward Conspiracy, and nonpolarized users. Source: Zollo et al. (2017).
on a debunking post. Figure 14.9 shows the liking and commenting rate, that is, the average number of likes (or comments) on Conspiracy posts per day, before and after the first interaction with debunking. Should the debunking work, we would expect Conspiracy users to acknowledge the correction and reduce their engagement with the Conspiracy echo chamber. Instead, we observe that their liking and commenting rates on Conspiracy posts increase after commenting. Thus, their activity in the Conspiracy echo chamber is reinforced after the interaction, rather than reduced. In other words, debunking attempts are producing a backfire effect (Nyhan and Reifler 2010).
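The before/after comparison can be sketched as follows, assuming for each user a list of timestamps (in days) of her likes on Conspiracy posts and the day of her first comment on a debunking post. All names and the data layout are illustrative:

```python
# Hedged sketch of the per-user rate comparison described above:
# average Conspiracy likes per day before vs. after the first
# interaction with a debunking post.

def rates(conspiracy_likes, first_debunk, start, end):
    """conspiracy_likes: timestamps in days; returns (before, after) rates."""
    before = [t for t in conspiracy_likes if t < first_debunk]
    after = [t for t in conspiracy_likes if t >= first_debunk]
    span_before = max(first_debunk - start, 1e-9)  # guard against zero span
    span_after = max(end - first_debunk, 1e-9)
    return len(before) / span_before, len(after) / span_after

b, a = rates([1, 5, 12, 13, 14, 16], first_debunk=10, start=0, end=20)
# b = 0.2 likes/day, a = 0.4 likes/day: activity increased after debunking
```

The distributions of such per-user rates, before and after, are what Figure 14.9 compares.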
THE ANATOMY OF NEWS CONSUMPTION ON ONLINE SOCIAL MEDIA
So far, we have measured the echo chamber effect driven by confirmation bias by analyzing information coming from different and contrasting narratives. We have seen that users look for information that adheres to their system of beliefs, tending to form polarized groups around a shared worldview. In this section, we generalize our previous results by focusing on how information produced by official news outlets is consumed by users. To map the information space on Facebook and better understand its dynamics, we analyze the news consumption patterns of 376 million users over a time span of six years (Schmidt et al. 2017).
Figure 14.9: US Facebook. Rate—i.e., average number, over time, of likes (left) and comments (right) left on Conspiracy posts by users who interacted with debunking posts. Source: Zollo et al. (2017).
Table 14.5. Data breakdown. The dataset includes all the Anglophone news outlets on Facebook over a timespan of six years (2010–2015).

Pages           920
Posts           12,825,291
Likes           3,621,383,495
Comments        366,406,014
Users           376,320,713
Likers          360,303,021
Commenters      60,115,975
Dataset
Starting from the list provided by the Europe Media Monitor (Steinberger, Pouliquen, and van der Goot 2013), we selected all the Anglophone news sources and their related pages on Facebook. The data downloaded from each page include all the posts made from January 1, 2010 to December 31, 2015, as well as all the likes and comments on such posts. The Europe Media Monitor list also includes the country and the region of each news source. For an accurate mapping on the globe, we also collected the geographical location (latitude and longitude) of each page. A breakdown of the data is reported in Table 14.5.
Selective Exposure
Our aim is to quantify the turnover of Facebook news sources by measuring the heterogeneity of users’ activity, that is, the total number of pages a user interacts with. Figure 14.10 shows the number of news sources a user interacts with during her lifetime—that is, the temporal distance between the first and last interaction with a post—and for increasing levels of engagement—that is, her total number of likes. For a comparative analysis, we standardized between 0 and 1 both lifetime and engagement over the entire user set. Thus, the highest lifetime and activity a user may have are denoted by a value of 1, the lowest by 0. Figure 14.10 shows the results for the yearly time window (first column), for the weekly (second column), and monthly (third column). We may observe that a user usually interacts with a small number of news outlets and that higher levels
Figure 14.10: Facebook on global scale. Maximum number of unique news sources that users with increasing levels of lifetime (top) and activity (bottom) interacted with (yearly, weekly, and monthly, respectively). Source: Schmidt et al. (2017).
of activity and longer lifetime correspond to a smaller number of sources. Indeed, while users with very low lifetime and activity levels interact with about 100 pages in a year, 30 in a month, and 10 in a week, the same values are far lower for more active and long-lived users, who interact with about 10 pages in a year and fewer than 4 monthly and weekly. Thus, there is a natural tendency of users to confine their activity to a limited set of pages. According to our findings, news consumption on Facebook is dominated by selective exposure.
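The underlying measurement (how many distinct news pages a user touches within a given time window) can be sketched as follows; the input layout and names are our assumption:

```python
from collections import defaultdict
from datetime import datetime

# Sketch of the heterogeneity measure above: number of unique news pages
# each user interacts with per calendar month. Input rows are
# (user, page, ISO date) interaction records.

def pages_per_month(interactions):
    seen = defaultdict(set)  # (user, (year, month)) -> set of pages
    for user, page, date in interactions:
        d = datetime.fromisoformat(date)
        seen[(user, (d.year, d.month))].add(page)
    return {k: len(v) for k, v in seen.items()}

rows = [("u1", "bbc", "2015-03-01"), ("u1", "cnn", "2015-03-20"),
        ("u1", "bbc", "2015-04-02")]
pages_per_month(rows)
# {("u1", (2015, 3)): 2, ("u1", (2015, 4)): 1}
```

The yearly and weekly counts in Figure 14.10 follow from the same computation with a different window key.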
Clusters and Users’ Polarization
Users' tendency to interact with few news sources might elicit page clusters. To test this hypothesis, we first characterize the emergent community structure of pages according to users' activity. Figure 14.11 shows two graphs where nodes are pages and two pages are connected if a user likes (or comments on) at least one post on both of them. The weight of a link, that is, the thickness of the edge, is determined by the number of users that the two pages have in common. Colors denote the membership of a node in a particular community.2

By examining users' activity across the various clusters and measuring how they span across news outlets, we find that most users remain confined within specific clusters. To understand the relationship between page groupings and user behavior, we quantify the fraction of activity a user has in the largest communities and in any other community. Figure 14.12 shows users' activity across the five largest communities. Vertices of the pentagon represent the five largest communities, and the central point all the remaining ones. The position of each dot is determined by the number of communities the users interact with; the size and transparency indicate the number of users in that position. Our findings show that users are strongly polarized and that their attention is confined to a single community of pages. Users' interaction with Facebook news outlets denotes a dominant community structure with sharply identified groups. Since users tend to focus on a small number of pages, the Facebook news sphere tends to be clustered and dominated by a well-defined community structure.
2. Communities were detected by the Fast Greedy (FG) algorithm (Clauset, Newman, and Moore 2004). Details are available at Schmidt et al. (2017).
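A sketch of how such a page graph and its community structure can be built, here using networkx's implementation of the same Clauset-Newman-Moore fast-greedy algorithm cited in the note. The toy data and all names are ours:

```python
from collections import defaultdict
from itertools import combinations

import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Toy co-liker data: each user maps to the set of pages she liked posts on.
likes = {"u1": {"A", "B"}, "u2": {"A", "B"}, "u3": {"C", "D"}, "u4": {"C", "D"}}

# Two pages are linked with weight equal to the number of common users.
weights = defaultdict(int)
for pages in likes.values():
    for p, q in combinations(sorted(pages), 2):
        weights[(p, q)] += 1

G = nx.Graph()
for (p, q), w in weights.items():
    G.add_edge(p, q, weight=w)

# Fast-greedy (Clauset-Newman-Moore) modularity maximization.
communities = greedy_modularity_communities(G, weight="weight")
# On this toy graph: one community {A, B} and one community {C, D}.
```

On the real data, the resulting communities correspond to the colored clusters of Figure 14.11.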
Figure 14.11: Facebook on global scale. Community structure based on likes (left) and comments (right). Source: Schmidt et al. (2017).
Figure 14.12: Facebook on global scale. Users’ activity across the five largest communities. Source: Schmidt et al. (2017).
CASE STUDIES: ECHO CHAMBERS AND POLITICS
We now want to better understand the main determinants behind content consumption and the emergence of narratives on online social media. In this section, we address this challenge by focusing on the discussion around two of the
Table 14.6. Breakdown of the Brexit dataset on Facebook.

              Total         About Brexit
Pages         81            38
Posts         303,428       5,039
Likes         186,947,027   2,504,956
Comments      38,182,541    469,397
Likers        30,932,388    1,365,821
Commenters    7,222,273     259,078
most recent and debated topics of the political sphere: Brexit (Del Vicario et al. 2017a) and the Italian constitutional referendum (Del Vicario et al. 2017b).

Brexit Dataset
Following the EMM list (Steinberger, Pouliquen, and van der Goot 2013), we selected all news outlets (and their related Facebook pages) with at least one legal head office located in the United Kingdom. For each page, we downloaded all the posts from January 1 to July 15, 2016, as well as all the related likes and comments. The exact breakdown of the data is provided in Table 14.6. All UK-based pages have been divided into two groups: Brexit pages, which include those pages engaged in the debate around Brexit, and all others. Out of 81 pages, 38 posted at least one news story about Brexit.

Results and Discussion
We start our analysis by characterizing users' behavior on Brexit pages and the related community structure. Figure 14.13 shows the pages-users graph, where nodes are Brexit pages and two pages are connected if at least one user liked a post from each of them. The weight of a link is determined by the number of users in common between the two pages. Colors (respectively, blue and red) represent membership in one of the two communities (respectively, C1 and C2).3

3. Communities were detected by the Fast Greedy (FG) algorithm (Clauset, Newman, and Moore 2004). Details are available at Del Vicario et al. (2017b).
Figure 14.13: Brexit. Community structure (a) and percentage of pages in the different communities (b). Colors indicate the membership of users in the different communities (blue for C1, red for C2). Source: Del Vicario, Zollo, et al. (2017).
Taking into account the meaning of a like as a positive feedback to a post, we characterize how contents from the two communities get consumed by Facebook users. We define the users' polarization by likes (resp., comments) as ρ(u) = (y − x)/(y + x), where y is the number of likes (resp., comments) that user u left on posts of C2 and x the number of likes (resp., comments) left on posts of C1. Thus, a user u is said to be polarized toward C2 (resp., C1) if ρ(u) = 1 (respectively, −1). In Figure 14.14 we report the distribution of users' polarization by likes (left panel) and comments (right panel). We find that the distribution is sharply bimodal in both cases (with two visible peaks at values −1 and 1), denoting that the majority of users may be divided into two main groups referring to the two communities of Figure 14.13. Our analysis provides evidence of the existence of two well-segregated echo chambers, whose emergence is completely spontaneous. Indeed, connections among pages are the direct result of users' activity, and no
S o c i a l D y n a m i c s i n t h e A g e of C r e d u l i t y
[ 361 ]
2 6 3
Figure 14.14: Brexit. Users’ polarization by likes (left) and comments (right). Source: Del Vicario, Zollo, et al. (2017).
contents' categorization has been performed a priori. Users are divided into two main distinct groups and confine their attention to specific pages. Thus, they tend to focus on one narrative and ignore the other.
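Given the detected communities, the per-user polarization used above can be sketched as follows; the page names and the page-to-community map are purely illustrative:

```python
# Sketch of the Brexit polarization measure: x likes on C1 pages,
# y likes on C2 pages, rho(u) = (y - x) / (y + x).

def user_polarization(likes, page_community):
    """likes: list of page ids a user liked posts from."""
    x = sum(1 for p in likes if page_community.get(p) == "C1")
    y = sum(1 for p in likes if page_community.get(p) == "C2")
    if x + y == 0:
        return None  # user never interacted with either community
    return (y - x) / (y + x)

# Hypothetical pages assigned to the two communities:
page_community = {"leave_news": "C2", "remain_news": "C1"}
user_polarization(["leave_news", "leave_news"], page_community)  # 1.0
user_polarization(["remain_news"], page_community)               # -1.0
```

The histogram of this quantity over all users gives the bimodal distributions of Figure 14.14.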
The Italian Constitutional Referendum Datasets
Facebook and Twitter are two of the most popular online social media where people can access and consume news and information. However, their nature is different: Twitter is an information network, while Facebook is still a social network, despite its evolution into a “personalized portal to the online world” (Oremus 2016). Such a difference highlights the importance of studying, analyzing, and comparing users’ consumption patterns inside and between both platforms.
Facebook
Following the exhaustive list provided by Accertamenti Diffusione Stampa (ADS 2016) we identified a set of 57 Italian news sources and their respective Facebook pages. For each page, we downloaded all the posts from July 31 to December 12, 2016, as well as all the related likes and comments.
Then, we filtered out posts about the Italian constitutional referendum (held on December 4) by keeping those containing at least two words in the set {Referendum, Riforma, Costituzionale} within their textual content, that is, their description on Facebook or the URL content they pointed to. The exact breakdown of the dataset is provided in Table 14.7.
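The filtering step can be sketched as follows; the function name is ours, and the real pipeline matched against both post descriptions and linked URL contents:

```python
# Sketch of the keyword filter described above: keep posts whose text
# contains at least two words from {Referendum, Riforma, Costituzionale},
# matched case-insensitively on whitespace-separated tokens.

KEYWORDS = {"referendum", "riforma", "costituzionale"}

def about_referendum(text: str) -> bool:
    words = set(text.lower().split())
    return len(KEYWORDS & words) >= 2

about_referendum("Riforma costituzionale al voto")  # True (two keywords)
about_referendum("Il referendum si avvicina")       # False (one keyword)
```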
Twitter
Data collection on Twitter followed the same pipeline adopted for Facebook. First, we identified the official accounts of all the news sources reported by ADS (2016). Then, we gathered all the tweets posted by such accounts from July 31 to December 12, 2016, through the Twitter Search API. Starting from that set, we selected only tweets pointing to news related to the referendum debate. Specifically, we considered only the statuses whose URL was present in the Facebook dataset. The resulting set of valid tweets consists of about 5,400 elements. To make the available information comparable to Facebook, from each tweet we collected information about the users who favorited (i.e., left a favorite), retweeted, or replied. Specifically, favorites and retweets express an interest in the content as Facebook likes do, while replies are similar to comments. To obtain such information we bypassed the Twitter API, since it does not return the users who favorited or retweeted a status. In particular, we scraped the tweet page and extracted the users who liked or retweeted the status, as well as the retweet/favorite counts. Although we are able to identify the users acting on a tweet, the information is partially incomplete, since the tweet page shows at most 25 users who retweet or favor (retweeters or
Table 14.7. Breakdown of Facebook and Twitter datasets.

                     Facebook     Twitter
Pages/accounts       57           50
Posts/tweets         6,015        5,483
Likes/favorites      2,034,037    57,567
Retweets             -            55,699
Comments             719,480      30,079
Users                671,875      29,743
Likers/favoriters    568,987      16,438
Retweeters           -            14,189
Commenters           200,459      8,099
favoriters). However, such a restriction has a limited impact on the set of valid tweets: indeed, our retweeters' and favoriters' sets capture the entire set of users acting on tweets for about 80% of the statuses. As for the replies to a tweet, we were able to collect every user who commented on a tweet reporting a news story about the referendum, together with the reply tweet she wrote. The tweet page reports the entire discussion about the target tweet, including the replies to replies and so on. Here we limited our collection to first-level replies, that is, direct replies to the news source's tweet, since there the target of the comment is identifiable, namely the news linked to the tweet. The breakdown of the Twitter dataset is reported in Table 14.7.

The ratio between Facebook and Twitter volumes mirrors current social media usage in the Italian online media scene. Indeed, news sources and newspapers exploit both media channels to spread their contents, as denoted by the similar number of posts. Nevertheless, activities on the posts are skewed toward Facebook, since in Italy its active users are four to five times those of Twitter. Our collection methodology and the high comparability between the two datasets provide a unique opportunity to investigate news consumption inside and across two different and important social media.
Results and Discussion
Since online social media facilitate the aggregation of individuals into echo chambers, our aim is to understand whether a similar pattern is observed around the Italian constitutional referendum's debate. By accounting for the connections created by users' activities among pages, we first analyze the community structure on both Facebook and Twitter. To this aim, we focus on bipartite networks, that is, graphs whose vertices can be divided into two disjoint sets. More specifically, in our case nodes are Facebook pages/Twitter accounts and links are determined by the number of likers/favoriters or retweeters that two pages/accounts have in common, as shown in Figure 14.15. We may observe that there are five communities of news pages on Facebook, and four on Twitter. The networks' vertices are grouped according to their community membership. On both Facebook and Twitter, pages are not equally distributed among the communities; however, in either case we can restrict our attention to the main communities (three for Facebook and two for Twitter), as they account for about 90% of the news sources and their activity. In the case of Facebook, we may observe three main communities of comparable size, while for Twitter we observe a bigger community (C1) and a smaller one (C2), accounting,
Figure 14.15: Italian referendum. Community structure for Facebook (left) and Twitter (right). Colors indicate the membership of Facebook pages or Twitter account in the different communities (green for C1, red for C2, blue for C3, yellow for C4, and orange for C5). Source: Del Vicario, Gaito, et al. (2017).
respectively, for 70% and 18% of the vertices. Notice that the most active community of Facebook (C1), shown in Figure 14.15, represents major national news sources, while C2 and C3 include local news sources, and the majority of links can be ascribed to geographical proximity. For Twitter, we can observe a completely different pattern: C1, the biggest community, includes a mixture of national and local news sources, where the national ones are known to be the more conservative ones, while C2 includes major national news sources and very few local ones.

In order to characterize the relationship between the observed spontaneously emerging community structure of Facebook pages/Twitter accounts and users' behavior, we now look at users' activity inside and across the different communities. For Facebook, we quantify the fraction of the activity of any user in the three most active communities versus that in any other community. Figure 14.16 shows that users are strongly polarized and that their attention is confined to a single community of pages. In the case of Twitter, we can restrict our attention to C1 and C2, since the activity is concentrated in such communities. Figure 14.17 shows the distribution of users' polarization on Twitter. We find that the distribution is bimodal, with a small probability for a user to interact with pages from both communities, denoting that users may be divided into two main categories referring to the two communities C1 and C2.
Figure 14.16: Italian referendum. Users’ activity across the three most active communities of Facebook. Vertices of the triangle represent the three most active communities and the central point all the remaining ones. The position of each dot is determined by the number of communities the users interact with. The size indicates the number of users in that position. Source: Del Vicario, Gaito, et al. (2017).
Figure 14.17: Italian Referendum. Users’ polarization on Twitter. Source: Del Vicario, Gaito, et al. (2017).
CONCLUSIONS
In this chapter, we discussed a series of works aimed at unveiling the main determinants behind information consumption on online social media. To this aim, we first investigated users' behavior on two different and specific narratives, Science and Conspiracy, on both Facebook and YouTube. We provided empirical evidence of the existence of echo chambers on both social networks. Indeed, contents related to distinct narratives aggregate users into polarized communities, where they interact with like-minded people and reinforce their worldview. We also showed that confirmation bias is the primary driver of contents' diffusion, contributing to the formation of echo chambers.

By analyzing the emotional dynamics inside and across such segregated communities, we found a rather negative environment, which is exacerbated when the two communities (rarely) meet and discuss together. Moreover, as the discussion becomes longer, debates tend to degenerate, affecting users' sentiment negatively.

Then, we tested whether users' exposure to unsubstantiated claims may affect their selection criteria and increase their propensity to interact with false information. Our findings show that users usually exposed to Conspiracy claims are more likely to jump the credulity barrier: even when information is deliberately false and framed with a satirical purpose, its conformity with the Conspiracy narrative makes it credible for members of the Conspiracy echo chamber. If confirmation bias plays a pivotal role in selection criteria, then debunking is likely to sound to Conspiracy users much like information
Social Dynamics in the Age of Credulity
dissenting from their preferred narrative. And indeed, our results show that debunking posts remain confined within the scientific echo chamber: only a few Conspiracy users interact with the corrections, and when they do, their activity in the Conspiracy echo chamber is reinforced. Debunking attempts thus produce a backfire effect.

Generalizing our previous results, we examined how information produced by official news outlets was consumed by 376 million Facebook users over a time span of six years. According to our findings, news consumption on Facebook is dominated by selective exposure: users have a natural tendency to restrict their activity to a limited set of pages. Indeed, we found that users are strongly polarized and confine their attention to a single community of pages.

Finally, we considered two case studies from the political sphere: the Brexit vote and the Italian Constitutional Referendum. Our results show that, in both cases, well-segregated echo chambers emerge spontaneously, as a direct result of users’ activity and without any a priori categorization of contents. Users thus tend to confine their attention to a specific cluster of pages, focusing on one narrative and ignoring the others.

Our findings suggest that social media represent a new, hybrid system in which confirmation bias plays a pivotal role in information consumption and diffusion. Indeed, social media rely on processes that feed and foster echo chambers and, thus, users’ polarization. Such a scenario inevitably affects civic and political life, and the dominance of a few, increasingly powerful tech companies cannot be ignored. To reduce polarization and counter misinformation, understanding how the core narratives behind different echo chambers evolve is a key step. To this aim, knowing the mechanisms underlying such platforms is crucial for a better awareness of their effects on users’ behavior online.
ACKNOWLEDGMENTS
This chapter is based on coauthored material. We thank Aris Anagnostopoulos, Alessandro Bessi, Guido Caldarelli, Michela Del Vicario, Sabrina Gaito, Shlomo Havlin, Igor Mozetič, Petra Kralj Novak, Fabio Petroni, Antonio Scala, Louis Shekhtman, H. Eugene Stanley, Brian Uzzi, and Matteo Zignani.
REFERENCES

ADS. 2016. Elenchi Testate. http://www.adsnotizie.it/_testate.asp.
Allen, Robert. 2017. “What Happens Online in 60 Seconds?” Smartinsights.com, February 6, 2017. http://www.smartinsights.com/internet-marketing-statistics/happens-online-60-seconds/.
Bessi, Alessandro, Mauro Coletto, George Alexandru Davidescu, Antonio Scala, Guido Caldarelli, and Walter Quattrociocchi. 2015a. “Science vs Conspiracy: Collective Narratives in the Age of Misinformation.” PLoS ONE 10, no. 2: e0118093.
Bessi, Alessandro, Fabio Petroni, Michela Del Vicario, Fabiana Zollo, Aris Anagnostopoulos, Antonio Scala, Guido Caldarelli, and Walter Quattrociocchi. 2015b. “Viral Misinformation: The Role of Homophily and Polarization.” In Proceedings of the 24th International Conference on World Wide Web. Florence: ACM.
Bessi, Alessandro, Fabio Petroni, Michela Del Vicario, Fabiana Zollo, Aris Anagnostopoulos, Antonio Scala, Guido Caldarelli, and Walter Quattrociocchi. 2016a. “Homophily and Polarization in the Age of Misinformation.” European Physical Journal Special Topics 225, no. 10: 2047–59.
Bessi, Alessandro, Fabiana Zollo, Michela Del Vicario, Michelangelo Puliga, Antonio Scala, Guido Caldarelli, Brian Uzzi, and Walter Quattrociocchi. 2016b. “Users Polarization on Facebook and Youtube.” PLoS ONE 11, no. 8: e0159641.
Clauset, Aaron, Mark E. J. Newman, and Cristopher Moore. 2004. “Finding Community Structure in Very Large Networks.” Physical Review E (APS) 70, no. 6: 066111.
Coe, Kevin, Kate Kenski, and Stephen A. Rains. 2014. “Online and Uncivil? Patterns and Determinants of Incivility in Newspaper Website Comments.” Journal of Communication 64, no. 4: 658–79.
Del Vicario, Michela, Alessandro Bessi, Fabiana Zollo, Fabio Petroni, Antonio Scala, Guido Caldarelli, H. Eugene Stanley, and Walter Quattrociocchi. 2016. “The Spreading of Misinformation Online.” Proceedings of the National Academy of Sciences 113, no. 3: 554–59.
Del Vicario, Michela, Fabiana Zollo, Guido Caldarelli, Antonio Scala, and Walter Quattrociocchi. 2017a. “Mapping Social Dynamics on Facebook: The Brexit Debate.” Social Networks 50: 6–16.
Del Vicario, Michela, Sabrina Gaito, Walter Quattrociocchi, Matteo Zignani, and Fabiana Zollo. 2017b. “Public Discourse and News Consumption on Online Social Media: A Quantitative, Cross-Platform Analysis of the Italian Referendum.” Accepted at the International Conference on Data Science and Advanced Analytics (DSAA). Tokyo: IEEE.
Ellison, Nicole B., Charles Steinfield, and Cliff Lampe. 2007. “The Benefits of Facebook ‘Friends’: Social Capital and College Students’ Use of Online Social Network Sites.” Journal of Computer-Mediated Communication 12, no. 4: 1143–68.
Epstein, Seymour, Rosemary Pacini, Veronika Denes-Raj, and Harriet Heier. 1996. “Individual Differences in Intuitive-Experiential and Analytical-Rational Thinking Styles.” Journal of Personality and Social Psychology 71, no. 2: 390–405.
Facebook. 2017. Using the Graph API. https://developers.facebook.com/docs/graph-api/using-graph-api/.
Howell, W. L. 2013. Digital Wildfires in a Hyperconnected World. Tech. Rep. Global Risks. Switzerland: World Economic Forum.
Kuklinski, James H., Paul J. Quirk, Jennifer Jerit, David Schwieder, and Robert F. Rich. 2000. “Misinformation and the Currency of Democratic Citizenship.” Journal of Politics 62, no. 3: 790–816.
Leman, Patrick John, and Marco Cinnirella. 2013. “Beliefs in Conspiracy Theories and the Need for Cognitive Closure.” Frontiers in Psychology 4: 378.
Lévy, Pierre. 1997. Collective Intelligence. New York: Plenum/Harper Collins.
Newman, Nic, David A. L. Levy, and Rasmus Kleis Nielsen. 2016. Digital News Report. Oxford: Reuters Institute for the Study of Journalism.
Nyhan, Brendan, and Jason Reifler. 2010. “When Corrections Fail: The Persistence of Political Misperceptions.” Political Behavior 32, no. 2: 303–30.
Oremus, Will. 2016. “Facebook Isn’t the Social Network Anymore.” slate.com, April 24, 2016. http://www.slate.com/articles/technology/technology/2016/04/facebook_isn_t_the_social_network_anymore_so_what_is_it.html.
Schmidt, Ana Lucía, Fabiana Zollo, Michela Del Vicario, Alessandro Bessi, Antonio Scala, Guido Caldarelli, H. Eugene Stanley, and Walter Quattrociocchi. 2017. “Anatomy of News Consumption on Facebook.” Proceedings of the National Academy of Sciences 114, no. 12: 3035–39.
Steinberger, Ralf, Bruno Pouliquen, and Erik Van der Goot. 2013. “An Introduction to the Europe Media Monitor Family of Applications.” arXiv preprint arXiv:1309.5290.
The Social Skinny. 2017. “Social Media Statistics.” Accessed September 18, 2017. http://thesocialskinny.com/tag/social-media-statistics/.
Webster, Donna M., and Arie W. Kruglanski. 1994. “Individual Differences in Need for Cognitive Closure.” Journal of Personality and Social Psychology 67, no. 6: 1049.
YouTube. 2016. Youtube API. https://developers.google.com/youtube/v3/docs/.
Zollo, Fabiana, Alessandro Bessi, Michela Del Vicario, Antonio Scala, Guido Caldarelli, Louis Shekhtman, Shlomo Havlin, and Walter Quattrociocchi. 2017. “Debunking in a World of Tribes.” PLoS ONE 12, no. 7: e0181821.
Zollo, Fabiana, Petra Kralj Novak, Michela Del Vicario, Alessandro Bessi, Igor Mozetič, Antonio Scala, Guido Caldarelli, and Walter Quattrociocchi. 2015. “Emotional Dynamics in the Age of Misinformation.” PLoS ONE 10, no. 9: e0138740.
CHAPTER 15
Platform Power and Responsibility in the Attention Economy

JOHN NAUGHTON
In an information-rich world, the wealth of information means a dearth of something else: a scarcity of whatever it is that information consumes. What information consumes is rather obvious: it consumes the attention of its recipients. Hence a wealth of information creates a poverty of attention and a need to allocate that attention efficiently among the overabundance of information sources that might consume it. —Herbert Simon (1971)
INTRODUCTION
The five dominant digital companies discussed in chapter 1—Apple, Alphabet (Google), Microsoft, Amazon, and Facebook—are all embodiments of corporate wealth and power. As such they pose some familiar regulatory challenges and some new ones. Among the former are abuses of monopoly power (e.g., Google and search, Microsoft and operating systems); among the latter are questions of how to demonstrate harm in markets where services are provided free of charge to users, and how to conceptualize the kinds of power that mastery of digital technologies and ownership of a dominant platform may confer on a particular digital giant. This second set of questions is the focus of this chapter.

Senior Research Fellow, Centre for Research in the Arts, Social Sciences and Humanities (CRASSH), University of Cambridge. e: [email protected].
For this purpose we need to distinguish between the three companies (Amazon, Apple, and Microsoft) that sell goods and services to paying customers, and the two “pure digital” companies (Google and Facebook) that provide services that are free at the point of consumption but whose business models involve collecting and monetizing user data, which are aggregated, refined, and sold to third parties in order to facilitate the targeting of advertisements at users.1 As noted in chapter 1, such business models involve two-sided (or multisided) markets and as such pose interesting analytical and regulatory problems, but these are not our concern here. Rather we look at the distinctive kinds of power and influence possessed and wielded by these two “pure digital” operators and the power shifts that their platforms and operations have enabled.
WHAT SETS GOOGLE AND FACEBOOK APART?
At first sight, Google and Facebook may just look like relatively recent manifestations of large transnational US-based corporations. And so, in one sense, they are—as an empirical comparison between them and more conventional corporations might suggest. Take, for example, Facebook and the energy giant ExxonMobil. In the second quarter of 2017, both were listed in the top 10 most valuable companies in the United States by market capitalization, and had comparable market valuations: Facebook at $357 billion (seventh in the list) and ExxonMobil at $342 billion (in 10th place).2 And both companies are global operations in that they have a presence in many markets outside of the United States. But there the similarities end. In 2016, Facebook had just over 17,000 employees,3 making it a significantly smaller employer than ExxonMobil, which in the same year had 71,300.4 ExxonMobil had much greater global
1. A business model characterized—e.g., by Zuboff (2015)—as “surveillance capitalism.”
2. https://en.wikipedia.org/wiki/List_of_public_corporations_by_market_capitalization#2017 (accessed August 25, 2017). In the previous quarter their respective rankings were eighth and sixth.
3. https://www.statista.com/statistics/273563/number-of-facebook-employees/.
4. https://www.statista.com/statistics/264122/number-of-employees-at-exxon-mobil-since-2002/. Since then, the number of Facebook’s employees and its market value have grown significantly—to 20,658 and $490B respectively in mid-2017 (https://newsroom.fb.com/company-info/). The corresponding figures for ExxonMobil are 72,700 and $325.1B, which means that its employee numbers grew slowly and its share price dropped in the period under discussion.
revenues ($218.6 billion compared with Facebook’s $27.638 billion),5 but Facebook’s net income as a proportion of global revenues was 10 times that of ExxonMobil’s (36.57% compared with 3.58%). And Facebook appears to be growing much more rapidly than ExxonMobil: its July 2017 SEC filing, for example, claims an annual growth rate of employee numbers of 43%.6

But it is their relationships with the people who use their products and services that most vividly illustrate the difference between the two organizations. Direct comparisons are difficult because ExxonMobil has only customers—purchasers of the products and services that it offers—whereas Facebook has both users (those who use its services at no charge) and customers (the organizations that pay to advertise on its platforms). The same applies to Google. Figures on the numbers of these customers are hard to come by, but in 2016 Facebook’s global revenue from advertising was $26.9 billion, which meant that its average advertising revenue per user was $15.98.7

However, it is the sheer scale and the level of engagement of their users that really marks out the two digital giants as new and distinctive kinds of corporation. In mid-2017, for example, Facebook had 2.01 billion monthly active users (MAUs), of whom 1.32 billion accessed the site daily. The company also owns WhatsApp, Messenger, and Instagram, which (respectively) have 1.2 billion, 1.2 billion, and 700 million users (Lanchester 2017, 3). Every interaction with Facebook’s services is logged and algorithmically analyzed, a process that generates vast troves of user data from likes, shares, messages, uploaded photographs and videos, and other activities. Possession of these hoards of user data is what really distinguishes the “pure digital” corporations from conventional companies.
ExxonMobil doubtless collects significant amounts of data about its customers, but it is unlikely that it knows much about their daily activities, their friendships, hobbies, families, political and sexual orientations, or personal obsessions (Stephens-Davidowitz 2017). Facebook, in contrast, does “know” these things and uses that knowledge to enable its real customers—advertisers—to target users with unprecedented precision. As one critic (Taplin 2017, 143) puts it:

If I want to reach women between the ages of 25 and 30 in zip code 37206 who like country music and drink bourbon, Facebook can do that. Moreover,

5. https://en.wikipedia.org/wiki/ExxonMobil and https://en.wikipedia.org/wiki/Facebook.
6. http://d18rn0p25nwr6d.cloudfront.net/CIK-0001326801/c5180607-adbf-4f94-a09f-708c043c4d3b.pdf.
7. https://www.statista.com/statistics/234056/facebooks-average-advertising-revenue-per-user/.
Facebook can often get friends of these women to post a “sponsored story” on a targeted consumer’s news feed, so it doesn’t feel like an ad. As Zuckerberg said when he introduced Facebook Ads, “Nothing influences people more than a recommendation from a trusted friend. A trusted referral is the Holy Grail of advertising.”
The accumulation of huge amounts of personal information about users is a concomitant of the business model of surveillance capitalism: Facebook, for example, is claimed to hold 98 distinct data points on every user (Dewey 2016). Facebook and Google are currently the masters of this industry, and each has built an elaborate technological system for delivering targeted advertising to individual users, tailored using what the company knows about them from their online activities on its platforms.
CONCEPTUALIZING DIGITAL POWER
“Power” is a famously slippery concept, and it has become more so in a networked world. Max Weber saw it as “the chance to carry through one’s will, even against resistance” (Weber 1968, 53). Bertrand Russell defined power as “the ability to produce intended effects” (Russell 1938, 23), which captures the essence of the archetypal conception of power in terms of the capacity of defined agents to make things happen—what we might call direct power. A classic articulation of this is found in Lukes’s three dimensions of power: the capacity to (1) compel people to do what they do not want to do; (2) prevent them doing what they want to do; and (3) shape the way they think (Lukes 1974). The first two can be interpreted in terms of coercive power as possessed by, say, agents of the state, while the third is a capacity possessed by state agencies and also by private corporations like media organizations. One of the lessons of the last decade is that both Google and Facebook have acquired this third dimension of power—for example, in their ability to shape the public sphere.

Traditional notions of power imply discernible hierarchical arrangements and cause-and-effect processes. The advent of the Internet and the networks that it has enabled or spawned, however, necessitates conceptualizations of power that reflect the realities of these new structures. One example is Grewal’s notion of network power, which is essentially the power that flows from successful exploitation of network effects (Grewal 2009). All networks, Grewal argues, have standards embedded in them, and in theory people are free to adopt or ignore these standards. But in practice
[ 374 ] Politics
the standards gain in value the more they are used and thereby undermine the viability of alternative forms of cooperation. The result is that “our choices tend to narrow over time, so that standards are imposed on us” (Caldwell 2008).

Grewal developed his concept of network power in the context of a broader study of globalization, so he was writing in a wider context than that of the corporate actors that are the focus of this chapter. This is also true of Castells, a leading scholar of cyberspace, who has written extensively on the subject of power (Castells 2009). In his more recent work (Castells 2011, 773), Castells employs somewhat overlapping terminology to distinguish four types of network power:

(1) Networking Power—the power of the organizations included in the core networks of global network society over those who are excluded from those networks;
(2) Network Power—the power resulting from the standards required to coordinate social interaction in networks;
(3) Networked Power—the power of some social actors over other social actors in the network; and
(4) Network-making Power—“the power to program specific networks according to the interests and values of the programmers, and the power to switch different networks following the strategic alliances between the dominant actors of various networks.”

In this schema, it would seem that the only area of overlap between Castells and Grewal is that the latter’s concept of network power essentially corresponds to Castells’s second category—also called “network power”—that is, the power exercised by the protocols that determine the behavior of nodes within a network.

Other kinds of corporate power include that which flows from ownership and control of particular kinds of resource.
These include vast hoards of user data accumulated over many years; formidable cohorts of talented engineers and computer scientists; impressive revenue streams and profits; global infrastructures of computing resources and data storage; huge reserves of offshore cash, and so forth. There are also less esoteric types of power. For example, the ability to influence policymaking or legislation through conventional lobbying; cultivating thought-leaders by supporting academic research in areas of interest to the companies; and the development of close relationships with government and regulators via the “revolving door” through which former officials are employed by companies, and senior roles in government agencies and regulators are filled by former Google and Facebook executives. Since this has become standard practice for large corporations in liberal democracies, it no longer distinguishes the digital companies from their nondigital counterparts, and so it is omitted from the current analysis.
P l at f or m P o w e r a n d R e s p o n s i b i l i t y i n t h e At t e n t i o n E c o n o m y
[ 375 ]
6 7 3
DIRECT AND INDIRECT POWER
Much of the discussion about power focuses on agency—the ability of an entity “to produce intended effects,” as Russell puts it. This is what we might call direct power, and the two “pure digital” companies under consideration here certainly possess it. One sees it, for example, in the one-sidedness of the online End User Licence Agreements (EULAs) that they impose on users—contracts that would rarely pass muster in the offline world. But what is striking about Google and Facebook is that, as a result of their business models, resources, operations, and platforms, they also produce unintended effects, which implies possession of novel and elusive kinds of indirect power.

As an illustration, consider the exhortation of Facebook’s founder and CEO, Mark Zuckerberg, to his developers after the company’s IPO: “Move fast and break things.” Within the entrepreneurial culture of Silicon Valley, where Schumpeter’s notion of “creative destruction” has the status of Holy Writ, this seemed unremarkable. But the experience of the 2016 Brexit campaign in the UK and the election of Donald Trump in the United States suggests that some “breakage”—or at any rate disruption—occurred in those two democratic processes as a result of Facebook’s dominance. And so while it was clearly not the intention of the company’s CEO to “break” democracy, nevertheless his technology had an impact—perhaps a significant one—on the outcome.

Another illustration of indirect power comes from Google’s “autocomplete” facility, which provides suggestions based on past queries and, where appropriate, enables the searcher to complete a query simply by hitting the return key. This generated significant controversy in December 2016, when a journalist typed a query beginning “Are Jews . . .” and found that one of the autocomplete options offered was “evil.” Hitting return then brought up a page of 10 results, 9 of which contained anti-Semitic content.
The third result linked to an article from stormfront.org, a neo-Nazi website. The fifth was a YouTube video: “Why the Jews are Evil. Why we are against them” (Cadwalladr 2016). In the ensuing controversy, Google went to great pains to point out that it was not anti-Semitic and that the autocomplete suggestions were algorithmically generated. “Our search results,” it stated,

are a reflection of the content across the web. This means that sometimes unpleasant portrayals of sensitive subject matter online can affect what search results appear for a given query. These results don’t reflect Google’s own opinions or beliefs—as a company, we strongly value a diversity of perspectives, ideas and cultures. (Gibbs 2016)
[ 376 ] Politics
Within hours of the article’s publication, however, the company had taken unspecified action to modify the output of the algorithm. In protesting its innocence, though, Google was glossing over an uncomfortable reality: its algorithms have the power to shape the public sphere. They do this by curating a selection of possible answers to a query from the vast diversity available on the Web, and the ability to do this constitutes a form of power. The fact that the company’s algorithms are merely choosing from websites and resources published by people and agencies that are not under its control does not detract from this reality.

One sees this most clearly in the landmark 2014 Google Spain ruling of the European Court of Justice, which held that individuals in countries within its jurisdiction had the right to prohibit Google from linking to items that were “inadequate, irrelevant or no longer relevant, or excessive in relation to the purposes for which they were processed and in the light of the time that has elapsed.” This created the so-called “Right to be Forgotten” (RTBF)—which is a misnomer, because the complained-of material can still remain on the Web. What the ruling confers is thus only a right not to be listed in a Google search conducted in any of the countries lying within the jurisdiction of the European Court.8 But given the dominance of Google in the European market, it is essentially an implicit recognition that an Internet company has the power to render an individual invisible in cyberspace: if a Google search cannot find him or her, then that person has effectively “been disappeared.”

Google’s protestations that its search results are purely the output of algorithmic decisions with no human intervention are doubtless correct. Given the scale of the company’s operations,9 it could hardly be otherwise.
The claim that it is merely ranking what its algorithms find on the Web, however, does not mean that it is not at the same time shaping—and distorting—the public sphere. As an illustration, consider the online controversy that erupted in May 2017 after President Trump was accused of having leaked “highly classified” information to senior Russian officials at a White House meeting (Miller and Jaffe 2017).

This prompted a round of furious ripostes on Trump-supporting websites and news outlets about the leaking of classified information in earlier years by Obama administration officials. A particular focus of the response was the allegation that Obama’s CIA director and his vice president
8. The information can still be found via a search on Google.com—the version of the search engine that serves users based in the United States. 9. 40,000 search queries every second on average, i.e., 3.5 billion searches per day according to http://www.internetlivestats.com/google-search-statistics/.
had both leaked classified information about the mission that killed Osama bin Laden in 2011—leaks that had allegedly endangered the lives of people who had been part of the operation. Added spice was provided by allegations that the Obama officials had also leaked classified information to the Hollywood studio that was making a feature film about the raid.10

At one level, this little spat might be regarded as the routine infighting of partisan politics. But a citizen who was puzzled by the story and entered the query “Obama classified information bin laden” into a Google search box would have received a list of results the first six of which—all dated the same day as the Washington Post story or one day afterward—were links to sites publishing various versions of the Obama story. She would have had to scroll down to the seventh link—a post on PolitiFact, a fact-checking site, debunking the allegations, published five years earlier, on August 20, 2012—to find some nonpartisan information on the subject (Sollenberger 2017). Given that most Google users rarely get beyond the first page, and about half never go beyond the first three results, the pro-Trump partisans will probably have achieved their objective—which presumably was to obscure the significance of reports about Trump’s allegedly giving classified information to Russian officials.

Since we can rule out the hypothesis that Google has some ulterior motive in “fixing” these particular results, we are left with two inferences. The first is that the results are indeed algorithmically curated, as the company maintains; the second is that the algorithm can be gamed by politically motivated operators as well as by commercially driven practitioners of search engine optimization (SEO). The methodology for doing this is well understood.
It involves creating content that is not wildly misleading but subtly masks the truth; phrasing that content in ways that will encourage people to share it; and finally posting the content on a wide variety of sites. After that, Google’s algorithms (which privilege up-to-date and widely shared content) will do the rest.

Even a cursory survey of the role that Google and Facebook may have played in the presidential election and the Brexit referendum suggests that the line between direct and indirect power is often blurred. In the case of Facebook, for example, there is experimental evidence (Kramer, Guillory, and Hancock 2014) that the company can, through manipulation of users’ news feeds, affect their moods and that it can motivate users to vote in elections (Bond et al. 2012). Such experiments are demonstrations of direct power—that is, the ability to produce intended effects.
10. Released in 2012 as Zero Dark Thirty.
But how should we categorize the role that Facebook played in the 2016 presidential election? In the first place, many commentators have noted the significance of the fact that an increasing number of Internet users get their news from social media, and from Facebook in particular (Bell and Owen 2017). This suggests that the company has acquired Lukes’s “third dimension” of power—the capacity to influence how people think; in other words, the media power traditionally possessed by television networks and newspapers.

There is also evidence that Donald Trump’s campaign made intensive use of data-driven campaign advertising on Facebook. Gary Coby, director of advertising at the Republican National Committee, claimed that the Republican campaign used the tools that Facebook provides for advertisers to test versions of ads for effectiveness—sometimes running 40,000–50,000 variants in a single day (Lapowsky 2016). Similar faith in the effectiveness of digital advertising was displayed in the UK Brexit referendum campaign: Dominic Cummings, who was campaign director for Vote Leave, claimed that 98% of Vote Leave’s campaign funding was spent on digital advertising (Economist 2017).

Much was made during the 2016 election of the prevalence of “fake news” on social media, and particularly of a study showing that the most popular fake stories were more widely shared on Facebook than the most popular mainstream news stories (Silverman 2016). The controversy prompted a pained response from the company’s CEO. “Of all the content on Facebook,” he wrote,

more than 99% of what people see is authentic. Only a very small amount is fake news and hoaxes. The hoaxes that do exist are not limited to one partisan view, or even to politics. Overall, this makes it extremely unlikely hoaxes changed the outcome of this election in one direction or the other. (Zuckerberg 2016)
This may be an accurate assessment as far as the impact of fake news is concerned (Allcott and Gentzkow 2017).11 But Facebook played other roles in addition to being a conduit for news, fake or otherwise—for example, in enabling paid-for political advertising to reach voters who might be receptive to it. In September 2017, for example, the company’s Chief Security 11. The study (by two economists) analyzed three datasets. The first tracked the amount of traffic on news websites that was directed by social media. The second examined the top fake news stories identified by BuzzFeed and two prominent fact- checking sites, Snopes and PolitiFact. The third was derived from of the researchers’ own postelection online survey of 1,200 voters, which showed that social media were not the major source of political news for most Americans in 2016: only 14% say they
P l at f or m P o w e r a n d R e s p o n s i b i l i t y i n t h e At t e n t i o n E c o n o m y
[ 379 ]
0 8 3
Officer revealed that an internal inquiry had found approximately $100,000 in ad spending from June 2015 to May 2017—associated with roughly 3,000 ads—that was “connected to about 470 inauthentic accounts and Pages in violation of our policies.” The analysis suggested, “these accounts and Pages were affiliated with one another and likely operated out of Russia.” The ads in question generally did not mention a specific candidate but instead “appeared to focus on amplifying divisive social and political messages across the ideological spectrum—touching on topics from LGBT matters to race issues to immigration to gun rights” (Stamos 2017). What is not known is how many users were exposed to these ads and pages, but one estimate, based on an average cost of $6 per 1,000 page impressions, suggests that $100,000 would ensure that the ads would be seen nearly 17 million times (Collins, Poulsen, and Ackerman 2017). These revelations are significant for three reasons. First, they show how a technical infrastructure designed to serve targeted ads to users for mundane commercial purposes was co-opted to convey political messages. Second, this repurposing of Facebook’s platform was achieved by foreign actors, making it a foreign intervention in a US presidential election. And third, this intervention contravenes a core principle in political advertising—transparency: “political ads are supposed to be easily visible to everyone, and everyone is supposed to understand that they are political ads, and where they come from” (Vaidhyanathan 2017). Another distinctive feature of the 2016 election was the belated realization of the existence and extent of an alt-right ecosystem of linked websites (Albright 2016a) and the “weaponization” of YouTube as a prime distributor of fake news, pro-Trump propaganda, and conspiracy theories about his opponent (Albright 2017). The likelihood is that there is no single overarching factor that accounted for Trump’s victory. 
As Jonathan Albright—the researcher who first mapped the alt-right ecosystem—puts it, the result

wasn’t the fault of the Facebook algorithm, the filter bubble, or professional journalism being completely “out of touch” with the majority of the country. Nor was it the fault of pollsters and statistics geeks who were working with enormous—yet unreliable—sources of data. . . . As the Trump electoral win clearly demonstrates, the topics people discuss with their closest connections and the viewpoints they share in confidential circles trump even the biggest data sets. Especially when the result involves a clear outcome: an election win from a single behavioral tactic: finding people who can be influenced enough to actually go out and vote. (Albright 2016b)

Only a minority of voters relied on Facebook and other social media sites as their most important source of election coverage, compared with the 57% who got their election news from network, cable, and local TV outlets. So any assessment of the importance of fake news has to be based on assumptions about (1) who was exposed to these stories; (2) of those, who believed them; and (3) whose votes were actually influenced by their exposure. Given that, an accurate assessment of the importance of fake news is difficult to achieve.
The interactions between democratic processes and social media in 2016 highlighted the emerging role of the two “pure digital” companies in politics. But they also illustrated the blurring of the boundaries between the exercise of direct power and the indirect power that flows from the ways their platforms and business models enable political actors to achieve results that would have been impossible in a predigital age. In that sense, they are beginning to play a role in democratic politics analogous to that of broadcast media, particularly television, in earlier eras. The leaders of Google and Facebook did not set out to influence the course of the Brexit referendum or of the US presidential election. And yet the activities that their platforms enabled did influence the campaigns—and perhaps the outcomes—in each case. The challenge, therefore, is to find a way of conceptualizing this curious mix of direct and indirect power. Since both forms of power flow ultimately from the nature and affordances of the technological platforms that the companies have built, it seems reasonable to call it platform power.
PLATFORM POWER

platform, n. a scheme, device, plan of action; a scheme of church government or of administrative policy; a party programme: a site: a basis: a raised surface level: a terrace: a plateau: a flooring: a raised floor for speakers, musicians, etc.: a medium for discussion: a deck for temporary or special purpose: a position prepared for mounting a gun: a raised walk in a railway station giving access to trains: a floating installation, usually moored to the sea-bed for drilling for oil, marine research, etc. . . .
—Chambers 20th Century Dictionary (1983)
In common parlance, the term “platform” is a catch-all covering a multitude of uses. Within discourses about digital technology, however, it has acquired a special interpretation that derives from the distinctive affordances of the technology. These are categorized by the security expert Ross Anderson as zero or very low marginal costs, powerful network effects, the dominance of power-law distributions in many networked markets, and technological lock-in (Anderson 2014). To these we may add the capacity for comprehensive surveillance of all networked activities.
These affordances determined the various types of technological platforms that have emerged since the 1970s. The most obvious one was the Internet—the network of networks defined by adherence to the TCP/IP suite of protocols—which emerged from the Arpanet and was switched on in January 1983. A second platform was defined by a dominant PC operating system—Microsoft DOS and, later, Windows—which at one time accounted for over 90% of the desktop computer market. (Similar, but smaller, OS-based platforms were centred on Apple’s MacOS and, later, OS X; and on Linux.) A third platform was the World Wide Web, which was built using the Internet as a foundation, developed in the late 1980s, and went mainstream in the mid-1990s. Then came platforms that were built using the Web as a foundation. Amazon and eBay launched in 1994 and 1995, respectively, with the latter as the first major example of a platform that served as an online intermediary enabling buyers and sellers to find one another and conduct transactions on which it levied a fee.12

A technology platform is not just a product or service but “a powerful ecosystem that scales up quickly and easily” (Simon 2011, 22). Despite their dominance, platforms are still poorly understood, possibly because—as Bratton observes—“they are simultaneously organizational forms that are highly technical, and technical forms that provide extraordinary organizational complexity to emerge, and so as hybrids they are not well suited to conventional research programs. As organizations, they can also take on a powerful institutional role, solidifying economies and cultures in their image over time” (Bratton 2016, 42).

Neither Google nor Facebook was a platform when it first appeared (in 1998 and 2004, respectively). Google was just a search engine and Facebook a social networking site.
But as they grew they metamorphosed into platforms by building—either through internal innovation or corporate acquisition—complex ecosystems designed to capture as much of their users’ attention and online data trails as possible.13 More significantly, perhaps, both companies provide application programming
12. In that sense, eBay was the forerunner of contemporary platforms like Uber and Airbnb.
13. In addition to general search, for example, Google offers a wide range of other services including specialized search tools, webmail, advertising services, mapping, satellite imagery, photo-hosting, online documents (including spreadsheets), blogging, a vast searchable archive of digitized books, language translation, and the Android mobile operating system. Facebook has a narrower but still wide range of offerings in addition to extensions of its core social-networking services. These include photo- and video-hosting, e-mail (Messenger), encrypted chat, telephony and messaging (WhatsApp),
interfaces (APIs)—that is, protocols and tools that enable third-party developers to build applications that can interface with the services offered by the platforms. So, for example, a developer creating a website for a client can embed a Google map showing the location of the client’s premises.
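The map-embedding example can be made concrete. As an illustrative sketch only: a developer might build a request against Google’s Static Maps API, one instance of the third-party API use described above. The endpoint is real; the coordinates, image size, and key below are placeholders, and a valid API key would be required for the request to actually return an image.

```python
from urllib.parse import urlencode

# Build a Static Maps request URL for a given location.
# All argument values here are illustrative placeholders.
def static_map_url(lat, lng, zoom=15, size="400x300", api_key="PLACEHOLDER_KEY"):
    params = {
        "center": f"{lat},{lng}",  # map centre as "lat,lng"
        "zoom": zoom,              # zoom level (higher = closer)
        "size": size,              # image dimensions in pixels
        "key": api_key,            # developer's API key
    }
    return "https://maps.googleapis.com/maps/api/staticmap?" + urlencode(params)

# Hypothetical client premises in central London.
url = static_map_url(51.5074, -0.1278)
```

Embedding this URL in an `<img>` tag on the client’s website is all that is needed; the platform owner, not the developer, renders the map. This is what makes an API both empowering for developers and a point of control for the platform.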
Generativity and Its Discontents
One of the most important features of a digital platform is its generativity—its ability “to create, generate or produce new output, structure or behaviour without input from the originator of the system” (Financial Times 2017). Generativity is important because it provides a measure of the capacity of the platform to foster innovation—and perhaps disruption (Zittrain 2008). It is also relevant to assessing the power of platform owners—and their responsibilities.

A survey of the generativity of technological platforms that have emerged in recent decades shows that they occupy a broad spectrum that runs from zero or near-zero generativity—they are proprietary and closed: nothing happens on the platform that is not explicitly approved by its owner14—to, at the other extreme, platforms that are completely open, with no proprietor or controlling interest. Early platforms like those operated by CompuServe and AOL were examples of low generativity. At the opposite end of the spectrum were the early Internet and the World Wide Web, which were both designed to enable “permissionless innovation” (van Schewick 2010) and were highly generative.

Two points about generativity are worth noting. First, it is not necessarily just a function of proprietorship. For example, as Zittrain points out, the IBM PC (and its successors) was a highly generative platform even though the operating system was owned and controlled by Microsoft. Anyone with the requisite software skills could create any program they wanted, and as long as the software conformed to the syntax of the programming language in which it was written, the PC would faithfully execute the coded instructions. Microsoft’s permission was not required (and of course the company did not accept any responsibility for any social or other effects that the software might have).
live video streaming, advertising services, and photo-sharing (Instagram). See https://en.wikipedia.org/wiki/List_of_Google_products and https://en.wikipedia.org/wiki/List_of_Facebook_features#Platform for more detail (accessed August 20, 2017).
14. A good example is the Apple App Store.
Second, the generativity of a platform may change over time. The early Internet was highly generative not just because of its innovation-friendly architecture but also because the predominant device via which people accessed the network was the PC. This began to change in 2007, with the introduction of the smartphone (a tethered, tightly controlled information appliance with “no user-configurable parts”15) and with the increasing dominance of the digital giants.

For companies like Google and Facebook, generativity is vital for a number of reasons: it represents and encourages user engagement (and the concomitant data resulting from it); it boosts network effects, enticing new users onto the platform and retaining existing ones; and it keeps user-generated content flowing. But, as we have seen, it also has its downsides—search algorithms being gamed by Holocaust-denying websites; Facebook news feeds being used to disseminate fake news; distressing and/or criminal activities being streamed to Facebook Live (Naughton 2017a); and so on.

“As a foundational element, platforms therefore enable a degree of control over what is constructed upon them, while simultaneously relying upon the unknown and relatively unpredictable things assembled atop them” (Williams 2015, 220). What this implies is that the platform power possessed by Google and Facebook creates strange kinds of “negative externalities” analogous to the pollution produced by, say, conventional industrial plants. The costs of the pollution are borne not by the polluter but by society at large. But whereas environmental pollution affects the planet’s biosphere, the externalities produced by the two digital giants affect the public sphere and what has come to be known as the attention economy, and it is to this that we now turn.
THE ATTENTION ECONOMY
The quotation from Herbert Simon that heads this chapter is dated 1971. Among other things, it demonstrates the great economist’s prescience. When he wrote the article from which the quotation is taken, the Arpanet— the military precursor to the Internet—was operational but had not yet
15. In October 2016 the number of Internet users accessing the Net via a mobile device (smartphone or tablet) exceeded the numbers doing so via conventional computers for the first time. This trend seems set to continue. https://www.theguardian.com/technology/2016/nov/02/mobile-web-browsing-desktop-smartphones-tablets (accessed August 21, 2017).
been completed (Hafner and Lyon 1998, 223; Naughton 1999, 160). The first mobile network was six years away (Agar 2004, 37). The first digital camera using a charge-coupled sensor did not appear until 1975.16 And the Internet (in the sense of the TCP/IP-based network that we use today) was fully 12 years away. So the full import of the digital abundance that we now experience must have been hard to imagine at the beginning of the 1970s. And the same goes for the extent to which attention would become such a scarce resource. Yet it has come to pass.

Ever since the advent of mass media, advertising has been a prime mover of commerce, and advertising is fundamentally about capturing people’s attention with the aim of influencing their behavior or attitudes (Wu 2016). In that sense we now live in an “attention economy.” The transition from a world where information was scarce to one in which it is superabundant is one of the signal achievements of digital network technology. And it is—to use a programming metaphor—both celebrated as a feature (Benkler 2007, 59–90) and deprecated as a bug (Andrejevic 2013). In actual fact it is both, and as such it has defined the marketplace in which our two “pure digital” companies operate.

Superabundance of information (or “content”) was what created the need for search engines in the first place; and it was the market opportunity that was eventually exploited by Google—which had both better technology than its rivals (initially the PageRank algorithm, augmented later by a vast trove of user data and its global computing infrastructure) and an overweening corporate ambition (“to organize all of the world’s information”). Google’s arrival in 1998 was a seminal moment in the evolution of the Internet, to the extent that a digital archaeologist might see the history of the network as divided into two eras: BG (Before Google) and AG (After Google).
In terms of the analytical framework outlined in chapter 1, search turned out to be a winner-takes-all market, and Google effectively took all. But with this victory came remarkable powers. Some of these were the familiar concomitants of monopoly. But some were genuinely new. As Google’s market dominance grew, for example, it became clear that, for many enterprises, figuring prominently in Google search results was a precondition for survival. A predictable arms race developed between the company and the search engine optimization (SEO) industry, whose techniques aimed at gaming the PageRank algorithm.17
16. https://en.wikipedia.org/wiki/Digital_camera (accessed July 2, 2017).
17. Actually a suite of algorithms; but the singular is used here for convenience.
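The core idea of PageRank mentioned above can be illustrated with a minimal sketch. This is a simplified power-iteration version, not Google’s actual implementation (which, as noted, is really a suite of algorithms); the damping factor of 0.85 is the value commonly cited from Brin and Page’s original paper. The intuition: a page’s score is the stationary probability that a “random surfer” lands on it, following links with probability d and jumping to a random page otherwise.

```python
import numpy as np

def pagerank(outlinks, d=0.85, iterations=100):
    """outlinks[j] lists the pages that page j links to."""
    n = len(outlinks)
    M = np.zeros((n, n))  # column-stochastic link matrix
    for j, targets in enumerate(outlinks):
        for i in targets:
            # Page j spreads its score evenly across its outgoing links.
            M[i, j] = 1.0 / len(targets)
    r = np.full(n, 1.0 / n)  # start from a uniform distribution
    for _ in range(iterations):
        r = (1 - d) / n + d * M @ r
    return r

# Three pages linking in a cycle (0 -> 1 -> 2 -> 0) rank equally.
scores = pagerank([[1], [2], [0]])
```

SEO “gaming” of this scheme amounts to manufacturing inbound links to inflate a page’s score, which is precisely what Google’s algorithm updates, discussed next, were designed to counter.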
Google’s response to the SEO challenge took the form of 500–600 minor periodic tweaks to the algorithm per year plus occasional major updates.18 The results of those major changes were seismic shocks (“Google waves”) that reverberated around the online world. Companies that had built up substantial businesses because customers found them via Google searches suddenly discovered that they had apparently disappeared from cyberspace, with predictable financial consequences. They had not disappeared from the search results, of course, but had merely been relegated to perhaps several screens beyond the first page of results. Given that most users rarely venture beyond the first page, that effectively meant being consigned to oblivion. What was equally traumatic was that there appeared to be no redress for such misfortunes. Google had not intentionally damaged these businesses; it had merely adjusted its algorithm to combat attempts to game it. But the power to render people effectively invisible, which Google had implicitly acquired, was to have consequences far beyond the realm of e-commerce.

Clay Shirky’s (2008) aphorism—“It’s not information overload. It’s filter failure”—highlights the centrality of Google in the attention economy. Given the vastness, diversity, and richness of the web, Internet users employ search as a way of making that diversity manageable. Google’s dominance in search implies that people’s search queries—both individually and collectively—yield unparalleled insights into the topics that they (and the societies to which they belong) are paying attention to. On a societal level, this capacity to take the pulse of the planet, as it were, was captured by the company’s adoption of “Zeitgeist” as the name for its summary of the most popular search terms, organized by week, month, and year as well as by topic and country.
Zeitgeist, after all, means “spirit of the age”—the defining spirit or mood of a particular period of history as shown by the ideas and beliefs of the time.19 In the early days of the company, this might have seemed like hubris, but, given its current dominance of the search market, the record of what people search for on Google may well provide the best guide available to what is on their minds at any given point in time—especially if, as some experts contend, users’ Google searches are more candid and revealing than their replies to social surveys (Stephens-Davidowitz 2017). This high-granularity database of human online activities, acquired over a decade and a half, is not only
18. “Google Algorithm Change History,” https://moz.com/google-algorithm-change (accessed July 17, 2017).
19. The term Zeitgeist for these lists was dropped in 2007 (and adopted for a series of annual conferences organized by the company), but a curated record of the most popular searches on Google continues to be published online as “Google Trends” (https://trends.google.com/).
unparalleled, but is arguably the company’s most valuable asset. The only other company with a comparable asset is Facebook.
Three Dimensions of Attention
Discussions about the attention economy generally treat attention as a uniform good. This may be reasonable for macroeconomic analysis, but it is too crude if we wish to understand the power that dominance of the online attention economy confers on its two most important companies. There is a significant body of empirical research on the neurophysiology of attention (Nobre and Kastner 2014), but between the science and what we observe in online behavior there currently remains a gap, which has mostly been filled by anecdote and/or speculation (as, for example, in Carr 2011). In a recent essay, James Williams argues, “attention” in its broadest sense “extends far beyond what cognitive scientists call the ‘spotlight’ of attention, or our moment-to-moment awareness,” and “ultimately, it converges on conceptions of the human will” (Williams 2017). Noting that most of the public discourse about the impact of social media focuses almost exclusively on the “spotlight” phase, he proposes a useful taxonomy that distinguishes between three different kinds of attention, which he calls spotlight, starlight, and daylight. The first determines our immediate awareness and capacities for action toward practical tasks; the second concerns our “broader capacities for navigating life ‘by the stars’ of our higher goals and values”; and the third concerns “fundamental capacities—such as reflection, metacognition, reason, and intelligence—that enable us to define our goals and values.”

Since the business model of surveillance capitalism involves intense competition to capture attention, and since humans appear to be highly vulnerable to distraction, most of the corporate effort is devoted to creating addictive distractions that maximize the “user engagement” on which revenue and profits ultimately depend; in other words, the industry is overwhelmingly focused on providing services and products in the “spotlight” zone (Eyal 2014).
But since, for every individual, the amount of available attention is inescapably finite, competition for individuals’ attention is a zero-sum game. The more attention is devoted to one zone, the less is available for the other two. Williams uses this inescapable reality to argue that the current preponderance of social media in people’s online lives poses a serious threat to liberal democracy because these technologies “privilege our impulses over our intentions.”
As information technologies have enveloped our lives, they’ve transformed our experiential world into a never-ending flow of novel attentional rewards. The ubiquity, instantaneity, and randomized delivery of these rewards has imbued our technologies with a distinctly dopaminergic character: it’s quite literally turned them into informational “slot machines.” Like regular slot machines, the benefits (i.e. “free” products and services) are upfront and immediate, whereas the attentional costs are paid in small denominations distributed over time. Rarely do we realize how costly our free things are. (Williams 2017)
What this brings to mind are the rival dystopian visions of two 20th-century British writers—George Orwell and Aldous Huxley. Orwell warned that humanity would be overwhelmed by externally imposed oppression. Huxley, on the other hand, thought that no Big Brother would be required: our “almost infinite appetite for distractions” would be sufficient. What Orwell feared, observed the cultural critic Neil Postman, “were those who would ban books. What Huxley feared was that there would be no reason to ban a book, for there would be no one who wanted to read one” (Postman 1986, vii).

Implicit in Williams’s argument is a model of deliberative democracy in which citizens appreciate the importance of intangible values like tolerance, empathy, collegiality, civility, evidence, and reason. He sees many of these being undermined by networked technology and increasingly frantic attempts by Internet companies to capture our attention. Critics will respond that this conception of democracy is an “ideal type” that has yet to be realized in practice. But some of the symptoms of the malaise that Williams diagnoses have become strikingly evident in the last few years—for example, in addiction to smartphone apps, in Internet users’ appetite for distractions and clickbait, in shrinking attention spans and growing political polarization, and in the trolling, misogyny, cruelty, and hate speech that proliferate online. Digital technology is not the root of all of these evils, but it is assuredly playing a role in the cultural changes that societies are now experiencing. And for that reason the powerful companies that dominate cyberspace have responsibilities that they have been reluctant to acknowledge.
CORPORATE RESPONSIBILITY
There are various reasons for their reluctance. One is the concern that accepting editorial responsibility for what is communicated via their platforms would undermine their status as mere intermediaries and make
them legally liable for content. Like other Internet companies they have been absolved from that responsibility since 1996 by a clause (Section 230) inserted into the US Communications Decency Act, which stipulates, “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider” (Naughton 2017b).

But perhaps a more compelling reason for the companies’ reluctance to accept responsibility for the impact of their platforms on democratic processes is that doing so would risk undermining their business models. For, as observed earlier, both are practitioners of surveillance capitalism: they provide free services to users in return for the right to mine, refine, and exploit their data for targeted advertising and other purposes (Zuboff 2015). Since the “resources” thus extracted are generated solely by users, there is a constant imperative to increase the level of “user engagement”—in other words, to capture more and more of users’ attention. The tension between this imperative and acknowledging the negative externalities that it involves is palpable.

What seems to be missing in both companies is a mature understanding of what corporate social responsibility involves. In the past, their founders appeared to believe that slogans (“Don’t Be Evil”) and lofty aspirations (e.g., to “Build a Global Community”) were sufficient to signal their corporate benevolence. This naiveté may be explained by the relative youth of the companies, their precipitous expansion, their colossal commercial success, and the worship of “creative destruction”—the prevailing ideology of Silicon Valley. The corporate doctrine of the primacy of maximizing shareholder value has no doubt also played a part.
But now that Google and Facebook have become major global corporations, a radical reassessment of their roles in society is overdue, and with it an appreciation of the responsibilities that accompany the power that they have acquired.

What kinds of responsibilities need to be recognized? Ethicists would see two kinds, differentiated by agency and intentionality. First of all, there is the “outcome responsibility” that accompanies the intentional exercise of direct power. In the case of the digital giants under discussion this would involve issues like possible abuses of market dominance; algorithmically driven price or customer discrimination (Ezrachi and Stucke 2016); unreasonable End User Licence Agreements (EULAs); transparency (or lack thereof) about the exploitation of customer and user data; whether gender and ethnic diversity are being treated as objectives rather than aspirations; the extent of company cooperation with security agencies; and so on. These are issues that will be familiar to the board of any modern corporation, and there is nothing conceptually difficult about them. But Google and
Facebook have a second kind of responsibility—what ethicists call “obligation responsibility.”20 This is the responsibility that comes with being in a special position (examples from everyday life would be parent, lawyer, accountant, physician, etc.). In such cases the agent is responsible not only for the consequences of their own actions but also for preventing damage to others, whether or not this damage is the result of some action of the agent. Thus a physician who comes upon a victim of a traffic accident has an obligation to offer help, even if the accident was not the result of his or her actions.

In the corporate case, this suggests a need for a shift in perspective: while outcome responsibility focuses attention on actions and the damage they inflict, obligation responsibility focuses on the vulnerability of what exists in an environment in which a corporation occupies a special position. And the discharge of obligation responsibility can require proactive action. So while Google and Facebook may not have outcome responsibility for the effects their platforms are having on democracy, the question to be asked is whether they should not accept some (obligation) responsibility for the political system (in the way that many companies today accept “corporate social responsibility” for the environment, say). If they do not, then they will continue to exercise what a British prime minister21 once famously described as “the prerogative of the harlot throughout the ages”—power without responsibility.
REFLECTIONS
In a way, the story of this chapter is an old one. There have always been moral panics about the impact on democracy of new communications technologies. In the 19th century, the arrival of cheap newspapers enabled large numbers of partisan publications with large circulations—and only a passing acquaintance with the facts (Wu 2016, 17)—to flourish, leading to fears that the resulting cacophony of opinion and news would reduce the capacity of the press to hold power to account.22

In the 20th century, as first radio and later television achieved dominance, people worried that politics would be reduced to sound bites and personalities, that charismatic or physically attractive figures would
20. In the UK it is often known as the duty of care.
21. Stanley Baldwin (1867–1947).
22. “Freedom of the press,” observed the British drama critic Hannen Swaffer, “is freedom to print such of the proprietor’s prejudices as the advertisers won’t object to.”
have advantages over less striking but more competent candidates, and that political power would gravitate to the owners of broadcasting networks (Postman 1986). In the early years of the current century, democratic anxiety shifted to the unregulated (unregulable?) blogosphere and the homophily that led people to trap themselves in filter bubbles (Pariser 2011) and echo chambers (Sunstein 2009) that would insulate them from alternative philosophies and lifestyles, increase political polarization, and lead to “enclave extremism.”

This chapter has been drafted in a period when public concerns have moved to focus on the political impact of social media and the power that this confers on the two companies that have become the dominant actors in the attention economy.

In his history of the major communications technologies of the 20th century—telephone, movies, broadcast radio, and television—Tim Wu (2012) discerned a “cycle”: each started out gloriously creative and anarchic; each spurred utopian hopes; and each wound up captured by industrial interests. The big question, Wu said at the end of that volume, was whether the same would happen to the Internet. We now know the answer: the cycle has been repeated, and the technology has effectively been captured by the five corporations that are the subject of this book. The challenge now is to figure out the implications—economic, social, cultural, and political—of this outcome.

What is particularly ironic is that the Internet—a network architecture designed with an intrinsic capacity for facilitating “permissionless innovation”—should have been captured so successfully. After all, in one sense nothing has changed: the technology retains all its initial potential for empowering users, liberating their creativity, enabling the explosion of peer-production celebrated by Yochai Benkler in his landmark book (Benkler 2007).
And yet this potential remains largely unrealized, and much current Internet use involves passive consumption of streaming media content, obsessive use of addictive smartphone apps, and what the critic Nicholas Carr calls “digital sharecropping” (Carr 2012).

The two companies that have captured most of the value from these uses of the Internet have acquired great market power—to the point (as observed in chapter 1) that it is difficult to imagine them being displaced in the foreseeable future, at least by competitive forces. Some aspects of their power—for example, market dominance—are relatively conventional, and in some cases are already being tackled using established legal tools like antitrust legislation. But other aspects of their power are—as we have seen—harder to pin down. The essential point about both Google and Facebook is that their scale and market dominance give them—as a kind of byproduct—powers
that are in many cases unique or unprecedented. This is power they did not explicitly seek, and for which they—and society—are largely unprepared, and for which there are at present few antidotes or controls. This is largely because their platforms, in their present forms, empower other actors who have political, ideological—and perhaps geopolitical—motivations, and thereby enable unexpected power-shifts. The experience of the last decade and a half shows that the tools that Google and Facebook built for conducting their core business of targeted advertising can be used not only for socially productive purposes (see, for example, Margetts et al. 2016) but also to damaging effect—for example, in polluting the public sphere.

We are still in the early stages of this development, and the obvious analogy is with the arrival of television as a campaigning tool in the 1960s. As numerous commentators have chronicled (for example, Fallows 1997; Postman 1986), television dramatically changed the conduct and nature of politics in most democracies, starting with the 1960 US presidential campaign. We are now confronted with the question of whether, in the 21st century, technology platforms will have a similar impact on our polity.
ACKNOWLEDGMENTS
I am grateful to Gerard de Vries, Damian Tambini, Martin Moore, and Nikola Belakova for their extremely helpful comments on various drafts of this chapter.
REFERENCES
Agar, Jon. 2004. Constant Touch: A Global History of the Mobile Phone. Cambridge: Icon Books.
Albright, Jonathan. 2016a. “Left + Right: The Combined Post-#Election2016 News ‘Ecosystem’.” Medium, December 11, 2016. https://medium.com/@d1gi/left-right-the-combined-post-election2016-news-ecosystem-42fc358fbc96.
Albright, Jonathan. 2016b. “How Trump’s Campaign Used the New Data-Industrial Complex to Win the Election.” LSE United States Politics and Policy (blog), November 26, 2016. http://blogs.lse.ac.uk/usappblog/2016/11/26/how-trumps-campaign-used-the-new-data-industrial-complex-to-win-the-election/.
Albright, Jonathan. 2017. “FakeTube: AI-Generated News on YouTube.” Medium, January 17, 2017. https://medium.com/@d1gi/faketube-ai-generated-news-on-youtube-233ad46849f9.
Allcott, Hunt, and Matthew Gentzkow. 2017. “Social Media and Fake News in the 2016 Election.” Journal of Economic Perspectives 31, no. 2: 211–36.
[ 392 ] Politics
Anderson, Ross. 2014. “Privacy versus Government Surveillance: Where Network Effects Meet Public Choice.” http://weis2014.econinfosec.org/papers/Anderson-WEIS2014.pdf.
Andrejevic, Mark. 2013. Infoglut: How Too Much Information Is Changing the Way We Think and Know. New York and Abingdon: Routledge.
Bell, Emily, and Taylor Owen. 2017. “The Platform Press: How Silicon Valley Reengineered Journalism.” Tow Center for Digital Journalism, Columbia University, March 29, 2017. Accessed July 13, 2017. https://www.cjr.org/tow_center_reports/platform-press-how-silicon-valley-reengineered-journalism.php.
Benkler, Yochai. 2007. The Wealth of Networks. New Haven: Yale University Press.
Bond, Robert M., Christopher J. Fariss, Jason J. Jones, Adam D. I. Kramer, Cameron Marlow, Jaime E. Settle, and James H. Fowler. 2012. “A 61-Million-Person Experiment in Social Influence and Political Mobilization.” Nature 489 (September 13). doi: 10.1038/nature11421.
Bratton, Benjamin H. 2016. The Stack: On Software and Sovereignty. Cambridge, MA: MIT Press.
Cadwalladr, Carole. 2016. “Google, Democracy and the Truth about Internet Search.” Observer, December 4, 2016. https://www.theguardian.com/technology/2016/dec/04/google-democracy-truth-internet-search-facebook.
Caldwell, Christopher. 2008. “Network Power That Works Too Well.” Financial Times, May 23, 2008. https://www.ft.com/content/608c3bb4-28e4-11dd-96ce-000077b07658?mhq5j=e1.
Carr, Nicholas. 2011. The Shallows: How the Internet Is Changing the Way We Think, Read and Remember. London: Atlantic Books.
Carr, Nicholas. 2012. “The Economics of Digital Sharecropping.” Roughtype.com, May 4, 2012. Accessed July 18, 2017. http://www.roughtype.com/?p=1600.
Castells, Manuel. 2009. Communication Power. Oxford and New York: Oxford University Press.
Castells, Manuel. 2011. “A Network Theory of Power.” International Journal of Communication 5: 773–87.
Collins, Ben, Kevin Poulsen, and Spencer Ackerman. 2017. “Russia’s Facebook Fake News Could Have Reached 70 Million Americans.” Daily Beast, September 8, 2017. Accessed September 13, 2017. http://www.thedailybeast.com/russias-facebook-fake-news-could-have-reached-70-million-americans.
Dewey, Caitlin. 2016. “98 Personal Data Points That Facebook Uses to Target Ads to You.” Washington Post, August 19, 2016. https://www.washingtonpost.com/news/the-intersect/wp/2016/08/19/98-personal-data-points-that-facebook-uses-to-target-ads-to-you/.
Economist. 2017. “How Online Campaigning Is Influencing Britain’s Election.” Economist, May 27, 2017. https://www.economist.com/news/britain/21722690-social-media-allow-parties-target-voters-tailored-messagesand-cat-videos-how-online.
European Court of Justice: Grand Chamber. 2014. Google Spain SL, Google Inc. v Agencia Española de Protección de Datos (AEPD), Mario Costeja González. May 13, 2014. Accessed July 15, 2017. http://tinyurl.com/y74ndrrq.
Eyal, Nir. 2014. Hooked: How to Build Habit-Forming Products. London: Penguin.
Fallows, James. 1997. Breaking the News: How the Media Undermine American Democracy. New York: Random House.
Ezrachi, Ariel, and Maurice E. Stucke. 2016. Virtual Competition: The Promise and Perils of the Algorithm-Driven Economy. Cambridge, MA: Harvard University Press.
Financial Times. 2017. “Definition of Generativity.” Accessed August 21, 2017. http://lexicon.ft.com/Term?term=generativity.
Gibbs, Samuel. 2016. “Google Alters Search Autocomplete to Remove ‘Are Jews Evil’ Suggestion.” Guardian, December 5, 2016. https://www.theguardian.com/technology/2016/dec/05/google-alters-search-autocomplete-remove-are-jews-evil-suggestion.
Grewal, David Singh. 2009. Network Power: The Social Dynamics of Globalization. New Haven: Yale University Press.
Hafner, Katie, and Matthew Lyon. 1998. Where Wizards Stay Up Late: The Origins of the Internet. New York: Simon & Schuster.
Kramer, Adam, Jamie Guillory, and Jeffrey Hancock. 2014. “Experimental Evidence of Massive-Scale Emotional Contagion through Social Networks.” Proceedings of the National Academy of Sciences 111, no. 24: 8788–90. doi: 10.1073/pnas.1320040111.
Lanchester, John. 2017. “You Are the Product.” London Review of Books 39, no. 16 (August): 3–10.
Lapowsky, Issie. 2016. “Here’s How Facebook Actually Won Trump the Presidency.” Wired, November 15, 2016. https://www.wired.com/2016/11/facebook-won-trump-election-not-just-fake-news/.
Lukes, Steven M. 1974. Power: A Radical View. London and New York: Macmillan.
Margetts, Helen, Peter John, Scott Hale, and Taha Yasseri. 2016. Political Turbulence: How Social Media Shape Collective Action. Princeton: Princeton University Press.
Miller, Greg, and Greg Jaffe. 2017. “Trump Revealed Highly Classified Information to Russian Foreign Minister and Ambassador.” Washington Post, May 15, 2017. https://www.washingtonpost.com/world/national-security/trump-revealed-highly-classified-information-to-russian-foreign-minister-and-ambassador/2017/05/15/530c172a-3960-11e7-9e48-c4f199710b69_story.html?utm_term=.e308e593cd78.
Naughton, John. 1999. A Brief History of the Future: The Origins of the Internet. London: Weidenfeld & Nicolson.
Naughton, John. 2017a. “How Facebook Became a Home to Psychopaths.” Observer, April 23, 2017. https://www.theguardian.com/commentisfree/2017/apr/23/how-facebook-became-home-to-psychopaths-facebook-live.
Naughton, John. 2017b. “How Two Congressmen Created the Internet’s Biggest Names.” Observer, January 8, 2017. https://www.theguardian.com/commentisfree/2017/jan/08/how-two-congressmen-created-the-internets-biggest-names.
Nobre, Anna C., and Sabine Kastner, eds. 2014. The Oxford Handbook of Attention. Oxford and New York: Oxford University Press.
Pariser, Eli. 2011. The Filter Bubble: What the Internet Is Hiding from You. London: Viking.
Postman, Neil. 1986. Amusing Ourselves to Death: Public Discourse in the Age of Show Business. London: Heinemann.
Russell, Bertrand. 1938. Power: A New Social Analysis. London: Allen & Unwin.
Shirky, Clay. 2008. “Interview with Clay Shirky.” Columbia Journalism Review, December 22, 2008. Accessed June 22, 2017. http://archives.cjr.org/overload/interview_with_clay_shirky_par_1.php?page=all.
Silverman, Craig. 2016. “This Analysis Shows How Viral Fake Election News Stories Outperformed Real News on Facebook.” BuzzFeed News, November 16, 2016. https://www.buzzfeed.com/craigsilverman/viral-fake-election-news-outperformed-real-news-on-facebook.
Simon, Herbert A. 1971. “Designing Organizations for an Information-Rich World.” In Computers, Communication, and the Public Interest, edited by Martin Greenberger. Baltimore and London: Johns Hopkins Press.
Simon, Phil. 2011. The Age of the Platform: How Apple, Amazon, Facebook and Google Have Redefined Business. Las Vegas: Motion Publishing.
Sollenberger, Roger. 2017. “How the Trump-Russia Data Machine Games Google to Fool Americans.” Paste, June 1, 2017. https://www.pastemagazine.com/articles/2017/06/how-the-trump-russia-data-machine-games-google-to.html.
Stamos, Alex. 2017. “An Update on Information Operations on Facebook.” FB Newsroom, September 6, 2017. Accessed September 7, 2017. https://newsroom.fb.com/news/2017/09/information-operations-update/.
Stephens-Davidowitz, Seth. 2017. Everybody Lies: What the Internet Can Tell Us about Who We Really Are. London: Bloomsbury Publishing.
Sunstein, Cass. 2009. Republic.com 2.0. Princeton: Princeton University Press.
Taplin, Jonathan. 2017. Move Fast and Break Things: How Facebook, Google and Amazon Have Cornered Culture and What It Means for All of Us. London: Macmillan.
Vaidhyanathan, Siva. 2017. “Facebook Wins, Democracy Loses.” New York Times, September 8, 2017. https://www.nytimes.com/2017/09/08/opinion/facebook-wins-democracy-loses.html.
Van Schewick, Barbara. 2010. Internet Architecture and Innovation. Cambridge, MA: MIT Press.
Weber, Max. 1968. Economy and Society. Translated by Guenther Roth and Claus Wittich. New York: Bedminster Press. Originally published in 1921.
Williams, Alex. 2015. “Control Societies and Platform Logic.” New Formations 84–85: 209–27.
Williams, James. 2017. “Stand out of Our Light: Freedom and Persuasion in the Attention Economy.” Nine Dots Prize essay. https://ninedotsprize.org/extracts-stand-light-freedom-persuasion-attention-economy/.
Wu, Tim. 2012. The Master Switch: The Rise and Fall of Information Empires. London: Atlantic Books.
Wu, Tim. 2016. The Attention Merchants: The Epic Scramble to Get inside Our Heads. New York: Knopf.
Zittrain, Jonathan. 2008. The Future of the Internet—and How to Stop It. New Haven: Yale University Press.
Zuboff, Shoshana. 2015. “Big Other: Surveillance Capitalism and the Prospects of an Information Civilization.” Journal of Information Technology 30, no. 1: 75–89. doi: 10.1057/jit.2015.5.
Zuckerberg, Mark. 2016. Facebook status update, November 16, 2016. Accessed July 15, 2017. https://www.facebook.com/zuck/posts/10103253901916271?pnref=story.
Conclusion
Dominance, the Citizen Interest and the Consumer Interest
DAMIAN TAMBINI AND MARTIN MOORE
Liberal democracy is based on the complementary principles of individual autonomy and popular consent. Periodic elections are accompanied by a constant process of free and open deliberation, which permits processes of individual and collective will formation. Any forms of power and control that undermine autonomy and liberty to engage in that process of will formation risk making these principles appear more as rhetorical aspirations and less as genuine institutions of legitimate self-government. This volume has explored some of the far-reaching questions that are being asked about the implications of one such form of power: digital dominance. It has also touched on the economic basis of liberal democracy: market capitalism. The dominance of a few large software “platform” companies, per se, is not necessarily a problem from the point of view of competition and antitrust law. According to the current paradigm in competition law, companies like the tech giants discussed in this book are chosen by a large proportion of consumers and should not be punished for their success. While competition law does attempt to prevent the emergence of dominant positions by mergers, and the abuse of a dominant position to the consumer’s detriment, the fact of dominance itself and its wider social and political implications have been until recently somewhat neglected both in the practice of regulation and in the wider debate.
Analysis in this volume and elsewhere has focused on the implications, for welfare analysis in economics and for competition regulation, of the multisided nature of digital platform markets and their other distinctive economic features. The fact that operators may tolerate low returns on one side of the market in order to exploit a dominant position on another, for instance, has created challenges for traditional competition analysis, as Lina Khan and Patrick Barwise and Leo Watkins point out. But recent analyses of social media suggest that analysts of digital dominance should also acknowledge that these are not, in any traditional sense of the word, markets at all: consumers are the main providers of services (such as free “digital labor”) on these platforms and are involved in a process of cocreation (Benkler 2016) with platforms (Fuchs 2015). These multisided markets are based on information provided freely by consumers in a conscious or unconscious exchange for free services, and this information in turn provides a barrier to switching: in the absence of enforceable data portability, platforms face strong pressures to monetize and control the data, and by extension to control the subjects of that data. For all these reasons, while the platforms can achieve powerful positions as “essential services,” they are not subject to the usual competitive pressures and “creative destruction” that have characterized previous periods of commercial dominance. This “surveillance capitalism” (Zuboff 2015) represents a new form of capitalism, and as such it requires a new regulatory and legal approach. Competition law and policy does not deal with the wider social and political issues of digital dominance, such as those related to democracy and the fundamental rights and freedoms of individuals, and in its current form we should not expect it to.
If new political and social issues emerge from the new dominance of some digital platforms, we need new public policies and laws designed to mitigate them. We are at the beginning of the debate about what those laws and policies should be.
POWER
Pegging the critique of platform dominance to a concept as notoriously slippery and contested as “power” has potential pitfalls. Epstein’s analysis suggests that Google has the power to swing election results, though this does not mean that Google executives or algorithms actually exercise this power. The concepts needed to analyze, and in time check, this new power, as John Naughton outlines, will need to be based on analysis of how it is deployed. The question of whether it matters that power is exercised
by machines or humans, or whether that exercise is intentional/deliberate or self-interested, needs more investigation, but it is certainly the case that humans currently tend to object more to humans exercising power over them than they do when bureaucracies or machines or what Lynskey describes as platforms’ “power of providence” constrain their options and even motives for action. In competition analysis the notion of market power has a much more precise and operationalized meaning, involving the “power” of a dominant actor to determine prices and vary quality to the detriment of consumers. This is “power” as measured by price elasticities rather than with reference to the value of human autonomy or the integrity and legitimacy of democratic forms of government. Currently, the former notion of power provides some certainty and direction to regulators, but it should not be confused with the much more inchoate and far-reaching notion that the social and political power of intermediaries also needs to be limited and checked. The chapters in this volume thus go far beyond a concern with market power. In this sense the rising public and political concern with digital dominance is not merely a concern with dominance of a particular market; it is a concern with user autonomy, user agency, and the power of platforms to impact the decision-making of consumers and citizens through profiling, information control, and behavioral nudges. One of the difficulties that policy has in addressing this form of power is that the combination of data-driven profiling with targeted social media propaganda techniques (Hillje 2017; Kreiss 2016) is new, and existing checks and balances, for example on media, on campaign spending, or on media plurality (Tambini and Labo 2016), have difficulty responding to them.
Power per se is rarely the object of formal policy, and the large structural and constitutional questions regarding power, such as the separation of church and state, for example, and the separation of the fourth estate referred to in this volume by Emily Bell, are not only intangible and intractable but also have often been resolved in long-term historical processes (Pickard 2015). So the policy response to platform power may combine all the various levels of governance: reform of competition and antitrust law, self-regulatory restraint on the part of platforms themselves, and sector-specific regulation for public interest goals. Schlosberg and Naughton both take Steven Lukes’s (1974) theory of a third dimension of power, which has become central to the critique of mass media, and apply it to social media. It is the power of media, including social media, to alter preferences and nudge actors toward certain courses of action (for example to vote, to not vote, or to make a purchase) that is of concern. This aspect of digital dominance over individual behaviors and decisions is quite separate
from, and in addition to, market dominance, or the exercise of traditional corporate lobbying power, as John Naughton discusses. As Lukes himself pointed out, the fact that such power acts on preferences—the shaping of the attitudes and intentions of actors themselves—makes it invisible to approaches (polls for example) that treat preferences as given. Digital platforms possess the ability to shape and control the flow of information, ideas, and views through society, including the potential to filter out or amplify certain messages or voices, as we have seen with platforms that have developed self-regulatory approaches to take down hate speech such as Nazi campaigns. This ability appears to have been exercised thus far with restraint, but it is nonetheless a crucial source of power and there have been instances (in relation to net neutrality and copyright reform, for example) when platforms have shown themselves quite ready to wield it in their own interest.
THEORY OF HARM AND DIAGONAL INTEGRATION
In economic and legal terms, and in terms of fundamental rights, platform dominance is a genuinely new problem. Part of the challenge is that public policy has so far approached it with laws, institutions, and concepts designed for another age. As the chapters in this volume show, institutions with the economic characteristics of network externalities, multisided markets, lock-in, and welfare and competition effects not allowed for in standard economic analysis are simply not captured by current frameworks. In particular, the critical role of personal data in permitting large platform companies to leverage market power into previously unrelated markets has created challenges in competition analysis. Traditionally, economists have analyzed processes of horizontal integration of companies buying similar competitors, and vertical integration along the value chain; but it is the particular challenge of the competitive effect of monopolizing various forms of personal data, which permits new forms of “diagonal integration” and new forms of market foreclosure, that creates novel competition problems at the intersection of data protection and competition law, as Orla Lynskey and Inge Graef point out. Existing regulatory frameworks have led to a piecemeal approach. Fundamental rights, in particular the right to privacy and to free expression, do provide something of a foil against the excesses of digital dominance. Competition law, while it has been whittled back to the narrowest consumer-oriented notion of the public interest, does have the potential for further development. Articles 8 and 10 of the European Convention on
Human Rights have been instrumental in developing a regulatory schema for data (the General Data Protection Regulation), which many argue will have a transformative effect on the emerging data ecology, if it is properly implemented. Freedom of expression rights have also been instrumental in the development of the “first settlement” (Tambini, Leonardi, and Marsden 2008, 120) for Internet content, that is, in section 230 of the Communications Decency Act in the United States, and its counterpart in the E-Commerce Directive intermediary liability provisions for the EU. This settlement, which was devised as a means to permit maximum innovation and openness in the embryonic Internet sector, has also been wedded to a particular market-driven notion of negative freedom of expression, which resonates with the developing law of freedom of expression under the European Convention. Delegating regulatory functions to private actors, including platforms, has therefore been seen as harmonious with the Convention, whereas regulation of Internet functions by public authority would have been seen as potentially conflicting with the Convention under Article 10. Thus the slide toward privatized regulation discussed by Ben Wagner introduces a dynamic asymmetry, a spiral, or even a ratchet effect. Content regulation is delegated to platforms, but even when those private actors provide inadequate due process rights of appeal and transparency, public authorities cannot reappropriate those delegated regulatory functions, because to do so would infringe free speech. The delegation of control over speech flows in one direction: toward the platforms, giving them ever-more power and control over the circulation of ideas, deliberation, and, in Habermasian terms, will formation. Traditional approaches to freedom of expression have tended to be more concerned with the presence of public authorities and the state in the space of speech than with the control exerted by private actors.
The general concern with freedom as the means to facilitate “the search for truth” (Mill 1984 [1859]; O’Neill 2013) neglects one important point, which is confirmed by the research of Zollo and Quattrociocchi in this volume. Namely, as Onora O’Neill has noted, negative freedom (the absence of state control of speech) does not in any sense guarantee that the truth will emerge. As Zollo and Quattrociocchi show, social media platforms, for example, seem to be facilitating misinformation, and the domination of a small number of platforms may exacerbate this problem. A sufficient response to platform dominance requires not a piecemeal approach, based separately on privacy, freedom of expression, and inequality as separate domains of risk, but a holistic approach, based on an integrated notion of power. Data, and in particular personal data, is a critical
source of power in our new surveillance society. This cannot adequately be addressed as a competition law problem but requires new concepts relating to the public policy implications of individual information control, and its relationship to autonomous decision-making not only by consumers but also by citizens. Diakopoulos, Epstein, and Tambini’s examination of the role of dominant social media in elections points to a crucial new potentiality. The possibility that platforms may have a latent power to affect the outcome of elections in mature liberal democracies arguably changes everything, because it directly impacts the fundamental source of legitimacy of the liberal democratic state. Whether this potential power is in fact exercised is a secondary question: the effect on legitimacy of the opaque gatekeeping power and potential for targeted propaganda is fatal in itself. What is clear is that the combined powers of competition analysis and the wider social sciences and political philosophy do need to come up with a much more adequate “theory of harm” with reference to the implications of digital dominance, and that this will incorporate harms based on (1) fundamental rights, incorporating human freedom and autonomy, including freedom of thought and opinion; (2) social goods, such as the legitimacy of democratic self-government; and (3) the narrowly economic benefits contained in notions of “consumer welfare” and traditionally protected by antitrust. This theory of harm in the broadest sense should be used to articulate a new set of wider public policy objectives to deal with the broader issues raised by the “tech giants” (Moore 2016), and also within traditional competition law and policy when investigating such things as the emergence and potential abuse of a dominant position. 
As Sunstein (1994, 2000) has repeatedly pointed out, and as Khan explores in this volume with reference to the example of Amazon, these wider social goods and benefits are not protected by current forms of competition and antitrust law, which reduce the individual interest to the consumer interest and have nothing to say about autonomy, opinion formation, or democratic legitimacy. It is time to be more alert to the fact that dominant platforms, like the press and broadcasting before them, are new phenomena, which require sector-specific responses of a kind that has not been developed before. The expectation that competition law will be able to deal with the social and political implications of digital dominance is based either on an underestimation of those implications or on a misunderstanding of the intent and content of competition law. As Helberger points out in relation to media pluralism, and Lynskey in relation to personal data, current paradigms of competition law fail to deal with policy challenges related to privacy, data, and targeting.
THE RESPONSE
As Diane Coyle argues, there are some aspects of platform regulation that the platforms will rightly be expected to perform themselves, and indeed they would be the actors best placed to achieve effective consumer protection. But many of the wider social and political concerns raised in this volume are, in economic terms, social and political externalities, and so would create no particular incentive, or appeal to enlightened self-interest, for the platforms to regulate themselves. Thus the impact of platforms on the circulation of “truth” in society, through the institution of journalism and the notion of a “marketplace for ideas,” is increasingly controversial, as discussed by Emily Bell and Natali Helberger in this volume. As Patrick Barwise and Leo Watkins illustrate, there are significant doubts regarding whether platforms themselves, and the market as a whole, can be relied on to resolve the wider political and social outcomes associated with the emergence of dominant platforms. It is apparent that the fact of dominance itself constitutes a public policy concern, and that governments and policymakers need urgently to respond. Countries around the world, and international organizations, have indeed begun to take the problem of digital dominance seriously in recent years. Numerous parliamentary and public inquiries have attempted to address both the competitive and the political and social implications of platform dominance. In France, the Conseil d’Etat (2014)1 concluded, “platforms’ . . . role as intermediaries gives platforms both economic power and influence, which have a significant impact on how third parties exercise their freedoms and raises unprecedented questions for the public authorities.” The subsequent Loi sur la Numerique was expected to deal with some of these issues but in fact was restricted to narrower issues of social media, copyright, and net neutrality.
It is the unprecedented nature of the challenge of platform power that resonates through this volume. Many of the public responses so far address one aspect of the problem: the UN special rapporteur on freedom of expression began in 2013 to grapple with the perils of private censorship that Wagner illustrates (UN 2011), and the Council of Europe, as the body responsible for protecting fundamental rights, has now issued not one but several communications and recommendations that states should set new standards that address the implications of platforms for fundamental rights such as freedom of expression (MSI-NET 2017). But the chapters in
1. http://www.conseil-etat.fr/Decisions-Avis-Publications/Etudes-Publications/Rapports-Etudes/Etude-annuelle-2014-Le-numerique-et-les-droits-fondamentaux
this volume confirm that the problem of digital dominance must be subject to holistic, political economy analysis. The power of platforms not only incorporates their actual and potential implications for freedom of speech, journalism, privacy, surveillance, and market power but also has a sui generis quality that links all of these concerns. This is part of the reason that the somewhat piecemeal policy responses in each of these separate sectors have had only peripheral effect. Obliging Google to fund news and journalism through a levy, or putting moral pressure on it to develop open and transparent ethical codes with regard to hate speech or to the “right to be forgotten” (a practice that has now apparently fallen by the wayside), is clearly not effective in dealing with the interlinked problems of dominance and power. While they reflect good intentions to assist journalism during the crisis of its business model, projects such as the Google Digital News Initiative and various other attempts to provide news media with “crumbs from the platform” of Facebook and others merely replace one problem of dependence and capture with another. The previous two-sided market that enabled journalism to survive, which was itself based on state tolerance of oligopolistic tendencies to concentration in media markets, has simply been replaced by another, squeezing and distorting attempts to generate a public-oriented media ethics in the process. The history of antitrust is referred to by numerous authors in this volume: some see it as a model for a wider notion of competition protection that addresses the citizen issues associated with the power wielded by the “kings” of dominant companies. The failure of the framework to deal with the wider problems of digital dominance is a result of the rise of a narrow, economistic notion of consumer welfare.
Khan concludes, “In order to capture these anticompetitive concerns, we should replace the consumer welfare framework with an approach oriented around preserving a competitive process and market structure.” Public policy faces multiple challenges in responding to a complex mesh of problems raised by digital dominance. The 20th-century communication and information infrastructure was governed by a mixed system: where infrastructure investment economics demanded it, natural monopolies were permitted with tight price and service quality regulation. A combination of legal standards and self-regulation governed media and information markets and the nonvirtual physical world of commerce. It is unclear what combination will apply to the platform economy: if the combination of a lack of data portability, network effects, and first mover advantage entrenches dominance, regulators will have to decide to what extent such dominance should be tolerated. Self-regulation, which in general is
either instituted “in the shadow of the state” (Baldwin, Cave, and Lodge 2010) or in fact a rational collective action response to market incentives (Tambini, Leonardi, and Marsden 2008), will in some form continue to be implemented by platforms. But it remains to be seen to what extent the deeper and wider social and political consequences of platform dominance can be mitigated by voluntary action. Some aspects of platform dominance can only be checked and constrained by the law. Legal reforms could come in the form of a revised and extended antitrust/competition law framework that incorporates some of the noneconomic harms within a wider public interest framework. This would, in the view of many analysts, be a return to the origins of antitrust in the early 20th century. As Julie Cohen (2016) and Orla Lynskey (this volume) have pointed out, a new regulatory regime would need a new definition of “platform power,” which would incorporate the forms of market power that Graef argues are emerging due to data consolidation. But reform of the competition framework may not be enough: the public policy and fundamental rights issues reviewed in this volume go beyond legal instruments that could be incorporated, for example, in a merger regime or in obligations that attach to firms considered dominant. Lynskey argues that policy must also respond to the quasi-regulatory functions exercised by the platforms themselves, not only in the gatekeeping function discussed by Wagner and others in this volume, but in their ability to set the terms and conditions of applications connecting to their APIs.
RESEARCH: WHAT WE KNOW AND WHAT WE NEED TO KNOW
We have no doubt that some of the points made by our authors, and the conclusions we draw as editors, will be strongly disputed by the companies involved. That is inevitable, and welcome in the dialogue between researchers, policymakers, and stakeholders. At the very least, the chapters collected in this volume constitute ample evidence that more ambitious and coordinated research into the phenomenon of digital dominance is urgently needed. Some claims are undisputed: the scale and market dominance of Google, Amazon, Facebook, Apple, and Microsoft, for example, is an empirically describable fact. In the more obvious and measurable markets for advertising, search, music, or e-commerce, the position of these platforms is relatively clear, as Patrick Barwise and Leo Watkins describe. The extent to which such dominance can be challenged, and is subject to normal competitive constraints, is less clear. Network effects and multisided markets are well established as theoretical concepts in economics, but
the interaction of these with the role of free digital labor and the shadow economy in personal data is less well understood. Lurking within many of the chapters in this volume, and in the wider political debate about platforms, is the concern that the combined effect of these characteristics could turn platforms into Frankenstein's monsters: no longer possible to constrain with existing competition law, and grown so powerful that their lobbying might prevent the emergence of a new regulatory paradigm capable of constraining them. Understanding whether this is a real possibility requires interdisciplinary collaboration between researchers and policymakers. The chapters in this volume suggest some core questions for research in each of the areas discussed. Experimental research into search engine manipulation effects, for example, needs to address the issue of validity: to what extent are effects apparent in controlled experiments also present in the empirical reality of election campaigns? The search engine manipulation effect may be convincing as a possibility, but whether the potential for such an effect is matched by its actual realization, and whether any manipulation is deliberate, remain obscure. This in turn raises deeper philosophical and theoretical questions about agency and power in a social and political system increasingly permeated by artificial intelligence. At what point, and to what extent, should algorithmic power and agency that appear to influence processes of deliberation and preference measurement crucial to democratic self-government be seen as a threat to the legitimacy of those processes?
If it were revealed that the owner of a dominant social media company had deliberately designed algorithms to disadvantage her mainstream political opponents, that would surely be a scandal; if the same company automatically demotes Nazis or fundamentalists, it may not be. Indeed, states increasingly request that platforms do exactly that.
OPEN UP OR BREAK UP?
We are approaching—or are already at—the moment of a fundamental decision for the dominant intermediaries. Put simply, it is "Open Up or Break Up" time. Many of the emerging social and political problems with platforms are not consequences of the shift to social media or Web 2.0 services per se. They are, as authors such as Ben Wagner point out, consequences of the dominance of a few players. This dominance undermines the ability of existing competition and consumer protection regulation to effectively protect consumer and citizen interests.
What is now abundantly clear is that the task of providing policymakers and regulators with a more adequate "theory of harm," and with new paradigms for competition and wider social regulation, is urgent, and it will require interdisciplinary collaboration. Platforms are the forerunners of wider social transformations in the incorporation of artificial intelligence into institutional development. In the longue durée of the development of the modern state bureaucracy, it is clear that platforms, now able to execute key functions such as identity verification and service delivery far more efficiently, potentially find themselves in competition with the state itself. As Ben Wagner points out in his chapter, when platforms write rules they in effect write laws. The risk of platforms working with the state to change national laws may in fact be a secondary concern, as platform efficiencies may see them developing what are in some senses parallel state institutions. Other commentators, such as Timothy Garton Ash (2016), have pointed out that platforms have developed many of the features of the modern state. But platforms are not states. This is why the relationship between the platforms and elections—the fundamental source of legitimacy of democratic states—is so crucial. Free and fair elections are the definitive institutions of liberal democratic societies, and institutions and processes that undermine them pose an existential threat to those societies. What can be done? Domestic election and media law, and international election-monitoring standards, surely need to be updated. But that is not enough. For large platforms that are developing forms of market power apparently immune to antitrust enforcement, and that are also becoming crucial to elections, the implication of the core problem of dominance is relatively simple: open up, or break up.
If it comes to a battle between the social media companies and the institution that forms the basis of government legitimacy—elections, and democracy itself—Facebook and the other dominant platforms will lose.
REFERENCES

Baldwin, Robert, Martin Cave, and Martin Lodge. 2010. The Oxford Handbook of Regulation. Oxford: Oxford University Press.
Benkler, Yochai. 2016. "Degrees of Freedom, Dimensions of Power." Daedalus 145, no. 1: 18–32.
Cohen, Julie. 2016. "The Regulatory State in the Information Age." Theoretical Inquiries in Law 17, no. 2: 369–414.
Council of Europe. 2017. "Recommendation of the Committee of Ministers on the Role and Responsibilities of Internet Intermediaries." (MSI-NET 2017).
Fuchs, Christian. 2015. Culture and Economy in the Age of Social Media. New York: Routledge.
Garton Ash, Timothy. 2016. Free Speech. London: Atlantic Books.
Hillje, Johannes. 2017. Propaganda 4.0: Wie rechte Populisten Politik machen. Berlin: Dietz.
Kreiss, Daniel. 2016. Prototype Politics: Technology-Intensive Campaigning and the Data of Democracy. New York: Oxford University Press.
Lukes, Steven M. 1974. Power: A Radical View. London and New York: Macmillan.
Mill, John Stuart. 1984 (1859). On Liberty. London: Penguin.
Moore, Martin. 2016. Tech Giants and Civic Power. London: Centre for the Study of Media, Communication & Power, King's College London.
O'Neill, Onora. 2013. "Media Freedoms and Media Standards." In Ethics of Media, edited by N. Couldry, M. Madianou, and A. Pinchevski, 21–38. London: Palgrave Macmillan.
Pickard, Victor. 2015. America's Battle for Media Democracy. Cambridge: Cambridge University Press.
Sunstein, Cass. 1994. "Incommensurability and Valuation in Law." Michigan Law Review 92: 779.
Sunstein, Cass. 2000. "Television and the Public Interest." California Law Review 88, no. 2: 499–564.
Tambini, Damian, and Sharif Labo. 2016. "Digital Intermediaries in the UK: Implications for News Plurality." INFO 18, no. 4: 35–48.
Tambini, Damian, Danilo Leonardi, and Chris Marsden. 2008. Codifying Cyberspace. London: Routledge.
UN. 2011. "Report of the UN Special Rapporteur on Protection and Promotion of the Right to Freedom of Expression, Frank La Rue." May 21, 2011. A/HRC/17/27.
Zuboff, Shoshana. 2015. "Big Other: Surveillance Capitalism and the Prospects of an Information Civilization." Journal of Information Technology 30, no. 1: 75–89.
INDEX
Figures, notes, and tables are indicated by f, n, and t following the page number. Abbott, Tony, 302, 304 Accelerated Mobile Pages (AMP), 138, 252 Accertamenti Diffusione Stampa (ADS), 362 Accor, 56 Ace Project, 267n1 Acxiom, 187 Adobe, 73 AdSense (Google), 55 advertising by Amazon, 39 on Amazon, 40 data mining and differentiation in, 182 on Facebook, 38, 41, 61–62, 249, 373, 379 free expression and, 221 on Google, 35, 41, 55, 58, 61–62, 65, 249 in newspapers, 244 political, 268, 280t, 281, 282–84f, 379 targeting of, 211 on television, 268 AdWords (Google), 35, 55 agenda setting, 12, 202–18 aggregation and, 207–10 collective vs. individual consumption, 210–12 effects of, 212–13 news diversity and, 205–7 origins of control and, 203–5 AI (artificial intelligence), 35, 43 Airbnb, 27, 44, 54, 59, 382n12
Albright, Jonathan, 380–81 algorithmic filtering, 159–60, 167–68, 180, 205–7 Alibaba, 44 Al Jazeera, 246 Allcott, Hunt, 270 Allen, Paul, 31 Almunia, Joaquín, 90, 92n11 Alphabet (parent company of Google) advertising revenues, 249 founding of, 36 market capitalization of, 4, 21 price/earnings ratio for, 42 Amazon. See also specific products brand equity of, 25 business strategy of, 104–9, 105f Chicago School approach and, 101–4 data exploitation and, 117–19 delivery services, 113–17 discriminatory pricing and fees, 110–13 history of, 38–39, 382 indirect network effects and, 27, 28 leveraging dominance across sectors, 113–17 market capitalization of, 4, 21 market share, 98, 109, 109n8, 113n11 price/earnings ratio for, 42, 42n10 Prime memberships, 39, 106–7 structural dominance establishment, 109–19 user data collection by, 73 Whole Foods acquisition by, 41, 43 Amazon Alexa, 150
AmazonBasics (brand), 118–19 Amazon Echo, 10, 24, 39, 140 Amazon Marketplace, 109, 117–19 Amazon Mechanical Turk (AMT), 323 Amazon Web Services (AWS), 28, 29, 39, 99n1, 104 AMP (Accelerated Mobile Pages), 138, 252 Anderson, Chris, 210 Anderson, Ross, 381 Android operating system, 5, 32, 32n7, 35–36, 138, 189, 312 Anstead, Nick, 273 Answer Bot model, 309 The Antitrust Paradox (Bork), 3, 102 AOL, 73, 383 APIs (Application Programming Interfaces), 189–91, 382–83 Apple. See also specific products augmented and virtual reality (AR/VR) platforms and, 24 brand equity of, 25 history of, 33–34 market capitalization of, 4, 21, 43 mobile platforms and, 32 news diversity and, 252 price/earnings ratio for, 42 self-driving car technologies and, 24 user data collection by, 73 Apple Maps, 36 Apple News, 134, 138, 139, 150, 252 Apple TV, 40 Apple Watch, 140 Application Programming Interfaces (APIs), 189–91, 382–83 App Store (Apple), 28, 33 Arab Spring (2010–2011), 221, 245–46 artificial intelligence (AI), 35, 43 Ash, Timothy Garton, 406 asymmetries of information and power, 188–89 Athey, Susan, 144 attention economy digital power and, 15, 384–88 dimensions of attention, 387–88 misinformation and, 347 attribution in distributed environments, 147–49, 148f audience building and segmentation, 274 augmented and virtual reality (AR/VR) platforms, 24, 43
Auletta, Ken, 31 Australia Apple News in, 139 electoral studies in, 14, 301–2 Google market share in, 226 news diversity in, 147 Snapchat Discover in, 139 Autorité de la Concurrence (EU), 9 AWS. See Amazon Web Services Aykroyd, Dan, 296 backfire effect, 343, 355 Bagdikian, Ben, 203 Baidu, 44, 321 Ballmer, Steve, 32 Barocas, Solon, 11, 179 Barwise, Patrick, 4, 7–8, 21, 397, 402, 404 BBC, 148, 149 Beer, David, 227 Belgium, competition authority in, 91 Bell, Emily, 12–13, 241, 398, 402 Bellamy, Edward, 57 Bellflamme, Paul, 60 Bell Laboratories, 26 Benkler, Yochai, 391 Berners-Lee, Tim, 312–13 Beyond Freedom and Dignity (Skinner), 295 Bezos, Jeff, 10, 22, 34, 38–39, 98, 100, 105–6, 248–49 Big Data. See also data concentration and data abuse data protection legislation and, 191 digital dominance and, 28–29 Google and, 35 Bing search engine (Microsoft), 32, 299, 305, 307, 320 Bin Laden, Osama, 378 Blair, Tony, 285n7 Blume, Peter, 197 Borgesius, Frederik Zuiderveen, 183 Bork, Robert, 3, 102, 103 Bottin Cartographes, 66 Boulding, Kenneth E., 294 Bourdreau, Kevin J., 58 Bowman, Ward, 3 Bracha, Oren, 300 Brandeis, Louis, 1–2, 311n3 Brand Finance, 25n3
branding, 24–25 by news publishers, 147–48, 148f Bratton, Benjamin H., 382 Brazil Google market share in, 226 news diversity in, 149 Breitbart, 209, 332 Brexit referendum case study, 359–62, 360t, 361–62f echo chambers and, 15 electoral legitimacy and, 13, 270, 271–73, 277–81, 279f foreign intervention in, 285 press independence and, 258 Brin, Sergey, 4, 22, 34, 36 Brown, Campbell, 258 Bundeskartellamt (Germany), 9, 90, 93, 176 Buttarelli, Giovanni, 90 Buzzfeed, 13, 151, 214, 247–48, 252, 379n11 Cadwalladr, Carole, 270, 271 Cairncross, Frances, 44 Calico, 36 Cambridge Analytica, 227 Canada, Google market share in, 226 Canary (news site), 213 Carr, Nicholas, 391 Carvin, Andy, 246 cascades, 349–50 Castells, Manuel, 375 Castile, Philando, 5, 253–54 censorship, 5, 37, 165, 219, 236, 286 Charter of Fundamental Rights (EU), 197 Chen, Hsinchun, 29 Chiang, Roger H. L., 29 Chicago School approach, 100, 101–4, 116 Chile, social media use for news in, 134, 134f China search engines in, 321 tech market in, 43–44 Choudary, Sangeet P., 29, 45 Chisholm, Alex, 86 Cisco, 44 City Car Club, 56 class labels, 179 Clayton Act of 1914, 2
Cleland, Scott, 300 Clinton, Chelsea, 332 Clinton, Hillary search engine results and, 296, 305, 310, 326, 327 Twitter use by, 332 visual framing and, 333–35, 334–36f cloud computing, 32, 39–40, 43, 119 Cloudflare, 241 clusters, 358 CNN, 252, 331 Coase, Ronald H., 52 Coby, Gary, 379 cocreation process, 397 Cohen, Bernard, 204 Cohen, Julie, 176, 404 Comcast, 249 Committee to Protect Journalists, 258 Communications Decency Act of 1996 (US), 224, 389, 400 competition beyond core markets, 43–44 competitive discipline, 287 governance model for, 120 platform analysis and, 57–67 Competition and Markets Authority (UK CMA), 9, 85–86, 178, 183, 189 competitive authoritarianism, 269 Compuserve, 383 confirmation bias, 159, 353, 355, 367 conspiracy news, 344–45, 347–52, 367–68 content regulation General Data Protection Regulation (EU), 80, 93nn12–13, 178, 191, 192, 194, 195–96, 228 humanizing, 235 Netzwerkdurchsetzungsgesetz (Germany), 5, 223–24, 233, 235 self-regulation, 233–34, 403–4 Convention for the Protection of Human Rights and Fundamental Freedoms (EU), 155, 399–400 convergence, 55, 214 Corbyn, Jeremy, 212, 213 corporate responsibility, 388–90 Costolo, Dick, 255 Council of Europe, 155, 158–59, 266–67 Coyle, Diane, 8, 50, 402 credit scoring models, 179
Cruz, Ted, 324, 325, 326 cultural chaos, 214 Cummings, Dominic, 379 customer relationship management (CRM), 79–80 Daily Stormer, 241, 243 dark posts, 227–28 data concentration and data abuse, 8–9, 71–97 absence of close competition in markets for end products, 74–76 abuse of dominance, 88–93 by Amazon, 117–19 appraisal of merger decisions, 84–85 availability of data, 86–87 characterization of data, 85–86 exclusionary abuse, 90–92 exploitative abuse, 89–90, 117–19 Facebook/WhatsApp decision, 73–74 free expression and, 228–29 Google/DoubleClick decision, 73–74 indicators of market power, 85–88 infrastructure services and, 117–19 market analysis for data, 77–78 market definition requires supply and demand for data, 76–77 Microsoft/LinkedIn decision, 78–81 policy initiatives, 92–93 role of data and scale in machine learning, 87–88 Verizon/Yahoo decision, 81–82 Datalogix, 187 data mining differentiation and, 182–83 discrimination and, 178–81, 181f Data Protection Directive (EU) role and limits of, 191–97 scope of, 192–95 substantive rights provided by, 195–97 Daum, 137 Dawes, S., 170 daylight attention, 387 The Death of Distance (Cairncross), 44 debunking pages, 344, 354–55 DeepMind, 36 Delfi, 220 delivery services, 113–17 Denmark news diversity in, 138
political advertising on television banned in, 268 diagonal integration, 399–401 Diakopoulos, Nicholas, 14, 320, 401 digital dominance, 7–8. See also specific companies Big Data and, 28–29 competition beyond core markets, 43–44 direct network effects, 26 evolution of, 21–49 free expression and, 221–25 historical patterns of, 22–24, 23t indirect network effects, 26–28 machine learning and, 28–29 market effects on, 41–46 multisided markets, 26–28 news diversity and, 142–47, 143–44f, 146–47f “power of providence” and, 184–91 profiling practices and, 178–84 regulatory responses to, 232–35 switching costs and lock-in, 29–30 winner-take-all nature of tech markets, 24–31 digital power attention economy and, 16, 384–88 conceptualization of, 374–75 direct vs. indirect power, 376–81 generativity of platforms and, 383–84 platform power, 381–84 digital sharecropping, 391 direct network effects, 25, 26 Director, Aaron, 3 direct power, 374, 376–81 discriminatory pricing and fees, 9, 110–13 dissenting information, 354–55, 354f Dommering, E. J., 164 DoubleClick. See Google/DoubleClick decision (2008) duty of care, 390, 390n20 eBay, 53, 73, 382, 382n12 echo chambers agenda setting and, 206 media diversity and, 159, 166 misinformation and, 15, 343, 346–52, 359–66, 391 news diversity and, 133
Edelman, Benjamin, 59 Edsall, Thomas, 313 Egypt, Arab Spring protests in, 245–46 Einav, Liran, 29 Eisenhower, Dwight D., 294 election of 2016 (US). See also specific candidates foreign intervention in, 285 independent press and, 253–56 platform power and, 376 search engines and, 296 social media and, 270 Electoral Commission (UK), 272, 288 electoral legitimacy, 13–14, 265–92 Brexit referendum, 271–73, 277–81, 279f censorship effects, 286 competitive discipline and, 287 crisis of, 288–89 data-driven social media campaign mechanics, 273–74, 275–76f Facebook campaigns, 281, 282–84f free expression and, 226–28 implications of social media campaigning, 281–85 LSE/Who Targets Me research collaboration, 281, 282–84f media and, 266–68 message targeting and delivery, 278 online campaigning and, 269–70 platform dominance and, 286–88 prominence effects, 287 propaganda bubbles, 287 separation of powers and, 287–88 social construction of, 268–69 voter profiling and segmentation, 277–78 Electronic Privacy Information Center (EPIC), 186 Elliott, Matthew, 273n5, 278 e-mail, 138 enclave extremism, 391 encrypted messaging services, 5 End User License Agreements (EULAs), 376, 389 Enterprise (car rental company), 56 entry barriers, 57, 116 envelopment, 64 Epstein, Robert, 14, 226, 265, 294, 310, 311n3, 321, 401
Erickson, Jim, 31 essential facilities doctrine, 92 European Access Directive, 164 European Commission on Big Data, 88n9 Digital Single Market strategy, 184–85 Facebook/WhatsApp decision (2014), 9, 73–74, 74n1 free expression and, 222 Google/DoubleClick decision (2008), 9, 73–74, 89 Microsoft/LinkedIn decision (2016), 9, 73, 78–81 Microsoft/Yahoo decision (2010), 87 Verizon/Yahoo decision (2016), 9, 81–82 European Convention on Human Rights, 155, 399–400 European Court of Justice, 180, 187, 192–93, 195, 222–23, 251, 377 European Data Protection Board, 197 European Union (EU) Charter of Fundamental Rights, 197 competition law in, 2, 3 Data Protection Directive, 192 electoral legitimacy and, 266 Gender Goods and Services Directive, 180 General Data Protection Regulation, 80, 93nn12–13, 178, 191, 192, 194, 195–96, 228 Merger Regulation, 74n2, 89 mobile phone market in, 56 privacy laws in, 63 regulatory framework in, 45–46 Unfair Commercial Practices Directive, 93 Unfair Contract Terms Directive, 93 Europe Media Monitor, 356 Evans, David S., 29, 41, 60n2, 64 Evolve Politics (news site), 213 Excite (search engine), 34–35 exclusionary abuse of data, 90–92 “experience goods,” 24–25, 25n2, 53 exploitative abuse of data, 89–90, 117–19 externalities, 26n4. See also network effects ExxonMobil, 372–73 Ezrachi, Ariel, 6
Facebook. See also specific products advertising on, 38, 41, 61–62, 249, 373, 379 algorithmic filtering and, 207 augmented and virtual reality (AR/VR) platforms and, 24 brand equity of, 25 differentiation of, 372–74 digital agenda setting and, 202n1, 213 electoral campaigns on, 257, 281, 282–84f electoral legitimacy and, 226–28, 296–97 fake news and, 384 free expression and, 220, 221, 223, 229 generativity and, 384 history of, 37–38, 244 indirect network effects and, 27 Instant Articles, 252 Italian constitutional referendum and, 362–63, 363t, 365–66, 365–66f market capitalization of, 4, 21, 247, 372–73, 372n4 market share of, 187 media diversity and, 161–62, 161n6, 169 misinformation and, 345–46 news diversity and, 134, 135, 135f, 141–42, 145–46, 146f, 149, 246, 249–50, 252 normative effects of, 230–31 platform dominance by, 66, 176, 185 political and social aims of, 4 Trending Topics, 11, 168 Facebook Journalism Project, 258 Facebook Live, 5, 254 Facebook Messenger, 38, 75, 135, 373 Facebook/WhatsApp decision (2014), 9, 73–74, 74n1 fairness doctrine, 259 fake news, 5, 257–58, 266, 270–71, 296, 379–80. See also misinformation Fast Greedy (FG) algorithm, 358n2, 360n3 FBA (Fulfillment-by-Amazon), 40, 109, 114–15 feature selection, 179 Federal Communications Commission (FCC), 259
Federal Trade Commission (FTC, US), 103, 178, 180–81, 190 FedEx, 115, 116 filter bubbles, 133, 142, 166, 231, 287, 391 filtering content, 167 Finland, news diversity in, 138, 147 Fire TV (Amazon), 39 First Amendment (US), 242 flaming posts, 352 Fletcher, Richard, 10, 133 Flipboard, 139 Foley, Jim, 255 Fox News, 205 France competition authority in, 91 Google market share in, 226 Loi sur la République Numérique, 66, 66n4, 402 media diversity in, 155–56 news diversity in, 149 platform dominance in, 402 political advertising on television banned in, 268 Snapchat Discover in, 139 freedom of thought, 229–30 free expression, 12, 219–40, 400 data access having a chilling effect on, 228–29 digital dominance and, 221–25 freedom of thought and, 229–30 humanizing content regulation online, 235 normative value of platform dominance and, 230–31 regulatory responses to platform dominance and, 232–35 right to seek information and, 229–30 standards for content self-regulation, 233–34 swing elections and, 226–28 transparency for dominant platforms and, 232–33 Friendster, 37 Fulfillment-by-Amazon (FBA), 40, 109, 114–15 Fusion, 248 Galaxy Gear, 140 Galloway, Scott, 247
Gallup, 329 Gans, Joshua, 66 Ganter, S. A., 163, 164 gatekeeping trust, 209 Gates, Bill, 22, 31 GDF Suez, 91 Gellert, Raphael, 179 Gender Goods and Services Directive (EU), 180 General Data Protection Regulation (EU), 80, 93nn12–13, 178, 191, 192, 194, 195–96, 228 generativity of platforms, 383–84 Gentzkow, Matthew, 270 George, Gerard, 29 Germany Competition Authority, 176 content regulation in, 5, 223–24, 233, 235 Google market share in, 226 media diversity in, 155–56 news diversity in, 147 Snapchat Discover in, 139 social media use for news in, 134, 134f Gillard, Julia, 301–2, 304 Gillespie, Tarleton, 321 Gizmodo, 256 Global Network Initiative (GNI), 233 Global Privacy Enforcement Network (GPEN), 190 Gmail, 26, 35, 310, 312 Goodreads, 39 Google. See also specific products advertising on, 35, 41, 55, 58, 61–62, 65, 249 algorithmic filtering and, 206–7 antitrust actions against, 298 augmented and virtual reality (AR/VR) platforms and, 24 brand equity of, 25 differentiation of, 372–74 digital agenda setting and, 202n1 electoral influence of, 226–28, 320 European Commission antitrust investigation against, 185 fake news and, 384 free expression and, 220, 221, 223, 229 generativity and, 384 history of, 34–36
Issue Guide, 14, 322, 326–30, 327f, 337 market capitalization of, 43 market share of, 65, 185 In the News, 15, 322, 331, 337 news diversity and, 138, 141–42, 149, 249–50, 252 normative effects of, 230–31 platform dominance by, 185 political and social aims of, 4 search suggestion effect and, 305–8 self-driving car technologies and, 24 strategy of, 55 user data collection by, 73 Google Analytics, 312 Google Assistant, 36, 309–10 Google Chrome, 32, 35, 312 Google Chromecast, 40 Google Cloud Platform (GCP), 36 Google Digital News Initiative, 251, 258, 403 Google/DoubleClick decision (2008), 9, 73–74, 89 Google Home, 24, 36, 40, 140, 310 Google Maps, 26, 65 Google News, 134, 144, 161, 250, 328 Google Play Store, 190–91 Google Trends, 329 GPEN (Global Privacy Enforcement Network), 190 Graef, Inge, 8–9, 71, 85n7, 399, 404 Graham, Don, 248 The Great Dictator (film), 222 Greece, news diversity in, 138, 149 Grewal, David Singh, 374–75 Grove, Andy, 30 Guardian, subscriptions bump after 2016 election, 258 Haas, Martine R., 29 Hagiu, Andrei, 58 halo effect, 65 Harbour, Pamela Jones, 77n3, 89 hate speech, 221, 223, 224 Hayek, Friedrich A., 52 Hayes, Rutherford B., 296 Helberger, Natali, 6, 10–11, 153, 184, 189, 401, 402 Herrman, John, 257 Hesmondhalgh, David, 210
The Hill, 328 Hofstadter, Richard, 2 HomePod (Apple), 24 homophily, 348, 391 Hong Kong, news diversity in, 138 horizontal integration, 399 Huawei, 44 Huffington Post, 151, 214, 247, 328 Huxley, Aldous, 388 IAC, 73 IBM, 7, 22, 24, 31, 383 IMDb, 39 iMessage (Apple), 38 independent press, 12–13, 241–61 election of 2016 and, 253–56 fake news and, 257–58 future of, 258–60 media diversity and, 164–65 news publishers and, 245–53 platforms as publishers and, 243–45 stealth publishing and, 253–56 India electoral studies in, 14, 303 Google market share in, 226 indirect power, 376–81 inequality, 11, 176–201 information asymmetries, 188–89 Infowars, 332 infrastructure services, 9, 98–129. See also Amazon business strategy and, 104–9, 105f Chicago School approach and, 101–4 competition model for governance of, 120 data exploitation and, 117–19 delivery services, 113–17 discriminatory pricing and fees, 110–13 leveraging dominance across sectors, 113–17 models for addressing platform power, 119–21 regulatory model for governance of, 120–21 rental of, 36 structural dominance establishment, 109–19 innovation digital dominance and, 34, 44
free expression and, 230 platform power and, 60, 64, 66, 207, 382–83 press independence and, 254 social media power and, 278 Instagram electoral legitimacy and, 312 Facebook’s ownership of, 38 media diversity and, 161n6 news diversity and, 135, 135f, 248 platform power and, 373 Intel, 22, 32, 44 intellectual property, 22, 44, 63 Intelligence and Security Committee of Parliament (ISC, UK), 235 Interbrand, 25n3 internal media diversity, 166–70 Internet Explorer (Microsoft), 31, 32, 64 Internet of things (IoT), 43 Introna, Lucas D., 320 iOS operating system, 5, 32, 33, 33n8, 138, 189 iPad, 33 iPhone, 28, 32, 33, 244 iPod, 33 Ireland news diversity in, 147 political advertising on television banned in, 268 Isaacson, Walter, 31 Issue Guide (Google), 326–30, 327f editorial choices and, 329–30 news sources for, 328–29 quantifying statements, 326–28 Italy constitutional referendum, 15, 362–66, 363t, 365–67f Google market share in, 226 news diversity in, 149 political advertising on television in, 268 iTunes, 33 Japan news diversity in, 137, 147 social media use for news in, 134, 134f, 135 Jenkins, Henry, 205 Jobs, Steve, 22
John Deere, 63 Justice Department (US), 103, 298 Karppinen, Kari, 6, 160 Kasich, John, 326 Kennedy, John F., 294 Khan, Lina M., 9–10, 40, 45, 98, 397, 401, 403 Kindle (Amazon), 26, 39 Kingsley, Ben, 296 Kirkpatrick, David, 31 Kokott, Juliane, 193 Korff, Douwe, 193 Koslov, Tara Isa, 77n3 Kuzel, Rasto, 267 Kwon, K., 169 Laitenberger, Johannes, 87 language translation, 312 La Rue, Frank, 228 Leave.EU, 270, 273n5 Levin, Jonathan, 29 Libya, Arab Spring protests in, 246 LinkedIn, 33, 38 news diversity and, 135, 135f user data collection by, 73 lock-in, 29–30, 66, 381 Loi sur la République Numérique (France), 66, 66n4, 402 Looking Back (Bellamy), 57 LSE/Who Targets Me research collaboration, 281, 282–84f, 281n6 Luca, Michael, 59 Lukes, Steven, 203, 214, 398–99 Lync enterprise phone platform, 32 Lynskey, Orla, 11, 176, 398, 399, 401, 404 machine learning customer relationship management and, 80 data and scale in, 87–88 digital dominance and, 28–29 Google and, 35 Madden, Mary, 182 Mail Online, 151 Mandela, Nelson, 241 manipulation effect, 14, 300–5 Margetts, Helen, 392 May, Theresa, 212, 213
McCombs, Maxwell E., 204 McGee, John, 3 McGonagle, T., 154n2 media diversity, 10, 153–75. See also independent press; news diversity defined, 154n2, 156–58 diversity- vs. popularity-based recommender design, 167–68 electoral legitimacy and, 266–68 fairness and, 165 independence of media and, 164–65 internal diversity, 166–70 negotiation power and, 163–64 personal social media platform diversity and, 169 platform power and, 161–70 privacy and, 169–70 social media platforms and, 158–60 structural diversity, 162–65 message creation and testing, 274 message targeting and delivery, 274, 278 Mexico, news diversity in, 138 Mic.com, 248 Microsoft brand equity of, 25 history of, 31–33 indirect network effects and, 27 intellectual property, 32n7 market capitalization of, 4, 21, 43 mobile platforms and, 32 platform dominance and, 7, 22, 24, 383 user data collection by, 73 Microsoft/LinkedIn decision (2016), 9, 73, 78–81 Microsoft/Yahoo decision (2010), 87 Millar, Gavin, 284–85 misinformation, 15, 342–70. See also fake news attention patterns, 347 cascades and, 349–50 clusters, 358 datasets, 344–46, 345–46t dissenting information responses, 354–55, 354f echo chambers, 346–52, 359–66 electoral legitimacy and, 265, 271 emotional dynamics of, 350–52, 351–52f news consumption patterns and, 355–58, 355f, 356t, 357f, 359f
misinformation (cont.) news diversity and, 133 online spread of, 343–55 paradoxical information responses, 352–53, 353f polarization and, 347–48, 348–49f, 358 selective exposure and, 356, 358 mobile platforms Apple and, 32 growth of, 384n15 Microsoft and, 32 news diversity and, 138–41, 139f, 140t strategic choices for, 56 as tech market, 22 Mobius, Markus, 144 monitoring software, 311n3 Moon, S., 169 Moore, Gordon, 244 Moore, Martin, 1, 155, 234, 396 Muddiman, Ashley, 321 multihoming, 38, 42, 54, 65 multisided markets, 7, 26–28, 33, 50n1, 397 Murdoch, Rupert, 286 Mussenden, Sean, 14, 320 Muth, K. T., 61 MySpace, 37, 244 Nadella, Satya, 32, 33 Nader, Ralph, 313 National Lottery (Belgium), 91 natural language processing, 35 Naughton, John, 15, 371, 397, 398–99 Naver, 137 negativity bias, 308 negotiation power, 163–64 Nest, 36 Netflix, 28, 36 Netherlands, media diversity in, 155–56, 160 net neutrality, 121n12 net positivity analysis, 324–25, 325f networked gatekeeping, 204 network effects, 7, 25, 32, 53, 404–5 competition and, 57 digital dominance and, 8, 25, 26–28 modeling of, 27n5 platform dominance and, 53 network power, 374–75
Netzwerkdurchsetzungsgesetz (Germany), 5, 223–24, 233, 235 New Deal legislation, 2 Newman, Nic, 10, 133 news consumption patterns, 355–58, 355f, 356t, 357f, 359f NewsCorp, 37, 250 news diversity, 10, 133–52. See also independent press; media diversity attribution in distributed environments, 147–49, 148f digital dominance implications for, 142–47, 143–44f, 146–47f intermediaries, reasons for using, 141–42 issue guides, 328–29 mobile platforms and mobile aggregation, 138–41, 139f, 140t platform growth and, 134 preferred gateways, 135–36, 136f search and aggregators, 136–38, 137t social media and, 134–36, 134–36f news search results, 330–32, 330f New York Times advertising revenues for, 249 search results for articles in, 331 subscriptions bump after 2016 election, 258 1984 (Orwell), 296 Nielsen, Rasmus Kleis, 163, 164 Nissenbaum, Helen, 320 Nokia, 32 nonrivalrous goods, 24–25, 25n2 Norway, Snapchat Discover in, 139 NowThisNews, 248 Obama, Barack, 377–78 obligation responsibility, 390, 390n20 Ofcom (UK), 155, 159–60, 268, 268n2 Office for National Statistics (UK), 271 Office of Democratic Institutions and Human Rights (ODIHR), 267 onboarding, 186 One Fine Stay, 56 operant conditioning, 304 Oracle, 44 Organisation for Security and Cooperation in Europe (OSCE), 266–67, 270
Orwell, George, 296, 388
Overture, 35n9
Page, Larry, 4, 22, 34, 36, 300
PageRank technology, 34, 385
Pal, Jeno, 144
Palantir, 44
paradoxical information, 352–53, 353f
Pariser, Eli, 206
Parker, Geoffrey G., 27n5, 29, 45
Parse.ly, 249
partisan selective exposure, 205
Pasquale, Frank, 6, 179, 187, 189, 300
Pataki, George, 324
Payne, Candace, 254
PayPal, 243
peer-to-peer platforms, 51
Pentland, Alex, 29
Peretti, Jonah, 247
Persily, Nathaniel, 313
personalized pricing, 183, 186
Petit, Nicolas, 66
PETs (privacy enhancing technologies), 190–91
Pew Research, 211, 244, 246, 250, 329
Pillow Pets, 118
Pinterest, 28, 38
Pixel smartphone (Google), 36
platforms, 8, 50–70
  aggregation and, 207–10
  asymmetries of information and power, 188–89
  attention economy and, 384–88
  collective vs. individual consumption, 210–12
  competition analysis, 63–67
  competition factors, 57–63
  corporate responsibility of, 388–90
  data mining and differentiation, 182–83
  data mining and discrimination, 178–81, 181f
  data protection law and, 191–97
  defined, 202n1
  economics of, 53–55
  effects of, 212–13
  electoral legitimacy and, 286–88
  entry barriers, 57, 116
  “free” problem for, 60–62
  generativity of, 383–84
  humanizing content regulation online, 235
  incentives to innovate, 60
  inequality accentuated by, 11, 176–201
  media diversity and, 161–70
  mobile (see mobile platforms)
  monopolies, 12, 202–18
  news diversity and, 134, 138–41, 139f, 140t, 205–7
  normative value of platform dominance, 230–31
  origins of control and, 203–5
  ownership of information, 62–63
  perception creation via, 183–84
  power and responsibility of, 371–95
  “power of providence” and, 184–91
  price discrimination, 57–58
  profiling practices of, 178–84
  regulatory responses to platform dominance, 232–35
  service provider data-processing practices influenced by, 189–91
  standards for content self-regulation, 233–34
  strategic choices of, 55–56
  transparency requirements for, 232–33
  trust factors, 58–59
  types of, 51, 51t
PlayStation (Sony), 32
Poell, T., 165
polarization, 133, 347–48, 348–49f, 358
Politico, 247, 248, 328
politics, 263–407
  electoral legitimacy, 265–92 (see also electoral legitimacy)
  misinformation, 342–70 (see also misinformation)
  platform power and, 371–95 (see also platforms)
  search engine influence on votes and opinions, 294–341 (see also search engines)
PolitiFact, 379n11
Poort, Joost, 183
Posner, Richard, 101
power. See also digital power
  asymmetries of, 188–89
  diagonal integration and, 399–401
  platform dominance and, 397–99
  theory of harm and, 399–401
“power of providence,” 184–91, 398
Powles, Julia, 196
predatory pricing, 102–3, 110–11, 117, 120, 122
price discrimination, 57–58, 110n9
Prince, Matthew, 241–42
privacy, 29, 63, 169–70
privacy enhancing technologies (PETs), 190–91
professionally-generated content (PGC), 37
profiling
  election campaigns and, 266, 270
  by platforms, 177, 178–84, 198
  propaganda and, 398
Project Jigsaw, 243
prominence effects, 287
propaganda bubbles, 265, 284, 287, 398
proprietary standards, 22
public goods, 25n2
public service broadcasting, 157
Quattrociocchi, Walter, 15, 342, 400
Ramirez, Edith, 180
Ramsay, Gordon, 234
ratings systems, 58
Reagan, Ronald, 259
recommender algorithms, 167–68
Reddit, 332
Redford, Robert, 296
redlining, 269–70, 283
reductive representations of data, 180
redundant encodings, 181
regulatory framework, 45–46. See also content regulation
regulatory model for governance, 120–21
Reich, Robert, 313
Reiter v. Sonotone Corp. (1979), 102
relevance, 338
Reputation Institute, 99
Reuters Institute, 159, 271–72
Reynolds, Diamond, 5, 253–54
Rhodes, Andrew, 61
right to seek information, 229–30
Ritson, Mark, 42
Robertson, Ronald E., 14, 226, 301, 310, 311n3, 321
Rochet, Jean-Charles, 6, 27, 54
Rockefeller, John D., 4
Rohlf, Jeffrey, 26
Roosevelt, Franklin D., 2
Rubin, Andy, 300
Russell, Bertrand, 374
Russia, electoral interventions by, 285
Russia Today, 332
Samsung, 42
Sanders, Bernie, 324, 325, 326
Schlosberg, Justin, 12, 202, 398
Schmalensee, Richard, 29, 41
Schmidt, Eric, 35, 36, 206–7, 296, 309
Schneier, Bruce, 6
Science news, 344–45, 347–52
Search & Destroy: Why You Can’t Trust Google Inc. (Cleland), 300
search engine manipulation effect (SEME), 14, 300–5
search engines. See also specific search engines
  case studies, 14, 322–36
  issue guides, 326–30, 327f
  manipulation effect, 14, 300–5
  mechanics of, 298–300
  monitoring method, 310–11
  net positivity analysis, 324–25, 325f
  news diversity and, 136–38, 137t, 142n6
  news results, 330–32, 330f
  search engine manipulation effect (SEME), 14, 300–5
  search suggestion effect (SSE), 14, 305–10, 306–7f, 309f
  visual framing by, 322, 333–36, 334–36f, 337
  votes and opinions influenced by, 14–15, 294–341
search suggestion effect (SSE), 14, 305–10, 306–7f, 309f
Selbst, Andrew, 11, 179, 196
selective exposure, 356, 358
self-affirmation, 231
self-censorship, 37
self-publication, 12
self-regulation, 59, 223, 233–34, 403–4
self-selected personalization, 167, 205, 281
SEME (search engine manipulation effect), 14, 300–5
separation of powers, 287–88
Shapiro, Carl, 32, 35
sharing economy, 51, 53
Sharpston, Eleanor V. E., 180
Shaw, Donald L., 204
Sherman, John, 1
Sherman Antitrust Act of 1890, 1–2, 102
Shirky, Clay, 386
Sidewalk, 36
Silverman, Craig, 257
Simon, Herbert, 371, 384
Skinner, B. F., 295
Skype, 32
SmartNews, 139
smartphones, 5. See also mobile platforms
Smith, Adam, 1
Smith, Ben, 247
Snapchat, 8, 28, 38, 150, 248
Snapchat Discover, 139
Sneakers (film), 296
Snopes, 379n11
Snowden, Edward, 251
social media. See also specific platforms
  data-driven campaigns on, 273–74, 275–76f
  electoral legitimacy and, 265–92
  media diversity and, 153–75
  news diversity and, 134–36, 134–36f
Sony, 32
SourceFed, 305, 305n2
South Korea, news diversity in, 137, 147
Spain
  Google market share in, 226
  Google News in, 144
  political advertising on television banned in, 268
Spotify, 53
spotlight attention, 387
SSE. See search suggestion effect
Standard Oil, 4
Stark, Jennifer, 14, 320
starlight attention, 387
stealth publishing, 253–56
Stefanone, M., 169
Stone, Brad, 31
Storey, Veda C., 29
Strauss, Steven, 313
Straw, Jack, 273n5
Straw, Will, 277
Stronger In, 273n5
structural dominance, 109–19
structural media diversity, 162–65
Stucke, Maurice, 6
Sundararajan, Arun, 59
Sunlight Society, 311n3
Sunstein, Cass, 206, 401
superabundance of information, 385
surveillance capitalism, 265, 389, 397
Svirsky, Dan, 59
Swaffer, Hannen, 390n22
Sweden, news diversity in, 138
Sweeney, Latanya, 184
switching costs, 22, 29–30, 107
Taiwan, news diversity in, 138
Tambini, Damian, 1, 13, 265, 396, 401
Taplin, Jonathan, 313
target variables, 179
Taylor, Linnet, 179, 182, 194, 198
Telecommunications Act of 1996 (US), 259
Telecommunications Industry Dialogue, 233
television advertising, 268
Tencent, 44
Tesla, 24
TFEU (Treaty on the Functioning of the European Union), 89, 92
Thailand, censorship of YouTube in, 222
Thiel, Peter, 21
third-party sellers, 117–19
Thomas, Lucy, 273n5
Thomson/Reuters decision (2008), 89
Tirole, Jean, 6, 27, 50, 54
Toulemonde, Eric, 60
Tow Center for Digital Journalism, 252
transaction costs, 52
translation, 312
transparency
  electoral legitimacy and, 288
  for platforms, 232–33
Treaty of Rome (1957), 2
Treaty on the Functioning of the European Union (TFEU), 89, 92
Trielli, Daniel, 14, 320
TripAdvisor, 27, 59
troll pages, 344, 353
Trump, Donald. See also election of 2016
  campaign strategies of, 257, 258
Trump, Donald (cont.)
  search engine results and, 296, 305, 310, 326, 327
  social media use by, 225, 332, 376
  visual framing and, 333–35, 334–36f
Tunisia, Arab Spring protests in, 245–46
Turow, Joseph, 183
Twitter
  data licensing by, 76
  digital agenda setting and, 202n1, 213
  founding of, 244
  free expression and, 223
  growth of, 38
  indirect network effects and, 28
  Italian constitutional referendum and, 363–66, 363t, 365–67f
  media diversity and, 161, 161n6
  news diversity and, 135, 135f, 145–46, 146f, 246, 252
  search results and, 312, 332
  user data collection by, 73
Twitter Moments, 252
two-sided markets, 50. See also multisided markets
Uber
  eBay as forerunner for, 382n12
  indirect network effects and, 27, 28
  self-driving car technologies and, 24
  self-regulation by, 59
  valuation of, 44
UGC (user-generated content), 37
Unfair Commercial Practices Directive (EU), 93
Unfair Contract Terms Directive (EU), 93
United Kingdom. See also Brexit referendum
  Apple News in, 139
  Competition and Markets Authority (UK CMA), 85–86, 178, 183, 189
  data mining and discrimination in, 178
  electoral influence of Facebook and Google in, 227
  electoral legitimacy in, 267–68
  encrypted messaging services in, 5
  general election (2017), 271–74
  Google market share in, 226
  Intelligence and Security Committee of Parliament (ISC), 235
  media diversity in, 155–56
  news diversity in, 137, 138, 147
  online behavior in, 271
  political advertising on television banned in, 268, 268n2
  Political Parties, Elections and Referendums Act, 288
  Representation of the People Act 1983, 267–68
  Snapchat Discover in, 139
  social media use for news in, 134, 134f
United States
  Apple News in, 139
  election of 2016 (see election of 2016)
  electoral influence of Facebook and Google in, 227
  electoral studies in, 303
  Google market share in, 226
  mobile phone market in, 56
  news diversity in, 138
  regulatory framework in, 45–46
  Snapchat Discover in, 139
  social media use for news in, 134, 134f
UPS, 114–15, 116
Upworthy, 255
user-generated content (UGC), 37
Usher, Nikki, 211
Ut, Nick, 5
Van Alstyne, Marshall W., 27n5, 29, 45
Van Dijk, J., 165
Varian, Hal R., 32, 35
Venice Commission (2010), 266
Verily, 36
Verizon/Yahoo decision (2016), 9, 81–82
vertical integration, 102, 103, 112, 117, 119, 120, 399
Vestager, Margrethe, 71–72, 77, 86
visual framing by search engines, 333–36, 334–36f
voice recognition, 43
Vote Leave, 273n5, 379
voter profiling and segmentation, 277–78
Vox.com, 248
Wachter, Sandra, 196
Wagner, Ben, 12, 219, 400, 402, 405–6
Wallace, James, 31
Waller, S. W., 66
Washington Post
  sale of, 248–49
  search results for articles in, 328
  subscriptions bump after 2016 election, 258
watchdog journalism, 204
waterbed effect, 114
Watkins, Leo, 4, 7–8, 21, 397, 402, 404
Waymo, 36
Waze, 312
web crawlers, 298–99
WeChat, 161n6
WEF (World Economic Forum), 343
Western Union, 295–96
WhatsApp, 5, 38, 135, 248, 312, 373. See also Facebook/WhatsApp decision (2014)
Whitaker, James, 301
Whole Foods, 41, 43
Who Targets Me (group), 266, 281, 281n6, 282–84f
Wigmore, Andy, 272–73, 273n5
Wikileaks, 243
Wikipedia, 244
Williams, James, 387, 388
Windows (Microsoft), 31–32
winner-take-all nature of tech markets, 7, 24–31
World Economic Forum (WEF), 343
WPP (marketing company), 25
Wu, Timothy, 6, 391
Xbox (Microsoft), 27, 32
Yahoo!, 73, 137, 298, 305, 307, 321. See also Verizon/Yahoo decision (2016)
Yanez, Jeronimo, 253–54
Yelp, 73
Yoo, C. S., 66
YouGov, 134, 148
YouTube
  fake news and, 380
  free expression and, 222
  Google’s ownership of, 36
  media diversity and, 161, 161n6
  misinformation and, 346
  news diversity and, 135, 135f, 145–46, 146f
  press independence and, 244
  search engine results and, 312
  in search results, 332
Zittrain, Jonathan, 383
Zollo, Fabiana, 15, 342, 400
Zuboff, Shoshana, 184
Zuckerberg, Mark
  on fake news, 297
  founding of Facebook by, 22, 37
  free expression and, 225
  on mission of Facebook, 4, 376
  monetization of Facebook by, 247, 374
  on role of Facebook, 5, 162, 254–55, 256
Zweig, Katharina, 226