NEW MEDIA AND FREEDOM OF EXPRESSION

The principles of freedom of expression have been developed over centuries. How are they preserved and passed on? How can large internet gatekeepers be required to respect freedom of expression and to contribute actively to a diverse and plural marketplace of ideas? These are key issues for media regulation, and will remain so for decades to come. The book starts with the foundations of freedom of expression and freedom of the press, and then goes on to explore the general issues concerning the regulation of the internet as a specific medium. It then turns to analysing the legal issues relating to the operations of the three most important gatekeepers (ISPs, search engines and social media platforms) that affect freedom of expression. Finally, it summarises the potential future regulatory and media policy directions. The book takes a comparative legal approach, focusing primarily on English and American regulations, case law and jurisprudential debates, but it also details the relevant international (Council of Europe, European Union) developments, as well as the jurisprudence of the European Court of Human Rights.

Volume 25 in the series Hart Studies in Comparative Public Law

Hart Studies in Comparative Public Law

Recent titles in this series:

Regulation in India: Design, Capacity, Performance
Edited by Devesh Kapur and Madhav Khosla

The Nordic Constitutions
Edited by Helle Krunke and Björg Thorarensen

Human Rights in the UK and the Influence of Foreign Jurisprudence
Hélène Tyrrell

Australian Constitutional Values
Edited by Rosalind Dixon

The Scope and Intensity of Substantive Review: Traversing Taggart's Rainbow
Edited by Hanna Wilberg and Mark Elliott

Entick v Carrington: 250 Years of the Rule of Law
Edited by Adam Tomkins and Paul Scott

Administrative Law and Judicial Deference
Matthew Lewans

Soft Law and Public Authorities: Remedies and Reform
Greg Weeks

Legitimate Expectations in the Common Law World
Edited by Matthew Groves and Greg Weeks

The Dynamics of Exclusionary Constitutionalism
Mazen Masri

Constitutional Courts, Gay Rights and Sexual Orientation Equality
Angioletta Sperti

Principled Reasoning in Human Rights Adjudication
Se-Shauna Wheatle

Human Rights and Judicial Review in Australia and Canada
Janina Boughey

The Foundations and Traditions of Constitutional Amendment
Edited by Richard Albert, Xenophon Contiades and Alkmene Fotiadou

New Media and Freedom of Expression Rethinking the Constitutional Foundations of the Public Sphere

András Koltay

HART PUBLISHING
Bloomsbury Publishing Plc
Kemp House, Chawley Park, Cumnor Hill, Oxford, OX2 9PH, UK

HART PUBLISHING, the Hart/Stag logo, BLOOMSBURY and the Diana logo are trademarks of Bloomsbury Publishing Plc

First published in Great Britain 2019
Copyright © András Koltay, 2019

András Koltay has asserted his right under the Copyright, Designs and Patents Act 1988 to be identified as Author of this work.

All rights reserved. No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording, or any information storage or retrieval system, without prior permission in writing from the publishers.

While every care has been taken to ensure the accuracy of this work, no responsibility for loss or damage occasioned to any person acting or refraining from action as a result of any statement in it can be accepted by the authors, editors or publishers.

All UK Government legislation and other public sector information used in the work is Crown Copyright ©. All House of Lords and House of Commons information used in the work is Parliamentary Copyright ©. This information is reused under the terms of the Open Government Licence v3.0 (http://www.nationalarchives.gov.uk/doc/open-government-licence/version/3) except where otherwise stated. All Eur-lex material used in the work is © European Union, http://eur-lex.europa.eu/, 1998–2019.

A catalogue record for this book is available from the British Library.

Library of Congress Cataloging-in-Publication data
Names: Koltay, András, 1978–
Title: New media and freedom of expression : rethinking the constitutional foundations of the public sphere / András Koltay.
Description: Oxford, UK ; Chicago, Illinois : Hart Publishing, 2019. | Series: Hart studies in comparative public law ; volume 25 | Includes bibliographical references and index.
Identifiers: LCCN 2019007723 (print) | LCCN 2019009138 (ebook) | ISBN 9781509916504 (EPub) | ISBN 9781509916481 (hardback : alk. paper)
Subjects: LCSH: Digital media—Law and legislation. | Freedom of expression—Law and legislation. | Social media—Law and legislation. | Electronic information resource searching—Law and legislation. | Internet service providers—Law and legislation.
Classification: LCC K4240 (ebook) | LCC K4240 .K654 2019 (print) | DDC 342.08/53—dc23
LC record available at https://lccn.loc.gov/2019007723

ISBN: HB: 978-1-50991-648-1
ePDF: 978-1-50991-649-8
ePub: 978-1-50991-650-4

Typeset by Compuscript Ltd, Shannon

To find out more about our authors and books visit www.hartpublishing.co.uk. Here you will find extracts, author information, details of forthcoming events and the option to sign up for our newsletters.

Acknowledgements

I would like to thank Eric Barendt, Balázs Bartóki-Gönczy, Paul Bernal, Peter Coe, Mark Cole, Ian Cram, Amy Gajda, Tom Gibbons, Robert Kahn, Uta Kohl, Ron Krotoszynski, David Mangan, Oreste Pollicino, Tamás Szikora, Bernát Török, Erzsébet Xénia Udvari, András Varga Zs, Russell Weaver, Paul Wragg and Vincenzo Zeno-Zencovich for their valuable comments, suggestions and support. Thanks are also due to my colleagues who helped with the publication of the book, especially Édua Reményi, who did a sterling job editing it, and to Szabolcs Nagy and Stephen Patrick, who helped with the English adaptation of my original drafts. I am also grateful for the enormous support of the publisher. Without them this work would never have been born.


Table of Contents

Acknowledgements  v
List of Abbreviations  xiii
Table of Cases  xv
Introduction  1

1. The Foundations of Free Speech and Freedom of the Press  8
   I. Freedom of Speech in the Age of the Internet  8
      A. Searching for the Truth  8
      B. Operation of Democracy  11
      C. The Individualist Theory  12
      D. Justifications of the Freedom of Speech: Theory and Practice  13
      E. Reconciling the Justifications  14
   II. The Category of 'Speech' and the Scope of Protection  15
      A. Terminological Variations  15
      B. The Category of 'Speech'  16
         i. Speech and 'Speech'  16
         ii. 'Speech' and Action  17
      C. Categories and Protection  18
      D. The Open Debate of Public Affairs as the Core of Freedom of Speech  20
      E. Internet-Specific Questions  22
   III. Limitation of the Freedom of Speech  24
      A. The Protection of Freedom of Speech in Different Legal Systems  24
         i. The United States  24
         ii. The United Kingdom  25
         iii. The European Court of Human Rights  27
         iv. The European Union  28
      B. The Protection of Reputation and Honour  29
      C. The Protection of Privacy  32
      D. Instigation to Violence or a Criminal Offence and Threats Thereof  34
      E. Hate Speech  36
      F. Symbolic Speech  38
      G. False Statements  39
      H. Public Morals and Protection of Minors  40
         i. The Protection of Public Morals  40
         ii. Paedophilia  41
      I. Commercial Communication  42
   IV. Freedom of the Press and Media Regulation  44
      A. Freedom of Speech – Freedom of the Press  44
      B. The Concept of the Media, the Rights Holders of the Freedom of the Press  46
      C. Differentiated Regulation of Individual Media  50
      D. Censorship and Prior Restraint  50
      E. Special Privileges Awarded to the Media  52
         i. The Protection of Information Sources  53
         ii. Exemption from House Searches  53
         iii. Investigative Journalism  54
      F. The Principal Foundations of Special Privileges Awarded to the Media  54
      G. Content Regulation of the Media  57
         i. Protection of Minors  57
         ii. Hate Speech  58
      H. Media Pluralism  59

2. The Regulation of the Internet and its Gatekeepers in the Context of the Freedom of Speech  65
   I. Online Content Providers as 'Media'  65
      A. Introduction  65
      B. Frontier, Architecture, Feudalism and Other Metaphors  67
      C. Initial Hopes and the Reality of the Internet  70
      D. Old Problems in a New Context  72
      E. New Problems on the Horizon  74
         i. A Decline of Professional Media?  74
         ii. Social Fragmentation and Polarisation  75
         iii. The Issue of Applicable Law  76
      F. Regulatory Analogies and Starting Points  77
         i. On the Freedom of the Internet  77
         ii. Internet Services as the Subjects of the Freedom of Speech and the Press  79
         iii. Applying the Limits of Offline Speech in an Online Environment  79
         iv. Media, Platform and General Regulations with an Impact on the Internet  80
         v. Internet Access as a Fundamental Right?  81
   II. The Regulation of Internet Gatekeepers  82
      A. The Roles and Types of Gatekeepers  82
      B. The 'Freedom of Speech' of Gatekeepers and Algorithms  84
      C. The Problems of the Media Survive in the Activities of Gatekeepers  87
         i. Media or Tech Companies?  87
         ii. Government Intervention  88
         iii. Private Censorship  88
         iv. Diversity and Pluralism  90
         v. Impact on the Audience  90
      D. Analogies in Media Regulation and Regulatory Ideas  91
         i. 'Editing'  91
         ii. Previous Analogies in Media Regulation  92
         iii. Recommendations of the Council of Europe  93
      E. The Liability of Gatekeepers in General  94
         i. The European Union  94
         ii. The United States  98

3. Internet Service Providers  102
   I. Introduction  102
   II. Obligations of the Internet Service Providers Regarding Illegal Content  103
      A. Blocking and Filtering  103
      B. Self-regulation  105
      C. Injunctions against Internet Service Providers  106
   III. The Problem of Network Neutrality  109
      A. Theoretical Questions  109
      B. Regulation in the United States  111
      C. Regulation in Europe  112
   IV. Censorship by Internet Service Providers  113

4. Search Engines  115
   I. Introduction – The Role of Search Engines in Online Public Sphere  115
   II. Search Results as Speech  117
      A. Search for Analogies  117
      B. Search Results as Opinions  120
      C. The Regulation of Search Engines  122
   III. The Liability of Search Engines for Violations of Personality Rights  124
      A. Search Engines as Publishers  124
      B. Defamatory Search Results  126
      C. Defamatory Autocomplete Suggestions  129
      D. Other Violating Content  131
      E. The Right to Be Forgotten  132
   IV. The Manipulation of Search Results  137
      A. The Issue of Search Engine Neutrality  137
      B. External Manipulation  141
      C. Internal Manipulation  141
   V. Summary  145

5. Social Media Platforms  146
   I. Introduction  146
   II. Social Media Platforms and the Democratic Public Sphere  147
      A. New Forms of Speech and the Expansion of the Public Sphere  147
      B. 'Bubbles' and Other Psychological Effects of Social Media Platforms  149
      C. Social Media as a Public Forum  152
      D. 'Cheap Speech' and the Traditional Media  154
      E. Tech or Media Companies?  157
   III. The Regulation of Platforms by Legislation  159
      A. Introduction  159
      B. Applicable Legislation  160
      C. Jurisdictional Issues  165
      D. Protection of Reputation  167
      E. Protection of Privacy  170
      F. Threats, Hate Speech and Other Violent Content Breaching Public Order  173
      G. Fake News  177
   IV. Private Regulation by Platforms  180
      A. Introduction  180
      B. The Legal Basis of Private Regulation: Contract Terms and Conditions  183
      C. Moderation and Private Censorship  187
         i. The Pros and Cons of Moderation  187
         ii. The Legal Status of Moderation – Possible Analogies  188
         iii. Community Standards and Codes of Conduct  190
      D. Editing and Content Diversity on Social Media Platforms  194
      E. Fake News and Private Regulation  197
      F. The Ideology of a Platform  199
   V. Summary  200

6. Gatekeepers' Responsibility for Online Comments  202
   I. The Case of Online Comments  202
      A. Comments as 'Speech'  202
      B. Anonymity  202
      C. Moderation  204
      D. Basis of Legal Responsibility for Unlawful Comments  205
   II. The European Court of Human Rights Case Law Relating to Comments – Overview  209
   III. The Relevant Criteria in the Cases before the European Court of Human Rights  211
      A. The Content of the Comment  211
      B. Identifiability of the Author  212
      C. The Content Provider  213
      D. The Party Attacked  213
      E. The Effect of the Comment on the Party Attacked  214
      F. The Conduct of the Content Provider  214
      G. Sanctions Applied  215
      H. Summary  216
   IV. Main Criticism of the Jurisprudence of the European Court of Human Rights  216
      A. Liability of a Gatekeeper (Content Provider)  216
      B. Importance of the 'Economic Service'  217
      C. Expecting Moderation  218
      D. Assessment of the Comment's Content  218
   V. The Case of Social Media Comments  219
   VI. Summary  220

7. The Future of Regulating Gatekeepers  222
   I. Introduction  222
   II. Possible Interpretations of Existing Legal Doctrines Concerning the Public Sphere  223
      A. Media Regulation  223
      B. The Law of Electronic Commerce  225
      C. The Law of Contract  225
      D. The Law of Public Forums  227
      E. The Chances of Finding a Comprehensive Regulatory Solution  229
   III. The Possible Models of Future European Regulation  230
      A. Transferring the US Model to Europe  230
      B. Preserving the Status Quo through Limited Regulatory Intervention (Weaker Co-regulation Model)  231
      C. Strengthening Government Regulation (Stronger Co-regulation Model)  235
      D. Prohibiting Private Regulation  239
   IV. Summary  241

Index  243


List of Abbreviations

Art – Article
Arts – Articles
AVMS Directive – Directive 2010/13/EU of the European Parliament and of the Council of 10 March 2010 on the coordination of certain provisions laid down by law, regulation or administrative action in Member States concerning the provision of audiovisual media services (Audiovisual Media Services Directive)
CDA – Communications Decency Act 47 USC § 230 (1996)
CJEU – Court of Justice of the European Union
E-Commerce Directive – Directive 2000/31/EC of the European Parliament and of the Council of 8 June 2000 on certain legal aspects of information society services, in particular electronic commerce, in the Internal Market (Directive on electronic commerce)
ECHR – European Convention on Human Rights
ECJ – European Court of Justice
ECtHR – European Court of Human Rights
EU – European Union
FCC – Federal Communications Commission
GDPR – Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation)
New AVMS Directive – Directive (EU) 2018/1808 of the European Parliament and of the Council of 14 November 2018, amending Directive 2010/13/EU on the coordination of certain provisions laid down by law, regulation or administrative action in Member States concerning the provision of audiovisual media services (Audiovisual Media Services Directive) in view of changing market realities
para – paragraph
paras – paragraphs
pt – part
pts – parts
s – section
ss – sections
StGb – Strafgesetzbuch (German Criminal Code)
UK – United Kingdom
US – United States (of America)

Table of Cases

Australia
Bleyer v Google, Inc [2014] NSWSC 897  127
Google, Inc v Duffy [2017] SASCFC 130  128, 130
Google, Inc v Trkulja [2016] 342 ALR 504  127
Lange v Australian Broadcasting Corporation [1997] 189 CLR 520  21
Trkulja v Google, Inc (No 5) [2012] VSC 533  126
Trkulja v Google, Inc [2015] VSC 635  127, 130
Trkulja v Google, LLC [2018] HCA  127–28

Canada
Google, Inc v Equustek Solutions, Inc [2017] SCC 34  135
Crookes v Newton [2011] SCC 47  149

European Court of Justice
C-68/93, Fiona Shevill, Ixora Trading, Inc, Chequepoint SARL and Chequepoint International Ltd v Presse Alliance SA  76
C-70/10, Scarlet Extended SA v Société belge des auteurs, compositeurs et éditeurs SCRL (SABAM)  97, 108, 206
C-155/73, Italy v Sacchi  29
C-131/12, Google Spain SL, Google Inc v Agencia Española de Protección Datos Mario Costeja González  29, 92, 97, 132, 134
C-210/16, Unabhängiges Landeszentrum für Datenschutz Schleswig-Holstein v Wirtschaftsakademie Schleswig-Holstein GmbH  171
C-236/08–C-238/08, Google France SARL and Google, Inc v Louis Vuitton Malletier SA and Others  96, 122
C-314/12, UPC Telekabel Wien GmbH v Constantin Film Verleih GmbH and Wega Filmproduktionsgesellschaft mbH  108
C-324/09, L'Oréal SA and Others v eBay International AG and Others  96, 206
C-360/10, Belgische Vereniging van Auteurs, Componisten en Uitgevers CVBA (SABAM) v Netlog NV  97, 206
C-484/14, Tobias Mc Fadden v Sony Music Entertainment Germany GmbH  107
C-498/16, Schrems v Facebook Ireland Ltd  166, 185
C-509/09, eDate Advertising GmbH v X and C-161/10, Olivier Martinez and Robert Martinez v MGN  76

European Court of Human Rights
Ahmet Yıldırım v Turkey, no 3111/10  104
Aksoy v Turkey, no 28635/95  36
Animal Defenders International v the United Kingdom, no 48876/08  63
Arslan v Turkey, no 23462/94  36
Ashby Donald and Others v France, no 36769/08  22, 79
Autronic AG v Switzerland, no 15/1989/175/231  49, 79
Bărbulescu v Romania, no 61496/08  172
Barthold v Germany, no 10/1983/66/101  43
Barthold v Germany, no 8734/79  27
Bladet Tromsø and Stensaas v Norway, no 21980/93  31–32
Bowman v the United Kingdom, no 141/1996/760/961  13, 18
Casado Coca v Spain, no 8/1993/403/481  27
Castells v Spain, no 11798/85  27
Cengiz and Others v Turkey, nos 48226/10 and 14027/11  104
Centro Europa 7 srl and Di Stefano v Italy, no 38433/09  60
Ceylan v Turkey, no 23556/94  27
Dammann v Switzerland, no 77551/01  54
Delfi AS v Estonia, no 64569/09  14, 23, 50, 78–79, 88, 97, 204, 207, 209–18, 220
Demuth v Switzerland, no 38743/98  46
Ediciones Tiempo SA v Spain, no 13010/87  61
Ernst and Others v Belgium, no 33400/96  54
Financial Times and Others v the United Kingdom, no 821/03  53
Flux and Samson v Moldova, no 28700/03  47
Garaudy v France, no 65831/01  20, 39
Glimmerveen and Hagenbeek v the Netherlands, no 8348/78  37
Goodwin v the United Kingdom, no 16/1994/463/544  27, 53
Guja v Moldova, no 14277/04  53
Gündüz v Turkey, no 35071/97  27, 38
Handyside v the United Kingdom, no 5493/72  14, 22, 28, 37, 41
Hertel v Switzerland, no 59/1997/843/1049  27, 43
Incal v Turkey, no 41/1997/825/10  27
Informationsverein Lentia v Austria, nos 13914/88, 15041/89, 15717/89, 15779/89 and 17207/90  59
Jacubowski v Germany, no 7/1993/402/480  43
Jersild v Denmark, no 15890/89  38
Kaperzynski v Poland, no 43206/07  61
Karhuvaara and Iltalehti v Finland, no 53678/00  34
Karsai v Hungary, no 5380/07  169
Kulis v Poland, no 15601/02  34
Lehideaux and Isorni v France, no 55/1997/839/1045  39
Lingens v Austria, no 12/1984/84/131  13, 31
Lingens v Austria, no 9815/82  27, 167
Magyar Tartalomszolgáltatók Egyesülete and Index.hu Zrt v Hungary, no 22947/13  14, 23, 97, 206
Manole v Moldova, no 13936/02  60
Markt Intern and Beerman v Germany, no 3/1988/147/201  43
Melnychuk v Ukraine, no 28743/03  61
MGN Ltd v the United Kingdom, no 39401/04  46
ML and WW v Germany, no 60798/10 and 65599/10  135
Mosley v the United Kingdom, no 48009/08  22, 79
Müller and Others v Switzerland, no 10737/84  41
Observer and Guardian v the United Kingdom, no 13585/88  46, 52
Open Door Counselling and Dublin Woman v Ireland, no 64/1991/316/387–388  43
Otto-Preminger-Institut v Austria, no 11/1993/406/485  28
Peck v the United Kingdom, no 44647/98  34
Perinçek v Switzerland, no 27510/08  40
Perrin v the United Kingdom, no 5446/03  22, 79
Pihl v Sweden, no 4742/14  23, 207, 210, 212–18
Sener v Turkey, no 26680/95  36
Stoll v Switzerland, no 69698/01  54
Sunday Times v the United Kingdom, no 6538/74  28, 46
Sunday Times v the United Kingdom (No 2), no 13166/87  46, 52
Sürek and Özdemir v Turkey, no 23927/94 and 24277/94  27, 36
Sürek v Turkey (No 1), no 26682/95  27
Tamiz v the United Kingdom, no 3877/14  210, 212–14
Tele 1 Privatfernsehgesellschaft mbH v Austria, no 32240/96  59
Thorgeir Thorgeirsson v Iceland, no 13778/8  46
Thorgeirson v Iceland, no 47/1991/299/370  31
Tillack v Belgium, no 20477/05  54
Times Newspapers Ltd v the United Kingdom (Nos 1 and 2), nos 3002/03 and 23676/03  22, 79
TV Vest AS and Rogaland Pensjonistparti v Norway, no 21132/05  63
Uj v Hungary, no 23954/10  46, 214
Vajnai v Hungary, no 33629/06  39
Verein gegen Tierfabriken Schweiz (VgT) v Switzerland, no 24699/94  63
Von Hannover v Germany, no 59320/00  34, 171
Von Hannover v Germany (No 2), nos 40660/08 and 60641/08  34
Von Hannover v Germany (No 3), no 8772/10  34
White v Sweden, no 42435/02  47
Wingrove v the United Kingdom, no 19/1995/525/611  28
Witzsch v Germany, no 41448/98  20, 39
Yildirim v Turkey, no 3111/10  88

Germany
Themel v Facebook Ireland, Inc, 18 W 1294/18  186, 226
User v Facebook Ireland, Inc, 1O 71/18  186, 226

Hong Kong
Oriental Press Group Ltd and Others v Fevaworks Solutions Ltd and Others [2013] HKCFA 47  208

Ireland
Muwema v Facebook Ireland Ltd [2016] IEHC 519  167–68

The United Kingdom
Bonnard v Perryman [1891] 2 Ch 269, CA  51
Brutus v Cozens [1973] AC 854  26
Bunt v Tilley [2006] EWHC 407 (QB)  108, 124
Campbell v MGN [2004] 2 AC 457, HL  33, 171
Carr v News Group Newspapers Ltd and Others [2005] EWHC 971  132
Cartier International AG and Others v British Telecommunications Plc and Another [2018] UKSC 28  109
CG v Facebook Ireland Ltd [2016] NICA 54  78, 166, 171–72
Cream Holdings v Banerjee [2004] 4 All ER 617  52
DPP v Collins [2006] UKHL 40  173
Emmens v Pottle [1885] 16 QBD 354  124
Flood v Times Newspapers [2012] UKSC 11  30, 47
Fraser v Evans [1969] 1 QB 349, CA  51
Godfrey v Demon Internet Ltd [1999] EWHC QB 244  124
Greene v Associated Newspapers [2005] 1 All ER 30  52
Jameel v The Wall Street Journal Europe (No 2) [2006] UKHL 44  30
McLeod v St Aubyn [1899] AC 549  124
Metropolitan International Schools Ltd v (1) Designtechnica Corp (2) Google UK Ltd (3) Google, Inc [2009] EWHC 1765 (QB)  126
NT1 and NT2 v Google, Inc [2018] EWHC 799 (QB)  135
PJS v News Group Newspapers Ltd [2016] UKSC 26  34
R (on the application of ProLife Alliance) v BBC [2003] 2 All ER 977, HL  58
R v Secretary of State for the Home Department, ex parte Simms [2000] 2 AC 115  26
Reynolds v Times Newspapers Ltd [2001] 2 AC 127  26, 30, 47, 167
Silkin v Beaverbrook Newspapers [1958] 1 WLR 743  26
Smith v Trafford Housing Trust [2012] EWHC 3221 (Ch)  172
Tamiz v Google, Inc [2013] EWCA Civ 68  124, 208, 211
The Lord McAlpine of West Green v Sally Bercow [2013] EWHC 1342 (QB)  148
Vizetelly v Mudie's Select Library Ltd [1900] 2 QB 170  124
X (formerly known as Mary Bell) and Y v News Group Newspapers Ltd and Others [2003] EWHC 1101 (QB)  132

The United States of America
Abrams v United States 250 US 616 (1919)  10, 13, 47
Ashcroft v ACLU 535 US 564 (2002)  104
Ashcroft v ACLU 542 US 656 (2004)  104
Ashcroft v Free Speech Coalition 535 US 234 (2002)  42
Barenblatt v United States 360 US 109 (1959)  24
Barnes v Glen Theatre, Inc 501 US 560 (1991)  16
Batzel v Smith 333 F3d 1018, 1033 (9th Cir 2003)  168
Bland v Roberts 857 FSupp2d 599 (ED Va 2012)  147
Bland v Roberts No 12-1671 (4th Cir 2013)  24, 74, 148
Brandenburg v Ohio 395 US 444 (1969)  35
Brown v Entertainment Merchants Association 131 SCt 2729 (2011)  57
Buckley v Valeo 424 US 1 (1976)  17–18
Burwell v Hobby Lobby 573 US ___ (2014)  18
Central Hudson Gas and Electric Corp v Public Service Commission of New York 447 US 557 (1980)  19, 25, 42–43
Chaplinsky v New Hampshire 315 US 568 (1942)  19, 25, 35
Citizens United v Federal Election Commission 558 US 310 (2010)  17–18
City of Erie v Pap's AM 527 US 277 (2000)  16
Cohen v California 403 US 15 (1971)  22
Cohen v Facebook, Inc 2017 WL 2192621 (EDNY 2017)  100
Collin v Smith 578 F2d 1197 (7th Cir 1978)  36
Comcast Corp v FCC 600 F3d 642 (2010)  111
Curtis Publishing Co v Butts, Associated Press v Walker 388 US 130 (1967)  30
Doe v MySpace, Inc 528 F3d 413 (5th Cir 2008)  100
Elonis v United States 575 US ___ (2015)  175
Fair Housing Council of San Fernando Valley v Roommates.com LLC 521 F3d 1157 (9th Cir 2008)  100, 131
FCC v Fox Television Stations, Inc 132 SCt 2037 (2012)  57
FCC v Pacifica Foundation 438 US 726 (1978)  57
Fields v Twitter, Inc 2016 WL 6822065 (ND Ca 2016)  100
Franco Caraccioli v Facebook, Inc No 16-15610 (9th Cir 2017)  168
Freedman v Maryland 380 US 51 (1965)  51
Gertz v Welch 418 US 323 (1974)  30, 39
Griswold v Connecticut 381 US 479 (1965)  32
Hustler Magazine v Falwell 485 US 46 (1988)  30
Jane Doe No 1 v Backpage.com, LLC 817 F3d 12 (1st Cir 2016)  100
Kinderstart.com, LLC v Google, Inc C 06-2057 JF (RS) (ND Ca 2007)  142
Knight First Amendment Institute at Columbia University v Trump No 17 Civ 5205 (NRB) (SDNY 2018)  154, 228
Langdon v Google Corporation Inc, Yahoo! Inc, Microsoft Corporation, Inc No 06-319-JJF 2007  143
McIntyre v Ohio Elections Commission 514 US 334 (1995)  203
Miami Herald Publishing v Tornillo 418 US 241 (1974)  62–63
Miller v California 413 US 15 (1973)  40, 42, 57
National Endowment for the Arts v Finley 524 US 569 (1998)  17–18
Near v Minnesota 283 US 697 (1931)  25, 51
New York Times v Sullivan 376 US 254 (1964)  13, 19, 29–30, 39, 118, 167–68, 173, 178–79
New York v Ferber 458 US 747 (1982)  19, 42
Packingham v North Carolina 137 SCt 1730 (2017)  81, 152–53, 228
RAV v City of St Paul 505 US 377 (1992)  20, 25, 36
Reno v ACLU 521 US 844 (1997)  65, 77, 111, 152
Rosenberger v Rector and Visitors of University of Virginia 515 US 819 (1995)  18
Rosenblatt v Baer 383 US 75 (1966)  30
Rust v Sullivan 500 US 173 (1991)  18
Schenck v the United States 249 US 47 (1919)  34
Search King, Inc v Google Technology, Inc No Civ-02-1457-M (WD Okla 2003)  24, 119, 138, 140, 142, 144
Smith v California 361 US 147 (1959)  24
Snyder v Phelps 562 US 443 (2011)  33
State v Packingham 777 SE2d 738 (NC, 2015)  81
Street v New York 394 US 576 (1969)  38
Texas v Johnson 491 US 397 (1989)  38
Turner Broadcasting System v FCC 512 US 622 (1994)  64, 85, 110
Turner Broadcasting System v FCC 520 US 180 (1997)  64, 85, 110
United States v Alvarez 132 SCt 2537 [567 US 709 (2012)]  19, 39, 178
United States v American Library Association 539 US 194 (2003)  120
United States v O'Brien 391 US 367 (1968)  17, 38
United States v Stevens 130 SCt 1577 (2010)  40
United States v Stevens 559 US 460 (2010)  19
United States v Williams 128 SCt 1830 (2008)  42
Valentine v Chrestensen 316 US 52 (1942)  42
Verizon v FCC 740 F3d 623 (DC Cir 2014)  111
Virginia State Board of Pharmacy v Virginia Citizens Consumer Council 425 US 748 (1976)  17, 42
Virginia v Black 538 US 343 (2003)  25, 35–36
Watts v the United States 394 US 705 (1969)  25
Whitney v California 274 US 357 (1927)  13, 15
Zeran v America Online, Inc 129 F3d 327 (4th Cir 1997)  65, 99
Zhang v Baidu.com, Inc 932 FSupp2d 561 (2013)  24, 119, 123, 144–45
Zurcher v Stanford Daily 436 US 547 (1978)  44

Introduction

I

n from hell, a graphic novel written by Alan Moore, Victorian serial killer Jack the Ripper states, as a kind of explanation for his actions, that ‘One day, men will look back and say that I gave birth to the twentieth century.’1 Even though this ­sentence does not appear in any letter written by Jack the Ripper to Scotland Yard (assuming that those letters were indeed written by him), and it is an obvious and deliberate exaggeration used for rhetorical effect, it is noticeable because Jack was the first serial killer who became a media phenomenon in the modern sense of the term; he committed his crimes in an era of explosive developments in tabloid journalism, thereby providing suitable material for the masses, who craved sensational stories,2 and his story still has business potential some 150 years later. His terrible crimes and their media coverage were intertwined and, in a sense, the killer showed the way to the next era of tabloid journalism. When Steve Stephens broadcast a live Facebook video feed of himself mercilessly shooting a random and innocent pedestrian in April 2017, it seemed clear that we have arrived at a new era of tabloid ‘journalism’, when we can watch murders being committed in real time and killers can satisfy their craving for attention.3 Even though the murder committed by Stephens is not attributable to social media, just as the crimes of Jack the Ripper were not caused by the hunger of contemporary media for sensation, it is a sure sign that the public sphere of this day and age has a different quality than the one created by traditional media (printed press, radio and television). In the twenty-first century, most public debates are conducted online, and major platforms and businesses have user numbers and economic power, including the power to shape public opinions, unprecedented in media history. As a paraphrase of the statement made by the Victorian killer in Moore’s novel, one might almost say that Google and Facebook gave birth to (the public sphere of) the twenty-first century. The rise of the Internet since the 1990s reaffirms Marshall McLuhan’s thesis from the 1960s (commonplace during the last decades but which has never been disproved) that the ‘Medium is the message’.4 In light of the earlier means of public discourse, the above statement seems especially applicable to online communication, as it has an impact on the operation of the public sphere not only through the content of communications, but also through the particular methods, structure and architecture of online communication; this equally applies to private correspondence (e-mail), friendly conversations (chat services, social media platforms), looking at pictures and watching videos

1 Alan Moore and Eddie Campbell, From Hell (London, Knockabout Comics, 1999). This sentence is also used in a movie based on the comics (From Hell, Albert Hughes (the movie), 20th Century Fox, 2001). 2 See L Perry Curtis, Jr, Jack the Ripper and the London Press (New Haven, CT, Yale University Press, 2001). 3 Daniel Kreps, ‘Alleged Facebook Killer: What We Know So Far’ Rolling Stone (11 April 2017) at https:// www.rollingstone.com/culture/culture-news/alleged-facebook-killer-what-we-know-so-far-121260/. 4 Marshall McLuhan, Understanding Media: The Extensions of Man (New York, McGraw-Hill, 1964).

2  Introduction (video-sharing portals), the means of accessing information (search engines), discussions of public affairs, and the nature and use of information. People speak much more in public, or at least somewhat publicly, and access much more information than ever before – but we use entirely different means than before to do so. McLuhan’s main thesis is that while we focus on the content (message) conveyed by the media, the media (technology) itself slowly and unnoticeably transforms social norms, values, our rules of coexistence and the ways we manage our affairs. This is the real content of the ‘media’. Everyday life has been changed by new online services (including search engines and the social media) considerably and certainly not in a subtle way; such changes also have had an impact on the nature of public discourse, as well as on the meaning of terms such as ‘knowledge’ and ‘information’. Some even argue that the functioning of the human brain has changed to adapt to the new means of communication.5 It seems somewhat ironic that the Internet was heralded in the beginning as the domain of unlimited freedom and free speech without government interference,6 even though its development was commissioned by the US Government for use by US defence forces, because the Internet was seen by the Government as a suitable means of monitoring c­ itizens, and it was opened for non-military use only decades after its creation, through the privatisation of the existing infrastructure.7 The idea of a ‘free Internet without a government’ was utopian from the very beginning – either a naive dream or the product of a sinister scheme to mislead people. Freedom of speech is a cornerstone of democracy and western civilisation. Its fundamental principles and doctrines were elaborated over the course of centuries and take their time to change. The emergence of the Internet posed new challenges to those fundamental principles. Some two decades ago, the Internet gave rise to wild and romantic hopes and ideas that the emergence of the new medium would democratise the public sphere and culture without any legal interference, simply through the nature of technology itself.8 Such early expectations have been replaced by a general sense of disappointment, as people have realised that many of the offline media’s problems apply to the Internet as well (see, eg, the issue of ownership concentration), while it is also affected by new and Internet-specific challenges that hinder the functioning of a democratic public sphere. As Lawrence Lessig, a pioneer in scholarly theories on the function of the Internet, put it when he recognised this disillusionment, ‘The Net was not in Kansas anymore’.9 ­Nonetheless, we must not forget that online communication has brought about revolutionary developments in several fields, and it has widened the public sphere. However, while these achievements form part of the everyday life of many people, they cannot let us forget the difficulties we have to face concerning this new medium. There seems to be

5 Nicholas Carr, The Shallows: What the Internet Is Doing to Our Brains (New York, WW Norton, 2011). 6 John Perry Barlow, ‘A Declaration of the Independence of Cyberspace’ Electronic Frontier Foundation (8 February 1996) at www.eff.org/cyberspace-independence. 7 Yasha Levine, Surveillance Valley: The Secret Military History of the Internet (New York, Public Affairs, 2018). 8 Jack Balkin, ‘Digital Speech and Democratic Culture: A Theory of Freedom of Expression for the Information Society’ (2004) 79 New York University Law Review 1. 9 Reference to a line of Dorothy in the 1939 movie The Wizard of Oz, when she realises that she and Toto have left their safe home. See Lawrence Lessig’s ‘Foreword’ in Jonathan Zittrain, Future of the Internet – And How to Stop It (New Haven, CT, Yale University Press & Penguin UK, 2008) viii.

There seems to be a strange duality here: even though people have more opportunities to address the public and access information than ever before, the digitisation of everyday life implies a danger for many people that human culture and communication may become shallower than they were before.10

A particular and worrying phenomenon in online communication is that gatekeepers have an increased influence on the operation of the public sphere. This is a bizarre turn indeed, considering that the initial promise of the public Internet was to make traditional gatekeepers (press houses, newspaper stands, post offices, cable service providers, etc) less important and influential. Instead, the Internet created new gatekeepers with greater influence over the public than ever before (in addition to creating a radical expansion of the public sphere). Just like in the offline world, services that produce content and services that transmit content to the audience both play an important role in online communication. Internet service providers, search engines and video-sharing and social media platforms (ie gatekeepers or intermediaries) are indispensable parts of the system, and they all can influence the public sphere. They are capable of rendering pieces of content unavailable or of raising considerable obstacles to accessing content, while they can push other content in front of the general public. While they challenge numerous aspects of the existing legal framework (data protection, privacy, defamation, hate speech), they have also become dominant actors in the public sphere.

Both major online platform providers and governments, the latter being in charge of regulating such providers, suffer from mutual schizophrenia. On the one hand, governments try to force service providers to remove certain pieces of content (eg hate speech, user-generated content that may jeopardise children or violate personality rights) from their services; on the other hand, service providers are trusted to judge the legal status of such pieces of content, meaning that governments essentially ‘outsource’ the courts’ monopoly to apply the law. At the same time, the service providers concerned also select content according to their own policies, deciding on what to delete, who to silence and what items to feature for their users. This means the enforcement of a ‘pseudo legal system’, with its own code, case law, sanctions and fundamental approach to the matter of the pluralism of opinions – all this taking place in a privately-owned virtual space. Even though the Internet promised unlimited access to opinions some 20 years ago, the emergence of a monopoly of opinion is much more likely today than it ever was before; while nation states are helplessly watching the erosion of their legal system and the fall of their constitutional guarantees protecting the public sphere, they are (at least in part) voluntarily delegating important chunks of their law enforcement tasks to tech giants.

Earlier in history, the freedom of individuals needed, and was afforded, protection against the government by restricting government powers. Today, the human rights rules and principles developed and elaborated over hundreds of years need to be applied and enforced with regard to the trilateral relationships between governments, citizens and service providers. This book covers issues relating to the regulation of a democratic public sphere, with due regard to the activities of online gatekeepers.
10 Jonathan Franzen, ‘What’s Wrong with the Modern World’ The Guardian (13 September 2013); Andrew Keen, The Cult of the Amateur: How Today’s Internet Is Killing Our Culture (New York, Currency, 2007).

For our purposes, ‘public sphere’ means Jürgen Habermas’ ideal of debate and discussion on public affairs,11 in which far more people participate today than at the time Habermas wrote his book, thanks to general public access to the Internet. This public sphere is a counterbalance to excessive political power, acts as a guarantee of social balance, and provides protection against both government and excessive private power. For our purposes, an online gatekeeper is defined as a person or entity whose activity is necessary for publishing the opinion of another person or entity on the Web, a category that includes Internet and blog service providers, social media, search engine providers, entities selling apps, webstores, news portals, news aggregating sites and the content providers of websites who can decide on the publication of comments to individual posts.

This book reviews how gatekeepers respond to the speech of their users (ie citizens expressing their opinions on public affairs) and how and why they restrict such discussions. It also considers the legal status of ‘speech’ by gatekeepers, namely whether and when their activities can be considered ‘speech’ and what protection (if any) should be afforded to them, as well as the issue of whether or not gatekeepers can be held liable for their users’ unlawful speech.12

A fundamental question raised in this book is whether or not online gatekeepers must be regarded as media, considering that they do not produce or publish their own content but merely host content produced by their users and make it available. One conclusion is that the activities of some gatekeepers may be regarded as a kind of editing, as they include the organisation and sorting of user-generated content, which in turn implies making decisions on presenting individual pieces of content to users, thereby influencing their chances of being included in a public debate. Furthermore, intermediaries have no way of escaping the need to judge the legality of content generated by their users – either to serve their own purposes, or because they are coerced into doing so by government agencies. This means that gatekeepers can, and are sometimes even obliged to, remove pieces of content from their systems, and they are free to decide to ‘hide’ (render quasi-invisible) otherwise non-infringing content or push (display in a featured location) whatever content they wish to the general public. In this context, the inevitable conclusion seems to be that gatekeepers are in fact editors, even if they are somewhat different from the editors of traditional media outlets.

This complex set of relationships necessitates the examination of the various issues from multiple aspects. One must consider: (i) the relationship between a speaker and government, including the duties (if any) a gatekeeper must perform with regard to speech crossing the limits of the freedom of speech; (ii) the relationship between government and gatekeepers, which also raises various questions, such as the issue of gatekeepers’ liability for infringing user-generated content and the matter of legal obligations to take action against infringing content; and (iii) the relationship between gatekeepers and speakers – in addition to the removal of infringing content, various questions arise from the discretionary power of gatekeepers to remove or hide certain opinions from their platforms.

11 Jürgen Habermas, The Structural Transformation of the Public Sphere: An Inquiry into a Category of Bourgeois Society (Cambridge, Polity, 1989 [1962]). 12 Jack Balkin, ‘Free Speech in the Algorithmic Society: Big Data, Private Governance, and New School Speech Regulation’ (2018) 51 University of California Davis Law Review 1149, 1152, 1157–58.

The questions pertaining to such powers of gatekeepers seem to have a new quality, as gatekeepers ‘edit’ content both pursuant to government instructions and on their own initiative and discretion, and their decision-making powers are not limited by the traditional principles, standards, laws and legal practices concerning the freedom of speech. This is why, from a constitutional perspective, we cannot say that gatekeepers restrict the freedom of speech, but in effect that is exactly what happens. Even though content deleted or suppressed by a gatekeeper can be published in another way or on another site, some influential gatekeepers are capable of exerting considerable influence over the public in and of themselves. The privatisation of the freedom of speech is a central issue discussed in this book, meaning that the limits of that freedom are drawn by private actors, that is, major platform and infrastructure providers. It is argued that this process brings about a paradigm shift in the constitutional protection of speech, as well as in the role of government in maintaining and preserving the democratic public sphere. As Damian Tambini and his co-authors put it, we are living in an era of the ‘deconstitutionalisation of freedom of speech’.13 This process gave birth to new ways and means of restricting speech, either because the courts apply traditional legal doctrines to new circumstances in lawsuits between users, or because online gatekeepers restrict their users’ freedom of speech and right of access to information.

Data protection and privacy-related issues are mentioned but not covered thoroughly in this book. Some readers might want to set this book aside right here and now, and they might be right, considering that the business model of major online gatekeepers is based on the surveillance of users (even prospective users) and on the exploitation of the data collected on them; not to mention that government surveillance of citizens is also supported now by market actors. The Internet, as a phenomenon, may not be understood without addressing the issue of surveillance, which makes privacy and data protection one of the most important legal fields in the context of the Internet. The issue of privacy and data protection is also relevant to the questions raised by content-related interference, as most gatekeepers (social media platforms, search engines and webstores) determine what to show to users and what to hide from them on the basis of the data collected on those users. Facebook’s news feed or Google’s search results may not be understood properly without the information each gatekeeper has on each user of their services. If a news feed or a search ranking is considered an ‘opinion’ that is eligible for constitutional protection, the collection of data serving as a basis for such an opinion should also be assessed from a legal perspective.14

Similarly, copyright and trademark issues, as well as the problem of and possible ways of managing media ownership concentration, are of great importance in the context of the Internet, but they are mentioned only incidentally in this book. The main focus is on the direct impact of gatekeepers on content, as well as the impact of the gatekeepers’ services on the public sphere. Issues that affect content regulation only indirectly (such as data protection, copyright and ownership restrictions) are not covered here.
13 Damian Tambini, Danilo Leonardi and Christopher T Marsden, Codifying Cyberspace: Communications Self-Regulation in the Age of Internet Convergence (London, Routledge, 2007) 275. 14 Lisa M Austin, ‘Technological Tattletales and Constitutional Black Holes: Communications Intermediaries and Constitutional Constraints’ (2016) 17 Theoretical Inquiries in Law 451.

The actors not covered here also include gatekeepers who have only a limited impact on public discourse and freedom of speech due to their size or nature, such as blogs and websites managed by a single content provider (which may still be involved in public discourse; see chapter 6 on online comments), as well as webstores. It seems safe to say that, for example, the book selection offered by Amazon (ie the list of books marketed by it and removed from its selection) does have an impact on the freedom of speech, but that impact is only indirect and far more limited than the impact of the other gatekeepers mentioned in this book, whose activities sometimes make it outright necessary to interfere with the public sphere.

In general, this book attempts to tackle the issues covered with due care and diligence. Even though our intuition may suggest otherwise, most of the problems raised by online gatekeepers are not entirely new but were also present in the traditional media sector. In the early 1960s, Habermas himself warned of the end of the public sphere, as he was worried that public discourse and debate pertaining to public affairs was being stifled by the commercialisation and consumerisation of mass media. The argument that democracy is weakened by new online services (eg the matter of fake news) is not entirely new either. It is not argued here that all problems could be solved by adapting existing solutions to the new circumstances, as not all components of the new media order are analogous to a phenomenon that was already known in the context of traditional media. It seems doubtful that the limitation of speech by private actors could be tackled within the centuries-old framework of fundamental rights.

It also seems that the meaning of ‘regulation’ has become somewhat elastic. Governments can (or at least can attempt to) regulate the activities of both gatekeepers and users, including matters pertaining to the freedom of speech. However, gatekeepers also act as ‘regulators’ in the sense that they adopt their own quasi-law in the form of public policies and other internal (ie not public) guidelines and regulations, and this latter form of ‘regulation’ (which is not even regulation in the legal sense of the word) is far more effective than government legislation and is enforced by gatekeepers every day. Legal systems somehow need to come to terms with this duality.

The first chapter of this book covers the foundations and primary limitations of the freedom of speech and the press, to serve as a background to the subsequent analysis of the relationship between such traditional rules, on the one hand, and the activities of gatekeepers and the interactions between users, on the other. Thus, chapter 1 lays down the starting point and constitutional framework for assessing the impact of new services and new gatekeepers on the freedom of speech. Chapter 2 discusses general questions raised by the Internet, as a medium and a public forum, with regard to the freedom of speech. Chapters 3, 4 and 5 offer legal analysis of various matters that are relevant to the freedom of speech and that arise regarding the activities of three major gatekeepers with a profound impact on the public sphere (Internet service providers that provide the infrastructure, search engines that help users find relevant information in vast amounts of data, and social media platforms that are the primary means of public individual expression).
Chapter 6 reviews a number of issues raised by anonymous comments in the context of free speech, while chapter 7 provides an overview of possible regulatory and media policy directions and assesses proposals that have been raised so far.

A book discussing various aspects of the online public sphere is necessarily limited to the examination of a snapshot of reality, as its field of enquiry keeps changing on a daily basis. However, the fundamental principles and doctrines of free speech are centuries old and take a long time to change, meaning that they are fortunately better able to resist the great speed of technological advancement. Consequently, it seems quite likely that tackling the questions raised by online gatekeepers will take legal scholars and practitioners some time. For this reason, there is hope that the legal framework and theoretical approaches presented in this book will remain relevant and useful for a while, despite the rapid changes that might be brought about by codified laws, the increasing number of court cases, or the decisions of different authorities.

1
The Foundations of Free Speech and Freedom of the Press

I.  FREEDOM OF SPEECH IN THE AGE OF THE INTERNET

The theoretical foundations of free speech can be categorised in a number of ways. Eric Barendt puts these theories into four categories: discovering truth, self-fulfilment, citizen participation in a democracy and suspicion of government.1 A few decades earlier, Thomas Emerson came up with a very similar list, which included participation in decision-making on common matters, self-fulfilment, acquiring knowledge, discovering truth, and maintaining the balance between social stability and change (where debate is suppressed, coercion replaces arguments).2 Another scheme, by Russell Weaver, identified arguments for the protection of freedom of speech based on the theory of the democratic process, the marketplace of ideas, individual freedom and self-fulfilment, and the safety valve supporting social stability.3 Even when minor differences are taken into account, it is apparent that the various authors typically refer to the same theories when providing possible justifications for the freedom of speech, or to theories that only differ in certain regards, applying these well-entrenched ideas to more recent questions arising in respect of the use of the Internet. An overview of these theories will thus be a useful point of departure for this study.

A.  Searching for the Truth

John Milton, the first modern theoretician of free speech, advocated both free speech and a free press, because restricting them might obstruct God’s will and love, preventing the flourishing of the ‘free and knowing spirit’.4 God gave man the freedom to live with this spirit appropriately. Man received the burden of responsibility, but if society restricts his individual freedom and choices, he will not be able to bear the burden. With the help of the gift of reason, everyone must choose independently and by himself from among the available options.

1 Eric Barendt, Freedom of Speech, 2nd edn (Oxford, Oxford University Press, 2005) 6–23. 2 Thomas Emerson, ‘Toward a General Theory of the First Amendment’ (1962–63) 72 Yale Law Journal 877, 877–84. 3 Russell L Weaver, Catherine Hancock and John C Knechtle, The First Amendment: Cases, Problems, and Materials, 5th edn (Durham, NC, Carolina Academic Press, 2017) 7–14. 4 John Milton, Areopagitica, 3rd edn (Oxford, Clarendon Press, 1882) 30.

‘And yet, on the other hand, unless wariness be used, as good almost kill a man as kill a good book; who kills a man kills a reasonable creature, God’s image; but he who destroys a good book, kills reason itself, kills the image of God, as it were in the eye.’5 Milton firmly believed in the power of truth and that man, using God’s gifts, has the ability to find the truth in the debate of different views:

Give me the liberty to know, to utter, and to argue freely according to conscience, above all liberties … And though all the winds of doctrine were let loose to play upon the earth, so truth be in the field, we do injuriously by licensing and prohibiting to misdoubt her strength. Let her and falsehood grapple; who ever knew truth put to the worse, in a free and open encounter.6

Before identifying him as a liberal icon, let us not forget that Milton approached freedom of speech from a theological perspective and did not separate the discovery of truth from the intentions of divine providence.7 In line with this, he did not call for unrestricted freedom of the press. He insisted on censorship of the books of ‘Papist bigots’ and required consistent but ex post punishment for any abuse of the press. However, he found suppression of free speech in general to be unacceptable.

John Stuart Mill, the liberal English philosopher, laid down the classic foundations of freedom of speech, which are still cited very frequently today, in his essay On Liberty.8 For Mill, truth is a fundamental value that is recognisable, and its recognition is a prerequisite for social development. Nobody is infallible. We can never be absolutely certain that what we think to be the truth is indeed the truth. Limitation of free speech is therefore impermissible, because a restricted opinion might carry the truth.9 As such, tolerance of varying opinions is necessary, even when they contradict a genuinely true position, because in the absence of constant debate such a view becomes the unchallenged truth, will be accepted only out of habit, and becomes petrified, dead dogma. Moreover, before being recognised, the truth must suffer repeated persecution, and, although it is a ‘pious lie’ that truth prevails despite all persecution, ‘in the course of ages there will generally be found persons to rediscover it … until it has made such head as to withstand all subsequent attempts to suppress it’.10 Hence, free debate also serves the truth that has already been recognised. Beyond this useful function, free speech also belongs to the domain of the individual’s freedom, and, as such, it cannot be restricted unless its practice harms others.11 Mill believes in the individual, whose noble task is to develop his faculties as much as possible. However, even in his strong individualist views, Mill recognises that there is also a place for the interest of the community, the development of which requires the recognition of truth.

Mill’s theory assumes that the publication of a possibly true opinion is of the greatest societal importance in all circumstances. However, there may be a number of situations



5 ibid 6.

6 ibid 50–52. 7 Frederick M Schauer, ‘Facts and the First Amendment’ (2010) 57 UCLA Law Review 897, 902. 8 John Stuart Mill, On Liberty and Other Essays, 3rd edn (London, Longman, Roberts, Green, 1864). 9 ibid 25–26, 34. 10 ibid 54. 11 ibid 65–66, 100–01.

in which the protection of other interests would seem to be more important than the declaration of the potential truth. Mill probably overvalues the role of public debate in society, since, even if there is complete freedom, only a fraction of the people participate. For the majority, expressing their opinion is not important at all, and they do not necessarily care which opinion prevails in a given debate. They perhaps do not even care what the ‘truth’ is. Even participants in a debate do not necessarily formulate or modify their opinion based on reason or proper consideration of arguments and counter-arguments. Short-term interests, such as, for example, the protection of public peace and public order, may override the aim of discovering the truth, because in most cases that is a long and not necessarily successful process. Open debate does not necessarily lead to the truth or to better individual or public decisions. While in certain institutions free opinion may generally ensure that correct decisions are taken (for example in universities and research organisations on scientific matters), this cannot be automatically extended to the whole of society. Fortunate is the community in which the means for discussing the truth are freely accessible (free media, democratically elected parliament, etc), but these can be just as effectively employed to prevent the truth from being revealed.12

Further developments of Mill’s theory are discernible in Abrams v the United States, a landmark freedom-of-speech decision of the United States (US) Supreme Court.13 Ironically, it was not the judgment itself but the dissenting opinion of the legendary Supreme Court Justice Oliver Wendell Holmes that became a legal classic. According to Holmes, ‘the ultimate good desired is better reached by free trade in ideas – that the best test of truth is the power of the thought to get itself accepted in the competition of the market’.14 The ‘marketplace of ideas’ metaphor, based on Mill’s theory but coined by Justice Holmes and used by the Court in their deliberations, had a great effect on the development of the right to freedom of speech in the US. Later, based on this, the Supreme Court overturned a number of regulations and court decisions that allowed state infringement of freedom of speech, citing the unrestrictability of the ‘market’. According to this view, the only possible way leading to the truth is through the creation of a free marketplace of ideas, the principal potential enemy of which is the state (the government).

There are also grounds for criticising Holmes’s view, however. In his analysis, the concept of truth is highly relative, and the truth is the view that emerges victorious from market competition. In this vision, does objective truth exist at all? The person of today tends to believe in it less and less, but even if objective truth does exist, open debate on public affairs is absolutely necessary to find it. Nevertheless, the theory on searching for the truth has strangely gathered momentum in the so-called ‘Fake News Debate’, which started during the presidential campaign in the US in 2016.15

12 Stephen Sedley, ‘Information as a Human Right’ in Jack Beatson and Yvonne Cripps (eds), Freedom of Expression and Freedom of Information. Essays in Honour of Sir David Williams (Oxford, Oxford University Press, 2000) 241. 13 Abrams v the United States 250 US 616 (1919). 14 ibid 630. 15 See ch 5, section III.G.

B.  Operation of Democracy

According to Alexander Meiklejohn, the primary objective and purpose of the right to free speech is citizens’ participation in the debate of and decisions on public affairs.16 Hence, the purpose of laws on free speech is to ensure opportunities for democratic self-government and decision-making. Participation must be effective, and this effectiveness can be achieved only with the establishment of certain rules. We have to differentiate between various opinions based on their content: opinions relating to political debate (here interpreted widely, with ‘politics’ encompassing all public matters) enjoy special protection, while those not necessary for decision-making in public affairs may be more strictly limited. This theory considers freedom of speech primarily as an instrument. The desired objective is not to reach a higher ideal but to ensure the proper operation of society, which, in democratic systems, is impossible without extensive and public decision-making. The most frequent criticism is that this theory leaves ‘non-political speech’, which does not contribute to the expression of democratic will, defenceless.

Meiklejohn soon refined the group of opinions he found worthy of protection. This includes any views connected to education, philosophy, science, literature and the arts, as well as opinions emerging from any debates on public affairs, which are protectable in the name of democratic foundations.17 Reading works of literature, for example, increases the knowledge of the individual who, in turn, will participate more effectively in public debate. The original version of the theory was criticised because it was overly narrow, while its refined version was judged to be limitlessly broad.18

Judge Robert Bork drew stricter limits. He asserts that if we want to assign real and enforceable substance to the right of freedom of speech, it can only and exclusively include ‘political speech’, the purpose of which is to contribute to decisions over public affairs. Although he acknowledges that literature plays an important role in public decision-making, he believes that this can also apply to other behaviours for which the law does not explicitly provide constitutional protection. If we wish to avoid overextending the defences around the freedom of speech, then full constitutional protection may only be granted to political speech. Otherwise, by including speech that does not contribute to decisions on public affairs in the scope protected by freedom of speech, such as advertisements or obscenity, we would inevitably narrow the general coverage of protection. This does not mean that, beyond this, all other speech would remain unprotected, and indeed even Meiklejohn does not believe this of opinions outside of the categories he defines, but Judge Bork would leave it to the discretion of society and its elected representatives to determine the level of protection.19

Different authors emphasise different areas in their theories. For example, Robert Post does not consider freedom of speech as an instrument of collective

16 See Alexander Meiklejohn, Political Freedom: The Constitutional Powers of the People (Oxford, Oxford University Press, 1965). 17 Alexander Meiklejohn, ‘The First Amendment is an Absolute’ [1961] Supreme Court Review 245, 256–57. 18 Wojciech Sadurski, Freedom of Speech and its Limits (Boston, MA, Kluwer Academic Publishers, 1999) 21–22.
19 Robert H Bork, ‘Neutral Principles and Some First Amendment Problems’ (1971) 47 Indiana Law Journal 1, 27–28.

decision-making but as an instrument of democratic self-governance. His theory does not focus on the method of democratic decisions but on the participation of the individual in the public domain; the essence is not the appropriate application of the method but providing the opportunity for the individual to participate, thus recognising his or her individual autonomy, and by doing so reinforcing the legitimacy of the decision taken.20 The theory of Martin Redish is a blend of democratic and individual justifications: in his concept, freedom of speech is not the instrument of joint decision-making but the guarantee of competition between individuals and/or the freedom to persuade one another, which is the foundation of the democratic social order.21

C.  The Individualist Theory

Individualist justifications may be divided into two main groups, one of which considers freedom of speech as a value in itself and not as an instrument serving to achieve some other goals.22 In this sense, people deserve free speech because the existence of freedom in itself contributes to the creation of a ‘good life’. According to Ronald Dworkin’s interpretation, everyone is entitled to this right because, in a just political system, the state treats every adult citizen as a ‘responsible moral being’.23

The other group of individualist justifications also considers the right to freedom of speech as an instrument, but as an instrument to improve character, perfect the individual’s personality and attain autonomy. Representatives of the individualist views think that the developers of the theories already discussed err when they confuse fundamental values with the instruments that serve to uphold them. The effective functioning of democracy and the search for the truth are not fundamental values in themselves but merely tools that help achieve the truly fundamental value of attaining the freedom of the individual, and which allow the fulfilment of the human personality. Freedom of speech is not merely a right that derives from a chosen social order. Its value is not exclusively and primarily the result of the democratic system; it comes from respect for the individual, which is a morally based requirement, an innate value, not subordinated to any other purpose. It is not that democracy demands the protection of free speech but, on the contrary, that guaranteeing freedom of speech allows and at the same time requires the functioning of democracy.24 According to Edwin Baker’s ‘liberty model’, Meiklejohn is wrong, and it is not important that we express everything but that society does not restrict anybody’s right

20 Robert C Post, ‘Meiklejohn’s Mistake: Individual Autonomy and the Reform of Public Discourse’ (1993) 79 University of Colorado Law Review 1109; Robert C Post, ‘Participatory Democracy and Free Speech’ (2011) 97 Virginia Law Review 477. 21 Martin H Redish, The Adversary First Amendment (Palo Alto, CA, Stanford University Press, 2013). 22 Martin H Redish, ‘The Value of Free Speech’ (1982) 130 University of Pennsylvania Law Review 591, 603–04. 23 Ronald Dworkin, Freedom’s Law. The Moral Reading of the American Constitution (Oxford, Oxford University Press, 1996) 199–202; Ronald Dworkin, Taking Rights Seriously (Cambridge, MA, Harvard University Press, 1977) 266–78, 364–68. 24 John Laws, ‘Meiklejohn, the First Amendment and Free Speech in English Law’ in Ian Loveland (ed), Importing the First Amendment.
Freedom of Expression in American, English and European Law (Oxford, Hart Publishing, 1998) 123, 135–37.

to free speech.25 However, Baker does not extend the validity of his theory to all types of speech. Only those that can really be viewed as the manifestation of the individual should enjoy full protection, in his view. If the goal of the speech is primarily generating profit, as, for example, in the case of advertisements or the media (which, according to Baker, are not different from other precisely regulated commercial enterprises), the freedom of such speech is only justified, for Baker, if it is beneficial to society.26 According to this, the freedom of the media may only be justified instrumentally and may not be a goal in itself.

D.  Justifications of the Freedom of Speech: Theory and Practice

Justifications of the freedom of speech do not have only theoretical significance. These theories may be taken into account by legislators and courts applying the law. Holmes and Brandeis, the great US judges in the first third of the twentieth century, expounded their theories on the freedom of speech in court decisions.27 In New York Times v Sullivan in 1964, the US Supreme Court clearly expressed its opinion in favour of the role of freedom of speech in democracy, which provided a new orientation for the several-centuries-old common law rules in respect of defamation: according to the judges, the US has ‘a profound national commitment to the principle that debate on public issues should be uninhibited, robust, and wide-open’.28 In its individual decisions, the US Supreme Court has applied all of the justifications discussed earlier, but perhaps attaches the greatest importance to the democratic theory.29

The European Court of Human Rights based in Strasbourg (ECtHR) also judges the compatibility of the restrictions on the freedom of speech in certain states with the European Convention on Human Rights (ECHR or ‘Convention’) by asking whether they were ‘necessary in a democratic society’.30 Accordingly, the well-established practice of the Court supports both individual freedom and open public discourse, because the ‘freedom of political debate’ is the ‘bedrock of any democratic system’.31

Justifications of freedom of speech also underpin discussion of the freedom of Internet communication. Public debates on the Internet were considered in the 1990s as key instruments leading to a newer, higher quality of democracy: on the Internet, everybody may participate in the discourse; everybody is free to express their opinions and everybody has access to all information without the previous obstacles to public communication (such as access difficulties and high costs).32 Internet communication

25 Edwin C Baker, Human Liberty and Freedom of Speech (Oxford, Oxford University Press, 1989) 23. 26 ibid 194–249. 27 Abrams v the United States (n 13); Whitney v California 274 US 357 (1927). 28 New York Times v Sullivan 376 US 254 (1964) 270. 29 Russell L Weaver, Understanding the First Amendment, 6th edn (Durham, NC, Carolina Academic Press, 2017) 12. 30 Art 10(1) ECHR. 31 Lingens v Austria, App no 12/1984/84/131, judgment of 24 June 1986, paras 41 and 42; Bowman v the United Kingdom, App no 141/1996/760/961, judgment of 19 February 1998, para 42. 32 James Curran, Natalie Fenton and Des Freedman, Misunderstanding the Internet, 2nd edn (London, ­Routledge, 2016) 11–12.

may contribute to combating an oppressive government,33 and in democracies it might exert a direct influence on government decisions and provide a forum for interested citizens.34 The strongest argument in favour of the freedom of the Internet is the one related to the protection of democratic, open public debate, but the other two justifications of the freedom of speech are no less important. The Internet has also created new ways of self-expression for the individual and, through the option of anonymity, it has inspired many to express their opinions on public or private matters, people who previously – perhaps being afraid of the consequences – had remained silent.35 Similarly, a huge amount of data, information, documents and books, previously more difficult to access, has become available on the Internet, which in theory also assists the search for the ‘truth’ on issues under debate. However, only a few expected that identifying true facts among such a mass of information would become so difficult. The fake news scandal that erupted in 2016 is an excellent illustration of that.36 In certain cases, Internet communication does not support objective reality but instead the affirmation and spread of falsehoods, which are wrongly considered by the observer or the community to be reality.37

E.  Reconciling the Justifications

Indeed, it is impossible to force all speech worthy of protection into the Procrustean bed of a single value.38 At the same time, freedom of speech serves, even if independently, other values that are loosely or closely connected with each other. The three groups of justifications discussed in section I.D do not exclude the possibility that, in connection with a given question, we could apply several or even all three of them at the same time. According to the decision of the ECtHR in Handyside v the United Kingdom, which is fundamental in terms of its interpretation of freedom of speech, ‘Freedom of expression constitutes one of the essential foundations of a [democratic] society, one of the basic conditions for its progress and for the development of every man.’39 One of the classic formulations of the synthesis of different values underpinning freedom of speech was made by US Justice Louis Brandeis:

Those who won our independence believed that the final end of the state was to make men free to develop their faculties, and that, in its government, the deliberative forces should

33 Russell L Weaver, From Gutenberg to the Internet. Free Speech, Advancing Technology, and the Implications for Democracy (Durham, NC, Carolina Academic Press, 2013) 74–85. 34 Curran, Fenton and Freedman (n 32) 16–21. 35 See Delfi AS v Estonia, App no 64569/09, judgment of 15 June 2015 [GC]; Magyar Tartalomszolgáltatók Egyesülete and Index.hu Zrt v Hungary, App no 22947/13, judgment of 6 February 2016. 36 Olivia Solon, ‘Facebook’s Failure: Did Fake News and Polarized Politics Get Trump Elected?’ The Guardian (10 November 2016) at https://www.theguardian.com/technology/2016/nov/10/facebook-fake-news-electionconspiracy-theories. 37 Cass R Sunstein, #Republic: Divided Democracy in the Age of Social Media (Princeton, NJ, Princeton University Press, 2016) 59–136. 38 Robert C Post, ‘Recuperating First Amendment Doctrine’ (1995) 47 Stanford Law Review 1249, 1272. 39 Handyside v the United Kingdom, App no 5493/72, judgment of 7 December 1976, para 49.

prevail over the arbitrary. They valued liberty both as an end, and as a means. They believed liberty to be the secret of happiness, and courage to be the secret of liberty. They believed that freedom to think as you will and to speak as you think are means indispensable to the discovery and spread of political truth; that, without free speech and assembly, discussion would be futile; that, with them, discussion affords ordinarily adequate protection against the dissemination of noxious doctrine; that the greatest menace to freedom is an inert people; that public discussion is a political duty, and that this should be a fundamental principle of the American government.40

As it transpires, the values protected by the various justifications are not independent of each other. They are interconnected, although this connection may sometimes be weaker and sometimes stronger.41 The Holmesian theory of the marketplace of ideas is fairly close to democratic views. Theories based on searching for the truth all have strong individualist features. Although some individualist views hold that speech is a value in itself, most of these theories reject this belief and consider speech as an instrument, specifically an instrument for the perfection of personality. These values not only exist in parallel with each other, but also relate to each other. The functioning of democracy requires making a multitude of free, independent and individual decisions, not only during the election process but also in everyday life. Free decision-making cannot exist without the autonomy of the individual; autonomous decision-making requires the free marketplace of ideas, and the speaker needs autonomy just as much as the audience of the speech. Meanwhile, it is not at all necessary that we give up the hope of Milton, Mill, Justice Brandeis and others that, during the debate, we might discover the ‘true’ answer to the issue in question. According to Joseph Raz, the value at the centre of justification is a pluralistic, multi-coloured society, which comes into existence through free speech, in which different opinions coexist with each other, and in which these opinions also tolerate the existence of others.42

II.  THE CATEGORY OF ‘SPEECH’ AND THE SCOPE OF PROTECTION

A.  Terminological Variations

Different names have been attached to the right ensuring the freedom of public debate in various legal documents. The term ‘freedom of speech’ was allegedly first used by King James I,43 while the First Amendment of the US Constitution also included this term in 1791. Nevertheless, in American legal science, the term ‘freedom of expression’ is more frequent nowadays.44 In contrast, the Meinungsfreiheit and Meinungsäusserungsfreiheit

40 Whitney v California 274 US 357, 375–78 (1927). 41 Judith Lichtenberg, ‘Foundations and Limits of Freedom of the Press’ in Judith Lichtenberg (ed), Democracy and the Mass Media (Cambridge, Cambridge University Press, 1990) 334. 42 Joseph Raz, ‘Free Expression and Personal Identification’ (1991) 11 Oxford Journal of Legal Studies 303. 43 See Leonard W Levy, Emergence of a Free Press (Chicago, IL, Ivan R Dee, 2004) 3–4. 44 See Emerson (n 2); Martin H Redish, ‘The First Amendment in the Marketplace: Commercial Speech and the Values of Free Expression’ (1971) 39 George Washington Law Review 429; David A Strauss, ‘Persuasion, Autonomy, and Freedom of Expression’ (1991) 91 Columbia Law Review 334.

used in the German legal system, and ‘freedom of opinion’, as well as the archaic-sounding ‘freedom of thought’,45 are used much less frequently. The French Declaration on the Rights of Man and of the Citizen of 1789 declared the right to the freedom to express thoughts and opinions.46 Several terms refer to the scope of protection of the right (‘speech’, ‘expression’, ‘opinion’, ‘thought’), of which ‘expression’ is the broadest, and therefore its use seems to be the most useful. However, insisting on the literal meaning of these terms, their ordinary meanings, is pointless; the general category of ‘speech’ differs significantly from the category of ‘speech’ that is protected by the right to the freedom of speech.

B.  The Category of ‘Speech’

i.  Speech and ‘Speech’

In this context, it is obvious that written, printed or digitally available texts also qualify as ‘speech’, as do the storylines of various media or cinematographic works. If they express some recognisable idea, then pictures or other visual forms of expression, such as artefacts or even music without text,47 or even dance,48 may also qualify as ‘speech’.

Frederick Schauer discussed the differences between everyday speech and legal ‘speech’ in several articles.49 These also make it clear that certain oral or written, printed texts and expressions are not even assumed to be protected by the freedom of speech: in the event of perjury, the conclusion of a contract, taking a marriage oath or instigating a criminal offence, it does not occur to anyone that the speech expressed should be protected by the freedom of speech. The words used in an unlawful threat are prohibited acts themselves; perjury itself is a criminal offence committed through verbal expressions; the ‘yes, I do’ uttered in front of the registrar is the act of marriage itself; verbal notice given to the employer/employee is the act of terminating the legal relationship itself – even though all of these are verbal expressions, none of them may be considered an exercise of the freedom of speech.50 The words of a young American girl, Michelle Carter, urging her friend in text messages to commit suicide, while composed of words and meaningful sentences (‘Hang yourself, jump off from a building’, ‘You have to do it’), were not considered by the court in the criminal proceedings after the tragedy as being protected by the freedom of speech.51 At the same time, there are acts that, even without actual speech, words or identifiable sentences, are clearly protected by the freedom of speech: silent demonstrations,

45 John B Bury, A History of Freedom of Thought (New York, Henry Holt, 1913); Art 9(1) ECHR. 46 Déclaration des droits de l’homme et du citoyen, Art XI. 47 Mark V Tushnet, Alan K Chen and Joseph Blocher, Free Speech Beyond Words. The Surprising Reach of the First Amendment (New York, New York University Press, 2017) 15–69. 48 Barnes v Glen Theatre, Inc 501 US 560 (1991); City of Erie v Pap’s AM 527 US 277 (2000). 49 See, eg, Frederick M Schauer, ‘Speech and “Speech” – Obscenity and “Obscenity”: An Exercise in the Interpretation of Constitutional Language’ (1979) 67 Georgetown Law Journal 899. 50 Cass R Sunstein, ‘Words, Conduct, Caste’ (1993) 60 University of Chicago Law Review 795, 836–40. 51 ‘Michelle Carter Gets 15-Month Jail Term in Texting Suicide Case’ New York Times (3 August 2017); this limitation of the category of ‘speech’ is questioned by Robbie Soave, ‘Michelle Carter Didn’t Kill With a Text’ New York Times (16 June 2017).

strikes or, for example, wearing a piece of clothing that expresses an opinion (for example, a uniform, an armband or a badge).

The scope of the freedom of speech has been gradually extended over the past few decades. This process is also clearly visible in Europe, although it has unfolded with full force in the US, where nowadays it covers not only advertisements52 but also questions of campaign financing,53 or issues related to the distribution of state aid,54 as well as nude dancing performed at nightclubs.55 According to Schauer, this process undermines the original interpretation of the right to freedom of speech, because under this approach it covers certain behaviour whose communicative content, even if present, is not the decisive element. If, however, as in the US Constitution, there is no specific entitlement that could be invoked to protect these kinds of behaviour, then, in its absence, they are categorised as falling under the scope of the freedom of speech, which leads to the consequence that the original objective of the right – whether guaranteeing undisturbed public debates or the autonomy of the individual – fades away.56

ii.  ‘Speech’ and Action

The categories of ‘speech’ and action are inseparable. By acting in one way or another, we may express a certain opinion; to this end, there is no need for either an oral or a written statement. Certain signs and symbols may also be considered as ‘speech’. However, creating boundaries is important, because nearly any and every act may be said to express something.57 The demarcation lines can be drawn according to various criteria. According to Post, ‘speech’ is not valuable in itself and may only be granted legal protection if it has some constitutional value (whether the functioning of democracy or the autonomy of the individual).58 Can we distinguish whether or not the act promoted an interest that can be accepted as a justification of the freedom of speech? Has it contributed to democratic decision-making or the exercise of the individual’s autonomy? We could, instead, examine the act expressly from the perspective of the person committing the act, the actor. Did he or she intend to express something by committing this act? If the restriction of the otherwise expressive act or its prohibition is justified by a legitimate interest and not expressly by the limitation of the freedom of speech, then it might be permissible from a constitutional perspective. An example of this is the prevention of damage to another party (a political murder might be an act with expressive content, but the protection of human life clearly makes it possible to prohibit such murders, and we do not consider this prohibition as a form of restricting the freedom of speech).

In United States v O’Brien,59 the US Supreme Court decided that burning a draft card, though it might undoubtedly be considered an expression of opinion (O’Brien,

52 Virginia State Board of Pharmacy v Virginia Citizens Consumer Council 425 US 748 (1976). 53 Buckley v Valeo 424 US 1 (1976); Citizens United v Federal Election Commission 558 US 310 (2010). 54 National Endowment for the Arts v Finley 524 US 569 (1998). 55 See n 49. 56 Frederick M Schauer, ‘First Amendment Opportunism’ in Lee C Bollinger and Geoffrey R Stone (eds), Eternally Vigilant: Free Speech in the Modern Era (Chicago, IL, University of Chicago Press, 2002) 174. 57 Schauer (n 49) 912. 58 Post (n 38) 1249. 59 The United States v O’Brien 391 US 367 (1968).

with his act, clearly wanted to protest against the Vietnam War), violated the interest of the state with regard to orderly conscription and its undisturbed process. Therefore, the rule according to which the destruction of or damage to the draft card was punishable, and on the basis of which O’Brien was convicted, did not expressly aim at limiting the freedom of speech but at protecting some other reasonable state or government interest. The judgment was hence legitimate and the law serving as its legal basis was not unconstitutional.

Can financial support or sponsorship enabling the sponsored party to ‘speak’ (eg to create an artefact or to fund an election campaign) itself be considered ‘speech’? Without money and influence the voice of the individual remains unheard in the crowd. Money may be a means to exercise the freedom of speech effectively, but in most cases it might also be necessary to be able to address a large number of people. The three major problems of the relationship between freedom of speech and money are as follows: (i) supporting election and referendum campaigns;60 (ii) the method of distributing financial support granted by the government for culture and certain public institutions;61 and (iii) the question of whether the sponsor may make rules for the speaker as to what to say or what not to say in return for its sponsorship.62

A question related to the problems of ‘speech’ and money is the extent to which large and well-funded corporations may exercise freedom of speech.63 Legal systems, when deciding on the coverage of this right, generally do not differentiate according to the status of the speaker; such differentiation is conceivable only in specific situations.64 However, with regard to undertakings operating in the market and interested in generating revenues, it is more difficult to identify the interest behind their speech which is to be protected by granting them freedom of speech: an undertaking is not able to realise self-fulfilment through freedom of speech, and its utterances do not typically constitute participation in the democratic debate. While their participation in the public sphere and their constitutional protection might also serve the interests of the community, large corporations (especially media service providers and online companies dealing with the functioning of the public sphere) participating in the debate may also distort the discourse, since by having a powerful presence they may also weaken the power of the other voices.

C.  Categories and Protection

The US legal system attempts to differentiate between the coverage of the freedom of speech and the strength of protection of the ‘speech’ covered. Whatever speech or action

The Category of ‘Speech’ and the Scope of Protection  19 is not included in the coverage shall not be granted the constitutional protection of the freedom of speech. However, if an expression/action is included in the coverage (and thereby becomes ‘speech’ from the perspective of law), any limitation of it needs to be subjected to a thorough analysis of whether it complies with the Constitution, and this analysis will consist of elements that go beyond the content and nature of the speech.65 This differentiation in US law is rooted in Chaplinsky v New Hampshire.66 According to Justice Murphy: There are certain well-defined and narrowly limited classes of speech, the prevention and punishment of which have never been thought to raise any constitutional problem. These include the lewd and obscene, the profane, the libellous, and the insulting or ‘fighting’ words, those which by their very utterance inflict injury or tend to incite an immediate breach of the peace. It has been well observed that such utterances are no essential part of any exposition of ideas, and are of such slight social value as a step to truth that any benefit that may be derived from them is clearly outweighed by the social interest in order and morality.67

Since 1942, the list of utterances falling outside the scope of the freedom of speech has undergone significant changes. Libellous speech relating to public affairs has been granted significant protection,68 and speech with a commercial purpose, which was not even mentioned in the Chaplinsky case, also enjoys the protection of the freedom of speech, unless its content is false and misleads the consumer.69 In 2012, in United States v Alvarez,70 the Supreme Court drew up an extended list: [C]ontent-based restrictions on speech have been permitted, as a general matter, only when confined to the few ‘historic and traditional categories [of expression] long familiar to the bar.’ Among these categories are advocacy intended, and likely, to incite imminent lawless action, obscenity, defamation, speech integral to criminal conduct, so-called ‘fighting words’, child pornography, fraud, true threats, and speech presenting some grave and imminent threat the government has the power to prevent.71

Not long before that, United States v Stevens72 seems to have drastically limited the classes of utterances that are conceivably still not covered by freedom of speech, by refusing to define the new categories of these utterances; the ‘speech’ that does not fall into any category, and which traditionally may be forbidden, may not be put outside the scope by the court.73 In the specific case related to this, the too broad criminalisation of animal crush videos was considered to be unconstitutional. At the time of the Chaplinsky case, the Supreme Court could not have been thinking of this type of ‘speech’ and so did not include it in the list. However, this approach is contradicted by the Ferber case,74 which

65 Mark Tushnet, ‘The Coverage/Protection Distinction in the Law of Freedom of Speech: An Essay on ­Meta-Doctrine in Constitutional Law’ (2017) 25 William & Mary Bill of Rights Journal 1073, 1083–88. 66 Chaplinsky v New Hampshire 315 US 568 (1942). 67 ibid 571–72. 68 New York Times v Sullivan (n 28). 69 Central Hudson Gas v Public Service Commission of New York 447 US 557 (1980). 70 The United States v Alvarez 132 SCt 2537 [567 US 709 (2012)]. 71 ibid 2544. 72 United States v Stevens 559 US 460 (2010). 73 ibid 468–73. 74 New York v Ferber 458 US 747 (1982).

dated from earlier: child pornography did not exist as a legal problem in 1942 and yet, four decades later, the Supreme Court considered it to be an utterance outside the scope of the freedom of speech. Nevertheless, drawing the boundaries of ‘speech’ needs to be done with caution, as at the end of the exercise certain utterances and acts will end up outside the scope of protection, and they will not be entitled to receive full constitutional protection.75 When the coverage of free speech is defined properly, two further complications arise. On the one hand, the utterances outside its coverage may not be limited freely by governments either; for example, the prohibition of ‘fighting words’ also needs to be viewpoint-neutral (it is not possible, for example, to prohibit only fighting words against people of colour and leave fighting words against Catholics unrestricted).76 Certain constitutional standards must, then, also be applied to provisions regulating expressions and utterances outside the coverage of the freedom of speech. On the other hand, the level of protection granted to utterances covered by the freedom of speech may vary. Hence, a distinction needs to be made between the legal boundaries of ‘speech’ and the level of protection granted to the categories defined in this manner. Neither European legal systems nor the ECtHR define classes of ‘speech’ in the way that the US Supreme Court does. These legal regimes instead take the interests to be protected by the limitation of the right as their point of departure, and the most typical limitations on the freedom of speech are similar in nature all over Europe: the protection of human personality (against defamatory speech or speech interfering with someone’s private sphere), the protection of certain social communities against hatred, the maintenance of public order and national security (eg against vandalism or terrorism), the protection of children against content that is harmful to their development, and the protection of consumers against misleading commercial communication.77 The only exception in the practice of the ECtHR is the class of statements the Court classifies as abuse of the freedom of speech pursuant to Article 17 of the ECHR, and which it does not even examine for the compatibility of their limitation with Article 10 of the Convention.78 D.  The Open Debate of Public Affairs as the Core of Freedom of Speech Both in Europe and in the US there are certain statements, even in the protected domain of the freedom of speech, that are granted an even higher level of protection. The protection of ‘political speech’, that is the open debate on public affairs and opinions

75 Frederick Schauer, Freedom of Speech: A Philosophical Enquiry (Cambridge, Cambridge University Press, 1982) 101–02. 76 RAV v City of St Paul 505 US 377 (1992). 77 See the categorisations in monographs that comprehensively discuss issues related to the freedom of speech: Barendt (n 1); Perry Keller, European and International Media Law: Liberal Democracy, Trade, and the New Media (Oxford, Oxford University Press, 2011); Kevin Saunders, Free Expression and Democracy (Cambridge, Cambridge University Press, 2017); Jan Oster, Media Freedom as a Fundamental Right (Cambridge, Cambridge University Press, 2015). 78 Witzsch v Germany, App no 41448/98, admissibility decision; Garaudy v France, App no 65831/01, ­admissibility decision.

The Category of ‘Speech’ and the Scope of Protection  21 expressed on public affairs, is a priority in every Western legal system, even if this induces a violation of the rights of others. Political speech can thus be considered to be the most strongly protected kernel of the freedom of speech.79 Defining political speech is not an easy task: Certain legal systems, for example, consider only statements related to the functioning of the national parliament as political speech,80 while Judge Bork also opted for a restricted interpretation of the category of political speech worthy of protection.81 Typically, however, this category is defined much more broadly: Meiklejohn extended it to literature and the arts,82 whereas Eric Barendt is of the opinion that any statement or utterance that potentially contributes to or shapes public opinion on issues that may be considered as public affairs by an intelligent citizen can be considered as political speech.83 Thus, in terms of the force of legal protection given to it, political speech is generally ‘more valuable’ than other forms of speech. However, there are dangers associated with differentiating between statements and categorising them as ‘high-’ or ‘low-value’ based on their contribution to decisions on public affairs.84 Although speech deemed to have a lower value is not left without any protection, the level of protection granted to it will understandably be of a lower level. Categorisation, however, is necessary, at least categorisation into political and non-political speeches, because, in the absence of such differentiation, either the same standards would be applied to all forms of speech, which would result in excessive, high-level protection being granted to less valuable speech (for example, pornography, advertising or expressions of hatred), or there would be a drop in the level of protection afforded to speech that contributes to the debate on public affairs.85 Steven Shiffrin also warns that over-stretching the category of ‘speech’ might undermine or water down the objectives to be achieved by protecting the freedom of speech.86 The strongest level of protection granted to free speech does not correspond to the damage that results from the expression of one’s opinion; in other words, freedom of expression may prevail under certain conditions, even if it causes damage. For instance, the need to guarantee an open debate on public affairs justifies a significant curtailment of the protection of personality rights in favour of the freedom of speech. The question arises of whether speech on public affairs should automatically enjoy the highest level of protection, solely due to its specificity as mentioned, or whether it is

79 Bernát Török, ‘A közügyek vitájának körvonalai a véleményszabadság magyar doktrínájában’ in András Koltay and Bernát Török (eds), Sajtószabadság és médiajog a 21. század elején, 4 (Budapest, Wolters Kluwer, 2017) 425. 80 See the judgment of the Australian High Court in Lange v Australian Broadcasting Corporation [1997] 189 CLR 520. 81 Robert H Bork, ‘Neutral Principles and some First Amendment Problems’ (1971) 47 Indiana Law Journal 1. 82 Meiklejohn (n 17) 245. 83 Barendt (n 1) 162. 84 Genevieve Lakier, ‘The Invention of Low-Value Speech’ (2015) 128 Harvard Law Review 2166; Cass R Sunstein, ‘Pornography and the First Amendment’ [1986] Duke Law Journal 589. 85 Russell L Weaver, ‘The Philosophical Foundations of Free Expression’ in Russell L Weaver et al (eds), Free Speech and Media in the Twenty-First Century (Durham, NC, Carolina Academic Press, 2019). 86 Steven H Shiffrin, What’s Wrong with the First Amendment? (Cambridge, Cambridge University Press, 2016) 184–92.

possible to limit speech due to its tone, manner and style. This is because opinions on public affairs may be expressed not only in a refined manner through reasoned argument, but also in a crude, uncultured fashion. The accused in Cohen v California87 presented himself at court wearing a jacket displaying the phrase ‘Fuck the draft’, in protest against the military draft during the Vietnam War. The Supreme Court overturned Cohen’s conviction, holding that an opinion may not be restricted on the grounds of its style and crudeness. This decision reflects the general or universal principle of equality: not everybody is able to express their opinions rationally and eloquently, not all opinions expressed may be perfectly justified or supported with arguments, and not everybody has the means to communicate their opinion to the audience in a sophisticated manner. On the other hand, drastic means or manners might amplify the otherwise weak voice of the individual. In any case, the ECtHR is also of the opinion that the offensive, shocking or disturbing nature of speech does not in itself constitute grounds for restriction, if no interest protected by law is otherwise injured.88 The group of statements of a lower value – if we accept that this category exists – is broad and may include every expression that might not be considered as participating in the debate on public affairs, such as commercial communication (advertisements), content depicting sexuality or violence for its own sake, or content that is dangerous in other ways to the development of children, or defamation in relation to private matters. E.  Internet-Specific Questions Logically, the legal principles and standards related to the coverage and power of freedom of speech and its protection are also applicable to speech published on the Internet.89 However, the special characteristics of Internet communication have also contributed to the surfacing of new issues, as well as revealing new aspects of old problems. In certain legal documents, for example, the right to ‘freedom of thought’ appears,90 which may be partly linked to the freedom to choose one’s religious conviction and partly to the freedom of speech. As regards the latter, the ‘freedom of thought’ did not have proper content for a long time, because legal protection may only be granted to a thought already expressed, and a thought expressed in words or actions may be subject to restrictions in justified cases. However, some of the more recent Internet services seem to jeopardise the individual’s freedom of thought.91 Undertakings that collect data about all their users are constantly making attempts to guess what a given person might need,

87 Cohen v California 403 US 15 (1971). 88 Handyside v the United Kingdom (n 39) para 49. 89 See, eg, the restriction of obscene opinions: Perrin v the United Kingdom, App no 5446/03, decision of 18 October 2005; the violation of one’s reputation: Times Newspapers Ltd v the United Kingdom (Nos 1 and 2), App nos 3002/03 and 23676/03, judgment of 10 March 2009; Mosley v the United Kingdom, App no 48009/08, judgment of 10 May 2011; the protection of copyright: Ashby Donald and Others v France, App no 36769/08, judgment of 10 January 2013, all in the context of the Internet. 90 Art 9 ECHR. 91 Susie Alegre, ‘Rethinking Freedom of Thought for the 21st Century’ (2017) European Human Rights Law Review 221.

The Category of ‘Speech’ and the Scope of Protection  23 or what product or service he or she might possibly purchase, and accordingly bombard potential customers with targeted advertisements.92 Others collect messages, which exist only in the form of a thought, not shared with other users (written, then deleted without having been sent), in order to infer the user’s profile.93 An experiment by Facebook in 2012 instead manipulated the emotional state of users, that is the thoughts of its users, through targeted messages.94 It seems to be obvious that, on the Internet, the individual is not the omnipotent master of his or her own thoughts. As Neil Richards warns, the lack of privacy accompanies the rise of the Internet, and in the online world – where almost nothing is private – it is much harder to have the respite necessary to form one’s own thoughts.95 On the Internet, there has been a fundamental expansion of the group of ‘­speakers’. Large media corporations that drive market concentration have acted as subjects of the freedom of speech, but have also been present in the universe of the offline media. However, the largest online undertakings do not become the subjects of freedom of speech by acting as speakers themselves, that is they do not produce or order their own content. Search engines list the contents of the Internet at the request of the user; these contents were produced independently from them, and yet they invoke their freedom of speech when the idea of restricting their activities arises.96 Social media networks and videosharing portals provide platforms for their users, and collect and compile the content of the users; however, the legal regulations aiming to act against infringing content do not only cover the freedom of speech of the users but also their own. Because these services operate using algorithms that make direct human intervention unnecessary, the freedom of speech of these algorithms has also by now become a legitimate question.97 ­Nevertheless, today it is indisputable that it is possible to be the subject of the freedom of speech without the production of actual ‘speech’ (an opinion expressed).98 In the online universe, the possible forms of ‘speech’ have also become more numerous. Whereas the text, picture, sound or audiovisual content available on the Internet is by and large analogous with content previously published in the press or on radio and television, online communication has also created new forms of expressing opinions. A  court decision in the US has established (in an entirely reasonable way) that clicking on a ‘like’ on Facebook is considered ‘speech’, that it is a statement that is granted

92 See, eg, Crystal Schreiber, ‘Google’s Targeted Advertising: An Analysis of Privacy Protections in an Internet Age’ (2014) 24 Transnational Law and Contemporary Problems 269. 93 Sauvik Das and Adam Kramer, ‘Self-Censorship on Facebook’ Proceedings of the Seventh International AAAI Conference on Weblogs and Social Media (2013) at www.aaai.org/ocs/index.php/ICWSM/ICWSM13/ paper/viewFile/6093/6350. 94 Adam DI Kramer, Jamie E Guillory and Jeffrey T Hancock, ‘Experimental Evidence of Massive-Scale Emotional Contagion through Social Networks’ Psychological and Cognitive Sciences (22 July 2014) at www.pnas.org/content/111/24/8788.full.pdf. 95 Neil Richards, Intellectual Privacy: Rethinking Civil Liberties in the Digital Age (Oxford, Oxford ­University Press 2015). 96 Eugene Volokh and Donald M Falk, ‘First Amendment Protection for Search Engine Search Results: White Paper Commissioned by Google’ UCLA School of Law Research Paper no 12-22. 97 Stuart Minor Benjamin, ‘Algorithms and Speech’ (2013) 161 University of Pennsylvania Law Review 1445. 98 Furthermore, see the comment issues decided by the ECtHR in Delfi v Estonia (n 35); Magyar Tartalomszolgáltatók Egyesülete and Index.hu Zrt v Hungary (n 35); Pihl v Sweden, App no 4742/14, decision of 9 March 2017.

constitutional protection.99 A list generated by a search engine in response to a query is also ‘speech’.100 Not only have the forms of uttering ‘speech’ become more numerous as a result of the Internet, but the style and manner have also undergone major changes. While classic literature and scientific works can be read on the Internet, the communication style of a significant proportion of speakers using this new tool differs markedly from that found in the press and on radio and television. The most obvious examples of this are anonymous comments and certain online fora, where an abusive tone is rather frequent. The legal system has therefore had to address the question of what legal consequences vulgarity and offensive insults on the Internet should lead to, and whether they are protected or may be restricted. III.  LIMITATION OF THE FREEDOM OF SPEECH

A.  The Protection of Freedom of Speech in Different Legal Systems i.  The United States According to the First Amendment to the US Constitution, ‘Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press.’ The textualist view, which has never had majority backing in the US Supreme Court, says that ‘shall make no law’ and ‘abridging’ mean simply what they say, and hence are to be interpreted literally. According to the interpretation given by Justice Black, the most adamant representative of this view, freedom of speech is simply unrestrictable:101 not even judges are permitted to override those who drafted the First Amendment, and as such the words are to be interpreted according to their literal meaning, because only that accurately reflects the intentions of the drafters. Attempting to discern the intentions of the drafters – the application of the so-called originalist interpretation of the Constitution – is not possible on the basis of the pure text alone, without the legal practitioner ‘adding’ something to it. Originally, the Constitution did not define the framework of the new federal state from the perspective of the protection of human rights: it was much more concerned with limiting the power of the new federal state in order to protect the freedom of the individual States. This is also reflected in the drafting of the First Amendment.102 The Supreme Court decisions affecting the freedom of speech in the twentieth century defined clear principles on the basis of which the constitutionality of a restriction is to be judged. A very strong presumption operates against the constitutionality of

99 Bland v Roberts no 12-1671 (4th Cir). See Ira Robbins, ‘What Is the Meaning of Like: The First Amendment Implications of Social-Media Expression’ (2013) 7 The Federal Courts Law Review 127. 100 Zhang v Baidu.com, Inc 932 FSupp2d 561; Search King, Inc v Google Technology, Inc no Civ-02-1457-M (WD Okla, 13 January 2003). See Oren Bracha, ‘The Folklore of Informationalism: The Case of Search Engine Speech’ (2014) 82 Fordham Law Review 1630. 101 Barenblatt v the United States 360 US 109, 143–44 (1959) and Smith v California 361 US 147, 157 (1959). 102 Leonard W Levy, Emergence of a Free Press (Chicago, IL, Ivan R Dee, 2004) 225.

Limitation of the Freedom of Speech  25 prior restraint that prevents the publication of opinions.103 The court has to examine whether the restriction of freedom of speech was meant to restrict the content of the speech (content-based), or whether it is to be applied in a content-neutral way.104 Contentbased restrictions, that is restrictions that expressly restrict speech due to its offensive or unlawful nature (for example, defamation or incitement to violence) should be subjected to strict scrutiny. It is necessary to examine whether there is a compelling interest that justifies the restriction, and whether the objective could be achieved through other, less restrictive means. With regard to other statements that are further away from the most strongly protected core of the freedom of speech (for example obscenity or commercial communication), intermediate scrutiny is sufficient, on the basis of which it must be determined whether a major governmental interest can be recognised in the restriction, whether the restriction directly promotes this interest and whether or not the restriction is disproportional to the necessity of promoting the interest.105 By contrast, with content-neutral restrictions (for example, a general ban on demonstrations at night), it has to be scrutinised whether the prohibition or ban complies with the requirements of ­reasonability. If the restriction of the freedom of speech is content-based, the next step is to examine whether it discriminates against some of the viewpoints on a given subject, that is whether it relates to the expression of certain viewpoints (viewpoint discrimination), or prohibits the expression of any viewpoints on a given subject (viewpoint-neutrality). In the former case, that is content-based restrictions discriminating between different types of content, the Supreme Court will certainly establish unconstitutionality. It is allowed, for example, to impose a general ban on the use of fighting words, if they pose a true threat to the party attacked or cause that party to feel intimidated,106 but it is not possible to prosecute attacks solely on the basis of their being motivated by race, skin colour or religious conviction.107 With regard to content-neutral restrictions, the scrutiny of reasonability is sufficient. Restriction is generally allowed if it only restricts the time, place and manner of expression and does not lead to a complete ban on a type of expression.108 In addition, courts examine whether the restriction of speech is too ‘broad’ (the doctrine of overbreadth),109 that is whether it also covers statements the restriction of which is not justified in respect of the objective to be achieved, and whether the normative text prescribing the restriction is ‘vague’ or not,110 that is whether it is suitably precise. ii.  The United Kingdom While it is well known that the UK does not have a codified constitution included in a single document, and neither does it have a catalogue of fundamental rights s­ tipulated 103 Near v Minnesota 283 US 697, 716 (1931). 104 Geoffrey R Stone, ‘Content-neutral Restrictions’ (1987) 54 University of Chicago Law Review 46; Geoffrey R Stone, ‘Content Regulation and the First Amendment’ (1983) 25 William and Mary Law Review 189. 105 Central Hudson Gas and Electricity Corp v Public Service Commission 447 US 557 (1980). 106 Chaplinsky v New Hampshire 315 US 568 (1942); Watts v the United States 394 US 705 (1969); Virginia v Black 538 US 343 (2003). 107 RAV v City of St Paul (n 76). 
108 Weaver (n 29) 121–23. 109 ibid 98–100. 110 ibid 101–02.

26  The Foundations of Free Speech in law, its historical traditions concerning the protection of the freedom of speech render its legal recognition unquestionable. William Blackstone, in his work summarising English law written in the eighteenth century, did not include freedom of speech in the chapter on personal liberties but amongst torts against public order, and discussed the freedom of the press amongst objections to claims arising from defamation and other forms of damage.111 Albert Dicey, in his famous work on constitutional law dated a good century later, sets forth that: As every lawyer knows, the phrases ‘freedom of discussion’ or ‘liberty of the press’ are rarely found in any part of the statute-book nor among the maxims of the common law. As terms of art, they are indeed quite unknown to our Courts. At no time has there in England been any proclamation of the right to liberty of thought or to freedom of speech.112

The absence of legal provisions did not, however, mean the absence of a recognition of the constitutional right to freedom of speech. With the passage of time, the courts, in their individual decisions, tended to refer to the freedom of speech with increasing frequency.113 Within the meaning of the proper application of the rule-of-law doctrine, freedom of speech, as one of the constituents of individual liberties, is not granted by Parliament, and therefore the Parliament does not have to regulate its protection in order for the courts to take that into account in their decisions, even if this undermines the sovereignty of Parliament (the foundation stone of the English constitutional­ establishment). In the second half of the twentieth century, court decisions referring to the freedom of speech and extending its coverage became more numerous.114 Certain decisions taken by the House of Lords in the last decade of the last century and at the beginning of the new millennium were unambiguously based on the recognition of the fundamental right to freedom of speech.115 As a step in the European unification process, in 1998 Parliament adopted the Human Rights Act, which provides for the legal protection of fundamental rights in the UK. The law incorporated the ECHR into the legal system of the UK, and through its Annex it has become a supplement to other legal and common law provisions. As such, Article 10 of the Convention is the only legal norm to declare the protection granted to freedom of speech in the UK. English common law did not create categories and rules as US law did, which, given the limited role of the freedom of speech as an individual entitlement, does not come as a surprise.116 The highest judicial forum, the Supreme Court (previously the House of Lords) does not operate as a constitutional court, that is it does not have the right to strike down laws, but it may establish their incompatibility with freedom of speech, and it may actively influence the development of common law.

111 William Blackstone, Commentaries on the Laws of England, 6th edn (London, 1825 [1765–69]) vol IV, 151–52. 112 Albert V Dicey, Introduction to the Study of the Law of the Constitution, 8th edn (London, Macmillan, 1915) 147. 113 Alan Boyle, ‘Freedom of Expression as a Public Interest in English Law’ [1982] Public Law 574. 114 Brutus v Cozens [1973] AC 854; Silkin v Beaverbrook Newspapers [1958] 1 WLR 743; R v Secretary of State for the Home Department, ex parte Simms [2000] 2 AC 115. 115 ex parte Simms (n 114) 125; Reynolds v Times Newspapers Ltd [2001] 2 AC 127, 207. 116 Barendt (n 1) 42.

Limitation of the Freedom of Speech  27 iii.  The European Court of Human Rights The Convention for the Protection of Human Rights and Fundamental Freedoms, announced for ratification in 1950 in Rome under the auspices of the Council of Europe and finally entering into force in 1953, provides for the protection of the freedom of expression under Article 10: (1) Everyone has the right to freedom of expression. This right shall include freedom to hold opinions and to receive and impart information and ideas without interference by public authority and regardless of frontiers. This article shall not prevent States from requiring the licensing of broadcasting, television or cinema enterprises. (2) The exercise of these freedoms, since it carries with it duties and responsibilities, may be subject to such formalities, conditions, restrictions or penalties as are prescribed by law and are necessary in a democratic society, in the interests of national security, territorial integrity or public safety, for the prevention of disorder or crime, for the protection of health or morals, for the protection of the reputation or rights of others, for preventing the disclosure of information received in confidence, or for maintaining the authority and impartiality of the judiciary.

The ECtHR monitors compliance with the Convention, and seeks to define a sort of common European minimum requirement regarding individual rights. The standards of the ECtHR on certain aspects of the freedom of expression formulate and standardise the jurisprudence of the signatory states to the Convention in an indirect manner. On certain issues, the ECtHR does not try to find common ground but sets a very high level of protection of the freedom of expression. Protection of political speech, the possibility of debating public affairs and the freedom to criticise public figures are issues of special interest to the ECtHR.117 Political debates can only be limited in the most necessary cases, such as to protect public order or state sovereignty, or when there is an actual risk of violent acts occurring or of incitement to violence.118 The same applies to the individual partial rights deduced from the freedom of the press, for example journalists’ right to keep secret the names of the persons from whom they obtain their information.119 For other types of opinions, the ECtHR is content to record certain general principles only, for example it does not determine a uniform standard for hate speech but gives only general guidance.120 Similarly, commercial content and commercial advertisements can enjoy the protection guaranteed under Article 10, but this protection remains below the level of protection granted to political opinions.121 With respect to the

117 Lingens v Austria, App no 9815/82, judgment of 8 July 1986; Castells v Spain, App no 11798/85, judgment of 23 April 1992. 118 See, eg, Incal v Turkey, App no 41/1997/825/1031, judgment of 9 June 1998; Ceylan v Turkey, App no 23556/94, judgment of 8 July 1999; Sürek v Turkey (No 1), App no 26682/95, judgment of 8 July 1999; Sürek and Özdemir v Turkey, App nos 23927/94 and 24277/94, judgment of 8 July 1999. 119 Goodwin v the United Kingdom, App no 16/1994/463/544, judgment of 22 February 1996. 120 Jersild v Denmark, App no 36/1993/431/510, judgment of 22 August 1995; Gündüz v Turkey, App no 35071/97, judgment of 4 December 2003. 121 Casado Coca v Spain, App no 8/1993/403/481, judgment of 26 January 1994; Barthold v Germany, App no 8734/79, judgment of 25 March 1985; Hertel v Switzerland, App no 59/1997/843/1049, judgment of 25 August 1998.

28  The Foundations of Free Speech third type of opinions, the ECtHR leaves a certain room to manoeuvre for the Member States regarding the limitation of freedom of expression, and does not determine a common European standard for speeches defaming religion or violating public morals.122 The Convention and the decisions of the ECtHR are of a subsidiary nature – they supplement rather than override the systems of contracting states intended to protect fundamental rights. The principle of subsidiarity means that all effective legal remedies have to be exhausted in the contracting state before one turns to the ECtHR. Furthermore, this principle also affects the interpretation of the freedom of expression in which the universally protected standard content of individual rights is separated from other questions, which are left to the states to be decided.123 The freedom of expression may only be restricted pursuant to Article 10(2). The ECHR lists several circumstances as grounds for restriction, but also notably includes the guarantees against the restriction of the right to freedom of expression in its text. In practice, the ECtHR first examines whether a disputed state decision has affected the fundamental right included in the application. If it comes to the conclusion that the ­freedom of speech was restricted, it shall examine its compliance. Any restriction must be prescribed by law, which means that the restriction is based on a predefined legal norm, which is accessible and whose content is clear. It must also be demonstrable that the restriction is necessary in a democratic society. Being necessary implies proportionality, and the necessity always has to be examined with respect to the protection of one of the reasons listed in paragraph (2). The restriction is deemed to be necessary, based not on the text but on the jurisprudence of the court, if it is necessitated by a pressing social need.124 iv.  The European Union Article 11 of the Charter of Fundamental Rights of the European Union (EU) declares the right to freedom of expression: (1) Everyone has the right to freedom of expression. This right shall include freedom to hold opinions and to receive and impart information and ideas without interference by public authority and regardless of frontiers. (2) The freedom and pluralism of the media shall be respected.

The Charter may only be applied against the legal acts of the EU, and there are relatively few harmonised areas in the domain of freedom of speech. In certain areas, the Member States’ media regulations are defined by the Directive on audiovisual media services;125

122 Otto-Preminger-Institut v Austria, App no 11/1993/406/485, judgment of 23 August 1994; Wingrove v the United Kingdom, App no 19/1995/525/611, judgment of 22 October 1996; Handyside v the United Kingdom (n 39). 123 Paul Mahoney, ‘Universality versus Subsidiarity in the Strasbourg Case Law on Free Speech: Explaining Some Recent Judgments’ [1997] European Human Rights Law Review 364. 124 Handyside v the United Kingdom (n 39) para 48; Sunday Times v the United Kingdom (No 1), App no 6538/74, judgment of 26 April 1979, para 59. 125 Directive 2010/13/EU of the European Parliament and of the Council of 10 March 2010 on the coordination of certain provisions laid down by law, regulation or administrative action in Member States concerning the

Limitation of the Freedom of Speech  29 from the perspective of online content provision, the Directive on electronic commerce (and services related to the information society)126 and the law of data protection are significant.127 The Court of Justice of the European Union may hand down a decision on the proper application of an EU legal provision, in the course of which it also takes the aspects of fundamental rights into account.128 Individual organs of the EU issue soft law documents in addition to mandatory legal acts and recommendations, and these also deal with the regulation of the public sphere.129 B.  The Protection of Reputation and Honour One of the most important areas of the legal protection of human personality is defamation law and the protection of reputation and honour, which seeks to prevent unfavourable and unjust changes to the individual’s image and evaluation by society. The regulation of this area aims to prevent an opinion published in the public sphere concerning an individual from destroying the external image of the individual without proper grounds, primarily through false statements. The protection granted to the personality may clash with interests related to the freedom of debate on public affairs, and the legal system has to weigh up these two aspects when drawing the boundaries of the freedom of speech. Rules to prevent defamation appear in various areas of law, including civil law, criminal law or in media regulations. On this question, the approaches taken by individual states differ remarkably, but the common point of departure in Western legal systems is the strong protection of debates on public affairs; as such, the protection of the personality rights of public figures is forced into the background when compared to the protection of the freedom of speech. The boundaries of the protection of the personality rights of public figures are primarily shaped by court decisions. The most illuminating example of this is the judgment in New York Times v Sullivan,130 in which the US Supreme Court set a new standard in protecting the freedom of debate on public affairs. According to this decision, elected public officials may only successfully sue a publisher for the publication of a ­statement

provision of audiovisual media services (AVMS Directive). In November 2018, the Directive was amended by Directive (EU) 2018/1808 of the European Parliament and of the Council of 14 November 2018. Hereinafter, the AVMS Directive’s text will be referred to with these newly adopted modifications. 126 Directive 2000/31/EC of the European Parliament and of the Council of 8 June 2000 on certain legal aspects of information society services, in particular electronic commerce, in the internal market (hereinafter ‘E-Commerce Directive’). 127 Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) [GDPR]. 128 For the first decision concerning the media, see Case C-155/73, Italy v Sacchi [1974] ECR 409; in respect of the freedom of speech, see the decision on the right-to-be-forgotten: Google Spain SL, Google, Inc v Agencia Española de Protección Datos Mario Costeja González, Case C-131/12, judgment of 13 May 2014. 129 See Recommendation of the European Parliament and of the Council of 20 December 2006 on the protection of minors and human dignity and on the right of reply in relation to the competitiveness of the European audiovisual and online information services industry (2006/952/EC); Council Framework Decision 2008/913/JHA of 28 November 2008 on combating certain forms and expressions of racism and ­xenophobia by means of criminal law. 130 New York Times v Sullivan (n 28).

30  The Foundations of Free Speech that is related to their office and harms their reputation if they can show that the publisher acted in bad faith, that is knew the statement to be false or did not know of its falsehood because he or she proceeded with reckless disregard in the course of verifying the ­statement.131 The robust protection of personality rights in the case of certain categories of individuals is less justified, and the number of these categories has grown rapidly as the scope of protection of the freedom of speech has broadened.132 However, as regards the protection against defamation of private persons who come into contact with public affairs, in Gertz v Welch133 the Court established that the plaintiff attorney was not a public figure and therefore in his case the New York Times rule should not apply. In such a case, a condition of establishing liability for libel or defamation is for the plaintiff to prove some ‘error’ on the part of the libellous person, that is at least the fact that he or she acted with negligence. In addition to the tort of libel, the tort of ‘the intentional infliction of emotional distress’ may also be utilised for the protection of the personality, although the US Supreme Court’s decision significantly narrowed the opportunity for this to be applied to public figures. According to the judgment in Hustler Magazine v Falwell,134 it would be in breach of the First Amendment to establish legal liability for statements on public figures only because their disclosure was intended to insult the person concerned, or because the statement was disclosed with other harmful intent. The insult and the outrage themselves do not provide a good foundation for restricting the freedom of speech. In English law, the statutory and common law rules of defamation are mixed. The exemption from liability as defined by statute was extended by the judgment handed down in Reynolds v Times Newspapers.135 According to the judgment, the protection of the freedom of speech must be expanded, subject to certain conditions, to cover those events when the disclosing party publishes false allegations in matters of public interest. This provides far less strong protection to debates on public affairs than the New York Times rule, and it necessitates a case-by-case assessment of the circumstances in which the disclosure was made. In the Jameel case,136 the House of Lords established that a disputed publication must not only be related to public affairs, but ‘it should also be responsible and fair’.137 The Defamation Act 2013 triggered a major change in regulation. The Act guarantees exemption for allegations published on matters of public interest. The precondition of this exemption is that the allegation must be made on a matter of public interest, and also that the disclosing party must reasonably believe that disclosure is in the public ­interest.138 131 ibid 279–80. 132 For elected public officials, see Rosenblatt v Baer 383 US 75 (1966), and for other public figures, see Curtis Publishing Co v Butts, Associated Press v Walker 388 US 130 (1967). 133 Gertz v Welch 418 US 323 (1974). 134 Hustler Magazine v Falwell 485 US 46 (1988). 135 Reynolds v Times Newspapers (n 115). 136 Jameel v The Wall Street Journal Europe (No 2) [2006] UKHL 44, [2007] 1 AC 359. 137 ibid para 53. See also Flood v Times Newspapers [2012] UKSC 11. 
John Campbell, ‘The Law of Defamation in Flux: Fault and the Contemporary Commonwealth Accommodation of the Right to Reputation with the Right of Free Expression’ in András Koltay (ed), Media Freedom and Regulation in the New Media World (Budapest, Wolters Kluwer, 2014). 138 Defamation Act 2013, s 4. James Price and Felicity McMahon (eds), Blackstone’s Guide to the Defamation Act 2013 (Oxford, Oxford University Press, 2013) 61–79.

Limitation of the Freedom of Speech  31 The protection of the ‘honest opinion’139 also serves the public interest, in spite of the fact that this protection is not conditional upon the fact that the opinion should relate to a matter of public interest. ‘Honest opinion’ as a ground for exemption may only be applied if the opinion is based on factual information, the reality of which can be proved, and the disclosing party acted in good faith. The ECtHR aims to formulate common European principles in respect of this question. Barendt identifies three distinctive features of the Court’s case law with regard to the protection of reputation: (i) it is not primarily a lower level of protection of the reputation of public figures that the ECtHR prescribes but rather the broadest protection of debates on public issues, that is the decisive factor is not the status of the person who is the subject of an allegation but the extent to which the debate serves the public interest; (ii) the Court distinguishes between statements of fact and statements of opinion, and offers broader protection to the latter, recognising the civil and criminal sanctioning of false statements of fact and, with certain restrictions, placing the burden of proof on the respondent/defendant; (iii) the Court examines the published statements in their entirety, that is in the context of the entire publication, and thus certain expressions that would, in themselves, be considered to be defamatory do not necessarily constitute a legal violation in the eyes of the Court.140 At the same time, the position of the attacked party and his or her previous behaviour, as well as the content of the speech (true or false) and the other circumstances (the way the information was obtained, the style of the speech or its consequences), also need to be examined.141 The extent of the protection offered to the freedom of speech may vary according to whether the subject of criticism is a government member, a politician, a public official, an employee of the judiciary or a celebrity without public authority, or even a private individual. The Strasbourg Court’s first judgment favouring the openness of public debate over the enforceability of personality rights was passed in Lingens v Austria.142 The ECtHR decided that, on the one hand, the demonstration of the truth cannot be demanded in relation to value judgments and, on the other hand, that the threshold of tolerance of prominent politicians with regard to defamatory statements must be much higher than that of other individuals. The Court declared that the freedom of political debate is central to the operation of a democratic society, and its protection is among the paramount objectives of the ECHR. The freedom to criticise politicians is commonly acknowledged, according to the well-established jurisprudence of the ECtHR, as they act in the public sphere of their own free will and as such undertake to tolerate strong criticism. Later, the ECtHR extended the principle to all persons related to public affairs. In Thorgeirson v Iceland,143 the Court stated that the statements made in all cases of relevance to public debate, rather than just political cases, are to be awarded special protection. According to Bladet Tromsø and



139 Defamation Act 2013, s 3. 140 Barendt (n 1) 222–25. 141 Jan Oster, European and International Media Law (Cambridge, Cambridge University Press, 2017) 73–89. 142 Lingens v Austria (n 31). 143 Thorgeirson v Iceland, App no 47/1991/299/370, judgment of 28 May 1992.

32  The Foundations of Free Speech Stensaas v Norway,144 in respect of statements on matters of public interest, the threshold of tolerance of private persons concerned in the case is also higher. C.  The Protection of Privacy The degree and form of the protection of privacy may vary significantly in individual legal systems.145 Similarly to defamation law, the point of departure in this area is also the guarantee of public debate on public affairs; the right of certain public figures to privacy may be overridden by the freedom of speech. The protection of privacy is simultaneously an issue of private law (tort law) and an issue of data protection law. The former protects the human personality in a similar way to libel law, and makes the enforcement of the right conditional upon the individuals’ will (they need to initiate a private lawsuit), while the latter defines the obligations of those managing personal data. The protection of privacy might also have a dimension of criminal law; however, this is less relevant in the case of legal disputes in the public sphere. The scope of the private sphere is much broader in individual legal systems than the one demarcated by infringements that may be committed through ‘speech’. The US  concept of privacy also considers certain elements of self-determination, for example the right of the individual to dispose of his or her own body (and uses this to be the foundation for arguments in favour of abortion or euthanasia rights),146 whereas the case law of the ECtHR applies Article 8 of the Convention protecting respect for private and family life in the context of general civil liberties. We shall limit our discussion to problems related to potential clashes between the private sphere and freedom of speech. A classic study by Warren and Brandeis expressly formulated the necessity of ‘the right to be let alone’ with regard to the tabloid press.147 According to the authors, the insatiable appetite of the press for new sensations and ‘rumours’, and the development of photographic techniques, endangers the sovereign, inner world of the individual and its untouchability to an unprecedented degree. According to William Prosser’s classic categorisation, the individual statements of facts of the privacy tort that gradually took shape in US law in the twentieth century are as follows: (i) intrusion upon the plaintiff’s seclusion or solitude, or into his or her private affairs; (ii) public disclosure of embarrassing private facts about the plaintiff; (iii) publicity that portrays the plaintiff in a false light in the public eye; (iv) unauthorised appropriation, for the defendant’s advantage, of the plaintiff’s name or likeness.148 The general protection of privacy, that is the tort of invasion of privacy, was first recognised at federal level in a 1965 decision of the US Supreme Court.149 It should be noted,

144 Bladet Tromsø and Stensaas v Norway, App no 21980/93, judgment of 20 May 1999. 145 Ronald J Krotoszynski Jr, Privacy Revisited. A Global Perspective on the Right to Be Left Alone (Oxford, Oxford University Press 2016) 1–14. 146 Jed Rubenfeld, ‘The Right of Privacy’ (1989) 102 Harvard Law Review 737. 147 Samuel Warren and Louis D Brandeis, ‘The Right to Privacy’ (1890) 4 Harvard Law Review 193. 148 William L Prosser, ‘Privacy’ (1960) 48 California Law Review 383, 389. 149 Griswold v Connecticut 381 US 479 (1965).

Limitation of the Freedom of Speech  33 however, that the protection of privacy is considered as a constitutional or fundamental right only in terms of relations with the Government (see the Eleventh Amendment), not in disputes between private individuals. Accordingly, in cases where the freedom of speech conflicts with the interest related to privacy, generally the former prevails. The US law in general considers the aspects of ‘newsworthiness’ to be more powerful, and so events that occurred on public property or which are related to public affairs, as well as information and images related to the private life of public figures, may be disclosed if they have something to do with informing the public.150 The freedom of speech may also be awarded stronger protection even in situations where the individual under attack does have a legitimate interest in preserving the intimacy of a given event, such as a funeral.151 The scope of protection of privacy is limited to information that belongs to the innermost sphere of the individual, such as the status of his or her health or sexual life. A naked or even pornographic image of an individual disclosed without his or her consent is clearly infringing, even if it is made of a celebrity.152 English law does not recognise the general tort of privacy, and in contrast to the ­American situation, no individual standalone privacy tort exists. The name, voice and image of a person are protected by the tort of appropriation of personality, while the tort of breach of confidence offers protection against the publication of confidential information. There are also specific statutes in the UK that offer protection against certain special forms of privacy violation (the Data Protection Act 1998, the Post Office Act 1969 protecting private correspondence, and the Interception of Communications Act 1985 providing protection against illegal wiretapping). In Campbell v MGN,153 the House of Lords handed down a decision concerning confidential but newsworthy information relating to celebrities. Photographs were taken in secret of the claimant (a ‘super model’) leaving a rehabilitation clinic, and the disclosure of these photographs was found to be unlawful while the report on her addiction and the treatment was allowed, but only if the details of her healthcare treatment were not disclosed. According to the court, it was not of public interest for the readers to dwell on the details: it might be ‘interesting’ for them, but it did not serve their interests. Reviewing the English case law of the past decade or so, the major criteria applied in deciding disputes may be summarised as follows: (a) Did the claimant in the given situation have a reasonable expectation concerning the respect that would be paid to his or her privacy? (b) Does any public interest justify the publication of information of a private nature (or photographs)? (c) How broadly was the information shared?

150 Amy Gajda, The First Amendment Bubble. How Privacy and Paparazzi Threaten a Free Press (Cambridge, MA, Harvard University Press, 2015) 50–87. 151 Snyder v Phelps 562 US 443 (2011). 152 See the so-called ‘fappening’ case: Mike Isaac, ‘Nude Photos of Jennifer Lawrence Are Latest Front in Online Privacy Debate’ New York Times (2 September 2014); or the Bollea v Gawker case: Sydney Ember, ‘Gawker and Hulk Hogan Reach $31 Million Settlement’ New York Times (2 November 2016). 153 Campbell v MGN [2004] 2 AC 457, HL.

34  The Foundations of Free Speech With respect to information that has already been shared in the public sphere, there is a slimmer chance of legal protection. The tort of misuse of private information also qualifies as a violation of privacy. In the PJS case, the Supreme Court established that reporting on the sexual life of celebrities may not be considered as a matter of public interest.154 The ECtHR has an abundant case law concerning the protection of privacy. ­According to the judgment in Karhuvaara and Iltalehti v Finland,155 the private life of politicians may be shared in the public sphere up to a certain point. In Kulis v Poland,156 the ECtHR confirmed that family affairs and the relationship between a parent and the child usually qualify as confidential information, even with regard to public figures, although the b ­ ehaviour of family members may render it a public affair. In Peck v the United ­Kingdom,157 a decision had to be made on the right of a private person to privacy regarding recordings made in a public place. The applicant, who was in a vulnerable ­situation  – he had just attempted to commit suicide – did not become a public figure in spite of the fact that he was in a public space, and the disclosure of his image and the recordings represents a violation of his privacy. As regards the evaluation of images made of public figures, Von Hannover v Germany is a milestone.158 Although the ability of public figures to protect their private life is more limited than that of ordinary individuals, their privacy is nonetheless also acknowledged and protected by law. At the same time, the applicant (the Princess of Monaco), although a public figure, does not hold public powers and is not a politician. The general public is obviously interested in her private life, but this in itself does not constitute an interest that overrules her rights. However, newsworthiness takes precedence over the protection of privacy. In the two more recent cases initiated by the princess, the ECtHR rejected her application on these grounds.159 D.  Instigation to Violence or a Criminal Offence and Threats Thereof Instigation to unlawful acts, threats of violence and harassment are punishable in all Western legal systems, but the standard of the limitation of freedom of speech also varies in these circumstance. In the US legal system, for example, acts against the speaker are sanctionable only if there is a clear and present danger. The doctrine was first formulated by Justice Oliver Wendell Holmes in Schenck v the United States:160 The question in every case is whether the words used are used in such circumstances and are of such a nature as to create such a clear and present danger that they will bring about the ­substantive evils that Congress has a right to prevent.161

154 PJS v News Group Newspapers Ltd [2016] UKSC 26. 155 Karhuvaara and Iltalehti v Finland, App no 53678/00, judgment of 16 November 2004. 156 Kulis v Poland, App no 15601/02, judgment of 31 March 2008. 157 Peck v the United Kingdom, App no 44647/98, judgment of 28 April 2003. 158 Von Hannover v Germany (No 1), App no 59320/00, judgment of 24 June 2004. 159 Von Hannover v Germany (No 2), App nos 40660/08 and 60641/08, judgment of 7 February 2012; Von Hannover v Germany (No 3), App no 8772/10, judgment of 19 September 2013. 160 Schenck v the United States 249 US 47 (1919). 161 ibid 52.

The modern form of the doctrine was laid down in Brandenburg v Ohio,162 and still holds. Based on the test defined by the court, statements that incite violence or other unlawful acts may only be restricted if (i) the speaker had the intention to instigate unlawful acts, (ii) the speech incited imminent unlawful acts, and (iii) the speech was highly likely to incite or produce such action. (The speech is ‘directed to inciting or producing imminent lawless action’, and is ‘likely to incite or produce such action’.)163 In Chaplinsky v New Hampshire,164 the Supreme Court established that so-called fighting words are outside the coverage of the protection of the freedom of speech. Fighting words are words addressed directly to an attacked party who is present, and which by their very utterance inflict injury or tend to incite an immediate breach of the peace.165 In Virginia v Black,166 according to the definition of the court, ‘true threats’ are statements or utterances that express an intention to commit acts of unlawful violence against the individual or group attacked.167 Several provisions of the UK’s Public Order Act 1986 punish violent acts or affray, which might also be manifested in the form of threats.168 The incitement of violence or the threat of violence is only punishable, however, if the party attacked had reason to fear actual violence and a threat to his or her personal safety (the act would cause a person of reasonable firmness present at the scene to fear for his or her personal safety). The use of threatening, defamatory or offensive expressions or other similar conduct is subject to punishment if the intention was to make the attacked party believe that an immediate, unlawful violent act would take place against him or her, or to provoke an immediate violent action on the part of the party attacked.169 The imminence and reality of the danger posed by the speech is thus also a condition of legal prohibition in English law, even if the standard applied to the restriction of the freedom of speech is lower. The Council Framework Decision on combating certain forms and expressions of racism and xenophobia by means of criminal law170 aimed at providing an EU-level uniform prohibition of incitement to violence against communities or their members. According to the Framework Decision, Member States shall prohibit agitation for violence (and hatred) against social communities. Member States may decide to sanction exclusively those forms of conduct that ‘have a high possibility of disturbing public order’ and which are threatening, disturbing or offensive.171 While in several Turkish cases the ECtHR established that political speech is to be afforded a higher level of protection, if the opinion published incites violent acts, the state



162 Brandenburg v Ohio 395 US 444 (1969).
163 ibid 447–49.
164 Chaplinsky v New Hampshire 315 US 568 (1942).
165 ibid 571–72.
166 Virginia v Black (n 106).
167 ibid 359.
168 Public Order Act 1986, pt I, ss 1–5.
169 ibid s 4.
170 Council Framework Decision 2008/913/JHA (n 129).
171 ibid Art 1.

has the necessary latitude to restrict the freedom of speech if it deems it necessary in order to protect public order.172

E.  Hate Speech

According to the recommendation of the Council of Europe on hate speech, 'hate speech' shall be understood as

covering all forms of expression which spread, incite, promote or justify racial hatred, xenophobia, anti-Semitism or other forms of hatred based on intolerance, including: intolerance expressed by aggressive nationalism and ethnocentrism, discrimination and hostility against minorities, migrants and people of immigrant origin.173

Thus, hate speech can be defined as the public expression of hatred against certain groups of society or their members because they belong to a given group. Such speech, on the one hand, offends the members of the group, but on the other hand it is in most cases considered to comprise opinions on important public affairs, and the legal judgment of hate speech is made especially difficult by this duality.

In the US, the First Amendment does not allow for a general ban on hate speech. The best-known example of the legal response to hate speech in the US is the Skokie case, in which the courts found that banning a neo-Nazi demonstration was not permissible.174 In RAV v City of St Paul,175 the Supreme Court established that it is not permissible to discriminate arbitrarily between statements that may otherwise be restricted (in the given case, racist 'fighting words'). The local government's ordinance, which banned a certain form of speech (with the intention of combating hatred), violated the principle of viewpoint neutrality because it restricted incitement to hatred only if it was based on race, skin colour, creed, religion or gender. The judgment in Virginia v Black176 shows that prohibiting threats of violence is not unconstitutional. The state may therefore identify and restrict solely those forms of conduct which are the most threatening (in the given case, the display of a burning cross). The constitutionality of the restriction is justified not by the alarm or outrage caused by the burning cross but by the actual threat. We can therefore conclude that hate speech may not be prohibited in general in the US, but it may be restricted if it expresses a real threat or creates a clear and present danger of a violent act.

In Europe, in contrast, hate speech is broadly restricted, predominantly in criminal codes.177 In most European states, these restrictions, unlike the American provisions, are not made conditional upon the intensity of the threat or the danger

172 See, eg, Arslan v Turkey, App no 23462/94; Sürek and Özdemir v Turkey, App nos 23927/94 and 24277/94, judgments of 8 July 1999; Sener v Turkey, App no 26680/95, judgment of 18 July 2000; Aksoy v Turkey, App no 28635/95, judgment of 10 January 2001. 173 Council of Europe, Recommendation no R (97) 20, of the Committee of Ministers to Member States on ‘Hate Speech’ (‘scope’). 174 Collin v Smith 578 F2d 1197 (7th Cir, 1978). 175 RAV v City of St Paul (n 76). 176 Virginia v Black (n 106). 177 See, eg, the German Criminal Code (Strafgesetzbuch (StGb), ss 130 and 185), or the French Law on the Freedom of the Press (Loi du 29 juillet 1881 sur la liberté de la presse, s 24).

posed by the speech. The most widespread type of restriction on hate speech in Europe prohibits 'incitement' to various acts against communities. Such restrictions exist in nearly all EU Member States. These provisions of criminal law are largely similar to one another both in terms of their objectives and in identifying the protected communities, as well as in terms of the possible sanctions. Individual states in most cases prohibit agitation for hatred, violence or discrimination, sometimes in conjunction, in other cases under separate provisions.178

Under the UK's Public Order Act 1986, threatening, abusive or insulting expressions may be restricted. The Act prohibits certain expressions of hatred against communities based on race, the colour of a person's skin, nationality, citizenship and ethnic origin (racist expressions), as well as expressions of hatred based on religion or sexual orientation. Racist expressions may be banned if there is a high probability of their inciting hatred or, in the absence of such a probability, if the intention of the speaker is to incite hatred. In the case of hatred based on religion or sexual orientation, the Act differs in that it prescribes only the intention to incite hatred as a requirement.179

The Council Framework Decision on combating racism and xenophobia180 expects the Member States of the EU to prohibit the incitement of violence or hatred against communities in a uniform manner. As mentioned, this had largely been achieved even without the Framework Decision, as reflected in the case law of the ECtHR. The Court in Handyside v the United Kingdom181 concluded that the shocking, outrageous and disturbing character of an opinion does not in itself provide sufficient grounds for restriction if none of the interests protected by the Convention is violated.182 This is also a proper point of departure in the case of hate speech, although it has to be noted that in the criminal codes of a large number of Member States, 'offensive' opinions against communities ('only' offensive, but not inciting or agitating) may continue to be punishable as crimes.183

A specific aspect of the Strasbourg case law in respect of the restriction of hate speech is that, when admitting applications, the Court takes into account not only Article 10 of the Convention protecting the freedom of speech but also Article 17 prohibiting the abuse of rights. In Glimmerveen and Hagenbeek v the Netherlands,184 the European Commission of Human Rights held that opinions with racist content qualify as an abuse of the freedom of speech. Racist opinions violate Article 17, and therefore the restriction imposed by the state on the freedom of speech is not in breach of the Convention, because the exercise of the rights protected by the Convention may not aim to destroy or undermine the rights or liberties of others; nor may the opinion published run counter to the fundamental values protected by the Convention. The ECtHR has also stated as a matter of principle that tolerance and the recognition of the equality of all human beings constitute the foundations of a democratic,

178 For the overview of European regulations, see Blasphemy, Insult and Hatred: Finding Answers in a Democratic Society (Strasbourg, Council of Europe Publishing, 2010).
179 Public Order Act 1986, pt III and pt 3A.
180 Council Framework Decision 2008/913/JHA (n 129).
181 Handyside v the United Kingdom (n 39).
182 ibid para 49.
183 See Blasphemy, Insult and Hatred (n 178).
184 Glimmerveen and Hagenbeek v the Netherlands, App no 8348/78, admissibility decision.

38  The Foundations of Free Speech pluralistic society. This is why, in certain democratic societies, the sanctioning or prevention of the use of expressions inciting, spreading, proclaiming and justifying hatred is permitted, with the proviso that the restriction is proportionate to the objective to be achieved. Hateful expressions are therefore not protected by Article 10.185 While the restriction of hate speech and defining the standards applied are left more or less to the discretion of states, they do not have unlimited room to manoeuvre. In Jersild v Denmark,186 the Court established that sentencing a journalist who had conducted an interview with racist youths on charges of hate speech, violated the ECHR because the journalist acted in compliance with his obligation when he reported on an issue of high interest to the public, drawing the attention of society to the grave problem of racism. The television programme clearly served the purpose of participating in the debate, and not spreading the ideas presented in that the interview.187 F.  Symbolic Speech Symbolic acts may also fall under the coverage of freedom of speech if they express an opinion. In the O’Brien case in the US,188 such an act was the burning of a draft card, although the Supreme Court decided that this act violated the interests of the state in administering the draft without any disturbance, and thus found that the rule a­ ccording to which the elimination or destruction of the draft card may be punishable was not aimed expressly at restricting the freedom of speech, and therefore the prosecution of David O’Brien was not in breach of the First Amendment. The US Supreme Court has handed down decisions in several cases in which the ­American flag was burnt or distorted as a form of expressing a political standpoint. In Street v New York,189 the application of the law making it prohibited ‘[to] mutilate, deface, defile, or defy, trample upon, or cast contempt upon, either by words or act, in public, any flag of the United States’ was found to be unconstitutional by the Court, because it restricted freedom of speech. In the reasoning of the Texas v Johnson decision of 1989,190 Justice Brennan was of the opinion that burning the flag was a communicative act.191 The law may not restrict freedom of speech on the grounds that it does not agree with the content expressed. Neither may it prescribe the behaviours that are prohibited if their restriction targets the opinion implied in them. The provision of the act that prohibits defiling the flag only if that may trigger the outrage of others clearly indicates that it aims to silence opinions. This content-based restriction may not be permitted in line with the well-established principles of the freedom of speech, because the outrage shown by others and the interest of preserving their peace of mind are not sufficiently weighty ­arguments in favour of the restriction.192

185 Gündüz

v Turkey, App no 35071/97, judgment of 4 December 2003. v Denmark, App no 15890/89, judgment of 23 September 1994. 187 Mahoney (n 123) 372–74. 188 The United States v O’Brien (n 59). 189 Street v New York 394 US 576 (1969). 190 Texas v Johnson 491 US 397 (1989). 191 ibid 411. 192 ibid 398. 186 Jersild

Limitation of the Freedom of Speech  39 The ECtHR, in Vajnai v Hungary,193 held that the Communist red star, which is considered to be the symbol of dictatorship in Hungary and public use of which was prohibited by the Criminal Code, can have multiple meanings beyond its relationship with a Communist dictatorship; it is also the symbol of the international workers’ movement, and it is not unequivocally related to the dictatorship or its ideology. The behaviour of the applicant did not pose any danger to society or democracy, and in the concrete situation leading to prosecution (at a political rally), the opportunity to disturb social peace did not exist. G.  False Statements The question of prosecution on the grounds of false statements has traditionally arisen in the law on defamation and with respect the protection of privacy. The American law of freedom of speech finds it difficult to come to terms with the prohibition of false statements. As the Gertz decision puts it,194 ‘under the First Amendment, there’s no such thing as a false idea’.195 Nevertheless, in certain cases in debates on public affairs, in the event of reckless disregard, false statements may have legal consequences.196 Still, in United States v Alvarez,197 the Supreme Court held that a statement’s falsehood is not enough, by itself, to exclude speech from First Amendment protection.198 Pursuant to the Council Framework Decision on combating racism and xenophobia in Member States of the Union,199 a universal prohibition shall be applied to the denial of crimes against humanity, war crimes and genocides. In most Member States of the EU, there is a law prohibiting the denial of the crimes against humanity committed by the National Socialists, or questioning them or watering down their importance.200 In countries where the denial of the Holocaust is illegal, prosecution pursuant to these rules is not deemed by the Strasbourg Court to be a violation of the freedom of speech either. In Witzsch v Germany,201 the ECtHR considered the application inadmissible because the opinion of the applicant was excluded from the protection of the freedom of speech by Article 17 of the Convention. This position of the Court was confirmed by other decisions in which the application of the complainant prosecuted for Holocaust denial was again considered inadmissible by the Court.202 Lehideaux and Isorni v France203 made it clear, however, that the Court excludes only the denial of the

193 Vajnai v Hungary, App no 33629/06, decision of 8 October 2010. 194 Gertz v Welch (n 133). 195 ibid 339. 196 New York Times v Sullivan (n 28) 279–80. 197 The United States v Alvarez 567 US 709 (2012). 198 ibid 711. 199 Council Framework Decision 2008/913/JHA (n 129). 200 See the French Gayssot Act (13 July 1990, amending the Law on the Freedom of the Press of 1881, by adding a new Art 24); German StGb, s 130(3). 201 Witzsch v Germany, App no 41448/98, admissibility decision. 202 See, eg, Garaudy v France, App no 65831/01, admissibility decision. 203 Lehideaux and Isorni v France, App no 55/1997/839/1045, judgment of 23 September 1998.

40  The Foundations of Free Speech facts of the Holocaust from the protection of freedom of speech; other historical facts do not benefit from this preferential treatment. This means that the standards of general hate speech laws can be lowered in the case of Holocaust denial. According to the argument of the Grand Chamber of the ECtHR in the Perinçek case, denial of the Holocaust can be punishable without fulfilling the ‘­incitement’ or ‘stirring up’ element, based on its generally racist and anti-Semitic nature: For the Court, the justification for making its [the Holocaust’s] denial a criminal offence lies not so much in that it is a clearly established historical fact but in that, in view of the historical context in the States concerned … its denial, even if dressed up as impartial historical research, must invariably be seen as connoting an antidemocratic ideology and anti-Semitism.204

According to the Grand Chamber, however, the denial of the Armenian genocide generally does not have this effect.205 H.  Public Morals and Protection of Minors i.  The Protection of Public Morals The concept of public morals is not defined by any legal system or international legal document. Recently, however, as a result of rapid societal changes, we have tended to deem public morals as less of a legitimate basis for restricting freedom of speech, although Article 10(2) of the Convention expressly legitimises the restriction of the freedom of speech on the grounds of protecting public morals in European states. In the US, the standard applied by the courts in this regard is the Miller test, established by the Supreme Court in 1973. Based on the standards set in Miller v California,206 the following expressions may be considered obscene and therefore may be limited: (1) Whether the average person, applying contemporary community standards, would find that the work, taken as a whole, appeals to the prurient interest; (2) Whether the work depicts or describes, in an offensive way, sexual conduct or excretory functions, specifically defined by applicable state law; and (3) Whether the work, taken as a whole, lacks serious literary, artistic, political, or scientific value.207

Similarly to obscenity, crush videos may also be deemed as immoral, the demand for these videos (aimed at satisfying the sexual fetishes of some) having grown in the US so that Congress deemed it necessary to adopt a law prohibiting them from being made and distributed.208 However, the Supreme Court rejected the establishment of a new category outside the freedom of speech, that is the general prohibition of such films; in the case of content-based restriction, the legislator needs to justify the existence of a ‘compelling interest’, which, according to the Court, was unsuccessful in the given case.209 Certainly, this does not change the criminal law status of the act of torturing animals.

204 Perinçek

v Switzerland, App no 27510/08, judgment of 15 October 2015 [GC], para 243. para 244. 206 Miller v California 413 US 15 (1973). 207 ibid 24–25. 208 18 USC § 48(a). 209 The United States v Stevens 130 SCt 1577 (2010). 205 ibid

Limitation of the Freedom of Speech  41 In the UK, the previous precedent-based regulation was replaced by the Obscene Publications Act, enacted in 1959. On the basis of the drafting of the Act, an article is taken to be obscene if the entire article ‘is, if taken as a whole, such as to tend to deprave and corrupt persons who are likely, having regard to all relevant circumstances, to read, see or hear the matter contained or embodied in it’.210 The ‘article’ here refers to every visible, readable or audible content and expression. In respect of articles qualifying as obscene based on the test, there are two forms of prohibited conduct: public disclosure and/or retaining such articles for the purpose of gaining profit through public disclosure; possessing such an article is not illegal per se. The content must have a depraving, corrupting effect, independently from the intention of the entity disclosing it, which – going beyond the previous definition – means much more than merely the emergence of immoral thoughts: it means mental and moral deterioration. The most influential judgment of the ECtHR on this to date, in Handyside v the United Kingdom,211 was handed down in 1976. Prior to the case, the UK authorities banned the distribution of the Little Red Schoolbook for children due to its content, which was deemed to have a harmful effect on their development. Article 10(2) of the ECHR makes the restriction of the freedom of speech possible on the grounds of the protection of public morals. The Court did not define the term ‘public morals’; according to its conclusion, in the absence of a uniform European standard, it could not decide on this question for States Parties and therefore, with respect to restrictions based on m ­ orality, States Parties enjoyed a rather broad, though not unlimited, margin of ­appreciation.212 Thus, the Court did not establish the violation of the Convention in the case. In Müller and Others v Switzerland,213 the Swiss authorities imposed a fine for the exhibition of obscene paintings offending public morals, pursuant to the relevant provisions of the Criminal Code. The ECtHR concluded that the sanction imposed to protect public morals served a legitimate purpose; it was based on legal provisions but, again, the Court did not make an attempt to set a standard that could be universally applied in Europe to define the limits of ‘public morals’. It accepted the opinion of the Swiss court on the paintings, according to which they were by their nature offensive to the sensitivity of an average person, even taking into account the special requirements relating to the interpretation of artworks, and therefore the decisions by the Swiss authorities did not breach the Convention. ii. Paedophilia The prohibition of paedophilia and its depiction and the prohibition on fulfilling the needs of paedophiles evidently fall outside the scope of the freedom of speech in every Western state. This does not raise any constitutional concerns, but the US Supreme Court has also handed down significant decisions related to this issue. According to its case  law, a pornographic depiction does not need to qualify as obscene based on the

210 Obscene

Publications Act 1959, s 1(1). v the United Kingdom (n 39). 212 ibid para 49. 213 Müller and Others v Switzerland, App no 10737/84, judgment of 24 May 1988. 211 Handyside

42  The Foundations of Free Speech Miller test for it to be restricted, if it was made with the participation of minors. In New York v Ferber,214 it established the constitutionality of a law (creating a new exception to the First Amendment) that prohibited the trade in and distribution of photographic images of children younger than 16 years of age engaging in sexual intercourse. The norms of other states render the possession of such images punishable. Child pornography is also prohibited by the US Code.215 However, any regulation needs to meet certain conditions that aim to ensure the freedom of speech. The US Child Pornography Prevention Act set out various prohibitions concerning pornographic images and recordings made with the participation of children. All images depicting children under 18 years of age, or children who seemed to be younger than 18 years of age, engaged in genuine or imitated sexual intercourse fell under the scope of the prohibition. Hence, the distribution of images depicting persons older than 18 years of age but looking younger than that, and of images depicting imitated sexual acts, would also have been classified as a criminal offence. In consequence, the US Supreme Court held that this law was unconstitutional, because the legislator could not provide credible evidence to demonstrate that such a broad restriction was necessary to protect children and prevent paedophilia.216 After a decision in 2003, the Protect Act (on the protection ‘of minors against exploitation’) was adopted, which banned the advertising or distribution of images that ‘create a belief’ that they contain illegal child pornography and where the intention of the distributor was to create the impression that the image showed child pornography. In United States v Williams,217 the Supreme Court deemed this prohibition constitutional on the grounds that the rule does not restrict the content of the expression but only the acts transmitting or aimed at transmitting them. I.  Commercial Communication In the words of Justice Powell, commercial speech ‘is an expression that is related exclusively to the economic interest of the speaker and the addressees of the communication’.218 Nowadays, legal systems identify the interests underlying the protection of commercial communication in a broader context from the point of view of freedom of speech, and thus they also provide broad protection to such expressions. In Valentine v Chrestensen,219 the US Supreme Court had no concern of any sort in considering commercial communication to be outside the purview of the freedom of speech. The change was brought about by the decision taken in Virginia State Board of Pharmacy v Virginia Citizens Consumer Council.220 From the opinion written by Justice Blackmun, it becomes clear that the judicial body placed great emphasis on the interests



214 New

York v Ferber 458 US 747 (1982). Code, Title 18, §§ 2251–52. 216 Ashcroft v Free Speech Coalition 535 US 234 (2002). 217 The United States v Williams 128 SCt 1830 (2008). 218 Central Hudson Gas and Electricity Corp v Public Service Commission 447 US 557, 561 (1980). 219 Valentine v Chrestensen 316 US 52 (1942). 220 Virginia State Board of Pharmacy v Virginia Citizens Consumer Council 425 US 748 (1976). 215 US

Limitation of the Freedom of Speech  43 of consumers and society relating to the free flow of information. As an advertisement does not exclusively serve the interests of the advertiser but may equally serve the interests of the addressee, genuine and fair commercial communication therefore enjoys protection. Another American decision of outstanding importance, Central Hudson Gas v Public Service Commission of New York,221 established a four-step test on the basis of which it can be established whether a restriction on commercial speech is legitimate from the perspective of the freedom of speech. When examining a restriction, the following need to be established: • Is the expression protected by the First Amendment? For speech to come within that provision, it must concern lawful activity and not be misleading. • Is the asserted governmental interest substantial? • Does the regulation directly advance the governmental interest asserted? • Is the regulation more extensive than is necessary to serve that interest?222 If the first or the last questions can be answered positively while the second and third ­questions can be answered negatively, restricting the freedom of speech is ­unconstitutional. The ECtHR, in its decision in Markt Intern and Beerman v Germany,223 declared for the first time that advertisements serving purely commercial interests, that is not participating in debates in the public sphere, are also to be awarded the protection of the freedom of expression.224 Nevertheless, this protection is of a lower level than in the case of ‘political speech’. However, a commercial communication may also relate to the debate on public affairs, and in that case a different standard is to be applied to it. According to Barthold v Germany,225 a discussion of 24-hour veterinary care is a public matter. The issue of abortion is clearly a public matter, and therefore the prohibition of publicity relating to abortion is contrary to the ECHR.226 The disclosure of the unfair market practices of a courier service is a public matter, therefore an article published in the form of an advertisement is entitled to the protection of the freedom of expression.227 A warning on the potential risk of cancer contained in an article published on microwave ovens is the reflection of an opinion in a public discussion; therefore, despite the lack of conclusive scientific proof, its disclosure is permitted and cannot conflict with any provisions of competition law.228 At the same time, unfair conduct or the publication of false statements are not allowed in advertisements, even in respect of matters of public interest.229

221 Central Hudson Gas v Public Service Commission of New York 447 US 557 (1980). 222 ibid 566. 223 Markt Intern and Beerman v Germany, App no 3/1988/147/201, judgment of 25 October 1989. 224 ibid para 36. 225 Barthold v Germany, App no 10/1983/66/101, judgment of 25 February 1985. 226 Open Door Counselling and Dublin Woman v Ireland, App no 64/1991/316/387-388, judgment of 23 September 1992. 227 Markt Intern and Beerman v Germany (n 223). 228 Hertel v Switzerland, App no 59/1997/843/1049, judgment of 25 August 1998. 229 Jacubowski v Germany, App no 7/1993/402/480, judgment of 26 May 1994.

44  The Foundations of Free Speech IV.  FREEDOM OF THE PRESS AND MEDIA REGULATION

A.  Freedom of Speech – Freedom of the Press In the past, freedom of speech and freedom of the press were not typically distinguished by clear legal doctrines. Dicey used these two categories as synonyms.230 They were also used synonymously in the legal system of the US, in the rulings and legal literature relating to the First Amendment, despite freedom of the press being mentioned distinctly in the form of the Press Clause in the US Constitution.231 The lack of distinction between the terms is so prevalent that not even the US Supreme Court has been able to give the fundamental right of freedom of the press a distinct and independent content.232 Notwithstanding that, the extremely extensive protection granted to the freedom of speech means that this lack of distinction does not pose any disadvantage to media operations. The media is thus not subject to stringent legal restrictions, although no special privileges are awarded to it either. However, this has not discouraged certain US authors from arguing for the distinction between freedom of the press and freedom of speech.233 As early as the 1970s, Justice Stewart argued that the freedom of the press, as opposed to the freedom of speech, is not an individual right but is the right of the media as an institution.234 The media industry is the only part of private enterprise that enjoys specific constitutional protection.235 The freedom of the press does not protect individuals working for a media outlet but the institution itself, therefore it is the institution that also has the possible additional rights and obligations attached to the freedom of the press.236 Justice Brennan stated in one of his speeches that he did not view the freedom of the press as a right similar to that of the freedom of speech, which cannot be broadly restricted. Instead, the media outlets must acknowledge that the nature of their work is such that they must bear in mind multiple, even possibly conflicting interests, as well as having to meet certain extra obligations.237 Brennan, similarly to Stewart, underlines

230 Albert V Dicey, Introduction to the Study of the Law of the Constitution (London, 1885) 223–52. 231 Melville B Nimmer, ‘Introduction: Is Freedom of the Press a Redundancy: What Does it Add to Freedom of Speech?’ (1975) 26 Hastings Law Journal 639, 640; David Lange, ‘The Speech and Press Clauses’ (1975) 23 UCLA Law Review 77; William W Van Alstyne, ‘The Hazards to the Press of Claiming a “Preferred Position”’ (1977) 28 Hastings Law Journal 761. 232 Edwin C Baker, ‘The Independent Significance of the Press Clause Under Existing Law’ (2007) 35 Hofstra Law Review 955, 955–59; Eugene Volokh, ‘Freedom for the Press as an Industry, or for the Press as a Technology? From the Framing to Today’ (2012) 160 University of Pennsylvania Law Review 459. 233 Jerome A Barron, ‘Access to the Press: A New First Amendment Right’ (1967) 80 Harvard Law Review 1641; Jerome A Barron, Freedom of the Press for Whom? The Right of Access to Mass Media (Bloomington, IN, Indiana University Press, 1975). 234 Potter Stewart, ‘Or of the Press’ (1975) 26 Hastings Law Journal 631. 235 Zurcher v Stanford Daily 436 US 547, 576 (1978), Justice Stewart’s dissenting opinion. 236 See also Randall P Bezanson, ‘Institutional Speech’ (1995) 80 Iowa Law Review 735, 823; Frederick Schauer, ‘Towards an Institutional First Amendment’ (2005) 89 Minnesota Law Review 1256; Edwin C Baker, Human Liberty and Freedom of Speech (Oxford, Oxford University Press, 1989), particularly 229 and 233; Edwin C  Baker, ‘Press Rights and Government Power to Structure the Press’ (1980) 34 University of Miami Law Review 819; Owen M Fiss, ‘Free Speech and Social Structure’ (1986) 71 Iowa Law Review 1405; Allan C ­Hutchinson, ‘Talking the Good Life: From Free Speech to Democratic Dialogue’ (1989) 1 Yale Journal of Law and ­Liberation 17. 237 William J Brennan, ‘Address’ (1979) 32 Rutgers Law Review 173.

Freedom of the Press and Media Regulation  45 that the interests of the society as the foundation of freedom of the press is the most ­important difference compared to the freedom of speech. Sonja West argues for the ‘distinctness’ of journalism – and so the media – as an ­institution.238 She is of the opinion that the freedom of the press should not be restricted to the right of publication and distribution, and that it not only serves to protect the free use of certain technologies, but also has a democratic function.239 According to West, if we grant the protection of the freedom of the press to everyone who exercises their ­freedom of speech, and to the same extent, this paradoxically would result in the devaluation of the freedom of the press, since this approach would fail to take into consideration the unique social role of the media.240 The media and the journalists working in it not only publish diverse opinions but first and foremost also act as the main driver, engine and forum of the public sphere. The media cannot be put on the same footing as a group of individuals exercising their right to freedom of speech through loudspeakers, not even if today, thanks to the new technologies, anyone can collect and even publish news.241 Unlike in the US practice, European constitutions and the individual legal systems actively try to separate the freedom of speech from the freedom of the press. The media is separately regulated by all European states, and the EU has even created separate legislation for audiovisual media services.242 The distinction between the two rights is also reflected in the English legal literature.243 A recent article by Thomas Gibbons highlights that the freedom of speech is the right of individuals and not of institutions (media enterprises), while the owners’ rights attached to the media are not unconditional and are not the same as those of freedom of speech.244 At the same time, the recognition of the media as an ‘institution’ is important, since if it is strong enough, it can resist external pressure. With that said, it can also be subject to restrictions for the benefit of the community.245 Although Article 10 does not mention freedom of the press as a separate right, the social role of the media and the characteristics of its operation are nevertheless taken into account by the general ECtHR practice. This practice attributes high importance to the media, as it ensures the supervision of the democratic social structure.246 The media and journalists thus enjoy increased protection, but at the same time they have much more responsibility than a street orator: the ‘duties and responsibilities’ accompanying the exercise of the freedoms, as per Article 10(2), acquire an independent meaning in the

238 Sonja R West, ‘Press Exceptionalism’ (2014) 127 Harvard Law Review 2434; and Sonja R West, ‘Awakening the Press Clause’ (2011) 58 UCLA Law Review 1025. 239 West, ‘Press Exceptionalism’ (n 238) 2441. 240 ibid 2442. 241 ibid 2445. 242 AVMS Directive (n 125). 243 See, eg, Eric Barendt, ‘Inaugural Lecture – Press and Broadcasting Freedom: Does Anyone Have any Rights to Free Speech?’ (1991) 44 Current Legal Problems (1991) 63, 79; Geoffrey Marshall, ‘Press Freedom and Free Speech Theory’ [1992] Public Law 40. 244 Thomas Gibbons, ‘Free Speech, Communication and the State’ in Merris Amos, Jackie Harrison and Lorna Woods (eds), Freedom of Expression and the Media (Leiden, Martinus Nijhoff, 2012) 36. 245 ibid. 246 Jersild v Denmark, App no 36/1993/431/510, judgment of 22 August 1995.

46  The Foundations of Free Speech work of the media. The ECtHR has consistently stressed in its practice that the press has an obligation to impart information and ideas concerning matters of public interest.247 Sunday Times v the United Kingdom was the case in which the ECtHR highlighted the key social role of the media for the first time.248 Based on its judgment, any limitation of freedom must serve the public interest, hence both the exercise of the right to free speech and its limitation are allowed only if actually necessary in a democratic society. In the judgment in Jersild v Denmark,249 the Court called the press a public watchdog with a decisive role in democratic society, and therefore its freedom must enjoy special protection. In Informationsverein Lentia v Austria,250 the ECtHR highlighted that the provision of Article 10(2) that allows the Contracting States to require media services to obtain a certain licence, does not mean that those States, by abusing this right, may use this monopoly to their own ends. On the contrary, they have an obligation to structure the media market in a manner that ensures the freedom of the media, by imposing the fewest possible limitations on freedom. When establishing a media system, the primary objective of the licensing procedure or of restrictions is to create or maintain the plurality of the media market.251 B.  The Concept of the Media, the Rights Holders of the Freedom of the Press Despite the emergence of new forms of media in quick succession, for many years the notion of what constituted the media remained clear. It was not disputed that the printed press, television and radio were all included in the notion of media.252 This ‘traditional’ view defines the activities concerned according to the individual forms of the media (the  form of publication and distribution). However, in the present stage of technical advancement, the form itself does not provide sufficient orientation. Due to the phenomenon of media convergence, individual content is no longer closely related to the form or appearance of the media that carry it: printed newspapers may also be read on the Internet, and we can follow television shows on mobile telephones. The newly emerging forms of communication (blogs, comments, private websites, social media) cannot be put into previous categories without difficulties and inconsistencies. As for the concept of ‘media’, orientation is provided by the definition in the EU’s Directive on ‘audiovisual media services’,253 pursuant to which the scope of the Directive shall include services: (a) offered as business services; (b) offered under the editorial responsibility of the service provider; 247 See, eg, Observer and Guardian v the United Kingdom, App no 13585/88, decision of 26 November 1991; Sunday Times v the United Kingdom (No 2), App no 13166/87, decision of 26 November 1991; Thorgeir ­Thorgeirsson v Iceland, App no 13778/88, judgment of 25 June 1992; MGN Ltd v the United Kingdom, App no 39401/04, judgment of 18 January 2011; Uj v Hungary, App no 23954/10, judgment of 19 July 2011. 248 Sunday Times v the United Kingdom (n 124). 249 Jersild v Denmark (n 246). 250 Informationsverein Lentia v Austria, App no 36/1992/381/455-459, judgment of 24 November 1993. 251 Demuth v Switzerland, App no 38743/98, judgment of 5 November 2002. 252 David A Anderson, ‘Freedom of the Press’ (2002) 80 Texas Law Review 429, 442–44. 253 AVMS Directive, Art 1(1)a.

Freedom of the Press and Media Regulation  47 (c) offered with a purpose to inform, entertain or educate; (d) offered with the purpose of providing them to the general public. The concept in principle may also be extended to other forms of media, such as the radio and the press, though the Directive does not do so. Importantly, as the most important objective of the EU when regulating the single market by means of the Directive, it deals only with business services (of a commercial nature, provided in the hope of financial gains, taking financial risks), which constitutes a major limitation when compared to the ‘traditional’ content of the notion of media. Were we to apply this concept to the entirety of the media, we would exclude all services that are not provided by the publisher with the intention of making a profit. In the US context, David Anderson also mentions the possibility of defining the new notion of the media from the perspective of its function.254 In other words, if a given newspaper, television station or website functions in the traditional role of the media (to provide information to the audience) then, in a legal sense, it shall be considered as ‘media’. All discussion papers, expert opinions and recommendations analysing the new concept of media have underlined its democratic functions.255 Jan Oster follows a similar line of argument. Approaching from the perspective of the democratic functions of the media, he suggests a new functional approach, on the basis of which, from the perspective of regulation, the media is to be defined as a natural or legal person gathering and disseminating to a mass audience information and ideas pertaining to matters of public interest on a periodical basis and according to certain standards of conduct governing the newsgathering and editorial process.256

Oster’s line of thought is logical and well-structured. According to his point of departure, the media is a legally recognised institution, with certain predefined rights and ­obligations.257 The protection afforded to the media is not identical to the protection awarded by the freedom of speech, because the role of the media is indispensable for democracy to function and for citizens to be well-informed.258 This interest requires those mediums that are actors in the marketplace of ideas,259 periodically or regularly, to serve it responsibly and professionally and to apply proper professional and ethical standards.260 Oster’s definition therefore does not include those who address broad audiences regularly, whether they

254 Anderson (n 252) 445–46. 255 See, amongst others, The Report of the High Level Group on Media Freedom and Pluralism, ch 1, at ec.europa.eu/information_society/media_taskforce/doc/pluralism/hlg/hlg_final_report.pdf; Recommendation CM/Rec(2011)7 of the Committee of Ministers to member states on a new notion of media, s 2, at https:// www.osce.org/odihr/101403?download=true. 256 Jan Oster, ‘Theory and Doctrine of “Media Freedom” as a Legal Concept’ (2013) 5 Journal of Media Law 57, 74. 257 ibid 59−62, 64−68. 258 ibid 68−74. 259 See Justice Holmes, Dissenting Opinion, Abrams v the United States (n 13) 630. 260 See also the English test of ‘responsible journalism’, the cases of Reynolds v Times Newspapers (n 115) and Flood v Times Newspapers (n 137). See also the decisions of the ECtHR in, eg, White v Sweden, App no 42435/02, judgment of 19 September 2006, para 21; Flux and Samson v Moldova, App no 28700/03, ­judgment of 23 October 2007, para 26.

48  The Foundations of Free Speech are bloggers or traditional newspapers, but whose objective is not to serve the democratic public sphere in line with the above.261 West also calls attention to the fact that, in the context of the decisions taken by the US Supreme Court, the overexpansion of the notion of the media to new services will bring about risks – in particular the risk of the lack of special privileges provided for the media  – and that a distinction needs to be made between a media worker and an ‘occasional public commentator’.262 The function of the media is to monitor social and political elites, collect information about them and safeguard democracy. To this end, the intention to express an opinion in public or its actual expression is not enough. To serve the interest of the community as a whole, the blogger sitting in front of his or her computer and the constitutionally protected institution (the media) need to be distinguished from each other.263 This does not mean that the former will be left without any protection, since the blogger continues to be protected by the right to freedom of speech and his or her situation otherwise is more favourable than that of the media, because although the blogger cannot be awarded special privileges, extra obligations are not imposed upon him or her either, unlike those on the latter. A study by Karol Jakubowicz discussed the possible elements of the new notion of the media. He did not come up with a definition but rather provided aspects or viewpoints, or considerations for analyses, and defined further possible categories within the new media. He drew attention to the fact that all ‘old’ media (press, television and radio) are at the same time ‘new’ media as soon as they become accessible online. Beyond that, he made a distinction between user-generated content, which appears in institutional media, and user-created content that appears outside the institutional media.264 The report of the European Commission’s High Level Group on Media Freedom and Pluralism also stipulates that, in order to provide effective protection for the rights of journalists and to simultaneously identify their obligations and responsibilities, journalists need to be identified.265 The report mentions some of the possible criteria that could be used for identification – participation in organised education for journalists, mandatory membership of journalists’ chambers, membership of professional organisations, journalism pursued as a full-time job – but it also dismisses them, leaving the question open. The Recommendation of the Council of Europe of 2011 also calls for definition of the new notion of the media.266 The starting point of the Recommendation is that the ‘new ecosystem’ of the media should include all those new actors that take part in the process of content production and distribution, transmitting content to a potentially large number of people (content aggregators, application designers, content-producing users, undertakings ensuring the operation of the infrastructure), as long as they have editorial control or oversight over the given content.267 Hence, the decisive element is 261 Oster (n 256) 78. 262 West, ‘Awakening the Press Clause’ (n 238) 1070. 263 West, ‘Press Exceptionalism’ (n 238) 2451−57. 264 Karol Jakubowicz, A New Notion of Media? Media and Media-like Content and Activities on new Communication Services (Strasbourg, Council of Europe, 2009) at https://rm.coe.int/168048622d. 265 High Level Group (n 255) s 4(3), para 34. 
266 Recommendation on a new notion of media (n 255). 267 ibid paras 6 and 7.

Freedom of the Press and Media Regulation  49 the existence of editorial responsibility. The Appendix to the Recommendation sets out six criteria. If all of them are met, the given service can be regarded as ‘media’. These are: (a) intent to act as media; (b) purpose and underlying objectives of media – produce, aggregate or disseminate media content; (c) editorial control; (d) professional standards; (e) outreach and dissemination; (f) compliance with public expectations such as accessibility, diversity, reliability, transparency, etc. The Recommendation of the Council of Europe does not offer a clear notion of media, only outlining certain aspects that may be helpful when defining the concept. Its weakness is that individual aspects themselves give rise to a large number of disputable questions of detail. (What are professional standards and who sets them? What are the expectations of the public and who identifies them, etc?) The definition of the ‘media’ by Oster also leaves some questions open, but it is much more limited, which makes it easier to handle; at the same time, though, it might undoubtedly hide risks. If a law defines the notion of the media, and it even decides within this context what is meant by public affairs and which acts of publication are required in the public interest, then, on the one hand, this will be extremely difficult to codify and, on the other, the definition may miss its target more easily, leaving out important outlets while unnecessarily including others in the scope of the notion. Anyway, emphasising the professional nature of the media no longer has exclusively legal relevance: the advent of the Internet has led to such substantial changes in the forms of news services and the previous forms of consumption that the economic foundations of traditional print and online media are at risk; the new definition of the notion of the media today rest primarily on the fundamental vested interests of professional journalists and the media. The report of the High Level Group on media freedom and pluralism also supports this statement by emphasising the importance of the ‘quality of sources’, and it identifies delivering high-quality journalism as the task of the media.268 Who are the rights holders of the freedom of the press? Article 10 of the ECHR logically protects the individual expressing an opinion, as well as the media that produce content. The ‘freedom of expression’ mentioned in the Convention is to be granted, in addition to the obvious candidates (journalists, publishing houses, media service providers, etc), to everyone in the field of the media who plays an indispensable role in disseminating opinions and content to the public, even if they are not mentioned specifically in Article 10 of the Convention. This was made clear by the ECtHR quite early, in the Autronic case.269 According to the reasoning of the ruling, ‘Article 10 applies not only to the content of information but also to the means of transmission or reception since any restriction imposed on the means necessarily interferes with the right to receive and impart information.’270 Media service

268 High

Level Group (n 255) ch 3, paras 26 and 29. AG v Switzerland, App no 15/1989/175/231, judgment of 24 April 1990. 270 ibid para 47. 269 Autronic

50  The Foundations of Free Speech distributors (cable and satellite service providers) may be the subjects of the freedom of speech, and freedom of the press, and so may Internet intermediaries, without producing or commissioning content of their own.271 C.  Differentiated Regulation of Individual Media Traditionally, individual types of media fall under regulation to differing extents. Whereas major disciplinary codices – criminal and civil codes (where such exist) – apply to the activities of all types alike, targeted regulation also defines the boundaries of their a­ ctivities. In every European state there is a separate law on media services.272 Broadcasting and on-demand media services are also subject to common European legislation.273 In addition, in several European states, a separate law on the press exists: its scope typically includes both printed and Internet press, and frequently even media services beyond those.274 These regulations specify special requirements in terms of content applicable to individual forms of media, and set out the conditions for the provision of services as well as limiting their possible market conduct. The lawful functioning of media services is overseen by media authorities in European states. Media regulations also exist in the US – with a much narrower scope than those in Europe275 – and these are partly included in legal provisions and partly in rules established by the regulatory authority, the Federal Communications Commission (FCC). From the perspective of online content services, the law on electronic commerce is of significance, which covers what are described as ‘services related to the information society’ (including amongst others social media, search engines, webstores, blog hosts and Internet providers), while the common basis in Europe is the E-Commerce Directive.276 D.  Censorship and Prior Restraint Originally the word ‘censorship’ referred to state interference with the content published by the media. Over the historic development of the notion of freedom of the press, it has become a generally accepted view that ‘censorship’, as a prior and arbitrary intervention in content, is impermissible, whereas a posteriori accountability or prosecution for the publication of unlawful content is acceptable. Formally, making the publication of newspapers conditional on a licence and official censorship ceased to exist in

271 Oster (n 77) 77−78. The ECtHR, in the Delfi v Estonia case (n 35), made it clear in respect of Internet intermediaries. 272 See, eg, in the UK: Communications Act 2003; in France: Loi no 86-1067 du 30 septembre 1986 relative à la liberté de communication; in Italy: Decreto legislativo 31 luglio 2005, no 177 (Testo unico dei servizi di media audiovisivi e radiofonici); or German provincial media laws. 273 AVMS Directive (n 125). 274 See, eg, in France: Loi sur la liberté de la presse du 29 juillet 1881; in Italy: Legge 8 febbraio 1948, no 47 (Disposizioni sulla stampa); in Sweden: Tryckfrihetsförordning, Lag (2002:908); or press laws in German States. 275 Kenneth C Creech, Electronic Media Law and Regulation (New York, Routledge, 2013). 276 E-Commerce Directive (n 126).

Freedom of the Press and Media Regulation  51 England in 1694,277 and after Blackstone it has become a generally accepted view that the liberty of the press means the absence of prior restraints: ‘The liberty of the press is indeed essential to the nature of a free state; but this consists in laying no previous restraints upon publications, and not in freedom from censure for criminal matter when published.’278 However, censorship and the prohibition of the publication of certain content, that is prior restraint, are different from each other. The possibility of prior restraint is not fully excluded in the US but, according to Near v Minnesota, there is a strong assumption that it is unconstitutional in nature.279 In this case, the Supreme Court considered a State law unconstitutional that prohibited the publication of malicious, defamatory and scandalous content in newspapers and made it possible to apply prior restraint to the publication, except if the publishing entity proved that the statements were true and that their publication was carried out in good faith and served an objective that could be justified. According to the decision, the possibility of prior restraint did not violate the freedom of the press in general, but the requirement of evidence justifying the truth as a prior requirement did violate the freedom of the press. Publication may be prohibited only by a court in an accelerated procedure after hearing both parties. The US Supreme Court, in Freedman v Maryland,280 specified the procedural guarantees that ensure the constitutionality of prior restraint; and that on the basis of them the burden of proof in terms of the breach of law is with the applicant, while the final decision on the restriction needs to be taken by the court and the proceedings need to be completed in a short time. If the court is fair, it takes the decision in a procedure that also includes the hearing of the parties; in this way, the risk of making the publication of some valuable content impossible may be excluded almost entirely.281 In English law, there has long been an opportunity for the preliminary restriction of publishing content that is considered as infringing, through an interim injunction. The injunction restricting the freedom of the press and temporarily banning publication may only be valid for a specific period of time until the court, in an ordinary procedure, takes a decision on the lawfulness or unlawfulness of the content concerned. In the latter case, the content may not be published at a later date either.282 The condition for issuing such an injunction was already rather limited by the old common law, where it was restricted to cases where the breach of law would be obvious were the content to be published.283 In defamation cases the courts in practice hardly ever decided in favour of restriction.284

277 Michael Tugendhat, Liberty Intact. Human Rights in English Law (Oxford, Oxford University Press, 2017) 118. 278 Blackstone (n 111) 151–52. 279 Near v Minnesota 283 US 697, 716 (1931). 280 Freedman v Maryland 380 US 51 (1965). 281 Martin H Redish, ‘The Proper Role of the Prior Restraint Doctrine in First Amendment Theory’ (1984) 70 Virginia Law Review 53, 55–58. 282 Eric Barendt et al, Media Law: Text, Cases and Materials (London, Pearson, 2013) 30–32. 283 Bonnard v Perryman [1891] 2 Ch 269, CA. 284 Fraser v Evans [1969] 1 QB 349, CA.

52  The Foundations of Free Speech The UK Human Rights Act 1998 now tackles the regulation of ‘prior restraint’, which continues to be based on the common law. Under the Act’s provisions, before applying prior restraint, both parties need to be represented before the court. In the absence of such representation, the applicant needs to prove that it has taken all steps that could be expected of it to inform the other party, or that the other party has been excluded for a suitably compelling reason. Prior restraint may be applied if the court has ascertained that ‘the applicant is likely to establish that publication should not be allowed’.285 However, the assumption does not provide full certainty and does not necessarily provide any likelihood, it simply means that the applicant would stand a good chance of proving that it is right.286 English courts still do not allow prior restraint be applied in defamation cases,287 and the applicant may succeed exclusively with regard to statements relating to privacy and court proceedings. The practice of the ECtHR does not exclude prior restraint in principle. In Sunday Times v the United Kingdom (No 2) and Observer and Guardian v the United Kingdom, the English courts prevented the publication of articles written by a former MI5 agent by means of injunctions. According to the statement of the ECtHR, the Convention does not in terms prohibit the imposition of prior restraints on publication, as such … On the other hand, the dangers inherent in prior restraints are such that they call for the most careful scrutiny on the part of the Court. This is especially so as far as the press is concerned, for news is a perishable commodity and to delay its publication, even for a short period, may well deprive it of all its value and interest.288

In spite of the opportunity for prior restraint that exists in principle, the temporary injunction applied in the concrete case was in breach of Article 10 according to the ECtHR, given that the information in question had already been published in the US and Australia, so that prior restraint was no longer justified by national security, only perhaps by protecting the efficiency of the national security services and their reputation. E.  Special Privileges Awarded to the Media Resolution 1003 on the ethics of journalism of the Parliamentary Assembly of the Council of Europe firmly declared that the ‘journalist’s profession comprises rights and obligations, freedoms and responsibilities’.289 In order to fulfil its democratic obligations, the media may enjoy rights and privileges that are not available to non-journalists exercising the right to freedom of speech. The Recommendation of the Committee of Ministers of the Council of Europe on a ‘new notion of media’ lists the privileges of journalists as the protection of sources, freedom of movement and access to information, the right to accreditation (collectively: the right of access), protection against the abuse of libel

285 Human Rights Act 1998, s 12(3). 286 Barendt (n 1) 127. 287 Cream Holdings v Banerjee [2004] 4 All ER 617; Greene v Associated Newspapers [2005] 1 All ER 30. 288 Sunday Times v the United Kingdom (No 2) (n 247) para 51; and Observer and Guardian v the United Kingdom (n 247) para 60. 289 Resolution 1003 (1993) on the ethics of journalism.

Freedom of the Press and Media Regulation  53 and defamation laws, entitlement to tax benefits providing financial advantages, and the ‘internal’ freedom and pluralism of the press.290 i.  The Protection of Information Sources The first major decision of the ECtHR in this regard was given in Goodwin v the United Kingdom.291 Here, the ECtHR established that the English court had violated Article 10 of the ECHR because, in order to prosecute the information source leaking confidential business information and to prevent a future violation of the law, it obliged the publication to reveal its source. If the investigative press revealing information is not able to guarantee the anonymity of potential information sources, the media is not able to sufficiently perform the role of watchdog. Certainly this right of the media may not be unlimited, but the given circumstances did not justify the obligation to reveal the source. The head of the press department of the Public Prosecutor’s Office in Moldova turned to the media as a whistleblower as part of the background to Guja v Moldova.292 The ECtHR established that in a democratic society, it is essential for information of public interest to be published, even if it has to be done at the cost of breaching the professional obligations of a civil servant. In light of this, it established that Article 10 was violated by the state as it had not protected his anonymity. Financial Times and others v the United Kingdom293 related to the leak of confidential business information on a planned acquisition. The English court obliged the newspapers publishing the information to reveal their source on the grounds that the partly confidential and partly false information obstructed the proper functioning of the Stock Exchange, and that identification was necessary for reasons of administering justice and the prevention of crime. By contrast, the ECtHR held that doubts over whether the source acted in bad faith and had harmful intentions, and doubts concerning the authenticity of the leaked documents were not essential circumstances when deciding the case. Taking into account that the injured company did not initiate the prior ban on publishing the information – which is possible according to English law – and taking into account that it did not make use of all instruments otherwise available to identify the source, as well as taking into account the degree of the pressing public interest in respect of keeping the sources undisclosed, the ECtHR established breach of the Convention. ii.  Exemption from House Searches The question arises of the extent to which the media and journalists may be exempted from house searches by the police (‘house search’ in this case also meaning searches of editorial offices of newspapers or other media outlets). In one case, the ECtHR decided that searching the premises of editorial offices beyond a certain point might violate Article 10 of the Convention. In the given case, eight house searches were carried out

290 Recommendation on a new notion of media (n 255) point 42.
291 Goodwin v the United Kingdom, App no 16/1994/463/544, judgment of 22 February 1996.
292 Guja v Moldova, App no 14277/04, judgment of 12 February 2008.
293 Financial Times and Others v the United Kingdom, App no 821/03, judgment of 15 December 2009.

54  The Foundations of Free Speech simultaneously by 160 policemen in Belgium in order to identify the source of a certain piece of information.294 In Tillack v Belgium295 the Court also established violation of the freedom of the press. The background to the case was that the applicant wrote a newspaper article about inspections made by the European Anti-Fraud Office in EU institutions. Following this, the Belgian court, when the applicant had refused to identify the source of his information, and based on the suspicion that he had bribed the source, carried out a house search at his office and seized the documents, computers and telephones found there. The ECtHR established that the right to keep the sources of information secret is not a privilege granted to the press, the exercise of which is conditional upon the lawful or unlawful nature of the activity carried out by the given source, but it is instead a fundamental right, which may only be restricted in a certain limited number of cases and if a real need arises. iii.  Investigative Journalism The ECtHR emphasised in several of its decisions that Article 10(2) of the Convention – the exercise of the freedom of expression – prescribes ‘duties and responsibilities’ for the press. This is of special importance if the journalist finds classified data, or state, service provider or other secrets, and needs to weigh up which is more important – respecting the confidentiality of the data, or the public interest relating to the publication of such data. In Dammann v Switzerland,296 the ECtHR established the violation of Article 10. In this case, Viktor Dammann, a journalist, had been sentenced by a criminal court for having access to the previous convictions of private persons through the violation of a state secret. According to the ECtHR, this was not confidential information as it could be accessed through court reports and newspaper articles, and it served the public interest; in addition, finding these pieces of information was not Dammann’s responsibility but that of the state. In Stoll v Switzerland,297 the ECtHR found that Article 10 of the Convention was not violated by the conviction of a journalist on the ground of publishing a diplomatic document that was classified as confidential. In its reasoning, the Court said that although the activities of the journalist were linked to a debate of public interest, not publishing diplomatic documents was justified by a greater interest. Although the publication of the information did not undermine diplomatic negotiations, it was nevertheless capable of violating Swiss national interests. F.  The Principal Foundations of Special Privileges Awarded to the Media If we recognise the independent nature of media freedom, we have to deal with a new problem stemming from it. Fundamental rights typically have a negative character,

294 Ernst and Others v Belgium, App no 33400/96, judgment of 15 October 2003.
295 Tillack v Belgium, App no 20477/05, judgment of 27 November 2007.
296 Dammann v Switzerland, App no 77551/01, judgment of 25 April 2006.
297 Stoll v Switzerland, App no 69698/01, judgment of 10 December 2007.

Freedom of the Press and Media Regulation  55 since they oblige the state to respect these rights vis-à-vis its citizens, for the benefit and protection of its citizens. However, the negative character of media freedom in itself is not sufficient to include and reflect the entirety of the important interests related to the operation of the democratic public sphere. For that reason, the positive character of the freedom of the press must also be recognised. This differentiation between the different fundamental rights has its roots in the theory of Isaiah Berlin.298 He considered the latitude, free of external interference, ensured to the individual as a negative freedom (freedom from something or somebody), whereas he viewed freedom of choice, that is the individual’s freedom as self-mastery, as a positive freedom (freedom to do something). John Rawls also underlines that, for most individuals, it is not the civic status of being free that is really important but the opportunity to enjoy that liberty.299 As far as the ­freedom of the press is concerned, the negative character means that communication via the media is free from external interference, whereas the positive character means ensuring that anyone can have free access to media content and that media content is diverse and conveys diverse opinions. This means that whereas the negative character (freedom from the interference of external powers, hence primarily from the state) is an actual right of the media, it is in the interest of the media audience (and hence, in a broader sense, of all society) for its positive character to be recognised. It also follows from the recognition of this interest that media regulations impose certain public service obligations on the media that serve to satisfy the interest of providing free information. By taking these (seemingly conflicting) positive and negative characteristics into account together, by subjecting them to a single legal regulatory framework and by ensuring a certain balance between rights and obligations, the content and contours of the freedom of the press are defined. The positive character of media freedom does not shift the audience out of its passivity. Its members still cannot influence media content actively; nevertheless, their interest in obtaining a wide range of information can be recognised under the law. The recognition of the positive character of press freedom (the audience’s interests) originates in the acknowledgement of the democratic tasks of the media. These issues had spawned a substantial body of literature before the spread of the Internet. The idea of the media’s social responsibility is not unheard of in the US either, though it has not been dealt with in the media regulations. The Hutchins Commission, entrusted to review and redefine the social role of the media, created a theory of the social responsibility of the media in a report issued in 1947.300 According to the Commission, the greatest risk to the freedom of the press is that, despite technical developments and the increase in the volume of the press (which at the same time made market entry significantly more expensive), access to the media is more restricted; fewer voices can be heard in

298 Isaiah Berlin, Four Essays on Liberty (Oxford, Oxford University Press, 1969). 299 John Rawls, A Theory of Justice (Cambridge, MA, Harvard University Press, 1971) 204. 300 Commission on Freedom of the Press, A Free and Responsible Press. A General Report on Mass Communication: Newspapers, Radio, Motion Pictures, Magazines, and Books (Chicago, IL, The University of Chicago Press, 1947). See John C Nerone, Last Rites: Revisiting four Theories of the Press (Chicago, IL, University of Illinois Press, 1995) 77–100.

56  The Foundations of Free Speech the media, and most of the voices of these few fail to acknowledge their responsibility towards society.301 Some American authors came to similar conclusions to Europeans, notwithstanding the differences between the two legal systems. According to Judith Lichtenberg, the freedom of the press is a tool necessary for the proper operation of democracy, which can obviously be used for economic purposes as well, but always subject to certain conditions serving the public interest.302 Cass Sunstein argues for a ‘second New Deal’, since modern media fail to provide any help; moreover, they make the workings of democracy more difficult. In proportion to the comprehensive expansion of commercial media, the hope of raising active citizens who play a crucial role in participatory democracies is ­decreasing.303 Gibbons underlines that, in relation to the media, it is not credible to maintain that private organisations do not have a public function in addition to their private activities.304 In order to have access to media, privately owned media enterprises also need to take into account the interests of their audiences, and they also need to impart diverse opinions (the owner or the editor does not have the freedom to select as his or her taste or line of thinking goes).305 On the other hand, arguments against the recognition of the positive character of the freedom of the press typically come from the US. According to the mainstream of US legal literature, anything (such as the operation of the market left to its own devices) is better than state interference (regulation) – state interference is either unnecessary or harmful. Andrew Kenyon briefly summarises the essence of those arguments in an a­ rticle referring to the most important authors: based on this, the freedom of the press is a universal right; everybody may express their opinions freely and may use the media to impart them freely without any state interference, and in this way equality will prevail without state/governmental interference (action). Interference by the state necessarily distorts the functioning of the media, advantaging some at the expense of others and, by doing so, curtailing rights arising from the First Amendment.306 In contrast, Barendt argues that the freedom of speech can be subject to regulation in order to make its exercise more effective.307 According to Gibbons, the state should not avoid responsibility for the protection of speech, in particular access to the audience and fair participation in dialogue.308 Kenyon, having analysed the legal literature examining the positive nature of the freedom of the press, concludes that debate and diversity of ideas cannot be assumed in market-based mass media for, to flourish, debate and diversity require support beyond markets.309 Article 10(2) of the ECHR mentions that ‘[t]he exercise of these freedoms … carries with it duties and responsibilities’. According to the case law of the ECtHR, it is the task – even more, the obligation – of the media to publish information of general interest and opinions on matters of general interest.310

301 Lee C Bollinger, Images of a Free Press (Chicago, IL, University of Chicago Press, 1991) 28–34.
302 Lichtenberg (n 41) 104–05.
303 Cass R Sunstein, Democracy and the Problem of Free Speech, 2nd edn (New York, Free Press, 1995) 17–52.
304 Gibbons (n 244) 33.
305 ibid 39.
306 Andrew T Kenyon, ‘Assuming Free Speech’ (2014) 77 Modern Law Review 379, 381–85.
307 Barendt (n 1) 69.
308 Gibbons (n 244) 42.
309 Kenyon (n 306) 398.
310 See n 247.

Freedom of the Press and Media Regulation  57 G.  Content Regulation of the Media As mentioned, legal regulation may also set forth provisions on media content. In Europe, for example, commercial communications published in the media are subject to detailed regulation.311 Beyond this, regulation with the purpose of child protection and the prohibition of hate speech is of major importance from the perspective of the freedom of the press. i.  Protection of Minors According to the US Code, an entity broadcasting obscene, indecent or profane content on television or the radio may be sentenced to a fine or imprisonment.312 The notion of obscenity can be checked against the Miller test;313 the elaboration of the category of indecency was left to the FCC. Based on the definition of the authority overseeing the electronic media, indecent refers to ‘the language that depicts or describes, in an offensive way, sexual conduct or excretory functions’.314 Such language, and obscene expressions, may be restricted in the electronic media (with the exception of cable television) based on the decision of the FCC authorised to do this by virtue of the Communications Act of 1934. The broadcaster may be obliged to discontinue broadcasting and may have a fine imposed upon it,315 in addition to sanctions included in the US Code. In the Pacifica case,316 the Supreme Court found the FCC regulation constitutional, according to which indecent expressions meeting the above definition may only be b ­ roadcast at times when there is no real risk that the programme schedule will be watched by children. The period between 10 pm and 6 am is the ‘safe harbour’ for indecent programmes. Restricting indecency in the media is not unconstitutional according to more recent decisions either, but the FCC has to define banned content accurately; the policy issued by it may not be vague.317 The requirements applied in media regulations may not be extended automatically to other services that are harmful to children. In Brown v Entertainment Merchants ­Association,318 the Supreme Court struck down the Californian act that prescribed that a label reading ‘18+’ be attached to the packaging of violent video games. According to the Court, video games also qualify as opinions, and although the legal system may attempt to protect children from content that is harmful to them, it may not place certain ideas out of their reach. The scope of the Californian act was too narrow and too broad at the same time: it was not applicable to all violent content (only to video games), but it did obstruct the purchase of such games even if parents did not object to purchasing them.



311 AVMS Directive, Arts 19–26.
312 US Code, Title 18, § 1464.
313 See Miller v California (n 206).
314 FCC v Pacifica Foundation 438 US 726 (1978).
315 s 312.
316 FCC v Pacifica Foundation (n 314).
317 FCC v Fox Television Stations, Inc 132 SCt 2037 (2012).
318 Brown v Entertainment Merchants Association 131 SCt 2729 (2011).

58  The Foundations of Free Speech The UK’s Communications Act 2003 prohibits the broadcasting of ‘offensive and harmful’ content.319 This change clearly represents the setting of lower standards. The Broadcasting Code of Ofcom, the media authority, provides a detailed description of the legal provisions. Pursuant to the Code, which is obligatory for all broadcasters, for the purpose of protecting minors, special rules are applicable to programmes depicting sexuality or violence and containing crude language. Usually, these may not be broadcast in the ‘watershed’ periods, that is before 9 pm and after 5.30 am. In addition, pornographic films may only be broadcast through systems that are accessible with a password, or which are protected in some other way, which ensures that the content is available only to adults.320 Judging what is meant by ‘offensive’ content is not entirely clear. The House of Lords decided in favour of limiting the freedom of speech in a case of outstanding importance in R (on the application of ProLife Alliance) v BBC.321 The anti-abortion party submitting a complaint against the BBC protested because the broadcaster had refused to air the party’s political broadcast, which included scenes showing the remains of aborted foetuses. The BBC, in its reasoning, held that airing the political broadcast would have violated the legal ban on broadcasting offensive content, and the courts accepted this reasoning. However, it should be noted that in this case the otherwise most-protected ‘political’ speech was restricted, which could raise serious concerns.322 Although the regulation of ‘violence’ and ‘offence’ in media content is an issue that primarily falls under the authority of Member States, the single market of television broadcasting necessitated the harmonisation of EU law on the protection of minors.323 The Directive only defines the objectives to be achieved by Member States and the framework of regulation; the choice of the means and the exact method of implementing the Directive is left to legislation in individual Member States. The AVMS Directive specifies concrete requirements in respect of the protection of minors in two areas. First, it contains restrictive rules in several places relating to commercial communication.324 Then it places a prohibition on making available content in audiovisual media services that may impair the physical, mental or moral development of minors, in particular pornography or programmes that involve gratuitous violence.325 ii.  Hate Speech According to the AVMS Directive, Member States shall ensure that audiovisual media services (ie linear television and video-on-demand services) do not contain any incitement to hatred directed against a group of persons or a member of a group based on any of the following grounds: sex, race, colour, ethnic or social origin, genetic features, language, religion or belief, political or any other opinion, membership of a national



319 s 319(2)(f).
320 Broadcasting Code, s 1.
321 R (on the application of ProLife Alliance) v BBC [2003] 2 All ER 977, HL.
322 Eric Barendt, ‘Free Speech and Abortion’ [2003] Public Law 580.
323 Keller (n 77) 363.
324 AVMS Directive, Art 9.
325 ibid Art 6a.

Freedom of the Press and Media Regulation  59 minority, property, birth, disability, age or sexual orientation.326 Beyond the obligation for Member States to introduce prohibitions, the restriction of hate speech provides an opportunity for Member States to restrict the free flow of services within the Union: if an audiovisual media service that comes from one Member State and is accessible in another Member State contains incitement to hatred against a community, countermeasures may be taken pursuant to the conditions and the procedures defined in the Directive.327 There are several cases where the media regulations of an EU Member State contain a separate prohibition on incitement to hatred against communities, partly duplicating criminal law regulations. This expands the margin of appreciation of Member States in combating hate speech by opening the opportunity for sanctions provided for in media regulations.328 The Additional Protocol to the Convention on Cybercrime329 prescribes that Member States sign the Convention in order to criminalise and prohibit, by means of criminal law instruments, the forms of hate speech made through computer systems: ‘[R]acist and xenophobic material’ means any written material, any image or any other representation of ideas or theories, which advocates, promotes or incites hatred, discrimination or violence, against any individual or group of individuals, based on race, colour, descent or national or ethnic origin, as well as religion if used as a pretext for any of these factors.330

Pursuant to the Protocol, the threat of racist and xenophobic content and its distribution causing offence to communities shall be sanctioned by the state’s applying instruments of criminal law. H.  Media Pluralism In line with the requirement on media pluralism, the entire media market shall collectively cater for the diversity of opinions and available content, and establish the balance thereof. This requirement primarily imposes tasks on the state in respect of the regulation of the media market, and in practice such regulation mainly concerns traditional television and radio broadcasting. The requirement of pluralism prescribes not only content requirements, but also restrictions that influence the structure of the media market, and in addition it potentially restricts market behaviour.331 The ECtHR, in several of its decisions, expressly makes reference to the importance of media pluralism.332 Pluralism is also included in both Article 11(2) of the Charter 326 ibid Art 6. The list of the possible grounds of hatred is from the EU’s Charter of Fundamental Rights (2000/C, 364/01). 327 ibid Art 3. 328 See, eg, the Irish regulation (Broadcasting Act 2009, Art 71(6)), the French regulation (Act on Communication no 86-1067, the Law on the Press of 1881, Arts 23 and 24) or the German regulation (Jugendmedien-Schutzstaatsvertrag, Art 4(1)3). 329 Additional Protocol to the Convention on Cybercrime, concerning the Criminalisation of Acts of a Racist and Xenophobic Nature Committed through Computer Systems. 330 Art 2(1). 331 Peggy Valcke et al, ‘The European Media Pluralism Monitor: Bridging Law, Economics and Media Studies as a First Step towards Risk-Based Regulation in Media Markets’ (2010) 2 Journal of Media Law 85. 332 Informationsverein Lentia v Austria, App nos 13914/88, 15041/89, 15717/89, 15779/89, 17207/90, judgment of 24 November 1993; Tele 1 Privatfernsehgesellschaft mbH v Austria, App no 32240/96, judgment of

60  The Foundations of Free Speech of Fundamental Rights of the EU (‘The freedom and pluralism of the media shall be respected’) and the recitals to the AVMS Directive.333 Thus, according to the EU, the pluralism of the media is equivalent to its freedom – it is an interest of the utmost ­importance. The first level of regulation to foster media pluralism is the level of structural restrictions: obstacles to acquiring property to prevent concentration in the media market, including undertakings and cross-ownership in the same market (eg television).334 The second level of regulation comprises rules that aim to foster the general broadening of the supply of content. In all European countries, public service media function with state aid.335 Furthermore, individual broadcasters may be obliged to broadcast programmes that discuss public affairs in general, or news that is expressly of local interest.336 Pursuant to EU regulation, television broadcasters need to take into account programme quotas, which seek to protect the European media market and independent production companies. Articles 16–18 of the AVMS Directive stipulate that Member States shall ensure, where practicable and by appropriate means, that broadcasters reserve for European works a majority of their transmission time, excluding the time allotted to news, sports events, games, advertising and teletext services. In the case of programmes produced by production companies independent from the broadcaster, the quota shall be 10 per cent. According to Article 13 of the AVMS Directive, on-demand audiovisual media services shall secure at least a 30 per cent share of European works in their ­catalogues. The third level of regulation promoting media pluralism includes regulations implementing direct intervention regarding content. An example of these is the requirement for balanced coverage, on the basis of which public affairs need to be reported impartially in programmes providing information on them. Regulation may apply to television and radio broadcasters, and it is implemented in several states in Europe.337 Those who argue against maintaining this rule say that since the former scarcity of information has been eliminated, and hence, in this new media world, everyone can obtain information from countless sources, the earlier regulatory solutions have therefore become redundant or, one might say, anachronistic.338 By contrast, Steven Barnett and Mike Feintuck argue for the maintenance of balanced (impartial) news coverage, emphasising the importance of reliable media operating under ethical standards that are taken seriously by audiences, even in the

21 September 2000; Manole v Moldova, App no 13936/02, judgment of 17 September 2009; Centro Europa 7 srl and Di Stefano v Italy, App no 38433/09, judgment of 7 June 2012. 333 AVMS Directive, Preamble paras (5), (8), (12), (34), (48) and (94). 334 Keller (n 77) 416–22; Ewa Komorek, Media Pluralism and European Law (New York, Wolters Kluwer, 2012). 335 See Benedetta Brevini, Public Service Broadcasting Online: A Comparative European Policy Study of PSB 2.0 (New York, Palgrave, 2013); Karen Donders, Public Service Media and Policy in Europe (Basingstoke, Palgrave Macmillan, 2012). 336 Communications Act 2003, ss 287–89. 337 See, eg, the German regulations (Rundfunkstaatsvertrag, ss 25–34) and the UK regulation (ss 319(2)(c) and (d), 319(8) and 320 of the Communications Act 2003, and s 5 of the Broadcasting Code). 338 This view is disputed by Mike Feintuck, ‘Impartiality in News Coverage: The Present and the Future’ in Amos, Harrison and Woods (eds) (n 244) 88.

Freedom of the Press and Media Regulation  61 new media  ­environment.339 As Barnett notes, for as long as television journalism can be differentiated from Internet journalism, there is no reason to stop having mediaspecific rules.340 Feintuck argues that the earlier assumption, suggesting that in a free and unrestricted media market a diversity of opinions would automatically appear and hence impartiality would be created, proved to be false. As Richard Sambrook put it, ‘[i]f the words “impartiality” and “objectivity” have lost their meanings, we need to reinvent them or find alternative norms to ground journalism and help it serve its public purpose – providing people with the information they need to be free and self-governing’.341 A further means, employed in some states, is to make rules ensuring access to the media. Based on the right of reply, access to the content of the media service provider is granted by the legislator not due to some external condition, but due to the content published (before) by the service provider. Article 28 of the AVMS Directive prescribes that EU Member States have national legal regulations in place in respect of television broadcasting that ensure adequate legal remedies for those whose personality rights have been infringed through false statements. Such regulations are known Europewide; they typically impose obligations on the printed and online press alike.342 The compatibility of the right of reply and Article 10 has been confirmed by the ECtHR in several decisions.343 In Melnychuk v Ukraine,344 the Court established that the right of reply is part of the freedom of speech of the applicant. That is, rather than limiting the freedom of the press of the publisher of the paper carrying the injurious content, the opposite is true: the right is an instrument that enables the complainant to effectively exercise his or her freedom of speech in the forum where the complainant has been attacked. In Kaperzynski v Poland,345 the ECtHR held: The Court is of the view that a legal obligation to publish a rectification or a reply may be seen as a normal element of the legal framework governing the exercise of the freedom of expression by the print media … Indeed, the Court has already held that the right of reply, as an important element of freedom of expression, falls within the scope of Article 10 of the Convention. This flows from the need not only to be able to contest untruthful information, but also to ensure a plurality of opinions, especially on matters of general interest such as literary and political debate.346

The first major decision by the US Supreme Court concerning a reply law was Red Lion Broadcasting v FCC.347 It examined the constitutionality of the FCC’s fairness doctrine,

339 Steven Barnett, ‘Imposition or Empowerment? Freedom of Speech, Broadcasting and Impartiality’ in Amos, Harrison and Woods (eds) (n 244) 45; Feintuck (n 338). 340 Barnett (n 339) 58. 341 Richard Sambrook, Delivering Trust: Impartiality and Objectivity in the Digital Age (Oxford, Reuters Institute for the Study of Journalism, University of Oxford, 2012) 39. 342 Kyu Ho Youm, ‘The Right of Reply and Freedom of the Press: An International and Comparative Perspective’ (2008) 76 George Washington Law Review 1017; András Koltay, ‘The Right of Reply in a European Comparative Perspective’ (2013) Hungarian Journal of Legal Studies – Acta Juridica Hungarica 73. 343 First see Ediciones Tiempo SA v Spain, App no 13010/87, decision of 12 July 1989. 344 Melnychuk v Ukraine, App no 28743/03, decision of 5 July 2005. 345 Kaperzynski v Poland, App no 43206/07, judgment of 3 April 2012. 346 ibid para 66. 347 Red Lion Broadcasting v FCC 395 US 367 (1969).

62  The Foundations of Free Speech which required that some discussion of public issues must be presented on broadcast stations, and that each side of those issues must be given fair coverage. It contained a specific right of reply element: if, during the presentation of a controversial issue, an attack was made – ‘upon the honesty, character, integrity or like personal qualities of an identified person or group’ – the attacked person must be given an opportunity to reply. The same obligation applied if a political candidate’s views were endorsed or opposed, so the broadcaster should give the opposing candidate or the opponents of the endorsed candidate the opportunity to respond. The Supreme Court unanimously upheld the regulations: [The] people as a whole retain their interest in free speech by radio and their collective right to have the medium function consistently with the ends and purposes of the First Amendment. It is the right of the viewers and listeners, which is paramount … [We cannot] say that it is inconsistent with the First Amendment goal of producing an informed public capable of conducting its own affairs to require a broadcaster to permit answers to personal attacks occurring in the course of discussing controversial issues … Otherwise, station owners and a few networks would have unfettered power to make time available only to the highest bidders, to communicate only their own views on public issues, people and candidates, and to permit on the air only those with whom they agreed. There is no sanctuary in the First Amendment for unlimited private censorship operating in a medium not open to all.348

This democratic vision views the broadcast media as trustees of the people. The Red Lion decision stands alone in the long line of decisions concerning the meaning of the First Amendment, as it considers the right of the community to be explicitly superior to individuals’ right to free speech. It perhaps came as a surprise in the light of Red Lion that, only five years after the decision, the Court – again unanimously and without any reference made in its judgment to Red Lion – struck down a piece of Florida legislation that required the printed press to give the right of reply to candidates for political office who were assailed over their personal character or official record.349 The Court ruled in favour of the autonomous press: [The] implementation of a remedy such as an enforceable right of access necessarily [brings] about a confrontation with the express provisions of the First Amendment … Compelling editors or publishers to publish that which ‘“reason” tells them should not be published’ is what is at issue in this case. The Florida statute operates as a command in the same sense as a statute or regulation forbidding [the newspaper] to publish specified matter … The Florida statute fails to clear the barriers of the First Amendment because of its intrusion into the function of editors.350

The two decisions are in considerable tension. The spectrum scarcity and the state of technology, which at that time limited the available number of broadcasting channels, cannot explain this ambiguity. As was rightly recognised, there were more radio and ­television stations than daily newspapers in the US, even in the 1970s.351



348 ibid 390.
349 Miami Herald Publishing v Tornillo 418 US 241 (1974).
350 ibid 256.
351 Lucas A Powe, Jr, ‘Or of the (Broadcast) Press’ (1976–77) 55 Texas Law Review 36, 55–56.

Freedom of the Press and Media Regulation  63 Many commentators celebrated the Miami Herald decision as a victory of press freedom and blamed the Court for the fatal mistake it made in Red Lion.352 They argued that the free media market, even with its considerable failings, is always better than one that is regulated by the state. This distrust in government has deep roots in First Amendment theory; conceivably it does not give sufficient weight to the threat of private censorship as a limitation to press freedom. Other authors celebrate Red Lion and hold that Miami Herald was wrong. For them, ensuring that people are presented with a wide range of views about public issues is necessary for making democracy work, and this aim can justify state intervention.353 Another aspect of direct access to media is related to the regulation of political advertisements published in the media. In Verein gegen Tierfabriken Schweiz (VgT) v Switzerland,354 the Strasbourg Court had to decide whether or not the general prohibition on publishing advertisements having political content in the media outside election campaign periods violated the freedom of speech. According to the ECtHR, since the prohibition was imposed only on radio and television media service providers, and as the Court did not ascertain that the non-governmental organisation wishing to publish the statement was a group that could exert any influence on the public to enhance its political objectives, in this case the prohibition violated the Convention. The next decision of the ECtHR in this regard was taken in the case of a Norwegian pensioners’ party.355 Norwegian laws prohibited the publication of political opinion on television through advertisements, and when the senior citizens party sought to do so, financial sanctions were imposed upon it. In its decision, the ECtHR established that to safeguard the fairness of political processes, it may be necessary to regulate political advertisements; at the same time, in the case in question, the full, undifferentiated ban on television advertising was not proportionate to the objective to be achieved, and this led to the silencing of the small party’s opinion, and thus the Norwegian decision was in violation of the ECHR. Compared to the previous decision, Animal Defenders International v the United Kingdom356 had a different ending: in this case an animal rights organisation turned to the ECtHR because the English authorities had banned the publication of its advertisement showing the treatment of chimpanzees, on the grounds that it had a political nature and was published outside the period of election campaigns. According to the applicant, the state organs had interpreted the scope of the prohibition restrictively and could instead have ensured the opportunity to publish the advertisement for stakeholders of society, even outside election campaign periods. By contrast, the Government highlighted

352 Just a handful of examples from the vast amount of literature: Kenneth A Karst, ‘Equality as a Central Principle in the First Amendment’ (1975–76) 43 University of Chicago Law Review 20; Vincent Blasi, ‘The Checking Value in First Amendment Theory’ (1977) American Bar Foundation Research Journal 521; Edwin C Baker, Human Liberty and Freedom of Speech (Oxford, Oxford University Press, 1989). 353 Cass Sunstein, ‘A New Deal for Speech’ (1994–95) 17 Hastings Communications and Entertainment Law Journal 137; Charles W Logan, ‘Getting Beyond Scarcity: A New Paradigm for Assessing the Constitutionality of Broadcast Regulation’ (1997) 85 California Law Review 1687; Jerome A Barron, ‘Rights of Access and Reply to the Media in the United States Today’ (2003) 25 Communications and the Law 1. 354 Verein gegen Tierfabriken Schweiz (VgT) v Switzerland, App no 24699/94, judgment of 28 June 2001. 355 TV Vest AS & Rogaland Pensjonistparti v Norway, App no 21132/05, judgment of 11 December 2008. 356 Animal Defenders International v the United Kingdom, App no 48876/08, judgment of 22 April 2013.

64  The Foundations of Free Speech two fundamental risks: the risk of abuse and the risk of arbitrariness. The ECtHR considered the risks cited by the Government to be well-founded, noting that allowing such advertisements would have made it possible for economic stakeholders to gain societal support and influence through the media, even if a financial cap were introduced. Safeguarding media pluralism imposes obligations not only on the state, on media service providers and on the publishers of printed and online press products, but also on those distributing television and radio programmes (cable and satellite broadcasters). Pursuant to the ‘must carry’ rules, distributors are required to include the programmes of certain broadcasters – typically public service or local broadcasters – in the services they transmit to audiences, which means that they must allocate a certain part of their distribution capacity to those broadcasters in order to safeguard media pluralism in the interest of the public. The ‘must carry’ restriction was also recognised as constitutional by the US Supreme Court in the Turner cases.357 Justice Kennedy, writing the reasoning in the first case, established that the constitutional restriction of cable services is easier than that of broadcasting, because the former is not expressly related to the restriction of content. However, a more in-depth analysis is required to establish whether or not the legal restriction is really necessary and justified. The second Turner decision answered this question in the affirmative. Sunstein considered this decision to be a sort of overture to a new, community-based interpretation of the First Amendment, as a narrow majority of the Court recognised that a strictly content-neutral restriction of access to the media is constitutional, as it serves the public interest.358

357 Turner Broadcasting System v FCC 512 US 622 (1994); Turner Broadcasting System v FCC 520 US 180 (1997). 358 Cass R Sunstein, ‘The First Amendment in Cyberspace’ (1995) 104 Yale Law Journal 1757, 1765–81.

2  The Regulation of the Internet and its Gatekeepers in the Context of the Freedom of Speech

I.  ONLINE CONTENT PROVIDERS AS ‘MEDIA’

A. Introduction

It is clear that the Internet has brought about fundamental changes in the landscape of public communication. While such changes affect the flow of information in general, this volume focuses on communication pertaining to the exchange of opinions, as opposed to webstores, corporate and private websites, etc. When seeking answers to legally relevant questions in the context of the Internet, it is necessary to revise, wholly or at least partly, the fundamental concepts of the freedom of speech and the press, including the theoretical justification, scope and limits of the freedom of speech. Communication on the Internet even blurs the notion of speech (ie the meaning of ‘speech’ for legal purposes). One important question is whether public communication through the Internet is subject to the same rules as offline speech, or whether the Internet is a radically new and distinctive phenomenon, one that is either incapable of being regulated or needs to be regulated more narrowly and in a different manner from traditional media or general instances of freedom of speech. A close symbiosis has inevitably emerged between the earlier rules of freedom of speech and the Internet, ever since the appearance of the various kinds of online communication. The First Amendment in the United States (US) has played an important role in shaping the legal environment of the Internet. According to Anupam Chander and Uyên Lê, early attempts to regulate the Internet were held to be unconstitutional, as the courts interpreted the freedom of the Internet by reference to the freedom of speech;1 unsurprisingly, this development was facilitated by technology companies.2 Around the turn of the millennium, US courts and legislators shaped the legal landscape in a deliberate effort to promote the development of Internet companies.3 The first Internet-related judgment passed by the US Supreme Court, in Reno v ACLU,4 was very much like an ode to a free and

1 Reno v ACLU 521 US 844 (1997); Zeran v America Online, Inc 129 F3d 327 (4th Cir 1997).
2 Anupam Chander and Uyên P Lê, ‘Free Speech’ (2015) 100 Iowa Law Review 501.
3 Anupam Chander, ‘How Law Made Silicon Valley’ (2014) 63 Emory Law Journal 641.
4 Reno v ACLU (n 1).

66  Regulation of the Internet and its Gatekeepers lawless Internet: Judge Stevens emphasised the freedom of the Internet from any kind of central control,5 where the natural scarcity of broadcasting frequencies affecting the earlier media landscape was not applicable and everyone could present their opinion freely.6 As such, the Internet was considered a suitable means of achieving the main objectives of freedom of speech. Consequently, the First Amendment developed naturally into a point of reference in the context of the Internet, and it changed from a legal regulation into an industrial policy.7 However, the above considerations apply to the US only. In Europe, the legal approach toward freedom of expression and the press allows for more extensive regulation of communication, including the Internet. Nonetheless, both the online pioneers and the now dominant global corporations are, almost without exception, based in the US, meaning that the law of the First Amendment has emerged as the universal law as a result of the increasing role played by such technology companies (search engines, platform providers, social media, webstores, etc). The consequences of this trend include the fact that such companies cannot be sued in other countries without considerable difficulties, and the US approach to the protection of freedom of speech is a fundamental component of the identity of such companies. Indeed, the shelter afforded by First Amendment case law has played an essential role in facilitating the rise of such companies. It hardly comes as a surprise that such technology companies invoke the First Amendment if needed, even when acting far away from the US. Thus it is not only the technological basis and design features of online public communication that have their origins in the US, but also the underlying legal framework and values that underpin it, even if attempts are made by European countries and the European Union (EU) to establish some sort of regulation in this domain. It seems that there has been a kind of reversal of roles in this area. In the early days, the legal framework of the freedom of speech shaped the regulation of the Internet, while today the limits of the freedom of speech are being influenced by the Internet, through the emergence of all kinds of online communication.8 Any attempt to regulate the Internet is seen as a limitation of the freedom of speech and, as such, may not be accepted as part of a constitutional order, even if it is not directed towards speech.9 In other words, the law facilitated the emergence of technology companies, and now the services of technology companies assist in the preservation of the law (ie the First Amendment). However, if freedom of speech and the freedom of the press are seen as specific rights that also perform some kind of public and social functions10 (such as the removal of harmful, dangerous or violating content, or the adequate provision of news), we need

5 ibid 853. 6 ibid 870. 7 Chander and Lê (n 2) 507. 8 ibid 505. 9 ibid 507. 10 See ch 1, section IV.B, and Sonja R West, ‘Press Exceptionalism’ (2014) 127 Harvard Law Review 2434. Sonja R West, ‘Awakening the Press Clause’ (2011) 58 UCLA Law Review 1025; Patrick J Charles and Kevin Francis O’Neill, ‘Saving the Press Clause from Ruin: The Customary Origins of a Free Press as Interface to the Present and Future’ [2012] Utah Law Review 1691, 1772; Thomas Gibbons, ‘Free Speech, Communication and the State’ in Merris Amos, Jackie Harrison and Lorna Woods (eds), Freedom of Expression and the Media (Leiden & Boston, Nijhoff, 2012).

Online Content Providers as ‘Media’  67 to consider the question of the extent to which Internet services must participate in the performance of such functions and whether they may be subject to legal obligations to such an end, or if they may be afforded additional rights. B.  Frontier, Architecture, Feudalism and Other Metaphors In the beginning (ie in the 1990s), two main ideologies appeared in relation to the possible regulation of the Internet. They are known as cyber-idealism and cyber-realism, or the Internet-optimist and the Internet-pessimist ideologies.11 Cyber-idealists believed that the Internet should (as a philosophical stance) and could (due to technology constraints) not be regulated. A most vivid expression of this position was provided by John Perry Barlow in his pamphlet Declaration of the Independence of Cyberspace (in 1996), in which Barlow warned governments against regulating the Internet.12 According to cyberidealists, the Internet (ie cyberspace) is a ‘place’ or ‘space’, which is different and separate from physical space, and which may not be subject to regulation by any government, as governments have neither the moral basis upon which to establish any kind of regulation nor the actual force or power to enforce it. Furthermore, the Internet was considered to be international, limitless, global and free (in both senses), and it was held that ‘all bits are equal’.13 Lawrence Lessig has also been an influential author in the Internet regulation debate. Instead of legal regulation adopted by a legislator, Lessig focused on the possibility of regulation through the entirety of the Internet itself, as well as by means of the norms created through online services (‘architecture’).14 He considered this architecture to be the ‘code’ that regulates and limits online communication. The code is made up of the entirety of rules and software required for running the Internet and accessing online services; it is created and, in turn, can be changed by people, just like the law. Among other things, the code enables service providers to record user data, affords users various possibilities to speak anonymously, and makes it possible to set up new websites and to allocate domain names in a reasonable manner. Lessig argues that legal regulation is not particularly needed in addition to the code (bearing in mind that extensive protection is already granted to freedom of speech by the First Amendment), so the real questions are by whom and on the basis of what values and principles the code is created.15 Cyber-realists, on the other hand, challenged the ideas of cyber-idealists at a fundamental level. Whatever a person may do online, he or she is still physically present in the offline world and is subject to the laws applicable at a given location. Governments are

11 Jan Oster, European and International Media Law (Cambridge, Cambridge University Press, 2017) 206; Kate Klonick, ‘The New Governors: The People, Rules, and Processes Governing Online Speech’ (2018) 131 Harvard Law Review 1599, 1613–15. 12 John Perry Barlow, ‘A Declaration of the Independence of Cyberspace’ 8 February 1996, see www.eff.org/ cyberspace-independence. 13 Oster (n 11) 208. 14 Lawrence Lessig, Code 2.0 (New York, Basic Books, 2006). 15 ibid 103. See also Jack Balkin, ‘Digital Speech and Democratic Culture: A Theory of Freedom of Expression for the Information Society’ (2004) 79 New York University Law Review 1, 54.

68  Regulation of the Internet and its Gatekeepers capable of establishing rules concerning the online activities of individuals, and are most suited to this task. The Internet can be regulated, even if there are and always will be individuals who can circumvent the rules through advanced computer skills and competences, as most users lack such skills and competences. Only the necessity for and extent of such regulation can be subject to dispute: should a government build a Great Wall as a separator between its citizens and the information available, as China has done, or should it follow the path of Western governments and carefully refine its constitutional standards for limiting freedom of speech? In the 1990s, the Internet was frequently compared to the American frontier. When settlers began to move westward from the East coast in the nineteenth century, they entered a mostly lawless land without a central government, and they worked hard to rebuild civilisation and recreate the rules of coexistence (at least according to the popular myth).16 The pioneers of cyberspace were much like these early settlers – they set out to populate another world, where the rules of the physical world hardly applied, every person was free and equal, the business opportunities were virtually endless, and the role and responsibility of individuals were more significant due to the diminished role of government.17 Naturally, this is a considerably romanticised description of the frontier, which fails to take note of the violence, racism, exploitation, inequalities and other injustices that are experienced when the institutions of law enforcement are weakened (just as violence and anarchy play a dominant role in all Hollywood movies depicting everyday life at the frontier and which seek to reinforce American national myths). While the Internet does indeed offer numerous opportunities to flourish, increase economic prosperity and exercise individuals’ right to free speech in public, such opportunities may be abused in the same way that the increased freedom of individuals was commonly abused at the frontier. In this sense, the metaphor of the frontier can hardly be taken seriously without removing all traces of romanticism first, and furthermore the last two decades have proved that the Internet can be regulated by governments, in much the same way as law and a central government eventually conquered the frontier. According to Alfred Yen, feudalism offers a more appropriate metaphor for the Internet. First, and despite all appearances, there is a ‘central power’ in the virtual world: the various actors of Internet governance coordinate the operation of the system centrally, with a view to ensuring that all computers can connect to the network and all services are available to everybody.18 This kind of coordination applies to communication protocols (Internet Society, ISOC) and the allocation of domain names (Internet Corporation for Assigned Names and Numbers, ICANN).19 In turn, access to the network is provided by the Internet service providers (ISPs). According to the metaphor, ICANN regulates the use of domain names as a (fair and just) lord, and the ISPs allow users to enter the network for a fee; in this sense, it is clear that individuals cannot get by on the Internet on their own and without the assistance of the powers that be (like at the old frontier).

16 Frederick J Turner, The Frontier in American History (New York, Henry Holt, 1920). 17 Alfred C Yen, ‘Western Frontier or Feudal Society?: Metaphors and Perceptions of Cyberspace’ (2002) 17 Berkeley Technology Law Journal 1207, 1211. 18 ibid 1236. 19 See, eg, Ian Brown (ed), Research Handbook on Governance of the Internet (Cheltenham, Edward Elgar, 2013).

Online Content Providers as ‘Media’  69 Of course, the metaphor of a feudal estate is also imperfect, as the governance of the Internet, unlike the historical feudal regimes, does not regulate the online activities of users in detail, and persons who already have access are not limited in their freedom. According to Timothy Garton Ash, the Internet is just like the physical world, and it even has a ZIP code: CA 94305 (the ZIP code of Stanford University in California).20 California is home to ICANN, and the headquarters of Google, Facebook, Twitter, Wikipedia and countless other major technology companies are located within a 40-mile radius of Stanford.21 As discussed in section I.A, the case law of the US First Amendment had a fundamental impact on our understanding of the freedom of the Internet, both in the US and in the rest of the world; in this sense, the norms of global freedom of speech were primarily influenced by a ‘traditional’ legal system (ie a set of laws enforced through traditional state sovereignty).22 Damian Tambini, Danilo Leonardi and Chris Marsden call ‘the ideal of a pristine internet, free from regulation’, a myth, since it cannot be detached from social life, and hence from all the responsibilities, legal and ethical rules, disputes and harms that come with it.23 Des Freedman reminds us that it is a misconception to regard anything related to the Internet as being ‘inherently subordinated’ to technology.24 According to ­Freedman, the Internet is a technological system that serves private and public interests at the same time, and as such, it is not the first in history.25 In line with this approach, it is a totally legitimate proposal that democratic states (and also the representatives of non‘outsourced’ private interests, authoritarian regimes or non-transparent supranational organisations) recognising the public interest, apply regulations to the Internet in order to ensure both greater access for their citizens and accountability.26 As the Internet became a commonly used tool, its fundamental services changed and it turned into an ‘established’ and ‘civilised’ business environment: individual governments (acting independently or at international or, regarding the EU, supranational level) developed the human rights and legal regulations pertaining to the Internet, and such rights and regulation became aligned with the interests of online service providers, meaning that they also provide support for their enforcement.27 However, the ‘Law of the Internet’ is still not consolidated but remains a fragmented set of rules, which may be different – consistent or inconsistent – in each country, and the lack of a consolidated body of rules tends to cause complications.28

20 Timothy Garton Ash, Free Speech: Ten Principles for a Connected World (London, Atlantic Books, 2016) 20–24. 21 ibid 21. 22 ibid 24. 23 Damian Tambini, Danilo Leonardi and Chris Marsden, Codifying Cyberspace: Communications Selfregulations in the Age of Internet Convergence (London & New York, Routledge, 2007) 294. 24 Des Freedman, ‘The Internet of Rules: Critical Approaches to Online Regulation and Governance’ in James Curran, Natalie Fenton and Des Freedman (eds), Misunderstanding the Internet, 2nd edn (London & New York, Routledge, 2016) 117, 139–40. 25 ibid 139. 26 ibid 120. 27 Andrew D Murray, ‘Mapping the Rule of Law for the Internet’ in David Mangan and Lorna E Gillies (eds), The Legal Challenges of Social Media (Cheltenham, Edward Elgar, 2017) 13, 18–19. 28 ibid 30.

C.  Initial Hopes and the Reality of the Internet

A widespread view holds that the Internet is capable of renewing the structure of democratic societies and of helping societies in authoritarian states to bring about grassroots democratisation. Russell Weaver cites many examples of both of these processes (most typically, see the supposed connection between the ‘Arab Spring’ and Twitter use).29 At the same time, James Curran emphasises that, in those societies embarking on the path to democratisation, the Internet or the social networking websites made available via the Internet did not actually generate social change but instead only amplified and boosted already existing processes.30 Without doubt, the Internet is an extremely effective means for activists to connect with each other, exchange opinions and organise different events. However, this increased ability to communicate should not be confused with the actual impact of such communication: the Internet has not reduced global inequalities, has not eliminated the linguistic and cultural barriers between people, and has not prevented value, faith or interest-based tensions and conflicts.31

According to William Dutton, the Internet would become the ‘Fifth Estate’32 that connects people to each other so that they can gain more control over the government, politics and other sectors than ever before. Similarly, the growing use of the Internet and related digital technologies is creating a space for networking individuals in ways that enable a new source of accountability in government, politics and other sectors.33 So far, at least, the contribution that Internet usage has made to the development of democracy in Western countries is unclear.34 It seems that the political relations of the real world have more or less been reproduced on the Internet. The strongest voices of the Internet are actually the duplicated voices of the mainstream media in the offline world, while independent opinion leaders, if any, are forced to stay in the background.35 Jerome Barron’s remark from the 1960s is still valid: ‘The test of a community’s opportunities for free expression rests not so much in an abundance of alternative media but

29 Russell L Weaver, From Gutenberg to the Internet: Free Speech, Advancing Technology, and the Implications for Democracy (Durham, NC, Carolina Academic Press, 2013) 73–142. 30 James Curran, ‘The Internet of History: Rethinking the Internet’s Past’ in Curran, Fenton and Freedman (eds) (n 24) 64–70. 31 James Curran, ‘The Internet of Dreams: Reinterpreting the Internet’ in Curran, Fenton and Freedman (eds) (n 24) 7–10. 32 According to the traditional concept, the media is the ‘Fourth Estate’. Thomas Carlyle said that it was Edmund Burke who had coined this term, which later on made such a spectacular ‘career’ in the world of ideas (and was thoroughly distorted in the process), although it does not appear anywhere in Burke’s writings: ‘Burke said that there were Three Estates in Parliament; but, in the Reporters’ Gallery yonder, there sat a Fourth Estate, more important far than they all … Literature is our Parliament too. Printing, which comes necessarily out of Writing, I say often, is equivalent to Democracy: invent Writing, Democracy is inevitable … Whoever can speak, speaking now to the whole nation, becomes a power, a branch of government, with inalienable weight in law-making, in all acts of authority.’ See Thomas Carlyle, On Heroes, Hero-Worship, and the Heroic in History (Boston, MA, Ginn, 1901) 188–89. 33 William H Dutton, ‘The Fifth Estate Emerging through the Network of Networks’ (2009) 27 Prometheus 1. 34 Jacob Rowbottom, Democracy Distorted: Wealth, Influence and Democratic Politics (Cambridge, Cambridge University Press, 2010) 243. 35 Jacob Rowbottom, ‘Media Freedom and Political Debate in the Digital Era’ (2006) 69 The Modern Law Review 489.

rather in an abundance of opportunities to secure expression in media with the largest impact.’36 What is more, states wishing to act against freedom of speech are by no means powerless against the Internet. States like Iran and China ‘successfully protect’ their political structure. Furthermore, the Internet provides them with a helping hand in suppressing the private sector, and facilitates constant surveillance of their citizens. As such, the claim that the Internet cannot possibly harm democratisation and the movements fighting for it is simply false.37

Early expectations regarding the Internet were that the new medium could be the catalyst of a democratisation process of a different nature, meaning that it could have shaken the excessive concentration of ownership enjoyed by certain entities in the mainstream media. It was hoped that since the Internet can be used to express opinions freely and without considerable expense, and since the capacity is available for storing a theoretically unlimited amount of content, the Internet could therefore accommodate many more voices and more diverse opinions, and that the ‘barrier to entry’ (meaning the serious costs of the printed press, radio and television) that so far had crippled the expression of independent opinions would actually be eliminated. All this would benefit the audience too, as they could choose the voice they wanted from a wide range of voices. No external player could restrict access, and access would involve no significant costs for the audience either. Thus, public life would become more diverse and democratic,38 and the hope of a more democratic culture would be realised.39

In reality, however, the economic differences did not vanish on the Internet at all. The costs of operating successful content services (ie effective speech) are also huge, and hence market scarcity remained – in a different way, but with similar results.40 Everyone else, the many independent bloggers and opinion leaders, is mostly invisible to the general public. Their content is sought after and read by only a few. (Even if some of the ‘Internet stars’ can reach very large audiences, which creates a false impression that on the Internet anyone can ‘hit it big’.) Not even the Internet is free from corporate dominance, market concentration and gatekeepers controlling content and economics-based exclusion.41 Matthew Hindman suggests that on the Internet, market forces drive the vast majority of the traffic to a very small number of websites, and so the cost of building and keeping a large audience remains high.42 Eli Noam goes so far as to say that the ‘fundamental economic characteristics of the internet’ suggest that ‘when it comes to media pluralism, the internet is not the solution but it is actually becoming the problem’.43 We may call

36 Jerome A Barron, ‘Access to the Press: A New First Amendment Right’ (1967) 80 Harvard Law Review 1641, 1653. 37 See Evgeny Morozov, The Net Delusion. The Dark Side of Internet Freedom (New York, Public Affairs, 2011). 38 Eugene Volokh, ‘Cheap Speech and What It Will Do’ (1995) 104 Yale Law Journal 1805. 39 Balkin (n 15). 40 Andrew T Kenyon, ‘Assuming Free Speech’ (2014) 77 The Modern Law Review 379, 403. 41 James Curran, Natalie Fenton and Des Freedman, ‘The Internet We Want’ in Curran, Fenton and Freedman (eds) (n 24) 204. 42 Matthew Hindman, The Internet Trap. How the Digital Economy Builds Monopolies and Undermines Democracy (Princeton, NJ, Princeton University Press, 2018). 43 Philip M Napoli and Kari Karppinen, ‘Translating Diversity to Internet Governance’ First Monday (2 December 2013) at http://firstmonday.org/ojs/index.php/fm/article/view/4307/3799.

it an exaggeration, but there is no doubt that the Internet failed to bring about a more balanced playing field for small and large companies alike.44 It should be noted, however, that the Internet has actually enabled more individuals to speak and be heard, as well as increasing the scope for a democratic public sphere. The rapid commercialisation of the Web also contributed to the swift spread and popularity of the Internet and the continuous development of related technology. Yet it turned out to be less capable of fulfilling the initial hopes invested in it, as ‘profit conquers principles’.45

D.  Old Problems in a New Context

People violate the same norms online and offline, but in different ways. Online communication has made it easier to commit various prohibited, illegal or outright criminal acts, thereby increasing the scale of acts such as the violation of personality rights, paedophilia, hate speech, promotion of drug use and the support of terrorism. This is particularly true on the ‘Dark Net’, a segment of the Internet that is not accessible to everyday users.46 The problems themselves are not new; these are old ones reappearing in a new context, some of which nonetheless require new answers from the law.

Hate speech and the propagation and support of terrorism are often mentioned when the negative aspects of online communication are discussed. While numerous new pieces of legislation have been adopted to prevent the support of terrorism in the past few years,47 the new laws do not actually affect the established limits of freedom of speech, which already include a general prohibition on calling for acts of terror, and other similar acts usually committed through ‘speech’. From a legal point of view, hate speech is regulated through various penal codes and other laws, which are not specifically tailored to the online environment.48 Furthermore, Internet providers also attempt to take action against hateful or pro-terrorism speech using their own means, for example by deleting such material when it is uploaded by users of their services, either on their own initiative or under pressure from a government or regional organisation,49 regardless of whether or not the deleted speech constituted a legally prohibited communication.

In the context of the Internet, the most visible free speech issue is clearly pornography, most of which is legal but which is still harmful and dangerous for children, meaning that their access to this kind of content should be restricted.50 While the issue of access is fairly easy to solve by technical means in the case of television (by introducing watershed periods), such a straightforward solution is not available for the Internet. Taking action

44 Curran (n 31) 5–6. 45 Sandor Vegh, ‘Profit over Principles: The Commercialization of the Democratic Potentials of the Internet’ in Katharine Sarikakis and Daya K Thussu (eds), Ideologies of the Internet (Cresskill, NJ, Hampton, 2006) 63. 46 Jamie Bartlett, The Dark Net: Inside the Digital Underworld (New York, Random House, 2014). 47 See, eg, Clive Walker, ‘The War of Words with Terrorism: An Assessment of Three Approaches to Pursue and Prevent’ (2017) 22 Journal of Conflict & Security Law 523. 48 See ch 1, section III.E. 49 ‘European Commission and IT Companies announce Code of Conduct on Illegal Online Hate Speech’, Press release, Brussels, 31 May 2016, at europa.eu/rapid/press-release_IP-16-1937_en.htm. 50 See ch 1, section IV.G.i.

against pornography is thus a fundamental issue for the regulation of the Internet (just like for traditional media). A new phenomenon, moreover, is the appearance of non-professional pornography published by users themselves: it is possible to publish such content even without the consent of one or more of the persons shown in the footage (eg a former partner or spouse).51 This phenomenon is known as ‘revenge porn’, and it raises questions as to the liability of websites publishing such content, as well as the legal treatment of the cruel violation of the victim’s privacy and dignity. The former is subject to the regulation of intermediary service providers, while the latter is more and more commonly regulated through the introduction of new criminal provisions.52

The Internet has also brought about a surge in the violation of personality rights by enabling general access to the public sphere and allowing users to remain anonymous. The restructuring of the public sphere, the changes to the language used for communication, the absence of the traditional editorial process prior to the publication of content, and the voluntary publication of private images and data in bulk also change what is considered harmful content, and where the limit of the legal protection of privacy really lies.53 Nonetheless, defamation and the publication of private information are subject to the same rules and doctrines as before, although such rules and doctrines are adapted to the new environment.

It is clear, then, that the same debating points concerning freedom of speech are also present on the Internet, while the rules of the offline world are undergoing considerable erosion: the scope of the constitutional protection afforded to speech is shifting, and law enforcement has become more difficult, since avoiding jurisdiction has become easier, as has remaining anonymous. With regard to the Internet, the scope of protection afforded to speech may be extended (which is good news), but the voluntary or legally motivated decisions taken by service providers on content to be tolerated or removed from their services usually exceed earlier doctrines and have resulted in the restriction of freedom of speech (not to mention other factors with an impact on the enforcement of freedom of speech, such as anomalies concerning the processing of personal data,54 the issue of targeted or unsolicited advertising,55 advertising as part of editorial content56 or the increasing technical ability of employers to monitor the private communication

51 Furthermore, fake porn videos can be created by manipulating a video recording by pasting a face onto a body. See Nick Statt, ‘Fake Celebrity Porn is Blowing up on Reddit, Thanks to Artificial Intelligence’ The Verge (24 January 2018) at https://www.theverge.com/2018/1/24/16929148/fake-celebrity-porn-ai-deepfake-faceswapping-artificial-intelligence-reddit. 52 Chris Morris, ‘Revenge Porn Law Could Make It A Federal Crime to Post Explicit Photos Without Permission’ Fortune, 28 November 2017, fortune.com/2017/11/28/revenge-porn-law/; US Revenge Porn Laws: 50 State Guide. Kelly/Warner Law PLLC, kellywarnerlaw.com/revenge-porn-laws-50-state-guide/; UK’s Criminal Justice and Courts Act 2015, s 33.
53 Lyrissa Barnett Lidsky and RonNell Andersen Jones, ‘Of Reasonable Readers and Unreasonable Speakers: Libel Law in a Networked World’ (2016) 23 Virginia Journal of Social Policy & Law 155; Erin C Carroll, ‘Making News: Balancing Newsworthiness and Privacy in the Age of Algorithms’ (2017) 106 Georgetown Law Journal 69. 54 Ash (n 20) 283–97. 55 Theo Miller, ‘Facebook Proves Targeted Ads Are Worth Your Privacy’ Forbes, 7 August 2017, https://www.forbes.com/sites/theodorecasey/2017/08/09/facebook-proves-targeted-ads-are-worth-yourprivacy/#3b2db57a725c. 56 International Consumer Protection and Enforcement Network, Guidelines for Digital Influencers, June 2016, at https://www.icpen.org/sites/default/files/2017-06/ICPEN-ORE-Guidelines%20for%20Digital%20 Influencers-JUN2016.pdf; ‘CSGO Lotto Owners Settle FTC’s First-Ever Complaint Against Individual Social

of their employees57). While the monopolistic market power of large online service providers – against which the EU is trying to take legal action58 – may refute the initial hopes we had for the Internet, it is certainly not new to the world of media: Tim Wu offers a convincing description of the historical ‘cycle’ in which the market of all new media is closed and centralised over time; it seems that the Internet is subject to that same cycle.59

E.  New Problems on the Horizon

i.  A Decline of Professional Media?

The Internet expanded the possibilities of public communication considerably, allowing virtually anybody to publish his or her opinion without financial cost. The various online forums, blogs, comment streams and social media sites are full of opinions on important matters (and trivial ones). The Internet has started to dismantle the obstacles standing between professional journalists and independent opinion leaders, and has contributed to the democratisation of the public sphere, at least in the sense that it has made possible the emergence of more voices in the public space. In turn, this phenomenon has had an impact on the scope of freedom of speech.60

The Internet thus also has an impact on the future of professional journalism. First, the Internet news services and social networking websites have greatly transformed earlier reader/user habits and turned a considerable section of the public away from professional media products, thereby undermining the economic foundations of the latter.61 Second, the news aggregator sites and social networking websites (also) profit from the content produced by professional journalists, without any real effort on their part (ie content production), thereby disrupting the earlier business models.62 Furthermore, changes to the habits of users do not necessarily expand the number of people contributing to public debates (or the number of opinions expressed): blogs that can be considered independent forums usually do not attract large crowds,63 and the most powerful and popular websites

Media Influencers’, Press release (7 September 2017) at https://www.ftc.gov/news-events/press-releases/2017/09/ csgo-lotto-owners-settle-ftcs-first-ever-complaint-against. 57 In the US, see Bland v Roberts no 12-1671 (Fourth Circuit), for a UK analysis: David Mangan, ‘Social Media in the Workplace’ in Mangan and Gillies (eds) (n 27). 58 Antitrust: Commission Fines Google €2.42 Billion for Abusing Dominance as Search Engine by Giving Illegal Advantage to Own Comparison Shopping Service. Press release, 27 June 2017, europa.eu/rapid/pressrelease_IP-17-1784_en.htm. 59 Tim Wu, The Master Switch. The Rise and Fall of Information Empires (New York, Alfred A Knopf, 2010) 289. 60 Ian Cram, Citizens Journalists. Newer Media, Republican Moments and the Constitution (Cheltenham, Edward Elgar, 2015). 61 Robin Foster, News Plurality in a Digital World (Oxford, Reuters Institute for the Study of Journalism and University of Oxford, 2012) 16–24. 62 Ben Rossi, ‘The Reinvention of Publishing: Media Firms Diversify to Survive’ Guardian (30 January 2017) at https://www.theguardian.com/media-network/2017/jan/30/reinvention-publishing-media-firms-diversifysurvive. 63 Curran (n 31) 23–25; Matthew Hindman, The Myth of Digital Democracy (Princeton, NJ, Princeton University Press, 2008).

are mostly the online versions of dominant offline news outlets that have also managed to exploit their economic power on the online markets.64 On the other hand, less fortunate or less capitalised media have to struggle to survive. The world of news services is thus changing, but not necessarily in the way one might have hoped. The biggest loser in the market restructuring is the primary ‘home’ of serious journalism, the printed press. Though the voices replacing the printed press are indeed numerous, their power is negligible and their function is not the same as that of professional journalism. The spare-time breed of writers or (on the contrary) elite opinion leaders disguised as ‘independent bloggers’ are incapable of investigative journalism due to their obvious financial constraints, and the mainstream media products adapted to the Internet do not especially contribute to the growth of the diversity of content and opinions. Some authors already argue that the Internet will lead to the demise of professional media.65

ii.  Social Fragmentation and Polarisation

According to some theories, the Internet has a negative impact on various social groups and their members. Cass Sunstein warned us of the dangers of fragmentation almost two decades ago.66 First, every user can decide individually which content to read, view or follow. This might prompt users to prefer forums that reinforce and resonate with their existing opinions and which typically provide positive feedback, without having to face others holding an opposing opinion. Second, social media amplify this phenomenon by delivering a personalised news stream to each user, which consists of content originating from friends with similar opinions and media outlets preferred by the user. This is a Daily Me, a form of news source that is always in agreement with the reader, thereby intensifying his or her pre-existing liberal or conservative views and opinions.67 These customised services create the ‘filter bubble effect’, that is, they trap users in a circle of content that is identical to or consistent with their own views and mostly hide other content from them.68 On the other hand, traditional media compile content on their own without any input from the reader/viewer, making it inevitable that members of the audience will be confronted with various points of view. The benefits of this approach include the emergence of a more complex world view and the reinforcement of critical thinking.69 In contrast, the dominance of social media and search engines deepens the gap between individuals with conflicting opinions, thereby weakening social cohesion and strengthening extremism (polarisation).70 Other researchers seek to disprove Sunstein’s theory and

64 Curran (n 31) 23. 65 Robert W McChesney and John Nichols, The Death and Life of American Journalism: The Media Revolution that Will Begin the World Again (New York, Nation Books, 2009); Andrew Keen, The Cult of the Amateur: How Today’s Internet is Killing Our Culture (New York, Currency, 2007). 66 Cass R Sunstein, Republic.com (Princeton, NJ, Princeton University Press, 2001); Cass R Sunstein, Republic.com 2.0 (Princeton, NJ, Princeton University Press, 2007). 67 Cass R Sunstein, #Republic. Divided Democracy in the Age of Social Media (Princeton, NJ, Princeton University Press, 2017) ch 1. 68 In the context of search engines, see Eli Pariser, The Filter Bubble: What The Internet Is Hiding From You (London, Penguin, 2012).
69 Sunstein (n 67) 140–48. 70 ibid ch 3.

argue that ‘omnivore’ Internet users are the rule, while calling for the previous world of media to be presented in a more critical manner.71

iii.  The Issue of Applicable Law

Obviously, the matters of jurisdiction and the enforcement of foreign judgments raise problems in the context of the Internet. The mere fact that an illegal act was committed on the Internet does not mean that governments do not have any means of enforcing the law. In Europe, jurisdiction may be established with regard to the nature of the service involved and of the alleged violation. Under the AVMS Directive, the country of establishment of a media service has jurisdiction over legal disputes that may arise concerning a matter regulated by the Directive.72 Jurisdiction is established in a similar manner concerning information society services.73 However, there might be exceptions (eg the exhaustion of rights concerning media services) when the jurisdiction of the receiving country can be established, or when a national authority that cannot proceed on its own initiates official proceedings at its counterpart in the country of establishment.74

Regulation (EU) no 1215/2012 on jurisdiction and the recognition and enforcement of judgments in civil and commercial matters (Brussels Ia)75 lays down provisions concerning the establishment of jurisdiction in civil and commercial matters, and also applies to proceedings launched with regard to a violation of a personality right by the media. According to Article 7(2) of the Regulation, a lawsuit may be filed ‘in matters relating to tort, delict or quasi-delict, in the courts for the place where the harmful event occurred or may occur’. In practice, any place where violating content can be accessed constitutes a place where the plaintiff may suffer damages,76 meaning that an action for damages may be filed in the respective country. A similar judgment was passed by an Australian court in the Dow Jones v Gutnick case,77 where the court held that a harmful article published on an American website may serve as grounds for establishing the violation of personality rights, if the content is accessible from another country and the reputation of the party concerned is diminished as a result.78 In the EU case law, an action for all damages may be filed in the Member State where the ‘centre of interest’ of the conflict is located.79

71 Wu, Master Switch (2010) 214–15; Matthew Gentzkow and Jesse M Shapiro, Ideological Segregation Online and Offline. Chicago Booth Research Paper no 10-19 (2010). 72 Directive 2010/13/EU on the coordination of certain provisions laid down by law, regulation or administrative action in Member States concerning the provision of audiovisual media services (AVMS Directive), Art 2. 73 Directive 2000/31/EC of the European Parliament and of the Council of 8 June 2000 on certain legal aspects of information society services, in particular electronic commerce, in the Internal Market (Directive on electronic commerce) [‘E-Commerce Directive’], Art 3. 74 AVMS Directive, Arts 3 and 4; E-Commerce Directive, Art 3(4). 75 Regulation (EU) 1215/2012 of the European Parliament and of the Council of 12 December 2012 on jurisdiction and the recognition and enforcement of judgments in civil and commercial matters. 76 Judgment of 7 March 1995 in Case C-68/93 Fiona Shevill, Ixora Trading, Inc, Chequepoint SARL and Chequepoint International Ltd v Presse Alliance SA [1995] ECR I–00415. 77 Dow Jones and Co, Inc v Gutnick [2002] HCA 56. 78 ibid para 48. 79 Joined Cases C-509/09 and C-161/10 eDate Advertising GmbH v X and Olivier Martinez and Robert Martinez v MGN Limited [2011] ECR I-10269.

If data protection-related rules are violated, the proceeding authority is identified by applying the provisions laid down in the EU Data Protection Regulation.80 It is unclear whether a decision passed by a state authority or court obliges the service provider to eliminate the problem from its service altogether (and not only with regard to the page downloads originating from the given state). For this reason, and in relation to the right to be forgotten, some states exceed their own strictly interpreted jurisdiction and impose a universal obligation on the infringing entity.81

Most large US Internet companies also have European headquarters, meaning that EU law and, in turn, the national law of individual Member States can be applied without any obstacle (at least in theory). If the defendant is not established in an EU Member State, the EU rules may not be applied. This means that, in theory, European countries may apply their own national rules concerning services originating from a third country, such as the US.82 The issue of jurisdiction in and of itself does not raise any difficulty for a European government concerning the blocking of Facebook or Google, but doing so may be prevented by the constitutional protection of the freedom of speech and the political-economic considerations of governments.

F.  Regulatory Analogies and Starting Points

i.  On the Freedom of the Internet

Regarding the Internet as a technological novelty is a double-edged sword: it can provide arguments both for enhancing the protection of freedom of speech and for adopting new and specific regulations.83 The statement of reasons for the judgment passed in Reno v ACLU84 concluded that enhanced protection is needed due to the technical features of the Internet. However, recognising the harm that may be caused by the Internet, opinions to the contrary have also emerged in Europe over the last two decades, and both general limitations on freedom of speech and Internet-specific regulations have been adopted.85

In the context of regulating the Internet, the method and extent of limiting the freedom of a speaker is no longer the fundamental question. The fundamental question is how regulation relates to the infrastructure of the Internet. The devices needed to use the Internet, the network itself and the services available on the network are privately owned, meaning that a government needs to adopt mandatory rules for them if it intends to intervene in

80 Regulation (EU) 2016/679, General Data Protection Regulation (GDPR), Arts 55 and 56. 81 Alex Hern, ‘ECJ to Rule on whether “Right to be Forgotten” can Stretch beyond EU’ Guardian (20 July 2017) at https://www.theguardian.com/technology/2017/jul/20/ecj-ruling-google-right-to-be-forgotten-beyondeu-france-data-removed. See also ch 4, section III.E on the ‘right to be forgotten’. 82 Ligue contre le racisme et l’antisémitisme et Union des étudiants juifs de France c. Yahoo!, Inc et Société Yahoo! France, Tribunal de Grande Instance de Paris, Ordonnance de référé, 22 May 2000. 83 András Sajó and Clare Ryan, ‘Judicial Reasoning and New Technologies: Framing, Newness, Fundamental Rights and the Internet’ in Oreste Pollicino and Graziella Romeo (eds), The Internet and Constitutional Law (London, Routledge, 2016) 3, 7–11. 84 Reno v ACLU (n 1). 85 AVMS Directive (n 72), E-Commerce Directive (n 73).

their operation. This also applies to matters relating to freedom of speech. Any regulation of the infrastructure is considered as a limitation of the freedom of speech if it affects the exercise of such rights,86 even if it does not affect the limits of what can and cannot be said in the way that traditional penal and civil provisions do. According to Balkin, ‘old-school’ regulation (which punishes infringing speech) is about to be replaced by ‘new-school’ regulation, which regulates the possible means and necessary accessories (search engines, payment systems and advertisers) of online speech.87 This trend may result in the emergence of collateral censorship,88 where the limitation of freedom of speech is effected not directly by government regulation but by the operator of an infrastructure component – on the basis of government regulation – in order to avoid liability (eg in cases where, without taking any action, the content provider,89 platform provider90 or social media platform91 would respectively be held liable for an anonymous comment, copyright infringement or unlawful content).

In light of these developments, we should ask whether the freedom of the Internet means the freedom of Internet businesses to operate (the industry providing the infrastructure and platforms suitable for exercising the freedom of speech), or the freedom of individuals to use such technology. Clearly, this question is not entirely new but represents a classic fundamental issue concerning freedom of speech and of the press: What does freedom of the press mean? Does it refer to the freedom to use the printing press (or radio or television), or to enhanced protection afforded to the press institutions (editorial offices, publishers, media service providers)? For the purpose of fundamental rights, is the press an industry or a technology?92 According to Eugene Volokh, US constitutional case law identifies the press as a technology, meaning that any person speaking through the media is entitled to the comprehensive protection afforded by the freedom of speech, but not to anything else or anything more; as an industry, the press would not be eligible for any extra protection.93 On the other hand, this approach may not lead to the conclusion that a person has a personal right to use a printing press or appear in the media. Similarly, if the Internet is considered primarily as a technology, the speech of interested companies may be restricted according to the general standards of freedom of speech: note that the scope of permitted restrictions and the applicable standards differ considerably between Europe and the US, meaning that this approach may have fundamentally different consequences in the states concerned.

86 Jack M Balkin, ‘Old-School/New-School Speech Regulations’ (2014) 127 Harvard Law Review 2296, 2297. 87 ibid. 88 Michael I Meyerson, ‘Authors, Editors, and Uncommon Carriers: Identifying the “Speaker” Within the New Media’ (1995) 71 Notre Dame Law Review 79. See also Balkin (n 86) 2298, and András Sajó’s dissenting opinion to the judgment in Delfi v Estonia, App no 64569/09, judgment of 15 June 2015 [GC]. 89 Delfi v Estonia (n 88). 90 Proposal for a Directive of the European Parliament and of the Council on copyright in the Digital Single Market, COM(2016) 593, Art 13. 91 CG v Facebook Ireland Ltd [2016] NICA 54. 92 Eugene Volokh, ‘Freedom for the Press as an Industry, or for the Press as a Technology? From the Framing to Today’ (2012) 160 University of Pennsylvania Law Review 459; Edward S Lee, ‘Freedom of the Press 2.0’ (2008) 42 Georgia Law Review 309. 93 Volokh (n 92) 538–40.

ii.  Internet Services as the Subjects of the Freedom of Speech and the Press

In the context of legal disputes pertaining to the Internet, it seems that certain services rely on the right to free speech, even if they do not publish any kind of opinion at all. It follows from the protection of the Internet as a technology that service providers involved in the communication process may attempt to invoke freedom of speech in their defence if the limitation of their activities has an impact on this process (meaning that the limitation jeopardises the future publication of opinions). This phenomenon is not a novelty either: with regard to the activities of cable companies, the European Court of Human Rights (ECtHR) held in Autronic AG v Switzerland94 that ‘Article 10 applies not only to the content of information but also to the means of transmission or reception since any restriction imposed on the means necessarily interferes with the right to receive and impart information’.95 This suggests that the cable companies would be afforded protection with regard to the freedom of expression of others, but in reality the case was necessarily related to the freedom of the cable company concerned. Internet services are in a similar situation: while Delfi, an Estonian online news portal, argued that a reader comment published anonymously was not its own edited content, it still claimed before the ECtHR that its freedom of expression had been violated.96 This is a contradiction that naturally follows from the provisions of the Convention and the characteristics of the national systems of basic rights protection, and it does not change the fact that it is practical to subject the activities of ‘opinionless’ service providers to the scope of the freedom of expression, and that the diversity of Internet services makes it necessary to do so on a mass scale. However, it also follows that service providers need to share the limitations, and not only the benefits, of such a freedom.

iii.  Applying the Limits of Offline Speech in an Online Environment

While the ‘romantic’ concept of the freedom of the Internet suggested that the rules of the offline world could not be applied to the Internet, it was not long before legal regulation disproved this idea. According to the case law of some states and the ECtHR, limitations on offline speech can be applied as a general rule in an online environment just as they are offline. The validity of rules developed by traditional media and the scope of relevant laws (civil and penal codes primarily) extend to acts covered therein and performed using the Internet, such as defamation, insult, violation of privacy, hate speech, etc.97 As already discussed, this may have consequences for both the speaker and a service provider involved in the publication of infringing content: while liability for the infringing content primarily rests with the person who published the content, the

94 Autronic AG v Switzerland, App no 15/1989/175/231, judgment of 24 April 1990. 95 ibid para 47. 96 Delfi v Estonia (n 88). 97 Regarding violations committed on the Internet, see, eg, the limitation of obscene opinions: Perrin v the United Kingdom, App no 5446/03, decision of 18 October 2005; violation of reputation: Times Newspapers Ltd v the United Kingdom (Nos 1 and 2), App nos 3002/03, 23676/03, judgment of 10 March 2009; Mosley v the United Kingdom, App no 48009/08, judgment of 10 May 2011; protection of copyright: Ashby Donald and Others v France, App no 36769/08, judgment of 10 January 2013.

secondary liability of a service provider may be established occasionally – mostly if it fails to remove the violating content98 – meaning that such limitations on speech will also have an inevitable impact on the activities of other entities.

iv.  Media, Platform and General Regulations with an Impact on the Internet

So far, the EU has made attempts to regulate the Internet in two regards: first, to regulate individual Internet services; and, second, to adopt general regulations concerning Internet services. Individual Member States may also adopt separate legislation that goes beyond the matters harmonised by EU law, but such regulation may not impede the principle of the free flow of services within the EU without justification.

The amendment of the AVMS Directive in 2007 applied to media services available online (among other means),99 while the material scope of the Directive was expanded to video-sharing platforms (a notion that incorporates social media platforms’ audiovisual content) by the next amendment in 2018.100 The primary objective of the Directive is to promote the free flow of media services, regardless of national borders, and to lay down specific legislation on the content of certain services,101 including television (ie the traditional subject of media regulation), on-demand and similar audiovisual services. However, instead of establishing a comprehensive and detailed regulatory framework, it only reflects the existing European consensus in the areas of protection of minors, the fight against hate speech and the limits of commercial communication.

The regulation of information society services or e-commerce services in Europe is based on an EU directive adopted in 2000. The material scope of the E-Commerce Directive includes information society services,102 that is ‘any service normally provided for remuneration, at a distance, by electronic means and at the individual request of a recipient of services’, including intermediary services, which are categorised by the directive as ‘mere conduit’, ‘caching’ and ‘hosting’ services. Moreover, the Directive applies to all services falling within the meaning of the rather broad definition employed, such as webstores, search engines and social media sites. The Directive sought to establish a European single market in the regulated field, and to lay down common rules, primarily for consumer protection purposes (in other words, it did not include any provision regarding the content of the regulated services). The obligations imposed on intermediaries regarding infringing content are also of considerable importance for our purposes.103 Although not specifically designed for Internet services, important rules were also adopted in the fields of copyright,104 advertising105 and data protection,106 and these also have an

98 E-Commerce Directive, Arts 12–14. 99 AVMS Directive, Art 1. 100 Proposal for a Directive of the European Parliament and of the Council amending Directive 2010/13/EU on the coordination of certain provisions laid down by law, regulation or administrative action in Member States concerning the provision of audiovisual media services in view of changing market realities, COM/2016/0287 (AVMS Directive Reform Proposal), Art 1. 101 See ch 1, section IV.H. 102 E-Commerce Directive, Art 2(a), with reference to Art 1(2) of Directive 98/34/EC. 103 E-Commerce Directive, Arts 12–14, see section II of this chapter.
104 Directive 2001/29/EC of the European Parliament and of the Council of 22 May 2001 on the harmonisation of certain aspects of copyright and related rights in the information society. 105 See, eg, Directive 2006/114/EC concerning misleading and comparative advertising. 106 GDPR (n 80).

impact on freedom of expression. While the times when the Internet could be considered a lawless area in Europe have clearly passed, the adoption of a comprehensive regulatory framework specifically designed for the Internet with a direct impact on the freedom of speech is not a likely development in the near future.

v.  Internet Access as a Fundamental Right?

The explosive spread of the Internet was accompanied by the fervour of legislators to expand the scope of fundamental rights, and thus asking whether access to the Internet (possibly broadband) is a constitutional and fundamental right no longer sounds so startling. In some European countries this right is stated in the relevant sectoral laws,107 but it has not been included in any constitution and has not yet been given the status of a constitutional right. Some authors consider it not impossible to recognise this as a new kind of social right – a true fundamental right.108 In this context, the problem of access to the Internet arises as part of the relationship between the state and its citizens (and not as part of the relationship between an ISP and one of its users: an ISP may cancel a subscription if the customer does not pay his or her bills, and a user posting an infringing comment may be banned from Facebook). The problem may arise due to the scarcity of material goods required for such access (which problem may be solved by providing free access in public institutions and places) or in special situations, for example in a penal institution or with regard to a particular group of persons.

The latter situation was addressed by the US Supreme Court judgment passed in Packingham v North Carolina.109 According to the laws of North Carolina, registered sex offenders were not allowed to use commercial social media sites, where they might come into contact with minors. The Court held that such a rule was excessively broad and unjustified: since those concerned were banned from using such websites altogether, their freedom of speech was thereby restricted in a disproportionate manner. The State court held that the rules restricted ‘actions’ rather than ‘speech’,110 and it applied a more lenient standard. However, the US Supreme Court held that the First Amendment was affected in the case. The Supreme Court considered social media services to be a modern public square,111 where ideas are exchanged freely, and that being denied access to this restricted the freedom of speech of the individual concerned. The comparison of mostly privately owned online forums to public squares, streets and parks may have a fundamental impact on the regulation of such privately owned services.112

107 These are Estonia, Finland, France, Greece, and Spain. See Emily Laidlaw, Regulating Speech in Cyberspace (Cambridge, Cambridge University Press, 2015) 20. 108 Ivar A Hartmann, ‘A Right to Free Internet? On Internet Access and Social Rights’ (2013) 13 Suffolk University Journal of High Technology Law 297. 109 Packingham v North Carolina 137 SCt 1730 (2017). 110 State v Packingham 777 SE2d 738, 744 (NC 2015). Regarding the distinction between ‘speech’ and ‘action’, see ch 1, section II.B.ii. 111 Packingham v North Carolina (n 109). 112 On this matter, see ch 5, section II.C.

II.  THE REGULATION OF INTERNET GATEKEEPERS

A.  The Roles and Types of Gatekeepers

This volume focuses on the impact of Internet gatekeepers on the process of public communication, and on the legal issues associated with their activities. Generally speaking, a gatekeeper is an entity tasked with deciding if a person or thing can pass through a ‘gate’ controlled by the gatekeeper.113 Gatekeepers have existed in all historic periods of public communication, and defining their legal status has often caused problems for the law. Generally, newspaper kiosks, postal carriers or cable and satellite providers were not considered to have a direct impact on the media content they made available to the public. A postal carrier or cable provider could prevent individual readers or viewers from accessing information by refusing to deliver a paper or fix a network error (thereby also hurting its own financial interests), but it was not in a position to decide on the content of newspaper articles or television programmes. Such actors had limited potential to interfere with the communication process, even though they were indispensable parts of it, and this made them a tempting target for governments seeking to regulate, or at least keep within certain boundaries, the freedom of speech of others by regulating the intermediaries.

Even though the Internet seems to provide direct and unconditional access for persons wishing to exercise their freedom of speech in public, gatekeepers still remain an indispensable part of the communication process. A gatekeeper is more specifically defined as a person or entity whose activity is necessary for publishing the opinion of another person or entity, and gatekeepers include ISPs, blog service providers, social media, search engine providers, entities selling apps, webstores, news portals, news aggregating sites and the content providers of websites who can decide on the publication of comments to individual posts. Some gatekeepers may be influential or even indispensable, with a considerable impact on public communication, while other gatekeepers may have more limited powers, and may even go unnoticed by the public. It is true of all gatekeepers that they are capable of influencing the public without being government actors, and that they are usually even more effective at influencing it than governments themselves.114 As private entities, they are not bound by any constitutional requirements to maintain the freedom of speech, so they can establish their own service rules concerning that freedom.

According to the classification developed by Laidlaw, ‘internet gatekeepers’ form the largest group and they control the flow of information. Among these entities, the ‘internet information gatekeepers’ form a smaller group, and through this control they are capable of affecting individuals’ participation in democratic discourse and public debate.115 In this model, a gatekeeper belongs to the latter group if it is capable of facilitating or hindering democratic discourse.116 Such activities raise more direct questions regarding



113 Laidlaw (n 107) 37. 114 ibid 39. 115 ibid 44. 116 ibid 46.

the enforcement of the freedom of speech, on the side both of the party influenced by the gatekeeper and of the gatekeeper itself. Laidlaw’s ‘internet information gatekeepers’ can be divided into three groups:

(a) ‘macro-gatekeepers’, which are indispensable parts of using the Internet: each and every user needs to use them (eg ISPs and search engine providers);
(b) ‘authority gatekeepers’, which are the operators of websites with considerable traffic: they play a key role in online communication, but they are not indispensable (eg mobile ISPs, social media sites and other popular websites);
(c) ‘micro-gatekeepers’, which are the minor actors in the flow of information (eg blog service providers, content providers and moderators of less frequented websites), which are still part of the democratic public sphere due to their content.117

Natali Helberger and her co-authors apply a different logic and identify two fundamental groups of gatekeepers. Gatekeepers of the first group control access to information directly, while gatekeepers of the second group control access to the important services that are needed to connect the user to various types of content.118 Members of the first group are similar to traditional editors, who decide on the content to be published, while members of the second group become gatekeepers as ISPs (or cable providers in the context of television) due to the structure of the flow of information. For the purpose of freedom of speech, the most important online gatekeepers may belong to either of these groups, depending on the activity they perform: social media platforms, search engines and application platforms routinely make ‘editorial’ decisions on making content unavailable, deleting or removing it (whether to comply with a legal obligation, to respect certain sensibilities, to protect their business interests or to act at their own discretion). As such decisions have a direct impact on the flow of information, these gatekeepers belong to the first group. When the activities of such gatekeepers are related to sorting content, changing the focus among various pieces of content (that is, the ‘findability’ of such content) or to creating a personalised offering for a user, they belong to the second group.119

Thus, as Uta Kohl notes, the most important theoretical questions pertaining to the gatekeepers of the Internet relate to whether they play an active or a passive role in the communication process, the nature of their ‘editorial’ activities, and the extent of the similarities between their editorial activities and actual editing.120 The role of gatekeepers covered in this volume is not passive. They are key actors of the democratic public sphere and actively involved in the communication process, including making decisions about what their users can access and what they cannot, or can access only with substantial difficulty. The EU Directive, which regulates (in part) the activities of individual gatekeepers, does not require such gatekeepers to acknowledge their own role as editors, but it does allow them to be held liable for infringements in accordance with

117 ibid 53–54. 118 Natali Helberger, Katharina Kleinen-von Königslöw and Rob van der Noll, ‘Regulating the New Information Intermediaries as Gatekeepers of Information Diversity’ (2015) 17 info 50, 52. 119 ibid 53–54. 120 Uta Kohl, ‘Intermediaries within Online Regulation’ in Diane Rowland, Uta Kohl and Andrew Charlesworth (eds), Information Technology Law, 5th edn (London, Routledge, 2016) 72, 85–87.

their relationship with the content. A gatekeeper may not be held liable if it is not actively involved in the public transmission of unlawful content, or if it is not aware of the infringing nature of the content, but it is required to remove such content after becoming aware of the infringement.121 However, this does not prevent gatekeepers from sorting through the various pieces of content of their own volition and in a manner permitted (not regulated) by law. Under the current legal approach, gatekeepers are not considered as ‘media services’. This means that while they do demand protection for the freedom of speech in order to enable their selection activities, they are not bound by the various legal guarantees concerning the right of individuals to access the media,122 and they are not subject to obligations that are otherwise applicable to the media as a private institution of constitutional value,123 as it is conceptualised in the European legal approach.124

B.  The ‘Freedom of Speech’ of Gatekeepers and Algorithms

Due to the large volumes of data transmitted, gatekeepers use not only human resources, but also algorithms to process information. A term borrowed from mathematics, an algorithm is a method, a set of guidelines or instructions, consisting of a sequence of steps and suitable for solving a problem. In general, computer programs embody algorithms used to instruct a computer how to execute a task. In the context of gatekeepers, a decision concerning the flow of information (ie the filtering, removal or higher or lower ranking of content and its presentation to users) is usually ‘determined’ by an algorithm, meaning that the legal status of such decisions, as well as the nature and subject of legal rights and obligations, poses fundamental questions. An algorithm is a kind of ‘editor’, which presents the user with content according to the decisions of its creator and employing data collected about the user during the use of the service or other services (concerning his or her interests and preferences).125 (A minimal illustrative sketch of such an ‘editorial’ ranking decision is provided below.) This method is also suitable for manipulating users en masse: in an infamous experiment, groups of Facebook users were exposed to increased amounts of positive and negative information (by changing the relevant algorithm), and the emotions of the subjects were thereby influenced in positive and negative directions without the subjects’ being aware of their participation in the experiment.126

The control exercised by gatekeepers also includes the power to decide who may reach the public, who is banned from the public, who is to follow the rules of the public

121 E-Commerce Directive, Arts 12–15. 122 András Sajó and Monroe Price (eds), Rights of Access to the Media (The Hague, Kluwer Law International, 1996). 123 William J Brennan, ‘Address’ (1979) 32 Rutgers Law Review 173. 124 Commission on Freedom of the Press, A Free and Responsible Press. A General Report on Mass Communication: Newspapers, Radio, Motion Pictures, Magazines, and Books (Chicago, IL, University of Chicago Press, 1947); John C Nerone, Last Rights: Revisiting Four Theories of the Press (Urbana, IL, University of Illinois Press, 1995) 77–100.
For the company’s response, see Adam DI Kramer, Jamie E Guillory and Jeffrey T Hancock, ‘Experimental Evidence of Massive-Scale Emotional Contagion through Social Networks’, at www.pnas.org/content/pnas/111/24/8788.full.pdf.

and who is to remain silent.127 This means that the legal treatment of algorithms, and whether or not their operations are covered by the freedom of speech, is of the utmost importance. In previous decades, constitutional protection was granted to actual speech and opinions expressed in public, as well as to other forms of expression, the function of which can be considered analogous to ‘traditional’ speech. For example, video games and search results compiled by search engines are protected by the freedom of speech.128 It is a reasonable question whether or not the ‘communication’ produced with the help of algorithms used by gatekeepers is protected by the freedom of speech.

If an algorithm conducts editing, meaning that it makes decisions concerning the sorting, removal and ranking of pieces of content, it might be considered ‘speech’. Such decisions have a fundamental impact on the public appearance of the actual speaker (who is preferred or disfavoured by the algorithm), giving the decision of the algorithm a certain communicative content that is protected under the aegis of the freedom of speech. Such decisions also convey a material communicative message to other users, which influences the capability of such users to access information. For this reason, such users experience the decision as an ‘opinion’ even more directly.129

On the other hand, it may be argued that a decision made by an algorithm (ie a search ranking or the compilation of a news stream) is fully automated and without any actual content (it only sorts through or makes other kinds of decisions concerning the content of others), meaning that it should be considered as an ‘action’ instead of ‘speech’. Indeed, the algorithms of gatekeepers often operate without any communicative content – think of the operation of the TCP/IP protocol or cache storage, which do not convey any ‘message’ to users.130 Tim Wu argues that the activities of devices that are needed to convey speech but merely transmit information without making any decision may not be considered speech.131 A typical embodiment of this proposition in the offline world is a telephone service. On the other hand, cable television services are different, in that cable service providers make decisions on how to present a channel to the audience (‘editing’).132 Wu also argues that the activities of an online gatekeeper should be considered ‘action’ instead of ‘speech’ if they are merely functional, in the sense that they are necessary to transmit the speech of others but do not carry any independent meaning themselves (Wu considers the search rankings of a search engine to be without such meaning).133 The legal approach toward search engines is a complex matter and will be revisited later. Probably even Wu would agree that the activities of Facebook go beyond being merely functional and convey material messages in and of themselves, because a personalised news stream is compiled for each user upon login (selecting some of the content available to the user). It seems inevitable that the activities of an ‘editing’ algorithm must be considered ‘speech’,

127 Andrew Tutt, ‘The New Speech’ (2014) 41 Hastings Constitutional Law Quarterly 235, 250. 128 ibid 259. See ch 1, section IV.G.i. 129 Stuart M Benjamin, ‘Algorithms and Speech’ (2013) 161 University of Pennsylvania Law Review 1445, 1447. 130 ibid 1471. 131 Tim Wu, ‘Machine Speech’ (2013) 161 University of Pennsylvania Law Review 1495, 1525–33. 132 Turner Broadcasting System v FCC 512 US 622 (1994); and Turner Broadcasting System v FCC 520 US 180 (1997). 133 Wu (n 131) 1524–31.

86  Regulation of the Internet and its Gatekeepers as the algorithm conveys a material message itself. In addition to the trends in legal ­development, a reason for this is that such activities are experienced by their recipients as ‘speech’. However, it seems unlikely that decisions made by algorithms and human beings can be distinguished from each other in a consistent manner.134 If they are not recognised as speech, the most important services of search engines and the social media can be regulated without respect for the most fundamental constitutional restrictions. Even if a certain way of presenting content is considered speech, the speech of an algorithm may be regulated in fundamentally different ways in the US and Europe. Recognition as speech also implies that the algorithms would also be expected to respect the limits of freedom of speech, and the providers of services using such ­algorithms would not be exempted from the application of general laws that are not related to the content of speech (eg anti-trust or tax laws). It also seems clear that such ‘communicating’ algorithms do not ‘make decisions’ on their own, and the actual person who created the program is always there in the ­background making the actual ‘editorial’ decisions, that is, determining the way the program should operate. In the end, the ‘machine does not speak’135 and, obviously, it does not become a beneficiary of freedom of speech. Frank Pasquale dedicated an entire book to the problems posed by algorithms, with a focus on data protection and privacy. In his presentation of the situation, data collected by algorithms concerning the users are put in a ‘black box’, the content of which is unknown to the users, as is the extent of the information stored. Pasquale argues that the algorithms that ‘decide’ on lending, employment, investments and the selection of romantic partners also amplify existing social problems, such as discrimination, racism and sexism, as they replace human decision-making and operate without any morality or transparency, making it easy for their operators to escape liability.136 On the other hand, Chander argues that these problems are not caused by the online world or the ‘black box’ as such problems already exist, and therefore they should be handled in the real world: racism is a property of people, not of algorithms. Algorithms can help to solve such problems, if they are calibrated properly, in order to facilitate the combating of such phenomena.137 In the near future, the free speech of robots and generally ­artificial ­intelligence will raise serious legal concerns. According to Massaro, Norton and Kaminski, free speech doctrine provides a basis for regulating and protecting the speech of nonhuman speakers.138 Algorithms can also play a role in the provision of news services. They can help in compiling personalised collections of news items and supporting the selection of news collection sites, and they may even be capable of generating articles if provided with suitable information. Naturally, the editor working behind such an algorithm is responsible

134 Benjamin (n 129) 1493. 135 ibid 1479. 136 Frank Pasquale, The Black Box Society: The Secret Algorithms That Control Money and Information (Cambridge, MA, Harvard University Press, 2015) 6–14. 137 Anupam Chander, ‘The Racist Algorithm?’ (2017) 115 Michigan Law Review 1023, 1025. 138 Toni M Massaro and Helen Norton, ‘Siri-ously? Free Speech Rights and Artificial Intelligence’ (2016) 110 Northwestern University Law Review 1169; Toni M Massaro, Helen L Norton and Margot E Kaminski, ‘SIRI-OUSLY 2.0: What Artificial Intelligence Reveals about the First Amendment’ (2017) 101 Minnesota Law Review 2481.

The Regulation of Internet Gatekeepers  87 for any infringement caused by such articles.139 Sarah Eskens and her co-authors warn that the personalisation of news (the ‘Daily Me’) raises several questions that are relevant to media law, at least in Europe. Such questions relate to issues including the guarantees of open public debate, diversity and pluralism, the elimination of censorship and the maintenance of social cohesion.140 If such a service is not subject to the rules of media regulation (as the news streams of social media sites are not) then it is exempted from the legal obligations that are applicable with regard to the above issues, and this exemption may sooner or later result in the need to change the regulatory provisions. C.  The Problems of the Media Survive in the Activities of Gatekeepers i.  Media or Tech Companies? Large Internet gatekeepers consider themselves tech companies.141 It is in their best interests to do so, for two reasons. First, the regulations applicable to technology companies are far narrower and less stringent than those applicable to media companies (which are also subject to content regulation, special restrictions on competition, the prohibition of concentration, and the obligation to perform public interest tasks). Second, the moral requirement of social responsibility is far less frequently mentioned concerning the activities of tech companies.142 However, the legal classification of a given service does not derive from the self-image of the service provider but from the nature of its activities. For this reason, some of the gatekeepers, primarily the social media platforms, are generally considered to be media undertakings.143 Philip Napoli and Robyn Caplan offer a summary of the questions that arise in this field. The authors argue that, considering their main activities, large online gatekeepers should no longer be considered tech companies.144 The identity of these companies is based on the argument that they do not produce any content themselves but merely facilitate the publication of content created by their users (social media), or provide links to such content upon request by a user (search engines). This may be true, but Napoli and Caplan identified a number of features that make the operation of such companies quite similar to that of the media. From the perspective of the public sphere, the distribution of content is also of great importance, in addition to that of content creation, and such activities used to form part of the activities of media companies before the emergence 139 Pieter-Jan Ombelet, Aleksandra Kuczerawy and Peggy Valcke, ‘Supervising Automated Journalists in the Newsroom: Liability for Algorithmically Produced News Stories’ Revue du Droit des Technologies de l’Information (Summer 2016). 140 Sarah Eskens, Natali Helberger and Judith Moeller, ‘Challenged by News Personalisation: Five Perspectives on the Right to Receive Information’ (2017) 9 Journal of Media Law 259. 141 Ashley Rodriguez, ‘Zuckerberg Says Facebook will Never Be a Media Company – Despite Controlling the World’s Media’ Quartz (31 August 2016) at https://qz.com/770743/zuckerberg-says-facebook-will-never-be-amedia-company-despite-controlling-the-worlds-media/. 142 See ch 1, section IV.F. 143 Charles Warner, ‘Fake News: Facebook is a Technology Company’ Forbes (27 November 2016) at https://www. forbes.com/sites/charleswarner/2016/11/27/fake-news-facebook-is-a-technology-company/#7b65816e1381. 
144 Philip M Napoli and Robyn Caplan, 'Why Media Companies Insist They're Not Media Companies, Why They're Wrong, and Why it Matters' (2017) 22 First Monday, at http://firstmonday.org/ojs/index.php/fm/article/view/7051/6124.

88  Regulation of the Internet and its Gatekeepers of the Internet. These companies employ human workers, either to make certain editorial decisions or to configure the algorithms that make such decisions automatically, and such editorial decision-making is an essential part of their services. Similarly to ‘traditional’ media, the services of these companies seek to provide the members of their audience (their users) with whatever they want to see, meaning that they may not be considered neutral platforms. Last but not least, the main source of income of these companies is advertising – just like the media.145 The operators of the most influential online platforms – such as Facebook and Google – are considered media companies by legal scholars (even if not by existing legal doctrine).146 ii.  Government Intervention While this volume focuses on the restriction of speech by gatekeepers, it should be noted that the activities of gatekeepers may be restricted by the government, which in turn may have an indirect impact on free speech. In an authoritarian state, this may mean blocking the Internet or mobile phone services,147 or commonly frequented gatekeeper websites,148 or it could mean other softer means that are more compatible with the concept of the rule of law, such as holding the gatekeepers liable for infringing content published by others.149 Even though government regulation or a specific official decision is not directly targeted at the content of speech in such situations, the gatekeepers, by the nature of their activities, are also granted protection by the freedom of speech, meaning that the standards and doctrines relating to the freedom of speech need to be applied to such restrictions. iii.  Private Censorship The gatekeepers themselves also have various ways of restricting the freedom of speech. Such restrictions may be implemented through the configuration of the instructions given to the service (the algorithms employed or the moderators), or ad hoc decisions passed in individual cases concerning specific items of content. The service may interfere with the freedom of speech of others to serve its own business or political interests, or it may do so in cooperation with an oppressive regime. It is a rather ironic example of profit maximisation, when a tech company that ostensibly stands for the freedom of individuals and businesses in its home country voluntarily decides to agree to the expectations of a dictatorial regime and implement the censorship required by the regime in its services.150 ‘Private censorship’ and gatekeepers are frequently mentioned in the same sentence,151 but the definition of ‘censorship’ should be clarified before the ability of such gatekeepers 145 ibid. 146 See, eg, Balkin (n 86) 2304. 147 ibid 2308–09. 148 Yildirim v Turkey, App no 3111/10, judgment of 18 December 2012. 149 E-Commerce Directive, Arts 12–14; Delfi v Estonia (n 88). 150 Ash (n 20) 360–63; Kaweh Waddell, ‘Why Google Quit China – and Why It’s Heading Back’ The Atlantic (19 January 2016) at https://www.theatlantic.com/technology/archive/2016/01/why-google-quit-china-and-whyits-heading-back/424482/. 151 See also Rowland, Kohl and Charlesworth (eds) (n 120) 16.

The Regulation of Internet Gatekeepers  89 to limit the freedom of speech is accepted as a proven fact. Traditionally, the concept of censorship was clearly connected to governments and devices that are potentially capable of suppressing freedom of speech. Censorship is a form of restriction that is applied arbitrarily and without any legal guarantee prior to the publication of a given piece of content, which prevents that content from being presented to the public. With regard to the modern media, the meaning of censorship has expanded considerably since the second half of the twentieth century, and it is used today as a legal term to describe a far wider spectrum of situations. On the one hand, censorship is not necessarily limited to government restrictions, as media content may also be restricted in the service of private interests. On the other hand, censorship is not always the result of external interference; it may also be the result of internal influencing factors, which is known as self-censorship. Frederick Schauer argues that the meaning of censorship has become vague.152 Censorship may originate from the government (Schauer argues that this happened when, for example, criminal proceedings were initiated against an arts centre in Cincinnati for displaying pictures by Robert Mapplethorpe) or from private undertakings (eg when General Motors dismissed employees who criticised the reliability of Chevrolet vehicles in public, or when a newspaper editor changed the content of an article written by a subordinate journalist). Furthermore, censorship may be direct (when the government or another actor interferes with the process of publishing an already finished piece of content), or indirect (eg when the government arbitrarily deprives an artist of an art grant, thereby preventing the artist from presenting his or her views in public), and may be applied in advance (ie effected prior to publication) or ex post (ie an arbitrary punishment after publication). The contemporary concept of censorship is so broad that it is hardly usable as a legal category; basically, it refers to an arbitrary act interfering with the freedom of speech in a manner that is inconsistent with the principles of free speech. In fact, interference by gatekeepers with the communication process (eg when the ranking of an opinion or link is downgraded, a link is deleted from a service, or a comment is moderated during the compilation of the search results by a search engine or of a news feed by a social media platform) does not constitute censorship (even in the broad sense discussed above). It is much closer to the exercise of rights arising from private ownership and other individual rights, which is permitted, in the absence of statutory provisions, and may not be considered arbitrary in a legal sense (even if it can be considered arbitrary in another sense, such as a moral sense). Nonetheless, regulation may have a role to play with regard to this kind of moral arbitrariness, as gatekeepers are certainly capable of influencing the public – what happens to protesting, tolerance and civil commitment if speech is monitored, controlled and suppressed by invisible external powers, even if one cannot cry censorship?153

152 Frederick Schauer, ‘The Ontology of Censorship’ in Robert C Post (ed), Censorship and Silencing: The Practices of Cultural Regulation (Los Angeles, CA, Getty Research Institute for the History of Art and the Humanities, 1998) 147–68. 153 Evgeny Morozov, The Net Delusion: The Dark Side of Internet Freedom (New York, Public Affairs, 2011) 197–203; Tutt (n 127) 286.
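The kind of interference described above, the silent downgrading or hiding of content in a ranking, can be made more tangible with a short sketch. The following Python fragment is illustrative only; the topics, figures and names are invented and do not correspond to any actual platform's code. Its point is that a single configuration choice demotes certain speech without removing it, leaving no visible trace for either the speaker or the audience, which is precisely why such measures escape the traditional vocabulary of censorship.

```python
# Illustrative sketch only: a toy feed-ranking rule with invented data,
# not the code of any actual platform.
from dataclasses import dataclass


@dataclass
class Post:
    author: str
    text: str
    engagement: float  # normalised engagement signal (likes, shares, etc)


# A configurable set of topics the operator has chosen to demote.
# Changing this set is an ordinary internal decision, invisible to users.
DEMOTED_TOPICS = {"protest"}


def score(post: Post) -> float:
    """Rank by engagement, quietly halving the score of demoted topics."""
    s = post.engagement
    if any(topic in post.text.lower() for topic in DEMOTED_TOPICS):
        s *= 0.5  # the post is not removed, merely pushed down the feed
    return s


def build_feed(posts: list[Post]) -> list[Post]:
    return sorted(posts, key=score, reverse=True)


if __name__ == "__main__":
    sample = [
        Post("a", "Join the protest tomorrow", 0.9),
        Post("b", "Holiday photos", 0.6),
    ]
    for post in build_feed(sample):
        print(post.author, post.text)
```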

90  Regulation of the Internet and its Gatekeepers iv.  Diversity and Pluralism The activities of gatekeepers raise questions concerning both their possible direct interference with the freedom of speech and also the issue of media pluralism, which is one of the main objectives of media regulation in Europe. The regulatory regimes aim to increase the diversity of published content and opinions concerning public matters regarding the television and radio markets. To this end, provisions pertaining to content and structural restrictions seeking to prevent the concentration of ownership may also be adopted to a limited extent, and the public service media can also work to expand the media offering.154 Given the market influence of gatekeepers, it seems reasonable to raise the issue of media pluralism, a matter that is quite common with regard to television and radio services,155 and the possible adaptation of known media regulatory solutions to the online environment. Note the strange paradox here: in the beginning, the Internet was heralded as the ultimate solution to the scarcity of content and promised to render earlier forms of media regulations meaningless, while we are now facing problems already all too wellknown from the media market, but on a much larger scale. Nevertheless, freedom of speech remains primarily a ‘negative right’ that protects the content of opinions against external sources of interference, but it does not guarantee the right to use platforms that facilitate the effective exercise of this freedom. Similarly to traditional press and media regulations, a recognised right to reach an audience does not exist in the context of the Internet either.156 v.  Impact on the Audience In addition to discussing matters that are closely related to the freedom of speech, it should also be noted that the activities of large online gatekeepers have an enormous impact on the operation of the public sphere, the quality of public debate and the information the public receives, as well as on the working of the human brain due to the very nature, essence and characteristics of their services.157 The way people search for information will never be the same as it was before Google appeared; the ways in which people verify information have changed fundamentally, and the methods of keeping in contact with others and researching information on public matters has been transformed by Facebook, Twitter and our mobile devices. The presence of these services in the lives of millions and billions of people has had a fundamental impact on our concept of, respect for, perception of and need for privacy, reputation, gender equality and the rights of children.158 154 See ch 1, section IV.H. 155 Robin Mansell, ‘The public’s interest in intermediaries’ (2015) 17 Journal of Policy, Regulation and Strategy for Telecommunication, Information and Media 8; Helberger, Kleinen-von Königslöw and van der Noll (n 118). 156 Jennifer A Chandler, ‘A Right to Reach an Audience: An Approach to Intermediary Bias on the Internet’ (2007) 35 Hofstra Law Review 1095, 1096–103. 157 Nicholas Carr, The Shallows: What the Internet is Doing to Our Brains (New York, WW Norton, 2011). For a different perspective, see John Palfrey and Urs Gasser, Born Digital: Understanding the First Generation of Digital Natives (New York, Basic Books, 2008). 158 Marcelo Thompson, ‘Beyond Gatekeeping: The Normative Responsibility of Internet Intermediaries’ (2016) 18 Vanderbilt Journal of Entertainment & Technology Law 783, 787.

The Regulation of Internet Gatekeepers  91 These changes may in turn bring about amendments to the regulatory regimes designed to protect such rights. The meaning of human freedom and personality rights is therefore also changing, as we have seen examples in history where the appearance of new  media (such as the printing press, radio or television) triggered similar changes. Today, the activities of gatekeepers are assessed from a perspective based on the old doctrines of freedom of speech, but one should remember that the impact of those activities may also cause considerable changes to those doctrines themselves. D.  Analogies in Media Regulation and Regulatory Ideas i. ‘Editing’ Numerous references have been made to the ‘editorial’ activity of gatekeepers, which is a cornerstone for assessing their activities from a legal perspective. The concept of editorial activities is a key part of the notion of ‘media’.159 If the activities of gatekeepers are similar to such editorial activities, the gatekeepers themselves may be subject to media regulation; otherwise they may be considered technology companies. The terms ‘editing’, ‘editorial decision-making’ and ‘editorial discretion’ are usually not defined in legal documents. For the media, these refer to making unavoidable decisions on the content of a given medium, decisions which are indispensable for the operation of any medium. Note that in the context of the press, radio and television (and a considerable number of websites), editors make decisions on content that was commissioned by them or produced by their colleagues (journalists, producers). The definition of ‘commonplace’ editorial activity is set in the EU AVMS Directive within the following notion of ‘editorial responsibility’: ‘editorial responsibility’ means the exercise of effective control both over the selection of the programmes and over their organisation either in a chronological schedule, in the case of television broadcasts, or in a catalogue, in the case of on-demand audiovisual media services. Editorial responsibility does not necessarily imply any legal liability under national law for the content or the services provided.160

According to this definition, the editor of a media service is the person who selects and compiles the programmes to be published, and without whom such programmes would not reach the public. The activities of gatekeepers are considerably different from this model, as the users produce and share independent content en masse, and the platform operator normally does not interfere with this process. However, some aspects of their operation are quite similar to traditional editing. Gatekeepers do not usually decide on content prior to its publication, but they may decide to remove a piece of content subsequently (either voluntarily or in performance of a legal obligation). In certain

159 See ch 1, section IV.B, and Matthew Ingram, 'Sorry Mark Zuckerberg, But Facebook is Definitely a Media Company' Fortune (30 August 2016) at http://fortune.com/2016/08/30/facebook-media-company/; Samuel Gibbs, 'Mark Zuckerberg Appears to Finally Admit Facebook is a Media Company' Guardian (22 December 2016) at https://www.theguardian.com/technology/2016/dec/22/mark-zuckerberg-appears-to-finally-admit-facebook-is-a-media-company. 160 AVMS Directive, Art 1(1)c.

92  Regulation of the Internet and its Gatekeepers cases, gatekeepers may even prevent the publication of a piece of content (preliminary filtering), and similarly they may display some content to the users in a prominent place while almost hiding other content. In general, gatekeepers can influence the content (eg the news feed of Facebook) displayed to their users in line with their own interests. Notably, such editing is performed in bulk and on a daily basis, using human resources, with a view to improving service quality or serving business or other interests, or, in Europe, to comply with legal obligations (if required for the removal of violating content161 or the protection of personal data162). It is clear that the law is heading towards regarding gatekeepers as editors, and another step in this direction is the 2018 amendment of the AVMS Directive on the regulation of platform providers. While the new AVMS Directive emphasises that ‘video-sharing platform services’ do not bear any ‘editorial responsibility’, they still may be subject to content-related obligations (with regard to the protection of children and taking action against hate speech or the advocacy of terrorism). The Directive recognises that such platforms organise (‘display’, ‘tag’, ‘sequence’) user content, and, when accompanied by an obligation to take action against infringing content, their role is clearly similar to editing: (aa) ‘video-sharing platform service’ means a service, as defined by Articles 56 and 57 of the Treaty on the Functioning of the European Union, which meets the following­ requirements: (i) the service consists of the storage of a large amount of programmes or usergenerated videos, for which the video-sharing platform provider does not have editorial responsibility; (ii) the organisation of the stored content is determined by the provider of the service including by automatic means or algorithms, in particular by hosting, displaying, tagging and sequencing.163
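What the Directive calls organisation 'by automatic means or algorithms' may be easier to picture with a deliberately simplified sketch. The following Python fragment is purely illustrative; the tags, keyword rules and titles are invented and are not drawn from any real video-sharing platform. It shows that the provider never edits the videos themselves, yet its automated tagging and sequencing determine what viewers encounter first.

```python
# Purely illustrative: automated 'tagging' and 'sequencing' of user-uploaded videos
# in the sense of the quoted definition; no real platform's code or categories.
from dataclasses import dataclass, field


@dataclass
class Video:
    title: str
    views: int
    tags: set[str] = field(default_factory=set)


# Hypothetical keyword-to-tag rules chosen by the provider.
TAG_RULES = {"news": "current affairs", "match": "sport", "cartoon": "children"}


def auto_tag(video: Video) -> Video:
    """Attach tags automatically, based only on the title supplied by the user."""
    for keyword, tag in TAG_RULES.items():
        if keyword in video.title.lower():
            video.tags.add(tag)
    return video


def sequence(videos: list[Video], promoted_tag: str = "current affairs") -> list[Video]:
    """Order the catalogue: videos carrying the promoted tag first, then by popularity."""
    return sorted(videos, key=lambda v: (promoted_tag not in v.tags, -v.views))


if __name__ == "__main__":
    uploads = [Video("Evening news digest", 1200), Video("Football match highlights", 5000)]
    for video in sequence([auto_tag(v) for v in uploads]):
        print(video.title, sorted(video.tags))
```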

ii.  Previous Analogies in Media Regulation The analogy between the activities of online gatekeepers and other media distribution services (the sale of newspapers, cable and satellite broadcasting, etc) has already been made. While the latter actors also enjoy the freedom of the press themselves, they may be subject to regulation with a view to achieving the goals of media regulation. However, a possible analogy between gatekeepers and distributors is insufficient to determine the legal status of gatekeepers, as gatekeepers have far more scope to interfere with content (and they are already required by law to do so in numerous cases), while the role of distributors is technical in nature and consists of the transmission of content created somewhere else (ie in a place other than their own platform). 161 E-Commerce Directive, Arts 12–14. 162 Case C-131/12 Google Spain SL, Google, Inc v Agencia Española de Protección Datos Mario Costeja González, ECLI:EU:C:2014:317. 163 Directive (EU) 2018/1808 of the European Parliament and of the Council of 14 November 2018 amending Directive 2010/13/EU on the coordination of certain provisions laid down by law, regulation or administrative action in Member States concerning the provision of audiovisual media services (Audiovisual Media Services Directive) in view of changing market realities [‘new AVMS Directive’], Art 1(1)b(aa) (legislative references are made with regard to the AVMS Directive in its 2010 form, and indicate amendments thereto).

The Regulation of Internet Gatekeepers  93 The current legal status of gatekeepers is somewhat reminiscent of some of the statutes passed concerning the press in Europe in the nineteenth century. Press law gradually introduced liability, as with, for example, the old Hungarian press and criminal law.164 With regard to delicts defined in the Press Act, criminal liability was not tied to guilt, and the printing house involved in the publication of material was automatically held liable, unless the guilty author, editor or publisher was identified.165 Today, the application of gradual and guiltless liability is unacceptable under the principles of criminal law, and applying criminal law may be quite excessive for the purposes of media regulation. However, the principle that an entity may be held liable for a piece of violating content, even if it was not involved in its publication and was not aware of its content prior to publication, seems to be reappearing in the modern media environment with regard to the liability of gatekeepers. In other words, the liability of gatekeepers is quite similar to secondary liability regimes.166 In such cases, the liability of an entity is not established on the basis of its own action or omission but on the unlawful act of another person (eg a legal person is liable for the acts of its members, and employers are liable for the violations of their employees). However, this similarity is only superficial, as secondary liability may not be applied in media regulation – whenever the liability of a gatekeeper is established by law today, the entity is held liable for its own act or omission (eg, for not removing a piece of infringing content when the gatekeeper was aware of its infringing nature). iii.  Recommendations of the Council of Europe The documents, recommendations, statements and other materials issued by the Council of Europe are not binding sources of law. Even so, they have a significant role for two reasons. On the one hand, they signify the major directions of common European thinking, which may ultimately lead (though only slowly and to varying extents and in a haphazard manner) to real state-level legislation; and on the other hand, these documents are able to inform the practice of the ECtHR, which is inclined to make references to the documents issued by the Council of Europe (and within that the Committee of Ministers in particular) in terms of Article 10. Over the last decade, several important commitments have been expressed with regard to Internet intermediaries. The Recommendation (2007) related to the public service value of the Internet highlights that Member States are required to identify the roles and responsibilities of all parties concerned, under clear legal frameworks, and calls upon the governments of the Member States to encourage the private sector to acknowledge and familiarise itself with its evolving ethical roles and responsibilities. The key actors of online communication must be accountable with some (primarily self- and co-) regulation methods.167

164 Mihály Révész T, ‘A sajtójogi felelősség kérdése a magyar jogban’ (1992) 4 Jogtörténeti Szemle 48. 165 Hungarian Act XIV of 1914, ss 32–36. 166 Laidlaw (n 107) 38. 167 Recommendation CM/Rec(2007)16 of the Committee of Ministers to member states on measures to promote the public service value of the internet.

94  Regulation of the Internet and its Gatekeepers The Recommendation on Internet freedom (2016) once again underlines the extremely important role private undertakings play in the process of communication. At the same time, it asks these actors not only to comply with Article 10, but also – expressly – to meet the requirements of legality, legitimacy and proportionality when they are blocking, filtering and removing content.168 With this, the Council of Europe recommends that obligations similar to those imposed on the different bodies of the state should be met in this context. In the new three-year strategy of the Council of Europe on Internet Governance, the improvement of the policy relating to the role of intermediaries is set as a priority task for the Council itself.169 The Committee of Ministers addressed their concerns in a 2011 Declaration about Internet companies, which ‘are not immune to undue interference. … [F]ree speech online is challenged in new ways and may fall victim to action taken by privately owned internet platforms and online service providers’.170 According to the document, private interference with content should also be judged against freedom of expression standards, and especially the Article 10 jurisprudence of the ECtHR.171 The Council of Europe drafted a recommendation specifically for Member States in order to set an appropriate European regulatory framework for Internet­ intermediaries.172 This can be regarded as an important step towards a common European understanding of the arising issues. The document suggests that Member States ‘take all necessary measures to ensure that internet intermediaries fulfil their responsibilities to respect human rights’. The Appendix details not only the duties of the state but also the responsibilities of the intermediaries, relating to consistent and non-discriminatory application and enforcement of their own standards, procedural safeguards, transparency and the availability of effective remedies and direct redress in cases of user (content provider) grievances.173 E.  The Liability of Gatekeepers in General i.  The European Union If gatekeepers provide only technical services when they make available, store or transmit the content of others (much like a printing house or a newspaper stand) then it would seem unjustified to hold them liable for the violations of others, as long as they are unaware of the violation. However, according to the European approach, the gatekeepers may 168 Recommendation CM/Rec(2016)5[1] of the Committee of Ministers to member States on Internet­ freedom, Appendix, points 2.2.1 and 2.2.2. 169 Internet Governance – Council of Europe Strategy 2016–2019. Democracy, human rights and the rule of law in the digital world, s 13(c). 170 Declaration of the Committee of Ministers on the protection of freedom of expression and freedom of assembly and association with regard to privately operated internet platforms and online service providers (7 December 2011), points 4 and 5. 171 ibid point 6. See also the issue paper by the Council of Europe Commissioner for Human Rights, The rule of law on the Internet and in the wider digital world, CommDH/IssuePaper(2014)1 of 8 December 2014, 73, at rm.coe.int/16806da51c. 172 Recommendation CM/Rec(2018)2 of the Committee of Ministers to member States on the roles and responsibilities of internet intermediaries. 173 ibid Appendix, point 2.

The Regulation of Internet Gatekeepers  95 be held liable for their own failure to act after becoming aware of the violation (if they fail to remove the infringing material). The relevant EU Directive requires intermediaries to remove such materials after becoming aware of their infringing nature.174 Even though the gatekeeper activities falling within the scope of the Directive (ie mere conduit, caching and hosting) play an important role in online communication, the issue of liability has arisen since 2000, the year of adoption of the Directive, with regard to various gatekeepers that did not even exist at the time or which were not included in the scope of legislation for other reasons (eg search engines or social media platforms). In the absence of a better analogy, courts tend to liken such entities, for example, to h ­ osting providers. In principle, legislation in the Member States may expand the group of regulated gatekeepers, but only with regard to those falling within the respective Member State’s jurisdiction. The material scope of the regulation is of great importance: if certain conditions are met, the EU Directive exempts gatekeepers from liability even if they let violating pieces of content pass. This exemption-based system should not necessarily be considered outdated, but something has certainly changed since 2000: there are fewer and fewer reasons to believe that the gatekeepers of today remain passive with regard to content and perform nothing more than storage and transmission. While content is still produced by users or other independent actors, the services of gatekeepers select from and organise, promote or reduce the ranking of such content, and may even delete it or make it unavailable within the system. The fairness rule of the Directive that grants exemption to a passive actor as long as it does not get involved in the process (ie until it becomes aware of the violation) seems less and less to be the only feasible solution to the problem of the new gatekeepers. Still, it seems true that the volume of content processed by the new gatekeepers makes it impossible and unreasonable to impose a comprehensive obligation to control prior to publication, or even after publication without the requirement of an external notice. According to the preamble to the Directive: The exemptions from liability established in this Directive cover only cases where the activity of the information society service provider is limited to the technical process of operating and giving access to a communication network over which information made available by third parties is transmitted or temporarily stored, for the sole purpose of making the transmission more efficient; this activity is of a mere technical, automatic and passive nature, which implies that the information society service provider has neither knowledge of nor control over the information which is transmitted or stored.175

Accordingly, Articles 12 to 14 grant wide exemptions to intermediaries. For hosting providers (Article 14), this means that a provider is not held liable for the transmission or storage of infringing content, if it is not its own content and it is not aware of the infringing nature of the content, provided that it takes action to remove the information or make it inaccessible without delay. If such measures are not taken, the provider may be held liable for its own omission. In addition, the Directive also stipulates that intermediaries may not be subject to a general monitoring obligation to identify illegal activities (Article 15).
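The logic of the hosting exemption can also be restated schematically. The following sketch, written in Python with names invented for illustration (the Directive itself prescribes no particular technical implementation), captures the essential conditions: storage of third-party content attracts no liability in itself, liability follows only from inaction once actual knowledge is obtained, and no general screening of all stored items is presupposed.

```python
# Schematic illustration of the Article 14 'notice and takedown' logic.
# Names and structure are invented for illustration; the Directive prescribes
# no particular technical implementation.
from dataclasses import dataclass


@dataclass
class HostedItem:
    item_id: str
    known_unlawful: bool = False  # provider has actual knowledge of illegality
    removed: bool = False


class HostingProvider:
    def __init__(self) -> None:
        self.items: dict[str, HostedItem] = {}

    def store(self, item_id: str) -> None:
        # Mere storage of third-party content attracts no liability (Art 14(1)),
        # and no general screening of uploads is required (Art 15).
        self.items[item_id] = HostedItem(item_id)

    def receive_notice(self, item_id: str) -> None:
        # A substantiated notice gives the provider knowledge of the allegedly
        # illegal content; from this point it must act expeditiously.
        item = self.items.get(item_id)
        if item is not None:
            item.known_unlawful = True
            item.removed = True  # takedown in response to the notice

    def exposed_to_liability(self, item_id: str) -> bool:
        # Liability follows only from the provider's own omission:
        # knowledge of illegality combined with a failure to remove.
        item = self.items[item_id]
        return item.known_unlawful and not item.removed


if __name__ == "__main__":
    host = HostingProvider()
    host.store("comment-1")
    host.receive_notice("comment-1")
    print(host.exposed_to_liability("comment-1"))  # False: it acted on the notice
```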

174 E-Commerce Directive, Arts 12–14.
175 ibid recital (42).

Directive, Arts 12–14. recital (42).

96  Regulation of the Internet and its Gatekeepers The notion of ‘illegal’ raises an important issue, as the obligation of removal is independent from the outcome of an eventual court or official procedure that may establish the violation, and the storage provider is required to take action before a decision is passed (provided that a legal procedure is initiated at all). This means that the provider has to decide on the issue of illegality on its own, and its decision is free from any legal guarantee (even though it may have an impact on freedom of speech). This rule may encourage the provider concerned to remove content to escape liability, even in highly questionable situations. It would be comforting (but probably inadequate, considering the speed of communication) if the liability of an intermediary could not be established unless the illegal nature of the content it has not removed is established by a court.176 The interpretation of the Directive has been somewhat clarified by certain decisions of the Court of Justice of the European Union (CJEU). One of the cases related to the protection of intellectual property. In Google France v Louis Vuitton,177 the Court held that the storage provider (Google’s AdWords service) was exempted from liability, as ‘the rule laid down therein applies to an internet referencing service provider in the case where that service provider has not played an active role of such a kind as to give it knowledge of, or control over, the data stored. If it has not played such a role, that service provider cannot be held liable for the data which it has stored at the request of an advertiser’, provided that it took action after becoming aware of the situation.178 According to the judgment handed down in L’Oréal and Others v eBay and Others,179 the operator of a webstore is not liable for any content uploaded by a client because the mere fact that the operator of an online marketplace stores offers for sale on its server, sets the terms of its service, is remunerated for that service and provides general information to its customers cannot have the effect of denying it the exemptions from liability.180

However, the exemption applies only as long as it remains neutral toward the respective piece of content: Where, by contrast, the operator has provided assistance which entails, in particular, optimising the presentation of the offers for sale in question or promoting those offers, it must be considered not to have taken a neutral position between the customer-seller concerned and potential buyers but to have played an active role of such a kind as to give it knowledge of, or control over, the data relating to those offers for sale. It cannot then rely, in the case of those data, on the exemption from liability referred to in Article 14(1) of Directive 2000/31.181

The question is thus whether or not the storage provider could have been aware of the infringing content.182 The above rulings are to be regarded in the light of the absence of

176 Christina M Mulligan, ‘Technological Intermediaries and Freedom of the Press’ (2013) 66 Southern ­Methodist University Law Review 157, 175. 177 Joined Cases C-236/08 to C-238/08 Google France SARL and Google, Inc v Louis Vuitton Malletier SA and Others, ECLI:EU:C:2010:159. 178 ibid para 120. 179 Case C-324/09 L’Oréal SA and Others v eBay International AG and Others, ECLI:EU:C:2011:474. 180 ibid para 115. 181 ibid para 116. 182 ibid para 120.

The Regulation of Internet Gatekeepers  97 a general monitoring and control obligation, as providers may not be required to implement such general technical solutions (filtering).183 In other areas, the situation of gatekeepers seems less comfortable. The (draft) directive on combating terrorism sets forth a similar obligation to remove content.184 As already discussed, the new AVMS Directive makes it mandatory to take action against content that is dangerous for children or inconsistent with the prohibition of hate speech or advocacy of terrorism,185 while the data protection regulations require gatekeepers to comply with data protection rules with regard to their own activities (ie the obligation does not arise as a result of a possible violation by another person or user).186 It also seems possible for a court of a Member State to decide that the activities of a certain gatekeeper are not covered by the E-Commerce Directive (ie it does not qualify as a host provider), meaning that the general rules of civil and criminal law may be applied. The ECtHR did not object to this interpretation when the content provider of a website was held liable by a national court for comments posted anonymously.187 The EU strategy on the digital single market takes note of the practical issues that may arise in the course of applying the Directive: according to the European Commission, the issues pertaining to the laborious removal of infringing content by gatekeepers and the removal of lawful content need to be addressed.188 The 2017 communication issued on the basis of this strategy seeks primarily to take action against sexual abuse, child pornography, terrorism, hate speech and copyright infringement, and to promote the effective application of consumer protection rules in an online environment. However, it does not seek to change the liability regime established by the E-Commerce Directive, even though it does suggest clarifications and adjustments regarding the obligations of storage providers (including platform providers, such as social media, search engines and webstores) in line with Article 14. Such suggestions include a requirement to take proactive (pre-publication) action against infringing content, while not excluding the possibility of ex post exemption from liability for infringing content that may be published despite the (voluntary) preliminary filtering,189 and call for the transparency of policies adopted by service providers concerning user content and the notice and removal procedure,190

183 See also Case C-70/10 Scarlet Extended SA v Société belge des auteurs, compositeurs et éditeurs SCRL (SABAM), ECLI:EU:C:2011:771; Case C‑360/10 Belgische Vereniging van Auteurs, Componisten en Uitgevers CVBA (SABAM) v Netlog NV, ECLI:EU:C:2012:85. 184 Monica Horten, ‘Content “Responsibility”: The Looming Cloud of Uncertainty for Internet Intermediaries’ Center for Democracy and Technology (September 2016) at https://cdt.org/files/2016/09/2016-09-02-ContentResponsibility-FN1-w-pgenbs.pdf, 12. 185 New AVMS Directive, Art 28b. 186 Google Spain SL (n 162); GDPR (n 80) Art 17. 187 Delfi v Estonia (n 88); Magyar Tartalomszolgáltatók Egyesülete and Index.hu Zrt v Hungary, App no 22947/13, judgment of 6 February 2016; see ch 1, section I.D. 188 A Digital Single Market Strategy for Europe. Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions, COM(2015) 192 final, 55–56 (point 4.6). 189 Tackling Illegal Content Online Towards an Enhanced Responsibility of Online Platforms. Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions, COM(2017) 555, point 3.3.1. 190 ibid point 4.2.

98  Regulation of the Internet and its Gatekeepers the establishment of the appropriate rules of notices191 and the prompt removal of actual infringing content.192 The following recommendation by the European Commission confirms this approach, and so the liability exemptions (the notice and takedown system) are set in the E-Commerce Directive as the firm basis for tackling illegal online content.193 The document addresses the Member States, but aims to widen the duties and responsibilities of the gatekeepers through state legislation regarding the proper submission and processing of user notices, the capability of hosting providers to send counter-notices, transparency (which now seems to be the magic word in these debates) and procedural safeguards. Most importantly, ‘[h]osting service providers should be encouraged to take, where appropriate, proportionate and specific proactive measures in respect of illegal content’,194 but ‘there should be effective and appropriate safeguards to ensure that hosting service providers act in a diligent and proportionate manner in respect of content that they store’.195 The recommendation clearly shows the approach taken by the Commission, which aims to strengthen the already available regulatory mechanisms by formalising the so far nonjuridical procedures and policies of the gatekeepers (hosting services). ii.  The United States In the US, the liability of gatekeepers is regulated on the basis of a different theoretical background. For the purpose of ensuring the smooth growth and economic strengthening of Internet companies, the courts took a step backwards by claiming to protect freedom of speech.196 Today, gatekeepers are granted virtually complete immunity when it comes to infringing content of others. Interestingly, the service providers would consider any interference with the current status quo to be a limitation of their freedom, even though their immunity is based on the theory that they have nothing to do with any infringing content and it is not their ‘opinion’. In US legal literature, the term ‘proxy censorship’197 or ‘collateral censorship’198 is used to describe a situation when the law restricts freedom of speech by regulating the ­activities of gatekeepers, thereby requiring the intermediaries to do the ‘dirty work’. It seems almost certain that the solutions applied in Europe are considered as such censorship in the US. Seth Kreimer offers a thorough, logical and easy-to-follow description of these arguments: ‘proxy censorship’ constitutes a restrictive interference with the freedom of speech if a government requires a gatekeeper to decide what is legal or illegal (or what can and cannot be said where freedom of speech is a relevant factor when

191 ibid point 3.2. 192 ibid point 4. 193 Commission Recommendation of 1 March 2018 on measures to effectively tackle illegal content online (C(2018) 1177 final). 194 ibid point 18. 195 ibid point 19. 196 Chander (n 3). 197 Seth F Kreimer, ‘Censorship by Proxy: The First Amendment, Internet Intermediaries, and the Problem of the Weakest Link’ (2006) 155 University of Pennsylvania Law Review 11. 198 Felix T Wu, ‘Collateral Censorship and the Limits of Intermediary Immunity’ (2013) 87 Notre Dame Law Review 293.

The Regulation of Internet Gatekeepers  99 assessing the respective piece of content) without providing any formal procedural guarantee. A comforting remedy to the lack of guarantees could be if the gatekeepers were not obliged to make such decisions, meaning that they are exempted from liability by the regulation.199 Section 230 of the 1996 Communications Decency Act (CDA) lays down the ‘Good Samaritan’ protection for the providers of ‘interactive computer services’: (c) Protection for ‘Good Samaritan’ blocking and screening of offensive material (1) Treatment of publisher or speaker No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider. (2) Civil liability No provider or user of an interactive computer service shall be held liable on account of— (A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or (B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1).200

However, the protection is not complete and unconditional. The Act relies on judicial case law to establish when a gatekeeper becomes a ‘publisher’ or ‘speaker’, thereby losing its immunity, and point (e) stipulates that the protection does not apply in the event of committing a federal crime or violating intellectual property rights. The scope of exceptions from immunity was extended in April 2018, by a law that permits action against websites, including hosting service providers, that promote trafficking of human beings for sexual exploitation.201 It seems clear that the CDA explicitly allows gatekeepers to make content-related decisions. The absence of guarantees noted by Kreimer also applies with regard to this authorisation: in the US regime, gatekeepers are free to decide which pieces of content to remove and how to present content to their users, and this fact implies the restriction of the freedom of speech, even if it is not considered as such in a legal sense.202 Section 230 has given rise to extensive case law, which seems to tend to interpret the obligations of a ‘Good Samaritan’ in a restrictive manner. In Zeran v AOL,203 the court established that exemption from liability does not cease to exist when a letter or takedown notice is sent to a service provider drawing its attention to the infringing content. The judgment also noted that the publication, editing and removal of a piece of content falls within the discretion of the service provider, and it does not exclude its immunity.

199 Kreimer (n 197) 14–31. 200 47 USC 230 Title 47, pt I. 201 Allow States and Victims to Fight Online Sex Trafficking Act of 2017 (FOSTA). 202 See Rebecca Tushnet, ‘Power without Responsibility: Intermediaries and the First Amendment’ (2008) 76 George Washington Law Review 986, 1009. 203 Zeran (n 1).

100  Regulation of the Internet and its Gatekeepers In Fair Housing Council of San Fernando Valley v Roommates.com,204 the court ruled against the operator of the website. The website was meant to bring together co-tenants (eg students living away from home), so that accommodation-seekers could specify the features of persons with whom they would be willing to live, thereby excluding persons of different races and colours, while they were required to specify their own characteristics. The website operator was held liable for such discriminatory practices, as it encouraged its users to violate the requirement of equal treatment by performing targeted searches. This interpretation of the law was weakened by the judgment passed in Doe v ­Backpage.205 The users of the website in this case were allowed to publish advertisements, including solicitations for prostitution, and the plaintiffs argued that this may have facilitated the trafficking of human beings for sexual purposes. The court held that the decision made by the website operator on the organisation of user content and establishing the rules of publication did not prevent him from invoking the immunity granted by section 230 CDA. This decision goes against the conclusions reached in the above-mentioned Roommates.com case. Backpage’s offering of adult services sections remained highly controversial, due to allegations that Backpage knowingly allowed and encouraged users to post ads relating to prostitution and human trafficking, particularly involving minors, and took steps to intentionally obfuscate the activities. After a series of court cases and the arrest of the company’s CEO and other officials, Backpage removed the adult services subsection in the US in 2017. On 6 April 2018, backpage.com and affiliated websites were seized by the federal law enforcement bodies.206 The case of Doe v MySpace207 involved a social media service through which a user contacted and, in the course of a personal meeting, eventually sexually molested another user. The court held that the website provided only a means of communication to its users and it was not held liable for the crime committed. Similarly, a service provider may not be held liable if its social media service is used to organise the perpetration of an act of terror.208 The broad interpretation of the conditions for exemption has been criticised by various authors,209 either as a means of urging action against websites that deliberately store or outright encourage the creation of infringing contents,210 or to speak out against the discrepancies regarding the legal status of ‘Good Samaritans’: if they are not held liable for violations, they should be expected to introduce self-regulation in a more consistent

204 Fair Housing Council of San Fernando Valley v Roommates.com, LLC 521 F3d 1157 (9th Cir 2008). 205 Jane Doe No 1 v Backpage.com, LLC 817 F3d 12 (1st Cir 2016). 206 Paul Demko, ‘The Sex-Trafficking Case Testing the Limits of the First Amendment’ Politico.com, 29 July 2018, at https://www.politico.com/magazine/story/2018/07/29/first-amendment-limits-backpage-escortads-219034. 207 Doe v MySpace, Inc 528 F3d 413 (5th Cir 2008). 208 Fields v Twitter, Inc 2016 WL 6822065 (ND Cal 2016); Cohen v Facebook, Inc 2017 WL 2192621 (EDNY, 18 May 2017). 209 Corey Omer, ‘Intermediary Liability for Harmful Speech: Lessons from Abroad’ (2014) 28 Harvard Journal of Law & Technology 289. 210 Danielle Keats Citron and Benjamin Wittes, ‘The Internet Will Not Break: Denying Bad Samaritans § 230 Immunity’ (2017) 86 Fordham Law Review 401.

The Regulation of Internet Gatekeepers  101 manner, ensuring that harmful content is removed voluntarily.211 However, this would increase the number of decisions that restrict the freedom of speech without any legal guarantee, as discussed above. As Anthony Fargo notes, the consolidation of the quite different approaches followed in the US and Europe is inevitable sooner or later.212 While large American online businesses necessarily adapt the services they provide in Europe to the local regulations, their operations and principles are deeply rooted in the liberal, anti-paternalist and pragmatic tradition of American policies and the First Amendment, which creates complicated conflicts and inconsistencies in their day-to-day operations.

211 Andrew M Sevanian, ‘Section 230 of the Communications Decency Act: A “Good Samaritan” Law without the Requirement of Acting as a “Good Samaritan”’ (2014) 21 UCLA Entertainment Law Review 122. 212 Anthony Fargo, ‘ISP liability in the United States and Europe: Finding Middle Ground in an Ocean of Possibilities’ in András Koltay (ed), Comparative Perspectives on the Fundamental Freedom of Expression (Budapest, Wolters Kluwer, 2015).

3 Internet Service Providers I. INTRODUCTION

A

n internet access service (in this chapter, ‘Internet service provider’ or ‘ISP’) is ‘a publicly available electronic communications service that provides access to the Internet, and thereby connectivity to virtually all end points of the Internet, irrespective of the network technology and terminal equipment used’.1 For the purposes of this chapter, we shall consider not only those service providers providing Internet service for profit – usually for a regular monthly subscription fee – that are at the same time the owners of the physical infrastructure of the internet network, as ISPs, but also any private individuals or undertakings that provide Internet access to others via a wireless network (Wi-Fi) or in any other manner (eg, shared home Internet access through a WLAN router). The activities of these service providers are indispensable for accessing the Internet, and the owners and operators of the cable networks usually also have a monopoly in any given geographic territory. Internet service providers are typically thought of as neutral technical service providers, although their activities can also fundamentally affect the transmission of free speech. Although it is true that they usually do not interfere with the flow of information transmitted to their users over their networks, save for traffic management (in the technical sense), in specific cases they nevertheless do carry out a unique editing activity, insofar as they block or filter the available content, or just slow down access to it. The apparently neutral technical provider status seems to require the application of a corresponding neutral regulatory analogy. Directive 2002/22/EC considers access to telephone services at fixed locations, including ‘to make and receive … data communications, at data rates that are sufficient to permit functional internet access’ as a ‘universal service’; everyone must be ensured non-discriminatory access to these services.2 This provision does not, in itself, reveal anything about the restriction on or prohibition of filtering, blocking or slowing down by the service providers, however. The telephone service qualifies as a common carrier service in the US, with obligations similar to those 1 Regulation (EU) 2015/2120 of the European Parliament and of the Council of 25 November 2015 laying down measures concerning open internet access and amending Directive 2002/22/EC on universal service and users’ rights relating to electronic communications networks and services and Regulation (EU) no 531/2012 on roaming on public mobile communications networks within the Union ‘Regulation on open Internet access’), Art 2, definition of ‘internet service provider’. 2 Directive 2002/22/EC of the European Parliament and of the Council of 7 March 2002 on universal service and users’ rights relating to electronic communications networks and services (Universal Service Directive), Arts 3–4.

of the European universal service,3 including non-discrimination and the prohibition of interference by the service provider in the flow of information. One of the cornerstones of the network neutrality debate to be discussed in this chapter is that for many years the Federal Communications Commission (FCC) in the US did not consider ISPs to be common carriers, and hence they were not subject to the above obligations. The issue of freedom of speech of ISPs did not arise for a long time. Internet service providers were considered merely as undertakings allowing others to exercise their freedom of speech.4 However, the situation is more complex than that. Today, not only must the relation between the state and Internet users be taken into account in this context (where the state can limit the freedom of speech of its citizens through courts, by obliging the ISPs to block or take down certain content), but also ISPs’ own freedom of speech, through the exercise of which they can limit the freedom of speech of users and their access to information. Internet service providers thus act as gatekeepers, and as such the state needs not only to raise barriers to its own limiting activity, but also to consider the issue of regulating ISPs, with the goal of protecting the freedom of users. Many argue that ISPs should be forced into a passive role, not interfering with freedom of speech, although at the same time the state strongly relies on them to make unlawful content inaccessible.5 This kind of restriction, however, affects not only the free speech of ISPs but also their property, since in most cases they actually own the networks.

II. OBLIGATIONS OF THE INTERNET SERVICE PROVIDERS REGARDING ILLEGAL CONTENT

A.  Blocking and Filtering Internet service providers are capable of making certain content inaccessible by filtering or blocking it. This filtering activity is inherently imperfect, since on the one hand a filter can never identify all filterable content, and on the other hand there is always content that is actually filtered though it should not have been according to the filter criteria. What is more, information technology-savvy users can always circumvent most of the filters. However, this argument against filtering is not strong enough, since the truth is that most of the content concerned can be closed off from most users with this solution anyway. Regardless of this, filtering and blocking are not the preferred tools in democratic societies, because they resemble old-school censorship in many ways: they prevent the content concerned from reaching the audience, instead of adopting the more acceptable solution of imposing the threat of a posteriori punishment if an infringement arises. States therefore only apply filtering and blocking in a specific and relatively limited range of instances. At the same time, they do not necessarily raise objections to the voluntary filtering applied by ISPs, whereby the latter aim to create a safe online environment for their users. The Committee of Ministers of the Council of Europe proposed to the States



3 Communications Act of 1934, as amended by the Telecom Act 1996, Title II. 4 Joris van Hoboken, Search Engine Freedom (Alphen aan den Rijn, Wolters Kluwer, 2012) 121. 5 ibid 123.

Parties in its recommendation of 2008 that they should prepare guidelines for the entities applying Internet filters, thereby preventing the filtering action from limiting the enforcement of Article 10 of the ECHR.6 The study, published by the Council of Europe in 2015, provides a brief summary of the respective practices applied by the individual Member States. It showed that a significant proportion of European states had not implemented an independent regime for filtering and blocking, with the applicable provisions usually placed in different pieces of legislation of a general scope.7 The most frequent reasons for filtering are child protection (to prevent child pornography, paedophilia and harassment), actions aimed at fighting illegal online gambling, the fight against terrorism, national security, and the protection of intellectual property, personality rights and personal data.8 Up to this day, the ECtHR has not handed down any decisions regarding cases involving the complex questions of detail of filtering and blocking; however, in two Turkish cases, the Court made it clear that the complete blocking of an online service (Google Sites and YouTube in the two cases), on the grounds that the given service had hosted, as an intermediary, specific user content and this allegedly constituted a criminal act, is not permissible. According to the judgments, the state’s measure violated the rights of other users to freedom of expression; their freedom to use the Internet intermediaries concerned followed from their right to freedom of expression.9 In the Yıldırım case, the mandatory blocking required by the state also made the applicant’s page on Google Sites inaccessible on the ground that another Google Sites page allegedly insulted the memory of President Atatürk. In the Cengiz case, the applicants were university academics whose free speech was violated as a result of their access to information and opinions being restricted, since the service provider blocked the whole of YouTube and not only the allegedly infringing content. The US legal system envisages strict scrutiny regarding any legislation that would require ISPs to apply blocking, and if any other means more lenient than the restriction applied are available (the least restrictive means), it is highly probable that such regulation would be found to be in conflict with the freedom of speech.10 In such cases, according to the Supreme Court, the solution that is compatible with the First Amendment is for users to apply filtering software based on their own decision, rather than imposing a state obligation on the service providers.11

6 Recommendation CM/Rec(2008)6 of the Committee of Ministers to member states on measures to promote the respect for freedom of expression and information with regard to internet filters (adopted on 26 March 2008). 7 Comparative Study on Blocking, Filtering and Take-Down of Illegal Internet Content (Strasbourg, Council of Europe, 2017) 12, at https://edoc.coe.int/en/internet/7289-pdf-comparative-study-on-blockingfiltering-and-take-down-of-illegal-internet-content-.html. The study is based on the research carried out by the Swiss Institute of Comparative Law, based on which the reports from all states concerned are available at https://www.coe.int/en/web/freedom-expression/study-filtering-blocking-and-take-down-of-illegal-content-onthe-internet. 8 Comparative Study (n 7) 14–16. 9 Ahmet Yıldırım v Turkey, App no 3111/10, judgment of 18 December 2012; Cengiz and Others v Turkey, App nos 48226/10 and 14027/11, judgment of 1 December 2015. 10 Ashcroft v ACLU 535 US 564 (2002); Ashcroft v ACLU 542 US 656 (2004). 11 Ashcroft v ACLU 542 US 656, 664–68 (2004).
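The over- and under-blocking problem described at the beginning of this section can be made concrete with a deliberately simplified sketch. The example below is purely illustrative and assumes a naive keyword blocklist applied to URLs; real ISP-level filters rely on URL or DNS blacklists and traffic analysis, and the domain names, blocklist terms and requests used here are all invented. The point is only that any filter criterion simultaneously lets some targeted content through and catches some lawful content.

# Hypothetical sketch: a naive blocklist filter and its two failure modes.
# Under-blocking: a trivially altered address evades the filter.
# Over-blocking: lawful content mentioning a blocked term is caught.

BLOCKLIST = {"illegal-casino", "pirated-films"}  # assumed filter criteria

def is_blocked(url: str) -> bool:
    """Return True if any blocklisted term appears in the URL."""
    return any(term in url.lower() for term in BLOCKLIST)

requests = [
    "https://illegal-casino.example/roulette",       # intended target: blocked
    "https://illegal-cas1no.example/roulette",       # evasion: slips through (under-blocking)
    "https://news.example/report-on-pirated-films",  # lawful reporting: wrongly blocked (over-blocking)
    "https://library.example/copyright-basics",      # unrelated: allowed
]

for url in requests:
    print("BLOCKED" if is_blocked(url) else "ALLOWED", url)

Neither failure mode can be engineered away entirely, which is one reason why the proportionality of filtering and blocking obligations attracts such close scrutiny.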

Obligations of the Internet Service Providers Regarding Illegal Content  105 B. Self-regulation Internet service providers may also apply filtering and blocking voluntarily, in the form of a kind of self-regulation. In the US, Section 230 of the Communications Decency Act (CDA)12 exempts ISPs from liability arising on the basis of infringing content available through their system; however, it supports voluntary, bona fide filtering and blocking, aimed at ensuring the provision of a safe Internet service. The 1998 Digital Millennium Copyright Act, aiming to provide protection to copyrights in the Internet environment, has similar provisions.13 In the UK, there has been cooperation between the Government and ISPs since 2013; they accordingly apply network-level filtering, even without a legal obligation, and provide a filtered Internet to all of their users, except those who expressly ask for the removal of filtering.14 Later, the filtering of pornographic content became a statutory obligation pursuant to the Digital Economy Act 2017.15 Voluntary filtering by ISPs became widespread in Europe too. In the US this practice would be considered ‘censorship’,16 or the ‘end of internet freedom’.17 The Recommendation of the Council of Europe related to the public service value of the Internet highlights that states are required to identify the roles and responsibilities of all parties concerned, under clear legal frameworks, and calls upon the governments of the States Parties to encourage the private sector to acknowledge and familiarise itself with its evolving ethical roles and responsibilities. The key actors of online communication must be accountable under certain (primarily self- and co-) regulatory measures.18 The Council of Europe issued an explicit warning regarding voluntary filtering, stating that this action should be considered based on the tests applied in terms of the limitation of freedom of speech by the state, with due consideration of the fact that both have a similar result.19 A similar admonition is included in the issue paper prepared at the request of the Council of Europe’s Commissioner for Human Rights: There are serious doubts as to whether a blocking system that effectively imposes a restriction on most ordinary people’s access to online information will ever be in accordance with the rule of law when it is chosen and operated by private parties, in the absence of public scrutiny, in the absence of a democratic debate, in the absence of a predictable legal framework, in the absence of clear goals or targets, in the absence of evidence of effectiveness, necessity and

12 Communications Decency Act 47 USC § 230. 13 Digital Millennium Copyright Act 17 USC § 512(g)(1). 14 Ofcom, Report on Internet Safety Measures: Internet Service Providers – Network Level Filtering Measures, 22 July 2014, see at https://www.ofcom.org.uk/__data/assets/pdf_file/0019/27172/Internet-safety-measuressecond-report.pdf. 15 Digital Economy Act 2017, pt III. 16 Dawn Nunziato, ‘How (Not) to Censor: Procedural First Amendment Values and Internet Censorship Worldwide’ (2011) 42 Georgetown Journal of International Law 1123. 17 Dawn Nunziato, ‘The Beginning of the End of Internet Freedom’ (2014) 45 Georgetown Journal of International Law 383. 18 Recommendation CM/Rec(2007)16 of the Committee of Ministers to member states on measures to promote the public service value of the internet. 19 Declaration of the Committee of Ministers on the protection of freedom of expression and freedom of assembly and association with regard to privately operated internet platforms and online service providers, adopted by the Committee of Ministers on 7 December 2011.

proportionality, and in the absence, either before or after the system is launched, of any assessment of possible counter-productive effects.20

The Recommendation of the Committee of Ministers of 2015 proposes that voluntary filtering and self-regulation should be brought into line with human rights standards.21 The 2015 EU Regulation on Network Neutrality gave a reassuring answer to this question.22 According to this Regulation, blocking (in addition to slowing down, modification, etc) is prohibited in general, apart from a few exceptions specified in the Regulation. Hence voluntary blocking without appropriate grounds is not permissible,23 and this requires the respective practice of service providers to be modified from that applied previously.

C. Injunctions against Internet Service Providers

The notice and takedown procedure discussed in general in chapter 2, required for hosting service providers pursuant to Article 14 of the E-Commerce Directive,24 is not only familiar from the Directive but is also present in copyright law, defamation law and privacy law. These rules force hosting service providers into a kind of editing role, whereby they filter and block in order to take action against illegal content. By contrast, ISPs are not subject to any of these obligations. At the same time, the Directive also provides conditional exemption to ISPs. Article 12 of the Directive considers ISPs as ‘mere conduits’, and Article 13 may also be relevant for them, as long as the given ISP performs caching (the rules laid down in Article 13 resemble those specified under Article 12 to a great extent, hence only the latter is reproduced below):

Article 12
‘Mere conduit’
1.

Where an information society service is provided that consists of the transmission in a communication network of information provided by a recipient of the service, or the provision of access to a communication network, Member States shall ensure that the service provider is not liable for the information transmitted, on condition that the provider: (a) does not initiate the transmission; (b) does not select the receiver of the transmission; and (c) does not select or modify the information contained in the transmission.

2.

The acts of transmission and of provision of access referred to in paragraph 1 include the automatic, intermediate and transient storage of the information transmitted in so far as this takes place for the sole purpose of carrying out the transmission in the communication network, and provided that the information is not stored for any period longer than is reasonably necessary for the transmission.

20 The Rule of Law on the Internet and in the Wider Digital World, CommDH/IssuePaper(2014)1 of 8 December 2014, 72, at https://rm.coe.int/16806da51c. 21 Free, transboundary flow of information on the Internet: Recommendation CM/Rec(2015)6. 22 Regulation on open Internet access (n 1). 23 ibid Art 3(3). 24 Directive 2000/31/EC of the European Parliament and of the Council of 8 June 2000 on certain legal aspects of information society services, in particular electronic commerce, in the Internal Market (Directive on electronic commerce) [‘E-Commerce Directive’].

3. This Article shall not affect the possibility for a court or administrative authority, in accordance with Member States’ legal systems, of requiring the service provider to terminate or prevent an infringement.

Hence, if the ISP is passive regarding the content (meaning it was not produced or modified by the ISP, data transmission was not initiated by the ISP and the recipients were not selected by the ISP) then it is exempted from liability. However, based on the authorisation granted under Article 12(3) (or under Article 13(2) in terms of caches), the Member States may oblige the ISP to make infringing content inaccessible after it becomes aware of such an infringement, or to take any other action suitable for preventing that infringement. In McFadden v Sony,25 the courts’ room to manoeuvre against ISPs according to ­Article  12 was at stake. Tobias McFadden operated a local wireless network in the direct vicinity of his retail shop, offering free and anonymous Internet access to the general public. He used the services of a telecommunications undertaking for providing the Internet access. He deliberately did not restrict access to the network, so as to draw the attention of the customers of neighbouring stores, passers-by and neighbours to his enterprise. Musical works were made available for download on the Internet through this Wi-Fi network without the permission of the owners of the respective rights.­ McFadden claimed that he had not committed any infringement, but he could not exclude the possibility that an infringement had been carried out by one of the users of his network. First of all, the European Court of Justice (ECJ) established that service providers allowing access to these types of Wi-Fi networks also fall under the scope of Article 12 in terms of the E-Commerce Directive.26 Article 12(1) stipulates that Member States shall ensure that the providers of services ensuring access to a communication network are not liable for the information transmitted by the recipients of the service; hence if the conditions for this are met according to Article 12, it is excluded for owners of copyright to claim compensation from the service provider on the grounds that others used the network connection to infringe their copyright. However, if a third party uses the Internet connection made available to it by a service provider, enabling access to a communication network for committing the infringement, then Article 12(1) does not preclude the possibility for the person harmed by the copyright infringement to request the courts or national authorities to prevent the service provider from allowing that infringement to continue.27 The person harmed by the copyright infringement may claim compensation from the service provider granting access to the communication network if third parties used the network access to infringe the rights of that person after the service provider had been required by the authority or the court to prevent or terminate the infringement.28 The Copyright Directive of the EU contains a specific rule to protect copyright, requiring the Member States to ensure that holders of intellectual property rights have 25 Case C-484/14, Tobias McFadden v Sony Music Entertainment Germany GmbH, judgment of 15 September 2016. 26 ibid para 43. 27 ibid para 77. 28 ibid para 79.

the possibility to request an injunction in order to protect their rights, even, if necessary, against the Internet intermediaries used by the infringing person to commit the infringement.29 In the UPC Telekabel case,30 the ECJ ruled that an injunction can even be requested against the ISP based on Article 8(3) of the Directive, whereby the ISP is ordered to block the page resulting in the copyright infringement, provided that: (1) the access provider can choose the specific measures to implement the order and would not incur penalties for breach of the order if it had taken all reasonable measures (ie, right to conduct business); (2) the order is strictly targeted and does not prevent access to lawful content and users are able to challenge the order once implemented (ie, the user right to freedom of information); (3) the order would have ‘the effect of preventing unauthorised access to the protected subject-matter or, at least, of making it difficult to achieve and of seriously discouraging internet users who are using the services of the addressee of that injunction from accessing the subject-matter that has been made available to them in breach of the intellectual property right’ (ie, effectiveness).31

In contrast, Article 15 of the E-Commerce Directive precludes in general (and hence also for ISPs) the imposition of an obligation of monitoring. This was confirmed by the European Court judgment handed down in Sabam v Scarlet,32 where the applicant, a Belgian collective rights management organisation (SABAM), wanted to request the ISP to continuously monitor copyright infringement of the copyrighted works managed by the applicant, based on Article 8(3) of the Copyright Directive. According to the Court, not only would imposing such an obligation on the service provider be contrary to the E-Commerce Directive, but also its implementation (which would require the monitoring of the total data traffic) would be impossible.33 Injunctions can also be requested against ISPs, at least in principle, in cases involving the protection of personality rights and defamation, in order to ensure the particular infringement is terminated.34 In an English case, Bunt v Tilley,35 the court was required to clarify this in some detail. Here, the claimant requested that three large ISPs be held liable for defamatory posts written about him, saying that the publication of the infringing content was made possible as a result of the activities of the service providers. The claimant requested that the provision of service to the publishers of the said content be terminated by the ISPs. However, Judge Eady found that the ISPs were not liable for libel, since they could not be considered ‘publishers’ in terms of defamation law due to their

29 Directive 2001/29/EC of the European Parliament and of the Council of 22 May 2001 on the harmonisation of certain aspects of copyright and related rights in the information society, Art 8(2)–(3). 30 Case C-314/12, UPC Telekabel Wien GmbH v Constantin Film Verleih GmbH and Wega Filmproduktions­ gesellschaft mbH, judgment of 27 March 2014. 31 Uta Kohl, ‘Intermediaries within Online Regulation’ in Diane Rowland, Uta Kohl and Andrew ­ Charlesworth (eds), Information Technology Law, 5th edn (London, Routledge, 2016) 95; see UPC Telekabel (n 30) paras 51–57 and 62. 32 Case C-70/10, Scarlet Extended SA v Société belge des auteurs, compositeurs et éditeurs SCRL (SABAM), judgment of 24 November 2011. 33 ibid paras 38–39. 34 Comparative Study (n 7) 16. 35 Bunt v Tilley [2006] EWHC 407 (QB), [2007] 1 WLR 1243.

passive role.36 In this case, defamation law did not allow a solution that would otherwise have been permissible under Article 12(3) of the Directive.37 In June 2018, the UK Supreme Court ruled that the ISPs did not have to bear the costs of implementation of a blocking injunction.38 The decision argues that common European legislation does not set rules for allocation of costs of intermediary injunctions, and that ‘an innocent intermediary is entitled to be indemnified by the rights-holder against the cost of complying with a website-blocking order’.39

III. THE PROBLEM OF NETWORK NEUTRALITY

A.  Theoretical Questions Based on the principle of network neutrality (landline and mobile), ISPs may not discriminate with regard to the data and content transmitted over their networks, and as such they must provide an open Internet and must be neutral regarding the content and its providers.40 According to this principle, first elaborated by Tim Wu in 2003,41 the traffic management practice of ISPs must be neutral in terms of the transmitted content, the application, the terminal equipment connected to the network, and the IP addresses of the sender and receiver,42 meaning that ISPs are not allowed to transmit content at different speeds or demand extra fees for services transmitted faster, and they may not filter and block content at their own discretion (except if ordered to by a court or other authorities). If network neutrality is not enforced, ISPs would be able to collect substantial revenues if they charged extra money for faster transmission of certain content, and similarly they could slow down the transmission of content from non-paying service providers. Such a practice could damage the free speech of content providers and the users’ interests in relation to obtaining information. Ignoring network neutrality is unethical; it restricts market competition, reduces innovation, can damage privacy and is not transparent.43 At the same time, reasonable and legitimate traffic management, carried out in order to meet technical requirements or comply with the decisions of authorities or courts (such as the blocking of illegal content), is not incompatible with network neutrality. Although the requirement of network neutrality prohibits ISPs from any arbitrary and illegitimate interference in the process of data transmission, the fulfilment of requests by the state that are substantiated by appropriate decisions is not prohibited.

36 ibid 36–37. 37 For more details on this case, see Uta Kohl, ‘The Rise and Rise of Online Intermediaries in the Governance of the Internet and Beyond: Connectivity Intermediaries’ (2012) 26 International Review of Law, Computers & Technology 185, 192–93. 38 Cartier International AG and Others v British Telecommunications Plc and Another [2018] UKSC 28. 39 ibid point 31. 40 Rowland, Kohl and Charlesworth (eds) (n 31) 87. 41 Tim Wu, ‘Network Neutrality, Broadband Discrimination’ (2003) 2 Journal on Telecommunications and High Technology Law 141. 42 Balázs Bartóki-Gönczy, ‘Attempts at the Regulation of Network Neutrality in the United States and in the European Union: The Route Towards the “Two-speed” Internet’ in András Koltay (ed), Media Freedom and Regulation in the New Media World (Budapest, Wolters Kluwer, 2014) 117. 43 Rowland, Kohl and Charlesworth (eds) (n 31) 88.
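To make the preceding principle more tangible, the following sketch contrasts neutral queuing with the kind of commercially motivated prioritisation that network neutrality rules prohibit. It is a hypothetical illustration only: the provider names and the ‘paid fast lane’ are invented, and real traffic management operates on packets and network equipment rather than Python objects. The point is that a rule which orders traffic by who has paid, rather than by objective technical requirements, is exactly the discrimination described above.

# Purely illustrative sketch of non-neutral traffic handling.
# A "paid fast lane" is modelled as a per-provider priority bonus; a neutral
# network would apply the same queueing rule to every packet, whatever its origin.

import heapq

PAID_FAST_LANE = {"bigstream.example"}  # hypothetical content provider paying an extra fee

def priority(packet: dict) -> int:
    # Lower value = forwarded earlier. A neutral ISP would return the same
    # value for every packet, regardless of who sent it.
    return 0 if packet["origin"] in PAID_FAST_LANE else 1

packets = [
    {"origin": "smallblog.example", "payload": "article"},
    {"origin": "bigstream.example", "payload": "video chunk"},
    {"origin": "startup-voip.example", "payload": "call data"},
]

queue = [(priority(p), i, p) for i, p in enumerate(packets)]
heapq.heapify(queue)
while queue:
    _, _, packet = heapq.heappop(queue)
    print("forwarding", packet["origin"], "->", packet["payload"])

Under a neutrality rule such as Article 3(3) of the 2015 EU Regulation discussed later in this chapter, only the neutral variant – a constant priority, subject to reasonable, technically justified traffic management – would be permissible.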

110  Internet Service Providers If we consider ISPs as common carriers as under US law, the legal obligation of neutrality, similar to that of the operators of fixed (landline) telephone networks, can be justified. This is an age-old debate in US law. Certain large, well-funded and usually monopolistic ISPs (Comcast, Verizon, AT&T) obtained significant revenues from extra fees collected from large content providers (such as Netflix), and spent huge amounts in the US on lobbying against the introduction of network neutrality.44 However, the advocates of this principle do not see this as the revival of the romantic Internet concept of the 1990s but as a fundamental and legitimate expectation of today’s society and an essential component of Internet use, which could also be termed a constitutional right.45 As Amit Schejter and Moran Yemini note, the former, traditional ‘speaker and state’ relationship is no longer capable of describing the more complex, multiple speaker mode of public communication, and hence is no longer capable of protecting freedom of speech. On the Internet, it is not only the producer and publisher of the content, but also the intermediary service provider, and in certain cases the ISP, that can claim the right to freedom of speech.46 According to the authors, the network neutrality rule is very similar to the ‘must carry’ rule imposed on cable providers, which restricts their freedom to select the broadcasting services they wish to be transmitted by them, but this rule was nevertheless approved by the Supreme Court as being content neutral,47 not targeted at speech and as such constitutional, and rules similar to this are also widespread in Europe. Internet service providers, similarly to cable providers, are both transmitters and ‘editors’ at the same time, even if the Internet audience has a much stronger influence on the content available to and consumed by them. Nevertheless, as a result of their ability to select from among the transmitted content, ISPs too have some editorial discretion,48 which, according to the opponents of network neutrality, deserves to be protected under the freedom of speech.49 Limiting this – by the requirement of network neutrality – is carried out in order to protect the free speech of others and does not set disproportionate limits on the similar freedom of ISPs, the expressive content of which, by exercising this editorial discretion, can indeed hardly be identified (or is actually absent). In this scheme, the interests of the content provider and the audience outweigh the editorial discretion of ISPs. ­Internet service providers do not produce content and do not perform any activities similar to those of journalists, for instance, when they rank the services transmitted

44 Timothy Garton Ash, Free Speech. Ten Principles for a Connected World (London, Atlantic Books, 2016) 359. 45 Christoph B Graber, ‘Bottom-Up Constitutionalism: The Case of Net Neutrality’, i-call working paper, University of Zurich, no 2017/01; cf the idea of recognising ‘the right to Internet access’ as a fundamental human right, in ch 2, section I.F.v. 46 Amit M Schejter and Moran Yemini, ‘“Justice, and Only Justice Shall Pursue”: Network Neutrality, the First Amendment and John Rawls’s Theory of Justice’ (2007) 14 Michigan Telecommunications and Technology Law Review 138, 162. 47 Turner Broadcasting System v FCC 512 US 622 (1994); and Turner Broadcasting System v FCC 520 US 180 (1997). 48 Schejter and Yemini (n 46) 167. 49 Andrew Patrick and Eric Scharphorn, ‘Network Neutrality and the First Amendment’ (2016) 22 Michigan Telecommunications and Technology Law Review 93, 129.

by them or decide to slow down or even block certain services.50 If the prophecy about the best forum for facilitating speech and participation in the public sphere, as described in Reno v ACLU,51 is to become a reality, ISPs need to be regulated in harmony with First Amendment values, so that they can truly serve the marketplace of ideas.52

B. Regulation in the United States

So far the FCC has made three major attempts to introduce network neutrality: in the Comcast case,53 in the ‘Open Internet’ regulation54 and finally again in 2015.55 The results of the first two attempts were annulled by the courts, whereas the Order of 2015 was revoked by the Donald Trump Administration at the end of 2017. The categorisation of Internet service and its insertion into the conceptual framework of the Telecommunications Act 1996 lie at the root of this problem. Starting in 2002,56 for a long time the FCC refused to identify ISPs as common carriers, which would have required the service providers to serve everyone without discrimination and to fulfil the obligations specified under Title II of the Act.57 In contrast, an Internet service qualified as an ‘information service’ according to the currently effective deregulation trend, with a much narrower set of obligations.58 Over time, the FCC realised that the absence of public service obligations impeded competition and innovation, and consequently it made an attempt to introduce certain neutrality rules, leaving the classification as an information service intact, which was obviously bitterly opposed by the ISPs concerned, referring to their editorial discretion.59 After the first attempt was annulled on jurisdictional grounds60 and hence failed, the FCC drew up a new network neutrality order on several legal bases at the same time. On the one hand, it was now grounded on a general authorisation under Section 706 of the Act:

The Commission and each State commission with regulatory jurisdiction over telecommunications services shall encourage the deployment on a reasonable and timely basis of advanced telecommunications capability to all Americans … by utilizing, in a manner consistent with the public interest, convenience and necessity, price cap regulation, regulatory forbearance,

50 Nicholas Bramble, ‘Ill Telecommunications: How Internet Infrastructure Providers Lose First Amendment Protection’ (2010) 17 Michigan Telecommunications and Technology Law Review 68, 107. 51 Reno v ACLU 521 US 844 (1997). 52 Dawn Nunziato, ‘First Amendment Values for the Internet’ (2014) 13 First Amendment Law Review 282, 313–14. 53 Comcast Corp v FCC 600 F3d 642 (2010). 54 FCC Open Internet Order 2010, GN Docket no 09-191, adopted on 21 December 2010. 55 FCC Open Internet Order 2015, GN Docket no 14-28, adopted on 26 February 2015. 56 Deregulation was carried out in several stages: in terms of the cable providers, Appropriate Regulatory Treatment for Broadband Access to the Internet Over Cable Facilities, FCC 02-77 (2002) (Cable Broadband Order); in terms of the other landline Internet service providers, Appropriate Regulatory Treatment for Broadband Access to the Internet Over Wireless Networks, 22 FCCR 5901 (2007); in terms of mobile operators, Appropriate Framework for Broadband Access to the Internet Over Wireline Facilities (2007). 57 James Grimmelmann, Internet Law: Cases and Problems (Oregon City, OR, Semaphore, 2017) 573. 58 ibid 574. 59 Susan Crawford, ‘First Amendment Common Sense’ (2014) 127 Harvard Law Review 2343, 2364, 2391. 60 Verizon v FCC 740 F3d 623 (DC Cir 2014).

measures that promote competition in the local telecommunications market, or other regulating methods that remove barriers to infrastructure investment.

On the other hand, broadband Internet service was classified this time as a telecommunications service, and hence as a common carrier, thus requiring compliance with the specific obligations laid down in Title II of the Act, which are compatible with the requirements of network neutrality.61 These rules can be considered network neutral, and therefore, in contrast to free speech, only intermediate scrutiny is required regarding them.62 Although this solution would presumably have passed the test of the courts too, under the Trump Administration the FCC, with a new composition, applying a libertarian approach (free from regulation), revoked the Order of 2015.63 A typical reaction to this was to envision the elimination of a ‘free’ Internet as a result of the decision – while the FCC called the decision on revoking the Order ‘the restoration of internet freedom’.64 While freedom of speech was previously thought to be in danger from the restrictive power of the Government, this time the lack of state regulation to restrict private companies with significant market power led to a similar reaction.

C. Regulation in Europe

The Declaration of the Committee of Ministers of the Council of Europe on network neutrality (2010) specified the requirement that traffic management carried out by the ISPs (that is, when they make certain content hard to access in order to promote other content) or the limitation of access to certain content needs to be justified by overriding public interests.65 This document has not had a binding effect so far. Already prior to this, back at the time of the 2009 amendment of Directive 2002/21/EC of the European Parliament and of the Council of 7 March 2002, the principle of network neutrality appeared in EU regulations, but here the application of the principles of non-discrimination and transparency was indicated as a duty of the communications authorities only.66 Following this, the EU adopted a Regulation on Open Internet in 2015.67 According to Article 3(3) of this Regulation:

Providers of internet access services shall treat all traffic equally, when providing internet access services, without discrimination, restriction or interference, and irrespective of the sender and

61 FCC Open Internet Order 2015 (n 55) paras 274–87. 62 Dawn Nunziato, ‘By Any Means Necessary? The FCC’s Implementation of Net Neutrality’ (2009) 8 First Amendment Law Review 138, 172. 63 ‘Restoring Internet Freedom’ at https://www.fcc.gov/restoring-internet-freedom. 64 ‘FCC Invokes Internet Freedom While Trying to Kill It’ (editorial) New York Times (29 April 2017) at https://www.nytimes.com/2017/04/29/opinion/sunday/fcc-invokes-internet-freedom-while-trying-to-kill-it.html. 65 Declaration on network neutrality (adopted by the Committee of Ministers on 29 September 2010). 66 Directive 2009/140/EC of the European Parliament and of the Council of 25 November 2009, amending Directives 2002/21/EC on a common regulatory framework for electronic communications networks and services, 2002/19/EC on access to, and interconnection of, electronic communications networks and associated facilities, and 2002/20/EC on the authorisation of electronic communications networks and services. See the new para (5) of Art 8 integrated into Directive 2002/21/EC (Framework Directive). 67 Regulation on open Internet access (n 1).

receiver, the content accessed or distributed, the applications or services used or provided, or the terminal equipment used.

Based on the Regulation (applicable since 2016), directly effective in the EU Member States, ISPs are not allowed to discriminate between the data, content and services transmitted over their networks. This does not, however, preclude the application of ‘reasonable traffic management measures’, as long as they are transparent, nondiscriminatory and proportionate, and are not based on commercial considerations but on technical requirements for ensuring quality of service.68 Similarly, it is not precluded that the service provider, without violating the requirement of ensuring equal treatment to similar services, may discriminate between different services, and although the Regulation does not expressly name them, it refers to zero-rating services and the provision of managed services.69 ­Illegal content can only be filtered or blocked in order to comply with EU or national law, or pursuant to the decisions of courts or authorities passed for the sake of such compliance; hence voluntary filtering and blocking by the ISPs are not permitted.70 IV.  CENSORSHIP BY INTERNET SERVICE PROVIDERS

Robert McChesney and John Foster raise the possibility that in the US, since the communications networks, which at an earlier stage of technical development used to qualify as public property, have become the private property of the communications service providers, and may already be protected by the Constitution, private undertakings may in future carry out censorship activities, discriminating between opinions and defining the possibilities of access. According to the US legal approach, such private undertakings are not subject to any of the obligations attached to the freedom of the press.71 Dawn Nunziato cites numerous examples of how ISPs interfere with the free flow of opinions. These examples indicate that, contrary to popular belief, the largest US ­Internet companies employ these practices not only to further their economic interests, with the direct purpose of making a profit, but also to favour certain political opinions, by applying a special kind of private censorship. Internet service providers can block e-mail campaigns, thereby preventing messages from reaching their audience via their systems. Remarks criticising President George W Bush made by the singer of Pearl Jam were muted by the webcasting provider airing the concert, and hence censored, a move it later described as an ‘error’,72 whereas in other cases the Voice over Internet Protocol (VoIP) service that users wanted to use was made inaccessible.73 Emily Laidlaw mentions a case

68 ibid Art 3(3). 69 ibid Arts 3(2) and 3(5). 70 ibid Art 3(3)(a). 71 John B Foster and Robert W McChesney, ‘The Internet’s Unholy Marriage to Capitalism’ (2011) 62 The Monthly Review 1, at http://monthlyreview.org/2011/03/01/the-internets-unholy-marriage-to-capitalism/. 72 ‘AT&T Calls Censorship of Pearl Jam Lyrics an Error’ Reuters.com (9 August 2007) at https://reut. rs/2JIGd3u. 73 Dawn C Nunziato, Virtual Freedom. Net Neutrality and Free Speech in the Internet Age (Stanford, CA, Stanford University Press, 2009) 5−11.

where a Canadian ISP blocked a website promoting a trade union when it had a labour law dispute with its own employees.74 Others cite similar cases.75 In cases like these, the majority of which never come to light anyway due to the nature of the interventions, it would be hard to contest that the state has a duty, in addition to refraining from interfering with the exercise of free speech, to preclude the possibility of interference by private parties. Putting the principle of network neutrality into practice can serve this purpose too. At the same time, it is hard to ignore the ironic character of this phenomenon. If all the ISPs do is to let others’ data and content flow through their own networks, one could argue that their activity – even if one does not overlook any eventual selection made by them between the different content, whether for technical reasons or upon a court order – does not have a sufficiently expressive character to be considered an act of exercising freedom of speech.76 However, when the service provider interferes with the flow of content in a direct and arbitrary manner (as, for instance, with the Pearl Jam concert), such an activity has a clear message (even if it means different things to different people), and it is beyond dispute that the service provider’s rights to freedom of speech have to be weighed in order to judge the case in depth. But this obviously does not preclude the state from deciding to limit the freedom of the ISP in order to protect the freedom of users and content providers.

74 Emily Laidlaw, Regulating Speech in Cyberspace (Cambridge, Cambridge University Press, 2015) 120. 75 Jennifer A Chandler, ‘A Right to Reach an Audience: An Approach to Intermediary Bias on the Internet’ (2007) 35 Hofstra Law Review 1095, 1119–22. 76 See Bramble (n 50) 77.

4 Search Engines I.  INTRODUCTION – THE ROLE OF SEARCH ENGINES IN ONLINE PUBLIC SPHERE

Online search engines typically perform three main activities: (a) they collect information available on the Internet using automated programs that follow links to jump from site to site (crawling); (b) they analyse the collected data, assign various metadata and build indexes that help to find individual sites later on; and (c) they use these indexes to select and present users with pages, ranked on the basis of the available metadata, that meet user-specified query criteria in response to searches run by users by entering a keyword or expression.1 Finding information on the Internet would be considerably more complicated without search engines. Most users utilise search engines every day as a matter of course. For all practical purposes, a piece of content not indexed by a search engine does not really exist.2 As such, search engines can be considered as media, as the information available through them has a fundamental influence on the opinions of users.3 However, by their nature, they are also different from traditional media; search engines are not the authors or publishers of the content they index and present to users but typically collect and rank the content of others. Nonetheless, this activity can be considered to be protected by freedom of speech. Joris van Hoboken argues that search engines constitute a ‘­meta-media’, as they collect and organise the content of others, but the product of their activities (the search results or rankings displayed in response to a search query) can also be regarded as their own independent content.4 According to Karol Jakubowicz, search engines are ‘information services’, and thus cannot be considered as media, but they ‘create special challenges and pose considerable risks’ to a number of values that are important in the context of press freedom, as well as to the effective application of regulations such as those aimed at the exclusion of access to infringing content, discrimination between various types of content, and influencing

1 Oren Bracha, ‘The Folklore of Informationalism: The Case of Search Engine Speech’ (2014) 82 Fordham Law Review 1629, 1636. 2 Oren Bracha and Frank Pasquale, ‘Federal Search Commission: Access, Fairness, and Accountability in the Law of Search’ (2008) 93 Cornell Law Review 1149, 1164. 3 Emily Laidlaw, Regulating Speech in Cyberspace (Cambridge, Cambridge University Press, 2015) 178. 4 Joris van Hoboken, Search Engine Freedom (Alphen aan den Rijn, Wolters Kluwer, 2012) 189.

116  Search Engines the exercise of the freedom of opinion and those intended to prevent the fragmentation of public life and the distortion of market competition.5 Search engines are not neutral, in the sense that they do not aim at providing equal access to online content;6 instead, they are designed to list hits (search results) that will most closely satisfy the needs and curiosity of the users. Naturally, search engines may return several hits for a keyword or expression, but these hits are ranked by the engine, so that the hits it considers less relevant are displayed lower in the list. It seems obvious that at some stage such ranking involves making value-based decisions regarding each individual query, a process that by definition excludes ‘neutrality’ of any kind. Search engines are indispensable tools for accessing information, and as such their creators have a responsibility for the maintenance and strengthening of freedom of speech and democratic society in general.7 The image and brand of US Internet giants commonly lay considerable emphasis on the protection of freedom of speech, and the measures taken by the company to maintain it. As Timothy Garton Ash notes, Google executives and lawyers are ‘disciples of the Church of the First Amendment’.8 At the same time, however, the various criteria used to rank hits are not transparent to the outsider. Search engines do not tell the users what gets listed and how in response to each search query, and how this thereby influences the access to information. Ash also points out that the results returned by a search engine reflect the values of the given search engine provider (what gets displayed, what is not displayed, the order of display). These values are largely determined by the business interests of the operator, who is not prevented from employing political censorship if it wants to. Search results are also influenced by the identity of the person running the search, as search engines attempt to return a list of results that are most relevant to the user. To this end, search engines usually keep a database of the search history and other online activities of users (which fact might raise privacy issues of its own). Due to the nature of such services, users tend to use only a single search engine, thereby enabling the possible emergence of a monopoly. Moreover, the business model (ie the provision of a free service to users paid for by advertisements) of search engines makes it also necessary to take the interests of advertisers into account, and this might also hinder the democratic flow of information.9 Considering (i) the nature of search engine services acting as gatekeepers (ie the continuous selection of pieces of online content and the lack of transparency in the process); (ii) the business environment of such services (ie, a focus on advertisers’ interests and a tendency toward monopoly), and (iii) the general habit of users to ignore all hits

5 Karol Jakubowicz, A New Notion of Media? Media and Media-like Content and Activities on New Communication Services (background text, Council of Europe, 2009) at www.coe.int/t/dghl/standardsetting/media/doc/New_Notion_Media_en.pdf, 3, 34–35. 6 James Grimmelmann, ‘Some Scepticism about Search Neutrality’ in Berin Szoka and Adam Marcus (eds), The Next Digital Decade: Essays on the Future of the Internet (Washington, DC, TechFreedom, 2010) 435, 442–43. 7 Anupam Chander, ‘Googling Freedom’ (2011) 99 California Law Review 1. 8 Timothy Garton Ash, Free Speech. Ten Principles for a Connected World (London, Atlantic Books, 2016) 168. Considering its dominant position in Europe and the US, Google can be considered a synonym for ‘search engine’ in this chapter. 9 ibid 169.

ranked below the third place in the search rankings,10 Jeffrey Rosen’s view that Google’s executives and lawyers ‘exercise far more power over speech than does the [US] Supreme Court’ does not seem to be much of an exaggeration.11 The implications of search engines for the protection of human rights are also reflected in the recommendations of the Council of Europe. Its Recommendation of 2012 on the protection of human rights with regard to search engines notes that search engines enable the public to search for, receive and transmit information online.12 However, the rights of individuals, especially the right to privacy and the protection of personal data, may also be violated by search engine services.13 In addition to the protection of these rights, other important criteria include guaranteeing access to the service, the diversity of the service and the unbiased treatment of pieces of content. Accordingly, the Recommendation calls for increasing transparency regarding search results (ie what pieces of content are considered as hits by the service and why, and what is the reason behind the ranking of hits) with a view to ensuring the plurality and diversity of the presented content.14 Furthermore, the Recommendation also invites service providers not to make any piece of content unavailable through their service unless they do so on the basis of a ground for limiting the freedom of expression set forth in Article 10(2) of the ECHR.15 Alas, the current regulatory framework permits search engine providers to be invited, but not obliged, to show this kind of respect for freedom of expression.

II. SEARCH RESULTS AS SPEECH

A.  Search for Analogies Search engines are rather complex, meaning that various analogies can be used to describe their operations, and it is thus worth selecting the closest analogy to facilitate the ­establishment of the most suitable regulatory framework for them. One possible analogy is that of a ‘mere conduit’. An operator acting as a mere conduit does not interfere with the compilation of a search ranking but merely performs technical and functional operations, meaning that the search rankings may not be considered as the ‘speech’ of a mere conduit operator. Consequently, a content provider may argue that a search engine is operated in an unlawful manner when it did not afford equal access to users.16 The analogy of a mere conduit must be discarded, however. On the

10 Matthew Hindman, The Myth of Digital Democracy (Princeton, NJ, Princeton University Press, 2008) 68–70. 11 Jeffrey Rosen, ‘The Deciders: The Future of Privacy and Free Speech in the Age of Facebook and Google’ (2012) 80 Fordham Law Review 1525, 1529. 12 Recommendation CM/Rec(2012)3 of the Committee of Ministers to member States on the protection of human rights with regard to search engines. 13 ibid point 4. 14 ibid point 7. 15 ibid point 8. 16 James Grimmelmann, ‘Speech Engines’ (2014) 98 Minnesota Law Review 868, 879–84.

118  Search Engines one hand, such neutral activities are unknown in the realm of media regulation,17 and on the other hand, it is impossible for a search engine to operate in an entirely neutral way, considering that selecting and ranking hits (by relevance) is the very essence of its operations. If a search engine afforded equal chances for each piece of content to be transmitted to users, it would be of no help to users in finding relevant content. The second potential analogy is that of an ‘editor’. Editors actively interfere with the flow of information. Activities that could be criticised for demonstrating bias and discrimination on the part of a mere conduit operator constitute the defining tasks of an editor; such operations are not just legal but outright indispensable in this context. The compilation of a search result is the outcome of editorial decisions, and search engines are free to select pieces of content and conceal them from users or display them prominently. This statement applies even if such activities themselves are mostly carried out by algorithms,18 since the algorithms are calibrated by human actors and triggered by human interaction (as opposed to running automatically on their own), meaning that human interaction remains an essential part of the entire process.19 One of the most powerful arguments against the editor analogy holds that a search engine may not be considered as an editor because, unlike the editor of the New York Times for example, it does not commission or produce any content on its own. While this observation is true, the editor analogy never claims that the online content available through the links ranked by a search engine would become the content of the search engine itself. On the contrary, this analogy suggests that the product of running the query (ie the list compiled upon request by the user) should be considered as ‘speech’ by the search engine, as it does in fact produce content of its own, which would not exist without the search engine. Some authors argue that search engine operators can be compared to media companies that make editorial decisions in the normal course of business.20 Just as certain media companies act as gatekeepers in the sense that they decide which information to publish and how, search engines also act as rather powerful gatekeepers, considering that they are familiar with users’ habits, and online content providers are largely dependent on them, since most of their revenues come through search engines.21 Robert Post also argues that search engine services are important components of the communications infrastructure, just like media services, because their activities are indispensable parts of the system of the online public sphere and they play a considerable role in the spread of information, serving the personal needs and interests of each and every member of the public, just like journalism does.22 It is also interesting to see that search engines are facing a confusion of their own regarding their identity. When it comes to legal liability, or if any criticism is raised

17 See section I.F.ii of ch 2 on the status of media distributors (cable providers), who are quite close to operating as mere conduits and, similarly to media service providers, are considered entities having a right to the freedom of speech. 18 See ch 2, section II.B. 19 Grimmelmann (n 16) 885–89. 20 Eric Goldman, ‘Search Engine Bias and the Demise of Search Engine Utopianism’ (2006) 8(1) Yale Journal of Law and Technology 189. 21 Grimmelmann (n 6) 447. 22 Robert Post, ‘Data Privacy and Dignitary Privacy: Google Spain, the Right to Be Forgotten, and the Construction of the Public Sphere’ (2018) 67 Duke Law Journal 981, 1016, 1041.

concerning their activities, search engines tend to portray themselves as mere conduit operators, who do not interfere with the flow of information in any way apart from the identification of relevant content pertaining to a search query23 (even though they cannot remain completely neutral, as already explained). At the same time, however, the editorial freedom of search engines has been invoked in every one of the handful of US cases relating to this matter, which would be to assume that the activities of search engines are similar to those of traditional media.24 This happened despite the fact that search engines do not ‘speak’, at least not in the sense actual editors do. Nevertheless, what they do can indeed be considered as a kind of editorial activity. As we shall see, regulatory frameworks also tend to take a similar position regarding the activities of search engines in certain cases when search engine operators are forced to take the role of an editor (eg to protect personality rights). Another important factor is that the editor analogy may have fundamentally different implications in Europe and in the US. While US authors arguing for this analogy seek to protect the First Amendment, in Europe, considering a search engine as a medium would mean that it would be expected to comply with the main objectives of media regulation (such as pluralism, diversity and remedying damage caused by the media).25 James Grimmelmann offers a third analogy that would classify search engines as advisors.26 An advisor serves and considers the interests of its users as top priorities, suggesting that the product of its activities could not be considered as ‘speech’ for the purposes of constitutional protection. An advisor is not neutral, because it would not be much help to its users if it were, but presents the most relevant results to its users and, as such, if it were to discard or hide pieces of content arbitrarily from the users, in the service of its selfish interests, this would be inconsistent with its role as an advisor. In other words, search engines acting as advisors are loyal to their users and provide them with access to the information they need.27 Grimmelmann also clarifies the limits of such access and loyalty by arguing that evil users (ie users looking for recipes for building DIY bombs for example), foolish users (ie users looking for entertaining home videos only) and simple-minded users (ie who believe anything they see online simply because they found it through Google) could be susceptible to manipulation through influencing search rankings.28 At the end of the day, advisor search engines fall somewhere between mere conduit and editor-like operators, and this category, too, is as yet unknown in media regulation. A fourth analogy proposes that search engines could be considered as the descendants of libraries. Traditional libraries (ie those with actual books on the shelves) are similar to advisors. They do not keep copies of every book but only those they think potentially useful for their readers. As financial and storage-related constraints do not exist on the Internet in this sense, the library analogy seems misleading, but its opposite could be

23 Bracha and Pasquale (n 2) 1192. 24 Langdon v Google, Inc 474 FSupp2d 622 (D Del 2007); Search King, Inc v Google Tech, Inc no ­Civ-02-1457-M, 2003 WL 21464568 (WD Okla 27 May 2003); Zhang v Baidu 932 FSupp2d 561 (SDNY 2013). 25 See ch 1, section IV.H. 26 Grimmelmann (n 16) from 893. 27 ibid 893–906. 28 ibid 907–10.

120  Search Engines worth considering: a decision is needed not to transmit a piece of content to the audience but to make it unavailable, or at least difficult to access (eg by an ISP or a search engine).29 In other words, a search engine could be viewed as a library that proposes options to a user, while hiding other options from him or her. Joris van Hoboken argues that, as libraries are typically public institutions, a state-owned search engine could be likened to a public library, including its neutrality and its role in facilitating the existence of an adequately diverse public discourse and the exchange of ideas.30 However, private libraries can curate their collections at their own discretion, and they do not need to take measures to represent the plurality of opinions – an important point considering that search engines are privately owned in the Western world. B.  Search Results as Opinions To define the relationship between search rankings generated by search engines and freedom of speech, one needs to explore the issue of how such a list could be considered as an ‘opinion’. First and foremost, it must be noted that search engines can ‘speak’ in various ways. The most typical and most frequently requested data sets are search results (which are defined by an algorithm known as PageRank in the case of Google) generated in response to search queries by users. Other ways include, for example, the Google OneBox and the Knowledge Panel, which present users with content compiled by Google in response to keywords. Even though the latter are clearly not pieces of content generated by the search engine, they are most certainly a means of making the editorial activities of a search engine directly apparent.31 While a search ranking that contains links to each hit along with a two-line segment (snippet) of each website is clearly a compilation of pieces of content produced by others, the above extra services can be regarded as content compiled by Google (albeit on the basis of the content of others), making it Google’s own content. The status of autocomplete suggestions is quite similar, as these offer complementary information to users in response to their search keywords and expressions with a view to fine-tuning the keyword or expression entered by the user. A ‘typical’ search ranking does not necessarily have the same legal status as other content displayed as Google-edited content. The following paragraphs examine the ‘opinion nature’ of the primary service provided by each and every search engine, that is of the search results. Eugene Volokh and Donald Falk take a firm stand that search results are fully protected by the freedom of speech.32 In their view, a list compiled by a service provider upon request by a user is an expression of the service provider’s opinion on sources of information that would be useful for the user and which are available online both for the search engine and the user. In the course of the process, the search engine distils the result, that is the 29 See the Dissenting Opinion of Justice Souter, the United States v American Library Association 539 US 194 (2003) 237. 30 van Hoboken (n 4) 185. 31 Tansy Woan, ‘Searching for an Answer: Can Google Legally Manipulate Search Engine Results?’ (2013) 16 University of Pennsylvania Journal of Business Law 294, 297–302. 32 Eugene Volokh and Donald Falk, First Amendment Protection for Search Engine Search Results: White Paper Commissioned by Google, UCLA School of Law Research Paper no 12-22 (2012).

Search Results as Speech  121 search ranking, from the available information, the keyword entered and known particulars of the user, pursuant to criteria determined by the service provider. The process is carried out with absolute editorial freedom and, in turn, is fully protected by the freedom of speech.33 This approach is also followed by the US courts. In contrast, Frank Pasquale argues that the larger and more influential a search engine is, the more its users tend to accept its search results as real facts in terms of the relevance, quality and importance of the listed (or not listed) websites.34 However, a statement of fact can be false, and its legal status is subject to standards that are different from those applicable to opinions. The fundamental difference between the views of Volokh and Pasquale derives from the difference in the interests they consider more important. On  the one hand, Volokh focuses on the interests of search engine operators to enjoy freedom of speech, while Pasquale, on the other hand, emphasises the interests of users to obtain search results that are as accurate, relevant and reliable as possible, as well as the interests of content providers (who actually provide the content that the users search) to exercise their freedom of speech.35 It is also important to note that users do not typically consider a search result to be the opinion of the search engine. For them, a search ranking merely plays a functional role,36 and it is not intended to be the service provider’s self-expression, participation in a public discourse or discovering the truth. It follows from the above considerations that a form of speech is not protected by the freedom of speech if it cannot be supported by any of the reasons supporting that freedom.37 The functional ‘speech’ of a search engine does not seek to identify itself with, and should not be held liable for, any violation committed by any piece of content it lists, and it does not represent any idea that would be worthy of any protection by the freedom of speech.38 Google frequently argues that its search rankings are generated by an algorithm, which is objective and free from any manipulation. However, courts afford protection for such lists on the basis of their nature as subjective opinions. This is a clear contradiction, and one that is not easily resolved.39 On the one hand, such lists could be regulated without raising constitutional concerns if they were indeed objective and not protected by the freedom of speech;40 while on the other hand, they might be protected by the freedom of speech if they were of a subjective nature, or at least susceptible to interference in the interest of the service provider, although such protection would not be absolute and its scope might be greatly different in Europe and in the US. It seems that the more subjectively a service provider interferes with the compilation of a search ranking, the more confidently it can invoke its freedom of speech and the more its search rankings will resemble actual ‘speech’. The more editorial activities are apparent on the side of the service provider (in calibrating the search algorithm and in unique human interaction),

33 ibid 8. 34 Frank Pasquale, ‘Rankings, Reductionism, and Responsibility’ (2006) 54 Cleveland State Law Review 115, 125. 35 ibid 139. 36 Bracha and Pasquale (n 2) 1198. 37 See ch 1, section II. 38 Tim Wu, ‘Machine Speech’ (2013) 161 University of Pennsylvania Law Review 1495, 1528–29; Bracha (n 1). 39 Benjamin Edelman, ‘Bias in Search Results? Diagnosis and Response’ (2011) 7 Indian Journal of Law & Technology 16, 25; van Hoboken (n 4) 214. 40 Bracha and Pasquale (n 2) 1190.

122  Search Engines the more similar it becomes to the traditional media, and the more it can share their rights and obligations. Such intervention might be required by law and might be carried out by service providers on their own initiative. Examples include the removal of violating pieces of content, the application of interim injunctions if a personality right is violated, or in the context of the right to be forgotten. Indeed, under European law, search engines are not considered mere conduit operators without liability for pieces of content they make available. It should also be noted that not even a simple ‘organic’ search result (ie a hit that is relevant to the query and is not sponsored, meaning that it is not displayed among the first results against the payment of consideration) can be considered entirely neutral, even in the absence of any human interaction,41 as the opinions of the service provider form inherent components of the algorithm that generates the search rankings. This remains true even if the operators take measures to hide such values from users and attempt to present their hits as objective reality. The above arguments, then, support the acceptance of the opinion-like nature of search results and rankings. C.  The Regulation of Search Engines As Nico van Eijk notes, search engines operate in a legal vacuum because they are a mixture of telecommunications and content services and do not fall into either of these two categories, or possibly they fall into both at the same time.42 In the context of ­European law, this duality is also present in the E-Commerce Directive,43 under which search engines are considered information society services, but they are not subject to the liability and exemption rules laid down in Articles 12 to 14 (concerning mere conduits, caching and hosting). This means that the application of the notice and takedown procedure to search engines is not mandatory in the Member States under the Directive, either. However, the CJEU established in its ruling in Google France, by applying Article 14, that certain services provided by search engines (eg storing the data and messages of advertisers in relation to sponsored links) could be considered hosting services.44 The notice and takedown procedure (making the liability of a service provider dependent on the actions it takes after it becomes aware of a violation) can be applied where those data are ­illegal.45 However, this approach cannot be extended to organic search results in general. European states may draw various conclusions from this situation. They can: (a) impose obligations on search engines by applying general legislation to take action against illegal content, regardless of the provisions of the E-Commerce Directive; 41 Uta Kohl, ‘Google: The Rise and Rise of Online Intermediaries in the Governance of the Internet and Beyond (Part 2)’ (2013) 21(2) International Journal of Law and Information Technology 187, 194. 42 Nico van Eijk, ‘Search Engines: Seek and Ye Shall Find? The Position of Search Engines in Law’ (2006) Iris Plus, no 2, 1, 7. 43 Directive 2000/31/EC of the European Parliament and of the Council of 8 June 2000 on certain legal aspects of information society services, in particular electronic commerce, in the Internal Market (Directive on electronic commerce) [‘E-Commerce Directive’]. 44 Joined Cases, C-236/08 to C-238/08 Google France SARL and Google, Inc v Louis Vuitton Malletier SA and Others [2010] ECR I-02417, 111. 45 ibid 120.

(b) apply the notice and takedown procedure described in Article 14 to search engines; (c) extend the scope of the liability rules laid down in Articles 12 and 13 to search engines, which affords greater protection to search engines and also permits the application of interim injunctions regarding illegal content; (d) afford absolute immunity to search engines; or (e) create special rules introducing a regulatory framework that is even stricter than the provisions of the Directive. An overview of the solutions applied in Europe indicates that options (a) to (c) are commonly applied, while options (d) and (e) – affording them absolute immunity, or the introduction of more stringent rules concerning search engines – are not applied at all.46 In addition to resolving the problem of taking action against illegal content in general, search engines are often required to make certain links unavailable in their system if they violate personality rights or personal data, or are used to commit a criminal offence.47 The UK introduced a dedicated framework regulating the liability of website operators for defamatory statements as a supplement to the general legal provisions on defamation and the rules of common law.48 It is doubtless that the most effective means of regulating search engines is self-regulation, which is already used by Google in connection with numerous subjects that could be dangerous or harmful to users (eg pornography and violence). These rules are far more restrictive than the constitutionally permitted limitations to the freedom of speech.49 In the US, Section 230 CDA affords broad immunity to search engines against legal liability for illegal content that is accessible through their services.50 However, copyright legislation requires search engines to apply the rules on notice and takedown procedures.51 In autocratic countries, the introduction of strict rules concerning search engines is one of the most effective means of keeping undesirable opinions away from the public. If the activities of search engines were protected by the freedom of speech (meaning in this context the legal corpus originating from the First Amendment of the US Constitution), it would not be possible to object to such practices if they were applied, for example, by the state-owned search engine of China.52 However, internal ‘private censorship’ is not fully consistent with the European idea of freedom of expression, and private parties can also be expected under legal regulations to serve and support democratic public life.

46 Kohl (n 41) 201; van Hoboken (n 4) 252–56. 47 Concerning criminal offences, see Kohl (n 41) 228–30; concerning personality rights, see section III following. 48 Defamation Act 2013, s 5. 49 van Hoboken (n 4) 237–46. 50 See ch 2, section II.E.ii. 51 Digital Millennium Copyright Act 1998, 17 USC 512(d). 52 See Zhang v Baidu (n 24).

III.  THE LIABILITY OF SEARCH ENGINES FOR VIOLATIONS OF PERSONALITY RIGHTS

A.  Search Engines as Publishers A person can be held liable for violation of personality rights if he or she publishes a harmful opinion. However, a person merely distributing the violating content cannot be held liable for defamation or the violation of privacy. Publishing also includes the transmission (dissemination) of information received from others, meaning that the violating person does not necessarily publish his or her own opinions when committing the violation. Under UK defamation law, a person involved in the communication process must meet two criteria to be regarded as a publisher: (i) the publication of content must be intentional, meaning that passive transmission does not suffice;53 and (ii) the content concerned, including its violating nature, must or should have been known to the publisher if he or she exercised reasonable care.54 In Godfrey v Demon Internet Ltd,55 Justice Morland established that an ISP is considered the publisher of defamatory content, regardless of whether it was aware of the violating nature of the content in question or not. In Bunt v Tilley,56 Justice Eady, invoking prior case law, required the existence of a ‘mental element’ as a condition for establishing the liability of an ISP for the transmission of violating content. It was Justice Eady’s ­position that a service provider may not be held liable for transmitting such content to users if it plays only a passive role (transmitter). However, the situation changes when a passive transmitter becomes aware of the violating nature of a piece of content available through its services. According to the old precedents mentioned above, a passive entity (in the old cases, a newspaper stand or library) can be held liable for defamation (on the grounds of its acquiescence to defamation) if it fails to stop an unlawful situation even after becoming aware of its existence. Tamiz v Google57 is a contemporary online equivalent of the same case, where Google, acting as a host for blog services, was considered the publisher of defamatory statements published on a blog after it became aware of the violating nature of the published content (the injured party gave notice to Google accordingly).58 If an online gatekeeper is considered a secondary publisher (once it becomes aware of the violating nature of the content at the latest), it must act to end the unlawful situation even if, due to its secondary role, it is not the actual publisher of the violating content, and does not even agree with its content or does not want to endorse its message. The search engine’s activities are considered innocent dissemination (a valid defence against liability) until it becomes aware of the situation.59 This approach of English defamation

53 McLeod v St Aubyn [1899] AC 549. 54 Emmens v Pottle [1885] 16 QBD 354, 357–58; Vizetelly v Mudie’s Select Library Ltd [1900] 2 QB 170, 177–80. Enacted, see Defamation Act 1996, s 1. 55 Godfrey v Demon Internet Ltd [1999] EWHC 244 (QB), [26]. 56 Bunt v Tilley [2006] EWHC 407 (QB). See ch 3, section II.C. 57 Tamiz v Google [2013] EWCA Civ 68. See the discussion in section I.D of ch 6. 58 Tamiz v Google (n 57) [35]. 59 Defamation Act 1996, s 1.

The Liability of Search Engines for Violations of Personality Rights  125 law is essentially consistent with the notice and takedown procedure set forth regarding hosts in Article 14 of the E-Commerce Directive. Jan Oster, however, argues that online gatekeepers should be considered publishers even before they are notified of a violation, and he does have a point, as why should the legal status of the service provider, as a publisher, depend on any action taken by an injured party? Publication does not automatically imply that the publisher will be found liable for the published content, since a service provider can use the defence of innocent dissemination (or ‘innocent publication’ as referred to by Oster).60 The actual involvement of an online gatekeeper in the publication of a violating piece of content (ie  its role as a publisher) does not change in any way when it becomes aware of the violation. In other words, becoming aware of the situation does not constitute a change in the communications process that would affect the role or legal status of a gatekeeper; the outside world does not even notice when the service provider becomes aware of the violation. Whether a gatekeeper subsequently takes action (removes the content) or remains passive (does not do anything) is a matter that affects its legal liability, but it does not change the inherent status or nature of its activities. The main activity of a gatekeeper is to transmit the communications of others. In this context, it is not a passive entity in the process of information flow, since its economic interests compel it to serve that process in the most effective way. Such gatekeepers include ISPs, search engines, social media and any commercial public forum that permits the posting of comments. When a service provider has an economic interest in facilitating the exchange of views and opinions among people (through the publication of advertisements) and gains an income from it, it can hardly be said not to have an interest in the intensive flow of information, or not to play an active role in the provision of its services. If the product of the activities of a gatekeeper (eg a search result) is considered an opinion protected by the freedom of speech, then the gatekeeper must be considered the publisher of that product. However, the matter of liability is different: even though the defence of innocent dissemination (publication) can be used to avoid excessive restrictions on gatekeepers, it cannot grant full immunity (ie exemption from liability even after becoming aware of the violation). 
If we accept the approach, as a common minimum, that the duty of search engines is limited to the removal of links to defamatory content from their services of which they have been notified, it might raise difficulties in some legal systems where there are also objective sanctions (ie against which no defence is available) on the publication of defamatory content, thus the establishment of the ‘publisher’ status also implies the establishment of legal liability regarding certain (less harsh)­ sanctions (not including pecuniary sanctions or liability for damages).61 David Rolph warns that a defamatory meaning may be absent from the activities of a search engine if the content presented by the search engine to its users (including, primarily, websites available from links displayed on a search ranking or keywords suggested automatically) is not considered by reasonable users to be the expression of the views and opinions of the search engine itself. In such cases, there would be no need for a

60 Jan Oster, ‘Communication, Defamation and Liability of Intermediaries’ (2015) 35(2) Legal Studies 348, 355–57. 61 See the Civil Code of Hungary, section 2:51 (‘sanctions that are independent from imputability’).

126  Search Engines specific defence or exemption.62 Consequently, users who regard search engines primarily as technical intermediaries understand that the service provider, even though it might be involved in the transmission of defamatory opinions, does not make any defamatory statements itself, as such opinions do not form part of the speech of the search engine. Nonetheless, we must distinguish between these two possible approaches to the nature of speech by a search engine: a search engine may be held liable for defamation (after it becomes aware of the violating nature of a link) even if the search ranking itself is­ considered the speech of the search engine, but not the content available through the listed links, when reasonable users can assume that the list includes content that the search engine believes to be the most relevant and accurate but the list features false and defamatory content in prominent places. B.  Defamatory Search Results If a search ranking returned by a query is considered the speech of the search engine, expressing an opinion on the relevance and up-to-date nature of hits and snippets ranked high on the list, one might argue that featuring links to defamatory content and violating snippets in a prominent place (among the first few results) in the search rankings constitutes defamation committed by the search engine. However, a search engine could not be held liable for defamation (and would not enjoy protection under the freedom of speech) if it were regarded as a neutral intermediary playing a merely functional, technical role. Currently, it seems that the legal systems already struggling with this matter have yet to elaborate a solid approach towards this issue. In Metropolitan International Schools Ltd v (1) Designtechnica Corp, (2) Google UK Ltd, (3) Google, Inc,63 a relevant UK case, Justice Eady followed the approach already used in Blunt v Tilley and refused to accept the activities of search engines as publishing due to the lack of a ‘mental element’. The results are generated automatically upon user request by the search engine’s algorithm and without any human interaction, thus this process does not involve the ‘authorisation’ of the listed content but simply facilitates the dissemination of such content.64 Australian case law presents a relatively rich landscape of possible approaches toward this matter. In Trkulja v Google Inc (No 5),65 Justice Beach, a judge of the Supreme Court of Victoria, accepted that a search engine may be held liable for unlawful content in certain circumstances. In that case, the plaintiff worked as an advertising and marketing ­professional in the music industry, and filed an action against Google for presenting pictures and articles concerning him in connection with the criminal underworld of Melbourne, suggesting that he was involved in illicit activities and his rivals wanted

62 David Rolph, ‘The Ordinary, Reasonable Search Engine User and the Defamatory Capacity of Search Engine Results in Trkulja v Google, Inc’ (2017) 39 Sydney Law Review 601, 607–11. 63 Metropolitan International Schools Ltd v (1) Designtechnica Corp, (2) Google UK Ltd, (3) Google, Inc [2009] EWHC 1765 (QB). 64 ibid [51]. 65 Trkulja v Google Inc (No 5) [2012] VSC 533.

The Liability of Search Engines for Violations of Personality Rights  127 to have him killed by a hitman. The jury decided that Google was the publisher of the defamatory content, and the judge ruled that the material compiled by the search engine in response to the search query (including links, photos and snippets) had not been published before by any other person, meaning that the search ranking was, in a sense, a new ‘opinion’. The search engine was considered the publisher of that material (­naturally, as it was the only possible candidate for that role), but it might invoke the defence of innocent dissemination regarding any period prior to receipt of a notice from the­ plaintiff.66 In Bleyer v Google, Inc,67 Justice McCallum followed a fundamentally different approach. Following the traditional notions of common law, he argued that a person cannot be considered a publisher, even a secondary one, until the receipt of a notice of violation, if he did not and, even if proceeding with reasonable care, could not have known that his activities constituted defamation.68 As already mentioned, the plaintiff filed an action in Trkulja for the search ­ranking,69 images and snippets returned by Google in response to a search using the plaintiff’s name, connecting him to the organised criminal underworld. Justice McDonald refused to follow the approach applied in Bleyer, arguing that Justice McCallum failed to take into account the human factor, present primarily through coding the search algorithm, in compiling a search ranking to the full necessary extent. Justice McDonald established that any misleading association suggested by a search result between the name of the plaintiff and the organised crime would not be a coincidence but a direct consequence of the operation of the search engine.70 Google appealed against the ruling, and the Victoria Court of Appeal clarified even further that a search engine is considered the publisher of a search ranking returned by a query, even though it is only a secondary publisher because it does not add anything to the content already published by others.71 However, the proceeding was dismissed because the plaintiff had no real prospect of success due to the minor nature of the violation (only a handful of pictures connected the plaintiff to criminals, while pictures of police officers were also returned by the search engine).72 But the case did not end with this decision. After an appeal by Trkulja, the High Court of Australia held that at least some of the search results complained of had the capacity to convey to an ordinary reasonable person viewing the search results that the appellant was somehow associated with the Melbourne criminal underworld, and, therefore, that the search results had the capacity to convey one or more of the defamatory imputations alleged.73

The Court of Appeal had therefore erred in concluding that the proceeding had no real prospect of success. What is even more significant, according to the High Court, the innocent dissemination defence is not automatically maintainable for search engines in a period before notification of an alleged defamation. ‘Given the nature of the proceeding,



66 ibid

paras 18–20. v Google, Inc [2014] NSWSC 897. 68 ibid paras 78 and 83. 69 Trkulja v Google, Inc [2015] VSC 635. 70 ibid para 45. 71 Google, Inc v Trkulja (2016) 342 ALR 504, 590, paras 348 and 349. 72 Rolph (n 61) 602–607. 73 Trkulja v Google, LLC [2018] HCA 25, para 35. 67 Bleyer

128  Search Engines there should not have been a summary determination of issues relating to publication or possible defences, at least until after discovery, and possibly at all.’74 This verdict is of extreme importance; in the future, summary judgment applications of Google would likely fail.75 Thus, Trkulja’s appeal was allowed. In Google, Inc v Duffy,76 the proceeding court followed a similar approach. The plaintiff visited a website offering the services of psychics. As he was displeased with the service, he criticised the website in numerous opinions, and in response various defamatory statements were published about the plaintiff (alleging that he had harassed the psychics, disseminated lies, accessed private e-mails unlawfully, etc). The plaintiff notified Google about the links to such pieces of content, but Google refused to make the links inaccessible. The case eventually came before the Supreme Court of Australia, and the Court established that the search engine was to be considered as the publisher of the list it had compiled.77 Google was not a passive medium; accepting any allegation to the contrary would mean the ignorance of the basic functioning of online media and its predetermined programming in the service of users.78 Hence, a search engine is involved in the publication of violating content as publication is the very activity for which it is programmed.79 Still, it is important to note that Google does not have any means to conduct a review of the contents of its publications prior to publication, meaning that it should be considered as a secondary publisher, and it may not be held liable for any ­violation committed before it became aware of the violating content.80 However, the defence of innocent dissemination may not be invoked if it fails to take action after it is informed about the violation. The above review of relevant cases indicates that, pursuant to the principles of common law, search engine operators may be required to take action only after receipt of a notice of violation,81 thus it is almost irrelevant whether they are considered publishers or not: if they are, they can invoke the defence of innocent dissemination; while if they are not, they are not liable in any way. A search engine operator would not need to take any action to avoid liability if it were not considered a publisher at all (not even a ­secondary one). However, if it is already considered to be a publisher when the search ranking is published, it may be held liable for the publication in certain jurisdictions that do not recognise the general defence of innocent dissemination (covering any and all sanctions), regardless of any action it may take subsequently. To return to Grimmelmann’s categories, the editor analogy is consistent with regulatory frameworks with conditional liability for search engines, and it also seems to be in line with the liability-related provisions laid down in Article 14 of the E-Commerce

74 ibid para 39. 75 Justin Castelan, ‘The Return of Trkulja: Episode IV – Trkulja v Google, LLC [2018] HCA 25’ Defamation Watch Blog (21 June 2018) at http://defamationwatch.com.au/?p=1116. 76 Google, Inc v Duffy [2017] SASCFC 130. 77 ibid para 181. 78 ibid para 140. 79 ibid paras 155 and 181. 80 ibid para 184. 81 Stavroula Karapapa and Maurizio Borghi, ‘Search Engine Liability for Autocomplete Suggestions: Personality, Privacy and the Power of the Algorithm’ (2015) 23 International Journal of Law and Information Technology 261, 273.

The Liability of Search Engines for Violations of Personality Rights  129 Directive, even though search engines do not fall within the scope of that Article. The US law, however, applies the analogy of a mere conduit, as Section 230 CDA exempts search engines in general from any liability regarding the content they list. The advisor analogy promoted by Grimmelmann falls somewhere in between the other two options in terms of liability – in this sense, a list compiled by a search engine is nothing more than an opinion on the relevance of pieces of content, but not on their veracity or reliability.82 It certainly would raise concerns if the liability of search engines were to be made dependent on any notice to be given by the injured party, while search engines do not have any means to verify whether a violation has actually occurred. In such a situation, the logical solution for a search engine would be to remove the link without any delay, regardless of whether or not the given piece of content was published in a lawful manner. The safe harbour offered to search engines might provide protection for them, but it is ill-suited to tackling issues requiring the delicate balancing of personality rights and the freedom of speech. C.  Defamatory Autocomplete Suggestions Google suggests words and expressions to its users to complete their keywords and search strings automatically. For example, at one point, it is quite probable that if a user had entered the name of Bettina Wulff, the name of the wife of former German President Christian Wulff, into the search engine, it would have suggested the words ‘prostitute’ or ‘escort’ to supplement the keywords, thereby orienting the user to search for that word combination, and Google would have returned a list of relevant links it compiled in response to the search query it suggested.83 After Bettina Wulff brought an action against Google, the case was resolved via settlement out of court, but the issue of liability for such autocomplete search suggestions was not resolved. Autocomplete suggestions are pieces of information that are different in their nature from search rankings. While the individual items on a search ranking exist independently from the search engine, autocomplete search suggestions are generated by the system of the search engine. The suggestions are intended to help users find their way through vast amounts of information, seeking to catch the attention of users and point them in a likely direction (naturally only when a user has already begun to search for a name such as Bettina Wulff).84 Just like search rankings, search suggestions are also compiled by algorithms on the basis of the most popular previous searches by users, meaning that human interaction is present only at the very beginning of the process (ie when the algorithms are calibrated), even though search suggestions can also be interfered with manually (similarly to search rankings). Furthermore, it is also possible to apply filters that block otherwise frequently used expressions (eg those relating to pornography, violence or hatred) from being automatically suggested to users.

82 Grimmelmann (n 16) 944–45. 83 ‘Autocompleting Bettina Wulff: Can a Google Function Be Libellous?’ Spiegel Online (20 September 2012) at http://www.spiegel.de/international/zeitgeist/google-autocomplete-former-german-first-lady-defamation-casea-856820.html. 84 Karapapa and Borghi (n 81) 264.

130  Search Engines The more options a search engine has to intervene in the process, the more applicable the editor analogy becomes. One could hardly claim that a search suggestion would be an opinion endorsed by Google, considering how such suggestions reflect the behaviour of users, but it certainly conveys some information to other users and may constitute a false statement concerning a person in and of itself. Furthermore, a suggestion is presented to a user as a piece of content produced and published by the search engine.85 The various legal systems handle the issues of liability for automatic search suggestions differently. In Yeung v Google, Inc,86 the High Court of Hong Kong found that the plaintiff, who was associated with organised crime by the search engine, had a convincing case. When it comes to search suggestions, a search engine does not play a passive or neutral role but creates (aggregates and ranks) the suggestions itself.87 Google was considered a publisher, the liability of which was based not necessarily on its knowledge of the violating content but on its involvement in the defamation and the transmission of such content to users.88 In the Australian cases mentioned in section III.B,89 courts ruled that the search engine operator was a secondary publisher of search suggestions and, as such, it should have removed the violating suggestions after it became aware of their violating nature. Prior to the Wulff case, the German Bundesgerichtshof had already established the liability of a search engine operator in another case, where the name of the plaintiff was associated by the system with such words as ‘scientology’ and ‘fraud’.90 According to the court’s ruling, liability arises when a search suggestion articulates a false and defamatory statement. While Google is not obliged to verify its search suggestions in advance, it must, however, remove suggestions that violate a personality right once it becomes aware of their violating nature.91 The approaches followed by other European legal systems are  somewhat variable. The arguments for holding search engines liable presuppose that the autocomplete suggestions are different in nature from the search engines’ basic service (ie the compilation of search rankings) and that search engines play an even less passive or neutral role when generating these suggestions. A suggestion conveys information and bears a meaning of its own, and users tend to connect them with the available online content. For example, if a search suggestion describes someone as a criminal, users tend to assume that there are (numerous) available websites providing corresponding content.92 Since the system collects, organises and processes search queries previously entered by users, the operator can be considered a host of such information. Consequently, it may be subject to the general rules of liability under civil law and the law of torts, as well as Article 14 of the E-Commerce Directive (on notice and takedown), regardless of whether or not the Member State concerned has 85 Oster (n 60) 359. 86 Yeung v Google, Inc HCA 1383/2012 (23 October 2014). 87 ibid 116. 88 ibid 119. Concerning the case, see also Anne Cheung, ‘Defaming by Suggestion: Searching for Search Engine Liability in the Autocomplete Era’ in András Koltay (ed), Comparative Perspectives on the Fundamental Freedom of Expression (Budapest, Wolters Kluwer, 2015) 467. 89 Trkulja v Google (n 69); Google, Inc v Duffy (n 76). 90 BGH, 14.05.2013, VI ZR 269/12. 
91 Corinna Coors, ‘Reputations at Stake: The German Federal Court’s Decision Concerning Google’s Liability for Autocomplete Suggestions in the International Context’ (2013) 5(2) Journal of Media Law 322, 324. 92 Karapapa and Borghi (n 81) 274–78.

introduced a general liability regime pertaining to the activities of search engines. Legal systems that do not consider the search engine operator to be the publisher of search suggestions regard such suggestions as mere functional data provision, lacking any kind of ‘opinion’.93 In the context of the US legal system, one might ask whether Section 230 CDA also affords immunity to search engine operators regarding automatic search suggestions. Michael Smith has some doubts about this,94 basing his arguments on Fair Housing Council of San Fernando Valley v Roommates.com,95 where the liability of the content provider of a website was established by the court, since the service provider did in fact ‘endorse’ the production of violating content by the design of the website and by encouraging users to submit violating pieces of content.96 Smith believes that autocomplete search suggestions can actively contribute to the dissemination of false statements, and a search engine can be considered to have proceeded with ‘reckless disregard’ regarding untrue allegations.97 Search suggestions may also raise privacy concerns when they divulge information the person concerned would have reason to consign to the realm of private life. For example, a search engine suggested the word ‘swallows’ and the title of an adult entertainment movie in relation to the name of a French woman who played a role in this movie in her youth, but this fact was not current or relevant to her at the time of running the search. In other words, this piece of personal data should not have been communicated.98 This case might be more relevant to the right to be forgotten, which will be discussed shortly in section III.E.
D.  Other Violating Content
So far, the possible liability of search engines has not been raised in any cases other than those involving the violation of personality rights, probably because search engines filter legally questionable content (hate speech, content that is harmful for children, etc) on their own initiative.99 Nonetheless, if a filtering error occurs or a search engine ceases this practice, the question may arise (despite the E-Commerce Directive’s silence on the matter) of whether or not a search engine is liable for the violating pieces of content it shows on its search ranking. In the context of hate speech and other limitations to the freedom of speech under criminal law, the act of publication or presentation is not necessarily required to establish liability. If a racist website were ranked high on a search ranking, would it be possible to argue that the search engine incited hatred or violence?100 On the one hand, it seems

93 ibid 278–81. 94 Michael L Smith, ‘Search Engine Liability for Autocomplete Defamation: Combating the Power of Suggestion’ [2013] Journal of Law, Technology and Policy 313. 95 Fair Housing Council of San Fernando Valley v Roommates.com, LLC 521 F3d 1157 (9th Cir 2008). 96 ibid 1167–68; see ch 2, section II.E.ii. 97 Smith (n 94) 318. 98 Mme C/Google France, TGI Montpellier, 28 October 2010. See Karapapa and Borghi (n 81) 285–86. 99 van Hoboken (n 4) 237–46. 100 Public Order Act 1986, s 18.

132  Search Engines that applying any kind of objective (strict) liability in such a case would be a mistake, while on the other hand, intentional perpetration or criminal fault could hardly be established. As the removal of the violating piece of content seems to be a convenient solution to such situations, a search engine can both escape liability and become the judge of any violating content concerned. E.  The Right to Be Forgotten Even before the emergence of the Internet, the law had recognised and afforded protection to the legitimate interest of persons in disappearing from the public sphere and making previously published information inaccessible. The perpetrators of criminal offences are absolved of the detrimental consequences of their punishment after they have served their sentence, meaning that they may not be confronted with the crimes they committed earlier, and they may even start a new life by changing their identity in justified cases.101 In some European jurisdictions, the veracity of a statement may not be proved in criminal proceedings launched for defamation, unless there is a public interest (eg taking a position in a public debate) or a considerable private interest in the publication of that information. In the absence of such an interest, even statements that are in fact true can be considered defamatory, meaning that they result in the truth’s being forgotten.102 Privacy, an important aspect of the protection of personality, is afforded protection through the rules of liability for damages and the provisions of private law, and it can also be used in certain circumstances to prevent the publication of otherwise true statements.103 The protection of personal data is also relevant in this context. The EU Directive on the protection of personal data104 (since repealed) provided that personal data processed by a controller must be adequate, relevant and non-excessive regarding the further processing of such data,105 and must also be accurate and, if necessary, up-to-date.106 The data subject had to be permitted to request from the controller the rectification, erasure or blocking of data the processing of which did not comply with the provisions of the Directive.107 While it caused considerable excitement at its time, the Google Spain case108 did not introduce much novelty in terms of possibly affording any additional right to persons who are concerned about their personal data, though it was certainly novel in that the Court of Justice of the European Union (CJEU) held that the obligations laid down in the Directive could also be extended to search engine operators. 101 See X (formerly known as Mary Bell) and Y v News Group Newspapers Ltd and Others [2003] EWHC 1101 (QB); Carr v News Group Newspapers Ltd and Others [2005] EWHC 971. 102 See, eg, s 229 of the Hungarian Penal Code, Art 270 of the Danish Penal Code and Art 14(14) of ch 7 of the Freedom of the Press Act (Sw Tryckfrihetsfördningen (2002:908)) in Sweden. 103 See ch 1, section III.C. 104 Directive 95/46/EC of the European Parliament and of the Council of 24 October 1995 on the protection of individuals with regard to the processing of personal data and on the free movement of such data. 105 ibid Art 6(c). 106 ibid Art 6(d). 107 ibid Art 12(b). 108 Case C-131/12, Google Spain SL, Google Inc v Agencia Española de Protección de Datos, Mario Costeja González, ECLI:EU:C:2014:31.

According to the CJEU, the activities of search engines constitute data processing for the purpose of Article 2(b) of the Directive, as the operator of a search engine ‘collects’ such data which it subsequently ‘retrieves’, ‘records’ and ‘organises’ within the framework of its indexing programmes, ‘stores’ on its servers and, as the case may be, ‘discloses’ and ‘makes available’ to its users in the form of lists of search results.109

Its activities are, however, different from those of a publisher of websites, the activities of which are involved in a search.110 Search engines play a decisive role in the global dissemination of data and information, considering that some users would be unable to find a given piece of information without them.111 This means that the activities of a search engine ‘have an additional effect’ to the activities of the publishers of websites, and they are capable of having a considerable impact on the rights relating to the protection of privacy and personal data.112 A search engine does not simply facilitate access to information; it also ‘may play a decisive role in the dissemination of that information, [and] it is liable to constitute a more significant interference with the data subject’s fundamental right to privacy than the publication on the web page’.113 As such, a search engine may be required to render personal data in its possession inaccessible, meaning that it would be removed from the list of search results, but it would remain available at its original location (ie on the relevant website). However, a search engine may not be required to delete such data if it appeared, for particular reasons, such as the role played by the data subject in public life, that the interference with his fundamental rights is justified by the preponderant interest of the general public in having, on account of inclusion in the list of results, access to the information in question.114

This means that the freedom to discuss public affairs is a limitation of the right to the protection of personal data. The decision caused considerable excitement and triggered a lively debate, as it gave rise to extensive commentaries in legal literature almost immediately upon its being handed down.115 An interesting detail of the case is that Mario Costeja decided to request that a search engine render its links inaccessible, instead of taking action against the newspapers that featured the original 1998 article (pertaining to his old social insurance debt and the forced auction of his property) in their online archives. This fact is telling about the role Google plays in public life. Online archives are far less frequently consulted and, as Filippo Fontanelli points out, Costeja would not have bothered to take action if Google had featured the article concerned on the twenty-third page of its list of search results.116 109 ibid para 28. 110 ibid paras 35 and 83. 111 ibid para 36. 112 ibid paras 38 and 83. 113 ibid para 87. 114 ibid para 97. 115 David Lindsay, ‘The “Right to Be Forgotten” by Search Engines under Data Privacy Law: A Legal Analysis of the Costeja Ruling’ (2014) 6(2) Journal of Media Law 159. 116 Filippo Fontanelli, ‘The Court of Justice of the European Union and the Illusion of Balancing in ­Internet-Related Disputes’ in Oreste Pollicino and Graziella Romeo (eds), The Internet and Constitutional Law (London, Routledge, 2016) 106.

134  Search Engines However, a legal procedure was initiated as the article was featured by Google’s algorithm in a prominent place, thereby making the information on the plaintiff easily accessible. Irini Katsirea is concerned that the logic behind this decision might snowball, possibly affecting online press archives as well, since the limits of the right to be forgotten seem to be vague.117 It certainly seems to be the case that the Court failed to make a clear distinction between archives and the search rankings of a search engine, and the judgment itself does not offer many possible defences against a possible request to delete ‘irrelevant’ or ‘outdated’ data from online archives as well. Oster also points out that the decision does not say much on issues relating to the freedom of speech (the interests of the public are mentioned in the extract quoted above). Article 9 of the Directive enabled Member States to grant exemptions from the ­obligations pertaining to the protection of personal data for the processing of such data for ‘journalistic purposes’: Member States shall provide for exemptions or derogations from the provisions of this Chapter, Chapter IV and Chapter VI for the processing of personal data carried out solely for journalistic purposes or the purpose of artistic or literary expression only if they are necessary to reconcile the right to privacy with the rules governing freedom of expression.

This provision offered a possibility to Member States to protect freedom of expression (as a sole exemption from the general rules), but did not impose any obligation to do so. It is also apparent from the approach taken by the CJEU that the data processing carried out by Google is not considered similar or analogous to journalistic activities.118 The justification for just this distinction is vehemently challenged by Robert Post.119 While it seems clear that a search engine makes personal data (among others) more easily accessible, it is not clear why its activities should be regarded with more stringency than the activities of a press archive, taking into account that, at the end of the day, the publication of the information is a result of the activities of the latter.120 However, this is not the main concern raised by Post: he argues that Google is an indispensable actor in the infrastructure of communications, and it is absolutely necessary to maintain a vibrant public sphere.121 Google serves the same public interests as journalism or the media in general,122 and its activities are similar to those of newspapers in that it serves the public and shapes democratic public opinions. The press plays the same role, with the difference that it publishes the views and opinions of journalists and editors. While this does not apply to a search engine, Post argues that this is an insignificant difference, as the point is that they both convey information to the public (users), which makes the press and search engines highly similar to each other.123 The available case law on search engines and the right to be forgotten took off shortly after the decision was handed down.124 It is hardly a surprise that one of the main 117 Irini Katsirea, ‘Search Engines and Press Archives between Memory and Oblivion’ (2018) 24(1) European Public Law 125. 118 Google Spain (n 108) 85. 119 Post (n 22) 981. 120 ibid 1010. 121 ibid 1016. 122 ibid 1041. 123 ibid 1042–43. 124 Theo Bertram et al, ‘Three Years of the Right to Be Forgotten’, at https://elie.net/static/files/three-years-ofthe-right-to-be-forgotten/three-years-of-the-right-to-be-forgotten-paper.pdf.

The Liability of Search Engines for Violations of Personality Rights  135 ­ ifficulties resulting from the judgment is the weighing of interests required under the d judgment, that is the weighing of interests in continuing to process the given piece of information, for the purposes of discussing public affairs openly, against the interests in discarding irrelevant or outdated information. Common law countries have also faced the problem of assessing and weighing the status of the applicant, his or her role in public life at the time of the affair, and the significance of the public interest in preserving a given piece of information.125 In NT1 and NT2, the court had to answer the question whether the right to be forgotten was applicable in cases involving spent criminal convictions. According to Warby J, NT1’s claims were unfounded, as the claimant failed to satisfy the criteria established in Google Spain.126 In the case of NT2, the information available through Google had become irrelevant ‘and of no sufficient legitimate interest to users of Google Search to justify its continued availability, so that an appropriate delisting order should be made’.127 In ML and WW v Germany,128 the ECtHR held that Article 8 of the Convention was not violated by the respondent state. The facts of the case were similar to those in NT1 and NT2, as the publications by the media concerned a murder conviction. But this was not a case against search engines: the articles appeared in search engine results but the applicants did not make applications in Germany for search engine delisting. However, the decision confirmed the theoretical availability of an Article 8 claim against search engines as well.129 The ECtHR emphasised that the balance of interests may lead to different results depending on whether an individual directs his or her request for erasure to a search engine operator or primary publisher.130 Essentially, the decision of the CJEU in Google Spain forced search engines to assume the role of an editor (a somewhat inverted version of the editor’s role), as they do not decide on the publication but on the removal of information, and their decision applies to their own system only, even though it is usually indispensable for finding old information. While Google had already been carrying out similar editorial tasks (filtering and ranking content to be presented to users), it had been doing so on its own initiative and in the service of its own economic interests. However, now it is required to do so by law, similarly to the situation of autocomplete search suggestions. The scope of Google Spain was limited to the Spanish version of the search engine (.es domain), as the Spanish authorities believed that their jurisdiction was limited to that scope. However, the Supreme Court of Canada in Equustek required Google to perform the removal of links concerning all of its domains, considering that the right to the protection of intellectual property could not be enforced in any other way, for example by removing such links from the .ca domain only.131 Though Equustek was an

125 NT1 and NT2 v Google, Inc [2018] EWHC 799 (QB). See Róisin A Costello, ‘The Right to Be Forgotten in Cases Involving Criminal Convictions’ [2018] European Human Rights Law Review 268. 126 NT1 and NT2 v Google (n 125) para 170. 127 ibid para 223. 128 ML and WW v Germany, App nos 60798/10 and 65599/10, judgment of 28 June 2018. 129 Hugh Tomlinson, QC and Aidan Wills, ‘Case Law, Strasbourg: ML and WW v Germany, Article 8 Right to Be Forgotten and the Media’ Inforrm blog (4 July 2018) at https://inforrm.org/2018/07/04/case-law-strasbourgml-and-ww-v-germany-article-8-right-to-be-forgotten-and-the-media-hugh-tomlinson-qc-and-aidan-wills/. 130 ML and WW v Germany (n 128) para 97. 131 Google, Inc v Equustek Solutions, Inc [2017] SCC 34 (Sup Ct (Can)). See Michael Douglas, ‘A Global Injunction against Google’ (2018) 134 Law Quarterly Review 181.

136  Search Engines intellectual property case, it may also have some consequences for the future of the right to be forgotten. According to Ronald Krotoszynski, ‘it seems highly likely that a similar kind of analysis could be brought to bear in mind in right to be forgotten cases that involve domestic court orders that require the de-indexing of content not only within the issuing court’s territory’.132 According to him, in the context of the right to be forgotten, this approach would limit the citizens of a country only to the information that the domestic court of another country deems appropriate. The issue of the scope of removal will also be faced by the CJEU, as its preliminary ruling on the matter has been sought in a French case.133 The General Data Protection Regulation (GDPR) adopted by the EU and replacing the previous Directive introduced considerable changes regarding the right to be ­forgotten.134 First, it is significant that the Regulation is directly applicable in each Member State, thus the same provisions are to be applied with the same wording as adopted by the EU. Second, the right to be forgotten is mentioned by precisely this name in the ­­Regulation (and as a synonym for the right to erasure), while the Regulation also seeks to afford greater protection to the freedom of expression by providing that this right may not be exercised where ‘processing is necessary … for exercising the right of freedom of expression and information’.135 The recognition of this exception is not merely an option but an obligation for each Member State. The vague phrase ‘journalistic purposes’ mentioned in the Directive has now been replaced by the protection of the freedom of expression and the right to information, although only the latter is mentioned in the preamble, along with audiovisual and news archives.136 The GDPR seeks to maintain the obligation of search engines to respect the right to be forgotten, even though the text of the Regulation in fact reinforces the arguments for their exemption: a search engine is certainly capable of serving both the freedom of expression and the right to information. Furthermore, the Regulation focuses on the exercise of such rights and indicates where there are exceptions from their exercise, so that these exceptions do not necessarily mean the exercise of any right by a service provider but may apply to users as well. This means that, when considering the situation while bearing in mind the interests of users, a search engine might even be exempted from its erasure obligation, regardless of whether or not a search result is recognised as a form of free expression by the search engine, although this observation does not apply if irrelevant or outdated pieces of information are not considered to be necessary for the exercise of the freedom of expression or the right to information. Thus the criticism offered by Post also remains relevant in the context of the GDPR, although it should be noted that if search engines were considered to be media-like services, as he suggests, it would be an unequivocally welcome change for search engines under the law of the First Amendment. Following such an approach in Europe, however, 132 Ronald Krotoszynski, ‘Privacy, Remedies, and Comity: The Emerging Problem of Global Injunctions’ (manuscript, on file with author) 22. 
133 ‘French Court Refers “Right to Be Forgotten” Dispute to Top EU Court’ Reuters.com (19 July 2017) at https://www.reuters.com/article/us-google-litigation/french-court-refers-right-to-be-forgotten-dispute-to-topeu-court-idUSKBN1A41AS. 134 Regulation (EU) 2016/679. 135 GDPR, Art 17(3)(a). 136 GDPR, preamble (153).

The Manipulation of Search Results  137 would mean that the public interest obligations of the media would also apply to search engines,137 including the requirement for diversity in content, the right to reply for anyone attacked through false factual allegations in the media, and the protection of the ­audience from harmful content. The situation is ambiguous: on the one hand, the right to be forgotten requires search engines to fulfil a kind of editorial role, while on the other hand the performance of this task is in part why search engines cannot be considered as media and, as a key conceptual component of media services, become an actual ‘editor’ in the legal sense of the term. The US legal system would not be comfortable with recognising a general right to be forgotten, and Section 230 CDA also excludes any kind of liability on the part of search engines in this respect. Nonetheless, it is worth pointing out that certain rights akin to the right to be forgotten might exist in certain limited fields of the law, for example under the Fair Credit Reporting Act requiring the erasure of financial data,138 a Californian statute allowing minors to request the subsequent erasure of data they published online ­concerning themselves,139 and other similar provisions and drafts.140 IV.  THE MANIPULATION OF SEARCH RESULTS

A.  The Issue of Search Engine Neutrality Search engines use algorithms with complex configurations, so the events taking place during the period between a user’s entering a keyword and a search ranking’s being compiled by the search engine are hard to detect (indeed, they are impossible to detect for common users). A result is produced on the basis of a user request, but it is not known how it is created. This ‘black box’ phenomenon raises various concerns, including issues connected with democratic public life.141 First, the criteria taken into account when compiling a search ranking (ie the search engine’s ‘opinion’ on the relevance of a website) are not sufficiently clear, so it is difficult to know if a search engine is attempting to manipulate or mislead its users, or if it is indeed presenting the results it considers the most relevant. Second, a search engine relies on a user’s personal data to find the most relevant content for the user, including the user’s search history, browsing history and e-mails, a practice that might constitute a violation of privacy.142 Third, customised search results reinforce the ‘filter bubble effect’143 and might work as a ‘Daily Me’, a term coined by Cass Sunstein, meaning that users could be trapped in a circle of content that is identical to or consistent with their own views, thereby weakening social cohesion

137 See ch 1, section H. 138 Frank Pasquale, ‘Reforming the Law of Reputation’ (2015) 47 Loyola University Chicago Law Journal 515, 530. 139 California Senate Bill Act 568 (Privacy: Internet – Minors). 140 Lawrence Siry, ‘Forget Me, Forget Me Not: Reconciling Two Different Paradigms of the Right to Be ­Forgotten’ (2014) 103(3) Kentucky Law Journal 311, 331. 141 Frank Pasquale, The Black Box Society. The Secret Algorithms That Control Money and Information (Cambridge, MA, Harvard University Press 2015). 142 Ira Rubinstein, ‘Regulating Privacy by Design’ (2012) 26 Berkeley Technology Law Journal 1410. 143 Eli Pariser, The Filter Bubble: What the Internet is Hiding From You (London, Penguin, 2012).

138  Search Engines and limiting the chances of engaging in a meaningful public discourse.144 Grimmelmann denies the existence of such effects by relying on the frequently invoked criticism that search engines prefer mainstream, popular and widely accessed content, while marginal, newer or less commonly accessed pages have a more difficult time when trying to make it to the top of a search ranking. Grimmelmann argues that these two criticisms are­ mutually exclusive and cannot be true at the same time.145 Search engine bias is an issue that is often overlooked by the legal literature on search engines. This concern arises from the possibility of manipulating users by suppressing relevant websites, preferring less relevant pages without a valid reason, blurring the line between organic and sponsored search results, etc. In other words, search engine bias is actually a set of issues pertaining to multiple aspects of the operation of search engines.146 Manipulation is in fact a problem, as it could undermine the democratic process by hindering the functioning of an open and diverse public discourse if a search engine were to interfere with the free flow of content or its delivery to the public instead of satisfying the curiosity of users based on their requests, or if it featured only a minor segment of available content in prominent places.147 It is common knowledge that users normally pick websites from the top of a search ranking, thus search engines push forward hits that are popular and relevant to a high number of users.148 However, the issue of such practices is more closely related to user awareness and the structural characteristics of search engines than to the problem of manipulation. The activities of search engines may also raise concerns from the standpoints of competition law and fair market practices.­ Pushing already popular services to the top of search rankings raises barriers to market entry to new actors,149 and a search ranking may even mislead users by way of manipulation, thereby preventing them from learning about the availability of new service providers or suggesting that a prominently featured hit is more relevant than it actually is.150 However, the manipulation of search rankings by a search engine does not raise any concerns, if it happens at all, when a search ranking is considered merely an opinion, protected by freedom of speech. In Langdon v Google, Inc,151 the plaintiff argued that he was prevented from advertising his service as paid content to users because Google rejected his advertisement, and he claimed that the search engine ranked his website below the placing it should have received. The court believed that obliging the search engine to publish a sponsored link or modify its search results would have constituted ‘compelled speech’, and it would have violated the freedom of speech of the search engine.152 In Search King,153 the court similarly established that search rankings constitute ‘­opinions of the significance of particular websites as they correspond to a search query’,154 and 144 Cass R Sunstein, #Republic. Divided Democracy in the Age of Social Media (Princeton, NJ, Princeton University Press, 2017). 145 Grimmelmann (n 16) 908–909. 146 Bracha and Pasquale (n 2) 1167–70. 147 ibid 1171–73. 148 Hindman (n 10) 68–70. 149 Bracha and Pasquale (n 2) 1174. 150 ibid 1176–79. 151 Langdon v Google, Inc (n 24). 152 ibid 629–30. 153 Search King, Inc v Google Tech, Inc (n 24). 154 2003 US Dist, LEXIS 27193, 11–12.

The Manipulation of Search Results  139 if a search ranking is considered to be an opinion afforded constitutional protection, its content may be restricted by law only in exceptional circumstances. Here, the question arises again as to whether a search engine exercises its freedom of speech during its activities (ie if its activities are covered by the freedom of speech, or if its transfer of information to its users is more of a functional nature) and, if so, whether its opinion is limited to its rankings and search results (as a piece of ‘newly produced’ content) or also includes the content of the listed websites (as mediated content). For the purposes of manipulation, a broader (but not unlimited) range of possible interventions might be permitted where the activity is not protected by free speech doctrines, but the possibilities are more limited (indeed, they are limited to exceptional situations) if constitutional protection is afforded. If a search ranking is considered as an opinion on the websites that are most relevant to the user running the search then the list is by definition a subjective compilation (the result may not be objective due to the requirement of ‘relevance’, especially when it is customised to each user) and, in turn, it is afforded a wide range of protection under the freedom of speech. Similarly to the concept of network neutrality, the term ‘search neutrality’ was coined in response to the recognition of this problem.155 The proponents of requiring search engines to operate in a neutral manner argue that search engines need to afford equal access to all content and must be prevented from interfering with the flow of information. In this sense, Pasquale considers search engines to be essential cultural and political facilities that must be regulated.156 It seems clear that a search engine obliged to operate in a neutral manner would be operating as a mere conduit. However, search engines do not seem capable of remaining neutral in the sense that ISPs can, primarily because the very essence of their activities is the selection and ranking of pieces of content. They would not be performing their most important role if they were not selecting, ranking, promoting and relegating content. Whether or not search engines may provide­ preferential treatment to certain websites without a valid reason, or if they could demote other websites arbitrarily, seems to be a different issue. Grimmelmann breaks down the concept of ‘search engine neutrality’ into its parts and seeks to counter each and every component in turn.157 He recognises that search neutrality originates in the old idea of guaranteeing access to the media by means of the law,158 and he argues that the requirements pertaining to search engines would serve to protect the interests of users (the audience) and not those of content producers.159 However, one might argue that having access has always served the interests of both those accessing content and the recipients of such content, and such interests are hardly separable. The other observations made by Grimmelmann seem convincing: the activities of a search engine may not be based on ‘equality’, because its purpose is selection; they cannot be expected to work ‘objectively’, because the needs of a user can only be guessed; and ‘bias’ is actually a basic function of a search engine, since it needs to rank

155 Kohl (n 41) 221. 156 Frank Pasquale, ‘Dominant Search Engines: An Essential Cultural & Political Facility’ in Szoka and Marcus, (eds) (n 6) 401. 157 Grimmelmann (n 6) 442–58. 158 See ch 1, section IV.H. 159 Grimmelmann (n 6) 441.

140  Search Engines websites relative to each other.160 Grimmelmann believes that not even the pursuit of its own interests, for which Google is frequently reprimanded (by promoting its own services through its search engine and by outperforming the similar services of its competitors), is an anomaly that would require legal action, because, on the one hand, integration with other services could actually improve the performance of the search engine and, on the other hand, the other services provided by Google are usually indeed superior to those of its competitors, as a successful market presence does not depend exclusively on search rankings compiled by search engines.161 In a general sense, the requirement of search neutrality does not seem to be suited as the basis of a regulatory framework, even though it effectively points out the issues relating to the possible manipulation of search results. In a subsequent paper even Grimmelmann would be willing to accept taking action against such manipulation. In Search King, the court agreed with Google in that search rankings are fundamentally subjective and no evidence should be submitted regarding the veracity or falsehood of their content;162 this notion indicates the application of the editor analogy, pursuant to which a search engine is free to compile its search rankings at its own discretion and without any substantive limitation, thereby also shaping the market position of competing enterprises. However, any manipulation of a search ranking by a search engine would constitute misleading users according to Grimmelmann’s advisor analogy, which considers search engines as actors serving the needs of users. In such a situation, a search ranking is considered an opinion on the relevance of websites to a given user and, as such, is considered subjective by nature, but the use of manipulation would make the process dishonest and would hence entail regulation.163 A search ranking may be incorrect or defective and, like any other possibly false opinion, it can be countered, challenged or rebutted. However, if a search engine demotes a website in order to put its operator in a less advantageous market position while knowing that the website concerned would be more relevant to the user than the other website with a higher ranking, the search ranking would not only be false but also knowingly falsified. In such a situation, it would not matter whether the knowing falsification was made by way of ex post manual intervention or by changing the basic settings of the search algorithms. Grimmelmann suggests that such matters could be brought to court under the tort of deceit.164 The arguments used by proponents of the editor analogy are not limited to the freedom of speech of search engines when taking a stand against government interference. For example, Geoffrey Manne and Joshua Wright argue that search neutrality could not be implemented in practice because the operation of a search engine is too complex for a common user to understand, and it would cause difficulties in the course of exercising user rights, while any government interference would be likely to cause more damage than benefits (for the users, amongst others). They therefore conclude that government must be kept from interfering with the activities of search engines.165 Eric Goldman 160 ibid 442–47. 161 ibid 451–53. 162 Search King, Inc v Google Tech, Inc (n 24) 3–4. 163 Grimmelmann (n 16) 912–32. 164 ibid 931–32. 
165 Geoffrey A Manne and Joshua D Wright, ‘If Search Neutrality is the Answer, What’s the Question?’ (2012) 1 Columbia Business Law Review 151, 198–204.

The Manipulation of Search Results  141 argues that the bias of search engines is both inevitable and desirable, and it serves the interests of the users by keeping numerous irrelevant and misleading pieces of information and content from them. The increasing personalisation and customisation of online searches also make claims of bias immaterial.166 It should be noted in this context that the realisation of Sunstein’s Daily Me scenario would cause additional harm to search engine services, not to mention various privacy-related issues. Moreover, increased personalisation does not exclude the possibility of manipulation but may even increase its impact, since the practice of manipulation could also be exercised by search engines in a personalised manner. B.  External Manipulation Search results may also be manipulated externally, even though such external interference becomes increasingly difficult as the complexity of the operation of search engines increases. The most common motive for such interference is to gain a position at the top of the list. For example, companies provide search engine optimisation services to clients who are seeking to improve their search ranking. Such activities seek to ‘circumvent’ the opinion of search engines regarding the relevance of the given website. In this sense, they are contrary to the interests of search engines, which are actively trying to take action against such interference (Google has an impressive track record in such efforts).167 Google-bombing is another method of influencing the public through the joint action of users (eg placing links on their own websites) seeking to ensure that a given website is displayed at the top of search rankings returned in response to certain search keywords. The first example of this practice happened in 2001, when Adam Mathes managed to get his friend’s website displayed as the first hit in response to the expression ‘talentless hack’. The most famous incident was when a campaign of political activities resulted in the official website of then US President George W Bush being returned as the No 1 hit for the expression ‘miserable failure’. In another case, users tried to interfere with the normal operations of Google by preventing jewwatch.com (a website publishing a list of Jewish public figures) from being returned as a top hit for the keyword ‘Jew’.168 Taking defensive action against external manipulation is of utmost importance for the integrity of a search engine’s search ranking (opinion). C.  Internal Manipulation Internal manipulation refers to intervention by the search engine itself, as a result of which a search ranking is not compiled entirely on the basis of relevance as the most important criterion but with a focus on the interests of the search engine operator, its advertisers or other actors (eg a government). 166 Goldman (n 20) 189; Eric Goldman, ‘Revisiting Search Engine Bias (2011) 38(1) William Mitchell Law Review 96. 167 Laidlaw (n 3) 182. 168 James Grimmelmann, ‘The Google Dilemma’ (2009) 53 New York Law School Law Review 939, 941–44.
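The mechanics of the external manipulation described in the previous subsection (search engine optimisation, linkfarms, Google-bombing) can be illustrated with a toy model. The sketch below is a simplified, hypothetical illustration in the spirit of link-based ranking (the idea underlying PageRank), not Google's actual system; the page names and figures are invented. It shows how a ‘linkfarm’ of pages created solely to point at a target site can inflate that site's score, which is the effect that optimisation services and Google-bombing campaigns exploit and that search engine operators try to counteract.

```python
# A toy link-based ranking, in the spirit of PageRank, showing how a linkfarm
# can inflate a target page's score. Purely illustrative: the pages, numbers
# and parameters are hypothetical, and real search engines rely on many more,
# undisclosed signals.

def rank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    scores = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_scores = {}
        for p in pages:
            # A page's score is fed by the scores of the pages linking to it.
            inbound = sum(scores[q] / len(targets)
                          for q, targets in links.items() if p in targets)
            new_scores[p] = (1 - damping) / len(pages) + damping * inbound
        scores = new_scores
    return scores

# A small 'organic' web in which nothing links to the obscure 'target' page.
web = {
    'news':   ['blog', 'shop'],
    'blog':   ['news'],
    'shop':   ['news'],
    'target': ['news'],
}
print('target before linkfarm:', round(rank(web)['target'], 3))

# Fifty linkfarm pages whose only purpose is to point at 'target'.
for i in range(50):
    web[f'farm{i}'] = ['target']
print('target after linkfarm: ', round(rank(web)['target'], 3))
```

Real ranking systems weigh far more than inbound links, but even this caricature shows why defensive action against linkfarms and coordinated linking campaigns matters to the integrity of a search ranking.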

142  Search Engines One of the most common scenarios is when a website is demoted on the search ranking due to the business interests of the search engine. Google offers a number of online services (price comparison service, map, news service, translation, webstore, e-mail, etc), which compete with other services that are also trying to reach the public through ­Google’s search engine (amongst others). If the website of such a competitor is demoted on a search ranking, it is technically prevented from reaching a significant segment of potential customers. This problem was touched upon by a few US court cases. Search King was also offering search engine services as its flagship services when it introduced a new one that established a connection between advertisers and websites. The new service sought to manipulate Google Search by using a linkfarm. In response to the manipulation attempt, Google blocked the ranking of Search King in its PageRank algorithm.169 The measure taken by the search engine operator was clearly a punishment for Search King’s attempted manipulation.170 In KinderStart, the plaintiff operated a vertical search service, used to find useful or interesting content for children. Google considered the service to be its competitor and reduced the ranking of the website on its search rankings returned in response to typical and frequent keywords overnight.171 As mentioned earlier, the court held that Google exercised its freedom of speech by taking such actions; in the view of the court, Google was expressing its opinion, the falsity or veracity of which could not be verified, even though Google itself insisted earlier that its search rankings were objective.172 According to Michael Ballanco, if a search ranking is considered as a constitutionally protected opinion, a search engine hiding the services of others or listing its own service in more prominent positions than justified should be deemed to be engaging in commercial speech, and the corresponding principles and doctrines should be applied accordingly.173 In other words, he argued that the freedom of speech might be limited if users are misled and a false impression is given by search results. If a website is important and relevant for a user (as Google should have assumed if it acted in good faith even without manipulating its own service) then demoting that website on a search ranking is in fact a false statement.174 Most observers considered Google’s practice to be a competition law issue. Even though the US Federal Trade Commission did not find that Google used any restrictive practices,175 some authors still think it possible that the company engages in them.176 In 2017, the European Commission imposed a fine of €2.42 billion on Google for violating EU anti-trust rules. It was established that Google had abused its search engine’s dominant market position when it afforded preferential treatment to another Google service

169 Search King, Inc v Google Tech, Inc (n 24) 1–2. 170 Jennifer A Chandler, ‘A Right to Reach an Audience: An Approach to Intermediary Bias on the Internet’ (2007) 35 Hofstra Law Review 1095, 1110–11. 171 Kinderstart.com, LLC v Google, Inc C 06-2057 JF (RS) (ND Ca 16 March 2007) 2. 172 James Grimmelmann, ‘The Structure of Search Engine Law’ (2007) 93 Iowa Law Review 1, 59. 173 Michael Ballanco, ‘Searching for the First Amendment: An Inquisitive Free Speech Approach to Search Engine Rankings’ (2013) 24(1) George Mason University Civil Rights Law Journal 89, 101–11. 174 Grimmelmann (n 172) 60. 175 Woan (n 31) 304–15. 176 Joshua G Hazan, ‘Stop Being Evil: A Proposal for Unbiased Google Search’ (2013) 111 Michigan Law Review 789.

The Manipulation of Search Results  143 (its price comparison service) in an unlawful manner.177 In July 2018, Google set a new record when it received a €4.34 billion fine, this time for imposing anticompetitive restrictions on device manufacturers and mobile network operators in order to safeguard its position in the Internet search market.178 While these cases have yet to be closed with final effect, it seems that competition law, and not the right to the freedom of speech, is the means that can be used most effectively against Google in the current regulatory environment. The websites listed as paid advertisements by a search engine give rise to different problems. The system associates advertisements with certain keywords and search expressions, and it displays advertised websites to users as top hits (eg a user searching for the keyword ‘refrigerator’ might find that the first three or four hits are links to websites that sell refrigerators and are willing to pay to be ranked at the top of search rankings). It seems clear that this is a commercial transaction, and users should be advised accordingly. Otherwise, even if a search ranking is considered an opinion, the requirement of separating its own ‘edited’ content from sponsored content would be violated.179 An indirect way of manipulating search results could be if Google prevented certain services from purchasing sponsored advertisements. For example, an anti-abortion organisation sought to advertise its website through Google AdWords by associating it with the keywords ‘abortion clinic’. The website featured a list of institutions that do not accept abortion and offer alternative solutions. However, actual abortion clinics complained that the advertisements were misleading to users. Google responded to the complaint, removed the advertisements concerned and, by doing so, indirectly took a stand on an important public matter. The removal was intended to take action against a misleading advertisement, but, considering the sensitive nature of the matter, it is somewhat uncomfortable to see how a nominally neutral search engine can prevent individuals from exercising their right to free speech in such a manner.180 Other similar and even more clear-cut cases are discussed frequently in legal literature. For example, political content may not be listed as an advertisement (eg Google invoked the sensitivity of the subject matter concerning a book on the Guantanamo and Abu Ghraib prisons).181 Also, Google’s AdSense, an online advertisement delivery system that connects websites to advertisers, has no problem with blocking persons promoting lewd or obscene content.182 In addition to economic considerations, Google also seems to pick its clients on the basis of their political opinions. In Langdon,183 the plaintiff operated websites covering

177 ‘Antitrust: Commission Fines Google €2.42 Billion for Abusing Dominance as Search Engine by Giving ­Illegal Advantage to Own Comparison Shopping Service’ European Commission, Press release, Brussels, 27 June 2017, at http://europa.eu/rapid/press-release_IP-17-1784_en.htm. 178 Antitrust: Commission Fines Google €4.34 Billion for Illegal Practices Regarding Android Mobile Devices to Strengthen Dominance of Google’s Search Engine. European Commission, Press release, Brussels, 18 July 2018, at http://europa.eu/rapid/press-release_IP-18-4581_en.htm. 179 Chandler (n 170) 1113–15. 180 Google Removes Anti-Abortion Ads Deemed Deceptive. Wall Street Journal blogs (29 April 2014) at http:// blogs.wsj.com/digits/2014/04/29/google-removes-anti-abortion-ads-deemed-deceptive/. 181 Dawn C Nunziato, ‘The Death of Public Forum in Cyberspace’ (2005) 20 Berkeley Technology Law Journal 1115, 1124. 182 Andrew Tutt, ‘The New Speech’ (2014) 41 Hastings Constitutional Law Quarterly 235, 275. 183 Langdon v Google Corporation, Inc, Yahoo!, Inc, Microsoft Corporation, Inc, case no 06-319-JJF (2007).

144  Search Engines the corruption of government politicians in North Carolina and Attorney General Roy Cooper, as well as violations committed by the Government of the People’s Republic of China. The latter created a particularly thorny problem for Google, as the company harboured grand plans to enter the Chinese market at the time. Google refused to run the advertisements for the websites, and, similarly to Search King, the court ruled that doing so is a way of exercising the company’s freedom of speech, emphasising that any decision to the contrary (ie imposing an obligation to publish the advertisements) would constitute compelled speech and would be acceptable only in rather special circumstances.184 Just like the media, search engines are also free to refuse the publication of third-party content and advertisements.185 The relationship between Google and China is illuminating for another reason. While Google was present in the Chinese market, it exercised self-censorship of the results of its search engine at the request of the Chinese Government. While the Western versions of its services also include preliminary filtering (eg for pornographic, violent or hateful content), the filtering is consistent with reasonable self-regulation exercised in the protection of users. Even though the filtering goes beyond the limitations required under the doctrine of freedom of speech (meaning that even pieces of lawful content might remain unlisted), it does not raise any considerable constitutional concerns in and of itself. However, the limitation of political opinions and of the dissemination of information required for the discussion of public affairs is an entirely different story. A visual example mentioned by Grimmelmann is that US users searching for the keyword ‘Tiananmen Square’ are shown, by Google’s image search engine, the tragic and moving images of the demonstrations in 1989, while the Chinese version of the service returns only idealistic pictures of the square, probably intended for tourists, without any tanks and bodies.186 It should be mentioned to Google’s credit that the company eventually left the Chinese market due to the high number of data requests submitted by the Chinese Government regarding the country’s citizens,187 but the plain fact that it was willing to engage in political censorship in the hopes of profit reveals a frightening picture of the power and considerable influence of the company over the public. The case of a search engine owned and developed by the Chinese Government may also find its way to a high US court. In Zhang v Baidu,188 a New York-based civil group brought proceedings against the operator of Baidu, a Chinese search engine, for not listing content posted online by the group that criticised the Chinese Government. The plaintiff argued that this act of the defendant comprised various violations, including conspiracy to violate the plaintiff’s civil rights189 and violation of its civil rights on the basis of race.190 Again, the court ruled in favour of the search engine operator and established that the selection of pieces of content constitutes an editorial judgement

184 ibid 7. 185 ibid. 186 Grimmelmann (n 168) 947–50. 187 ‘Google and China’ (editorial) New York Times (23 March 2010) at https://www.nytimes.com/2010/03/24/ opinion/24wed2.html. 188 Zhang v Baidu (n 24). 189 42 USC § 1985. 190 42 USC § 1981.

Summary  145 on the part of the operator, making up part of the operator’s freedom of speech and, as such, is protected by the First Amendment.191 While the decision is consistent with other US courts’ jurisprudence on similar matters, it still seems troublesome that government censorship is not fought but protected with great force by the legal doctrine of free speech. V. SUMMARY

As Uta Kohl notes, search engines may be exposed to strong, possibly irresistible ­pressure from economic and government actors, reinforcing their role as gatekeepers and possibly having a detrimental impact on the democratic process.192 It seems hard to dispute that search engines should be considered ‘editors’. This position is supported by (i) the preliminary and self-regulatory filtering of content that could offend users (pornography, violence, etc); (ii) the role search engines play in the removal of links to content violating copyright, personality rights or other rights; and (iii) the possibility of manipulating their search results in their own or someone else’s interest. However, such activities should not be confused, as the activities mentioned in points (i) and (iii) are conducted by a search engine on its own initiative, while the ­activities mentioned in point (ii) are required by government, even though all of these activities are similar, in that each of them represents a deviation from the mission of the search engine, and the compilation of search rankings that are most relevant to users is influenced by external considerations. This is a kind of editorial activity, which, coupled with the special role search engines play in the online public sphere, makes search engines highly influential entities that cannot be considered passive at all. Recognising the editorial nature of the activities of search engines affords them protection against regulatory interference only in the US. Even if the analogy of an editor, as described at the beginning of this chapter, is accepted, the European approach dictates that search engines are considered a special type of media, and the main goals and ­principles of media policy can be applied to them. However, accepting and implementing the advisor analogy proposed by Grimmelmann would mean that regulatory measures may be taken concerning search engines, even if they are protected by the First ­Amendment in the US.



191 Zhang v Baidu (n 24) 6.
192 Kohl (n 41) 234.

5 Social Media Platforms I. INTRODUCTION

Social media platforms have become the primary arena of online public life. There is no generally accepted definition for such platforms (also known as social networks). For the purposes of this chapter, social media platforms also include video-sharing portals, where users can upload publicly available content, as well as platforms where user-generated content (including videos, texts, images, links, etc) is made available to, and then shared by, an audience selected by the user. This chapter applies the same approach towards YouTube, Facebook and Twitter. The main reason for doing so is that, from a freedom of speech perspective, the activities of these platforms are quite similar and can be examined using similar methods. Each of these services is capable of restricting user-generated content, and they are even legally required to do so from time to time. The Audiovisual Media Services Directive1 of the EU has introduced common rules for these two different services – the scope of the Directive extends to audiovisual content that is present on both video-sharing platforms and social media platforms.2 According to the European Commission’s Communication, an online platform may fall into one or more of the following categories: online advertising platforms, marketplaces, search engines, social media and creative content outlets, application distribution platforms, communications services, payment systems and platforms for the collaborative economy.3 However, it does not seem possible to use the very same approach for all social media platforms due to their highly diverse nature; despite their name, they are not ‘media’ in the traditional sense of the word, even though regulators are making attempts to regulate them as such. The problems of relevance to the freedom of speech arising in the course of their activities allow them to be examined as similar entities. The challenges to governments posed by such platforms are quite considerable: governments need to afford protection to users against harm caused by various speech acts,

1 Directive 2010/13/EU on the coordination of certain provisions laid down by law, regulation or administrative action in Member States concerning the provision of audiovisual media services (Audiovisual Media Services Directive) [‘AVMS Directive’], as amended by Directive (EU) 2018/1808 of the European Parliament and of the Council of 14 November 2018 (‘new AVMS Directive’). 2 Francisco J Cabrera Blázquez et al, ‘The Legal Framework for Video-Sharing Platforms’ Iris Plus, 2018-01, at https://rm.coe.int/the-legal-framework-for-video-sharing-platforms/16808b05ee. 3 Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions: Online Platforms and the Digital Single Market. Opportunities and Challenges for Europe, COM(2016) 288 final, at http://ec.europa.eu/transparency/regdoc/ rep/1/2016/HU/1-2016-288-HU-F1-1.pdf.

Social Media Platforms and the Democratic Public Sphere  147 while they also need to establish safeguards for the freedom of speech4 – it is often a difficult task to reconcile these two objectives, as will be shown in this chapter. Such challenges are further complicated by the fact that governments tend to cede the role of arbiter of the legality of user-generated content and speech to the platforms themselves. II.  SOCIAL MEDIA PLATFORMS AND THE DEMOCRATIC PUBLIC SPHERE

A.  New Forms of Speech and the Expansion of the Public Sphere It is indisputable that social media platforms have brought about fundamental changes in the structure of the public sphere. Such websites allow a nearly unlimited number of persons to publish and share their views and opinions, and to learn and comment on the opinions of others. The level of publicity of content published on these platforms depends on various factors, such as the size of the platform (user numbers), the number of contacts of the user concerned and the volume of other users forwarding (sharing) it. In addition to the method of reaching the public and the category of publicity, the form of published opinions has also changed. An obvious example is when users express their opinion by pressing Facebook’s ‘like’ button. Any user of the platform can express his or her opinion regarding a piece of content by clicking or tapping the ‘like’ button. Until 2016, this service allowed users to express a single emotion, that is that they actually like the content. Today, users are able to express a range of emotions (love, amused, amazed, sad or angry). No ‘dislike’ button is offered to the users, probably because ­Facebook does not wish to upset its advertisers.5 In its early days, it was unclear whether or not the use of the ‘like’ button qualified as speech. In Bland v Roberts, the federal district court ruled that ‘liking’ a given piece of content does not qualify as protected speech, since it lacks any substance or material content that would be eligible for constitutional protection.6 In other words, the actual content of an opinion is still missing, as pressing the ‘like’ button does not reflect any affiliation or endorsement regarding the content concerned. The facts of this case occurred during a campaign to elect the sheriff of Hampton town (since the sheriff is a directly elected official), when a new candidate was running against the eventually re-elected former sheriff. Certain staff members working in the sheriff’s office expressed their support for the challenger by pressing the ‘like’ button on the Facebook campaign page. When the eventually re-elected sheriff did not re-appoint those workers to their former position, they filed a lawsuit due to the violation of their freedom of association and speech. The district court ruled against the plaintiffs and refused to apply the

4 Recommendation CM/Rec(2012)4 of the Committee of Ministers to member States on the protection of human rights with regard to social networking services (Adopted by the Committee of Ministers on 4 April 2012 at the 1139th meeting of the Ministers’ Deputies) at https://search.coe.int/cm/Pages/result_details.aspx? ObjectID=09000016805caa9b, para 6. 5 Kevin Whipps, ‘Facebook is Changing Your Like Button into a Range of Emotions’ Creative Market (2 May 2016) at https://creativemarket.com/blog/facebook-is-changing-your-like-button-into-a-range-of-emotions. 6 Bland v Roberts 857 FSupp 2d 599 (ED Va 2012).

148  Social Media Platforms doctrine of symbolic speech,7 thereby exposing itself to harsh criticism from communities of professionals dealing with social media platforms.8 On appeal against the district court’s judgment, the Fourth Circuit court established the violation of the freedom of speech.9 The court accepted pressing a ‘like’ button as being symbolic speech, which expresses an opinion that can be perceived without any words, similarly to displaying a support sign for a candidate on one’s front lawn during an election campaign (as is customary in the US). The interest of employees in expressing their opinion prevailed over the sheriff’s interest in serving the community without any disturbance. Indeed, pressing a ‘like’ button expresses a legally relevant opinion, as was also recognised by the appeal court. This must be accepted, even if ‘liking’, or expressing sadness or anger concerning a given piece of content (possibly a long and complex piece comprising numerous and various statements), is not exactly a nuanced expression of one’s position, and even if a person seeing the approval of another user may ­misunderstand his or her emotional state or arrive at a wrong conclusion. Nonetheless, the ‘like’ is perceived as an opinion without any written word, and the constitutional protection afforded to an opinion may not be conditional on the opinion’s being understood correctly. There is another possible approach to the issue of whether pressing a ‘like’ button is the expression of an opinion. It is also a relevant question whether or not re-sharing a link to or liking a given (possibly infringing) piece of content should be considered as the reiteration/republishing of that piece of content. This idea was rejected by a US court on the ground that publishing a link to an article does not change the content published by another person in any way; ‘liking’ a piece of content is not the same as republishing either.10 However, the situation is different if a user not only publishes a link to a piece of infringing content, but also shares the content itself with others. As was made clear in the English McAlpine case,11 the user becomes a publisher in such a scenario. Yet if pressing a ‘like’ button is considered an opinion, it is therefore also subject to the general limitations of the freedom of speech. Beyond the scope of the First Amendment, jurisdictions with rules that differ from US law may place a stronger emphasis on the protection of personality rights when considering such matters. ‘Liking’ (ie expressing agreement with) false and insulting defamatory statements may be considered defamation in and of itself.12 As new forms of speech are identified by legal practice with regard to new public platforms, one might assume that such forums have nothing but beneficial effects on the

7 See ch 1, section III.F. 8 Ira P Robbins, ‘What Is the Meaning of Like: The First Amendment Implications of Social-Media ­Expression’ (2013) 7(1) The Federal Courts Law Review 127; Leigh E Gray, ‘Thumb War: The Facebook “Like” Button and Free Speech in the Era of Social Networking’ (2013) 7 Charleston Law Review 447. 9 Bland v Roberts no 12-1671 (4th Cir 2013). 10 Lizzy McLellan, ‘Court: Facebook Link, “Like” not Defamation’ Pittsburg Post-Gazette (1 December 2015) at http://www.post-gazette.com/business/legal/2015/12/01/Court-Facebook-link-Like-not-defamation/ stories/201512010001. 11 The Lord McAlpine of West Green v Sally Bercow [2013] EWHC 1342 (QB). 12 Gill Grassie, ‘The Traps of Social Media: To “Like” or Not to “Like”’ Oxford University Press blog (18  September 2017) at https://blog.oup.com/2017/09/traps-of-social-media/; ‘Man Fined by Swiss Court for “Liking” Defamatory Comments on Facebook’ Guardian (30 May 2017) at https://www.theguardian.com/ technology/2017/may/30/man-fined-swiss-court-liking-defamatory-comments-facebook.

Social Media Platforms and the Democratic Public Sphere  149 expansion of public discourse and that they contribute to a growing volume of opinions competing in the marketplace of ideas. However, some studies suggest that social media, by their nature, may also facilitate the suppression of certain user opinions. A study by Pew Research Center describes this phenomenon as the ‘spiral of silence’ and argues, taking the revelations of Edward Snowden as an example, that users preferred to discuss that case in person rather than on social media, and that social media platforms did not offer a real alternative for those who did not want to discuss the matter in person (if a user did not talk about it with his friends, he did not post about it on social media, either). The study found that those who disagree with the majority tend to remain silent on Facebook, just as they tend to remain silent during a family dinner.13 This example might serve as a warning for all of us that these new platforms might facilitate and stimulate the exchange of opinions much more effectively than previous technological innovations, but they are not necessarily able to overcome some ‘traditional’ barriers to expressing an opinion. It remains unclear if this issue may have been addressed through previous Canadian jurisprudence. The Supreme Court of Canada’s decision in Crookes v Newton14 (where the Court ruled that hyperlinks in an article did not constitute publication in itself) raises a question regarding the extent of expression, re-publication or affirmation. The acts of liking, re-tweeting or employing an emoji can be understood as forms of expression that are positive acts, distinguishable from hyperlinks insofar as they are forms of expression. It is understood that of these forms of communication, re-tweeting may be the closest to reproducing a hyperlink (following Crookes), and so there may be distinctions made within the forms of expression denoted here. B.  ‘Bubbles’ and Other Psychological Effects of Social Media Platforms The ‘filter bubble’ theory15 is mentioned several times in previous chapters, as an argument that online gatekeepers weaken social cohesion by generally presenting pieces of content of which the recipient approves, or with which he or she agrees. Such pieces of content are selected on the basis of data collected about individual users, meaning that the more time a user spends on a given platform (or the more capable of tracking the online activities of a user a platform is), the more capable the platform becomes of identifying content that is relevant to the given user, showing him or her carefully targeted advertisements and producing a news feed (as does Facebook) for that user. As humans tend, for psychological reasons, to accept content with which they agree, and which corresponds to their own opinions, platforms strive to find and present more and more of the same kind of content, thereby making all other opinions – those that challenge or differ from the user’s own opinions – less and less visible. Cass Sunstein calls this phenomenon the Daily Me and identifies social media platforms as its most important proponents.16 13 Keith Hampton et al, Social Media and the ‘Spiral of Silence’ (Pew Research Center, 2014) at http://www. pewinternet.org/2014/08/26/social-media-and-the-spiral-of-silence/. 14 Crookes v Newton [2011] SCC 47. 15 See ch 2, section I.E.ii. 16 Cass R Sunstein, #Republic. Divided Democracy in the Age of Social Media (Princeton, NJ, ­Princeton University Press, 2017). 
See also Peter Coe, ‘(Re)embracing Social Responsibility Theory as a Basis for

150  Social Media Platforms According to this popular theory, social media weaken the cohesion of society, hinder dialogue between people who hold different views and convictions, create closed online ‘enclaves’ and intensify the impact of extreme opinions.17 Even though the Internet promised to reinforce the democratic process and to reinvigorate discourse between citizens who were marginalised by the media, it makes an abundance of information easily available, and the absence of a common ‘information sphere’ or ‘digital agora’ that could be used by the various groups of society places democracy at risk and leads to the emergence of conflicting opinion groups that are segregated by ideology.18 Opponents of the ‘filter bubble’ theory argue that individual user habits fundamentally refute the axioms of that theory, because a large number of news sources are available online and no single platform is capable of locking its users into an enclave, as users tend to obtain information from diverse and independent sources.19 Technology is incapable of hindering the functioning of democracy as long as users acquire information from diverse sources. Research commissioned by Facebook found that the algorithm that compiles the news feed suppresses sources that are unlikely to meet the preferences of a user to only a minimal extent (cross-cutting). Even though users also have other filter options for removing additional sources from their news feed, the service provides every user with a significant volume of ‘ideologically diverse’ news.20 The methodology and conclusions of this research were subject to much professional criticism, and Facebook was even accused of attempting to manipulate public opinion as a means of escaping its responsibilities concerning democracy.21 It seems desirable for the public sphere that users obtain their information from diverse sources. The necessary question is ‘From what sources?’ While a large number of users are active on more than one platform at the same time, those platforms tend to apply similar business models and content-related principles, meaning that they are unlikely to counter but seem to reinforce the bubble effect. Another question is whether the ability to collect information from various sources might become weaker if a platform were allowed to grow to a size that eliminated any real possibility of obtaining diverse information. In 2017, about two-thirds of the adult population of the US obtained information on public matters from Facebook.22 While they might have also received information Media  Speech: Shifting the Normative Paradigm for a Modern Media’ (2018) 69(4) Northern Ireland Legal Quarterly 403, 415–16. 17 Nathan Heller, ‘The Failure of Facebook Democracy’ The New Yorker (18 November 2016) at https://www. newyorker.com/culture/cultural-comment/the-failure-of-facebook-democracy. 18 Vyacheslav Polonski, ‘The Biggest Threat to Democracy? Your Social Media Feed’ World Economic Forum (4 August 2016) at https://www.weforum.org/agenda/2016/08/the-biggest-threat-to-democracy-your-socialmedia-feed/. 19 William H Dutton et al, Social Shaping of the Politics of Internet Search and Networking: Moving Beyond Filter Bubbles, Echo Chambers, and Fake News, Quello Center Working Paper no 2944191, at https://papers. ssrn.com/sol3/papers.cfm?abstract_id=2944191. 20 Eytan Bakshy, Solomon Messing and Lada A Adamic, ‘Exposure to Ideologically Diverse News and Opinion on Facebook’ Science (5 June 2015) at science.sciencemag.org/content/348/6239/1130. 
21 Christian Sandvig, ‘The Facebook “It’s Not Our Fault” Study’ Social Media Collective (7 May 2015) at https://socialmediacollective.org/2015/05/07/the-facebook-its-not-our-fault-study/; Eszter Hargittai, ‘Why doesn’t Science Publish Important Methods Info Prominently?’ Crooked Timber (7 May 2015) at http:// crookedtimber.org/2015/05/07/why-doesnt-science-publish-important-methods-info-prominently/. 22 Elisa Shearer and Jeffrey Gottfried, News Use Across Social Media Platforms 2017 (Pew Research Center, 2017) at http://assets.pewresearch.org/wp-content/uploads/sites/13/2017/09/13163032/PJ_17.08.23_socialMedia Update_FINAL.pdf.

Social Media Platforms and the Democratic Public Sphere  151 from other sources, the length of time spent by individual users on the platform (the very real psychological addiction of users) and Facebook’s detrimental impact on the media market (a reduction in the number of independent news outlets) makes it doubtful that the average citizen has enough time and opportunities to obtain information carefully from other sources as well. It is not difficult to appreciate the global significance of this problem if we take into consideration that Facebook had 2.27 billion active (monthly) users in third quarter of 2018.23 Facebook’s business model is based on its users sharing more and more personal data and content, and engaging in increasingly intensive interactions with each other. To do so, users need to trust the company, so it seeks to build trust in manipulative ways.24 The platform also seeks to foster a psychological dependence in its users, so that they return more often to and spend more time using the service. The ‘like’ button mentioned in section II.A is a means to this end, as it exploits the human weakness of users and has become a unit of measurement of the acceptance and love expressed by others toward the user’s own content; as such, users need to check their level of acceptance regularly, and they experience a ‘minor dopamine surge’ when it increases.25 Even though it is difficult to see the platform’s long-term impact on the functioning of the human brain, it is already clear, quite alarmingly, that the platform is capable of manipulating people. According to research carried out using data provided by Facebook, such manipulation is not difficult at all. Some 700,000 Facebook users were unwittingly involved in a study into emotional impacts.26 The news feed of these users was manipulated over a period of one week: some users were presented with content conveying mostly negative emotions, while others were provided with more positive content. The users who received more negative, sad and angry content became sadder and angrier themselves (as reflected in the content they published), while the other users, who received happier content, became more cheerful.27 Even though it has already been argued that dependence on Facebook should be recognised as a psychiatric illness,28 some of the experts who were involved in the implementation of the service argue that Facebook creates bubbles and emotional dependence, making users spend otherwise useful and productive time on the service, and making communication through the platform unsuitable as a forum for discussing public affairs, thereby ripping society apart.29

23 ‘Number of Monthly Active Facebook Users Worldwide as of 3rd Quarter 2018 (in millions)’, Statista, at https://www.statista.com/statistics/264810/number-of-monthly-active-facebook-users-worldwide/ (periodically updated). 24 Ari E Waldman, ‘Manipulating Trust on Facebook’ (2016) 29 Loyola Consumer Law Review 175. 25 Olivia Solon, ‘Ex-Facebook President Sean Parker: Site Made to Exploit Human “Vulnerability”’ Guardian (9 November 2017) at https://www.theguardian.com/technology/2017/nov/09/facebook-sean-parker-vulnerabilitybrain-psychology. 26 Robinson Meyer, ‘Everything We Know About Facebook’s Secret Mood Manipulation Experiment’ The Atlantic (28 June 2014) at https://www.theatlantic.com/technology/archive/2014/06/everything-we-knowabout-facebooks-secret-mood-manipulation-experiment/373648/. 27 Adam DI Kramer, Jamie E Guillory and Jeffrey T Hancock, ‘Experimental Evidence of Massive-Scale Emotional Contagion through Social Networks’, at http://www.pnas.org/content/pnas/111/24/8788.full.pdf. 28 Michael Fenichel, Facebook Addiction Disorder (FAD) at http://www.fenichel.com/facebook/. 29 Julia C Wong, ‘Former Facebook Executive: Social Media is Ripping Society Apart’ Guardian (12 December 2017) at https://www.theguardian.com/technology/2017/dec/11/facebook-former-executive-ripping-society-apart.
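Before turning to the question of social media as a public forum, the feedback loop described above (engagement data driving content selection, which in turn shapes further engagement) can be made concrete with a crude sketch. The code below is a hypothetical toy model, not the actual news feed algorithm of Facebook or any other platform; the topics, click probabilities and parameters are invented purely for illustration. It shows how a modest initial preference, once fed back into an engagement-ranked feed, tends to crowd every other topic out of view.

```python
# A toy engagement-driven feed: a small initial preference is amplified until
# the feed shows little else (a 'filter bubble'). The topics, probabilities
# and parameters are invented for illustration only.
import random
from collections import Counter

random.seed(1)
TOPICS = ['politics-A', 'politics-B', 'sport', 'science']
CATALOGUE = [{'id': i, 'topic': random.choice(TOPICS)} for i in range(400)]

def build_feed(engagement, size=10):
    """Rank items by how often the user has engaged with their topic."""
    def score(item):
        return engagement[item['topic']] + random.random()  # noise breaks ties
    return sorted(CATALOGUE, key=score, reverse=True)[:size]

engagement = Counter()
preferred = 'politics-A'  # the user's slight initial leaning
for day in range(30):
    for item in build_feed(engagement):
        click_probability = 0.6 if item['topic'] == preferred else 0.2
        if random.random() < click_probability:
            engagement[item['topic']] += 1

# After a month the feed has collapsed onto a single topic,
# typically (though not inevitably) the one the user slightly preferred.
print(Counter(item['topic'] for item in build_feed(engagement)))
```

Even in this caricature, a few rounds of feedback are enough for the feed to narrow to a single topic, which is the structural tendency that the ‘filter bubble’ and ‘Daily Me’ critiques describe; real recommender systems add many corrective signals, but the dynamic they must guard against is of this kind.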

152  Social Media Platforms C.  Social Media as a Public Forum Despite all their negative aspects, social media platforms are indeed forums of public communication. Such communication is subject to the general restrictions on the freedom of speech,30 and it also promotes and facilitates the exercise of the freedom of assembly by persons embracing similar ideals.31 In extreme situations, social media may also facilitate considerable political changes, as happened during the Arab Spring of 2011, when young anti-establishment protesters used social media (among other means) to organise their activities.32 (Later, the role that the Internet and in particular social media played in the Arab Spring came under serious questioning, parallel with the evaporation of the promises of the events.)33 The Internet freedom doctrine, which also applies to social media, is an effective instrument of US foreign policy for dismantling autocratic regimes.34 The attractive slogan of Internet freedom can also be said to hide the intention of interfering in politics with the goal of preserving US military and economic dominance.35 Social media platforms provide a means of communication for the general public and are an indispensable part of everyday life for many people – and they may also be used to commit criminal offences. In this context, it is an interesting question whether certain users may be banned from such platforms on the basis of crime prevention considerations. Sex offenders constitute a group of particular importance, as they can easily contact minors via the platforms. John Hitz argues that a general ban on those convicted of such offences (following the enforcement of their punishment) would be inconsistent with freedom of speech.36 Once rehabilitated, sex offenders may exercise their freedom of speech without any restriction;37 according to US doctrine, banning such persons from commonly used platforms (ie not only those exclusively used by minors) would constitute a content-neutral restriction on speech38 that is not tailored narrowly enough.39 This very issue was considered by the US Supreme Court in Packingham v North Carolina,40 which is the second most important decision on the freedom of online speech since Reno v ACLU 20 years earlier.41 Justice Kennedy, drafter of the majority ­opinion, described the Internet as the ‘modern public square’, where members of the

30 Benjamin F Jackson, ‘Censorship and Freedom of Expression in the Age of Facebook’ (2014) 44 New Mexico Law Review 121, 136. 31 ibid 137. 32 Russell L Weaver, From Gutenberg to the Internet: Free Speech, Advancing Technology, and the Implications for Democracy (Durham, NC, Carolina Academic Press, 2013) 74–84. 33 Paul Bernal, ‘Publication Review. Free Speech in an Internet Era: Papers from the Free Speech Discussion Forum. C Walker, RL Weaver (eds)’ [2015] Public Law 192, 195. 34 Sarah Joseph, ‘Social Media, Political Change, and Human Rights’ (2012) 35 Boston College International and Comparative Law Review 145, 171; Evgeny Morozov, The Net Delusion: The Dark Side of Internet Freedom (New York, Public Affairs, 2011) 231–34. 35 Yasha Levine, Surveillance Valley: The Secret Military History of the Internet (New York, Public Affairs, 2018) 234–36, 247. 36 John Hitz, ‘Removing Disfavored Faces from Facebook: The Freedom of Speech Implications of Banning Sex Offenders from Social Media’ (2014) 89(3) Indiana Law Journal 1327. 37 ibid 1341. 38 See ch 1, section III.A.i. 39 Hitz (n 36) 1349–56. 40 Packingham v North Carolina 137 SCt 1730 (2017). 41 Reno v ACLU 521 US 844 (1997). See ch 1, section II.E.

public exchange opinions.42 While the lower court considered the law of North Carolina as a restriction on action but not on speech, and consequently applied a less stringent standard,43 the Supreme Court disagreed and changed the ruling, as the law's restriction on speech was not narrowly tailored.44 By comparing the Internet to a physical space, Justice Kennedy raised the issue of whether the public forum doctrine could be applied with regard to the Internet: US law distinguishes three different kinds of public forums, each of which is subject to different standards concerning the limitation of speech. The first kind includes traditional public forums (public squares, streets and parks), where the exercise of the freedom of speech is customary. The second kind includes designated or limited public forums, which are traditional but specifically designated places for exercising the freedom of speech (eg conference halls that can be used with the permission of their owner or manager). The third kind includes non-public forums that do not serve as a place where anyone can speak but where speech takes place nonetheless (eg hospitals, prisons and military bases).45 Progressively more restrictions on free speech may be applied as one proceeds from the first to the third category of forum. In his concurring opinion to the Packingham judgment, Justice Alito suggested that the court might be wrong to compare the Internet per se to streets and public parks.46 The comparison may be valid regarding social media platforms used by government organs and bodies, but most of the communication and exchange of ideas conducted through such platforms is private in nature,47 meaning that the doctrine of public forums cannot be applied, and the platforms in general, as well as their individual users (regarding their own profile), are free to adopt their own rules of speech. According to this approach, user access to such platforms may not be prohibited by the government using legal means; however, the service provider may certainly do so without any limitation, with possible exceptions when applying anti-discrimination rules. This approach may be challenged in that it allows a platform to act as a kind of ‘government’ in itself, meaning that the freedom of speech should also be guaranteed with regard to the restrictive practices of the platform.48 However, such requirements would come close to challenging the ownership rights of the platforms themselves. According to the traditional approach to freedom of speech, a social media platform is similar to a privately-owned shopping centre, in that its owner or operator can ban or remove any person from the premises if the rules of using the property as determined by the owner have been violated.49 This might result in a situation where services that once promised to facilitate the exercise of individual freedoms enter into deals with oppressive regimes in the hope of business advantages.50
42 Packingham (n 40) 1737. 43 See ch 2, section I.F.v. 44 Packingham (n 40) 1736–38. 45 Lyrissa Lidsky, ‘Public Forum 2.0’ (2011) 91 Boston University Law Review 1975, 1980–92. 46 Packingham (n 40) 1743. 47 ‘Packingham v North Carolina’, Case Note, (2017) 131 Harvard Law Review 233, 238. 48 Kate Klonick, ‘The New Governors: The People, Rules, and Processes Governing Online Speech’ (2018) 131 Harvard Law Review 1599, 1609.
49 ‘Facebook Is Not the Public Square’ New York Times (25 December 2014) at https://www.nytimes. com/2014/12/26/opinion/facebook-is-not-the-public-square.html. 50 Mike Isaac, ‘Facebook Said to Create Censorship Tool to Get Back into China’ New York Times (22 ­November 2016) at https://www.nytimes.com/2016/11/22/technology/facebook-censorship-tool-china.html.

Social media platforms are also used by public institutions and officials to provide information, collect opinions, etc, and the public profile of a politician or a local government may be subject to different rules than those of private users. President Donald Trump is a prominent user of Twitter. It seems likely that he has violated the speech-related policies of the service on numerous occasions, but he has not been banned from using its services, as whatever he says is clearly newsworthy (and thus good for advertising the service), regardless of its possibly defamatory or insulting nature.51 Apart from the issue of whether he can be banned from using Twitter, another question is whether or not he (or the manager of his account) can prohibit others from reading his messages (ie ‘following’ him) – as numerous US citizens have experienced. According to the dominant approach, the public forum doctrine may not be applied concerning the relationship between Twitter and its users, but it may be applied concerning the relationship between two users, in particular if one of them is an elected public official. In other words, an area of a social media platform may be considered a public forum if it is used for public political communication.52 This approach was reaffirmed by a district court in Trump.53 Numerous plaintiffs filed a lawsuit against the President and the White House staffer managing his Twitter account because they had been banned from following that account and, as a result of the ban, could not have direct access to and comment on the President’s tweets, nor could they read the related comments; they could only learn about communications made by the President through comments made by their contacts. After analysing the applicability of the public forum doctrine, the court ruled that the President’s account was a designated or limited forum from which a person whose speech did not cross the limits of the freedom of speech could not be banned (the plaintiffs were banned because of their tweets that disputed the content of the presidential tweets).54 The White House is not the owner of the service, not even in the context of the single Trump account concerned, but the account is operated under its control and supervision, which is enough to consider it a public forum.55 The restriction of political opinions is unacceptable in such a forum56 (meaning, a contrario, that even a political figure might be banned from the service if he or she crosses the limits of free speech).

D.  ‘Cheap Speech’ and the Traditional Media

In an essay published in 1995, Eugene Volokh sought to predict the future path of the transformation of the online public sphere.57 He welcomed the phenomenon he called ‘cheap speech’, as he believed it would eliminate the existing technological scarcity and

51 Jason Murdock, ‘Twitter: Why the Social Network Still Won’t Ban President Donald Trump’ Newsweek (11 April 2018) at https://www.newsweek.com/twitter-why-social-network-still-wont-ban-president-trump-881455. 52 Lidsky (n 45) 1994–2002. 53 Knight First Amendment Institute at Columbia University v Trump no 17 Civ 5205 (NRB) (SDNY 23 May 2018). 54 ibid paras 59–62. 55 ibid paras 41–50. 56 ibid paras 67–68. 57 Eugene Volokh, ‘Cheap Speech and What It Will Do’ (1995) 104 The Yale Law Journal 1805.

enable any person to articulate an opinion on public matters cheaply (or even for free) and without any intermediary (television, radio, press), thereby moving the democratic process of decision-making onto broader and more direct foundations.58 Over two decades later, it seems unclear whether such ‘cheap speech’ is indeed a welcome development. The opportunities afforded by online mass communication and the emergence of social media platforms have challenged the business model of traditional journalism and the enforcement of professional standards. Due to the drop in revenue from advertising and the material weakening of the press, investigative journalism has lost its prominent role and has been replaced by sensationalist and impulse-based content production. User habits have also changed, and lengthy and thorough articles (if written at all) have a difficult time finding (sufficient) readership. These phenomena facilitate the spread of ‘fake news’, while the decline of local news services enables the spread of local corruption and the deterioration of public political discourse, making a mockery of election campaigns and breeding extremism.59 Some studies found that anonymous ‘trolls’, who challenge reasonable public discourse on numerous forums, cannot be disciplined or banned, and there is no adequate solution to the problems they raise. Trolls therefore keep provoking and insulting others and making it impossible to engage in a thoughtful and progressive debate. Moreover, this state of affairs does not even represent a problem for the social media platforms but rather a benefit, as their economic interests seem to be better served by heated and active interaction than by calm and reasonable discussion of public affairs.60 Indeed, social media, as a new means of consuming news, seem not to be conducive to revealing any truth.61 Whether or not it comes from an authentic or reliable source, all news is presented in Facebook’s news feed in the same way as gossip and scandals; sensationalist titles and reports are much more popular than pieces of actual journalism (which are difficult to read on mobile devices anyway), and even the existing products of real journalistic effort get lost in the endless and continuously updating flood of information. The market of traditional media is occupied by personalised news feeds and the freely available mass of junk news.62 Social media have conquered the production and consumption of news. The general consensus on a commonly accepted ‘truth’ and some common ground that connects members of society has been weakened or even eliminated – every social group, if not each and every person, has its own ‘truth’ on the Internet.63 The professional requirements of accuracy and the verification of facts have also fallen victim to the decline of the institutionalised press.64 In this changed market
58 ibid 1849. 59 Richard L Hasen, Cheap Speech and What it Has Done (To American Democracy), School of Law, University of California, Irvine, Legal Studies Research Paper Series no 2017-38, 202–16. 60 Lee Rainie, Janna Anderson and Jonathan Albright, The Future of Free Speech, Trolls, Anonymity and Fake News Online (Pew Research Center, 2017) at http://assets.pewresearch.org/wp-content/uploads/sites/14/2017/03/28162208/PI_2017.03.29_Social-Climate_FINAL.pdf. 61 See ch 1, section I.A.
See also Peter Coe, ‘Redefining “Media” Using a “Media-as-a-Constitutional-Component” Concept: An Evaluation of the Need for the European Court of Human Rights to Alter its Understanding of “Media” within a New Media Landscape’ (2017) 37(1) Legal Studies 25, 42–44. 62 Matt Taibbi, ‘Can We Be Saved From Facebook?’ Rolling Stone (3 April 2018) at https://www.rollingstone.com/politics/politics-features/can-we-be-saved-from-facebook-629567/. 63 Katharine Viner, ‘How Technology Disrupted the Truth’ Guardian (12 July 2016) at https://www.theguardian.com/media/2016/jul/12/how-technology-disrupted-the-truth. 64 Lili Levi, ‘Social Media and the Press’ (2012) 90 North Carolina Law Review 1531, 1555–72.

environment there is no pressure to meet popular demand, and it is increasingly difficult to enforce legal liability; these were the two main means of holding the press accountable by or on behalf of the public. Facebook has not only occupied the news and traditional journalism – it has occupied everything, from political campaigns to the banking system, from the entertainment industry to trade. Not even government or national security functions the same way as it used to in the pre-social media era.65 In such a landscape, it seems to have become the responsibility of governments to promote the production of content and news, and to guarantee equal access to information through grants and regulations in order to strengthen democracy. Social media platforms open the gates to the spread of false news, that is, the deliberate dissemination of false information.66 Despite appearances, this is not a malfunction that could be dealt with by an appropriate intervention but a nearly inevitable consequence of the very nature of such platforms.67 Accurate profiling is made possible by huge volumes of data and information collected about users in bulk, and such profiles can be used to deploy algorithms that display targeted advertisements and select pieces of content to be presented to users – a decisive factor is the goal of triggering a psychological need to return to the platform with increasing frequency. Another factor is the architecture of such platforms, including the nature of communication through them, as they facilitate the spread of sensationalist content that can be consumed quickly but which is not interesting for a long period. False news has always existed, even before the existence of media and at earlier stages of technical advancement. The difference is that such news now becomes available quickly and en masse, and that the new information platforms do not simply disseminate false news randomly but provide an ideal environment for it to spread.68 Traditional media outlets use citizen journalists and social media generally as sources of news. Thus, in the same way that bloggers may regurgitate false or misleading information obtained, for instance, from the traditional media or other bloggers, the traditional media may do the same in respect of information obtained from social media.69 Another alarming aspect of the functioning of social media platforms is that they provide incentives for committing brutal criminal acts in public. It is now possible for a killer to use Facebook to broadcast the shooting of another person, thereby capturing an audience that would not be available to him or her without the platform.70 Other users might report on criminal offences and acts of violence committed by others, and while such reports could be newsworthy, they tend to diminish the dignity of the victims of crime.71 Some users may become angry at their platform providers when they feel
65 Emily Bell, ‘Facebook Is Eating the World’ Columbia Journalism Review (7 March 2016) at https://www.cjr.org/analysis/facebook_and_media.php. 66 Lili Levi, ‘Real “Fake News” and Fake “Fake News”’ (2018) 16 First Amendment Law Review 232. 67 Paul Bernal, ‘Fakebook: Why Facebook Makes the Fake News Problem Inevitable’ (2018) 69(4) Northern Ireland Legal Quarterly 497. 68 ibid. 69 Coe (n 16) 414–15.
70 Daniel Kreps, ‘Alleged Facebook Killer: What We Know So Far’ Rolling Stone (18 April 2017) at https:// www.rollingstone.com/culture/culture-news/alleged-facebook-killer-what-we-know-so-far-121260/. 71 Mike Isaac and Sydney Ember, ‘Live Footage of Shootings Forces Facebook to Confront New Role’ New York Times (8 July 2016) at https://www.nytimes.com/2016/07/09/technology/facebook-dallas-live-videobreaking-news.html.

marginalised; for example, a woman started shooting at the YouTube headquarters because she thought the service provider had deprived her of the advertising revenue realised from the footage she had uploaded.72 Platforms also serve as an excellent means of attacking the integrity of others for no reason by exploiting the smooth spread of accusations and the limited effectiveness of rebuttal.73 While engaging in a real discourse and exchange of opinions is still not impossible on social media, it is certainly not easy to do so, and the endless flood of gossip, false information, ‘virtual content’, targeted advertisements and editorial decisions made by algorithms74 seems to suggest that the primary purpose of social media platforms is not to facilitate such a discourse or to enable a reasonable exchange of opinions. These platforms have reshaped the public space at its core. They not only enable the public display of content, but also play an active role in formulating their content and selecting the pieces of information presented to users. Marshall McLuhan’s famous axiom still applies in this context,75 but now ‘the platform is the message’, not the medium.76

E.  Tech or Media Companies?

One might believe that McLuhan’s bon mot remains true without any modification, because the platform is equal to the medium, even though the platform providers themselves have had other ideas for a long time. Facebook insisted, with an almost religious fervour, that its service is nothing but a neutral platform, and that the company does not have anything to do with how or for what purpose it is used by users.77 Discussions conducted through the platform may improve participation in elections, but the service itself does not influence the outcome of elections in any way.78 It is more like a billboard: anybody can sign or display anything on it. In the light of the events that unfolded in recent years, this position does not seem easy to maintain any longer. It was revealed in the spring of 2016 that Facebook’s Trending Topics service distorted the significance of certain pieces of news on a political basis, so that some content was presented as if it were more or less important than it actually was. The distortion was deliberate and implemented manually, that is, not caused by a possibly miscalibrated algorithm.79 After the US presidential election

72 Nick Allen, ‘Gun Attacker Compared YouTube to Adolf Hitler, Accusing it of Censorship’ Daily Telegraph (5 April 2018) 14. 73 ‘Vegan YouTube “Drama”: “I Was Falsely Accused of Offering Online Sex”’ BBC (11 June 2018) at https:// www.bbc.co.uk/news/blogs-trending-44387678. 74 Timothy B Lee, ‘Mark Zuckerberg is in Denial about How Facebook is Harming Our Politics’ Vox (10 November 2016) at https://www.vox.com/new-money/2016/11/6/13509854/facebook-politics-news-bad. 75 Marshall McLuhan, Understanding Media: The Extensions of Man (New York, McGraw-Hill, 1964). See the Introduction to this book. 76 James Grimmelmann, ‘The Platform is the Message’ (2018) 2 Georgetown Law Technology Review 217. 77 Nicholas Thompson and Fred Vogelstein, ‘Inside the Two Years That Shook Facebook – And the World’ Wired (12 February 2018) at https://www.wired.com/story/inside-facebook-mark-zuckerberg-2-years-of-hell/. 78 Alexis C Madrigal, ‘The False Dream of a Neutral Facebook’ The Atlantic (28 September 2017) at https:// www.theatlantic.com/technology/archive/2017/09/the-false-dream-of-a-neutral-facebook/541404/. 79 Sam Thielman, ‘Facebook News Selection is in Hands of Editors Not Algorithms, Documents Show’ Guardian (12 May 2016) at https://www.theguardian.com/technology/2016/may/12/facebook-trending-newsleaked-documents-editor-guidelines.

158  Social Media Platforms of  2016, the platform was widely accused of not having done anything to prevent the spread of false news, thereby contributing to the victory of Donald Trump and the defeat of Hillary Clinton.80 Around the same time, a ‘fake news factory’ was discovered in Macedonia,81 followed by news of meddling in social media by Russian intelligence services, sparking endless investigations.82 Compounded by the scandal concerning Cambridge Analytica in 2018 (which also touched upon the debate on the protection of users’ personal data),83 Facebook could not deflect these accusations by disclaiming any responsibility for its users and it could not claim to remain neutral any longer. By 2018, the platform realised that, similarly to the media and other publishers, it bears responsibility toward the public for the state of democracy.84 It is another issue how a series of such problems could be handled (if they can be handled at all), considering that the platform was designed to rapidly provide a wide audience to all statements, including false ones. It should be noted here that ­Facebook was not a neutral platform even before 2016. Its algorithms produce a personalised news feed for each and every user, using settings that are dependent on but not entirely under the control of the user concerned. Naturally, the news feed settings serve the business interests of the platform, which are legitimate interests but unlikely to foster any neutral behaviour. It was also revealed during the Trending Topics controversy that such settings may also be changed with the aim of political manipulation. Social media platforms have become the primary domain of news services and have acquired a dominant role in online media distribution markets without actually collecting any news.85 Facebook also sorts the presented pieces of content in a deliberate manner – it removes certain content due to mandatory statutory requirements or because they are in violation of the platform’s own policies. Some 583 million fake Facebook profiles were closed in the first quarter of 2018 alone.86 As Tim Berners-Lee put it, Facebook makes billions of editorial decisions each and every day.87 And while other platforms, such as Twitter, give more control to

80 Antonio G Martínez, ‘How Trump Conquered Facebook – Without Russian Ads’ Wired (18 February 2018) at https://www.wired.com/story/how-trump-conquered-facebookwithout-russian-ads/. 81 Samanth Subramanian, ‘Welcome to Veles, Macedonia, Fake News Factory to the World’ Wired (15 ­February 2017) at https://www.wired.com/2017/02/veles-macedonia-fake-news/. 82 Sheera Frenkel and Katie Benner, ‘To Stir Discord in 2016, Russians Turned Most Often to Facebook’ New York Times (17 February 2018) at https://www.nytimes.com/2018/02/17/technology/indictment-russiantech-facebook.html. 83 ‘The data analytics firm that worked with Donald Trump’s election team and the winning Brexit campaign harvested millions of Facebook profiles of US voters, in one of the tech giant’s biggest ever data breaches, and used them to build a powerful software program to predict and influence choices at the ballot box. A whistleblower has revealed to the Observer how Cambridge Analytica used personal information taken without authorisation in early 2014 to build a system that could profile individual US voters, in order to target them with personalised political advertisements.’ See ‘The Cambridge Analytica Files’ in Guardian, at https://www. theguardian.com/news/series/cambridge-analytica-files. 84 Thompson and Vogelstein (n 77). 85 Frederic Filloux, ‘How Facebook and Google Now Dominate Media Distribution’ Monday Note (19  October 2014) at https://mondaynote.com/how-facebook-and-google-now-dominate-media-distribution6263365d141a. 86 Alex Hern, ‘Facebook Closed 583m Fake Accounts in First Three Months of 2018’ Guardian (15 May 2018) at https://www.theguardian.com/technology/2018/may/15/facebook-closed-583m-fake-accounts-in-first-threemonths-of-2018. 87 Lee (n 74).

their users over the compilation of their news feed, editorial decisions are inevitably made by such platforms as well when, for example, pieces of infringing content are removed. It seems clear now that platform providers are similar to traditional media in terms of making editorial decisions, and as a consequence they have a constitutional right to free speech by way of selecting and sorting pieces of content. In a sense, its news feed is Facebook’s ‘opinion’ on what its users might be most interested in and how the platform’s business interests could be best served in that context. If a platform has an opinion, it is afforded protection under the constitutional rules, but it may also be subject to restriction, pursuant to applicable legal principles.

III.  THE REGULATION OF PLATFORMS BY LEGISLATION

A. Introduction

The operation of social media platforms is not free from government regulation. The questions concerning the extent and adequate limits of free speech are also relevant in the context of public communication conducted through such platforms, and there are considerable differences between the regulations introduced in the US and Europe. For the purpose of discussing government regulation, the issues pertaining to disputes between users and their legal liability need to be handled separately from the issues relating to the legal liability of social media platforms. Users may sue each other in the real world, pursuant to general procedural rules about defamation or violation of privacy, among other matters, and they may also file police reports if they suspect that a criminal offence (eg hate speech or incitement to violence) has been committed. Even though the applicable legal procedures are the same, the method, form and impact of speech in and through social media (including the extent of damages or dangers) differ from the corresponding aspects of offline communication. For this reason, the distinctive features of communication on social media platforms need to be taken into consideration when analysing the limits of free speech on them. The matter of the legal liability of such platforms raises complex issues. A platform per se is not the primary communicator of a harmful opinion, as it only enables others to speak, and sorts and displays such content to other users. The European legal approach dictates that social media platforms must play an active role in the removal of violating content and they may be held liable if they fail to do so.88 As such, government regulation involves platform service providers in remedying various violations, so that such providers are obliged to make a decision on the lawfulness of the content concerned. In other words, the task of enforcing the law is shared between the government (courts) and private actors.89 A rather limited version of the same model is also used in the US, under the aegis of the authorisation granted in section 230 of the 1996 Communications Decency Act (CDA).90 In this way, platform providers become both media companies
88 See ch 2, section II.E.i. 89 Jack Balkin, ‘Free Speech in the Algorithmic Society: Big Data, Private Governance, and New School Speech Regulation’ (2018) 51 University of California Davis Law Review 1179. 90 See ch 2, section II.E.ii.

160  Social Media Platforms that decide on the permissibility of content and participants in applying and enforcing the law.91 While courts naturally play a considerable role in deciding social media-related issues that are relevant to the freedom of speech, the moderators and legal counsels of these platforms have a far greater influence on the freedom of discussions and exchanges on the platform than does the dedicated government apparatus.92 B.  Applicable Legislation Under EU law, social media platforms are considered to be hosting service providers, as the users of such services store, sort and make available their own content in and through the system. This means that, pursuant to the E-Commerce Directive, the platforms are required to remove any violating content after they become aware of its infringing nature, but they may not be subject to any general monitoring and control obligation.93 In an Austrian case (which is currently pending, as a preliminary ruling by the Court of Justice of the European Union (CJEU) has been sought), certain issues were raised concerning the extent of the obligation of removal in the context of Facebook’s activities. Briefly, the question is whether a platform may be required, under Article 14 of the Directive, to remove not only a specifically identified piece of content, but also all other identical and even ‘similar’ content that might be made available in the future. Would such a requirement be inconsistent with the Directive’s prohibition on introducing any monitoring obligation?94 The proposal of the Austrian court seems reasonable, considering that this particular remedy could be rendered obsolete and useless by the prompt spread and reproduction of harmful content, if a platform were required to remove only specifically identified pieces of content without any such extension. In addition to the E-Commerce Directive, more general pieces of legislation also apply to communications through social media platforms, including legislation on data protection, copyright, personality rights, public order and criminal law. Such legal provisions may also introduce further obligations on hosting service providers in the context of removing violating content.95 Offline restrictions of speech are also applicable to communications through social media platforms.96 Common violating behaviours in social media can be fitted into more traditional criminal categories (ie those that were adopted in the context of the offline world) almost without exception, making the introduction of new prohibitions ­unnecessary.97 However, this duality gives rise to numerous difficulties as, on the 91 Balkin (n 89) 1181. 92 Marvin Ammori, ‘The “New” New York Times: Free Speech Lawyering in The Age of Google and Twitter’ (2014) 127 Harvard Law Review 2259, 2261–63. 93 Directive 2000/31/EC of the European Parliament and of the Council of 8 June 2000 on certain legal aspects of information society services, in particular electronic commerce, in the Internal Market (Directive on electronic commerce) [‘E-Commerce Directive’], Arts 14 and 15. See ch 2, section II.E.i. 94 OGH, 6Ob116/17b, decision of 25 October 2017. 95 See ch 2, section II.E.i. 96 Jacob Rowbottom, ‘To Rant, Vent and Converse: Protecting Low Level Digital Speech’ (2012) 71(2) Cambridge Law Journal 355, 357–66. 97 Social Media and Criminal Offences, House of Lords (2014) at https://publications.parliament.uk/pa/ ld201415/ldselect/ldcomuni/37/3702.htm.

The Regulation of Platforms by Legislation  161 one hand, such limitations are defined as part of the national legislation of each and every country (and the law of free speech is also far from being fully harmonised among EU Member States) while, on the other hand, the concept of social media is of its essence a global phenomenon, meaning that it transcends national borders. For example, an opinion that is protected by the freedom of speech in Europe might constitute punishable blasphemy in an Islamic country. Since harmful content can be made available worldwide, and is quickly spread and multiplied on a social media platform, the absence of a uniform standard leads to tension and violence.98 It seems that there is no good solution, as neither the limitation of the freedom of speech nor the forced projection of one’s norms onto others seems to be acceptable. Robert Kahn uses the Holocaust as an example to demonstrate that the limitation of free speech cannot be enforced to any great effect (in part due to the lack of uniform standards), meaning that any insulting content can easily find its way into countries that ban such expression. In the era of Facebook and YouTube, it is unlikely that such problems can be handled decisively. (Naturally, geoblocking could offer some kind of solution, but it is also ill-equipped to counter unique and quickly spreading pieces of content.)99 Kahn is of the opinion that stricter restrictions would not solve the problem, arguing that the distinctive features of communication through social media should be accepted, and it should also be acknowledged that such problems and challenges of social media ­platforms cannot be completely resolved from a legal perspective. On-demand media services have fallen within the scope of the AVMS Directive since 2007. Such services can be accessed through the Internet, but social media are not one form of such services. The main reason for this is that providers of on-demand media services bear editorial responsibility for the content they publish; they order and purchase such content, and they have a final say in publishing a piece of content.100 However, social media only provide a communication platform; they may not make any decision ­regarding a piece of content before it is published (the situation is different if some kind of preliminary filtering is used, but such filtering affects only specific types of content). As social media platforms spread, it became clear, about a decade after the previous amendment of the Directive, that media regulation could not be interpreted in such a restrictive manner any longer. A proposal for the amendment of the AVMS Directive published in May 2016 introduced the terms ‘video-sharing platform service’ and ‘video-sharing platform provider’.101 According to the amendment eventually adopted in November 2018, the material scope of the Directive was extended to cover such services: ‘video-sharing platform service’ means a service as defined by Articles 56 and 57 of the Treaty on the Functioning of the European Union where the principal purpose of the service 98 Uta Kohl, ‘Islamophobia, “Gross Offensiveness” and the Internet’ (2018) 27(1) Information & Communication Technology Law 111. 99 Robert A Kahn, Denial, Memory Bans and Social Media (manuscript, on file with author). 100 AVMS Directive, Art 1. 
101 Proposal for a Directive of the European Parliament and of the Council amending Directive 2010/13/EU on the coordination of certain provisions laid down by law, regulation or administrative action in Member States concerning the provision of audiovisual media services in view of changing market realities, COM/2016/0287 final, 2016/0151 (COD) (AVMS proposal) at https://eur-lex.europa.eu/legal-content/HU/TXT/HTML/?uri= CELEX:52016PC0287&from=EN.

or of a dissociable section thereof or an essential functionality of the service is devoted to providing programmes, user-generated videos, or both, to the general public, for which the video-sharing platform provider does not have editorial responsibility, in order to inform, entertain or educate, by means of an electronic communications network within the meaning of point (a) of Article 2 of Directive 2002/21/EC and the organisation of which is determined by the video-sharing platform provider, including by automatic means or algorithms in particular by displaying, tagging and sequencing.102

Even though the original proposal would not have extended the scope of the Directive to social media platforms (in terms of the audiovisual content uploaded to the site), it became clear during the legislative process that they could not be exempted from the Directive, and it could not focus solely on portals used to share videos (eg YouTube).103 For this reason, the recital of the amending Directive provides: Video-sharing platform services provide audiovisual content which is increasingly accessed by the general public, in particular by young people. This is also true with regard to social media services, which have become an important medium to share information and to entertain and educate, including by providing access to programmes and user-generated videos. Those social media services need to be included in the scope of Directive 2010/13/EU because they compete for the same audiences and revenues as audiovisual media services. Furthermore, they also have a considerable impact in that they facilitate the possibility for users to shape and influence the opinions of other users. Therefore, in order to protect minors from harmful content and all citizens from incitement to hatred, violence and terrorism, those services should be covered by Directive 2010/13/EU to the extent that they meet the definition of a video-sharing platform service.104

This means that, despite their somewhat misleading name, video-sharing platforms include audiovisual content published on social media. An important aspect of the newly defined term is that service providers do not bear any editorial responsibility for such content; although service providers do sort, display, label and organise such content as part of their activities, they do not become media service providers. Article 28b of the amended Directive provides that Articles 12 to 15 of the E-Commerce Directive (in particular the provisions on hosting service providers and the prohibition on introducing a general monitoring obligation) remain applicable. Member States must ensure that video-sharing platform providers operating within their respective­ jurisdiction take appropriate measures to ensure: • the protection of minors from programmes, user-generated videos and audiovisual commercial communications that may impair their physical, mental or moral development; • the protection of the general public from programmes, user-generated videos and audiovisual commercial communications containing incitement to violence or hatred directed against a group of persons or a member of a group;

102 New AVMS Directive, Art 1(1)b(aa) (legislative references are made with regard to the AVMS Directive in its 2010 version, and indicate amendments thereto). 103 Duncan Robinson, ‘Social Networks Face Tougher EU Oversight on Video Content’ Financial Times (25 May 2017) at https://www.ft.com/content/d5746e06-3fd7-11e7-82b6-896b95f30f58. 104 New AVMS Directive, recital (4).

The Regulation of Platforms by Legislation  163 • the protection of the general public from programmes, user-generated videos and audiovisual commercial communications containing content the dissemination of which constitutes an activity that is a criminal offence under EU law, namely public provocation to commit a terrorist offence within the meaning of Article 5 of Directive (EU) 2017/541, offences concerning child pornography within the meaning of Article 5(4) of Directive 2011/93/EU of the European Parliament and of the Council, and offences concerning racism and xenophobia within the meaning of Article 1 of Framework Decision 2008/913/JHA; • compliance with the requirements set out in Article 9(1) with respect to audiovisual commercial communications that are marketed, sold or arranged by the video-sharing platform providers (general restrictions of commercial communications and p ­ rovisions in order to safeguard minors from commercials). What constitutes an ‘appropriate measure’ is to be determined with regard to the nature of the content in question, the harm it may cause and the characteristics of the category of persons to be protected, as well as the rights and legitimate interests at stake, including those of the video-sharing platform providers and the users who created, transmitted and/or uploaded the content, as well as the public interest.105 According to the Directive, such measures should extend to the following (among others): • defining and applying the above-mentioned requirements in the terms and conditions of the video-sharing platform providers; • establishing and operating transparent and user-friendly mechanisms for users of video-sharing platforms to report or flag up content to the video-sharing platform provider concerned; • with a view to protecting children, establishing and operating age verification systems for users of video-sharing platforms with respect to content that may impair the physical, mental or moral development of minors; • providing for parental control systems with respect to content that may be harmful for minors; • providing users with easy-to-use means of identifying violating content; • establishing and operating transparent, easy-to-use and efficient procedures for managing and settling disputes between video-sharing platform providers and users; • providing information and explanations from service providers regarding the protective measures; • implementing effective measures and controls aimed at media awareness, and ­providing users with information regarding such measures and controls.106 While the new provisions of the Directive appear rather verbose and overly detailed, the major platform providers have already been making efforts to comply with the requirements that have now become mandatory. The regulation applies to only a narrow range of content (ie audiovisual content), and government is only granted control over

105 ibid Art 28b(3).
106 ibid.

the operation of platform providers in connection with a handful of content-related issues (protection of minors, hate speech, support for terrorism, child pornography and denial of genocide). Such content is in any case commonly banned or removed by the platforms upon receiving notice of it under their own policies. Nonetheless, not all content prohibited in Europe is inconsistent with such policies. Once the provisions of the Directive are transposed into the national law of EU Member States, platform providers will be required to take action under both the E-Commerce Directive and the AVMS Directive. These two pieces of legislation act mostly in parallel, as the former requires infringing content to be removed in general, while the latter defines certain specific types of infringing content and lays down detailed rules for their removal. The new AVMS Directive also contains numerous provisions that both facilitate the application of the rules and work as procedural safeguards. Article 28a(2) seeks to settle jurisdiction-related matters with reference to the principle of establishment, a general principle of EU media regulation,107 providing:

It remains to be seen how the Member States will implement the provisions of the new AVMS Directive, how national provisions can be harmonised with the detailed rules, and also how the major platform providers can be forced to cooperate in full (which is a regulatory requirement for each Member State). The legal situation is somewhat simpler in the US, as section 230 CDA also protects social media platforms against government interference.108 If a platform only provides the facilities needed to upload content, it cannot be held responsible for the possibly infringing nature of that content, even if it encourages users to speak and sorts user content.109 However, if a platform controls, generates, actively edits or modifies such content, it loses its immunity.110 The scope of exceptions from this rule can also be extended, as happened in April 2018, when a recently adopted law permitted the taking of action against websites, including hosting service providers, promoting trafficking in human beings for sexual exploitation.111

107 See ch 2, section II.E.i. 108 See ch 2, section II.E.ii. 109 Claire Ballentine, ‘Yelp Can’t Be Ordered to Remove Negative Posts, California Court Rules’ New York Times (3 July 2018) at https://www.nytimes.com/2018/07/03/technology/yelp-negative-reviews-court-ruling. html. 110 Tom Jackman and Jonathan O’Connell, ‘Backpage has Always Claimed it doesn’t Control Sex-Related Ads. New Documents Show Otherwise’ Washington Post (11 July 2017) at https://www.washingtonpost.com/ local/public-safety/backpage-has-always-claimed-it-doesnt-control-sex-related-ads-new-documents-showotherwise/2017/07/10/b3158ef6-553c-11e7-b38e-35fd8e0c288f_story.html?utm_term=.d2d3064b22c8. 111 Allow States and Victims to Fight Online Sex Trafficking Act of 2017 (FOSTA). See ‘Trump Signs “FOSTA” Bill Targeting Online Sex Trafficking, Enables States and Victims to Pursue Websites’ Washington Post

The Regulation of Platforms by Legislation  165 C.  Jurisdictional Issues Rather diverse jurisdictional rules apply to actions taken against infringing content published on a social media platform, and a party initiating proceedings needs to select the most suitable venue for his or her action. In the context of disputes between users, intra-EU jurisdiction in defamation cases is to be determined pursuant to the Brussels Ia Regulation; thus claimants may file their actions with the courts of their country if they suffered damage in that country.112 Member States may also permit claimants and ­defendants established in a third country (not an EU Member State) to file an action with their courts. For example, in the UK, the Defamation Act 2013 seeks to limit the possibilities for ‘forum shopping’ by providing that an action may not be filed in England or Wales unless it is clear that it would be the most suitable venue for the action.113 As for actions brought against a platform, its obligation to remove infringing content may be enforced pursuant to the jurisdiction-related provisions of the E-Commerce ­Directive.114 The Directive applies the principle of establishment, so that an established service provider means ‘a service provider who effectively pursues an economic ­activity using a fixed establishment for an indefinite period’.115 ‘Member States may not, for reasons falling within the coordinated field, restrict the freedom to provide information society services from another Member State.’116 However, Member States may derogate from this provision if doing so is necessary in the interests of: • public policy (in particular the prevention, investigation, detection and prosecution of criminal offences, the protection of minors and the fight against any incitement to hatred on the grounds of race, sex, religion or nationality, and violations of human dignity concerning individual persons); • the protection of public health (eg taking action against advertisements promoting products that are harmful to human health); • public security (eg combatting terrorism); • the protection of consumers (tackling misleading advertisements).117 Platform providers with at least one establishment within the EU fall within the scope of the Directive as hosting service providers, and their obligation to remove a piece of content may be enforced, if the applicable conditions are met, by a court of the country of establishment. In certain cases, the Member States themselves may also restrict the operation of a platform, if the conditions set forth in the Directive are met. EU law does

(11  April 2018) at https://www.washingtonpost.com/news/true-crime/wp/2018/04/11/trump-signs-fosta-billtargeting-online-sex-trafficking-enables-states-and-victims-to-pursue-websites/?noredirect=on&utm_term=. faccd8cf7174. 112 Regulation (EU) no 1215/2012 of the European Parliament and of the Council of 12 December 2012 on jurisdiction and the recognition and enforcement of judgments in civil and commercial matters. See ch 2, section I.E.iii. 113 Defamation Act 2013, s 9. See Alex Mills, ‘The Law Applicable to Cross-Border Defamation on Social Media: Whose Law Governs Free Speech in “Facebookistan”?’ (2015) 7(1) Journal of Media Law 1. 114 Anupam Chander, ‘Facebookistan’ (2012) 90 North Carolina Law Review 1807, 1836–37. 115 E-Commerce Directive, Art 2(c). 116 ibid Art 3(2). 117 ibid Art 3(4)a.

166  Social Media Platforms not apply to service providers established in a third country, meaning that the Member States are free to determine the rules for taking action against such service providers. In addition to the E-Commerce Directive, further provisions relating to jurisdiction are laid down in certain pieces of specialised legislation. Where the rules of data ­protection are violated, for example, the jurisdiction of a court is determined pursuant to the General Data Protection Regulation,118 and the court of the domicile of the injured party may also have jurisdiction over the case.119 It is important that a platform provider may not exclude the jurisdiction of a court by applying any contract term or condition determined unilaterally.120 Similarly, consumer protection rules also provide injured consumers with numerous options for enforcing their rights in the manner that is the least troublesome for them. Pursuant to the above-mentioned Brussels Ia Regulation, the venue is to be determined according to the registered address, branch office, representation or establishment of the party, with the proviso that a consumer may also file a lawsuit with the court having jurisdiction over his or her place of residence.121 The meaning of the term ‘consumer’ is of utmost importance in this regard, as business associations that registered with or operate a profile or account on a platform do not fall within the scope of the regulation. The CJEU analysed the meaning of ‘consumer’ in Schrems v Facebook Ireland Ltd,122 concluding that a Facebook user is considered a consumer even if his activities are not limited to private communication but he also publishes books, delivers lectures, operates websites, and promotes and reports on such activities on the platform, as long as he does not qualify as an economic operator, that is does not pursue his activities for profit. Max Schrems, a known Austrian privacy activist, therefore successfully sued the social media platform before an Austrian court to enforce his claims under a consumer contract.123 Platforms may attempt to stipulate provisions in their general terms and conditions on specifying the jurisdiction for disputes arising from such contracts. For example, there was a time when Facebook sought to ensure that it could only be sued in the US before the courts of North California or San Mateo County, so that the laws of California would be applicable during the proceedings. Obviously, such a provision would have deprived European Facebook users of any real opportunity to enforce their claims. Today, the platform’s Terms of Service stipulate that an EU citizen

118 Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation), Arts 55 and 56. This possibility was also available under the previous Data Protection Directive, see, eg, CG v Facebook Ireland Ltd [2016] NICA 54 before the Northern-Irish courts. 119 Art 56(2), Regulation (EU) 2016/679. See also ‘Facebook in Breach of German Data Protection Law’ Verbraucherzentrale Bunderverband (14 February 2018) at https://www.vzbv.de/sites/default/files/downloads/ 2018/02/14/18-02-12_vzbv_pm_facebook-urteil_en.pdf, and ‘Facebook Loses Belgian Privacy Case, Faces Fine of up to $125 Million’ Reuters (16 February 2016) at https://www.reuters.com/article/us-facebook-belgium/ facebook-loses-belgian-privacy-case-faces-fine-of-up-to-125-million-idUSKCN1G01LG. 120 See, eg, the Canadian decision in Douez v Facebook, Inc, 2017 SCC 33, at https://scc-csc.lexum.com/ scc-csc/scc-csc/en/item/16700/index.do. 121 Brussels Ia Regulation, Arts 17(2) and 18(1). 122 Case C-498/16, Schrems v Facebook Ireland Ltd, ECLI:EU:C:2018:37. 123 The case was launched before the effective date of Brussles Ia under the previous Regulation (Council Regulation (EC) no 44/2001 of 22 December 2000 on jurisdiction and the recognition and enforcement of judgments in civil and commercial matters); nonetheless, the previous Regulation laid down identical provisions to the subsequent Regulation regarding jurisdiction in consumer disputes.

The Regulation of Platforms by Legislation  167 (provided that he or she qualifies as a consumer) may file an action with the courts of his or her own country, and economic operators established in the EU may sue the platform before the courts of Ireland.124 But we have to keep in mind that referring to someone as a ‘consumer’ can also be used to limit that person’s rights – suggesting that an individual’s rights are only protected insofar as he or she can be considered ‘just’ as a consumer, not as a human, a citizen, etc. But we are all of those, and should have all the respective rights – consumer rights, human rights, citizens’ political rights – and all the relevant protections. D.  Protection of Reputation The general rules on protecting one’s reputation are also enforced on social media platforms. Obviously, public affairs may be discussed on such forums with considerable freedom.125 However, the narrowing effect of traditional media does not apply to social media, meaning that many more people can express their opinions, and there is no editorial control over such communications. As a result, the risk of defamation is much higher on social media.126 The language used on and shaped by the features of social media, as well as the new methods of communication (‘liking’, sharing, using hashtags to provide context, etc), make it complicated to apply, as well as necessary to re-interpret, the ­traditional rules of reputation protection in the context of social media platforms. The traditional rules of reputation protection and the obligations of hosting service providers apply simultaneously and may be inconsistent with each other. Muwema v Facebook Ireland Ltd, a case decided by the High Court of Ireland, provides an excellent example of such conflicts.127 Defamatory statements concerning the claimant were published on Facebook, so he requested that the platform provider remove the infringing content. As Facebook refused to comply with the request, the claimant filed an action. Following the rules of defamation law, the Court did not order Facebook to remove the infringing content because, due to its passive role in the matter, it could use the innocent publication defence,128 under which a person or entity escapes liability if he or it (i) is not the author, editor, or publisher of the infringing statement; (ii) proceeded with reasonable care regarding the communication, and (iii) did not know and did not need to know that his or its actions contributed to the publication of defamatory statements.129 All of these conditions must exist in order to escape liability. As Eoin O’Dell noted, it seems doubtful that conditions (ii) and (iii) were in fact met once Facebook had received

124 https://www.facebook.com/legal/terms, s 4.4. 125 See New York Times v Sullivan 376 US 254 (1964); Reynolds v Times Newspapers Ltd [2001] 2 AC 127; Defamation Act 2013, s 4; Lingens v Austria, App no 9815/82, ECtHR, judgment of 8 July 1986. 126 Mills (n 113) 26. 127 Muwema v Facebook Ireland Ltd [2016] IEHC 519. 128 See s 27 of the Irish Defamation Act 2009. The defence under English law would be that of ‘innocent dissemination’. See s 1(3) of the Defamation Act 1996. 129 The English rule is the same essentially. Note that, in Continental (civil) jurisdictions, the condition of establishing liability for a violation may be objective as well, meaning that it is based on the fact the publication is in fact unlawful, and the only way for the platform service provider would be to show that it does not qualify as a speaker regarding the harmful communication.

168  Social Media Platforms a notice.130 In the course of the proceedings, the Court also considered the obligation of a hosting service provider to remove infringing content, as it was to be applied in addition to the law of defamation. The defendant claimed that it did not have any actual knowledge of the defamatory and, hence, illegal nature of the statement,131 and the Court ruled that the conditions of mandatory removal were not met. As a consequence, a person suffering a similar injury would remain almost without any remedy.132 Nonetheless, the Court ordered the platform to disclose the identity of the person who made the defamatory statement to the claimant; if successful, the claimant could bring an action against that person. O’Dell does not agree with this conclusion,133 but it seems clear that the High Court of Ireland revealed a weakness of the liability rules for hosting service providers, that is, that a gatekeeper is unable to determine clearly whether a statement is harmful or not. This uncertainty may either lead to excessive leniency, or to certain personality violations remaining unremedied (as happened in this Irish case). In US law, section 230 CDA is also applicable regarding the protection of reputation. The courts have already reaffirmed Facebook’s immunity in this respect, as the platform does not become a content provider merely because it has control over user-uploaded content and might refuse to remove a piece of content after review, meaning that its attitude is not entirely passive regarding the allegedly infringing content.134 Such immunity might be extended to a user as well, provided that he or she does not personally publish defamatory statements but merely passes on a harmful opinion expressed by another person (for example by ‘re-tweeting’ it on Twitter or sharing it on Facebook). This rule protects both users and platform providers. Even though it focused on the liability of a private e-mail list operator, the decision in Batzel v Smith135 may be considered analogous: in it, the court ruled that the operator who forwarded a defamatory statement to members of an e-mail list was to be considered a mere user regarding its action, and only the liability of the original e-mail’s author was established.136 Disputes among users are not subject to any special rule on liability, but numerous scholars warn that the rules and doctrines of defamation law are applicable to discussions and disputes on social media, with only certain reservations. Lyrissa Lidsky and RonNell Andersen Jones pointed out the malleability of the term ‘opinion’ and, with regard to the judgment in New York Times v Sullivan,137 they called for a suitable interpretation of the concept of ‘actual malice’.138 Whether or not an utterance is considered to be an opinion or a statement of fact is a key issue in defamation law, as opinions

130 Eoin O’Dell, ‘Ireland: Reform of the Law of Defamation – The Defence of Innocent Publication (Muwema v Facebook, Part 2)’ Inforrm’s Blog (29 September 2016) at https://inforrm.org/2016/09/29/ireland-reform-ofthe-law-of-defamation-the-defence-of-innocent-publication-muwema-v-facebook-part-2-eoin-odell/. 131 Muwema v Facebook Ireland (n 127) para 40. 132 ibid para 65. 133 O’Dell (n 130). 134 See, eg, Franco Caraccioli v Facebook, Inc no 16-15610 (9th Cir 2017). 135 Batzel v Smith 333 F3d 1018, 1033 (9th Cir 2003). 136 On the relationship between transmission and s 230 of the CDA, see Daxton RC Stewart, ‘When Retweets Attack: Are Twitter Users Liable for Republishing the Defamatory Tweets of Others?’ (2013) 90(2)­ Journalism & Mass Communication Quarterly 233. 137 New York Times v Sullivan (n 125). See ch 1, section III.B. 138 Lyrissa Lidsky and RonNell Andersen Jones, ‘Of Reasonable Readers and Unreasonable Speakers: Libel Law in a Networked World’ (2016) 23 Virginia Journal of Social Policy and the Law 155.

might be afforded wider legal protection due to their subjective nature and the considerable difficulties of verifying such statements.139 In addition to the grammatical meaning of the published words themselves, context also plays an essential role in interpreting such an utterance appropriately, and here too communications on social media have some distinctive features. Some examples of possible problems include:140
• the limitation on the number of characters that can be used in a tweet on Twitter can make it impossible to present a detailed and duly justified position;
• the 'culture' of using links on such platforms makes it difficult to consider any linked content (if it contains a specific statement of fact) to be a statement of fact made by the user sharing the link;
• hashtags can be used to provide a context to a message (eg marking it as a joke) and may thereby limit any injury, meaning that such hashtags also need to be taken into consideration;
• the language used on social media might be informal, free, vulgar, rude or insulting, and it needs to be interpreted in the context of the platform (see also 'the medium is the message').141
With regard to the requirement of actual malice in public disputes, the musician Courtney Love claimed, in a defamation case filed against her by her former attorney over some tweets, that she had developed an addiction to social media, and thus that her psychological condition excluded the possibility of any actual malice on her part. However, as Lidsky and Jones note, insanity is not recognised in defamation law as a defence that would exclude liability.142 Megan Richardson pointed out the importance of taking 'extra-legal' measures against defamatory statements, such as conducting effective counter-campaigns against false accusations that spread rapidly, in addition to warning users of the possibility of taking legal action.143 Jacob Rowbottom warned of the consequences of a symbiosis between professional and amateur content producers in the context of defamation law, as users generally do not perceive articles and other content produced by journalists in the same way as they do reader comments or reshares with comments.144 Nonetheless, the law of defamation has remained consistent so far and does not at this point take into account developments such as the new means of communication, the changing style and content of public discourse, the absence of filters from the process of publishing, and the differences in reader and user attitudes toward information and reality itself.

139 See, eg, Karsai v Hungary, App no 5380/07, judgment of 1 December 2009. 140 Lidsky and Andersen Jones (n 138), 161–73. 141 McLuhan (n 75); see also the Introduction to this book. 142 Lidsky and Andersen Jones (n 138) 177. 143 Megan Richardson, ‘Honour in a Time of Twitter’ (2013) 5 Journal of Media Law 45. Nonetheless, Lord McAlpine also filed a lawsuit, see McAlpine v Bercow (n 11). 144 Jacob Rowbottom, ‘In the Shadow of the Big Media: Freedom of Expression, Participation and the Production of Knowledge Online’ [2014] Public Law 491.

170  Social Media Platforms E.  Protection of Privacy The freedom of opinion, a freedom that governments have rarely interfered with in recent years, is now clearly in danger due to the functioning of new platforms. Freedom of opinion means that people are free to form their own convictions and hold their own opinions on the basis of the information available. It follows that this freedom is also related to the protection of private life, freedom of religious beliefs and the freedom of speech.145 However, social media platforms tend to ignore this aspect of privacy. The business model of such platforms is based on the targeted display of advertisements, and they need to know their users to fulfil this goal. Knowing their users means, on the one hand, knowing the information users share and publish, so that their profile can be charted for targeted marketing, and, on the other hand, knowing (or at least guessing) the ideas users do not speak of or do not even have at the time. This is how advertisements, which suggest what to buy, which hotel to book or which website to visit, become shown to users. The opinions of users can not only be guessed but they can also be influenced and directed by suggestions. Users may be able to control such suggestions through their free will, but their freedom to decide for themselves is clearly restricted. The more information is available on a given user, the more detailed a profile is built up by a platform, and the more targeted the advertisements shown to that user are, and consequently the more restricted that freedom becomes.146 It seems particularly worrisome that data collection may also extend to opinions that are not communicated and exist only as thoughts – as happens when the analysis of messages never sent and content never posted is also used to develop the profile of a user.147 This means that not even the content of unsent messages remains secret, as it would otherwise remain in the offline world. Nonetheless, most users permit platform providers to process their personal data without being particularly concerned, in exchange for a high-quality personalised service. This phenomenon raises fundamental questions regarding the future of privacy protection and the practicality of its basic principles.148 As already noted, a platform is not only capable of guessing the thoughts of its users, it can also manipulate them, as was confirmed by the infamous experiment conducted on Facebook.149 Common challenges concerning the protection of privacy (eg publication of confidential information, abuse of images) are handled on social media similarly to the challenges in the field of protection of reputation. Measures to protect privacy, various torts and the provisions of a civil code may be applied in the context of disputes among users. Such means and measures must be applied with due regard to and in line with the values of free speech and the open discussion of public affairs, especially in the context of discourses

145 See ch 1, section II.E. 146 Susie Alegre, ‘Rethinking Freedom of Thought for the 21st Century’ (2017) European Human Rights Law Review 221, 226–27. 147 Sauvik Das and Adam Kramer, ‘Self-Censorship on Facebook’ Proceedings of the Seventh International AAAI Conference on Weblogs and Social Media (2013) at https://www.aaai.org/ocs/index.php/ICWSM/ ICWSM13/paper/viewFile/6093/6350. 148 See The Future of Privacy (Pew Research Center, 2014) at http://assets.pewresearch.org/wp-content/ uploads/sites/14/2014/12/PI_FutureofPrivacy_1218141.pdf. 149 See section II.B.

The Regulation of Platforms by Legislation  171 conducted through social media.150 Privacy concerns and violations by social media platforms are not covered here, as they are only indirectly related to free speech issues. Platforms do not publish the data they collect on their users, but such information is used subsequently to determine the content (eg Facebook news feed) provided to each user concerned. If such content is considered the opinion of the platform, it becomes clear that free speech issues can be closely related to various privacy issues.151 It is difficult for social media platforms to decide what to do with information uploaded by their users after their death. The right to privacy dictates that the photographs, messages and other content of a deceased person are to be protected, even against any next of kin, but this consideration is often in conflict with the family members’ right to respect for the deceased. To date, this conflict seems irresolvable at the level of ­principles.152 The notice and takedown procedure laid down in the E-Commerce Directive is a specific area where the liability of a platform for a privacy violation may have a direct impact on free speech. Generally, the procedure is conducted in a similar way to the removal of any other infringing content, meaning that the platform is required to decide on the permissibility of a specific piece of content if it receives a privacy violation notice.153 This means that the platform has to play the roles of both editor and judge; it has to decide on whether to remove a piece of content or keep it accessible, thereby also passing a decision on its permissibility. This form of regulation raises some obvious concerns. If a platform decides not to remove a piece of allegedly violating content, the claimant may sue the platform as well as its author. For example, a dispute between private parties turned into a dispute between a platform and a private individual in CG v Facebook Ireland Ltd in Northern Ireland.154 A Facebook user set up two sites on the platform where he published the personal data of persons convicted of committing paedophile offences (sex crimes against children). A photograph and the home address of the claimant (referred to by the abbreviation ‘CG’ in the case documents) were also displayed next to an exchange. The claimant had indeed been convicted of a paedophile offence, but he had served his sentence, had been released and was no longer considered a threat to society by public authorities. However, he was harassed and received threats against his life because of the Facebook page. The communication concerning the claimant was considered a clear misuse of private information, but Facebook refused to remove the information after receipt of a notice on the grounds that the notice was not submitted in the required form (it had been sent by post instead of using Facebook’s own system). The court pointed out that a platform provider may not specify a required form for sending a notice, and

150 Von Hannover v Germany, App no 59320/00, judgment of 24 June 2004; Campbell v MGN Ltd [2004] 2 AC 457 HL. 151 Case C-210/16 Unabhängiges Landeszentrum für Datenschutz Schleswig-Holstein v Wirtschaftsakademie Schleswig-Holstein GmbH (GC, 5 June 2018), as the CJEU held that both the platform and the administrators of a fan page are liable for the privacy violations committed by Facebook concerning user data managed by the fan page operators on Facebook. 152 See Damien McCallig, ‘Facebook after Death: An Evolving Policy in a Social Network’ (2014) 22 International Journal of Law & Information Technology 107; Bo Zhao, ‘Posthumous Defamation and Posthumous Privacy Cases in the Digital Age’ (2016) 3 Savannah Law Review 15. 153 E-Commerce Directive, Art 14; see ch 2, section II.E.i. 154 CG v Facebook Ireland Ltd (n 118).

that the only relevant factor was whether or not a notice was received. During the period between receipt of the notice and the removal of the infringing content (as it was removed eventually), Facebook was committing a violation.155 In addition to enabling their users to express their opinions, social media platforms also provide employers with means of controlling their employees. When an employer believes that a published opinion about it is harmful to its interests, it may impose a sanction on the employee who expressed this opinion. This is exactly what happened in Bland v Roberts.156 It appears from UK case law that the freedom of speech of employees is afforded extensive protection, as an opinion presented regarding a private or public matter may not be followed by sanctions where it did not violate the interests of the employer. That is the situation, for example, where an opinion remains within the protected boundaries of free speech and it may not be considered the employer's opinion or an 'official' position published in its name and on its behalf.157 In the context of US law, Cara Magatelli argued that employees are free to pursue any kind of activity outside work as long as such activities are not illegal and do not violate the legitimate business interests of their employer.158 Other rules apply when Facebook is used during working hours, but in such cases any possible sanctions are not based on the content of a message but on the misuse of working time. The judgment of the Grand Chamber of the European Court of Human Rights (ECtHR) in the case of Bărbulescu v Romania159 found that the monitoring of an employee's e-mail account resulted in the violation of his right to respect for his private life and correspondence within the meaning of Article 8 of the European Convention on Human Rights (ECHR). Though this was not a case relating to social media usage, the decision has important implications generally for employees' online privacy rights at the workplace, and so it is relevant for our purposes. According to the ECtHR, the applicant was not informed of the extent and the nature of the monitoring activities, or of the possibility of his employer's having had access to the content of the communications. The Court observed that the domestic courts did not pay attention to the scope of the monitoring or the degree of the intrusion, nor to whether the monitoring was justified by legitimate reasons. The specific aim of such strict monitoring was not identified, while neither the seriousness of the consequences for the applicant nor alternative less intrusive measures were examined. Mary-Rose Papandrea raised the interesting issue of whether or not an employee may communicate with others through social media concerning a work-related matter

155 Lorna Woods, ‘When is Facebook Liable for Illegal Content under the E-commerce Directive? CG v ­Facebook in the Northern Ireland Courts’ EU Law Analysis (19 January 2017) at https://eulawanalysis.blogspot.com/2017/01/when-is-facebook-liable-for-illegal.html. 156 See section II.A. 157 See Smith v Trafford Housing Trust [2012] EWHC 3221 (Ch). For more details on case law, see Dominic McGoldrick, ‘The Limits of Freedom of Expression on Facebook and Social Networking Sites: A UK Perspective’ (2013) 13(1) Human Rights Law Review 125, 139–49. But note the difficulties in protecting free speech rights at a workplace: see Paul Wragg, ‘Free Speech Rights at Work: Resolving the Differences between Practice and Liberal Principle’ (2015) 44(1) Industrial Law Journal 1; David Mangan, ‘Online Speech and the Workplace: Public Right, Private Regulation’ (2018) 39(2) Comparative Labor Law and Policy Journal 357. 158 Cara Magatelli, ‘Facebook is Not Your Friend: Protecting a Private Employee’s Expectation of Privacy in Social Networking Content in the Twenty-First Century Workplace’ (2012) 6 Journal of Business, Entrepreneurship & the Law 103. 159 Bărbulescu v Romania, App no 61496/08, judgment of 5 September 2017.

The Regulation of Platforms by Legislation  173 (as if doing so were part of his or her job). This issue is particularly sensitive in the context of exchanges between teachers and students. If an employer prohibits teachers from communicating with students (eg to prevent inappropriate communication and relationships), the prohibition might limit the teachers’ freedom of speech, even in situations where they wish to discuss a non-educational matter with a student. Naturally, the freedom of speech may be restricted in justified cases, but only if doing so satisfies the appropriate tests.160 Regulating the use of social media is a possibility, and an established practice in some work communities, and it does not seem to be an objectionable one where the legitimate interests of an employer require the regulation of those activities of its workers in situations where their posts and tweets could be regarded as the opinion of the employer. This is why the New York Times introduced guidelines for the use of social media by its journalists. It may seem ironic that the social media presence of journalists working for a traditional medium is subject to such regulation, but the reasons for regulation can easily be appreciated. As is also admitted by the journalists themselves, whatever they publish, even as a private individual, might be construed as the position of the paper for which they work.161 F.  Threats, Hate Speech and Other Violent Content Breaching Public Order The distinctive features of online communication make it necessary to define new criminal categories from time to time. Before the appearance of mobile devices and the Internet (in the era of VHS cassettes and traditional photography), it seemed unlikely that revenge porn would become a phenomenon on the present scale, but footage showing former partners is now commonly published on video-sharing platforms without consent,162 and such publication is recognised in numerous jurisdictions as a sui generis criminal offence.163 General prohibitions apply to threats, hatred, harassment and indecent content, regardless of whether they were published on a social media platform, offline or by using any other technology. For example, the UK Communications Act 2003 prohibits the improper use of public electronic communications networks.164 According to the statutory provisions, the offence is committed by a person who ‘sends by means of a public electronic communications network a message or other matter that is grossly offensive or of an indecent, obscene or menacing character’. The meaning of the phrase ‘grossly offensive’ is elaborated in more detail by the House of Lords in its judgment in DPP v Collins.165 In that case, racist messages were sent through the telephone network,

160 Mary-Rose Papandrea, Social Media, Public School Teachers, and the First Amendment, Boston College Law School, Legal Studies Research Paper Series no 267 (2012). 161 ‘The Times Issues Social Media Guidelines for the Newsroom’ New York Times (13 October 2017) at https://www.nytimes.com/2017/10/13/reader-center/social-media-guidelines.html. 162 Nick Hopkins and Olivia Solon, ‘Facebook Flooded with “Sextortion” and “Revenge Porn”, Files Reveal’ Guardian (22 May 2017) at https://www.theguardian.com/news/2017/may/22/facebook-flooded-withsextortion-and-revenge-porn-files-reveal. 163 See, eg, the English legislation at https://www.gov.uk/government/publications/revenge-porn. 164 Communications Act 2003, s 127(1)(a). 165 DPP v Collins [2006] UKHL 40.

174  Social Media Platforms not by social media, but the approach expressed by Lord Bingham might also be relevant to the use of other kinds of technology, as he explained that the context of publishing words is a material aspect in deciding a case, and the applicable test required the assessment of whether or not the message was grossly offensive to the persons to whom it relates.166 The tension between this test and the right to free speech is increased even further by the increasing number of insults voiced on social media platforms and their rapid spread. Social media have become the primary domain of publishing quickly formulated and often foolhardy opinions en masse and without any filter or control.167 In 2012, Welsh football player Daniel Thomas published a homophobic message on Twitter concerning the British Olympic diving team.168 The Director of Public Prosecutions (DPP) believed that the message was offensive but not grossly offensive, in particular because its author meant it to be humorous; it was sent to followers (mostly family and friends), it was swiftly removed after it became public, the author showed remorse and was punished by his team, and the targeted divers were not among the addressees but learned about the message after it became public.169 This shows that the context of speech in social media is to be considered broadly and should be taken into account before an act is considered a criminal offence. In other cases, the authors of statements that were meant to be humorous have even been sent to prison. For example, the actions of the author who made racist and offensive remarks when he got into a heated debate for joking about a footballer who had suffered a cardiac arrest were not considered to be a one-time ‘mistake’ but, knowing the context, should not have been considered a criminal offence.170 In Chambers v DPP, the proceeding court needed to consider another quality mentioned in section 127(1)(a) of the Communications Act 2003, that is the ‘menacing’ nature of the message.171 Paul Chambers was charged and sentenced because he ‘threatened’ to blow up Doncaster Robin Hood Airport when he realised that his flight had been cancelled due to bad weather conditions. The case became known in England as the ‘Twitter joke trial’, because Chambers did not actually intend to act on his foolhardy threats.172 Nonetheless, he was found guilty by the lower courts based on the consideration that the message was menacing in and of itself, and it was capable of causing unease in average citizens.173 Eventually, Chambers was found not guilty by the Divisional Court due to the absence of any real threat, considering that the message was not capable of creating fear or apprehension in those to whom it was communicated, or who might

166 ibid [9]. 167 Peter Coe, 'The Social Media Paradox: An Intersection with Freedom of Expression and the Criminal Law' (2015) 24(1) Information & Communications Technology Law 16, 32–33. 168 'If there is any consolation for finishing fourth at least Daley and Waterfield can go and bum each other #teamHIV.' 169 'DPP Statement on Tom Daley Case and Social Media Prosecutions' Crown Prosecution Service Blog (20 September 2012) at http://blog.cps.gov.uk/2012/09/dpp-statement-on-tom-daley-case-and-social-mediaprosecutions.html. 170 Steven Morris, 'Student Jailed for Racist Fabrice Muamba Tweets' Guardian (27 March 2012) at https://www.theguardian.com/uk/2012/mar/27/student-jailed-fabrice-muamba-tweets. 171 Chambers v DPP [2012] EWHC 2157. 172 Martin Beckford, 'Twitter Joke Trial Conviction Quashed in High Court' Telegraph (27 July 2012) at https://www.telegraph.co.uk/technology/twitter/9431677/Twitter-joke-trial-conviction-quashed-in-High-Court.html. 173 Chambers v DPP (n 171) [17].

The Regulation of Platforms by Legislation  175 reasonably be expected to see it.174 Another person was subject to a fine of £800 for teaching his dog to do a Nazi salute and publishing a video of it doing so on YouTube, because the message was grossly offensive.175 Other rules may also be applicable to messages carelessly published in social media. For example, the UK Malicious Communications Act 1988 provides for the punishment of those who send indecent, grossly offensive or threatening messages to others through an electronic communications network,176 while the Public Order Act 1986 makes it ­illegal to use threatening, abusive or insulting words, to act in such manner, to use racist language and to incite hatred on the basis of religion.177 This indicates that general criminal legislation and other rules protecting public order are to be applied with due regard to the context and distinctive features of social media, as well as to the unique publicity provided.178 Certain aspects of the criterion of being ‘threatening’ were also taken into consideration by the US Supreme Court in Elonis v the United States.179 Anthony Elonis was in the middle of a divorce when he posted numerous posts on Facebook concerning his wife, co-workers, certain federal law enforcement agencies and a nursery school group, along with threats to kill and shoot them. As a consequence, he was found guilty by the lower courts of the federal offence of sending a communication containing threats to injure the person of another.180 Elonis claimed that his messages were published as the lyrics of rap songs, inspired by rap artist Eminem, meaning that they were protected under the freedom of speech.181 The Supreme Court did not take the submitted First Amendment arguments into account but considered whether or not the mens rea of the defendant (the subjective intent to threaten another person), the threatening nature of the published words, or the interpretation of those words as threats by a reasonable person would be sufficient to find the defendant guilty as charged.182 Did the defendant intend to harm the targeted persons; did he have the required mens rea? The Supreme Court considered that the offence was certainly committed by a person who ‘transmits a communication for the purpose of issuing a threat or with knowledge that the communication will be viewed as a threat’.183 As the intent of the defendant had not been taken into account by the lower courts, since they had relied on the meaning of words and their interpretation by a reasonable person, the Supreme Court changed the judgment and found the defendant not guilty. Even though the judgment in Elonis suggests that the context of a communication is relevant, the requirement of taking into account the intent of a person who publishes

174 ibid [30]. 175 'Man Who Filmed Dog Giving Nazi Salutes Fined £800' Guardian (23 April 2018) at https://www.theguardian.com/uk-news/2018/apr/23/man-who-filmed-dog-giving-nazi-salutes-fined. 176 Malicious Communications Act 1988, s 1(1)(a). 177 Racial and Religious Hatred Act 2006, amending the Public Order Act 1986, pt 3A. 178 See Coe (n 167) 21–39; Ammar Oozeer, 'Internet and Social Networks: Freedom of Expression in the Digital Age' (2014) 40(2) Commonwealth Law Bulletin 314, 351–56; McGoldrick (n 157) 125–38. 179 Elonis v United States 575 US ___ (2015). 180 USC Title 18 s 875(c). 181 Elonis (n 179) 14. 182 ibid 13–17. 183 However, the Court did not consider whether or not committing the crime in an irresponsible manner and without any consequence (recklessness) is also punishable (apart from committing it intentionally).

176  Social Media Platforms a harmful message is a considerable burden for social media platforms. In Europe, such platforms are considered hosting service providers, and as such they are required to remove content that is threatening, hateful or against public order, as soon as they become aware of it. In the US, section 230 CDA does not provide any defence to a federal crime, and the platforms are also obliged to remove any such content. However, if a platform provider intends to remove illegal content only, it needs to examine the context and the intent of the communicating person, although it seems unlikely that a platform provider would have a real possibility to take into account all relevant circumstances of that nature.184 The difficulties of applying the US doctrines of ‘true threat’ and ‘imminent danger’ have also been pointed out by Daniel Harawa.185 The direct, informal and often vulgar language used on social media platforms might cause doubt regarding the seriousness of certain threats, and the imminent nature of a danger is difficult to comprehend when there is a great distance between the communicating persons. Such concerns might cause difficulties even in taking action against content that supports terrorism,186 even though the support of terrorism, being a federal crime, is also exempted from section 230 CDA, meaning that social media platforms are obliged to take action against such content.187 Social media platforms have already been sued by family members of the victims of certain terrorist attacks for failing to take action with sufficient determination, a­ ssuming that there was a connection between the murders and related communications on the platform concerned.188 With a view to assessing communication through social media platforms appropriately, Rowbottom recommends using the category of low-level speech as a doctrine of law enforcement. A distinction between high- and low-level speech has already been drawn189 – where the former might include discussions concerning public affairs and the latter might include pornography or commercial speech. According to this doctrine, the more an utterance is related to a public discourse, the more valuable it is and the greater the level of protection that should be afforded to it. The new category takes the context of publishing an utterance into account but not its content, meaning that all disputes, quarrels, threats and hatred expressed on a platform would be considered low-level speech, and they would be protected less than utterances with the same content published in traditional media or expressed among people who are physically present.190 Paul Bernal raised the issue of placing a higher emphasis on the reliability of sources of defamatory statements, arguing that a social media user should not be held liable for passing on a 184 Emily Laidlaw, ‘What is a Joke? Mapping the Path of a Speech Complaint on Social Networks’ in David Mangan and Lorna E Gillies (eds), The Legal Challenges of Social Media (Cheltenham, Edward Elgar, 2017) 127, 140. 185 Daniel S Harawa, ‘Social Media Thoughtcrimes’ (2014) 35(1) Pace Law Review 366, 384–88. On doctrines, see ch 1, section III.D. 186 Alexander Tsesis, ‘Terrorist Speech on Social Media’ (2017) 70 Vanderbilt Law Review 651. 187 Alexander Tsesis, ‘Social Media Accountability for Terrorist Propaganda’ (2017) 86 Fordham Law Review 605. 
188 Andrew Buncombe, ‘Orlando Nightclub Shooting Victims’ Families Sue Facebook and Twitter For “Providing Support” to Isis’ Independent (20 December 2016) at https://www.independent.co.uk/news/world/americas/ orlando-nightclub-shooting-victims-families-sue-facebook-twitter-isis-material-support-provide-a7486906. html. 189 See ch 1, section II.D. 190 Rowbottom (n 96) 370–76.

The Regulation of Platforms by Legislation  177 defamatory statement if the user relied on a source he had reason to consider reliable in the matter.191 Some European legislatures consider the obligation of removal set forth in A ­ rticle 14 of the E-Commerce Directive to be insufficient, and they impose additional obligations on platform providers. The corresponding Act in German law (effective as of 1  January 2018) is an excellent example of this trend.192 According to the applicable provisions, all platform providers within the scope of the Act (ie platform providers with over 2 million users from Germany) must remove all user content that commits certain criminal offences specified by the Act. Such offences include defamation, incitement to hatred, denial of the Holocaust and the spreading of scare-news.193 Manifestly unlawful pieces of content must be removed within 24 hours after receipt of a notice, while any ‘ordinary’ unlawful content must be removed within seven days.194 If a platform fails to remove a given piece of content, it may be subject to a fine of up to €50 million (­theoretically, in cases of severe and multiple violations).195 Some argue that this regulation is inconsistent with the E-Commerce Directive, as it provides for a general exception, instead of ad hoc exceptions, from the free movement of services. In addition, the Directive requires urgency as a condition of applying the exception, but the German Act does not refer to specific pieces of content, meaning that it cannot meet that requirement.196 This piece of German legislation has been widely criticised as limiting the freedom of speech,197 even though it does not go much further than the EU Directive itself; it simply refines the provisions of the Directive, lays down the applicable procedural rules and sets harsh sanctions for violating platforms. Nonetheless, the rules are followed in practice, and Facebook seems eager to perform its obligation to remove objectionable content.198 The German regulation shows how difficult it is to apply general pieces of legislation and platform-specific rules simultaneously, and it demonstrates how governments seek to have social media platforms act as judges of user-generated content. G.  Fake News The expression ‘fake news’ became a popular term during the US presidential election of 2016. The term ‘fake news’ is used here as a type of propaganda that consists of 191 Paul Bernal, ‘A Defence of Responsible Tweeting’ (2014) 19(1) Communications Law 12. 192 Act to Improve Enforcement of the Law in Social Networks (Network Enforcement Act, 2017) at https://www. bmjv.de/SharedDocs/Gesetzgebungsverfahren/Dokumente/NetzDG_engl.pdf?__blob=publicationFile&v=2. 193 Gesetz zur Verbesserung der Rechtsdurchsetzung in sozialen Netzwerken (Netzwerkdurchsetzungsgesetz) Artikel 1 G. v 01.09.2017 BGBl. I S. 3352 (Nr 61), s 1. 194 ibid s 3. 195 ibid s 4. 196 Gerald Spindler, ‘Internet Intermediary Liability Reloaded: The New German Act on Responsibility of Social Networks and its (In-) Compatibility with European Law’ (2017) 8 Journal of Intellectual Property, Information Technology and E-Commerce Law 166, 167–70. 197 See, eg, ‘Germany: Flawed Social Media Law’ Human Rights Watch (14 February 2018) at https://www.hrw. org/news/2018/02/14/germany-flawed-social-media-law. 
198 ‘Facebook Deletes Hundreds of Posts under German Hate-Speech Law’ Reuters (27 July 2018) at https:// uk.reuters.com/article/us-facebook-germany/facebook-deletes-hundreds-of-posts-under-german-hate-speechlaw-idUKKBN1KH21L.

deliberate disinformation spread via traditional news media or online (social) media. Probably the most widely-known example of fake news is the scandal commonly known as 'Pizzagate', referring to rumours that Democratic presidential candidate Hillary Clinton was operating a child trafficking network in a Washington DC pizzeria. The owner and staff of the restaurant were sent death threats, and one person even entered the restaurant with a gun and fired it with the aim of helping the children.199 In the era of fake news and deception, one might recall an anecdote told by the twentieth-century physicist and inventor Leó Szilárd, who found the key to nuclear chain reaction in 1933, patented the idea of nuclear reactors and was later involved in the Manhattan Project that produced the first atom bomb. According to Hans Christian von Baeyer, the physicist Leó Szilárd announced to Hans Bethe that he was thinking of keeping a diary: 'I don't intend to publish. I am merely going to record the facts for the information of God.' Bethe asked him, 'Don't you think God knows the facts?' Szilárd replied, 'God knows the facts, but not this version of the facts.'200 It seems obvious that people have a long history of trying to alter the facts to suit their needs,201 just as the media has a long history of publishing false facts.202 However, the volume and quick spread of such news is certainly a novelty brought about by social media, making it almost impossible to take effective legal measures against fake news. The first possible means of action would be a legitimate restriction of free speech. However, it was clarified in United States v Alvarez that US law also protects false statements.203 The more closely related the statement is to public affairs, the higher the degree of protection it is afforded.204 According to the judgment, false statements can be remedied most appropriately by responding to them and engaging in a debate with the speaker (ie by counterspeech).205 Naturally, defamation law can also be used to seek some remedy from the speaker pursuant to the standard described in New York Times v Sullivan,206 that is if actual malice can be shown.207 This means that Mrs Clinton could have tried to restore her damaged reputation, but the rapid spread of the rumour, the large number of people sharing and forwarding the messages, and the short time permitted by the campaign did in fact prevent her from taking action. Furthermore, various torts (such as intentional infliction of emotional distress)208 could be used to take action against fake news,

199 Amanda Robb, ‘Anatomy of a Fake News Scandal’ Rolling Stone (16 November 2017) at https://www. rollingstone.com/politics/politics-news/anatomy-of-a-fake-news-scandal-125877/. 200 Spencer R Weart and Gertrud Weiss Szilard (eds), Leo Szilard: His Version of the Facts (Cambridge, MA, MIT, 1978) 149. 201 See also the category of alternative facts as used by the Trump Administration: Aaron Blake, ‘Kellyanne Conway Says Donald Trump’s Team has “Alternative Facts”. Which Pretty Much Says it All’ Washington Post (22  January 2017) at https://www.washingtonpost.com/news/the-fix/wp/2017/01/22/kellyanne-conway-saysdonald-trumps-team-has-alternate-facts-which-pretty-much-says-it-all/?noredirect=on&utm_term=. 12c228961a16. 202 Bernal (n 67) 500–503. 203 United States v Alvarez 567 US 709 (2012) (see ch 1, section III.G). 204 ibid 722. 205 ibid 726. 206 New York Times v Sullivan (n 125). 207 See ch 1, section III.B. 208 ibid.

The Regulation of Platforms by Legislation  179 provided that the target actually suffers such harm and injury.209 Social media platforms are protected under section 230 CDA, m ­ eaning that they could not even be required to remove such fake news.210 European jurisdictions allow for a wider range of actions against fake news, defined as action on the ground of defamation or violating the prohibition of hate speech or scaremongering,211 while platforms, being hosting service providers, can be required to remove infringing content. Nonetheless, these measures in and of themselves seem inadequate to resolve such situations in a reassuring manner. These concerns have been addressed by the EU in various documents since 2017. All of these documents make recommendations for platform providers on how to take measures against fake news most effectively, focusing primarily on self-regulation while refraining from introducing new and mandatory legal provisions. Furthermore, they do not seek to reform the principles of hosting service providers’ liability as defined in the E-Commerce Directive, so platform providers are not expected to exercise comprehensive preliminary control or monitoring. While countries are expected to meet more stringent requirements, those requirements focus on supervising the platform providers’ operations more closely and expanding Internet-related awareness-raising programmes rather than introducing stricter liability rules for platforms. The communication on tackling illegal content online lays down the requirement that platforms should take action against violations in a proactive manner and even in the absence of a notice, even though the platforms are still exempted from liability (ie they are not subject to any editorial liability for user-generated content).212 The recommendation that follows the communication reaffirms the requirement of applying proportionate proactive measures in appropriate cases, which permits the use of automated tools to identify illegal content.213 In Europe, the High Level Expert Group on Fake News and Online Disinformation published a Report in March 2018.214 Despite its title, the Report does not focus on fake news but on problems related to online disinformation, as the category of fake news, according to the authors, does not cover all the problems that may arise from disinformation in its entirety.215 The Report defines disinformation as ‘false, inaccurate, or misleading information designed, presented and promoted for profit or to intentionally cause public harm’.216 This definition might be accurate, but the Report refrains from raising the issue of government regulation, and it is limited to providing a review of resources and measures that are available to and may be applied voluntarily by social 209 David Klein and Joshua Wueller, ‘Fake News: A Legal Perspective’ [2017] Journal of Internet Law 1. 210 Joel Timmer, ‘Fighting Falsity: Fake News, Facebook, and the First Amendment’ (2017) 35 Cardozo Arts & Entertainment Law Journal 669. 211 See ch 2, sections III.B and III.E. 212 Communication on Tackling Illegal Content Online – Towards an enhanced responsibility of online platforms, 28 September 2017, COM(2017) 555 final, 10, point 3.3.1. 213 Commission Recommendation of 1.3.2018 on measures to effectively tackle illegal content online (C(2018) 1177 final), points 12 and 18. 
214 Final Report of the High Level Expert Group on Fake News and Online Disinformation (2018) at https:// ec.europa.eu/digital-single-market/en/news/final-report-high-level-expert-group-fake-news-and-onlinedisinformation. 215 ibid 10. Another tacit reason may be that President Trump keeps using the term ‘fake news’ as a reference to certain traditional media that are less understanding concerning him, such as the New York Times. See Levi (n 66) 257–62. 216 Final Report (n 214) 10.

180  Social Media Platforms media platforms. The most important point of the Report is an implied threat that the European Commission will revisit the matter in the Spring of 2019, considering the impact of self-regulatory mechanisms and the possibly effected policy changes by social media platforms, and it might announce further initiatives as necessary, which could include the application of various legal means, including measures available under competition law.217 Based on the report of the High Level Expert Group, the European Commission published a Communication on tackling online disinformation with unusual urgency in April 2018.218 While this document reaffirms the primacy of means that are applied voluntarily by platform providers, it also shows restrained strength to compel the service providers concerned to cooperate (in a forum convened by the Commission). If the impact of voluntary undertakings falls short of the expected level, the necessity of actions of a regulatory nature might arise.219 A Code of Practice laying down the obligations undertaken voluntarily by platform providers was agreed in September 2018.220 The new regulations introduced or planned by certain EU Member States also do not seek to interfere with the system of current EU-level policies but essentially preserve the basic notice and takedown procedure and introduce new procedural rules and harsher sanctions in an attempt to handle the situation. Accordingly, the German Act mentioned in section III.F permits taking action not against disinformation in general, but against content that violates the provisions of the German Criminal Code.221 The corresponding French legislative draft provides that a court decision will be made within 48 hours concerning any allegedly fake news, although this only applies during campaign ­periods.222 IV.  PRIVATE REGULATION BY PLATFORMS

A. Introduction Section III of this chapter provided an overview of cases where government regulation forced social media platforms to assume some kinds of editorial tasks, requiring them to assess the legality of content and to remove any illegal content, as the case may be. This section presents the opposite side of the same activity, that is situations where ­platforms

217 ibid 6. 218 Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions – Tackling online disinformation: A European Approach, 26 April 2018, COM(2018) 236 final. 219 ibid point 3.1.1. 220 EU Code of Practice on Disinformation (2018) at https://ec.europa.eu/digital-single-market/en/news/codepractice-disinformation. See also the Joint Communication to the European Parliament, the European Council, the Council, the European Economic and Social Committee and the Committee of the Regions: Action Plan against Disinformation (JOIN(2018) 36 final) at https://ec.europa.eu/digital-single-market/en/news/actionplan-against-disinformation. 221 Netzwerkdurchsetzungsgesetz (n 193) s 1. 222 Laura Smith-Spark and Saskya Vandoorne, ‘As France Debates Fake News Bill, Critics See: Threat to Free Speech’ CNN (7 June 2018) at https://edition.cnn.com/2018/06/07/europe/france-fake-news-law-intl/index. html.

proceed on their own initiative and decide on the status of user-generated content. Jack Balkin calls this phenomenon 'private governance';223 others prefer to use the less euphemistic term 'private censorship',224 although the term 'private regulation' also seems to capture the essence of the matter, whereby a platform provider influences the publication or further accessibility of content published by users to an extent and in a manner permitted by law by exercising its ownership rights over the platform and other rights stipulated in its contract with the users. As Balkin warned, it seems unreasonable to attempt to discuss compliance with government regulations, regardless of private regulation, considering government regulation incentivises platform providers to introduce private regulations as the providers have an interest in avoiding any troublesome interference by the government.225 Such government 'incentives' for platforms to self-regulate might also include the use of indirect pressure.226 The significance of such incentivisation should not be ignored, considering that much more speech and many more opinions are expressed on social media than in traditional media today, and (even though they are not public forums in the sense of property law or applicable legal doctrines)227 these forums are public forums in the sense that people use them to talk about public affairs and participate in democratic public life.228 Platform providers also have other motives for adopting private regulations. An obvious motive for doing so is to protect their business interests. Platform providers have an interest in making sure that their users feel safe while using their platform and are not confronted by insulting, upsetting or disturbing content. The moderation and removal of such content is not done in line with the limitations of free speech, meaning that a piece of content may be removed using this logic even if it would otherwise be permitted by law, while a piece of content may remain available even if it violates the limitations of free speech. The typically US-owned and established platforms are in a strange and somewhat ambivalent situation: on the one hand, their activities are protected by the First Amendment and the CDA, and their developers and employees represent a culture of American-style free speech; but on the other hand, the private regulation they apply provides far less protection for public speech than the US legal system.229 From a European perspective, this confusing situation occasionally leads to bizarre consequences. For instance, denial of the Holocaust is prohibited by law in most of Europe, while it is afforded protection under the First Amendment in the US. In the name of protecting free speech, Mark Zuckerberg does not wish to ban such utterances on Facebook,230 but Facebook was informed by the German Federal Government

223 Balkin (n 89) 1182. 224 Marjorie Heins, ‘The Brave New World of Social Media Censorship’ (2014) 127 Harvard Law Review Forum 325. 225 Balkin (n 89) 1193. 226 ibid 1194. 227 See section II.C. 228 Balkin (n 89) 1194. 229 ibid 1195; Klonick (n 48) 1625. 230 ‘Zuckerberg in Holocaust Denial Row’ BBC (19 July 2018) at https://www.bbc.com/news/technology44883743.

182  Social Media Platforms that it is required by law to do so.231 In a sense, Facebook has no choice but to observe the provisions of German (and, in general, European) law, as failure to do so might make it liable as a hosting service provider, and additional sanctions could also be applied pursuant to the national law of EU Member States (eg the owner of a platform might be held liable under criminal regulations, and a legal entity operating a platform could be held liable under certain special rules pertaining to such platforms’ operations). Moreover, Facebook also tends to remove pieces of content that are clearly protected by the freedom of speech in Europe, in an attempt to provide a ‘safe space’ for its users.232 A major problem with private regulation is that it may be both more strict and more lenient than government regulation, and, as a result, the regulation of content is unpredictable. Another significant concern is that there is no adequate decision-making procedure in place regarding the removal of pieces of content, meaning that the constitutional safeguards commonly available in legal proceedings (eg notification of users concerned, possibility of appeal, public proceedings, the identification of the decisionmaker, making decisions in writing so that they might be read, etc) are absent. The absence of an appropriate procedure (known as ‘due process’ in US constitutional law,233 and as the ‘right to a fair trial’ in Europe234) greatly contributes to the lack of transparency regarding decisions made on the basis of private regulation, and does nothing to clarify existing uncertainties concerning the rules applied in such important forums of public life. The removal of undesirable content is not the only means of implementing private regulation. A far more powerful means is the editing and sorting of content presented to individual users, as well as the promotion and suppression of certain pieces of content, the impact of which is not limited to individual pieces of content but to the entire flow of content on the platform. This measure enables a platform to increase the popularity and impact of highly visible content, while marginalising and limiting the impact of other content. All of this is done in the spirit of providing personalised services and serving individual user needs (as guessed by the platform), relying on information collected about each and every user, their previous online presence and their platform-generated profile. Thus, each user unknowingly, and indeed without explicit consent, influences the content of the service he or she receives, while the platform actively exerts an influence over the user’s intentions and is capable of influencing the user. The resulting consequences have an impact on the decisions users make as consumers, and also on the discussion of public

231 Janosch Delcker, ‘Germany to Zuckerberg: There Won’t Be Holocaust Denial on German Facebook’ Politico (19 July 2018) at https://www.politico.eu/article/germany-to-zuckerberg-there-wont-be-holocaustdenial-on-german-facebook/. 232 See, eg, some questionable editorial decisions made based on the general prohibition of nudity: Cecilia Rodriguez, ‘Facebook Finally Lands in French Court for Deleting Nude Courbet Painting’ Forbes (5 February 2018) at https://www.forbes.com/consent/?toURL=https://www.forbes.com/sites/ceciliarodriguez/2018/02/05/ facebook-finally-lands-in-french-court-for-deleting-nude-courbet-painting/; Sam Levin, Julia C Wong and Luke Harding, ‘Facebook Backs Down from “Napalm Girl” Censorship and Reinstates Photo’ Guardian (9 September 2016) at https://www.theguardian.com/technology/2016/sep/09/facebook-reinstates-napalm-girlphoto. 233 Balkin (n 89) 1196–98; Constitution of the United States of America, Fifth and Fourteenth Amendments. 234 Art 6 ECHR.

Private Regulation by Platforms  183 affairs, access to information and the diversity of opinion – in other words, the quality of the democratic public sphere. B.  The Legal Basis of Private Regulation: Contract Terms and Conditions The enforcement of the freedom of speech on a social media platform is much more dependent on the rules applied and implemented by each platform than on the government (legal) regulations relating to the freedom of speech as a fundamental right. The standards, policies, and service terms and conditions applied by social media platforms result in decisions made in bulk, and they cannot be matched by any lengthy legal ­procedure that might be launched in individual cases. As Marvin Ammori has noted, the legal counsels of platform providers have an enormous and, indeed, global impact on the freedom of speech.235 As mentioned in section IV.A, implementing such private regulations (focusing primarily on protecting the business interests of the service provider concerned) is not the same as extending the scope of the First Amendment to the entire globe, even though aspects and elements of the US approach to free speech might be reflected by these in-house rules. These regulations, in line with the global presence of social media platforms, also reflect and may even exceed the free speech-related policies and objectives of other ­jurisdictions.236 The platforms may also seek to promote and achieve social goals in addition to their own business objectives. Zuckerberg outlined Facebook’s ‘social mission’ in this spirit, explaining that Facebook seeks to reorganise the world’s ‘information infrastructure’ by creating a network, the rules of which are determined in a bottom-up process (ie from user-level), as opposed to the government-imposed topdown legislation.237 This goal seems surprising to say the least, considering that the rules enforced on the platform are introduced by Facebook (and, in a top-down manner, imposed on the users, similarly to an actual government) and it compiles the news feed of its users, even if that process can be influenced by the users. However, the possibility of such influence does not change the fact that Facebook is capable of censoring content (a censorship which, as it originates from the platform’s ownership, is certainly not comparable to government censorship) and it does indeed choose to exercise this right.238 In addition to the ownership of a platform, a contract by and between the platform and each user serves as the legal basis for the platform’s capacity to interfere with the freedom of speech of its users. The provisions of that contract are determined solely by the platform. Users are not in a position to request the amendment of the contract, while it may be amended by the platform unilaterally at any time. It is also important that the same contract is concluded with each and every user. Even though the contract, and the interference permitted by it, affects the exercise of a constitutional right, and countless debates, conversations and exchanges of information on public affairs are taking place

235 Ammori (n 92) 2263. 236 ibid. 237 Chander (n 114) 1814. 238 ibid 1816.

184  Social Media Platforms on the platform at any given time, no interference by the platform can be considered as state action, and the platform itself is not considered a public forum.239 An action taken by a platform, even if it limits the opinions of its users, cannot be attributed to the government, meaning that it is not subject to any constitutional safeguard relating to the freedom of speech.240 In practical terms, the solution to any conflict or dispute that may arise between a platform provider and a user concerning free speech is to be found among the rules of contract law and not the various principles of constitutional law.241 When a user decides to subscribe to a platform and accepts the terms and conditions of that platform by a simple click of a mouse, he or she becomes subject to ‘private regulation’, including all content-related provisions as well, and the safeguards of free speech are no longer applicable concerning the user’s relationship with the platform.242 It should not come as a surprise that the contracts used by all major platforms are carefully considered and precisely drafted documents (or, conversely, that they use vague language for the very purpose of extending the discretionary powers of the platform). A comparative analysis prepared by Michael Rustad and Thomas Koenig provides a detailed overview of such contract terms and conditions.243 Their investigations pointed out numerous concerns pertaining to consumer protection, including the difficulty of reading the provisions, the arbitration clauses used in such contracts, which make it difficult for users to file a lawsuit, the vague meaning of various provisions, etc. Section 3.2 of Facebook’s Terms of Service requires compliance with the Community Standards of the service, making those standards part of the contract itself.244 The content-related provisions of the regulation are covered in the following section of this chapter (section IV.C) in more detail, but it should be noted that these provisions look prima facie as if they were actual pieces of legislation, even though the wording ­occasionally seems somewhat clumsy, inaccurate, discursive and vague. This regulation is the code of free speech for Facebook users, which each of its users accepts, thereby submitting to the control and discretion of Facebook’s moderators. From the perspective of formalities and constitutionality, this aspect of the platform’s functioning is not objectionable. The current legal framework does not provide users with any powerful means should they find themselves in a quarrel with the platform. Even though section 230 CDA incentivises platforms not to use private regulation by granting them immunity regarding illegal content available on the platforms, it certainly does not prohibit private censorship.245 Moreover, the European concept of the liability of host providers (as adopted pursuant to Article 14 of the E-Commerce Directive) is a direct incentive for platforms to implement private censorship. In regard to the lack of a balance

239 See section II.C. 240 Jacquelyn E Fradette, ‘Online Terms of Service: A Shield for First Amendment Scrutiny of Government Action’ (2013–14) 89 Notre Dame Law Review 947, 953–57. 241 ibid 971. 242 ibid 977. 243 Michael L Rustad and Thomas H Koenig, Wolves of the World Wide Web: Reforming Social Networks’ Contracting Practices, Suffolk University, Law School, Legal Studies Research Paper Series no 14-25 (2015). 244 Available at https://www.facebook.com/terms.php. 245 Heins (n 224) 328.

Private Regulation by Platforms  185 of power between service providers and users, any dispute that may arise between them regarding the enforcement of their contract may be settled within the legal framework of consumer protection.246 However, this option is available only if the user concerned qualifies as a consumer, meaning that it is not available to ‘institutional’ users (eg media businesses).247 Furthermore, consumer protection does not seem to provide any broad possibilities for protecting the freedom of speech of users when the platform’s policies and their application are reasonable and justifiable but not arbitrary, which they typically are; even though they might be questionable, it does not suggest any violation of the consumers’ rights in and of themselves. It seems also difficult to object to the application of such policies on a legal basis, considering that a platform is free to determine its own policies and instruct its moderators without being required to respect the constitutional safeguards and legal limitations of the freedom of speech. The only option for a user is to show that the platform removed a piece of content it was not authorised to remove248 – something that seems nothing short of impossible to demonstrate due to the widely defined limitations of content and the broad discretionary powers of the platform. A user may also try to make use of the existing anti-discrimination rules if his or her right to equal treatment is violated, but producing adequate evidence in such a situation (showing that a piece of content was removed from one user but was not removed when published by another user) seems rather difficult, and the enormous volume of content and the absence of a monitoring obligation on the side of the platform (which may be invoked as a defence by the platform) also considerably limit the chances of a user. Two further arguments also seem to reinforce the discretionary powers of a platform. First, a platform itself has a right to free speech when it sorts user-generated content, and it exercises this very right in the course of compiling lists of content that are visible to and easily accessible by users.249 This argument might be challenged, in that the removal of pieces of content on the basis of a policy does not convey any actual meaning that could be considered an utterance or opinion in and of itself, with the possible exception that the platform seeks to provide a peaceful, calm, safe and secure environment for its users. Second, it could be argued that no single platform is large enough to become the single ruler of the public sphere, and a user who falls victim to private censorship can choose to present his or her opinion by other means, such as via a personal blog, website, e-mail list, etc.250 However, this argument is somewhat countered by the fact that certain platforms (Facebook and Google’s search engine) have become so dominant and are used by so many people every day that they are de facto indispensable factors in the public communications process. Indeed, individual users can speak more freely elsewhere, but that speech is likely to have a much more limited impact.

246 See Kevin Park, ‘Facebook Used Takedown and It Was Super Effective! Finding a Framework for Protecting User Rights of Expression on Social Networking Sites’ (2012–13) 68 New York University Annual Survey of American Law 892. 247 Schrems v Facebook (n 122). 248 Fradette (n 240) 957. 249 Park (n 246) 901. See also ch 2, section II.A. 250 Heins (n 224) 327.

186  Social Media Platforms Two recent German cases clearly show the contradiction and ambiguity in applying the constitutional free speech doctrines to a contractual relationship between a social media platform and its user. In Themel v Facebook Ireland, Inc,251 the Court held that the deletion of the plaintiff’s comment by Facebook constituted a breach of contract, as the platform was required to respect her right to freedom of expression under A ­ rticle 5 of the German Constitution (Grundgesetz). As to the facts of the case: in 7 August 2018, ­Spiegel-Online, a German news website, posted an article on its Facebook page entitled ‘Austria announces border controls’. There was a harsh debate in the comments under the Facebook post, and Heike Themel, a German politician and member of the rightwing AfD Party, was referred to as a ‘Nazi-slut’. She responded to that comment, quoting a German poem: ‘I can’t compete in an argument with you. You are unarmed and this wouldn’t be very fair from my side.’ Facebook deleted the comment and suspended her account for 30 days. This decision was based on Facebook’s Community Standards, rule 5.2 (which prohibits hate speech on the platform). After she approached the Court, it held that the application of rule 5.2 violated section 241(2) of the German Civil Code, which says that ‘[a contractual] obligation may also, depending on its contents, oblige each party to take account of the rights, legal interests and other interests of the other party’. As the Community Standards give Facebook the power to decide on its own which posts or comments violate its rules, the Court noted that this power contradicts the Civil Code’s requirement. The Court emphasised that Facebook as a social media platform provides a ‘public marketplace’ for an exchange of views and opinions, and that legally permissible expressions cannot be deleted from the platform. As Themel’s comment did not constitute hate speech, Facebook’s deletion of the comment and suspension of Themel’s account was unlawful.252 In another German case, User v Facebook Ireland, Inc,253 the Court arrived at the completely opposite conclusion. The Court rejected the plaintiff’s argument that her right to freedom of expression had been infringed. Prior to the decision, in July 2018, a  Facebook user commented below a post concerning integration of migrants in Germany: ‘Respect! That is the keyword! Fundamentalist Muslims regard us as soft grown heathens, pig-gluttons and our women as whores. They do not respect us.’ On 16 July 2018, ­Facebook deleted the user’s comment and blocked her profile for 30 days. After Facebook refused to reverse its decision, the plaintiff sought a preliminary injunction before the Regional Court in Heidelberg. The central issues before the Court were whether ­Facebook was entitled to remove the post and block the user, and whether ­Facebook’s Community Standards were consistent with section 307 of the Civil Code, which says in its p ­ aragraph (1) that provisions in standard business terms are ineffective if, contrary to the requirement of good faith, they unreasonably disadvantage the other party to the contract with the user. An unreasonable disadvantage may also arise from the provision not being clear and comprehensible.

251 Themel v Facebook Ireland, Inc, 18 W 1294/18, 24 August 2018, Higher Regional Court Munich, Germany. See the judgment in German at https://openjur.de/u/2111165.html. 252 See more at https://globalfreedomofexpression.columbia.edu/cases/heike-themel-v-facebook-ireland-inc/. 253 User v Facebook Ireland, Inc, 1O 71/18, 28 August 2018, Regional Court in Heidelberg, Germany. See the judgment in German at https://globalfreedomofexpression.columbia.edu/wp-content/uploads/2018/10/User-vFacebook-Ireland.pdf.

The Court noted that Facebook's standards list the types of expression that are not protected, and define the boundaries of what can be considered restricted speech. In addition, the rules indicate the kind of consequences each user faces if he or she violates these standards. Accordingly, the Court held that the standards cannot be considered non-transparent, and they did not discriminate against users inappropriately. In conclusion, the Court found that Facebook's rules adequately take into consideration the right to freedom of expression and that, even though aggressive opinions or extreme expressions are protected under the Constitution, Facebook as a private party does not have to grant its users the full right to freedom of expression that is provided by the state in the constitutional context.254 These two decisions take two different paths. Whereas the latter decision fits into the usually applied legal framework, which – through the recognition of the platform's property and free speech rights – allows Facebook to delete more or less any users' content it finds inappropriate, the former one aims to restrict the platform's powers in this regard. The decisions depict the possible strengths and weaknesses of mandating the law of contracts to resolve free speech issues arising between private parties. C.  Moderation and Private Censorship i.  The Pros and Cons of Moderation The moderation of user-generated content by a platform interferes with the free speech of its users. Platforms that decide to moderate such content are trying to walk a tightrope between the 'chaos of too much freedom' and the 'sterility of too much control'.255 Not surprisingly, balancing is not exactly easy. Platforms might be pressured by governments into removing content that is not necessarily illegal without conducting an adequate procedure for a number of reasons;256 platforms also have a number of reasons of their own for interfering with their users' free speech. The primary reason, as already mentioned, is the protection of their own business interests by way of filtering and removing content that might scare away other users or any major business partner or advertiser of the platform. Pieces of content that are inconsistent with the political agenda or social mission of a platform provider may also fall victim to moderation.257 The possibility of private censorship is quite worrisome from the perspective of free speech and the public sphere. The possibility of speaking through the largest and most important platforms cannot be substituted by speaking on minor platforms that reach fewer people (and which may also moderate user content) or, in particular, by publishing material on an unmoderated personal blog or private website. Major platforms are the hubs of public life, where any interference with the process of expressing opinions might result in far-reaching consequences.258 Furthermore, the goal of providing most users

188  Social Media Platforms with a peaceful, safe and secure environment (thereby also serving the business interests of the platform concerned) could also be brought in line with the policies of oppressive regimes; for some, the temptation of being allowed to enter the vast market of China might be worth paying the price of meeting the requirements of government censorship. It seems unlikely that social media platforms would seek to overthrow an oppressive regime by way of a revolution; probably they would be more interested in not having a bad press, avoiding disputes with certain governments, and increasing their market share and revenues.259 Moderation and the decisions made by a platform concerning individual pieces of content do not follow the legal standards of free speech but seek to satisfy the requirements of an external environment (ie government requirements) and the assumed expectations of users. The higher the frequency of individual interactions between users and the more lively communication on the platform, the higher the economic value of the platform becomes.260 Consequently, protecting opinions that are most valuable for the public sphere is unlikely to be the primary goal of social media platforms, as those opinions, even though they are related to public affairs, are often subject to debate, might be divisive, insulting or provocative, and could urge people to engage in a debate and think for themselves. Such opinions could be frightening for and might drive away some users, unlike other, less hurtful and harmless pieces of content with which most users are comfortable. Videos showing playful kittens and family photographs prevail over powerful political debates, as they are more important and, from a financial perspective, more valuable to social media platforms. While keeping a peaceful environment might attract more users to a platform and, by doing so, could enable more people to exercise their freedom of speech, the platform also restricts the freedom of speech of its users. ­Consequently, the primary aim of communication between users is not to discuss public affairs openly, and that very goal is in fact restricted by the platform itself. At the end of the day, it seems unclear that private interference with the freedom of speech could be particularly beneficial to the public. ii.  The Legal Status of Moderation – Possible Analogies In the US, section 230 CDA grants immunity from legal liability for social media platforms and urges them to moderate their users’ speech as ‘good Samaritans’.261 The discretion of platform providers concerning the removal of content and the suspension or banning of users is unrestricted, so it does not raise any free-speech concern. The situation is quite similar in Europe, where Article 14 of the E-Commerce Directive grants conditional exemption for such platforms and permits them to moderate (more accurately, does not prohibit the moderation of) user content at their own discretion. Free-speech considerations do not affect the proceedings conducted against users pursuant to their contract with the platform in any material way.262 259 Joseph (n 34) 178. 260 Klonick (n 48) 1627–30. 261 Andrew M Sevanian, ‘Section 230 of the Communications Decency Act: A “Good Samaritan” Law without the Requirement of Acting as a “Good Samaritan”’ (2014) 21 UCLA Entertainment Law Review 121. 262 See Young v Facebook, Inc 790 FSupp2d 1110, before the District Court of Northern California. 
The plaintiff user also claimed (unsuccessfully) that his freedom of speech was violated, because when the number of his 'friends' reached 5,000, Facebook required him to turn his private profile into a public figure profile, even though doing so would have made his communications through that profile much more accessible to the public. As the plaintiff refused the request, his profile was deleted by the platform.

It seems extremely risky to grant social media platforms almost unlimited discretion in assessing pieces of content. The wording of the guidelines and codes on which their decisions are based tends to be vague, and decisions are passed quickly and without transparency or procedural safeguards. When considering the factors a moderator is likely to take into account when trying to determine if a specific piece of content conforms to the platform's policies, one should remember the famous axiom Justice Potter Stewart used when trying to define hard-core pornography: I cannot define it, but 'I know it when I see it'.263 If a moderator believes, or the internal procedure (which is not transparent to users) concludes, that a piece of content must be removed, it will be removed. An issue the adjudication of which by the judiciary could take years is thus decided by a moderator within a few hours. Any approaches followed previously in the context of regulating the media and communications seem to serve as but poor analogies when assessing the nature of private regulation. As we have seen, a social media platform is not a public forum in the legal sense of the term, and it is certainly not a government entity. A platform is not a 'speaker', in the sense that it does not publish its own content and does not associate itself with its users' content in any way;264 the removal of or refusal to remove certain pieces of content expresses nothing but a judgement that a given piece of content is inconsistent with or in conformity with the applicable policies of the platform. A platform is not an 'editor', or at least it is not similar to press editors, because it does not initiate or commission the production of content and does not purchase content for its own purposes.265 However, it is an editor in the sense that it makes decisions concerning pieces of content and filters, removes content or keeps it available. It also controls all communications through the platform. All in all, it is clearly not neutral toward content. The regulation of radio and television broadcasting before the emergence of digital technology relied on the use of scarce resources and also serves as a poor analogy. The only reason for not disposing of such regulation wholesale when technological scarcity becomes nothing but history (as has already happened) is the grotesque phenomenon that the abundance of speech creates a particular kind of scarcity. The incomprehensible volume of information and communication users must face makes it very difficult for users to access valuable content and real news sources.266 However, this does not make it necessary to regulate platforms using the old methods, which would mean using broadcasting as an analogy and would result in the revival of the previous system of licensing. The third possible analogy would be the analogy of a common carrier (such as telephone or postal service providers), the application of which would mean that moderation of and any interference with the process of communication between users (apart from providing the necessary technical means) and platforms would be prohibited for platform providers. Again, this option seems unfeasible as platforms would be

263 Jacobellis v Ohio 378 US 184 (1964) 197. 264 Christina M Mulligan, 'Technological Intermediaries and Freedom of the Press' (2013) 66 Southern Methodist University Law Review 157, 174. 265 Klonick (n 48) 1660. 266 ibid 1661.

190  Social Media Platforms required to operate with total neutrality, even though doing so would be incompatible with their needs and interests, and it would also make it impossible for them to comply with government requests (aimed at removing illegal content). As Kate Klonick pointed out, a social media platform, in the absence of a more appropriate analogy, must be considered as a new and independent regulator (a g­ overnor). It has established, controls and operates its own infrastructure, which is used by users for communication according to its own interests. It also has a centralised organisation that follows its own pre-determined rules (even if those are not necessarily accessible to outsiders in detail) and makes ex ante or ex post decisions regarding various pieces of content.267 In other words, a platform decides on pieces of content using a particular aggregational theory of free speech. It seeks to become and remain open and attractive for as many users as possible while trying to protect its users from insults or other forms of communication that could scare them away.268 This strange, aggregated and hybrid system brings together the principles of the First Amendment and the European approach to free speech, all interpreted and applied according to the interests of the platform itself, with possible differences in each state or region (according to the respective territory’s government’s approach toward free speech and the platform’s free  activities), through decision-making procedures that are not transparent to the parties concerned. iii.  Community Standards and Codes of Conduct Facebook’s Community Standards get longer and longer each year, and they now go into much more detail than an average piece of legislation.269 For example, the limitation of hate speech and nudity and other sexual content are two major topics that are relevant to the freedom of speech. Let us examine the standards set by Facebook itself regarding these issues! We define hate speech as a direct attack on people based on what we call protected ­characteristics  – race, ethnicity, national origin, religious affiliation, sexual orientation, sex, gender, gender identity, and serious disability or disease. We also provide some protections for immigration status. We define attack as violent or dehumanizing speech, statements of inferiority, or calls for exclusion or segregation. We separate attacks into three tiers of severity, as described below … Do not post: Tier 1 attacks, which target a person or group of people who share one of the above-listed ­characteristics or immigration status (including all subsets except those described as having carried out violent crimes or sexual offenses), where attack is defined as: –– any violent speech or support in written or visual form; –– dehumanizing speech or imagery including (but not limited to): –– reference or comparison to filth, bacteria, disease, or faeces;

267 ibid 1662–64. 268 Brett J Johnson, ‘Facebook’s Free Speech Balancing Act: Corporate Social Responsibility and Norms of Online Discourse’ (2016) 5 University of Baltimore Journal of Media Law & Ethics 19, 33–34. 269 Facebook’s Community Standards. See at https://www.facebook.com/communitystandards/.

Private Regulation by Platforms  191 –– reference or comparison to animals that are culturally perceived as intellectually or physically inferior; –– reference or comparison to subhumanity; –– mocking the concept, events or victims of hate crimes even if no real person is depicted in an image; –– designated dehumanizing comparisons in both written and visual form.270

Any content attacking a group of people based on ‘protected characteristics’ is considered objectionable hate speech that is removed from the platform. Facebook provides a list of such protected characteristics, and the broadly defined circumstances are interpreted and applied by moderators. While criminal codes tend to use general wording and prohibit, for example, any ‘incitement to hatred’, Facebook’s Community Standards describe the forms of prohibited speech and also allow for broad discretion in the limitation of speech. Excerpts from Facebook’s manual for moderators, published by the Guardian in 2017, bring up specific examples.271 With regard to violent content, the publication of ‘credible violence’ is prohibited. While a statement like ‘I hope someone kills you’ may not be removed, the statement ‘I hate foreigners and I want to shoot them all’ would be. The publication of the statement ‘someone shoot Trump’ is prohibited, while a message like ‘let’s beat up fat kids’ would not be removed.272 The mechanistic application of these rules (usually by an algorithm) leads to absurd consequences at times. For example, F ­ acebook removed excerpts from the Declaration of Independence in July 2018 because the phrase ‘merciless Indian savages’ violated the platform’s Community ­Standards.273 After and as a result of various debates and scandals, the Community Standards have become increasingly nuanced over the years. These standards are not necessarily the source of worry. They often correspond to the legal limitations of free speech, even though there are quite a few legal limitations that are missing from the Community Standards, such as the prohibition of defamation and insult. For example, Facebook does not prohibit the defamation of public figures; only hate speech directed at, credible threats against and the bullying of such persons would be removed from the platform. It was revealed during a public debate in the summer of 2018 that Zuckerberg does not believe that denials of the Holocaust, that is speech commonly prohibited in Europe, should be removed.274 Individual governments may have the means to make the platform 270 ‘Hate Speech’ in Facebook’s Community Standards. See at https://www.facebook.com/communitystandards/ hate_speech. 271 ‘Facebook Files’ in the Guardian. See at https://www.theguardian.com/news/series/facebook-files. In December 2018, the New York Times was provided with more than 1,400 pages from the rulebooks by an employee of Facebook. According to the paper, ‘an examination of the files revealed numerous gaps, biases and outright errors. As Facebook employees grope for the right answers, they have allowed extremist language to flourish in some countries while censoring mainstream speech in others.’ See Max Fisher, ‘Inside Facebook’s Secret Rulebook for Global Political Speech’ New York Times (27 December 2018) at https://www.nytimes. com/2018/12/27/world/facebook-moderators.html. 272 ‘Facebook’s Manual on Credible Threats of Violence’ Guardian (21 May 2017) at https://www.theguardian. com/news/gallery/2017/may/21/facebooks-manual-on-credible-threats-of-violence. 273 ‘Facebook Finds Independence Document “Racist”’ BBC (5 July 2018) at https://www.bbc.co.uk/news/ technology-44722728. 274 Kara Swisher, ‘Zuckerberg: The Recode Interview’ Recode (18 July 2018) at https://www.recode. net/2018/7/18/17575156/mark-zuckerberg-interview-facebook-recode-kara-swisher.

192  Social Media Platforms respect and comply with their statutory provisions (as discussed in section IV.A), but it nevertheless seems disturbing that, at the end of the day, the freedom of speech of billions of users depends on the whims of a single person and his personal taste, preferences and business interests. Furthermore, Facebook’s Community Standards specify numerous circumstances and examples that affect content which is not necessarily prohibited even in Europe (eg  attacks against certain groups of people by comparing them to animals that are culturally perceived as intellectually or physically inferior or to an inanimate object). This extends the limitations of free speech, and it does so using procedures that are not transparent, lack relevant safeguards and guarantees, and do not even attempt to avoid the perception of arbitrariness in situations where there are disputes. As Nicolas Suzor noted when excerpts from the manual became public, ‘without good data about how Facebook makes such decisions, we can’t have informed conversations about what type of content we’re comfortable with as a society’.275 Facebook’s attitude toward nudity is another suitable example to show how private regulation may result in sometimes grotesque outcomes. Note that the platform’s p ­ olicies have been changed considerably in response to occasional outbreaks of scandal and public outrage. Pursuant to the current standards: We restrict the display of nudity or sexual activity because some people in our community may be sensitive to this type of content. Additionally, we default to removing sexual imagery to prevent the sharing of non-consensual or underage content. Restrictions on the display of sexual activity also apply to digitally created content unless it is posted for educational, ­humorous, or satirical purposes. Our nudity policies have become more nuanced over time. We understand that nudity can be shared for a variety of reasons, including as a form of protest, to raise awareness about a cause, or for educational or medical reasons. Where such intent is clear, we make allowances for the content. For example, while we restrict some images of female breasts that include the nipple, we allow other images, including those depicting acts of protest, women actively engaged in breast-feeding, and photos of post-mastectomy scarring. We also allow photographs of ­paintings, sculptures, and other art that depicts nude figures.276

Video footage of abortions is also permitted on the platform as long as it does not depict nudity.277 Thus seeing the destruction of a living organism, a human foetus, is permitted, while nobody should be exposed to the sight of a bare female breast. The platform used to prohibit the publication of images of breastfeeding mothers278 and works of art showing a naked human body. Even today, showing the nipple of a

275 Nicolas Suzor, ‘After the “Facebook Files”, the Social Media Giant Must Be More Transparent’ The Conversation (22 May 2017) at http://theconversation.com/after-the-facebook-files-the-social-media-giantmust-be-more-transparent-78093. 276 ‘Adult Nudity and Sexual Activity’ in Facebook’s Community Standards. See at https://www.facebook.com/ communitystandards/adult_nudity_sexual_activity. 277 Nick Hopkins, ‘Revealed: Facebook’s Internal Rulebook on Sex, Terrorism and Violence’ Guardian (21  May 2017) at https://www.theguardian.com/news/2017/may/21/revealed-facebook-internal-rulebook-sexterrorism-violence. 278 Mark Sweney, ‘Mums Furious as Facebook Removes Breastfeeding Photos’ Guardian (30 December 2008) at https://www.theguardian.com/media/2008/dec/30/facebook-breastfeeding-ban.

woman is still not permitted.279 Gustave Courbet created his famous painting showing a naked vagina (L'origine du monde) in 1866, which is currently on display in the Musée d'Orsay in Paris. Facebook blocked the painting multiple times for depicting nudity, and the case was brought to court eventually when the account of a French teacher was suspended for publishing the image. The court held that the absence of an advance notice regarding the foreseen measure was in fact a breach of contract between the platform and its user, but the injured party was not awarded any damages.280 It seems that such content-related restrictions cannot be challenged in court, but a user may be afforded some sort of procedural and consumer protection. The list of works of art that have fallen victim to Facebook's rules of decency is quite long and includes, among others, the sculpture of the Little Mermaid in Copenhagen,281 Bologna's Fountain of Neptune,282 the Venus of Willendorf283 and paintings by Rubens.284 Eventually, the Community Standards were amended to provide that 'We also allow photographs of paintings, sculptures and other art that depicts nude figures.' The removal of the most famous photograph taken during the Vietnam war was also followed by public outrage.285 The image shows people, including a crying, naked girl, running away from a napalm attack. In response to the public reactions, the platform eventually permitted the publication of the photo.286 At the same time, Facebook maintained its position that footage of human decapitations posted by Mexican drug cartels is not inconsistent with its Community Standards,287 but it eventually changed its opinion in response to public criticism.288 Platforms also have a history of limiting the expression of opinions that are clearly political in nature, as happened with YouTube when the platform suspended US right-wing channels. The platform claimed that the suspension

279 Rob Price, ‘Facebook Bans Most Photos of Female Nipples for “Safety” Reasons, Exec Says’ Business Insider (26 April 2018) at https://www.businessinsider.com/freethenipple-facebook-bans-photos-femalenipples-safety-2018-4. 280 Philippe Sotto, ‘French Court Issues Mixed Ruling in Facebook Nudity Case’ US News (15 March 2018) at https://www.usnews.com/news/business/articles/2018-03-15/french-court-issues-mixed-ruling-in-facebooknudity-case. 281 Doug Bolton, ‘Facebook Removes Image of Copenhagen’s Little Mermaid Statue for Breaking Nudity Rules’ Independent (6 January 2016) at https://www.independent.co.uk/life-style/gadgets-and-tech/news/littlemermaid-copenhagen-denmark-removed-by-facebook-nudity-rules-a6799046.html. 282 Edward Helmore, ‘Facebook Blocks Photo of Neptune Statue for Being “Explicitly Sexual”’ Guardian (2  January 2017) at https://www.theguardian.com/world/2017/jan/02/facebook-blocks-nude-neptune-statuebologna-italy. 283 Alexandra Ma, ‘Facebook Banned a User from Posting a Photo of a 30,000-year-old Statue of a Naked Woman – And People Are Furious’ Business Insider (1 March 2018) at https://www.businessinsider.com/ facebook-bans-venus-of-willendorf-photos-over-nudity-policy-2018-3. 284 Daniel Boffey, ‘Barefaced Cheek: Rubens Nudes Fall Foul of Facebook Censors’ Guardian (23 July 2018) at https://www.theguardian.com/technology/2018/jul/23/barefaced-cheek-rubens-nudes-fall-foul-of-facebookcensors. 285 Julia Carrie Wong, ‘Mark Zuckerberg Accused of Abusing Power after Facebook Deletes “Napalm Girl” Post’ Guardian (8 September 2016) at https://www.theguardian.com/technology/2016/sep/08/facebookmark-zuckerberg-napalm-girl-photo-vietnam-war. 286 Levin, Wong and Harding (n 232). 287 Leo Kelion, ‘Facebook Lets Beheading Clips Return to Social Network’ BBC (21 October 2013) at https://www.bbc.co.uk/news/technology-24608499. 288 Haroon Siddique et alii, ‘Facebook Removes Mexican Beheading Video’ Guardian (23 October 2013) at https://www.theguardian.com/technology/2013/oct/23/facebook-removes-beheading-video.

194  Social Media Platforms happened due to an error by its recently hired moderators.289 During the parliamentary elections in Hungary, Facebook removed a video posted by a cabinet member that purported to show the failure of integrating immigrants in Vienna, Austria, thereby ­seeking to promote the anti-immigration rhetoric of his political party in Hungary. Even though the video was eventually permitted by the platform, it seems worrisome that it has such a propensity to interfere with political discourse in this manner and even during a most sensitive period (election campaigns), when free speech should be afforded the greatest possible ­protection.290 At the same time, Twitter does not remove President Trump’s tweets, even though it seems clear that some of these tweets are inconsistent with the platform’s policies, because they are obviously news sources of great value and attract users to the platform.291 Naturally, the platform’s decision can be approved of from the perspective of free speech. The EU urged social media platforms to adopt a stricter position on hate speech in 2016. As a result, Facebook, Microsoft, Twitter and YouTube signed a code of conduct on countering illegal hate speech online.292 This code is the EU’s first attempt to force platforms to follow suitable procedures. Under the code, the platforms undertook to conduct effective proceedings, to review notifications appropriately, and to decide on pieces of content on the basis of their own policies and with due regard to national legislation transposing Council Framework Decision 2008/913/JHA.293 The platforms agreed to ‘review the majority of valid notifications for removal of illegal hate speech in less than 24 hours and remove or disable access to such content, if necessary’. The European Commission is to review compliance with the obligations undertaken annually, and it has already established that the platforms remove countless posts and profiles each year.294 Nonetheless, the EU is considering further measures to take in case the issues of hate speech and extremism are not handled by the platforms in a reassuring manner.295 D.  Editing and Content Diversity on Social Media Platforms The real power of a platform to influence the discussion of public affairs is not rooted in the capacity to remove individual pieces of content or ban users. Platforms use algorithms 289 Adi Robertson, ‘YouTube Says New Moderators might Have Mistakenly Purged Right-Wing Channels’ Verge (28 February 2018) at https://www.theverge.com/2018/2/28/17064470/youtube-infowars-right-wingchannels-strike-ban-moderator-mistake. 290 David Gilbert, ‘Why Facebook Censored a “Racist” Video from Hungary’s Government – Then Put it Back’ Vice (9 March 2018), at https://news.vice.com/en_ca/article/gy87m4/why-facebook-censored-a-racist-videofrom-hungarys-government-then-put-it-back. 291 Barbara Ortutay, ‘Here’s Why Twitter Won’t Ban Donald Trump’ Black America Web (23 July 2018) at https://blackamericaweb.com/2018/07/23/heres-why-twitter-wont-ban-donald-trump/. 292 European Commission: Countering Illegal Hate Speech Online, #NoPlace4Hate (2018) at http://ec.europa. eu/newsroom/just/item-detail.cfm?item_id=54300. 293 See ch 1, section III.E. 294 Results of Commission’s Last Round of Monitoring of the Code of Conduct against Online Hate Speech (2018) at http://ec.europa.eu/newsroom/just/item-detail.cfm?item_id=612086. 295 European Commission: Countering Illegal Hate Speech Online. 
Commission Initiative Shows Continued Improvement, Further Platforms Join (2018) at http://europa.eu/rapid/press-release_IP-18-261_en.htm; Daniel Boffey, ‘EU Threatens to Crack Down on Facebook Over Hate Speech’ Guardian (11 April 2018) at https://www. theguardian.com/technology/2018/apr/11/eu-heavy-sanctions-online-hate-speech-facebook-scandal.

Private Regulation by Platforms  195 that enable them, on the basis of data collected about each user, to personalise each and every piece of content accessible to and consumed by their users. An obvious example is Facebook’s news feed, which includes only a small portion of all content published by a user’s acquaintances and the pages he or she follows. Obviously, this practice is also ­justified by practical considerations, since the platform serves its users by keeping the content available to them organised in some way and by showing them the content in which they are most likely to be interested. However, we do not know the basis on which the platform relies when sorting such content, how it tries to find an appropriate balance between public issues and holiday photos, and what the business interests are or could be behind featuring certain specific pieces of content. It should be noted that not all social media platforms ‘edit’ user content as comprehensively as Facebook, but each platform attempts to show pieces from all potentially available content to a user that are most likely to meet his or her interests. An issue that arises in relation to the compilation of a news feed and the pieces of content shown to a user in general is whether it can be considered as protected speech by the platform. While such an argument would seem difficult to maintain in the context of filtering and removing content, it could be possible that the compilation of a news feed does eventually produce some kind of content that is new and did not exist before, the individual components of which were not produced or commissioned by the platform, but where the work of compilation was indeed performed by the platform according to its own decisions and considerations.296 On the one hand, if such a compilation is protected under the freedom of speech, it would be difficult to influence it from the outside. On the other hand, if the compilation is considered similar to the editing activities of the ­traditional media, it might be possible to apply the rules and doctrines of such media with some reasonable adjustments. In the words of Robin Foster: There are no exact parallels for the new digital intermediaries identified here – most are not neutral ‘pipes’ like ISPs, through which all internet content flows (although Twitter is close to this); nor are they pure media companies like broadcasters or newspapers, heavily involved in creative and editorial decisions. But they do perform important roles in selecting and ­channelling information, which implies a legitimate public interest in what they do.297

Paul Bernal even calls the supposed neutrality of gatekeepers (Facebook among them) a ‘myth’.298 The selection of content is not a neutral or value-neutral activity, and it reflects numerous interests of a platform. This might not be a problem in and of itself, but it raises concerns that users are not familiar with those interests and values; users cannot really know (or it would take extreme effort on their side to find out) what else is really out there apart from the content they are shown. As Klonick has highlighted, it is not a priority for social media platforms at this time to ensure adequate opportunities for all to participate in public discourse.299 296 See ch 2, section II.B. 297 Robin Foster, News Plurality in a Digital World (University of Oxford/Reuters Institute for the Study of Journalism, 2012) at https://reutersinstitute.politics.ox.ac.uk/sites/default/files/2017-11/News%20Plurality%20 in%20a%20Digital%20World_0.pdf, 30. 298 Paul Bernal, The Internet, Warts and All. Free Speech, Privacy and Truth (Cambridge, Cambridge University Press, 2018) 71–101. 299 Klonick (n 48) 1665.

196  Social Media Platforms It seems that the architecture of platforms, the editorial decisions governing their general operations and their underlying values are more important than any individual decision made by a platform in response to a notice concerning a given piece of content, as those factors determine the overall functioning of the platform and have an impact on the ability of all to access information.300 The provision of a personalised service to users could act to suppress their overall picture of news and information on public affairs. This means that the number of news reports produced by traditional media could be drastically reduced at the whim of Zuckerberg.301 The reasons for this are predominantly financial, not a matter of principle: Facebook is in fact in competition with such media, even if it does not produce any content, and the company seeks to maximise its revenues from those media in return for presenting their content to its users. The provision of personalised services and news services in particular can reduce the diversity and selection of news that individual users come across when using the platform. In the era of traditional media, it was inevitable for readers to see content they did not specifically look for or agree with, but the comfort of personalisation eliminates this unpleasantness.302 The personalisation of news goes against the very concept of the marketplace of ideas, as users do not meet opinions that contradict their own personal views and opinions unless they specifically look for them.303 A platform may also interfere with its news feed in line with its political views and social objectives, and this very capacity poses a direct threat to the public sphere and the democratic expression of opinions. The existence of this phenomenon was demonstrated by a scandal in 2016 when tech-blog Gizmodo reported allegations from Facebook staff members that the company suppressed conservative topics and sources deliberately and in a systemic manner. The platform had claimed previously that the content of ­Trending Topics (a service listing topics that are most actively discussed by other users of the platform, aka ‘hot topics’) was compiled by algorithms exclusively on the basis of actual user activity and without any direct human intervention. However, former employees of the company reported that they had to select the topics on the basis of political ­considerations. According to reports, links to certain conservative websites were not allowed in the Trending Topics section, even if they were among the most frequently shared content on the platform.304 The case showed clearly that the selection of pieces of news that were to be featured and widely discussed on Facebook was not influenced by neutral algorithms but human editors (known as ‘news curators’).305 In essence, the scandal resulted in the

300 Frank Fagan, ‘Systemic Social Media Regulation’ (2018) 16(1) Duke Law & Technology Review 393. 301 Emily Bell, ‘Why Facebook’s News Feed Changes Are Bad News for Democracy’ Guardian (21 January 2018) at https://www.theguardian.com/media/media-blog/2018/jan/21/why-facebook-news-feed-changes-badnews-democracy. 302 See the ‘filter bubble’ theory described in ch 2, section I.E.ii. 303 Sarah Eskens, Natali Helberger and Judith Moeller, ‘Challenged by News Personalisation: Five Perspectives on the Right to Receive Information’ (2017) 17(2) Journal of Media Law 259, 281. 304 Philip Bump, ‘Did Facebook Bury Conservative News? Ex-Staffers Say Yes’ Washington Post (9 May 2016) at https://www.washingtonpost.com/news/the-fix/wp/2016/05/09/former-facebook-staff-say-conservative-newswas-buried-raising-questions-about-its-political-influence/?noredirect=on&utm_term=.8338634b2401; Michael Nunez, ‘Former Facebook Workers: We Routinely Suppressed Conservative News’ Gizmodo (9 May 2016) at https://gizmodo.com/former-facebook-workers-we-routinely-suppressed-conser-1775461006. 305 Thielman (n 79).

Private Regulation by Platforms  197 defeat of an important taboo and a paradigm shift regarding the role of the platform: Facebook became an actual news editor and, as such, similar to traditional media.306 Even though Trending Topics was phased out by the platform eventually,307 it seems hard not to believe that similar news editing practices might be used by other services of the platform or outside the US. If Facebook is considered a news editor, it might just be reasonable to extend the scope of legal provisions applicable to the news editors of the ‘traditional’ media to social media platforms.308 E.  Fake News and Private Regulation The spread of fake news is hard to stop by legal regulation, as discussed in section III.G. It also seems unlikely that the rules and regulations applied by the platforms themselves could provide a comprehensive and comforting solution to this problem, because, as Bernal pointed out, the spread of scare stories, insults and bad-spirited gossip is not a fault but an inevitable consequence of the features of the system.309 However, negative PR could be detrimental to a platform, so platforms inevitably try to tackle the spread of fake news, and even surpass their legal obligations requiring them to do so. Measures taken in this regard might include raising tariffs for or reducing the prominence in the news feed of sites that present false and fictitious statements as news.310 Other options could be to increase transparency in connection to paid advertisements and sponsored content, so that users could know who paid for the dissemination of a given piece of content.311 It has also been suggested that social media platforms should recruit fact-checkers to verify pieces of content and either designate pieces of fake news as such or, alternatively, inform the platforms of such news, so that they could demote the ranking of or even ban such websites.312 Ironically, designating a piece of news as fake (as ­Facebook attempted) only increased the popularity and reinforced the credibility of the false information among  users.313 The activities of fact-checkers are indeed quite similar to news editing, and this increases the similarities between social and traditional media even further.

306 Natali Helberger and Damian Trilling, ‘Facebook is a News Editor: The Real Issues to be Concerned About’ LSE Media Policy Project (26 May 2016) at http://blogs.lse.ac.uk/mediapolicyproject/2016/05/26/facebook-is-anews-editor-the-real-issues-to-be-concerned-about/. 307 Chris Morris, ‘Facebook Kills “Trending” Topics, Will Test “Breaking News” Label’ Fortune (1 June 2018) at http://fortune.com/2018/06/01/facebook-kills-trending-news-topics/. 308 See ch 1, section IV.B. 309 Bernal (n 67) 506–09. 310 Hasen (n 59) 226–30. 311 Levi (n 66) 285. 312 Jessica Stone-Erdman, ‘Just the (Alternative) Facts, Ma’am: The Status of Fake News under the First Amendment’ (2017–18) 16 First Amendment Law Review 410; Sarah Perez: Facebook Expands FactChecking Program Adopts New Technology for Fighting Fake News’ TechCrunch (21 June 2018) at https:// techcrunch.com/2018/06/21/facebook-expands-fact-checking-program-adopts-new-technology-for-fightingfake-news/?guccounter=1. 313 Catherine Shu, ‘Facebook will Ditch Disputed Flags on Fake News and Display Links to Trustworthy Articles Instead’ TechCrunch (20 December 2017) at https://techcrunch.com/2017/12/20/facebook-will-ditchdisputed-flags-on-fake-news-and-display-links-to-trustworthy-articles-instead/.

198  Social Media Platforms Essentially, the Report by the High Level Expert Group on Fake News and Online Disinformation builds its strategy against fake news on the basis of reinforcing the private regulation performed by social media platforms.314 The Report suggests that platforms give more and more options for their users to personalise the service they receive.315 It would indeed be a welcome improvement if the Daily Me were not produced by a­ platform in a non-transparent manner and on the basis of obscure databases and personal profiling, but such empowerment of the users could also reinforce the filter-bubble effect.316 Other suggested measures (such as the ideas that a platform should recommend additional news from reliable sources to its users in addition to popular topics, that it should give more visibility to reliable news sources317 and that users should be enabled to exercise their right to respond to allegations)318 would increase the similarities between platform moderators and traditional news editors, as well as between social media ­platforms and traditional news media. The Communication published by the European Commission in April 2018 follows a similar path. Essentially, it seeks to encourage private regulation by platforms while ­pointing out that the introduction of legal obligations might follow if private regulation fails to deliver the desired outcome (even though the indirect liability regime established by the E-Commerce Directive would not be changed).319 In a sense, this document ­represents a milestone in EU media regulation. It does not simply encourage self-regulation (which is not an absolute novelty in media policy), where a non-governmental organisation, which does not form part of the regulated media landscape itself, supervises the operation of the media, but it reinforces private regulation (ie the regulation of content by the platforms themselves) by also suggesting the possibility of obliging social media platforms to ­implement such regulations. According to this approach, platforms must themselves decide on the permissibility of various content – and even on going beyond the provisions of the E-Commerce Directive. By taking this step, a government would hand over almost all regulatory responsibilities to social media platforms while retaining only the control of this rather peculiar supervisory regime. All in all, this would be a move toward a new and unknown model of co-regulation that has never existed before. This model appears not only in various documents of the EU, but also in regulatory initiatives taken by certain Member States, the first example being the law of Germany noted in section III.F,320 where regulation is not directed at illegal content as such but requires social media platforms to take action, while also serving as a basis for government intervention should the statutory procedures be violated or the expected results (ie the speedy removal of content violating the ­Criminal Code) not be delivered.



314 High Level Expert Group on Fake News and Online Disinformation (n 214).
315 ibid 33.
316 See ch 2, section I.E.ii.
317 High Level Expert Group on Fake News and Online Disinformation (n 214) 33.
318 ibid. See ch 1, section IV.H on the right of reply.
319 Communication from the Commission (n 218).
320 Netzwerkdurchsetzungsgesetz (n 193).

Private Regulation by Platforms  199 F.  The Ideology of a Platform As noted in numerous contexts throughout this chapter, the activities social media platforms carry out concerning the user content they manage is not neutral. Either because they seek to comply with legal regulations or since they act on their own initiative, the operators of these platforms carry out a kind of editorial task that involves the assessment and evaluation of such content. The private regulation implemented by platforms is not value-neutral either, as it clearly reflects their objective of providing a ‘safe space’ to as many users as possible; this objective is not always compatible with the goal of acting as a robust defender of free speech. This lack of neutrality gains a completely new quality if the editorial decisions the platform makes are guided by an ideology or a goal of achieving or promoting certain social objectives (if this social objective is other than the open discussion of public affairs or free public sphere). Social media platforms usually seem to be the champions of free speech. But, as Bernal notes, ‘in practice free speech is just a tool for them. They will champion it when it suits them and not champion it when it does not.’321 Facebook is often compared to a nation state.322 Even though social media platforms have far more extensive ways of modelling private regulation than a nation state, the analogy applies in that nation states are neutral from a religious or philosophical point of view, but they are not value-neutral. A social media platform can also make value-based decisions and could ban hateful people from its system. If we accept the public sphere to be a fundamental institution (it would be difficult not to do so), surrendering ideological neutrality and embracing bias would lead to serious problems, even though it cannot be prohibited using the currently available legal and regulatory means. However, Western countries are democracies, meaning that the limits of free speech (among other things) are set out and compliance with the rules is supervised by elected officials, courts and other authorities operating within a framework of constitutional safeguards and guarantees. If a social media platform were a state, it most certainly would not be a democracy. According to the (hardly surprising) findings of a survey, the owners and executives of US tech companies established in Silicon Valley hold liberal, cosmopolitan and globalist political views and support the extension of human rights – in all matters that do not interfere with their business interests. However, they are against government regulation in general, and labour and employment policies in particular, on which matters they tend to agree with conservative libertarians.323 With regard to Facebook, signs of ideological bias also exist. The manifesto published by Zuckerberg in 2017 clearly reflects a political agenda (albeit a rather naive one, considering the chances of its implementation) to build a global community above and beyond nation states.324 The wording goes beyond the goal of providing a safe space and envisages the abandonment of the concept of nation states. Naturally, this idea is not new in the era of globalisation, but, reading between

321 Bernal (n 298) 127. 322 Chander (n 114). 323 Farhad Manjoo, ‘Silicon Valley’s Politics: Liberal, With One Big Exception’ New York Times (6 September 2017). 324 Mark Zuckerberg, ‘Building a Global Community’ Facebook (16 February 2017) at https://www.facebook. com/notes/mark-zuckerberg/building-global-community/10154544292806634/.

200  Social Media Platforms the lines, one might find the objective of surpassing ‘national’ societies an aim that is still somewhat surprising in the age of world trade, international organisations and an increasingly united Europe. It became clear during the Trending Topics scandal in 2016 that Facebook is capable of exerting direct political influence, even without any noble cause or a publicly acknowledged ideological stance.325 Google was also exposed to harsh criticism for firing an employee for advancing harmful gender stereotypes when he wrote a memo challenging the ideological bias of the platform.326 The ‘echo chamber’ criticised in the memo, that is the tech giant itself, which normally presents itself as a defender of free speech both toward its employees and users, decided to remove the author from his position instead of engaging in a debate. Other studies revealed that pieces of content are selected on an ideological basis,327 and this suspicion is quite difficult to dispel (or prove) for the very reason that the decision-making mechanisms of these platforms lack transparency. The owners and executives of social media platforms can exercise their freedom of speech, including making decisions concerning the infrastructure they own, but a platform can also be harmful to democratic public life if it grows really large but fails to manage debates conducted on the platform with due regard to the notion of the ‘marketplace of ideas’, that is if it attempts to influence such exchanges using obscure means that lack transparency. V. SUMMARY

The advent of social media platforms has brought about fundamental changes in the structure of the public sphere, the communication of opinions, the range of means of accessing information and the rules pertaining to the freedom of speech. It is a significant fact that all the major and most influential platforms operate out of the US, and their owners, executives and developers follow an approach toward the freedom of speech that is deeply rooted in the principles and doctrines of the First Amendment. It is also noteworthy that the major platforms have a global presence, meaning that they have significant numbers of users in many different countries and that the rules of multiple jurisdictions can be applied to the content they make available. As a result, the functioning of these platforms is influenced by a number of different approaches toward the freedom of speech.

These circumstances have various and sometimes contradictory consequences. On the one hand, the platforms sometimes try to make the First Amendment the law of the entire world. On the other hand, the platforms limit free speech much more strictly upon government request than they normally do in the US, merely to maintain a comfortable ‘safe space’ for users and to ensure a peaceful business environment. US-based global companies, which are used by countless US citizens for daily

325 See section IV.D.
326 Daisuke Wakabayashi, ‘Contentious Memo Strikes Nerve Inside Google and Out’ New York Times (8 August 2017) at https://www.nytimes.com/2017/08/08/technology/google-engineer-fired-gender-memo.html.
327 Craig Parshall, ‘The Future of Free Speech, Free Press, and Religious Freedom on Facebook, Google, Apple, etc’ National Religious Broadcasters (2013) at http://nrb.org/files/6213/9819/8017/The_Future_of_Free_Speech_Free_Press_and_Religious_Freedom_on_Facebook_Google_Apple_etc__A_Current_Assessment.pdf.

communication and exchange, are also subject to European free-speech rules and regulations that follow a different tradition.328

In addition to conforming to government policies in order to avoid legal liability, social media platforms also enforce their own ‘private rules’, which are not entirely focused on affording strong protections and safeguards to the freedom of speech. On the one hand, governments themselves force platforms to assume a decision-making role, as, according to the European approach, assessing illegal content is a requirement for escaping legal liability. On the other hand, platforms also tend to set the limits of free speech on their own platform without any particular external restriction. For these fundamental reasons, platforms may not be considered neutral intermediaries and their activities must be recognised as being similar to those of traditional media editors. Once they are recognised as such, the next obvious issue is the enforcement of media policy goals and the possible application of the available means of media regulation to the new services.

Regardless of the present or future obligations a government or EU law may impose on social media platforms, governments and platforms exist in a state of forced interdependence; the application of the law by the platforms, whether compelled or voluntary, supersedes and limits the application of the actual legal doctrines of free speech. At the same time, the law of contracts and consumer protection (two areas that used to operate at a safe distance from media policy) might gain a more considerable role in protecting users against the limitation of their right to free speech by social media platforms.



328 Ammori (n 92) 2263.

6
Gatekeepers’ Responsibility for Online Comments

I.  THE CASE OF ONLINE COMMENTS

A.  Comments as ‘Speech’

An online ‘comment’ involves reader input on a piece of content published on a website (news portal, blog, forum, social media account, etc) by the content provider of that website (the editorial office, the blogger, the holder of the profile). A comment cannot exist independently, ‘on its own’, but rather necessarily relates to a piece of content published on the site. This characteristic helps explain the potential liability of the content provider, since the comment would not ‘exist’ in the absence of the provider’s activity.

User opinions on online content typically appear anonymously (by displaying a username or nickname only), in a manner usually not suitable for identifying the user. At the same time, under Facebook posts, comments can only be posted with the real name of the user, who hands over his or her personal data when registering for the service. Although the intention of the service provider is to preclude anonymity, this policy can be fairly easily circumvented, but this does not alter the fact that most Facebook users specify their true details on the website.

A single comment or comment feed can either be moderated or unmoderated. In the latter case, the service provider facilitates only the opportunity to comment, and does not interfere with the process either before or after posting. As a result, the provider is not aware that a comment has been posted, or of its possibly unlawful nature, unless somebody draws its attention to it. Moderation can be either prior or ex post: in the former case, the service provider decides whether the comment is posted at all; in the latter, it decides whether a comment that has already been posted remains accessible or is removed.

B.  Anonymity

A possible feature of a comment is thus the anonymity of the author. Recent literature considers the right to anonymity an inseparable component right of the freedom of speech;1 that is, the protection of freedom of speech must extend to anonymous speech.

1 Dirk Voorhoof, ‘Internet and the Right of Anonymity’ in Jelena Surculija (ed), Proceedings of the Conference Regulating the Internet (Belgrade, Center for Internet Development, 2010) 163; Connor Francis Linehan, ‘Anonymous Speech on the Internet’ (unpublished paper), see at https://works.bepress.com/conor_linehan/2/.

In recent decades, as the scope of freedom of speech has grown, the range of component rights that can be derived from freedom of speech has also increased with the emergence of new means of communication. For example, in Europe, it is undisputed that the media is entitled to keep confidential the identity of its information sources,2 and in most states this is ensured by a separate statutory provision.3 However, so far, the ‘right’ to anonymity has only been recognised by the US Supreme Court in McIntyre v Ohio Elections Commission,4 where the Court declared that anonymous communication is generally protected by the First Amendment.5 The reasons offered in defence of the constitutional protection of anonymity are similar to the arguments on which the unprecedentedly wide protection of freedom of speech is based, including the fear of the despotism of the majority, that is the additional protection afforded to minority opinions, as well as a reluctance to apply any kind of content-based regulation. If we consider the name of the author as part of the text, that is as ‘content’, then insisting that this be displayed qualifies as interference with the content.6 Peter Coe sees anonymity (and pseudonymity) as a double-edged sword: it encourages public speaking, whilst making unwanted and damaging speech harder to restrict.7

The European approach puts greater emphasis on the interests of the audience of the speech, and thus it is less supportive of those who wish to engage in anonymous expression. That approach focuses on the ‘true’, or at least well-founded, nature of published speech, the possible assessment of an opinion’s credibility, the possible avoidance of misleading publications and the identifiability of the person violating the law, all of which are factors that are viewed as serving the public interest.8

In general, Internet communication magnifies or casts new light on old issues of freedom of speech, which raises questions regarding the further sustainability of earlier approaches of the legal system to those issues. In the context of Internet content, the power of legal doctrines that are well established in the offline world is weaker. The same applies to the issue of anonymity: it is much easier to maintain anonymity on the Internet, in the sense that it is more difficult to identify the author. The Internet is unique in that many more people are able to speak, with or without giving their names. This is due to the democratic nature of the Internet compared to earlier types of media, its virtually free and unrestricted access, the ease with which infringements of freedom of speech can be committed and the difficulties of identification. All of these factors lead to the conclusion that, because the difficulties of enforcement are disproportionate to the aim pursued, no legal recourse is needed for minor infringements

2 Jan Oster, Media Freedom as a Fundamental Right (Cambridge, Cambridge University Press, 2015) 87–91.
3 ELSA’s International Legal Research Group, Freedom of Expression and Protection of Journalistic Sources, Final Report, see at https://files.elsa.org/AA/LRG_FoE_Final_Report.pdf.
4 McIntyre v Ohio Elections Commission 514 US 334 (1995).
5 It is true that in the case at hand the subject of the case was an election flyer, on which the identity of the publisher of the flyer should have been displayed under a local law. Nevertheless, the general right of anonymity covers online communication mutatis mutandis.
6 Eric Barendt, Anonymous Speech. Literature, Law and Politics (Oxford, Hart Publishing, 2016) ch 3.
7 Peter Coe, ‘Anonymity and Pseudonymity: Free Speech’s Problem Children’ (2018) 22 Media & Arts Law Review 173.
8 ibid 175–81.

of personality rights committed anonymously. However, the additional difficulties caused by technology extend the limits of freedom of speech on practical grounds alone, without there being any foundation in principle for doing so. Accordingly, in Europe, anonymous speech is only a possibility that can be guaranteed by the service provider; it is not regarded as a generally enforceable right. Hence, by using the available measures of criminal prosecution, even an anonymous person can be identified on the basis of his or her IP address. In addition, those entitled to complain about comments usually initiate civil defamation procedures, in which the investigating authority has no role but where the applicant and the court often have no effective means of identifying the person who posted the comment. Each of the four European Court of Human Rights (ECtHR) cases to be discussed in this chapter fell into this category, and in all four cases liability for the comments was examined by the acting courts under the rules of civil (tort) law.

C.  Moderation

The existence or non-existence of moderation, and its prior or ex post nature, can have important implications for the establishment of liability. If the service provider performs prior moderation, then it makes an intentional decision to publish or not to publish certain comments, which is undoubtedly an editorial decision. When the service provider decides in advance to publish (approve the comment), then it makes the comment in essence part of its edited content. Hence, it assumes the possible legal consequences of an infringement, that is civil law liability, and when the service provider itself is subject to press or media regulation, the liability arises from that status.9

The case is not as straightforward for ex post moderation, when the service provider decides on the removal or non-removal of a comment after rather than before the posting. If moderation is subsequent (after the publication), it is arguable whether:

(a) the service provider’s responsibility for the unlawful comment relates only to removal after an external signal is received;
(b) the service provider is subject to a continuous monitoring obligation after publication, on the basis of which unlawful comments must be identified and removed even in the absence of an eventual external signal; or
(c) the publication of an unlawful comment is an objective infringement, which can only be mitigated by its subsequent removal.

Options (a) and (c) are relevant also in the case of websites without automatic moderation; therefore, the question is whether the publication of the comment itself is an infringement, or whether only the failure to remove it immediately after notification qualifies as infringement. In a broader sense, this issue can be raised relating to the ‘duties and responsibilities’ to which a content provider is subject under Article 10(2) of the European Convention

9 This was the Estonian Government’s argument in the Delfi case: see Delfi AS v Estonia, App no 64569/09, judgment of 15 June 2015 [GC], paras 85, 125 (‘Delfi [GC]’).

of Human Rights (ECHR), with regard to controlling comments posted or intended to be posted on a website.10

D.  Basis of Legal Responsibility for Unlawful Comments

In the context of comments, the European Union’s (EU) E-Commerce Directive and the national law11 implementing it might be applied, and they treat the website’s content provider as a ‘hosting’ provider. Affected content providers used this argument before the ECtHR in both the Estonian case and the Hungarian case to be discussed later in this chapter. Under this approach, a content provider that allows commenting is an ‘intermediary service provider’ for those comments, which stores the information provided by the user (hosting).12 However, as a rule, the hosting provider is not responsible for the information provided by others, which is simply transmitted, stored or made available through the information society services provided by the intermediary service provider.13 The hosting provider is also not responsible for unlawful content when the provider does not have actual knowledge of illegal activity or information and, as regards claims for damages, is not aware of facts or circumstances from which the illegal activity or information is apparent; or the provider, upon obtaining such knowledge or awareness, acts expeditiously to remove or to disable access to the information.14

The concept of the hosting service, however, can only be applied to commenting with some difficulty. The hosting service provider is clearly different from the content provider that publishes the content in the storage place: the former provides the infrastructure for the publication (an element of it), and although it might technically be able to amend the content, it does not generate the content itself, and those using its services do not ‘connect’ to the content of the hosting provider. By contrast, the content provider of a website that permits the posting of comments does not create the comments itself, but they are nonetheless inseparable from its own content and would be meaningless without it, since they respond to it. The content provider who allows commenting has its own hosting provider (a third actor), which enables and permits the operation of the website on its servers; the relationship between the author of the comment and the content provider is therefore completely different from that between the content provider and the hosting provider. If providing a comment section on a website itself constituted a hosting service, there would be two separate hosting providers for the content uploaded to that section by users, one being the website’s content provider and the other the provider of the technical means for operating a website. The legal differentiation between service providers,

10 ibid para 115.
11 Directive 2000/31/EC of the European Parliament and of the Council of 8 June 2000 on certain legal aspects of information society services, in particular electronic commerce, in the Internal Market (Directive on electronic commerce) [‘E-Commerce Directive’]; Act CVIII of 2001 on certain issues of electronic commerce services and information society services.
12 E-Commerce Directive, Art 14(1).
13 ibid.
14 ibid Art 14(1)(a)−(b).

who have a role in the publication of a given website, and identification of the liability rules applying to them is not always simple. For instance, in the Hungarian case, the Magyar Tartalomszolgáltatók Egyesülete itself was the blog host, the Blog.hu platform was the (blog) provider, while a third party was the hosting service provider. In theory, liability for the infringing comment may be established against all of them, but according to different rules and in different circumstances.

In theory, one can assume a close relationship between the comment and the article being commented on, since the former is a reaction to the latter. However, this is not necessarily the case. The actual article can be just an excuse for speaking out, and there might be no content correlation between the article and the comments on it. Indeed, some of the comments may be ‘off-topic’ or written by ‘trolls’. The importance of this distinction did not, however, emerge in the cases before the ECtHR.

Identifying the provision of commenting as a hosting service is indirectly supported by the case law of the Court of Justice of the European Union (CJEU). In the Delfi case, three decisions of the Luxembourg tribunal were referred to.15 In these decisions, the CJEU held that the providers of services who provide access to unlawful content (search engine, webstore, internet service provider (ISP)) are only liable under Article 14 of the E-Commerce Directive16 if they have gained knowledge of the unlawful content and have failed to take steps to remove it. Hence, by analogy, a content provider that allows commenting could be a hosting provider, although it is true that, in the above-mentioned cases, the relationship between the service provider and the content published by the person committing the infringement may be much more remote, and search engines, webstores and ISPs are more akin to ‘classic’ hosting providers supplying the infrastructure than the hosts of the websites containing the comments (for ISPs, this is not in question, but in the other two cases, no related content exists). However, the ECtHR so far has not endorsed this broad interpretation of the ‘hosting provider’ concept in its jurisprudence on comments, and has focused instead on the civil (tort) law responsibility of content providers.

Hungarian case law considers comments to be ‘dissemination’ involving the transmission of information originating from others.17 If this information is unlawful, the distributor is also liable, in the sense that the website’s content provider can be liable for a comment, and in the absence of identification of the author, it can have exclusive liability. In the Hungarian Civil Code, ‘dissemination’ is covered in the section on violation of reputation. ‘Dissemination’ can take place by transmitting a piece of information, typically one containing an untrue statement of facts, but it assumes active conduct and a conscious decision by the content provider. For comments that are not moderated in advance, the content provider does not engage in such active conduct.
Where prior moderation does exist, however, this constitutes active conduct and thus a conscious

15 Case C-324/09 L’Oréal SA and Others v eBay International AG and Others, ECLI:EU:C:2011:474; Case C-70/10 Scarlet Extended SA v Société belge des auteurs, compositeurs et éditeurs SCRL (SABAM), ECLI:EU:C:2011:771; Case C-360/10 Belgische Vereniging van Auteurs, Componisten en Uitgevers CVBA (SABAM) v Netlog NV, ECLI:EU:C:2012:85.
16 E-Commerce Directive, Art 14.
17 Magyar Tartalomszolgáltatók Egyesülete and Index.hu Zrt v Hungary, App no 22947/13, judgment of 6 February 2016 (‘MTE’) paras 26 and 42; 2013 Civil Code, s 2:45 § (2); 1959 Civil Code, s 78 (no longer in force).

decision

has been made; hence there is no obstacle to the establishment of an infringement via dissemination.

Liability for breaching civil law rules on personality rights was also the starting point in the other two relevant cases. The background to the Delfi case reveals that the Estonian authorities established that an infringement had been committed by the service provider, applying civil law liability rules.18 Hence, similarly to Hungary, not only the author but also the content provider involved in transmission to the public is liable under civil law for defamatory statements that violate personality rights. In the most recent judgment by the ECtHR relating to comments, delivered in the Pihl case, the applicant originally claimed a violation of his personality rights in the Swedish court, in a way similar to that of the plaintiffs in the Estonian and Hungarian cases.19

Although the issue of dissemination did not come up in the majority statement of reasons in the Hungarian Constitutional Court’s resolution made in the comment case,20 it was raised in the Dissenting Opinion of Judge Stumpf, where he refers to a decision made by the Pécs High Court of Appeal in another case.21 The latter sought to clarify the exact definition of dissemination and held that

dissemination means the forwarding of news, ie, to communicate a thought and share information … This assumes active and conscious conduct, ie, the consciousness of the person violating the right must cover the unlawful claim and expression of opinion, and must share it with the public as the content of their own consciousness, in a communicative manner … In this case, the defendant is capable of removing the unlawful content and it can moderate the comments, provided that it becomes aware of the comments’ unlawful nature. The website operator cannot be expected to monitor all inputs and entries continuously, in particular in relation to larger content providers. However, it can be expected that, when it becomes aware of unlawful content on the website operated by itself, it shall remove that as soon as possible. Failure to remove it expresses its agreement with the unlawful content. In this case, however, it performs the violation of reputation as set out in Section 78 of the Civil Code.

On this basis, the High Court of Appeal declared, establishing a principle, that

[t]he operator of the website accessible via the internet (intermediary service provider) is subject to civil law liability for the content of comments violating reputation uploaded by another to the website, if it fails to act for the removal immediately upon gaining knowledge of it.22

This means, according to the High Court of Appeal, that where the content provider removes the challenged content upon request, it will not be liable. However, if it consciously refuses to fulfil the request, it can be considered to be disseminating the content and, consequently, for comments of an unlawful nature, will be liable for the infringement. The Constitutional Court, however, chose a different path in its Resolution no 19/2014. (V. 30.) AB, and took a step towards the recognition of objective liability – in which case, the free posting of a comment by another person can be considered in itself an infringement by the content provider.



18 Delfi

[GC] (n 9) paras 21−43. v Sweden, App no 4742/14, decision of 9 March 2017. 20 Resolution no 19/2014. (V. 30.) AB. 21 ibid paras 84–88. See Pécs High Court of Appeal Pf.VI.20.776/2012/5., BDT2013. 2904. 22 ibid para 85. 19 Pihl

208  Responsibility for Online Comments This raises the question: If we consider the posting of a specific comment to be unlawful (violating personality rights), what is the legal basis for the content provider’s exemption from civil law liability if the comment is removed? The approach taken by the Constitutional Court may be justified to the extent that traditionally, the civil law fails to differentiate between unlawful situations of differing duration – this can only be taken into account when imposing the sanction rather than when establishing the infringement. Alongside the notions of a hosting service and of dissemination, a possible third interpretation is that the comment is nothing more than a ‘reader’s letter’, making it very similar to the genre well known from the traditional press, and so it can be assessed in a similar way.23 This would mean the unconditional liability of the content provider for the content. However, the analogy with the reader’s letter could emerge consistently only with regard to comments moderated in advance and passed through the filter, since in the printed press, each reader’s letter is published as a result of a conscious editorial decision, and so the analogy would require such a decision to have been taken for comments before they are displayed. A comment moderated subsequently, or not moderated at all, can hardly be regarded as equivalent to a reader’s letter. In the UK, prior to 2013, comments damaging good reputation were judged by courts according to common law rules.24 The situation was similar, for example, in Hong Kong.25 Accordingly, in assessing the liability of gatekeepers, the main question was whether or not they could be considered publishers. In the English case, the Court of Appeal did not exclude this option, at least in terms of the period after the infringing content became known (but prior to that, they were not considered as such in any circumstances).26 Based on this approach, if gatekeepers failed to remove content infringing good reputation of which they had obtained knowledge, within the appropriate time, they would have liability similar to that of publishers. However, the Court of Appeal did not consider the content of the comments to be infringing in a real and significant way. In the Hong Kong case, the operator of the website qualified as a publisher, but it could claim the protection granted to innocent dissemination and hence was exempted from liability.27 Following the reform of the law on defamation in the UK, the Defamation Act 2013 created a separate rule, to be applied primarily in terms of defamation committed by anonymous comments. Section 5 of the Act provides express protection to website operators. The term ‘website operator’ includes not only the service providers of edited sites publishing texts and posts open for comments, but also platform providers, social media and video-sharing sites allowing users to publish their own content.28 Section 5(2) and (3) read as follows: (2) It is a defence for the operator to show that it was not the operator who posted the statement on the website. 23 Nádori Péter, ‘Kommentek a magyar interneten: polgári jogi gyakorlat’ (2012) 1 In Medias Res 319. 24 Tamiz v Google, Inc [2013] EWCA Civ 68. 25 Oriental Press Group Ltd and Others v Fevaworks Solutions Ltd and Others [2013] HKCFA 47. 26 Tamiz v Google, Inc (n 24) paras 27–36. 
27 Anne SY Cheung, ‘Liability of Internet Host Providers in Defamation Actions: From Gatekeepers to ­Identifiers’ in András Koltay (ed), Media Freedom and Regulation in the New Media World (Budapest, Wolters Kluwer, 2014) 289. 28 James Price and Felicity McMahon (eds), Blackstone’s Guide to the Defamation Act 2013 (Oxford, Oxford University Press, 2013) 93.

The European Court of Human Rights Case Law Relating to Comments  209 (3) The defence is defeated if the claimant shows that— (a) it was not possible for the claimant to identify the person who posted the statement, (b) the claimant gave the operator a notice of complaint in relation to the statement, and (c) the operator failed to respond to the notice of complaint in accordance with any provision contained in regulations.

This statutory rule is supplemented by Regulations clarifying certain further important questions of detail.29 The procedure, based on the poster’s reply, will develop in one of the following directions: • if the poster fails to reply, or if the reply does not meet the requirements laid down by the Regulations, the operator of the website is required to delete the comment; • if the poster does not consent to the deletion of the comment, provides his or her personal data and consents to the transmission of his or her personal data to the claimant, the claimant may sue the poster directly; • if the poster does not consent to the deletion of the comment, provides his or her personal data, but does not consent to the transmission of that personal data, the claimant may request the court to order the release of the data to serve any eventual lawsuit; • if the poster consents to the deletion, it has to be completed within two working days  – regardless of this, the claimant may sue the poster of the comment, but presumably with much less hope of success.30 Hence, there are at least three possible ways to establish liability for comments: the ­application of the general rules of civil law (personality rights), the exemption granted under the E-Commerce Directive to hosting service providers, and the new rules specifically targeted at resolving this issue. II.  THE EUROPEAN COURT OF HUMAN RIGHTS CASE LAW RELATING TO COMMENTS – OVERVIEW

The Delfi case was the first ECtHR case on comments.31 Delfi is a popular news website in Estonia, on which seriously defamatory comments were posted by readers regarding a company referred to in an article.32 Although the Estonian court of first instance

29 The Defamation (Operators of Websites) Regulations 2013 (SI 2013/3028), available at www.legislation.gov. uk/uksi/2013/3028/pdfs/uksi_20133028_en.pdf. 30 Ministry of Justice, Complaints about Defamatory Material Posted on Websites: Guidance on Section 5 of the Defamation Act 2013 and Regulations (January 2014) at www.gov.uk/government/uploads/system/uploads/ attachment_data/file/269138/defamation-guidance.pdf. 31 Delfi [GC] (n 9) and its background: Delfi v Estonia, App no 64569/09, judgment of 10 October 2013(‘Delfi’). 32 A few examples of the 20 comments cited by the ECtHR in its judgment: ‘[O]pen water is closer to the places you referred to, and the ice is thinner. Proposal – let’s do as in 1905, let’s go to [K]uressaare with sticks and put [L.] and [Le.] in a bag’; ‘[G]ood that [La.’s] initiative has not broken down the lines of the web ­flamers. go ahead, guys, [L.] into the oven!’; ‘[Little L] go and drown yourself’; ‘What are you whining for, kill this bastard once and for all [.] In the future the other ones … will know what they will risk, even they will only have one little life’; ‘[A]nd can’t anyone defy these shits?’

210  Responsibility for Online Comments applied the law on e-commerce, classifying the news portal’s service provider as a hosting provider, the appeal court and the Supreme Court decided the case on the basis of the law on civil law obligations, establishing a violation of the rights of the plaintiff (the owner and at the same time a member of the supervisory board of the company affected).33 The service provider’s application was turned down in 2013 by the First Section of the ECtHR, and in 2015 by the Grand Chamber, which declared that holding the service provider liable for the comments did not violate Article 10 of the ECHR. In the MTE case, the Fourth Section of the ECtHR decided in favour of the applicant. One of the two applicants in the case was a professional association and the other a large news portal. As in the Delfi case, personal rights procedures were initiated, relating to defamatory readers’ comments attached to an article criticising a company’s ­operation.34 None of the Hungarian courts considered the rules on hosting providers to be applicable, so they decided that the Civil Code’s rules on the protection of reputation had been violated.35 However, the ECtHR established that the judgment of the H ­ ungarian courts was contrary to Article 10, primarily due to the less defamatory nature of the comments in the instant case compared to the comments in the Delfi case.36 The applicant in Pihl, this time not a legal entity, unsuccessfully resorted to the ­Swedish courts regarding the violation of his personal rights by comments posted about an article written about him.37 In its decision, the ECtHR did not find a violation of A ­ rticle  8 of the ECHR. In the Swedish legal system, there are separate regulations regarding comments, based on which seriously unlawful comments (certain statutory definitions of criminal law offences, and violating copyright) must be removed. However, the Swedish courts concluded that this law did not apply to comments that were ‘merely’ defamatory.38 According to the Third Section of the ECtHR, the comments were not so seriously unlawful that not sanctioning them would result in the violation of Article 8.39 In Tamiz v the United Kingdom,40 the applicant was a UK citizen and a London council candidate. On 27 April 2011, an article concerning him appeared on a blog, which was hosted by Blogger.com, a blog-hosting service owned by Google. After its publication, anonymous comments were posted in the comments section.41 The applicant later complained to Google that some of the anonymous comments were defamatory. After an exchange of e-mails, Google forwarded the complaint to the blogger; eventually the 33 Delfi [GC] (n 9) paras 22 and 38. 34 The three comments found to be unlawful were as follows: ‘They have talked about these two rubbish real estate websites a thousand times already’; ‘Is this not that Sándor-Benkő-sort-of sly, rubbish, mug company again? I ran into it two years ago, since then they have kept sending me emails about my overdue debts and this and that. I am above 100,000 [Hungarian forints] now. I have not paid and I am not going to. That’s it’; ‘People like this should go and shit a hedgehog and spend all their money on their mothers’ tombs until they drop dead.’ 35 MTE (n 17) paras 15−25. 36 ibid paras 76 and 91. 37 The only comment affected: ‘[T]hat guy pihl is also a real hash-junkie according to several people I have spoken to.’ 38 Pihl (n 19) para 12. 39 ibid para 36. 
40 Tamiz v the United Kingdom, App no 3877/14, admissibility decision, 19 September 2017. 41 The comments were too long to be quoted here; however, they are quoted in detail in the decision of the ECtHR, at paras 9–16.

The Relevant Criteria in the Cases before the European Court of Human Rights  211 post and all the comments were removed by the blogger. After all these events, the applicant issued libel proceedings against Google. The Court of Appeal found that the High Court was right to conclude that the claim should not be allowed to proceed, because both the damage and any eventual vindication would be minimal, and the costs of the exercise would be out of all proportion to what would be achieved.42 After receiving an application, the ECtHR concluded that the UK courts had not failed in their duty to protect the applicant’s right to respect for his private life under Article 8 of the ECHR, and declared the application inadmissible. III.  THE RELEVANT CRITERIA IN THE CASES BEFORE THE EUROPEAN COURT OF HUMAN RIGHTS

In the four judgments of the ECtHR discussed in section II, the outlines of a multielement set of criteria are taking shape, albeit without any guidance regarding the degree to which each criterion is relevant. We discuss the criteria in what follows. The common starting point for the decisions is that the establishment of the gatekeeper content­ provider’s liability in itself does not infringe its right to freedom of speech. A.  The Content of the Comment The decision in MTE establishes the idea that the subject of the original article (on which comments were posted) concerned a public matter,43 and that this is of material importance. When a post concerns public matters, there should be increased protection for freedom of speech.44 Several authors have argued that certain parts of Internet communications, such as comments, should be assessed according to different criteria than others, since they are often less well thought-out due to their nature, often written in anger, fail to reflect actual considerations and do not suggest actual intentions to act, hence their nature is different from that of opinions published in the traditional media.45 This (­properly substantiated) approach is partly contradicted by the ECtHR, which considers not only the relevant articles but also the comments posted on them as deserving of protection, due to their involvement in the public debate, thereby attributing key ­importance to them.46 According to the former approach, comments (or other similar online p ­ henomena) need not be taken completely seriously, and it is unnecessary to restrict them due to their relative immateriality, while the ECtHR consider comments as public/­political opinion

42 Tamiz v Google, Inc (n 24) para 50. 43 MTE (n 17) para 72. 44 Eric Barendt, Freedom of Speech, 2nd edn (Oxford, Oxford University Press, 2005) 155–62. 45 Jacob H Rowbottom, ‘To Rant, Vent and Converse: Protecting Low Level Digital Speech’ (2012) 71 Cambridge Law Journal 355; Ian Cram, Citizen Journalists Newer Media, Republican Moments and the Constitution (Cheltenham, Edward Elgar, 2015) 73–111; Péter Nádori, ‘Anonymous Mass Speech on the Internet and the Balancing of Fundamental Rights’ in András Koltay (ed), Comparative Perspectives on the Fundamental Freedom of Expression (Budapest, Wolters Kluwer, 2015) 217, 245−46. 46 Delfi [GC] (n 9) paras 11, 28 and 29; MTE (n 17) para 72.

212  Responsibility for Online Comments and as deserving of the strictest protection, although it declares their lower stylistic value, which is typical of numerous websites and mitigates the effect related to such ­infringements.47 With that said, the ECtHR considers that such content can still be restricted according to the usual tests applied to public debates. It is to be noted too that this reference to their ‘low value’ can be turned on its head: if comments are less capable of shaping the public debate, their restriction obviously affects the public in a less ­negative way. Some of the comments in Delfi were assessed by the Grand Chamber as ‘hate speech’,48 and this interpretation was taken over later by MTE and Pihl, although in Delfi, the First Section itself did not make such an assessment.49 While the Grand Chamber made unusually frequent repetitions in Delfi, by talking about ‘clearly unlawful’ comments,50 the MTE and the Pihl judgments reject the restriction of freedom of speech on exactly this basis, saying that, contrary to the comments encouraging hate and violence to be assessed in Delfi, the comments in the other two cases, although offensive51 and defamatory,52 are altogether less serious, and thus they can be subjected to the protection afforded to ­freedom of expression by Article 10. The common lesson to be learned from these four cases is that an assessment of the comment’s content is of essential importance in determining the content provider’s responsibility. In the Tamiz case, the ECtHR engaged in a long discourse on the general characteristics of online comments and comment streams, concluding that they also contain a lot of offensive or even blatantly defamatory speech; however, according to the Court, the majority of these are of a particularly trivial character and have limited impact.53 In connection with this, the ECtHR concurred with the UK courts insofar as, although the comments the claimant had complained about were indeed offensive, they nevertheless amounted to little more than vulgar abuse, which is not so rare on Internet portals these days, and which the claimant, being an active politician, must tolerate.54 B.  Identifiability of the Author It has already been noted that identification of the communicator depends on the type of procedure initiated in regard to the comment being complained about. In a criminal procedure (after a report for criminal defamation), identification of the perpetrator is more likely. The ECtHR has taken the position that uncertainties in identification, existing in each case, provide a sufficient reason for the procedure on the ground that an unlawful comment can be initiated directly against the content provider.55 That is,

47 MTE (n 17) para 77. 48 Delfi [GC] (n 9) para 117. 49 It only highlights that the original article caused a ‘higher than average risk’ for comments containing hate speech, see Delfi (n 31) para 86. 50 Delfi [GC] (n 9) paras 128, 140, 141, 153, 156 and 159. 51 MTE (n 17) paras 76 and 91. 52 Pihl (n 19) para 25. 53 Tamiz (n 40) para 80. 54 ibid para 81. 55 Delfi [GC] (n 9) paras 147−51.

The Relevant Criteria in the Cases before the European Court of Human Rights  213 an attempt at identification (or the absence of it), and the choice between the available procedures, will not affect the decision on the content provider’s responsibility. At the same time, in Pihl, the ECtHR suggested to the applicant that, although it had acquired the IP address of the computer used by the comment’s author, it had failed to take further steps to identify that author.56 Nevertheless, the weight accorded to this circumstance in the adjudication of the case is not completely clear from the decision. C.  The Content Provider In Delfi, the ECtHR placed great emphasis on the economic interests of the content provider.57 According to the ECtHR, provision of the opportunity to post comments served the economic interests of the news portal’s publisher; comments themselves attracted the public to the site. In MTE there were two applicants: one of them was a news portal with economic interests and the other a professional (not-for-profit) organisation; the difference between the two was noted by the ECtHR,58 but it failed to draw the relevant consequences from the distinction.59 This ultimately did not have any negative effect for the not-for-profit applicant, since the ECtHR upheld both applications. In Pihl, however, the statement of reasons in the judgment highlighted that low traffic on the website and the not-for-profit nature of the content provider were important­ considerations.60 As a precursor to the Tamiz case, the claimant filed a claim before the UK courts, not directly against the content provider who opened the article to comments but against the blog provider hosting the blogs (which happened to be Google). As a result, the case is somewhat different from the other three, in that here we have a bigger distance between the real author and the service provider deemed liable by the claimant: the blogger was in touch with the poster of the comments, while the blog provider was in touch with the blogger.61 However, this did not have a significant effect on the eventual ECtHR ruling. D.  The Party Attacked Another relevant consideration involves the identity of the target of the comments. In the MTE case, the target of the comments was a legal entity, the operation and criticism of which clearly qualified as a public matter. In any case, the protection of a legal entity’s rights and those of natural persons should be assessed differently; the ‘moral’ rights of

56 Pihl (n 19) para 34. 57 Delfi (n 31) para 94; Delfi [GC] (n 9) paras 115, 116, 128 and 144. 58 MTE (n 17) para 73. 59 Dirk Voorhoof, ‘Pihl v Sweden: Non-Profit Blog Operator is not Liable for Defamatory Users’ Comments in Case of Prompt Removal upon Notice’, Strasbourg Observers Blog (20 March 2017) at https://strasbourgobservers.com/2017/03/20/pihl-v-sweden-non-profit-blog-operator-is-not-liable-for-­ defamatory-users-comments-in-case-of-prompt-removal-upon-notice/. 60 Pihl (n 19) paras 31 and 37. 61 Tamiz (n 40) paras 4–6.

214  Responsibility for Online Comments legal entities deserve less strict protection.62 In MTE, the ECtHR articulated the need for this differentiation.63 It would be interesting to see whether the ECtHR’s assessment of the comments would have changed if the actual targets of the comments, the managers of the company, had brought the case in their own names and not in the name of the affected company as a legal person, since their honour could be seriously violated by the comments. In Pihl, it is not evident from the ECtHR judgment whether the applicant was a private individual or a public figure. From the content of the article it can be assumed that he qualified to some extent as the latter, but this remains an assumption only and is a circumstance that would have justified more thorough examination by the ECtHR. In the Tamiz case, the ECtHR attached importance to the fact that the applicant was an active politician, hence his tolerance threshold in matters of defamation should have been higher than that of private individuals.64 E.  The Effect of the Comment on the Party Attacked The Delfi and the Pihl judgments failed to examine the effect of the state authorities’ ­decision on the party attacked, but MTE suggested that this factor might be relevant.65 Here it is necessary to distinguish between the violation of a company’s business reputation and a comment that negatively affects the social status of a private individual, because the latter interest is protected more thoroughly.66 In the Tamiz case, the ECtHR found that the impact of comments with offensive content – also considering the generally low level of the anonymous discourses taking place on the Internet – is not grave enough for the state’s obligation to protect good reputation to be established.67 F.  The Conduct of the Content Provider The content provider’s conduct is of key importance. In Delfi, the Grand Chamber established that, although the content provider immediately removed the comment after its attention was drawn to it (ie it performed ex post moderation), that act was not sufficient to escape liability, since the provider should have prevented the publication of the ‘clearly unlawful’ comments (ie it should have performed prior moderation).68 This requirement mutatis mutandis assumes the preliminary assessment of all comments, by which the content provider would clearly become an ‘editor’. In Delfi, the news portal had a separate comment policy, which set content boundaries for users. In addition, it included a disclaimer on the website, that authors were responsible for the content of comments. Furthermore, the portal operated an efficient notice and takedown system, which it used to remove unlawful comments after notification.

62 Uj

v Hungary, App no 23954/10, judgment of 19 July 2011, para 22. (n 17) para 65. 64 Tamiz (n 40) para 81. 65 MTE (n 17) para 85. 66 ibid para 84. 67 Tamiz (n 40) para 80. 68 Delfi [GC] (n 9) para 141. 63 MTE

The Relevant Criteria in the Cases before the European Court of Human Rights  215 As  a result, the content provider could not be considered a ‘passive, clearly technical’ service provider.69 In Delfi, the unlawful comment was accessible for six weeks, even though the service provider removed it upon notification. However, the harmful content was accessible on the website for a lengthy period (in particular considering the ‘lifespan’ of an online article and the related comments). In MTE, the affected websites operated in a similar manner, displaying a disclaimer, and for index.hu there was a published moderation policy, and both content providers operated notice and takedown systems. The offended company failed to request the removal of comments but went to court immediately.70 In Pihl, the content provider removed not only the comment but also the related ­article, since it contained a false factual claim regarding the applicant’s membership of the Nazi Party, thereby violating his reputation.71 The comment was removed by the service provider the day after notification, on the ninth day after publication.72 In the Tamiz case, Google proceeded against the blogger concerned, who in turn removed the offensive comments and the entire post itself. Nevertheless, the claimant argued that his good reputation was damaged by the availability of the comments concerned during the time (more than a month) that elapsed between the notice submitted to Google and the removal itself. It is important to underline that the UK courts rejected the statement of claim primarily on the grounds of the content of the comments. Had they been deemed ‘offensive enough’, the case might have had a different outcome. G.  Sanctions Applied The gravity of the sanction against the content provider is another criterion that might be analysed in examining a violation of Article 10. In Delfi, the €320 damages imposed as compensation by the Estonian courts was not found to be disproportionate by the ECtHR, nor were the possible indirect consequences of the decision. As the Grand Chamber put it, following the decisions made by the Estonian courts, the news portal was not forced to change its business model and no additional costs were incurred. Indeed, Delfi remained one of the most popular news portals in Estonia, in terms of receiving the largest number of anonymous comments, and had in any event already employed moderators earlier.73 Contrary to this, in MTE, the Second Section considered the legal consequences imposed on the content providers to be grave, even in the absence of ordering ­compensation.74 By establishing the infringement in court, and ordering the content providers to pay the not excessively High Court fees, according to the ECtHR, wider consequences resulted from the decision of the Hungarian court, leading the content



69 ibid

paras 144−46, 155 and 159. (n 17) paras 80, 81 and 83. 71 Pihl (n 19) para 30. 72 ibid paras 32, 37. 73 Delfi [GC] (n 9) paras 160 and 161. 74 MTE (n 17) para 86. 70 MTE

216  Responsibility for Online Comments providers to close down their sections for commenting in the future and not to provide this option to their readers.75 H. Summary The gravity of the criteria discussed in this section and their relationship are still unclear. Two extreme situations can be identified. Comments with gravely offensive (hateful, threatening, encouraging violence) content, which appear through the offices of a service provider with economic interests related to the publication of the comment, and which attack a natural person, without prior moderation, with lenient court sanctions, can surely be restricted on the basis of these ECtHR judgments. On the other hand, comments that are ‘only’ defamatory or violate other personal rights (such as privacy), which are published on a not-for-profit website, are made against a legal entity and removed i­ mmediately upon notification and subject to grave official sanctions, can surely be covered by the protection afforded by Article 10 of the ECHR. The numerous situations that arise between these two extremes are to be assessed on a caseby-case basis, weighing the relevant circumstances and determining their weight in the relevant case. IV.  MAIN CRITICISM OF THE JURISPRUDENCE OF THE EUROPEAN COURT OF HUMAN RIGHTS

A.  Liability of a Gatekeeper (Content Provider) The main criticism considers the element of the ECtHR practice present in all four cases: that in certain cases, the content provider can be held liable for comments. In other words, it cannot clearly be considered an intermediary service provider, whose obligation to remove unlawful content emerges only upon notification. It is certain that legal systems that expect more than this – that impose a kind of special editorial obligation on content providers – do not breach Article 10 by this alone. It is still too early to establish whether the ECtHR expects such an attitude towards this issue from legal systems, that is that violation of Article 8 can be established in the absence of a mandatory requirement in the state’s legal system to moderate gravely unlawful content automatically (without a separate notification). On the basis of Pihl, one might think that this probably does not hold, but in that case the violation was ‘not serious’ according to the ECtHR (­defamation); and it follows from the Swedish regulations that, in the event of hate speech, the actual threat and incitement to violence referred to in Delfi, the Swedish authorities would have acted. Regulations providing almost complete (US-type) exemption for the content provider would probably not be compatible with Article 8 of the ECHR.



75 ibid.

Main Criticism of the Jurisprudence of the European Court of Human Rights  217 The European approach has been criticised by many, amongst others Judge Sajó, who wrote a lengthy Dissenting Opinion to the Grand Chamber’s judgment in Delfi, which drew attention to the risk of ‘collateral censorship’ caused by the decision, which encourages interference from the content provider.76 Péter Nádori77 and Anne Cheung78 also fear restriction of the freedom of expression in the future. According to Dirk Voorhoof, Delfi may have far-reaching consequences for the expression of opinion online. It seems that only prior moderation can offer full security for content providers; subsequent removal in itself does not secure full exemption from liability. This would create a new paradigm of online media, enabling personal involvement in debates, and might lead to the termination of comment-posting opportunities, in order to cut costs and avoid the possible danger of liability.79 MTE, Pihl and Tamiz somewhat mitigate this worry but, being aware of these, one can still assume that full security can be afforded only by prior moderation, which inevitably filters out borderline content in addition to ‘clearly’ unlawful comments. It is true that while many bells were rung for the freedom of online expression after the ECtHR decisions, especially after Delfi, only a few were concerned with the effective protection of personality rights in the context of anonymous comments. This issue is usually negated by reference to the ‘inferior’ or low style of Internet communication and the insignificant effect of comments on the assessment of the person attacked. Christina Angelopoulos and Stijn Smet, at the same time, argued that European legal thinking generally is incompatible with the full exemption of content providers from legal liability, and with tipping the balance between freedom of expression and personality rights towards the former.80 B.  Importance of the ‘Economic Service’ The economic interest of the content provider should not be a decisive argument in establishing legal liability for comments. Had we imputed the financial interests of the media as a kind of ‘aggravating circumstance’, professional media could not exist in the absence of proper economic foundations. Even subject to this, the ECtHR approach of identifying an economic interest in allowing the posting of comments, and considering it a circumstance linking the article published by the content provider and the comments posted thereto, thus refuting the content provider’s hosting provider status in the context

76 Delfi [GC] (n 9), Judge Sajó’s Dissenting Opinion, paras 1−3. 77 Nádori (n 45) 217. 78 Cheung (n 27) 289 and 293. 79 Dirk Voorhoof, ‘Qualification of News Portal as Publisher of Users’ Comment may have Far-Reaching Consequences for Online Freedom of Expression: Delfi AS v Estonia’ Strasbourg Observers Blog (25 October 2013) at https://strasbourgobservers.com/2013/10/25/qualification-of-news-portal-as-publisher-of-users-commentmay-have-far-reaching-consequences-for-online-freedom-of-expression-delfi-as-v-estonia/; Dirk Voorhoof, ‘Delfi AS v Estonia: Grand Chamber Confirms Liability of Online News Portal for Offensive Comments Posted by its Readers’ Strasbourg Observers Blog (18 June 2015) at https://strasbourgobservers.com/2015/06/18/delfias-v-estonia-grand-chamber-confirms-liability-of-online-news-portal-for-offensive-comments-posted-by-itsreaders/. 80 Christina Angelopoulos and Stijn Smet, ‘Notice-and-Fair-Balance: How to Reach a Compromise Between Fundamental Rights in European Intermediary Liability’ (2016) 8 Journal of Media Law 266, 287.

218  Responsibility for Online Comments of comments (since the latter must be completely independent from the user content posted on its storage space), is not totally unfounded. The different assessment of not-for-profit websites and large news portals can be justifiable in certain cases. Even though MTE did little to settle this issue, in Pihl the ECtHR already emphasised the small scale and freedom from financial interests of the affected party.81 C.  Expecting Moderation As noted, it follows from the current practice of the ECtHR that where a content provider wants to avoid liability completely, it must moderate all comments in advance or at the latest directly after their posting, in no way relying on notification from the injured party.82 This puts a heavy burden indeed on content providers, even if most comments are not unlawful and it is not excessively complicated to establish this. For providers, the theoretical possibility of being held liable is the real threat rather than the actual legal proceedings, which are initiated against them quite rarely. D.  Assessment of the Comment’s Content From these four cases, it turns out that comments embodying ‘hate speech’, ‘threat’ or ‘incitement to violence’ are to be assessed as the most serious because they are ‘clearly unlawful’,83 while it is a lot more difficult to establish the content provider’s liability for ‘merely’ defamatory comments. Some of the comments in Delfi were classified by the ECtHR as hate speech. In this context, the issue arises that, while the owner of the company affected initiated the procedure in Estonian courts to protect the personality rights of that company, and the Estonian courts held the news portal’s service provider liable for protecting those rights, hate speech is traditionally a criminal law concept, which needs to be restricted to protect some kind of common interest or the public order.84 As such, in Delfi the ECtHR created a category of hate speech to be prosecuted at the individual’s request and to protect his rights, which merges the various limitations on freedom of expression, each of which is legitimate and justifiable in itself, but all of which are otherwise clearly separable. Highlighting ‘threat’ and ‘incitement to violence’ is also problematic. On the one hand, the ECtHR claims that the stylistic value of the expression of opinions on the Internet is generally low, and this is why it is not warranted to assess it according to the content scales established for the traditional media. But if this is true, why take threats and incitement to violence seriously, and why assess them as causing more serious harm than defamation?

81 Voorhoof (n 59). 82 Delfi [GC] (n 9) para 159; Voorhoof (n 59). 83 Delfi [GC] (n 9) paras 128, 140, 141, 153, 156 and 159. 84 See Ivan Hare and James Weinstein (eds), Extreme Speech and Democracy (Oxford, Oxford University Press, 2009).

The main issue with leaving decision-making to the content provider concerns the procedure and criteria used in considering challenged content, which necessarily differ from the substantive and procedural rules set out by the law. Content providers can decide arbitrarily; they are not bound by constitutional guarantees; they can have their own policies and sets of rules and take decisions that are not transparent; their details and justifications cannot be studied, and there are no clear-cut remedies against them.85 Currently, content or platform service providers (social media, video-sharing platforms) can decide freely regarding the removal of content. Considering the influence these services have on the public sphere (of course, this is primarily true of highly popular social media and not of the comment feeds available on certain less popular websites), this is rather dangerous from the point of view of freedom of speech. The service provider and the moderator acting on its behalf are not law enforcement agencies; they decide on content on the basis of different standards and reasons, for example by primarily defending the economic interests and smooth operation of the service provider. They cannot be expected to take over the court's role, in the absence of the necessary background and guarantees, but their decision-making should be in line with at least some basic principles, which take into account both freedom of speech and the interests of the democratic public sphere. It is not completely satisfactory to leave the handling of these issues to the notice and takedown procedure: the E-Commerce Directive links the emergence of the obligations of the intermediary service provider to awareness of the infringement,86 and under the civil law rules applied in the comment cases a near-immediate or at least very quick takedown also exempts the service provider from liability, while the unlawful nature of comments can only be established in a satisfactory manner by the courts in a rather lengthy procedure.87 It follows that a service provider necessarily over-complies, that is, it takes down lawful content that it deems to be risky. According to my assessment, in its case law so far the ECtHR has not been completely principled. While it has made it clear that it expects some active involvement from content providers in their assessment of comments, the Court has failed to specify the criteria for such involvement. Also, by selecting among the various types of restrictions on freedom of expression and classifying some of them as more serious than others, based on unclear justifications, the ECtHR has itself contributed to the reinforcement of problems relating to the 'private censorship' of freedom of expression, which in any case inevitably proliferates in the era of online gatekeepers.

V.  THE CASE OF SOCIAL MEDIA COMMENTS

In the era of Facebook, comments can seem a relic of the past. However, numerous Internet content providers grant space for comments in their social media systems, and general user communication has largely moved to social media. Even with this in mind, 85 Dirk Voorhoof and Eva Lievens, ‘Offensive Online Comments: New ECtHR Judgment’ ECHR Blog (15 February 2016) at echrblog.blogspot.co.uk/2016/02/offensive-online-comments-new-ecthr.html. 86 E-Commerce Directive, Arts 12−14. 87 See this issue in Angelopoulos and Smet (n 80) 285.

the issues present are worthy of in-depth investigation, since they raise important societal questions. Content providers who allow comment posting are Internet gatekeepers, and many other types of gatekeeper are known (infrastructure providers; platform providers such as social media, video-sharing portals and search engines; webstores, etc), in connection with which similar liability issues may emerge. The ECtHR has not yet addressed these questions. In Delfi, the ECtHR expressly declared that the decision did not apply to other forums of Internet communication that permit the publication of third-party comments.88 However, this does not exclude the possibility that the reasoning in these decisions could be used in the future in cases involving other services. The obligations of gatekeepers, the conditions for establishing their legal liability and user rights against them are issues that will determine the future framework of freedom of expression in Internet communication. The arguments brought up in the comment cases are relevant not only in connection with anonymous comments. At least some of the arguments in the ECtHR's decisions raise the possibility of establishing the content provider's responsibility even for comments published under the author's name. Hence, for a comment added to a Facebook post, not only the platform provider but also the user of the social media account (be it a private individual, a public figure or a content provider itself) would be responsible for an unlawful comment under that post, the author of which can possibly be identified with a single click. According to current ECtHR jurisprudence, if the platform provider or the user of the account failed to act quickly upon notice, and either of them were held liable for a breach of reputation by a State Party, this would not necessarily breach their Article 10 rights.

VI.  SUMMARY

European legal systems opt for balancing opposing rights,89 aiming for a ‘fair balance’,90 and the ECtHR ‘adopted a middle ground between two diametrically opposing viewpoints on the regulation of the internet’.91 This approach is contrary to the clear-cut rules in the US that generally exempt gatekeepers from liability. Similar legal issues – according to Section 230 of the CDA – cannot possibly arise in the US. In Europe, proper balancing would be greatly enhanced by statutory regulation (as adopted in the UK in 2013),92 because it is clear that neither the e-commerce rules nor the general private law liability rules are fully capable of dealing with the issue of comments, or in general with the liability of online gatekeepers. The debate on the obligation of content providers and the preservation of freedom of speech will doubtless continue. However, in Europe, it can no longer be questioned

88 Delfi [GC] (n 9) para 116. 89 Barendt (n 44) 205–27. 90 Angelopoulos and Smet (n 80) 275. 91 Robert Spano, ‘Intermediary Liability for Online User Comments under the European Convention on Human Rights’ (2017) 17 Human Rights Law Review 665, 679. 92 See Alastair Mullis and Andrew Scott, ‘Tilting at Windmills: The Defamation Act 2013’ (2014) 77 Modern Law Review 87.

that content providers necessarily perform 'editing' with regard to users' comments. The traditional concept of media cannot be fully maintained in the era of online gatekeepers; there is already an unavoidable intervention in the communication process and in content by actors who do not take part in content production itself. This is a different type of editing from the traditional model, but it is similar in many respects. Aligning the regulation of media and freedom of expression with this phenomenon is unavoidable.

7
The Future of Regulating Gatekeepers

I.  INTRODUCTION

The operations of online gatekeepers raise numerous difficult questions concerning the freedom of speech that shape the future regulation of this area. The previous chapters demonstrated how the Internet and gatekeepers generally interfere with the process of public communication using various means and for various reasons. Some of this interference is at the behest of governments, which may simply encourage action but which may also make some interference mandatory. Gatekeepers may also delete, block, edit or present user-generated content to serve their own interests and intents. Such interference, regardless of its motivation, raises concerns, as it may have a detrimental impact on free speech and access to information, and it may also jeopardise the diversity of opinions and the equal dignity of persons expressing their opinions or seeking to access information. Due to the operations of gatekeepers, the critical aspects of the limitation of speech have shifted from government (authorities, courts and state law) to private actors, which are capable of restricting the freedom of individuals en masse, on a daily basis and without the need to engage in strictly regulated legal proceedings (paradoxically, such restrictions are applied while gatekeepers, pursuant to their fundamental goal, also considerably extend the range of opportunities for free speech). As Andrew Tutt noted, 'preventing government from enacting speech-focused regulation means that powerful private interests will hold enormous power to shape how individuals interact with each other and perceive the world'.1 Grandiose comparisons are frequently made with regard to gatekeepers. For example, Facebook is often compared to a country of its own,2 while Google is compared to the Almighty: '[T]he perfect search engine would be like the mind of God.'3 Even though they may not have divine powers, such gatekeepers have an influence over the public sphere that is unprecedented. The use of algorithms also lends a strange, non-human guise to the activities of gatekeepers, recalling Isaac Asimov's first law of robotics: 'A robot may not injure a human being or, through inaction, allow a human being to come to harm.'4

1 Andrew Tutt, 'The New Speech' (2014) 41 Hastings Constitutional Law Quarterly 235, 241. 2 Henry Farrell, Margaret Levi and Tom O'Reilly, 'Mark Zuckerberg Runs a Nation-State, and He's the King' Vox (10 April 2018) at https://www.vox.com/the-big-idea/2018/4/9/17214752/zuckerberg-facebook-power-regulation-data-privacy-control-political-theory-data-breach-king. 3 Frank Pasquale, The Black Box Society: The Secret Algorithms That Control Money and Information (Cambridge, MA, Harvard University Press, 2015) 187. 4 Isaac Asimov, 'Runaround' in Isaac Asimov, I, Robot (New York, Doubleday, 1950) 40.

In a sense, an algorithm that restricts free speech injures the person who wishes to speak, but it may also fail to protect a person by not limiting access to harmful content: both scenarios seem to violate Asimov's law. An algorithm may meet our expectations if it is able to limit access to harmful content only. However, to make such distinctions, it would need to be equipped with skills and competences that are similar to those of a court or constitutional court. Of course, there are human beings behind each algorithm, but the 'automated' restriction of free speech might bring about results that, under existing legal doctrines, are hard to comprehend. We may not be able to foresee the future of regulating free speech, but some possible directions already seem to be emerging.

II.  POSSIBLE INTERPRETATIONS OF EXISTING LEGAL DOCTRINES CONCERNING THE PUBLIC SPHERE

A.  Media Regulation

In essence, media regulation serves three purposes: (a) to guarantee the freedom of the press to the widest possible extent (and eliminate the possibility of censorship at the same time); (b) to prevent any damage or threat arising from the exercise of the freedom of the press, and to impose necessary sanctions (by laying down restrictions); and (c) to establish a framework that facilitates the diversity of opinions (plurality in the media market).5 The regulation of different outlets of online public communication (while such services are not necessarily considered 'media' in a legal sense) serves the same fundamental purposes, as the protection and the necessary limitation of the freedom of the press are underpinned by the same goals and by the interests of a democratic public sphere, that is, considerations relating to free communication, protecting the public from harm and making various sources of information available. We shall review already existing regulatory models and the different legal fields concerned in the light of their contribution to achieving these goals. Media regulation employs appropriate measures to achieve the above goals. Within the framework of media regulation, government (media authorities and the courts that review their decisions) has to date typically dealt with media service providers (television and radio companies). In this structure, the interests of the audience are represented by government actors. The question we face now is the extent to which this framework can be applied to the activities of Internet gatekeepers. It might seem that the means of media regulation, as they are applied in the context of television, radio and, more recently, similar online content providers, do not contribute much to solving the problems raised by the emergence of gatekeepers. The prohibition of censorship does not seem applicable in their context, as gatekeepers do not act in the name of a government when they restrict free speech. They do so as private actors, who own or manage the relevant



5 See ch 1, section IV.A.

platform or infrastructure, and the obligations arising from the recognition of certain constitutional rights are not applicable to such activities. Theoretically, this situation might be changed, for example, by prohibiting gatekeepers from deciding on the removal or blocking of content (as happened with regard to Internet access providers through the doctrine of net neutrality), thereby forcing them to play the neutral and passive role of a common carrier.6 Gatekeepers are not editors in the limited sense of the term, so it is hard to hold them directly liable for any harmful content published by their users. This also means that they are not responsible for guaranteeing the diversity of opinions. However, they are engaged in a certain kind of 'editing'. While they do not produce or commission the production of any content, gatekeepers commonly select and organise available (user) content and decide on what to show to their users. Pursuant to the European legal approach, it is not entirely unlikely that certain means and elements of media regulation might also re-appear in the regulation of gatekeepers. Gatekeepers cannot be expected to assess the legal status of a piece of content in a manner or with a degree of care that is similar to that of a public authority or court, meaning that their decision and decision-making process is necessarily faster, simpler, less thorough and of a lower quality. Gatekeepers may not be expected to act as actual editors by reviewing each and every piece of (user-generated) content before it is published in their system (and in any case, such an expectation would not serve the interests of guaranteeing free speech). While compliance with such an expectation might be feasible from a technical perspective, it would slow down the process of public communication considerably and impose a disproportionate burden on service providers (possibly an impossible one for large platforms). The means of promoting the diversity of available content (eg the requirement to provide balanced news coverage, to guarantee a right of reply and to finance public service media content) are difficult to apply in the context of gatekeepers and online services in general, even though some components and regulatory objectives might be worth preserving. Furthermore, minor platforms may actually be capable of promoting diversity and featuring public interest news. However, diversity, in this context, does not refer to the diversity of the sources from which content is obtained but to the diversity of the content presented to users (in the same sense as it is used with regard to legacy media), which means ensuring that users are able to come across a diverse range of content when using search engines or social media platforms (exposure diversity).7 It also seems possible that gatekeepers (search engines and social media platforms) could be required to display different opinions, and present a certain amount of public service media content, to individual users of their service.8 This would mean that an activity, akin to the one already pursued in the legacy media sector by the outlets of the state-funded public service media, would be endorsed by regulatory means.
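The idea of exposure diversity can be made more concrete with a small illustration. The following sketch is purely hypothetical: the item fields, the quota and the function names are my own assumptions for the sake of illustration, and it does not describe any existing platform's ranking system or any regulatory instrument. It merely shows one way a ranked feed could be adjusted so that a minimum share of the items shown to a user are flagged as public-interest content.

```python
# A minimal sketch (not any platform's actual algorithm) of an 'exposure
# diversity' adjustment: a relevance-ranked feed is modified so that at least
# a minimum share of the displayed items are flagged as public-interest
# content. All names and the quota value are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Item:
    item_id: str
    relevance: float        # the platform's own ranking score
    public_interest: bool   # flagged as public-interest / public service content

def diversify_feed(ranked_items, feed_size=20, min_public_interest_share=0.2):
    """Return a feed of `feed_size` items containing at least the required
    share of public-interest items, otherwise preserving the relevance order."""
    quota = int(feed_size * min_public_interest_share)
    by_relevance = sorted(ranked_items, key=lambda i: i.relevance, reverse=True)

    feed = by_relevance[:feed_size]
    missing = quota - sum(1 for i in feed if i.public_interest)
    if missing > 0:
        # Swap the lowest-ranked non-public-interest items in the feed for the
        # highest-ranked public-interest items that did not make the cut.
        candidates = [i for i in by_relevance[feed_size:] if i.public_interest][:missing]
        removable = [i for i in reversed(feed) if not i.public_interest][:len(candidates)]
        feed = [i for i in feed if i not in removable] + candidates
    return feed
```

A real scheme would, of course, have to define what counts as public-interest content and who makes that designation, which is precisely the kind of question that regulation would need to settle.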
A right of reply could be guaranteed in an online environment in a less time-consuming or complicated (ie more efficient) manner than is used in a broadcast or
6 W Mike Jayne, 'Fairness Doctrine 2.0: The Ever-Expanding Definition of Neutrality Under the First Amendment' (2018) 16 First Amendment Law Review 468, 499. 7 Sarah Eskens, Natali Helberger and Judith Moeller, 'Challenged by News Personalisation: Five Perspectives on the Right to Receive Information' (2017) 17(2) Journal of Media Law 259, 272. 8 Jayne (n 6) 499.

print media context. Guaranteeing such a right to respond to harmful content could also be considered a step toward guaranteeing diversity.9 Since diversity is a fundamental concept and feature of the Internet due to the large number of users and content providers, emphasis may be placed on featuring public interest content and presenting different opinions. While media regulation as such seems unsuitable for regulating gatekeepers, certain regulatory components may still be applied in the context of gatekeepers.

B.  The Law of Electronic Commerce

In the current legal corpus pertaining to the activities of gatekeepers, the liability exemption rules (e-commerce rules in Europe and Section 230 CDA in the US) are generally considered more important than the general rules that define the limits of the freedom of the press. Even though the doctrines relating to the latter are quite well-wrought and detailed, both in Europe and overseas, they are hardly applied in practice because gatekeepers decide for themselves whether or not to remove a given piece of content or allow it to remain available.10 Currently, it seems that the commonly applied notice and takedown system will remain the cornerstone of regulating gatekeepers in Europe. The EU seeks to reinforce this system, which is mostly maintained and operated by the gatekeepers themselves, and it defines procedural requirements and safeguards with regard to this system while keeping its core (ie the power of gatekeepers to decide on content) unchanged. The expectations raised regarding gatekeepers, on the basis of which they are obliged to remove violating content upon receipt of a corresponding notice, may be refined and differentiated even further. This is exactly what Christina Angelopoulos and Stijn Smet propose, with a view to introducing measures that could provide a higher degree of protection for free speech in certain situations, as opposed to the uniform obligation to remove the infringing content (whatever the infringement is), and guarantee more effective action against dangerous content. They suggest, for example, that defamatory content should not be removed automatically but that the content provider (user) should be given some time to verify the legality of this content; this system is known as the 'notice-wait-and-takedown' system.11 They would also introduce more stringent rules with regard to hate speech: the publishers of such content would not only be required to remove the harmful content, but they could also have their user rights or platform access privileges suspended.12

C.  The Law of Contract

Contracts between gatekeepers and their users are used as a means of limiting the users' right to free speech, as their terms and conditions are determined unilaterally

226  The Future of Regulating Gatekeepers by ­gatekeepers.13 Under such contracts, users commonly waive all rights and remedies against the gatekeeper, should it remove or hide from other users a piece of content published by the user. This means that a gatekeeper does not breach the contract if it removes a piece of content published by a user, even if it is protected by free speech doctrines. In these circumstances, considering the relationship between a gatekeeper and its user, there is no need to introduce any further rules or arguments to enable a gatekeeper to interfere with its user’s content (eg stipulating that any removal or ranking constitutes the opinion of the platform and, as such, is protected by law). Such contracts govern the legal relationships between gatekeepers and their users, but they do not take into account the interests of society (see media regulation), and the government is reduced to a forum that could decide on contract-related issues only. A user contract, in this sense, is a legal statement that affirms the legitimacy and lawfulness of private regulation (private censorship), and users may not access the platform without accepting the terms of the contract. However, a user contract, as a means of regulating the relationship between the contracting parties, may also be used to limit the discretionary powers of gatekeepers, thereby affording a larger degree of freedom to users to speak as they wish. It seems unlikely that a platform would introduce such terms and conditions voluntarily, so certain means, already applied in the field of consumer protection, might be needed14 (eg specifying certain mandatory elements of user contracts) that could not be ignored by gatekeepers. Legislation could require that contract terms must respect the users’ freedom of speech, that clear content-related rules must be established, proper procedures must be put in place to demonstrate the lawfulness of a given piece of content, appeals must be permitted, etc. The EU’s goal of improving the efficiency of the notice and takedown procedure while introducing appropriate safeguards and respecting the freedom of speech seems to be consistent with a consumer protection-based approach that focuses on the terms of user contracts. It would also be possible to introduce the option of filing class actions, thereby having court judgments handed down in a lawsuit filed by individual users, or a group of users (concerning, for example, general contract terms restricting free speech), that would also apply to all similar contracts concluded with other users. The outcome of the two recent German cases show how contract law can help in protecting constitutional free speech rights between private entities, and also where the possible limits of the law of contracts are. In Themel v Facebook Ireland, Inc,15 the court held that the deletion of the plaintiff’s comment by Facebook constituted a breach of contract, as the platform was required to respect her right to freedom of ­expression under the German Constitution; whereas in User v Facebook Ireland, Inc,16 the court arrived at the completely opposite conclusion, and rejected the plaintiff’s argument that her right to freedom of expression had been infringed by the platform.17 13 Tutt (n 1) 282. 14 Kevin Park, ‘Facebook Used Takedown and It Was Super Effective! Finding a Framework for Protecting User Rights of Expression on Social Networking Sites’ (2012–2013) 68 New York University Annual Survey of American Law 891, 902. 
15 Themel v Facebook Ireland, Inc, 18 W 1294/18, 24 August 2018, Higher Regional Court Munich, Germany. See the judgment in German at https://openjur.de/u/2111165.html. 16 User v Facebook Ireland, Inc, 1O 71/18, 28 August 2018, Regional Court in Heidelberg, Germany. See the judgment in German at https://globalfreedomofexpression.columbia.edu/wp-content/uploads/2018/10/User-vFacebook-Ireland.pdf. 17 See ch 5, section IV.B.

With respect to the three main goals of media regulation set out in section II.A, the terms commonly applied in user contracts seem to be increasingly further away from the first goal (ie no censorship), but contract terms could in principle also contribute to the achievement of that aim. The sanctioning of unlawful content could also facilitate the achievement of the second goal (ie limitation of harm and dangers caused through speech), as the sanctions to which a user may be subject in the event of publishing unlawful content could also be incorporated in user contracts. Such provisions are already included in some contracts, but it is a cause for concern that, as explained, this form of interference is possible not only with regard to unlawful content but also with regard to any content a gatekeeper considers undesirable. The limitation of a gatekeeper's powers of interference and the clarification of contractual sanctions are other important safeguards that would be needed in the context of determining the details of the legal relationship between a gatekeeper and its users. However, it seems unlikely that the achievement of the third goal of media regulation (promoting content diversity and the plurality of opinions) could be facilitated through standardised user contracts. This is because a contract may regulate the relationship between a gatekeeper and an individual user only, and such individual contracts, even if concluded in bulk and with identical terms and conditions, could not protect the interests of diversity and plurality, as those could only be served by the totality of all content published by the entire body of users.

D.  The Law of Public Forums

Despite all initial hopes and expectations, the Internet has not developed into a fully open public forum, where anyone can present his or her opinion without limitation. A significant portion of the Internet's infrastructure and services comprises private property, and its functioning and openness may be regulated by its owners.18 Traditional free speech doctrine seeks to protect individual freedom against government influence and interference, and it does not see such property-based limitations as cause for constitutional concern. However, the influence of major online gatekeepers over the public sphere seems to be eroding the practical implementation of free speech. In the offline world, it is conceivable that property rights could be limited as a safeguard for free speech. If a piece of private land (eg shopping malls, airports, etc) serves as an important venue for community discussions because, for example, many people spend time there on a regular basis and may be exposed to various opinions and information while staying on that land, the property rights of the owner might be restricted in certain limited circumstances, and he or she might be required to respect the right of others to speak freely.19 President Trump's case with Twitter made it clear that a social media platform managed by a political actor or government body might be considered a public forum.

18 Dawn C Nunziato, ‘The Death of the Public Forum in Cyberspace’ (2005) 20 Berkeley Technology Law Journal 1115, 1170–71. 19 Jacob Rowbottom, ‘Property and Participation: A Right of Access for Expressive Activities’ (2005) 2 ­European Human Rights Law Review 186, 202.

228  The Future of Regulating Gatekeepers However, this approach does not apply to the entire platform, and it requires the platform’s user (ie the politician or public body), but not the platform itself, to respect and guarantee the right of other users to speak freely and access information.20 It has even been argued that governmental social media accounts should be subject to special rules designed to guarantee their availability to people. At present, however, the possibility of banning users who harass others and abuse their access from a platform does not seem to raise any concern.21 Thus, platforms are not considered public forums by regulation and practice in general, and they may not be sued for restricting the speech of their users. However, this situation might change. The doorway to changing legal doctrines was opened slightly by the judgment in Packingham, as it recognised the importance of social media platforms for public communication.22 Further steps may be taken along this path. Trevor Puetz, for example, relies on the US doctrine of public forums when he argues that Facebook is a quasi-municipality.23 A platform is unlike actual municipalities or states, in the sense that it does not provide public services (water supply, trash removal, etc) to its users and does not have a monopoly by law, but it is, nonetheless, the exclusive gatekeeper to an important means of communication. Facebook seeks to create strong ties to its users and to exclude its competitors, with a view to ensuring that its users spend more and more time on the platform and return to it more and more frequently. Naturally, other platforms strive to achieve the same goals, and, if successful, such platforms may become fundamental and indispensable tools for their users (and indeed, a large number of users may even develop a psychological dependence on them). These factors both reinforce the public forum-like nature of such platforms and weaken the claim that a platform should not be considered a public forum because users are free to go elsewhere and are bound to find another online venue where they can express their opinions freely and without restriction. This freedom is in fact limited in cases where a user is struggling with a dependency or knows that he or she may not reach certain people anywhere else (and Facebook is also actively encouraging this sense of community).24 Freedom of speech does not include the right to an environment that is suitable for achieving a desired impact by speaking; in other words, every person has a right to speak, but no person may demand that he or she actually make an impact or have others ­actually listen to him or her. However, safeguards are in place to protect free speech on public forums, because a speech delivered on a public forum can reach a wider audience than one delivered in a private apartment. Gatekeepers thus possess considerable influence and control over the freedom of their users, which makes their situation similar to cases in which a court restricted the powers of municipalities based on the doctrine of public forums.25 Gatekeepers perform important public functions, and the primary 20 Knight First Amendment Institute at Columbia University v Trump no 17 Civ 5205 (NRB) (SDNY 23 May 2018). 21 Lyrissa Lidsky, ‘Public Forum 2.0.’ (2011) 91 Boston University Law Review 1975, 2028. 22 Packingham v North Carolina 137 SCt 1730 (2017). 23 Trevor Puetz, ‘Facebook: The New Town Square’ (2014) 44 Southwestern Law Review 385. 24 ibid 401–04. 
25 Benjamin F Jackson, ‘Censorship and Freedom of Expression in the Age of Facebook’ (2014) 44 New Mexico Law Review 121, 145.

goal of their activities is to become a forum of public communication. This is the very purpose for which they were created, and this is why they are completely open towards society.26 If this is so, why are they not considered public forums in a legal sense? If the degree of influence of a platform makes it an indispensable part of the public sphere (meaning that not having access to that platform constitutes a disadvantage in the marketplace of ideas), government intervention may theoretically be justified. If a legislature decides to restrict property rights related to a platform, it may do so with a view to taking action against private censorship by the platform and to guaranteeing equal access for users. In such a situation, the platform may still take action concerning pieces of content that harm or jeopardise others; if it is considered a public forum then it may even be required to do so. However, it must remain neutral regarding content that is not illegal, and it may not delete or block such content on the basis of its own privately crafted rules (which are not necessarily compatible with legal restrictions). The doctrine of public forums can only be used to resolve issues relating to a platform's 'editorial' activity (ie the practice by platforms of influencing the public sphere by selecting content) to a limited extent. On the one hand, the service (eg Facebook's news feed) would be put in an impossible situation if it were required to grant equal access to all to the platform, and the resulting damage to the public sphere would exceed the benefits achieved. On the other hand, if a platform's right to select and rank pieces of content is not restricted then it remains capable of unduly influencing the public sphere, and this capacity would be inconsistent with its standing as a public forum. A possible compromise between the two extremes would be to require platforms to display certain pieces of content pertaining to public affairs in a prominent location. However, such a requirement would force platforms into an editorial role (which is different from the one they assume voluntarily), a state of affairs that is inconsistent with the nature of a neutral public forum.

E.  The Chances of Finding a Comprehensive Regulatory Solution

It does not seem possible to reconcile the relationship between online gatekeepers and the public within a single field of law. While analogies are useful, they fail to provide comprehensive guidance for future regulation. Each and every legal field and doctrine mentioned so far includes elements that could be incorporated in the regulation of online platforms. It should also be noted that some legal fields, not mentioned here due to their indirect impact on content-related affairs, are in practice capable of regulating some aspects of gatekeepers' activities far more effectively. Competition law (ie the means of taking action against the holding of excessive market share, concentration of ownership and the abuse of a dominant market position), copyright law and data protection law can be important and useful means of regulating gatekeepers. A government may also use other means to regulate the public sphere. For example, it could impose taxes on advertising revenues in order to set up a state aid scheme for valuable content, should the



26 ibid 146.

market share of a service provider jeopardise the operation of other media outlets that perform important public functions.27 A government may also use public funds to finance public service media, and the theoretical underpinnings of providing such funding (as drawn up with regard to television and radio providers) may be adapted to the online environment in some regards. Even though the emergence of a social media platform that would fulfil the functions of the public service media seems unlikely, governments may play an important role in helping to expand the available selection of online content through the funding of privately produced public service media content.

III.  THE POSSIBLE MODELS OF FUTURE EUROPEAN REGULATION

The following sections outline four possible regulatory models. Each model may have a number of variations, and the models may also be implemented in combination with each other. The model of traditional media regulation is not taken into consideration as a feasible theoretical alternative as it assumes a kind of control over pieces of content that makes the mere publication of illegal content (ie making it available on a platform, or uploading by a user) punishable in and of itself, as is the case in the context of content broadcast by radio or television. This does not seem to be an acceptable approach. In their present mode of operation, it is not the platforms that decide on publishing a given piece of content. For this reason, any sanction triggered by the act of publication might compel social media platforms to implement preliminary (pre-publication) monitoring to prevent such violations. Such a development would raise concerns because it would bring about fundamental changes in their functioning, and also because in the case of popular platforms the implementation of such preliminary monitoring seems more or less impossible at this point, even with the use of various algorithms, due to the volume and diversity of user-generated content. The four models are presented in the order of increasing burdens on gatekeepers (bearing in mind that, by way of comparison, applying the rules of current media regulation would be even more burdensome than the last-presented model). Current European regulation is akin to the second model, considering that the various concepts presented by individual European bodies regarding the future regulation of social media platforms propose only limited changes to the existing system and suggest extending the platforms' liabilities and strengthening government supervision while preserving the foundations of current regulation.

A.  Transferring the US Model to Europe

Transposing European legal concepts into the US legal system would most certainly be inconsistent with the First Amendment to the US Constitution. Furthermore, it also seems unlikely, to say the least, that the US regulatory measures developed for

27 Leighton Andrews, 'We Need European Regulation of Facebook and Google' OpenDemocracyUK (12 December 2016) at https://www.opendemocracy.net/uk/leighton-andrews/we-need-european-regulation-of-facebook-and-google.

legacy media (such as the 'fairness doctrine' and 'must carry' requirements)28 could be applied with regard to gatekeepers.29 However, a provision similar to that established by Section 230 CDA might theoretically be implemented in Europe. It might then be worth assessing the possible models to achieve the threefold objectives of media regulation (see section II.A), that is, the prohibition of censorship, the prevention of damage caused by speech and the promotion of diversity of content. Under US law, online platforms are mostly exempted from (but not fully immune to) liability, meaning that the issue of government censorship is settled in a reassuring manner, since platform operators are not incentivised to comply with the government's attempts to influence them, whether through the promise of benefits in the event that they acquiesce to such demands, or the threat of sanctions if they refuse. However, this form of regulation does not prevent private censorship, as such censorship is not considered a threat to free speech by constitutional doctrine. Furthermore, it does not make it possible to seek legal remedy for any damage that may be caused, at least not by holding gatekeepers liable for such damage. However, it does permit interference through private regulation, without laying down any legal safeguard or guarantee concerning the activities of gatekeepers. As for the diversity of speech, this regulatory model does not permit legal interference, but neither does it prohibit platforms from engaging in 'editorial' interference (that may promote or suppress diversity, and expand or limit the scope of public discourse). In conclusion, it seems that this solution provides adequate safeguards against government censorship (ie unjustified interference with free speech) and enables damages to be managed through private regulation, without legal safeguards and therefore jeopardising the freedom of speech. However, it leaves it to the platforms to decide whether to handle the issue of diversity, albeit without imposing an obligation to promote diversity.

B.  Preserving the Status Quo through Limited Regulatory Intervention (Weaker Co-regulation Model)

Recent developments indicate that the EU, largely due to pressure to combat 'fake news', intends to establish a regulatory regime for platforms based on the current notice and takedown procedure established by the E-Commerce Directive.30 The recommendations and communications issued by the EU, as well as the new AVMS Directive,31 aim to establish an appropriate legal and regulatory framework for the decision-making powers and, occasionally, decision-making obligation of platforms under the supervision of government authorities, as a means of tackling the challenges that may arise.

28 See ch 1, section IV.H. 29 Jackson (n 25) 165. 30 See ch 5, section III.B. 31 Directive (EU) 2018/1808 of the European Parliament and of the Council of 14 November 2018, amending Directive 2010/13/EU on the coordination of certain provisions laid down by law, regulation or administrative action in Member States concerning the provision of audiovisual media services (Audiovisual Media Services Directive) in view of changing market realities.

232  The Future of Regulating Gatekeepers The 2018 Recommendation of the Committee of Ministers of the Council of Europe also raises the issue of regulating gatekeepers,32 calling on them to respect freedom of speech, to ensure that there is a clear (legal or ethical) basis for their interference with content,33 and to guarantee transparency and accountability,34 with special regard to the application of their own content-related policies, so that they also comply with the principle of non-discrimination.35 It is also recommended that they guarantee the right to effective remedy and dispute resolution to all users (whether or not they are concerned about the protection of their freedom of speech or the possible violation of their rights by the free speech of others).36 It was even suggested by the EU Commission that the deadline for removing illegal content be shortened: pieces of especially dangerous content, for example speech promoting terrorism, could be required to be removed within one hour of receipt of a notice.37 A Code of Practice laying down the obligations undertaken voluntarily by platform providers was agreed in September 2018.38 On top of this, Facebook, Google, Mozilla and Twitter presented individual roadmaps for implementing the industry code of practice for combating online disinformation. The four roadmaps, accompanied by a calendar for implementing various measures during elections for the European Parliament in May 2019, were presented on 16 October 2018. Most of the measures were aimed at increasing the transparency of political advertisements, introducing fact-checking instruments and equipping society as a whole to combat online disinformation successfully.39 The Code of Practice was heavily criticised by civil organisations and legacy media for failing to set any measurable objectives.40 However, the proposals of the EU, the Recommendation of the Council of Europe and the voluntary undertakings of the market actors concerned would leave the substance of the notice and takedown procedure established by the EU’s E-commerce Directive unchanged. It is recommended, even in most EU documents, that platforms should use voluntary means, beyond state law. Jacob Rowbottom argues that this strategy aims to handle the problem by implementing a regulatory framework in which the regulatory body does not define its content-related expectations precisely nor enforce such expectations consistently. Instead, platforms are allowed to adopt their own content-related

32 Recommendation CM/Rec(2018)2 of the Committee of Ministers to member States on the roles and responsibilities of internet intermediaries. 33 ibid point 2.1.3. 34 ibid point 2.2. 35 ibid point 2.3. 36 ibid point 2.5. 37 'Google, Facebook, Twitter Face EU Fines over Extremist Posts' BBC (12 September 2018) at https://www.bbc.com/news/technology-45495544. 38 EU Code of Practice on Disinformation (2018) at https://ec.europa.eu/digital-single-market/en/news/code-practice-disinformation. 39 'Roadmaps to Implement the Code of Practice on Disinformation' (16 October 2018) at https://ec.europa.eu/digital-single-market/en/news/roadmaps-implement-code-practice-disinformation. See also the Joint Communication to the European Parliament, the European Council, the Council, the European Economic and Social Committee and the Committee of the Regions: Action Plan against Disinformation (JOIN(2018) 36 final) at https://ec.europa.eu/digital-single-market/en/news/action-plan-against-disinformation. 40 Natasha Lomas, 'Tech and Ad Giants Sign up to Europe's First Weak Bite at "Fake News"' TechCrunch (26 September 2018) at https://techcrunch.com/2018/09/26/tech-and-ad-giants-sign-up-to-europes-first-weak-bite-at-fake-news/?guccounter=1; European Broadcasting Union, 'Code of Practice on Disinformation Fails to Tackle the Scourge of Fake News' (16 October 2018) at https://www.ebu.ch/news/2018/10/code-of-practice-on-disinformation-fails-to-tackle-the-scourge-of-fake-news.

standards and processes, while the regulator supervises and monitors the internal standards and processes in order to make sure that the necessary benchmarks are met.41 Before assessing this approach, it is worth recalling the peculiarities and current lack of balance in the regulation of platforms. On the one hand, a platform's liability for illegal user-generated content is limited (to significantly differing degrees in the US and in Europe), while it is not prohibited by law from interfering on its own initiative, meaning that it is favoured in both regulatory aspects. On the other hand, all stakeholders (users expressing their opinions, users harmed by some opinions, persons interested in public affairs, and the public in general) are in a less favourable position, as they are not afforded such extensive freedom from interference by the platform itself and protection against other users. Rebecca Tushnet also argues, in the context of US law, that since a platform's liability is limited, its potential for interfering with user-generated content should also be limited.42 The regulation of platforms may serve the interests of free speech if it prevents platforms from being able to step between a speaker and his or her audience without limitation.43 Other US scholars have also suggested considering measures such as regulating platforms, requiring procedural safeguards and guarantees, enforcing neutrality when deciding on user-generated content and increasing the transparency of the decision-making process.44 Furthermore, social media platforms may also be required to clarify content-related requirements in their internal policies,45 and to permit government regulation to determine at least some of the provisions of their end-user contracts, thereby preventing such platforms from stipulating unilateral and possibly abusive obligations. Similar obligations may also be introduced with regard to search engine providers in addition to social media platforms.46 Mark Bunting argues that legislatures may establish statutory provisions that require online platforms to clarify how they intend to handle harmful or illegal content, while legislatures may also describe a fair and just approach toward striking a balance between fundamental rights to free speech, privacy, dignity, non-discrimination, intellectual property and free enterprise. Bunting also argues that governments may introduce a wider statutory framework for holding platforms accountable for online content. Such a framework would be aimed at clarifying how consumers expect intermediaries to handle harmful or illegal content, and it would also ensure that the regulation of intermediaries, in the context of online content, is proportionate, accountable, and strikes a fair and responsible balance between the competing rights involved. The regulatory framework should also recognise the differences between intermediaries of different sizes and with
41 Jacob Rowbottom, 'If Digital Intermediaries Are to be Regulated, How Should It Be Done?' Media Policy Project blog (16 July 2018) at http://blogs.lse.ac.uk/mediapolicyproject/2018/07/16/if-digital-intermediaries-are-to-be-regulated-how-should-it-be-done/.
42 Rebecca Tushnet, ‘Power Without Responsibility: Intermediaries and the First Amendment’ (2008) 76 George Washington Law Review 986, 1009. 43 ibid 1016. 44 Tutt (n 1) 294. 45 Park (n 14) 945. 46 Jennifer A Chandler, ‘A Right to Reach an Audience: An Approach to Intermediary Bias on the Internet’ (2006–2007) 35 Hofstra Law Review 1095, 1115.

234  The Future of Regulating Gatekeepers varying business models. The proposed framework would be accompanied by an oversight body (possibly an industry-led body, if the intermediaries, with sectoral and government support, are able to set up an independent organisation) that would be capable of passing binding decisions in cooperation with an underlying regulator, which would also form part of such a co-regulatory model. As an alternative option, it could be a regulatory body (either an already existing body or a newly created one) that is financially supported by the industry.47 Lorna Woods and Will Perrin suggest that legislatures should set aside any and all approaches toward content regulation that regard online platforms as publishers, and instead consider social media as a certain kind of public forum (such as a workplace or a public building). In a workplace, the employer is required to protect the health, safety and welfare of its employees, and to guarantee the safety of materials and tools used by them ‘so far as is reasonably practicable’.48 In a building, visitors are protected by the requirement that the occupier of the building must ensure ‘that the visitor will be reasonably safe in using the premises for the purposes for which he is invited or permitted by the occupier to be there’.49 Woods and Perrin describe a co-regulatory scheme where the regulator, acting in cooperation with civil society and online industry actors, designs and implements a process aimed at reducing the harmful effects of social media, which would include the following steps: monitoring and assessing any harm caused; defining plans for preventing significant damage; implementing such plans; and conducting further measurements and developments. The greatest challenge faced by regulators is that the measures taken should be based on a risk analysis of damage, and not on proving such damage in a convincing manner. However, an authority should also have effective means to take action against gatekeepers, such as the power to impose fines.50 The relationship between a procedural reinforcement of the notice and takedown procedure and enhanced government supervision, on the one hand, and the threefold objective of media regulation, on the other hand, may be described as follows: (a) It does not eliminate but at least limits the possibility of government censorship, as the government (as any other user) may request the removal of illegal content only. However, the illegal nature of a piece of content needs to be assessed on the basis of constitutional standards of free speech, and this requirement may pose complex questions for platforms when interpreting the law. Nonetheless, formal procedural guarantees and increased transparency in the decision-making processes of social media platforms limit, but do not eliminate, the possibility of private censorship. (b) In cases determined by government legislation, users may still take action to deal with any harm or damage caused by free speech, such as by requesting a platform

47 Mark Bunting, ‘Keeping Consumers Safe Online’ Communications Chambers (2018) at http://static1.1. sqspcdn.com/static/f/1321365/27941308/1530714958163/Sky+Platform+Accountability+FINAL+020718+ 2200.pdf?token=LLX3se6W1bv%2BQzlBrc1R4KmZz0M%3D. 48 UK Health and Safety at Work etc Act 1974, s 2(1). 49 UK Occupiers’ Liability Act 1957, s 2(2). 50 Lorna Woods and William Perrin, ‘The Internet: To Regulate or Not to Regulate? Written Evidence’ (IRN0047, 2018) at http://data.parliament.uk/writtenevidence/committeeevidence.svc/evidencedocument/ communications-committee/the-internet-to-regulate-or-not-to-regulate/written/82684.html.

to remove illegal content, and platforms may also introduce additional restrictions (ie they may also remove lawful content that is inconsistent with their internal policies). (c) To promote diversity, platforms would be entrusted with the task of selecting what to show to a user from the pool of available content.
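To make the procedural differentiation discussed in this section more tangible, the following sketch illustrates, in purely hypothetical terms, how a platform's notice-handling rules might distinguish between categories of notified content. It borrows loosely from the proposals mentioned in this chapter (the Commission's suggested one-hour removal deadline for content promoting terrorism, and the 'notice-wait-and-takedown' idea for defamation), but the category names, deadlines and sanctions are illustrative assumptions only, not a statement of EU law or of any platform's actual policy.

```python
# A minimal sketch, assuming invented categories, deadlines and sanctions,
# of a differentiated notice-and-action rulebook. It is an illustration of
# the ideas discussed in this chapter, not a description of any legal regime.
from datetime import timedelta

NOTICE_RULES = {
    # category:      (removal deadline,    wait for counter-notice, suspend account)
    "terrorism":     (timedelta(hours=1),  False,                   True),
    "hate_speech":   (timedelta(hours=24), False,                   True),
    "defamation":    (timedelta(days=7),   True,                    False),
    "other_illegal": (timedelta(days=2),   True,                    False),
}

def handle_notice(category, author_filed_counter_notice=False):
    """Return the action a platform might take on an item notified to it."""
    deadline, wait, suspend = NOTICE_RULES[category]
    if wait and not author_filed_counter_notice:
        # 'Notice-wait-and-takedown': give the author time to defend the
        # content before removal; a contested notice would go to a court.
        return {"action": "notify_author_and_wait", "decide_by": deadline}
    action = {"action": "remove", "decide_by": deadline}
    if suspend:
        action["additional_sanction"] = "suspend_account"
    return action
```

Any such scheme would, of course, only be defensible if it were combined with the procedural safeguards discussed above: transparency, reasoned decisions and an effective avenue of appeal.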

C.  Strengthening Government Regulation (Stronger Co-regulation Model)

In September 2018, leading programme and Internet access providers in the UK requested that the Government set up independent regulatory oversight concerning social media content.51 Legacy media outlets may find it difficult to accept that social media platforms, their indirect market competitors, are permitted to operate in a significantly more lenient legal environment. Strong co-regulation on the basis of enhanced cooperation between private and government actors may involve numerous aspects of the operation of social media platforms. Such aspects include enhanced government control over content regulations implemented by platforms, recognising platforms as public forums, introducing various requirements to promote diversity and introducing government oversight concerning the removal of user-generated content. In this model, platforms remain free, at least in part, to adopt content-related rules for their users, but they may be required to observe certain government-defined principles such as clarity of regulation and respect for the freedom of speech. Even more powerful interference could also be possible, and in such situations, content-related provisions may be specified by government legislation. Government legislation may also serve to provide assistance to platforms. For example, it could provide that illegal content may not be removed, even upon receipt of a user report, without a binding court order or authority request.52 This would mean that a platform would not be tasked with deciding whether or not a piece of content is legal, although it would also delay the proceedings, thereby making it much less attractive. This model would also enable the application of certain known forms of media regulation in the context of communication conducted on a platform. One obvious possibility this opens up is that of introducing a right of reply with regard to search engines' rankings and social media news items.53 According to Frank Pasquale, a link to the opposing opinion could be displayed upon request by a user, and users may learn about the other side of a story by clicking on that link.54 This solution would not limit free speech in a disproportionate manner, and it would not entail the introduction of a new ban on speech, while it would promote the diversity of opinions and help users to obtain information from a diverse range of sources.

51 Natasha Lomas, ‘UK Media Giants Call for Independent Oversight of Facebook, YouTube, Twitter’ TechCrunch (3 September 2018) at https://techcrunch.com/2018/09/03/uk-media-giants-call-for-independentoversight-of-facebook-youtube-twitter/. 52 Chandler (n 46) 1115–18. 53 See ch 1, section IV.H. 54 Pasquale (n 9) 128; Frank Pasquale, ‘Asterisk Revisited: Debating a Right of Reply on Search Results’ (2008) 3 Journal of Business & Technology Law 61.
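The right of reply mechanism suggested by Pasquale can be illustrated with a minimal sketch. The data model below is an assumption made for illustration only (no search engine or platform exposes such an interface); it merely shows how a reply could be attached, on request, to a search result or news feed item and then displayed alongside it as a link to the opposing view.

```python
# A purely illustrative sketch of an online right of reply: a reply URL is
# attached, on request, to a content item and rendered next to it. The class
# and field names are assumptions, not any platform's actual API.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ContentItem:
    url: str
    title: str
    replies: List[str] = field(default_factory=list)  # URLs of opposing opinions

    def attach_reply(self, reply_url: str) -> None:
        """Record a reply requested by a person affected by the item."""
        if reply_url not in self.replies:
            self.replies.append(reply_url)

    def render(self) -> str:
        """Render the item with a 'see the other side' link, if one exists."""
        line = f"{self.title} ({self.url})"
        if self.replies:
            line += " - reply: " + ", ".join(self.replies)
        return line
```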

236  The Future of Regulating Gatekeepers Naturally, subjecting the content-related decisions of platforms to external and independent review would be the most challenging task, even if such a review were not necessarily carried out by a government body, such as an authority or a court. Such a review might also be implemented under a self-regulatory scheme, which could be similar to the (moderately successful) one used in the context of the press, where industry actors set up an independent body of experts that applies its own procedures and ­sanctions.55 However, such a body could also be set up as a government organ operating in a similar manner to a media authority that decides on traditional media content.56 At the same time, Rowbottom recalls that the decisions delegated to gatekeepers raise important questions in the context of free speech. For example, the risk of private censorship might arise, where the task of defining and enforcing content-related requirements is simply left to the platforms. Another danger could be that private undertakings act as both regulator and decision-maker, which could have a fundamental impact on the views and opinions that find their way to the public. Due to the risk of private censorship, the introduction of a devolved regulatory scheme like this does not appear to be a suitable solution in and of itself, but it may be appropriate when accompanied by other traditional forms of regulation or co-regulation. A regulatory body might play an important role in supervising the internal policies and practices of gatekeepers (as discussed above), and it might also serve as the final forum of appeal concerning individual decisions. Thus, users and content producers who were displeased with a platform’s decision might bring their case to the regulatory body. The involvement of an external forum of appeal might result in more transparent and thorough decisions in cases that involve a question of principle and require consistent application of the rules.57 Damian Tambini argues for the establishment of a new authority for the regulation of social media platforms. An authority of this type would serve as a permanent forum for public debate and discussion of the moral principles and duties of social media platforms and intermediaries. The regulator would be responsible for establishing the appropriate standards (fundamental policies that simplify procedures, are easy for average users to read and independent of the sector, and cover hate speech, harassment, protection of minors and other content-related issues), monitoring compliance with such standards and ruling on user complaints. The authority would also be vested with broad powers of sanction concerning justified complaints, and it could require the submission of transparency reports on the removal of content and the disclosure of the use of algorithms.58 The theoretical possibility of setting up an authority to supervise the editorial decisions of search engines and social media platforms has also been raised in the US, by some scholars who would support the creation of a ‘Federal Search Commission’,59

55 Emily Laidlaw, ‘Private Power, Public Interest: An Examination of Search Engine Accountability’ (2008) 17(1) International Journal of Law and Information Technology 113, 142–43.
56 ibid 143–44.
57 Rowbottom (n 41).
58 Damian Tambini, ‘What would a Social Media Regulator Actually Do?’ Inforrm blog (22 September 2018) at https://inforrm.org/2018/09/22/long-read-what-would-a-social-media-regulator-actually-do-damian-tambini/.
59 Oren Bracha and Frank Pasquale, ‘Federal Search Commission? Access, Fairness, and Accountability in the Law of Search’ (2007–2008) 93 Cornell Law Review 1149, 1180–82.

following the model of the FCC, arguing that search engines provide a public service that needs to be regulated just like traditional public services in order to guarantee adequate communication, access to information without interference and equal access to users. The revision of the AVMS Directive in 2018 has already opened the door to government regulation of social media platforms (video-sharing sites) in Europe,60 but the option of allowing an authority to review the decisions of social media platforms is still far from the mainstream of legal thinking. Amid all the debates caused by the spread of fake news, the spectre of an Orwellian Ministry of Truth has been raised to cause unease and worry about the protection of free speech.61 Nonetheless, the first steps to regulate platforms have already been taken. In Ireland, the Digital Safety Commissioner, an independent body, initiates the removal of content that might be harmful to children, and it may also file a court action if the platform concerned fails to take appropriate measures.62

The lack of neutrality toward content is a fundamental characteristic of the functioning of search engines and social media platforms. If search engines and social media platforms were forced to handle every piece of content equally, search engines would be eliminated (as they would be prevented from determining the relevance of pieces of content to user queries) and numerous social media platforms would be deprived of some of their distinctive features (eg Facebook would not be able to select the posts that are presented in the news feeds of its users). This lack of neutrality (ie ‘bias’) is not a dysfunction but the cornerstone of a platform's functioning.63 The elimination of this bias does not seem desirable and would not serve the interests of free speech, but acknowledging its existence does not necessarily imply approval for social media platforms to interfere with content in a discriminatory (non-neutral) manner, to manipulate users or to influence public discourse deliberately.

The implementation of a kind of forced equality of opinions, realised through government legislation, does not seem unacceptable in itself (at least from a European perspective), but suitable legal means of accomplishing this have yet to be identified. Even though the concept of ‘due impartiality in news coverage’, a requirement under traditional media regulation, could serve as an appropriate starting point for introducing a new regulatory scheme, a legal system may not require social media platforms to operate with the same degree of impartiality as a television or radio news programme, particularly because (i) a platform does not produce any content that would be relevant in this context and (ii) no platform is capable of overseeing the entire body of content generated by its users. Requiring a platform to attempt to present content in an entirely impartial manner would mean subjecting it to the same obligations as a television or radio editor, despite the above-mentioned characteristics. The regulation of electronic programme guides (EPGs) seems to be a more appropriate analogy, as it requires service providers to present certain important pieces of (public service media)

60 See ch 5, section III.B.
61 Cp George Orwell, Nineteen Eighty-Four (London, Secker and Warburg, 1949).
62 Kevin Doyle, ‘New Watchdog to Tackle Social Media Harassment’ Independent.ie (7 February 2017) at https://www.independent.ie/business/technology/news/new-watchdog-to-tackle-social-media-harassment-35429907.html.
63 Geoffrey A Manne and Joshua D Wright, ‘If Search Neutrality is the Answer, What’s the Question?’ (2012) Columbia Business Law Review 151, 238.

content in a distinctive manner; this obligation may be referred to by various names, such as ‘findability’, ‘due prominence’ or ‘exposure diversity’.64 In the UK, EPGs are regulated by the Communications Act 2003, under which the Office of Communications (Ofcom), the authority in charge of regulating the media, is tasked with drawing up a code giving guidance on the practices to be followed by EPG providers. This code must also include provisions on the prominence of public service channels.65 Under the code, Ofcom is to determine whether or not such appropriate prominence enables positive discrimination in favour of public service channels.66 EPGs may be regulated freely in line with the doctrines of media regulation. These doctrines emphasise values such as the need for government action to promote the quality and diversity of programme selection, or the enforcement of competition law principles that focus on ensuring fair competition and establishing a competitive market.67

However, requiring platforms to display certain content in a prominent manner (eg requiring a social media platform to display an appropriate ratio of news on public affairs produced by traditional news outlets and considered verified and reliable) raises additional problems.68 Such a provision seems to ignore user autonomy, as users may decide not to consume any news on public affairs, or to obtain information from a specified group of sources. Promoting certain pieces of content implies that other pieces of content are demoted (thereby limiting user autonomy), meaning that efforts taken to facilitate diversity may actually result in the limitation of diversity.69 Another option is to enable users to specify their preferences (see Sunstein's Daily Me),70 so that the content presented to each user is not limited to information a search engine or a social media platform considers relevant to or interesting for the user concerned.71

In conclusion, it seems that, in terms of the threefold objective of media regulation, a move toward increased government interference and strong co-regulation would result in the following outcomes:
(a) The likelihood of government censorship would not increase, provided that the government continues to observe the rules defining the limits of free speech, while the government would impose various obligations on social media platforms that promote and reinforce the constitutional guarantees and safeguards of free speech.
(b) The method of handling damage caused by free speech would be changed by increased government involvement: on the one hand, reducing the gap between the rights of platforms and the restrictions they are required to implement would make it more

64 Bart van der Sloot, ‘Walking a Thin Line: The Regulation of EPGs’ (2012) 3(2) JIPITEC – Journal of Intellectual Property, Information Technology and E-Commerce Law 138.
65 Communications Act 2003, s 310(2).
66 Ofcom Code of Practice on Electronic Programme Guides, at https://www.ofcom.org.uk/__data/assets/pdf_file/0031/19399/epgcode.pdf.
67 Sloot (n 64) 144.
68 Natali Helberger, Katharina Kleinen-von Königslöw and Rob van der Noll, ‘Regulating the New Information Intermediaries as Gatekeepers of Information Diversity’ (2015) 17(6) Info 50, 64.
69 ibid.
70 Cass R Sunstein, #Republic. Divided Democracy in the Age of Social Media (Princeton, NJ, Princeton University Press, 2017) ch 1. See ch 2, section II.B.
71 Helberger, Kleinen-von Königslöw and van der Noll (n 68) 65.

difficult to handle such damage in an excessive manner that could jeopardise free speech; while on the other hand, making the procedures of platforms more similar to other formal proceedings would sacrifice the very advantages of such procedures (ie speed and efficiency) for the sake of free speech.
(c) Unlike the previous options, this model focuses on the social responsibility of platforms, and as such it foresees that measures should be taken to increase the plurality of opinions and the diversity of content. While one might agree with taking such a direction, caution should be exercised not to introduce a regulation whose outcome, despite all noble intentions, might be quite the opposite of the one intended due to the excessive powers granted to government.

D.  Prohibiting Private Regulation

The elimination of the independent decision-making powers of social media platforms over content might bring about a regulatory scheme that is simpler and more predictable than any form of co-regulation. As Marcelo Thompson points out, Section 230 CDA, in the context of a platform's immunity regarding illegal content, is based on the assumption that a platform does not interfere with communications between users (unless it is required to remove illegal content, and apart from in important situations determined by law) because it has no incentive to do so.72 However, being relieved of such liability only reinforced the decision-making power of platforms, and they in fact have plenty of incentives to interfere with user communications (users' safe space, pursuit of a political agenda, etc). As a result, the limits of debate and public discourse on a platform are defined through private regulation, as opposed to government regulation, unless this power of social media platforms is restricted in some form.

A possible solution to this problem would be to consider social media platforms as public utilities or public forums to which a service provider is obliged to guarantee equal access.73 In such a situation, a social media platform could not restrict the freedom of communication, just as a phone service provider is prohibited from restricting the content of conversations carried on through its network.74 The idea of prohibiting private censorship might seem attractive at first, and it is not unprecedented even in the world of online gatekeepers; the concept of network neutrality in the context of Internet access providers serves quite similar purposes. (This chapter does not go into the details of regulating such service providers in the future. The reason for this is that most of the problems raised here, that is, eliminating government and private censorship and managing harms caused by speech, could be resolved by making network neutrality mandatory for ISPs. While ISPs would thereby also be prevented from interfering with content in pursuit of the goal of promoting content diversity, they seem unable to do much for that goal anyway, as their role and tasks are incompatible with such interference.)

72 Marcelo Thompson, ‘Beyond Gatekeeping: The Normative Responsibility of Internet Intermediaries’ (2015–16) 18 Vanderbilt Journal of Entertainment & Technology Law 783, 785.
73 As recommended by Bracha and Pasquale with regard to search engines: Bracha and Pasquale (n 59) 1208–1209.
74 Park (n 14) 945, 952–54.

One problem would be resolved but other issues might arise if platforms were prohibited by law from engaging in private censorship. Such a prohibition would immediately raise questions as to the platforms' obligation (in Europe) to remove illegal content from their systems. Requiring platforms to judge the illegality of a given piece of content would subject them to an obligation that is not easy to perform and which would encourage private censorship (ie the extensive removal of content). Under this model, it would be reasonable to relieve platforms of any liability for illegal content (similarly to the US approach), or to require platforms not to remove any content unless a given piece has been ruled illegal by a court (or other authority).75 A regulatory change in this direction, unlike the elimination of private censorship discussed above, would probably be welcomed by social media platforms, considering that they frequently argue that intermediary service providers should not be expected to examine and judge content; under this model, they would simply be required to forward any notice to a competent body, ideally a court or other independent and impartial body with the qualifications and legitimacy to make these kinds of decisions.76

The elimination of private censorship from two directions (both on the part of platforms and on the part of reporting users or ‘flaggers’) would be beneficial for free speech. However, it would also jeopardise the success of taking action against dangerous and harmful content, since it would eliminate the possibility of prompt and decisive action against such content, which is otherwise quite an attractive possibility for injured parties. This issue could be resolved by permitting platforms to take action upon receipt of user reports (ie to remove a piece of challenged content if it appears illegal prima facie), but doing so could also compromise the model and open the door to private censorship. Another possible solution would be to accelerate legal procedures aimed at determining whether a given piece of content is illegal or not. However, this does not seem feasible under the existing framework of courts and media authorities, meaning that new and fleet-footed bodies would have to be established, the outcome of which would be uncertain.

The prohibition of private censorship does not directly affect the issue of diversity. However, it may have a fundamental impact on the content of all services concerned by prohibiting platforms from distinguishing between pieces of content (following the analogy of network neutrality). For example, search engines would cease to exist as we know them, since determining the relevance of a piece of content to a search query is by definition not neutral but an editorial activity. The situation is less worrisome for social media platforms. While some services (such as Twitter) do not necessarily select and rank user content, or do so only upon request and according to settings established by the user, Facebook's news feed would undergo fundamental changes. However, this duty of neutrality could be avoided by applying the doctrine of public forums, besides which neutrality is an insufficient outcome from the perspective of traditional media regulation, which also seeks to promote the diversity of opinions and the provision of

75 Chandler (n 46) 1117.
76 Internet Service Providers’ Association (ISPA UK), ‘Written Evidence’ (IRN0108, 2018) at http://data.parliament.uk/writtenevidence/committeeevidence.svc/evidencedocument/communications-committee/the-internet-to-regulate-or-not-to-regulate/written/86044.html.

adequate information. As such, this model might include measures aimed at increasing the plurality of content, such as the above-mentioned right of reply and the promotion of public service news and content, and the performance of such obligations would be exempted from the platforms' duty of neutrality.

In the context of the threefold goals of regulation, this model:
(a) seems able to tackle the problems of government and private censorship in a reassuring manner;
(b) would use procedures already known from traditional media regulation to handle harmful speech (thereby slowing down and limiting the effectiveness of the process considerably); and
(c) would use additional regulatory measures to promote the diversity of opinions presented to the public.

IV. SUMMARY

The global communication space, namely the Internet, and nation states are ‘two oppositional and at the same time co-constituting forces in the context of internet governance’ – as Uta Kohl and Carrie Fox warn us – and ‘national consciousness exists in opposition to a global or local Order.’77 The states' monopoly over regulating the public sphere and determining the limits of free speech no longer exists.

Freedom of speech has undergone fundamental changes. While debates concerning the traditional issues of free speech, such as the enforcement of personality rights, the ban on hate speech, the freedom of advertising and the regulation of legacy media, remain relevant, the emergence and functioning of gatekeepers pose new challenges that necessitate a novel regulatory approach. At first sight, it might seem troubling that large online platforms decide on and resolve free speech issues with much more speed and efficiency than any court or government body, while their decisions have an impact on billions of users. The activities of gatekeepers are not exempt from government legislation, however. Gatekeepers are subject to statutory obligations, which are expected to become more and more far-reaching over time, although at present legal systems tend to delegate their power to enforce the limits of free speech to platforms, and such delegation is effected in a lawful and regulated manner. The legal relationship between gatekeepers and their users, a relationship that is not affected by the constitutional doctrines of free speech, is also governed by law through the contract concluded by and between the parties.

However, it does not seem possible to enforce the principles and doctrines of free speech in the online world with the same fervour as offline, while the efficiency and actual impact of enforcing the limits of speech offline might also appear questionable at times. Even so, this should not necessarily be considered an unwelcome development. The ‘law’ is constantly changing; the constitutional recognition of free speech itself is a fairly new and modern development, and, with the emergence of the Internet, the law of free speech

77 Uta Kohl and Carrie Fox, ‘Introduction’ in Uta Kohl (ed), The Net and the Nation State. Multidisciplinary Perspectives on Internet Governance (Cambridge, Cambridge University Press, 2017) 2.

is apparently entering a new era of development, the course of which is not absolutely clear at this point. Henry Sumner Maine, who provided a vivid description of legal evolution, would certainly be surprised if the legal approaches of the twentieth century were thought of as immutable.78 Nonetheless, government may not surrender its role as an influencer of the activities of gatekeepers: while technology may change at a rapid pace, the main principles and objectives of regulation (the prohibition of censorship, reasonable limits to speech to prevent harm and the promotion of the diversity of opinions) remain valid and are not affected by changes in technology. Government decision-makers and shapers of public policy need to adopt a systemic approach that takes into account the distinctive features of and changes in gatekeepers' activities; such an approach must define precisely what gatekeepers are expected to do and indicate what they may expect from the law by clearly laying down their duties and scope of liability.79 The impact of gatekeepers on the public sphere, the strengthening of private regulation and the need to enforce the public's interests necessitate the use of new, creative and innovative regulatory methods and institutions, the invention of new ways of making and enforcing rules,80 and a degree of cooperation between public and private actors that is unprecedented in this field. The history of free speech law is thus entering a new stage that is just as exciting and unpredictable as any of its previous stages.

78 Cp Henry S Maine, Ancient Law, Its Connection with the Early History of Society, and Its Relation to Modern Ideas (London, John Murray, 1861).
79 Mark Bunting, ‘From Editorial Obligation to Procedural Accountability: New Policy Approaches to Online Content in the Era of Information Intermediaries’ (2018) 3(2) Journal of Cyber Policy 165, 185.
80 ibid 186. Cp Gillian K Hadfield, Rules for a Flat World: Why Humans Invented Law and How to Reinvent It for a Complex Global Economy (Oxford, Oxford University Press, 2017) 9.

Index abuse of freedom of expression  20, 37, 68, 228 accountability  50–2, 69, 70, 232–3 action and speech  17–18 activism  70 advertisements commercial communications  27, 43 privacy  170 regulation  96 search engines  116, 143–4 social media platforms  170 targeted advertising  23 transparency  197, 232 advisor analogy  119, 129, 140, 145 algorithms Data Protection Directive  134 disclosure  236 editing  84–7, 118–21, 128–30, 140, 144–5 filter bubble theory  150 gatekeepers  84–7, 222–3, 236 news feeds  86–7, 150, 157, 195–7 search engines  118, 120–2, 126–7, 134, 137, 140, 142 social media platforms  23, 84–6, 150, 157–8, 194–7 Alito, Samuel  153 Ammori, Marvin  183 analogies advisor analogy  119, 129, 140, 145 editing  91–2, 140, 144–5 feudalism metaphor  68–9 frontier, metaphor of the  68 library analogy  119–20 marketplace of ideas  8, 10, 15, 47 moderation  188–90 opinion metaphor  120–1, 127, 131 readers’ letters analogy  208 regulation  77–81, 91–4, 95 search engines  117–20, 145 social media platforms  188–90 Anderson, David  47 Angelopoulos, Christina  217, 225 anonymity comments, gatekeepers’ responsibility for online  202–4, 208–11, 215, 217, 220 constitutional protection  203 defamation  204, 208–9, 210–11

ECtHR  53, 204 fake profiles  158 personality rights  73, 204, 217 social media platforms  158, 202, 220 sources, protection of information  27, 52, 53–4, 203 speech, style and manor of  24 appeals  236 applicable law  76–7 Arab Spring  70, 152 archives  133–4 Armenian genocide, denial of  40 Asimov, Isaac  222–3 audience, impact on the see impact on the audience audiovisual services see AVMS Directive Australia  52, 70, 88, 126–30 authoritarianism/autocratic regimes  70, 88, 123, 152 autocomplete suggestions  129–31 autonomy  12, 15, 17, 238 AVMS Directive children, protection of  58, 97 editing  91 freedom of the press  45–7 hate speech  58–9, 97 jurisdiction  76 limitations  28 media, definition of  46–7 pluralism and diversity  60–1 regulation  80, 92, 97, 231, 237 social media platforms  146, 161–2, 164 video-sharing platform services  92, 146, 161–2, 237 Baker, Edwin  12–13 Balkin, Jack M  78, 181 Ballanco, Michael  142 Barendt, Eric  8, 21, 31, 56 Barlow, John Perry  67 Barnett, Steven  60–1 Barron, Jerome  70–1 Beach, David  126–7 Belgium  35–6 Berlin, Isaiah  55 Bernal, Paul  176–7, 193, 197, 199 Berners-Lee, Tim  158–9 Bethe, Hans  178

244  Index bias impartiality, definition of  61 news  60–1, 237–8 objectivity, definition of  61 polarisation  75–6 search engines  138–41 social media platforms  237–8 Bingham, Tom  174 Black, Hugo L  24 black box phenomenon  86, 137 Blackmun, Harry  42–3 Blackstone, William  26, 51 blocking ISPs  102, 103–4, 105, 109 network neutrality  113 regulation  224, 229 removal of content  229 search engines  77, 104, 143 social media platforms  77, 186 Bork, Robert  11, 21 Brandeis, Louis  13–15, 32 Brennan, William J  38, 44–5 broadband as telecommunications service, classification of  112 Brussels Ia Regulation (Recast)  76, 165–6 Bunting, Mark  233 Bush, George W  113–14, 141 caching  80, 95, 106 Cambridge Analytica  158 Canada  114, 135–6, 149 Caplan, Robyn  87 category of speech  16–21, 23–4, 32 celebrities/public figures  29–30, 32–3, 154, 227 censorship  50–2 collateral censorship  98, 217 comments, gatekeepers’ responsibility for online  217, 219 definition  50, 88–9 direct censorship  89 E-Commerce Directive  184–5 filtering  103, 105 freedom of the press  50–2 governments notice and takedown procedure  234 regulation  89, 231, 238 indirect censorship  89 ISPs  103, 105, 113–14 newspapers, licensing of  50–1 notice and takedown procedure  234 prior restraint  50–2 private censorship comments online  219 ISPs  113 notice and takedown procedure  234

regulation  88–9, 226, 229, 231, 236 search engines  123 social media platforms  181, 183–5, 187–8 terms and conditions  226–7 proxy censorship  98–9 regulation  78, 87, 88–9, 223, 231 search engines  123, 144 self-censorship  89 social media platforms  181, 183–7, 187–8, 238–40 terms and conditions  226–7 Chander, Anupam  65, 86 Charter of Fundamental Rights of the EU  28–9, 59–60 cheap speech  21–2, 154–6, 176, 217 Cheung, Anna  217 children, protection of  20, 57–8 AVMS Directive  58, 97 commercial communications  58 filtering  104 obscenity  57 paedophilia  41–2 pornography  42, 72–3, 163 removal of content  237 China civil rights  144–5 political structure  71 search engines  123, 144–5 surveillance  71 United States  144–5 Clinton, Hillary  158, 178–9 codes of conduct  58, 194, 232 Coe, Peter  203 Cohen, Robert  22 comments see gatekeepers’ responsibility for online comments commercial communications  42–3 advertisements  27, 43 children, protection of  58 misleading commercial communications  20 regulation  57, 72 common carrier services  102–3, 110, 111–12, 190–4 community standards  184–7, 190–4 companies see corporations competition abuse of a dominant position  142–3 fines  142–3 network neutrality  109, 111 regulation  229, 238 search engines  116, 138, 142–3 concentration of media ownership  71–2, 90 conduct of content providers  214–15 confidentiality  33, 53–4

Index  245 constitutional protection see also First Amendment (Constitution of US) anonymity  203 freedom of the press  44 jurisdiction  73, 77 like buttons  23–4, 147–8 ranking of hits  139 regulation  81, 86 speech  5, 11, 18–20, 23–4, 73, 85, 119, 147–8 consumers consumerisation of mass media  6 definition  166–7 protection  20, 185, 193, 226 context  31, 72–4, 169, 174–6 contract, terms and conditions of  183–7, 225–7 copyright Copyright Directive  107–8 injunctions  107–8 notice and takedown procedure  123 search engines  123, 145 co-regulation  231–9 corporations concentration of media ownership  71–2, 90 financial support or sponsorship  18 media companies  87–8, 92, 157–8 speech  18 tech companies  87–8, 91–2, 157–8 Costeja, Mario  133–4 Coubert, Gustave  193 Council of Europe (CoE) ECtHR  27, 93 editing  49 freedom of the press  48–9 hate speech  36, 37 Internet Governance strategy  94 ISPs  103–6 network neutrality  112 new media  48–9 regulation  93–4, 232 special privileges  52–3 transparency  232 criminal offences broadcasting  156–7 Cybercrime Convention, Additional Protocol to  59 forgotten, right to be  132 hate speech  37, 59, 191 instigation  34–6 privacy  32 prostitution  100 regulation  72–3, 93, 97 revenge porn  73, 173 social media platforms  152, 156–7, 160, 163, 173–4, 180–2, 191 trafficking for sexual exploitation  99–100

crush videos  40 Curran, James  70 Cybercrime Convention, Additional Protocol to  59 cyber-idealism (optimism)  67 cyber-realism (pessimism)  67–8 Daily Me  75, 87, 137–8, 141, 149–50, 198, 238 damages  76, 132, 215–16, 231 Dark Net  72 data protection algorithms  134 comments, gatekeepers’ responsibility for online  209 data processing  133 Data Protection Directive  132–7 EU  97, 132–7 filtering  104 forgotten, right to be  132–7 GDPR  77, 136–7, 166 journalistic purposes, exemption for  134, 136 privacy  5, 32 public interest  134–7 rectification, erasure or blocking of data  132 regulation  77, 97 search engines  132–3, 137 defamation  29–31, 73 anonymity  204, 208–9, 210–11 assessment of contents  218 autocomplete suggestions  129–31 comments, gatekeepers’ responsibility for online  204, 208–16 community standards  191 defences  31, 124–8, 208–9 democracy  13 emotional distress, intentional infliction of  30 EU  165, 179 extra-legal measures  169 false statements  39 forum shopping  165 hashtags  169 honest opinion  31 injunctions  51, 108–9 innocent dissemination  124–8, 208 links  169 malice  168–9 opinions  168–9 prior restraint  52 public figures  29–30 public interest  30–1 ranking of hits  126 recklessness  30 removal of content  209, 211 search engines  123–31, 210–11, 213, 215 social media platforms  154, 159, 165, 167–9, 176–7, 191

246  Index deletion of content see removal of content democracy  2, 11–15 advertisers  116 citizen participation  8, 11–12, 13–15 freedom of the press  45, 47–8 marketplace of ideas  15, 47 pluralism and diversity  62–3 regulation  69, 70, 72, 74, 82–3 search engines  116, 123, 137–8, 145 social media platforms  147–59, 199 special privileges  52, 55–6 dependency  151, 228 Dicey, Albert V  26, 44 discrimination see also racism and xenophobia equality, principle of  22 gender stereotypes  200 network neutrality  109, 111–13 non-discrimination, principle of  94, 102–3, 112, 232–3 search engines  117–18 viewpoint discrimination  25 distribution media or tech companies  87–8, 92 new actors  48–50 pluralism and diversity  59, 64 public morals, protection of  40–2 search engines  124 social media platforms  146, 158 diversity see pluralism and diversity domain names  68–9 domicile  166 draft cards, burning  17–18, 38 Dutton, William  70 Dworkin, Ronald  12 Eady, David  124, 126 echo chamber  200 E-Commerce Directive  29, 50, 231–5 awareness of violations  94–7, 219 censorship  184–5 comments, gatekeepers’ responsibility for online  205–6, 209, 219 exemptions  95–6, 177 fake news  198 illegality, identification of  95–6 injunctions  106–9 jurisdiction  165 liability  94–8 moderation  188 monitoring  95, 108 notice and takedown procedure  122–3, 125, 171–2, 219, 225–6, 231–5 regulation  80, 83–4, 94–8, 225, 231–2 removal of content  94–8, 177

search engines  122–3, 125, 128–31 social media platforms  162–6, 171–2, 177, 179, 184–5, 188, 198 economic interests  213, 216, 217–18 editing algorithms  84–7 analogies  84–7, 118–21, 128–30, 140, 144–5 Council of Europe, recommendation of  49 discretion  5, 91, 99, 110–11, 140 fact-checkers  197 freedom of the press  49 gatekeepers  4–5 hosting  106 neutrality  110–11, 199 news feeds  92, 195–7 regulation  83–7, 229, 231 search engines  118–19, 120–1, 128–30, 140, 144–5 social media platforms  92, 158–9, 161, 180, 189, 194–7, 199 traditional media  73, 83, 91, 159, 167, 195, 197–8, 201 effect on the audience see impact on the audience effective remedy, right to an  94, 232 elected public officials  29–30, 154 elections  148, 155, 157–8, 177–80, 194 electronic commerce see E-Commerce Directive electronic programme guides (EPGs)  237–8 Emerson, Thomas  8 emotions, manipulation of  23, 84, 151 employees, privacy of  172–3 equality, principle of  22 Eskens, Sarah  87 Estonia anonymity  215 censorship  217 comments, gatekeepers’ responsibility for online  205–10, 212, 214–20 defamation  209–10, 212, 214–15, 218 hate speech  212, 218 notice and takedown system  214–15 removal of content  206, 214–15 EU see European Union European Convention on Human Rights (ECHR) see also European Court of Human Rights (ECtHR) abuse of freedom of expression  20 exhaustion of local remedies  28 moderation  204–5 offensive, shocking or disturbing speech  22 private and family life, right to respect for  32, 135, 172, 210, 211, 216 search engines  117 European Court of Human Rights (ECtHR) anonymity  53, 204

Index  247 blocking  104 commercial communications  27, 43 Council of Europe  27, 93 democracy  13 exhaustion of local remedies  28 freedom of the press  27, 45–6, 49–50 hate speech  27, 37–40 investigative journalism  54 ISPs  205–6 licensing  47 minimum requirement  27 obscenity  40–1 pluralism and diversity  59, 61, 63 private and family life, right to respect for  32, 34 proportionality  28 public figures  31 regulation  79, 94, 97 reputation and honour, protection of  31–2 searches  53–4 social media platforms  172 special privileges  56 speech  20, 39 subsidiarity  28 violence or criminal offences, instigation of  35–6 European Union see also AVMS Directive; E-Commerce Directive Brussels Ia Regulation (Recast)  76, 165–6 Charter of Fundamental Rights of the EU  28–9, 59–60 children, protection of  58, 163 comments, gatekeepers’ responsibility for online  205–6, 220–1 consumer, definition of  166–7 Copyright Directive  107–8 data protection  29, 97 defamation  165, 179 denial of crimes against humanity, war crimes and genocide  39 digital single market strategy  97 discrimination  113 European Anti-Fraud Office (OLAF)  54 European Commission regulation  98, 232 social media platforms  146–7, 180, 198 extremism  194 fake news  179–80, 198 freedom of the press  45, 48 future EU regulation, models of  230–41 harmonisation  28, 58 hate speech  36–7, 194 High-Level Expert Group on Fake News and Online Disinformation  179–80, 198 High Level Group on Media Freedom and Pluralism (European Commission)  48–9

hosting  160 jurisdiction  165–6 liability  94–8 network neutrality  112–13 notice and takedown procedure  122–3, 125, 165–6, 171–2, 219, 225–6, 231–5 Open Internet, Regulation on  112–13 racism and xenophobia, Council Framework Decision on  35, 39, 163 regulation  66, 74, 80, 83–4, 92, 94–8, 101, 225 models of future EU regulation  230–41 social media platforms  181–2 search engines  122–3, 125, 128–32, 134, 142–3, 145 social media platforms  146–7, 160–7, 176–7 soft law  29 terrorism  97, 163 Universal Service Directive  102–3 exhaustion of local remedies  28 extremism  75, 150, 155, 187, 194 Facebook algorithms  84–6, 150, 157–8, 195 anonymity  202, 220 bans  161, 181, 183, 192–4 blocking  77, 186 business interests  159 Cambridge Analytica  158 censorship  183 code of practice  232 comments  202, 219–20 community standards  184–7, 190–2 constitutional protection  23–4 data protection  5 defamation  167–8 editing  92, 158, 195–7 emotions, manipulation of  23, 84 fake news  158, 197 fake profiles  158 filter bubble theory  149–51 Germany  181–2, 186–7, 226 hate speech  181–2, 190–4 Holocaust denial  181–2, 191–2 ideology  196, 199–200 impact on the audience  23, 84, 90 jurisdiction  77, 166–7 likes dependence  151 range of emotions  147 speech, as  23–4, 147–8 location  69 media company, as  88 moderators  184 murder, live feeds of  1, 156–7 neutrality  157–8, 195–200, 237, 240

248  Index news feeds  149–51, 155–9, 183, 195–7, 229, 237 algorithms  150, 157, 195 editing  92, 195–7 fake news  158, 197 personalisation  85, 149 political objectives  196 Trending Topics service  157–8 notice and takedown procedure  171–2 nudity  190–3 number of users  151 political objectives  196 presidential election of 2016  157–9 public forum doctrine  228–9, 240 removal of content  158, 160, 167–8, 171–2, 177, 186–7, 191, 226 social goals  183 speech  23–4, 147–8, 191 suspension  186 targeted messaging  23 threats  175 Trending Topics service  157–9 violence  1, 156–7, 193–4 works of art, bans on  192–4 fake news definition  177–9 editing  197 elections  158, 177–9 EU  179–80, 198, 231 extremism  155 fact-checkers  197, 232 factories  158 social media platforms  155–6, 158, 177–80, 197, 237 transparency in relation to advertising and sponsored content  197 truth  10 fake profiles  158 Falk, Donald  120–1 false light, portrayal in a  32 false statements  39–40 Fargo, Anthony  101 Feintuck, Mike  60–1 feudalism metaphor  68–9 fighting words  20, 35 filtering algorithms  150 censorship  103, 105 cross-cutting  150 definition  149 fake news  198 filter bubble theory  75, 149–52 ISPs  102, 103–4, 105 network neutrality  113 search engines  131, 137–8, 144 social media platforms  149–51

First Amendment (Constitution of US) anonymity  203 Data Protection Directive  136 freedom of the press  44 hate speech  36 ISPs  104 network neutrality  111 pluralism and diversity  63–4 public figures  30 regulation  65–6, 69, 101, 230–1 search engines  116, 123, 145 social media platforms  175, 181, 183, 190, 200 special privileges  56 symbolic speech  38 terminological variations  15 flag, burning the US  38 Fontanelli, Filippo  133 forgotten, right to be  132–7 archives  133–4 criminal proceedings  132 Data Protection Directive  132–7 public interest  134–7 forum shopping  165 Foster, John  113, 193 foundations of freedom of speech  8–43 category of speech  16–20 open debate of public affairs  20–2 Internet-specific questions  22–4 limitations of freedom of speech  24–43 scope of protection  18–20 terminological variations  15–16 theoretical foundations of free speech  8–15 Fox, Carrie  241 France Declaration on the Rights of Man and of the Citizen of 1789  16 fake news  180 fraud  54 freedom of assembly  152 freedom of the press  44–64 censorship  50–2 concept of media  46–50 constitutional protection  44 ECtHR  27, 45–6, 49–50 EU  45–7, 48, 225 functional approach  47 institution, media as an  45–8 licensing  46 media, definition of  46–7, 49 new services  48–9 pluralism and diversity  59–64 public interest  46, 49 quality of sources  49 regulation  66–7, 92, 223, 225 rights-holders of freedom of the press  46–50

Index  249 special privileges awarded to the media  52–6 traditional media  46–9 truth  8–9 freedom of thought  16, 22–3 frontier, metaphor of the  68 fundamental rights see human rights Garton Ash, Timothy  69, 116 gatekeepers see also gatekeepers, regulation of; gatekeepers’ responsibility for online comments data protection  5 defamation  168 definition  4, 82 editors, as  4–5 government, relationship with  4–5 hiding content  4, 5 ISPs  103 public sphere, influence over  3 pushing content  4 search engines  116, 118, 124–5, 145 social media platforms  195 speakers, relationship with  4 traditional media  3–6, 221 gatekeepers, regulation of  82–101 algorithms  84–7, 222–3, 236 analogies  91–4, 95 appeal, need for external forum for  236 authority-gatekeepers, definition of  83 AVMS Directive  92, 97, 231, 237 censorship  87, 88–9, 223, 226, 229, 231, 236, 238 classification  82–3 code of practice  232 concentration of ownership  90 co-regulation model  231–9 Council of Europe, recommendations of  93–4, 232 criminal liability  93, 97 E-Commerce Directive  83–4, 94–8, 225, 231–2 editing  83–7, 91–2, 229, 231 electronic programme guides (EPGs)  237–8 EU  83–4, 92, 94–8, 101, 225, 231–41 freedom of speech of gatekeepers  84–7 freedom of the press  92, 223, 225 future of regulation  222–42 gatekeeper, definition of  82 government intervention  88, 222, 227, 229–31, 235, 238–42 hate speech  97, 225 illegality, identification of  95–6 immunity  98–101 impact on the audience  90–1 ISPs  102, 105–6 legal status  82 liability of gatekeepers  94–101, 222

limited intervention, preserving status quo through  231–5 macro-gatekeepers, definition of  83 material scope  95 media or tech companies, gatekeepers as  87–8 media regulation  223–5 micro-gatekeepers, definition of  83 models of future EU regulation  230–41 private regulation, prohibiting  239–41 stronger co-regulation model  235–9 US model to Europe, transfer of  230–1 weaker co-regulation model  231–5 monitoring  95, 236 negative right, freedom of speech as a  90 neutrality  229, 237–9 news  237–8 notice and takedown procedure  226, 231–5 passive role  83 pluralism and diversity  87, 90, 223–5, 227, 231, 235, 239–42 prevention of damage or threats  223, 231 private regulation  82, 84, 93–4, 222–6, 239–42 censorship  88–9, 226, 229, 231, 236 prohibition  239–40 problems of the media  87–91 proportionality  94, 98, 233 public forums, law of  227–9 public interest  225 public sphere, interpretation of  223–30 purpose of regulation  223, 226–7, 231, 241 removal of content  83–5, 91–8, 224, 226, 229, 232, 235, 239 reply, right of  224–5, 235 roadmaps for combating online disinformation  232 roles  82–4 sanctions  230–1, 236 search engines  236–7 secondary liability regimes  93 social media platforms  227–8, 230–40 social responsibility  87, 239 speech  85–6 status quo  231–5 stronger co-regulation model  235–9 tech companies, gatekeepers as  87–8, 91 terms and conditions  225–7 terrorism, draft directive on  97 transparency  232, 236 types  82–4 weaker co-regulation model  231–5 gatekeepers’ responsibility for online comments  202–21 anonymity  202–4, 208–11, 215, 217, 220 assessment of comment’s content  218–19 conduct of content providers  214–15

250  Index content of comments  211–12 defamation  208–16, 218 E-Commerce Directive  205–6, 209, 219 economic interests, content providers with  213, 216, 217–19 ECtHR  209–20 editors  214, 221 effect of comments on party attacked  213–14 EU  205–6, 220–1 CJEU, case law of  206 E-Commerce Directive  205–6, 209, 219 hate speech  212, 218 hosting  205–6, 208, 210, 217–18 identification  202, 203–4, 206, 209, 212–13 legal responsibility for unlawful comments, basis of  4, 205–9, 216–17, 220 liability  216–17 moderation  202, 204–7, 215, 217–19 not-for-profit websites  213, 216, 218 notice and takedown procedure  214–15, 219 party who has been attacked  213–14 personality rights  207–8, 209, 217–18 political speech  211–12 private and family life, right to respect for  210, 211, 216 readers’ letters analogy  208 removal of comments  202, 204, 209–11, 214–17 reputation  208–16 sanctions  215–16 social media platforms  202, 208, 219–20 speech, comments as  202 video-sharing platforms  208, 219–20 gender stereotypes  200 general interests, publication of information on  56 genocide, denial of  39–40, 161, 181–2, 191–2 Germany autocomplete suggestions  129–30 Civil Code  186 Constitution  186, 226 Criminal Code  180 E-Commerce Directive  177 fake news  180 regulation  181–2 search engines  130 social media platforms  180, 181–2, 186–7, 226 terminological variations  15–16 Gibbons, Thomas  45, 56 globalisation  161, 199–200 Goldman, Eric  140–1 Good Samaritan protection  99–101 Google  1, 222 AdSense  143 AdWords  96 algorithms  120–2, 134 anonymity  210

autocomplete suggestions  129–30 bias  139–40 blocking  77, 104 censorship  144 code of practice  232 competition  142–3 data protection  132–3 defamation  124, 126–30, 210–11, 213, 215 EU law  132, 134, 142–3 forgotten, right to be  135 Google-bombing  141 Google OneBox  120 Google Sites  104 impact on audience  90 indispensability  189 jurisdiction  77 keywords  120, 141 location  69 manipulation  141–3 media company, as  88 opinions  5, 120, 138, 142 PageRank  120, 142 political ideology  143–4, 200 publishers, search engines as  124 ranking of hits  116–17, 119–22, 133, 138–42 self-regulation  123 governments censorship, regulation of  89, 231, 238 excessive power  4 outsourcing monopoly of government to apply law  3 public service media  230 regulation  88, 222, 227, 229–31, 235, 238–42 search engines, manipulation of  140–1 social media platforms  146–7, 179–81 speakers and government, relationship between  4 state aid  17, 60, 229 taxes  229 Grimmelman, James  119, 128–9, 138–40, 144–5 Habermas, Jürgen  4, 6 harassment  173 Harawa, Daniel  176 harmonisation  28, 58, 80 hashtags  169 hate speech  20, 36–40 codes of conduct  194 comments, gatekeepers’ responsibility for online  212, 218 community standards  190–2 Council of Europe recommendation  36, 37 criminal offences  37, 59, 191 Cybercrime Convention, Additional Protocol to  59 definition  190

Index  251 denial of crimes against humanity, war crimes and genocide  39 ECtHR  37–40 EU  36–7, 58–9, 97, 194 Holocaust denial  39–40, 161, 181–2, 191–2 incitement  37, 191 notice, wait and takedown procedure  225 proportionality  38 protected characteristics  191 public order  218 regulation  58–9, 72, 97, 225 removal of content  194 search engines  131–2 social media platforms  173–4, 179, 181–2, 186–7, 191, 194, 226 viewpoint neutrality  36 Helberger, Natali  83 high value speech  21, 176 Hindman, Matthew  71 Hitz, John  152 Holmes, Oliver Wendell  10, 13, 15, 34 Holocaust denial  39–40, 161, 181–2, 191–2 Hong Kong  130, 208 honour see reputation and honour, protection of hosting comments, gatekeepers’ responsibility for online  205–6, 208, 210, 217–18 E-Commerce Directive  83, 106, 205–6 information society services  80 regulation  99 search engines  124 social media platforms  160, 165, 168, 176, 179 human rights Charter of Fundamental Rights of the EU  28–9, 59–60 Human Rights Act 1998  26, 52 ideology  199 Internet access as a fundamental right  81 search engines  144–5 social media platforms  199 human trafficking for sexual exploitation  99–100 Hungary  212–17 Civil Code  206–7, 210 Constitutional Court  207–8 hate speech  212 hosting  206, 210 moderation  215, 217 party who has been attacked  213–14 regulation  93 identification see also anonymity comments, gatekeepers’ responsibility for online  202, 203–4, 206, 209, 212–13 defamation  168 gatekeepers, liability of  93, 212–13

journalists  48, 52, 53–4, 203 sources, protection of journalists’  27, 52, 53–4, 203 ideology  143–4, 196, 199–200 illegality  95–6, 103–9, 113, 122 immunity  98–101, 123, 168 impact on the audience defamation  30, 214 dependency  151, 228 dopamine surges  151 like buttons  147, 151 online comments  213–14 psychiatric harm  151 regulation  82, 90–1 search engines  115–16 social media platforms  23, 84, 90, 147, 149–52 impartiality see bias indecency  174–5, 192–4 indexes, building and use of  115 individualist theory  12–13, 15 information, right to  136 information society services (ISS)  76, 80, 165 injunctions  52, 106–9, 122–3 innocent dissemination defence  124–8, 167, 208 innovation  109, 111 instrumentalism  12–13, 15 insults  191 intellectual property  96, 104 see also copyright intermediaries see gatekeepers; Internet service providers (ISPs) Internet Corporation for Assigned Names and Numbers (ICANN)  68–9 Internet service providers (ISPs)  3, 102–14 blocking  102, 103–4, 105, 109 censorship  103, 105, 113–14 comments, gatekeepers’ responsibility for online  205–6, 216 E-Commerce Directive  219 filtering  102, 103–4, 105 illegal content, obligations regarding  103–9 information service, as  111 injunctions  106–9 network neutrality  102–3, 109–13, 114, 239 regulation  102, 105–6 Universal Service Directive  102–3 Internet-specific questions  22–4 investigative journalism  54, 155 Iran  71 Ireland  167, 226, 237 ISPs see Internet service providers (ISPs) Jack the Ripper  1 Jakubowicz, Karol  48, 115–16 James I, King of England 15–16 Jones, RonNell Andersen  168–9

252  Index journalists accreditation, right of  52 confidentiality  54 Data Protection Directive, exemption from  134, 136 decline of professional media  74–5 defamation laws, protection against abuse of  52 EU  48, 134, 136 freedom of movement  52 identification of journalists  48 institution, journalism as an  45 investigative journalism  54, 75, 155 searches, exemption from house  53–4 sources, protection of  27, 52, 53–4, 203 special privileges  52–3 tabloid press  1, 32 tax benefits  53 judgments, enforcement of  76 jurisdiction  73, 76–7, 165–7 justifications of freedom of speech  13–15 Kahn, Robert  161 Kaminski, Margot E  86 Katsirea, Irini  134 Kennedy, Anthony  64, 152–3 Kenyon, Andrew  56 keywords  120, 141 Klonick, Kate  190, 193 Koenig, Thomas  184 Kohl, Uta  83, 145, 241 Kreimer, Seth  98–9 Krotoszynski, Ronald  136 Laidlaw, Emily  82–3, 113–14 Leonardi, Danilo  69 Lessig, Lawrence  2, 67 let alone, right to be  32 Lê, Uyên  65 library analogy  119–20 licensing  46, 50–1 Lichtenberg, Judith  56 Lidsky, Lyrissa  168–9 like buttons constitutional protection  23–4, 147–8 dopamine surges  151 opinions  148 personality rights  148 social media platforms  147–8, 151 speech, as  23–4, 147–8 likeness or names, unauthorised appropriation of  32 limitations of freedom of speech  24–43 false statements  39–40 hate speech  36–8 paedophilia  41–2

privacy  32–4 public morals, protection of  40–1 reputation and honour, protection of  29–32 symbolic speech  38–9 threats  34–6 violence or criminal offences, instigation to  34–6 links defamation  169 linkfarms  142 opposing opinions, to  235 sharing  148–9 social media platforms  148–9, 151 literature and arts  21 live feeds of murder  1, 156–7 Love, Courtney  169 low value speech  21–2, 154–6, 176, 217 McCallum, Lucy  127 McChesney, Robert  113 McDonald, Michael  127 McLuhan, Marshall  1–2, 157 Magatelli, Cara  172 Maine, Henry Sumner  242 majority, despotism of the  203 manipulation of search results  137–45 advertisements  143–4 algorithms  137, 140, 142 bias  138–41 competition  138, 142–3 external manipulation  141 filtering  137–8, 144 internal manipulation  141–4 neutrality of search engines  137–8, 139–40, 143 ranking of hits  137–42 Manne, Geoffrey  140 Mapplethorpe, Robert  89 marketplace of ideas  8, 10, 15, 47 Marsden, Chris  69 Massaro, Toni M  86 Mathes, Adam  141 media companies  87–8, 92, 157–8 media, definition of  46–7, 49 Meiklejohn, Alexander  11, 21 mere conduits  80, 95, 106–7, 117–19, 129, 139 metadata, assignment of  115 meta-media  115 Mill, John Stuart  9–10, 15 Milton, John  8–9, 15 minority opinions  149, 203 moderation analogies  188–90 comments, gatekeepers’ responsibility for online  202, 204–7, 219 ex post moderation  202, 204 expecting moderation  218

Index  253 legal status  188–90 prior moderation  202, 204, 206–7, 217–18 pros and cons  187–8 readers’ letters analogy  208 removal of content  204, 215 social media platforms  184, 187–91 monitoring  95, 108, 230, 236 monopolies  3, 46, 74, 102, 110, 116, 228, 241 Moore, Alan  1 morals, protection of public  40–1 Morland, Michael  124 murder, live feeds of  1, 156–7 Murphy, Frank  19 must carry rule  64, 110 Nádori, Péter  217 name or likeness, unauthorised appropriation of  32 Napoli, Philip  87 nation states, abandonment of concept of  199–200 national security  20, 52, 104 neutrality see also network neutrality advertisements  143 blocking or removal  229 E-Commerce Directive  96 editing  199 regulation  64, 229, 237–9 search engines  116, 118, 120, 137–8, 139–40, 143 social media platforms  152, 157–8, 190, 195–200, 237, 240 viewpoint neutrality  25, 36 network neutrality  102–3, 109–13, 114 common carriers, ISPs as  110, 111–12 competition  109, 111 Council of Europe, Declaration of  112 discrimination  109, 111–13 editorial discretion  110–11 EU  112–13 innovation  109, 111 ISPs  102–3, 109–13, 114, 239 transparency  109 news and news feeds see also fake news algorithms  86–7, 150, 157, 195–7 AVMS Directive  61 bias  60–1, 237–8 editing  92, 195–7 newsworthiness  33–4 personalisation  85, 149, 196 public figures  33–4 regulation  237–8 reply, right of  235 sensationalism and gossip  155 social media platforms  149–51, 155–9, 177–80, 183, 195–7, 229, 237 newspapers, licensing of  50–1 Noam, Eli  71–2

non-human speakers  86 Norton, Helen M  86 Norway  63 not-for-profit websites  213, 216, 218 notice and takedown procedure comments, gatekeepers’ responsibility for online  214–15, 219 E-Commerce Directive  122–3, 125, 171–2, 219, 225, 231–2 EU  122–3, 125, 165–6, 171–2, 219, 225–6, 231–5 injunctions  106 regulation  226, 231–5 search engines  122–3 social media platforms  171–2 nudity  190–4 Nunziato, Dawn  113 obscenity  40–1, 57 O’Dell, Eoin  167–8 offensive, disturbing or shocking content  22, 58, 174–5 OLAF (European Anti-Fraud Office)  54 on-demand services  161 online comments see gatekeepers’ responsibility for online comments online content providers, regulation of  65–81 analogies  77–81 architecture  67 concentration of ownership  71–2 corporate dominance  71 criminal offences  72–3 cyber-idealism (optimism)  67 cyber-realism (pessimism)  67–8 defence, freedom of speech as a  79 EU  66, 74, 80, 181–2 feudalism metaphor  68–9 freedom of the press  66–7 governments  159–81 harmonisation  80 historical cycle  74 information society services  76, 80 Internet services as the subjects of freedom speech and press  79 jurisdiction  76–7 limits of offline speech in online environment, applying the  79–80 new problems  74–7 old problems in new context  72–4 opinions  188 personality rights, violation of  73 polarisation  75–6 private information, publication of  73 professional media, decline of  74–5 public interest  69 social fragmentation  75–6

254  Index social media platforms  146, 159–200 starting points  77–81 terrorism, propagation and support of  72 opinions see also pluralism and diversity constitutional protection  148 defamation  31, 168–9 ECtHR  49 extremism  150 like buttons  148 links to opposing opinions  235 privacy  172 public opinion  1, 3 ranking of hits  139 regulation  87, 90, 188, 223–5, 227, 231 search engines  5, 115–16, 120–1, 127, 131, 138–9, 141–2 social media platforms  149–50, 168–9, 172, 188, 235 spiral of silence  149 Oster, Jan  47–9, 125, 134 ownership, concentration of media  90 paedophilia  41–2 PageRank  120, 142 Papandrea, Mary-Rose  172–3 party political broadcasts  58, 63 Pasquale, Frank  86, 121, 139, 235 Perrin, Will  234 personalisation  85, 141, 149, 196, 198 personality rights see also defamation; reputation and honour, protection of anonymity  73, 204, 217 comments, gatekeepers’ responsibility for online  207–8, 209, 217–18 filtering  104 forgotten, right to be  132 injunctions  108, 122 like buttons  148 privacy  20, 32 public affairs, open debate of  21 public figures  29–30 search engines  122, 124–6, 129–31, 145 television, regulation of  61 pluralism and diversity AVMS Directive  60–1 Charter of Fundamental Rights of the EU  59–60 concentration of media ownership  90 democracy  62–3 direct access to media  63 EU  48–9 freedom of the press  59–64 news coverage, impartiality of  60–1 public interest  64 regulation  59, 87, 90, 223–5, 227, 231, 235, 239–42

social media platforms  194–7, 235 structural restrictions  60 television and radio  59–60, 64 terms and conditions  226 traditional media  59, 90, 240–1 polarisation  75–6 political content advertisements  232 categorisation  21 censorship  113, 114 comments, gatekeepers’ responsibility for online  211–12 commercial communications  43 community standards  194 constitutional protection  11 definition of political speech  21 ECtHR  27, 35–6 elections  148, 155, 157–8, 177–80, 194 fact-checking  232 high value speech  21 low value speech  21–2 news feeds  196 open debate  20–1 party political broadcasts  58, 63 search engines  143–4, 200 social media platforms  152, 157–9, 194, 196, 199–200 politicians, private lives of  34 pornography  42, 72–3, 105, 173 Post, Robert  11–12, 17, 118, 134, 136–7 Powell, Lewis  42 press, freedom of the see freedom of the press prevention of damage or threats  223, 231 printed press, decline of  74–5 prior restraint  50–2 privacy  23, 32–4 advertisements  170 criminal law  32 damages  132 data protection  5, 32 embarrassing private facts, public disclosure of  32 employees  172–3 false light, portrayal in a  32 false statements  39 forgotten, right to be  132–3 let alone, right to be  32 name or likeness, unauthorised appropriation of  32 network neutrality  109 newsworthiness  33–4 private and family life, right to respect for  32, 135, 172, 210, 211, 216 private sphere, interference with  20 revenge porn  73

Index  255 search engines  116, 124, 131, 137, 141 seclusion or solitude, intrusion upon  32 social media platforms  159, 170–3 private undertakings censorship comments online  219 ISPs  113 notice and takedown procedure  234 regulation  88–9, 226, 229, 231, 236 search engines  123 social media platforms  181, 183–65, 187–8 terms and conditions  226–7 comments, gatekeepers’ responsibility for online  219 network neutrality  112 regulation  82, 84, 93–4, 222–6, 239–42 legal basis  183–7 prohibiting private regulation  239–41 social media platforms  180–200 search engines  123 social media platforms  180–200 terms and conditions  183–7 privileges see special privileges awarded to the media professional media, decline in  74–5 proportionality  28, 38, 94, 98, 233 Prosser, William  32 prostitution  100 psychiatric harm  151 psychological effects  149–52 public affairs, discussions of  20–2, 32, 133, 217 public figures/celebrities  29–30, 32–3, 154, 227 public forum doctrine  152–4, 183–4, 186, 189, 227–9, 239–40 public interest Council of Europe  112 Data Protection Directive  134–7 defamation  30–1 forgotten, right to be  134–7 freedom of the press  46, 49 investigative journalism  54 pluralism and diversity  64 regulation  69, 225 social media platforms  163 sources, protection of information  53–4 special privileges  56 truth  9–10 public morals, protection of  40–1 public order  20, 26, 35–7, 176, 218 public service broadcasting  55–6, 230 public sphere definition  3–4 regulation  223–30 search engines  3 social media platforms  147–9, 157 public utilities  239

publishers  124–8, 130–1, 148–9, 206, 208 Puetz, Trevor  228 racism and xenophobia Cybercrime Convention, Additional Protocol to  59 Framework Directive  35, 39, 163 social media platforms  163 radio and television  57–61, 64, 90 ranking of hits defamation  125 forgotten, right to be  133–4 opinion, as  139 reply, right of  235 search engines  116–22, 124–9, 131–3, 137–42, 145, 235 Rawls, John  55 Raz, Joseph  15 readers’ letters analogy  208 Redish, Martin  12 regulation  57–9, 61 see also gatekeepers, regulation of; online content providers, regulation of commercial communications  57 excessive power  3–4 freedom of the press  57–9 hate speech  58–9 public sphere  3–5 quasi-law, adoption of own  6 search engines  118–19, 122–3, 128–9, 140, 143, 145 self-regulation  100–1, 105–6, 123, 145, 181, 198 religion  28 removal of content see also notice and takedown procedure child protection  237 comments, gatekeepers’ responsibility for online  202, 204, 209–11, 214–17 defamation  209, 211 E-Commerce Directive  94–8, 177 EU  94–8, 177, 232 lifespan of comments  215 moderation  204, 215 regulation  83–5, 91–8, 224, 226, 229, 232, 235, 239 social media platforms  158, 160, 167–8, 171–2, 177, 180–1, 185–8, 191, 194, 226 reply, right to  61, 224–5, 235 reputation and honour, protection of  29–32 see also defamation comments, gatekeepers’ responsibility for online  208–16 emotional distress, intentional infliction of  30 personality rights of public figures  29–30 social media platforms  167–9, 178–9
revenge porn  73, 173
Richards, Neil  23
Richardson, Megan  169
roadmaps for combating online disinformation  232
robotics, first law of  222–3
Rolph, David  125–6
Rosen, Jeffrey  117
Rowbottom, Jacob  169, 176, 232, 236
Rustad, Michael  184
safe harbour  129
Sajó, András  217
Sambrook, Richard  61
sanctions see also damages; injunctions
  comments, gatekeepers' responsibility for online  215–16
  fines  142–3
  proportionality  215
  regulation  230–1, 236
  terms and conditions  227
Schauer, Frederick  16–17, 89
Schejter, Amit  110
Schrems, Max  166
search engines  3, 23–4, 115–45 see also Google
  advertisers  116, 143–4
  advisor analogy  119, 129, 140, 145
  algorithms  118, 120–1, 126–7, 137, 140, 142
  analogies  117–20, 145
  autocomplete suggestions  129–31
  bias  138–41
  business model  116–17
  competition  116, 138, 142–3
  content and telecommunications services, as  122
  defamation  123–31
  democracy  116, 123, 137–8, 145
  editor analogy  118–19, 120–1, 128–30, 140, 144–5
  EU  122–3, 125, 128–31, 145
  fair market practices  138
  filtering  131, 137–8, 144
  forgotten, right to be  132–7
  gatekeepers  116, 118, 124–5, 145
  indexes, building and use of  115
  information services, as  115–16
  injunctions  122–3
  liability  122–32
  library analogy  119–20
  manipulation of search results  137–45
  mere conduit analogy  117–19, 129, 139
  metadata, assignment of  115
  meta-media  115
  neutrality  116, 118, 120, 137–8, 139–40, 143
  notice and takedown procedure  122–3
  opinions  139, 141–2
  passive transmitters, search engines as  124
  personality rights  122, 124–6, 129–32, 145
  political content  143–4
  privacy  116, 124, 131–3, 137, 141
  private libraries  120
  publishers, search engines as  124–8, 130–1
  ranking of hits  116–22, 126–9, 131–4, 145, 235
    manipulation  137–42
    opinion, as  139
  regulation  118–19, 122–3, 128–9, 140, 143, 145, 236–7
  reply, right of  235
  role  115–17
  search histories  116
  snippets, status of  120, 126
  speech, search results as  117–23
  state-owned search engines  120
  transparency  116–17
  value-based decisions  116
searches  53–4
seclusion or solitude, intrusion upon  32
self-determination  32
self-regulation  100–1, 105–6, 123, 145, 181, 198
sensationalism and gossip  155
sex offenders  41–2, 81, 152
Shiffrin, Steven  21
Smet, Stijn  217, 225
Smith, Michael  131
snippets, status of  120, 126
Snowden, Edward  149
social cohesion  149–51
social fragmentation  75–6
social media platforms  146–201 see also Facebook
  algorithms  23, 157–8, 194–7
  analogies  188–90
  anonymity  158, 202, 220
  architecture of platforms  196
  AVMS Directive  146, 161–2, 164
  bans  152, 154, 164, 188, 194, 228
    following, from  154, 227
    sex offenders  81
  bias  237–8
  censorship  181, 183–5, 187–8, 238–40
  comments, gatekeepers' responsibility for online  208, 219–20
  community standards  190–4
  criminal offences  152, 156–7, 160, 163, 173–4, 180–2, 191
  defamation  154, 159, 165, 167–9, 176–7, 191
  definition  146
  democracy  147–59, 199
  E-Commerce Directive  162–6, 171–2, 177, 179, 184–5, 188, 198
  editing  158–9, 161, 180, 189, 194–7, 199
    fact-checkers, employment of  197
    neutrality  199
  elections  148, 155, 157–8, 177–80, 194
  EU  146–7, 160–7, 176–80, 194
    E-Commerce Directive  162–6, 171–2, 179, 184–5, 188, 198
    European Commission  180, 198
    fake news  179–80, 198
    jurisdiction  165–6
  exemptions  188
  extremism  150, 155, 194
  fake news  155–6, 158, 177–80, 197–8, 237
  filter bubble theory  149–52
  governments  146–7, 179–81
  grossly offensive, definition of  174–5
  hate speech  161, 173–4, 179, 186, 190–2, 194
  hosting  160, 165, 168, 176, 179
  ideology of a platform  199–200
  impact on the audience  23, 84, 90, 147, 149–52
  indecency  174–5, 192–4
  independent oversight  235
  indispensability  152, 185, 228–9
  information society services, freedom to provide  165
  jurisdiction  165–7
  liability  159–60
  like buttons  147–8, 151
  links, sharing  148–9, 151
  moderation  188–91
  neutrality  152, 158, 190, 195, 199
  new forms of speech  147–9
  news  155–6
    algorithms  196–7
    fake news  155–6, 158, 177–80
    feeds  158–9, 195–6
    personalisation  196
    reply, right of  235
    sensationalism and gossip  155
  on-demand services  161
  opinions  149–50, 168–9, 172, 188, 235
  political speech  152, 157–9, 194, 196, 199–200
  privacy  159, 170–3
  private regulation  180–200
  psychological effects  149–52
  public forum doctrine  152–4, 183–4, 186, 189, 227–8, 239–40
  public sphere, expansion of  147–9, 157
  regulation  146, 159–200, 227–8, 230–9
  removal of content  177, 180–1, 185, 188, 194
  reply, right of  235
  reports by users, taking action after  240
  reputation  167–9, 178–9
  reviews, need for independent  236
  sanctions  236
  speakers, as  189
  suspension  193–4
  tech or media companies, as  157–8
  terms and conditions  183–7
  threats  173–5, 178
  transparency  182, 192, 200
  video-sharing platforms  146, 161–3, 173
  violence, broadcasting  156–7
social responsibility  55–6, 87, 239
social role of media  55–6
sources, protection of information  27, 52, 53–4, 203
Spain  135–6
special privileges awarded to the media  52–6
  negative character of freedom  54–5
  positive character of freedom  54–6
  principal foundations  54–6
  privileges of journalists  52–4
  public service obligations  55–6
speech see also hate speech
  abusive tone  24
  action  17–18
  algorithms  85
  category of speech  16–20
  comments, gatekeepers' responsibility for online  202
  constitutional protection  5, 11, 18–20, 23–4, 73, 85, 119, 147–8
  expansion of group of speakers  23
  everyday speech  16
  high value speech  21, 176
  legal speech  17
  like buttons  23–4, 147–8
  low value speech  21–2, 217
  regulation  85–6
  scope  16–20, 23–4
  search engine results  24, 117–23
  social media platforms  23–4, 147–8, 189, 191
  spiral of silence  149
  state, relationship of speakers with  110
  style and manner of speech  24
  symbolic speech  38, 148
speed of transmissions  109
spiral of silence  149
sponsorship  18, 143, 197
standard contracts  227
state aid  17, 60, 229
status quo  231–5
Stephens, Steve  1
Stevens, John Paul  66
Stewart, Potter  44–5, 189
Stumpf, István  207
subsidiarity  28
Sunstein, Cass  56, 64, 75–6, 137–8, 141, 149–50, 238
surveillance  5, 71
suspension  186, 193–4
Suzor, Nicolas  192
Sweden
  comments, gatekeepers' responsibility for online  210, 212–17
  defamation  210, 212–13, 216
  moderation  217
  party who has been attacked  214
symbolic speech  38–9, 148
Szilárd, Leó  178
tabloid press  1, 32
Tambini, Damian  5, 69, 236
targeted messaging  23
taxes  53, 229
tech companies  87–8, 91–2, 157–8
television and radio  57–61, 64, 90
terminological variations  15–16
  freedom of expression  15–16
  freedom of opinion  16
  freedom of speech  15–16
  freedom of thought  16
terms and conditions of contract  183–7, 225–7
terrorism
  draft directive  97
  EU  97, 163
  filtering  104
  propagation and support  72
  provocation  163, 176
  regulation  72, 97
  removal of content  97
  social media platforms  163, 176
theology  8–9
theoretical foundations of free speech  8–15
  common matters, participation in decision-making on  8
  democracy, citizen participation in  8, 11–12, 13–15
  individual freedom  8, 12–13, 15
  justifications of freedom of speech  13–15
  social stability and change, balance between  8
  truth, discovering the  8–10, 12
Thompson, Marcelo  239
thought, freedom of  16, 22–3
threats  34–6, 173–6, 178, 218
traditional media
  availability on the Internet  48
  constitutional protection  85
  decline of traditional media  196
  editing  73, 83, 91, 159, 167, 195, 197–8, 201
  fake news  178
  freedom of the press  46–9
  gatekeepers  3–6, 221
  journalists, regulation of  173
  low value speech  154–6
  media, concept of  46
  offline speech, limits of  79
  pluralism and diversity  59, 90, 240–1
  polarisation  75
  public sphere  1
  readers' letters analogy  208
  regulation, models for  230, 236–8, 240–1
  reputation  167
  search engines  115, 119, 122
trafficking for sexual exploitation  99–100
transparency
  advertisements  197, 232
  fake news  197
  network neutrality  109
  political advertisements  232
  regulation  232, 236
  search engines  116–17
  social media platforms  182, 192, 200
Trending Topics service  157–9
trolls  155, 206
Trump, Donald  111–12, 154, 158, 191, 194, 227–8
truth  8–10, 12
Turkey  104
Tushnet, Rebecca  233
Tutt, Andrew  222
Twitter
  Arab Spring  70
  bomb threats  174–5
  character limitations  169
  codes of practice  232
  context  174
  defamation by re-tweeting  168
  following public figures, bans on  154, 227
  hashtags  169
  impact on audience  90
  joke trial  174–5
  location  69
  news feeds  158–9
  presidential use  154, 194, 227–8
  restrictions  146, 154, 169
  re-tweeting  149, 168
United Kingdom  25–6
  Broadcasting Code (Ofcom)  58
  celebrities, confidential but newsworthy information on  33
  censorship  50–2
  children, protection of  58
  comments, gatekeepers' responsibility for online  208–15, 217
  Communications Act 2003  58, 174–5
  competition  238
  defamation
    comments, gatekeepers' responsibility for online  208–14, 215
    Defamation Act 2013  30–1
    interim injunctions  52
    search engines  124–8
    social media platforms  165, 167
  Digital Economy Act 2017  105
  filtering  105
  forum shopping  165
  freedom of the press  26, 50–1
  Google  129, 210–11, 213, 215
  grossly offensive, definition of  174–5
  hate speech  37
  Human Rights Act 1998  26, 52
  indecency  174–5
  interim injunctions  51
  links, sharing  148
  Malicious Communications Act 1988  175
  notice and takedown procedure  125
  Obscene Publications Act 1959  41
  offensive content  58
  parliamentary sovereignty  26
  political broadcasts  58, 63–4
  prior restraint  51–2
  privacy  33–4, 211
  public order  26, 35, 37
  regulation  235, 238
  removal of content  211, 215
  reputation  208–9
  rule of law  26
  search engines  124–8
  watershed  58
United States  10, 24–5 see also First Amendment (Constitution of US)
  broadband as telecommunications service, classification of  112
  censorship  63, 98–9, 113–14
  children, protection of  42, 57
  China  144–5
  civil rights  144–5
  clear and present danger doctrine  34–5
  commercial communications  42–3
  common carriers  102–3, 110, 111–12
  Communications Decency Act 1996
    copyright  123
    Good Samaritan protection  99
    ISPs  105
    regulation  99, 231, 239
    search engines  123, 129, 131, 137
    social media platforms  159–60, 164, 168, 176, 179, 181, 184, 188
  Constitution  17–20, 44, 113
  crush videos  40
  defamation  50–1
  Digital Millennium Copyright Act 1998  105
  direct access to media  63
  discrimination  25
  draft cards, burning  17–18, 38
  EU  201
  fake news  158, 177–80
  false statements  39
  Federal Communications Commission (FCC)  50, 57, 61–2, 103, 111–12
  Federal Search Commission, proposal for  236–7
  filtering  104–5, 150–1
  flag, burning the US  38
  foreign policy  152
  freedom of the press  44–5, 47–8, 50–1
  Good Samaritan protection  99–101
  Google  77, 116
  hosting  99
  Hutchins Commission  55–6
  ideology  199
  immunity  98–101, 188
  intimacy of public events, preservation of  33
  ISPs  102–6, 111
  liability  98–101
  like buttons  23–4, 148
  must carry restrictions  64
  network neutrality  110–12
  newsworthiness  33
  obscenity  57
  Open Internet regulation  111
  paedophilia  41–2
  Pizzagate  178
  pluralism and diversity  61–4
  presidential election of 2016  157–9, 177–80
  prior restraint  51
  privacy  32–3, 172
  procedural guarantees  99, 101
  public affairs, open debates on  22
  public figures  29–30
  public forum doctrine  152–3
  public interest  64
  regulation  50, 64–6, 69, 81, 98–101, 105–6, 230–1, 233
  removal of content  188, 239
  reply, right to  62
  reputation  168, 178–9
  Russian intelligence services  158
  search engines  116–19, 121, 123, 129, 131, 137, 145, 236–7
  self-determination  32
  self-regulation  100–1
  sex offenders using social media sites, ban on  81
  social media platforms  23–4, 77, 157–60, 166–7, 193–4
    Communications Decency Act 1996  159–60, 164, 168, 176, 179, 181, 184, 188
    fake news  158, 177–80
    foreign policy  152
    ideology  199
    presidential election 2016  158, 177–80
    privacy  172
    public forum doctrine  152–3
    reputation  168, 178–9
    suspension  193–4
    threats  175, 178
  speech  17–20, 38
  suspension  193–4
  television and radio  57
  terminological variations  15–16
  threats  175, 178
  trafficking for sexual exploitation  99–100
  video games  57
  viewpoint-neutrality  25
Universal Service Directive  102–3
van Eijk, Nico  122
van Hoboken, Joris  115, 120
video games  57
video-sharing platform services
  AVMS Directive  92, 146, 161–2, 237
  comments, gatekeepers' responsibility for online  208, 219–20
  definition  161–2
  public sphere, influence over  3
  social media platforms  146, 161–3, 173
  website operators, as  208
violence or criminal offences, instigation of  34–6
Volokh, Eugene  78, 120–1, 154–5
von Baeyer, Hans Christian  178
Voorhoof, Dirk  217
vulgar abuse  212
Warren, Earl  32
Weaver, Russell  8, 70
website operators, definition of  208
West, Sonja  45
whistleblowing  53
Wi-Fi networks, access to  107
Woods, Lorna  234
works of art, bans on  192–4
Wright, Joshua  140
Wu, Tim  74, 85–6, 109
Wulff, Bettina  129–30
Wulff, Christian  129–30
xenophobia see racism and xenophobia
Yemini, Moran  110
Yen, Alfred  68
YouTube  104, 146, 157, 161–2, 175, 193–4
Zuckerberg, Mark  181–3, 191, 193, 199